ServiceMax parameters
The following tables describe parameters available on the tabs of a ServiceMax® Data Source setup dialog:
*General tab
*OData tab
*Mapping tab
*Advanced tab

General tab

Required fields are marked with an asterisk.
[Screenshot: General tab of the ServiceMax data source setup dialog]
Table 100. General tab connection parameters for ServiceMax
Field
Description
Data Source Name
A unique name for the data source. Data source names can contain only alphanumeric characters, underscores, and dashes.
Description
A general description of the data source.
User Id, Password
The login credentials for your ServiceMax cloud data store account.
Hybrid Data Pipeline uses this information to connect to the data store. The administrator of the cloud data store must grant permission to a user with these credentials to access the data store and the target data.
You can save the data source definition without specifying the login credentials. In that case, when you test the data source connection, you will be prompted to specify these details. Applications using the connectivity service will have to supply the data store credentials (if they are not saved in the data source definition) in addition to the data source name and the credentials for the Hybrid Data Pipeline account.
By default, the characters you type in the Password field are not shown. If you want the password to be displayed in clear text, click the eye icon. Click the icon again to conceal the password.
ServiceMax Login URL
The data store URL.
Valid Values:
login.salesforce.com | test.salesforce.com
If set to login.salesforce.com, the production environment is used.
If set to test.salesforce.com, the test environment is used.
Security Token
The security token is required to log in to Salesforce from an untrusted network.
Salesforce automatically generates this key. If you do not have the security token, log in to your account and go to Setup > My Personal Information > Reset My Security Token. A new token will be sent to you by e-mail.
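As an alternative to completing these fields in the Web UI, a data source with the same settings can also be created programmatically. The following is a hypothetical sketch using Python and the requests library: the endpoint path and the option names in the payload (User, Password, LoginURL, SecurityToken) are illustrative assumptions, not the documented request format, so consult the Hybrid Data Pipeline API reference before relying on it.

import requests

# Hypothetical payload mirroring the General tab fields described above.
payload = {
    "name": "MyServiceMaxDS",                      # Data Source Name
    "description": "ServiceMax production data",   # Description
    "options": {
        "User": "user@example.com",                # User Id
        "Password": "secret",                      # Password
        "LoginURL": "login.salesforce.com",        # ServiceMax Login URL
        "SecurityToken": "XXXXXXXXXXXX",           # Security Token
    },
}

resp = requests.post(
    "https://hybridpipe.operations.com/api/mgmt/datasources",  # assumed endpoint
    json=payload,
    auth=("hdp_user", "hdp_password"),  # Hybrid Data Pipeline account credentials
)
print(resp.status_code, resp.json())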

OData tab

The following table describes the controls on the OData tab. For information on using the Configure Schema editor, see Configuring data sources for OData connectivity and working with data source groups. For information on formulating OData requests, see "Formulating queries" under Querying with OData.
Required fields are marked with an asterisk.
[Screenshot: OData tab of the ServiceMax data source setup dialog]
Table 101. OData tab connection parameters for ServiceMax
Field
Description
OData Version
Enables you to choose from the supported OData versions. An OData configuration made with one OData version will not work if you switch to a different OData version. If you want to maintain the data source with different OData versions, you must create a separate data source for each version.
OData Access URI
Specifies the base URI for the OData feed used to access your data source, for example, https://hybridpipe.operations.com/api/odata/<DataSourceName>. You can copy the URI and paste it into your application's OData configuration.
The URI contains the case-insensitive name of the data source to connect to, and the query that you want to execute. This URI is the OData Service Root URI for the OData feed. Issuing a GET request to the data source's service root returns the Service Document for the data source.
The OData Service Document lists the names of the entities exposed by the data source's OData service. To get details such as the properties of the exposed entities, the data types of those properties, and the relationships between entities, fetch the Service Metadata Document by appending /$metadata to the service root URI. (A sketch of these requests follows this table.)
Schema Map
Enables OData support. If a schema map is not defined, the OData API cannot be used to access the data store using this data source definition. Use the Configure Schema editor to select the tables/columns to expose through OData.
Page Size
Determines the number of entities returned on each page when paging is controlled on the server side. On the client side, requests can use the $top and $skip parameters to control paging. In most cases, server-side paging works well for large data sets. Client-side paging works best with smaller data sets, where it is not as expensive to fetch subsequent pages.
Valid Values: 0 | n
where n is an integer from 1 to 10000.
When set to 0, the server default of 2000 is used.
Default: 0
Refresh Result
Controls what happens when you fetch the first page of a cached result when using Client Side Paging. Skip must be omitted or set to 0. You can use the cached copy of that first page, or you can re-execute the query to get a new result, discarding the previously cached result. Re-executing the query is useful when the data being fetched may change between two requests for the first page. Using the cached result is useful if you are paging back and forth through results that are not expected to change.
Valid Values:
When set to 0, the OData service caches the first page of results.
When set to 1, the OData service re-executes the query.
Default: 1
Inline Count Mode
Specifies how the connectivity service satisfies requests that include the $count parameter when it is set to true (for OData version 4) or the $inlinecount parameter when it is set to allpages (for OData version 2). These requests require the connectivity service to include the total number of entities that are defined by the OData query request. The count must be included in the first page in server-driven paging and must be included in every page when using client-driven paging.
The optimal setting depends on the data store and the size of results. The OData service can run a separate query using the count(*) aggregate to get the count, before running the query used to generate the entities. In very large results, this approach can often lead to the first page being returned faster. Alternatively, the OData service can fetch the entire result before returning the first page. This approach works well for small results and for data stores that cannot optimize the count(*) aggregate; however, it may have a longer initial response time for the first page if the result is large.
Valid Values:
When set to 1, the connectivity service runs a separate count(*) aggregate query to get the count of entities before executing the query to return results. In very large results, this approach can often lead to the first page being returned faster.
When set to 2, the connectivity service fetches all entities before returning the first page. For small results, this approach is always faster. However, the initial response time for the first page may be longer if the result is large.
Default: 1
Top Mode
Indicates how requests typically use $top and $skip for client side pagination, allowing the service to better anticipate how to process queries.
Valid Values:
Set to 0 when the application generally uses $top to limit the size of the result and rarely attempts to get additional entities by combining $top and $skip.
Set to 1 when the application uses $top as part of client-driven paging and generally combines $top and $skip to page through the result.
Default: 0
OData Read Only
Controls whether write operations can be performed on the OData service. Write operations generate a 405 Method Not Allowed response if this option is enabled.
Valid Values:
ON | OFF
When ON is selected, OData access is restricted to read-only mode.
When OFF is selected, write operations can be performed on the OData service.
Default: OFF
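The following sketch exercises several of the settings above against the OData endpoint: it fetches the Service Document and Service Metadata Document, pages through an entity set with $top and $skip, and requests an inline count. It assumes the example host from this section, plus an illustrative data source name (MyServiceMaxDS), entity set (Accounts), and Hybrid Data Pipeline account credentials; substitute your own values.

import requests

# OData Access URI (service root) for the data source.
BASE = "https://hybridpipe.operations.com/api/odata/MyServiceMaxDS"
AUTH = ("hdp_user", "hdp_password")  # Hybrid Data Pipeline account credentials

# Service Document: lists the entity sets exposed through the schema map.
print(requests.get(BASE, auth=AUTH, headers={"Accept": "application/json"}).json())

# Service Metadata Document: entity properties, data types, and relationships.
print(requests.get(BASE + "/$metadata", auth=AUTH).text)

# Client-side paging with $top and $skip (see Page Size and Top Mode):
# fetch the second page of 100 entities from the assumed Accounts entity set.
page2 = requests.get(BASE + "/Accounts", params={"$top": 100, "$skip": 100}, auth=AUTH)
print(page2.json())

# Request the total entity count (see Inline Count Mode). $count=true is the
# OData version 4 form; $inlinecount=allpages is the version 2 equivalent.
counted = requests.get(BASE + "/Accounts", params={"$count": "true"}, auth=AUTH)
print(counted.json().get("@odata.count"))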

Mapping tab

[Screenshot: Mapping tab of the ServiceMax data source setup dialog]
Table 102. Mapping tab connection parameters for ServiceMax
Field
Description
Map Name
Optional name of the map definition that the Hybrid Data Pipeline connectivity service uses to interpret the schema of the data store. The Hybrid Data Pipeline service automatically creates a name for the map.
If you want to name the map yourself, enter a unique name.
Refresh Schema
The Refresh Schema option specifies whether the connectivity service attempts to refresh the schema when an application first connects.
Valid Values:
When set to ON, the connectivity service attempts to refresh the schema.
When set to OFF, the connectivity service does not attempt to refresh the schema.
Default: OFF
Notes
*You can refresh the schema immediately by clicking the Refresh icon. The refresh option is available only while editing the data source.
*Clicking the Refresh icon triggers a runtime call on the already saved configuration. If you are making other edits to the settings, you must click Update to save your configuration; the Refresh icon alone will not save it.
Create Mapping
Determines whether the Salesforce table mapping files are to be (re)created.
The Hybrid Data Pipeline connectivity service automatically maps data store objects and fields to tables and columns the first time that it connects to the data store. The map includes both standard and custom objects and includes any relationships defined between objects.
Table 103. Valid values for Create Mapping field
Value
Description
Not Exist
Select this option for most normal operations. If a map for a data source does not exist, this option causes one to be created. If a map exists, the service uses that existing map. If a name is not specified in the Map Name field, the name will be a combination of the User Name and Data Source ID.
Force New
Select this option to force creation of a new map. A map is created on connection whether one exists or not. The connectivity service uses a combination of the User Name and Data Source ID to name the map. Map creation is expensive, so you will likely not want to leave this option set to Force New indefinitely.
No
If a map for a data source does not exist, the connectivity service does not create one.
Map System Column Names
By default, when mapping Salesforce system fields to columns in a table, Hybrid Data Pipeline changes system column names to make it evident that the column is a system column. System columns include those for name and id. If the system column names are not changed and you create a new table with id and name columns, the map will need to append a suffix to your columns to differentiate them from the system columns, even if the map option is set to strip suffixes.
If you do not want to change the names of system columns, set this parameter to 0.
Valid values are described in the following table.
Table 104. Valid values for Map System Column Names
Value
Description
0
Hybrid Data Pipeline does not change the names of the Salesforce system columns.
1
Hybrid Data Pipeline changes the names of the Salesforce system columns as described in the following table:
Field Name
Mapped Name
Id
ROWID
Name
SYS_NAME
IsDeleted
SYS_ISDELETED
CreatedDate
SYS_CREATEDDATE
CreatedById
SYS_CREATEDBYID
LastModifiedDate
SYS_LASTMODIFIEDDATE
LastModifiedById
SYS_LASTMODIFIEDBYID
SystemModstamp
SYS_SYSTEMMODSTAMP
LastActivityDate
SYS_LASTACTIVITYDATE
The default value is 0.
Uppercase Identifiers
Defines how Hybrid Data Pipeline maps identifiers. By default, all unquoted identifier names are mapped to uppercase. Identifiers are object names. Classes, methods, variables, interfaces, and database objects, such as tables, views, columns, indexes, triggers, procedures, constraints, and rules, can have identifiers.
Valid Values:
When set to ON, the connectivity service maps all identifier names to uppercase.
When set to OFF, Hybrid Data Pipeline maps identifiers to the mixed-case name of the object being mapped. If mixed-case identifiers are used, those identifiers must be quoted in SQL statements, and the case of the identifier must exactly match the case of the identifier name.
Note: When object names are passed as arguments to catalog functions, the case of the value must match the case of the name in the database. If an unquoted identifier name was used when the object was created, the value passed to the catalog function must be uppercase because unquoted identifiers are converted to uppercase before being used. If a quoted identifier name was used when the object was created, the value passed to the catalog function must match the case of the name as it was defined. Object names in results returned from catalog functions are returned in the case that they are stored in the database.
For example, if Uppercase Identifiers is set to OFF, to query the Account table you would need to specify:
SELECT "Id", "Name" FROM "Account"
Default: ON
Audit Columns
The audit columns added by Hybrid Data Pipeline are:
*IsDeleted
*CreatedById
*CreatedDate
*LastModifiedById
*LastModifiedDate
*SYSTEMMODSTAMP
The following table describes the valid values for the Audit Columns parameter.
Table 105. Valid values for Audit Columns
Value
Description
All
Hybrid Data Pipeline includes all of the audit columns and the MasterRecordId column in its table definitions.
AuditOnly
Hybrid Data Pipeline adds only the audit columns in its table definitions.
MasterOnly
Hybrid Data Pipeline adds only the MasterRecordId column in its table definitions.
None
Hybrid Data Pipeline does not add the audit columns or the MasterRecordId column in its table definitions.
The default value for Audit Columns is All.
In a typical Salesforce instance, not all users are granted access to the Audit or MasterRecordId columns. If Audit Columns is set to a value other than None and if Hybrid Data Pipeline cannot include the columns requested, the connection fails and an exception is thrown.
Custom Suffix
Data stores treat the creation of standard and custom objects differently. Objects you create in your organization are called custom objects, and the objects already created for you by the data store administrator are called standard objects.
When you create custom objects such as tables and columns, the data store appends a custom suffix (__c) to the name: two underscores immediately followed by a lowercase “c”.
For example, Salesforce will create a table named emp__c if you create a new table using the following statement:
CREATE TABLE emp (id int, name varchar(30))
When you expose external objects, Salesforce appends an __x extension to the name: two underscores immediately followed by a lowercase “x”. This extension is treated in the same way as the __c extension for custom objects.
You might expect to be able to query the table using the name you gave it, emp in the example. Therefore, by default, the connectivity service strips off the suffix, allowing you to make queries without adding the suffix "__c" or "__x". The Custom Suffix field allows you to control whether the map includes the suffix:
*If set to include, the map uses the “__c” or "__x" suffix; you must therefore use it in your queries.
*If set to strip, the suffix is removed in the map. Your queries should not include the suffix when referring to custom fields.
The default value for Custom Suffix is include.
The first time you save and test a connection, a map for that data store is created. Once a map is created, you cannot change the map options for that Data Source definition unless you also create a new map. For example, if a map is created with Custom Suffix set to include and you later change the Custom Suffix value to strip, you will get an error saying the configuration options do not match. To force creation of a new map, change the value of the Create Mapping option to Force New.
Keyword Conflict Suffix
The SQL standard and Hybrid Data Pipeline both define keywords and reserved words. These have special meaning in context, and may not be used as identifier names unless typed in uppercase letters and enclosed in quotation marks.
For example, the Case object is a standard object present in most Salesforce organizations but CASE is also an SQL keyword. Therefore, a table named Case cannot be used in a SQL statement unless enclosed in quotes and entered in uppercase letters:
*Execution of the SQL query Select * from Case will return the following:
Error: [DataDirect][DDHybrid JDBC Driver][Salesforce]Unexpected token: CASE in statement [select * from case]
*Execution of the SQL query Select * from "Case" will return the following:
Error: [DataDirect][DDHybrid JDBC Driver][Salesforce]Table not found in statement [select * from "Case"]
*Execution of the SQL query, Select * from "CASE" will complete successfully.
To avoid using quotes and uppercase for table or column names that match keywords and reserved words, you can instruct Hybrid Data Pipeline to add a suffix to such names. For example, if Keyword Conflict Suffix is set to TAB, the Case table will be mapped to a table named CASETAB. With such a suffix appended in the map, the following queries both work:
*Select * From CASETAB
*Select * From casetab
Number Field Mapping
In addition to the primitive data types, Salesforce also defines custom field data types. The Number Field Mapping parameter defines how Hybrid Data Pipeline maps fields defined with the NUMBER custom field data type. The NUMBER data type can hold any number, with or without a decimal place.
Hybrid Data Pipeline type casts NUMBER data type to the SQL data type DOUBLE and stores the values as DOUBLE.
This type casting can cause problems when the precision of the NUMBER field is greater than the precision of a SQL data type DOUBLE value.
By default, Hybrid Data Pipeline maps NUMBER values with a precision of 9 or less and scale 0 to the SQL data type INTEGER type, and also maps all other NUMBER fields to the SQL data type DOUBLE. Precision is the number of digits in a number. Scale is the number of digits to the right of the decimal point in a number. For example: The number 123.45 has a precision of 5 and a scale of 2.
Valid values for Number Field Mapping are described in the following table.
Table 106. Valid values for Number Field Mapping
Value
Description
alwaysDouble
Hybrid Data Pipeline maps NUMBER fields to the SQL data type DOUBLE.
emulateInteger
Hybrid Data Pipeline maps NUMBER fields with a precision of 9 or less and a scale of 0 to the SQL data type INTEGER and maps all other NUMBER fields to the SQL data type DOUBLE.
The default value for Number Field Mapping is emulateInteger.
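For example, with emulateInteger in effect, a custom field defined as Number(9, 0) is mapped to INTEGER, while fields such as Number(18, 0) or Number(10, 2) are mapped to DOUBLE. If the data source is OData-enabled, one way to verify the resulting mappings is to read the Service Metadata Document, where INTEGER columns surface as Edm.Int32 and DOUBLE columns as Edm.Double (OData version 4). The sketch below does this with Python; the host, data source name, and credentials are illustrative assumptions.

import requests
import xml.etree.ElementTree as ET

# Service root of an OData-enabled data source (see the OData tab).
BASE = "https://hybridpipe.operations.com/api/odata/MyServiceMaxDS"
AUTH = ("hdp_user", "hdp_password")

metadata = requests.get(BASE + "/$metadata", auth=AUTH).text

# Walk the OData version 4 metadata and print each property with its type.
EDM = "{http://docs.oasis-open.org/odata/ns/edm}"
for entity in ET.fromstring(metadata).iter(EDM + "EntityType"):
    for prop in entity.iter(EDM + "Property"):
        print(entity.get("Name"), prop.get("Name"), prop.get("Type"))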

Advanced tab

[Screenshot: Advanced tab of the ServiceMax data source setup dialog]
Table 107. Advanced tab connection parameters for ServiceMax
Field
Description
Web Service Call Limit
The maximum number of Web service calls allowed to the cloud data store for a single SQL statement or metadata query.
The default value is 0.
Web Service Retry Count
The number of times to retry a timed-out Select request. Insert, Update, and Delete requests are never retried. The Web Service Timeout parameter specifies the period between retries. A value of 0 for the retry count prevents retries. A positive integer sets the number of retries. The default value is 0.
Web Service Timeout
The time, in seconds, to wait before retrying a timed-out Select request. Valid only if the value of Web Service Retry Count is greater than zero. If set to 0, there is no timeout; the connectivity service waits indefinitely for the response to a Web service request. A positive integer specifies the default timeout for any statement created by the connection. The default value is 120.
Max Pooled Statements
The maximum number of prepared statements to cache for this connection. If the value of this property is set to 20, the connectivity service caches the last 20 prepared statements that are created by the application.
The default value is 0.
Login Timeout
The amount of time, in seconds, to wait for a connection to be established before timing out the connection request.
If set to 0, the connectivity service does not time out a connection request.
The default value is 0.
Enable Bulk Load
Specifies whether to use the bulk load protocol for insert, update, delete, and batch operations. The bulk load protocol increases the number of rows that the Hybrid Data Pipeline connectivity service sends to the data store in a single request, which reduces the number of network round trips.
The default value is ON.
Bulk Load Threshold
Sets a threshold (number of rows) that, if exceeded, triggers bulk loading for insert, update, delete, or batch operations. The default is 4000. (See the batch insert sketch following this table.)
Initialization String
A semicolon delimited set of commands to be executed on the data store after Hybrid Data Pipeline has established and performed all initialization for the connection. If the execution of a SQL command fails, the connection attempt also fails and Hybrid Data Pipeline returns an error indicating which SQL commands failed.
Syntax:
command[[; command]...]
Where:
command
is a SQL command. Multiple commands must be separated by semicolons. In addition, if this property is specified in a connection URL, the entire value must be enclosed in parentheses when multiple commands are specified. For example, assuming a schema name of SFORCE:
InitializationString=(REFRESH SCHEMA SFORCE)
The default is an empty string.
Read Only
Sets the connection to read-only mode. Indicates that the cloud data store can be read but not updated.
The default value is OFF.
Extended Options
Specifies a semicolon-delimited list of connection options and their values. Use this configuration option to set the value of undocumented connection options that are provided by Progress DataDirect technical support. You can include any valid connection option in the Extended Options string, for example:
Database=Server1;UndocumentedOption1=value[;UndocumentedOption2=value;]
If the Extended Options string contains option values that are also set in the setup dialog, the values of the options specified in the Extended Options string take precedence.
Valid Values: string
Default: empty string
If you are using a proxy server to connect to your cloud data store instance, you must set the following options:
proxyHost=<hostname of the proxy server>;proxyPort=<port number of the proxy server>
If authentication is enabled on the proxy, you must also include the following:
proxyuser=<value>;proxypassword=<value>
Metadata Exposed Schemas
Restricts the metadata exposed by Hybrid Data Pipeline to a single schema. The metadata exposed in the SQL Editor, the Configure Schema Editor, and third party applications will be limited to the specified schema. JDBC, OData, and ODBC metadata calls will also be restricted. In addition, calls made with the Schema API will be limited to the specified schema.
Warning: This functionality should not be regarded as a security measure. While the Metadata Exposed Schemas option restricts the metadata exposed by Hybrid Data Pipeline to a single schema, it does not prevent queries against other schemas on the backend data store. As a matter of best practice, permissions should be set on the backend data store to control the ability of users to query data.
Valid Values:
<schema>
Where:
<schema>
is the name of a valid schema on the backend data store.
Default: No schema is specified. Therefore, all schemas are exposed.
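To illustrate how Enable Bulk Load and Bulk Load Threshold interact, the following sketch performs a batch insert through an ODBC connection to the connectivity service. The DSN name and credentials are illustrative assumptions, and the emp table is the one from the Custom Suffix example on the Mapping tab. With Enable Bulk Load set to ON, a batch larger than the Bulk Load Threshold (4000 rows by default) is sent using the bulk load protocol rather than row-by-row inserts.

import pyodbc

# Connect through an ODBC DSN configured for the Hybrid Data Pipeline driver;
# the DSN name and credentials are assumptions for illustration.
conn = pyodbc.connect("DSN=HDP_ServiceMax;UID=hdp_user;PWD=hdp_password")
cursor = conn.cursor()

# 5000 rows exceeds the default Bulk Load Threshold of 4000, so this batch
# would be eligible for the bulk load protocol when Enable Bulk Load is ON.
rows = [(i, "employee%d" % i) for i in range(5000)]
cursor.executemany("INSERT INTO emp (id, name) VALUES (?, ?)", rows)
conn.commit()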
See the steps for:
How to create a data source in the Web UI