Data Source Name
A unique name for this Data Source definition.
Note: Names can contain only alphanumeric characters and underscores.
A description of this set of connection parameters.
User Id, Password
The login credentials for your Veeva CRM data store account.
Hybrid Data Pipeline uses this information to connect to the data store. The administrator of the cloud data store must grant permission to a user with these credentials to access the data store and the target data.
You can save the Data Source definition without specifying the login credentials. In that case, when you test the Data Source connection, you are prompted to specify them. Applications using the connectivity service must supply the data store credentials (if they are not saved in the Data Source) in addition to the Data Source name and the credentials for the Hybrid Data Pipeline account.
By default, the characters you type in the Password field are not shown. If you want the password to be displayed in clear text, click the eye button. Click the button again to conceal the password.
Veeva CRM Login URL
The data store URL.
login.salesforce.com | test.salesforce.com
If set to login.salesforce.com, the production environment is used.
If set to test.salesforce.com, the test environment is used.
Security Token
The security token is required to log in to Salesforce from an untrusted network.
Salesforce automatically generates this key. If you do not have the security token, log in to your account and go to Setup > My Personal Information > Reset My Security Token. A new token will be sent to you by e-mail.
Specifies the base URI for the OData feed to access your data source, for example, https://hybridpipe.operations.com/api/odata/<DataSourceName>. You can copy the URI and paste it into your application's OData configuration.
The URI contains the case-insensitive name of the data source to connect to, and the query that you want to execute. This URI is the OData Service Root URI for the OData feed. The Service Document for the data source is returned by issuing a GET request to the data source's service root.
The OData Service Document returns the names of the entities exposed by the Data Source OData service. To get details such as the properties of the entities exposed, the data types for those properties and the relationships between entities, the Service Metadata Document can be fetched by adding /$metadata to the service root URI.
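As a sketch of how these URIs fit together (the host and data source name below are hypothetical examples, not values from this document):

```python
# Compose the OData service root and metadata URIs for a data source.
# The base URI and data source name are hypothetical placeholders.
def service_root(base_uri: str, data_source: str) -> str:
    return f"{base_uri.rstrip('/')}/api/odata/{data_source}"

def metadata_uri(base_uri: str, data_source: str) -> str:
    # A GET on the service root returns the Service Document; appending
    # /$metadata returns the Service Metadata Document instead.
    return service_root(base_uri, data_source) + "/$metadata"

print(service_root("https://hybridpipe.operations.com", "MyVeevaDS"))
# https://hybridpipe.operations.com/api/odata/MyVeevaDS
print(metadata_uri("https://hybridpipe.operations.com", "MyVeevaDS"))
# https://hybridpipe.operations.com/api/odata/MyVeevaDS/$metadata
```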
Enables OData support. If a schema map is not defined, the OData API cannot be used to access the data store using this Data Source definition. Use the Configure Schema editor to select the tables to expose through OData.
See Using the Configure Schema editor for more information.
Data Source Caching
Specifies whether the connection to the back-end data source is cached in a session associated with the data source. Caching the back-end connection improves performance when multiple OData queries are submitted to the same data source, because the connection does not need to be created for every query.
Caching of the back-end connection can interfere with configuring a data source for OData. If you change any of the Hybrid Data Pipeline data source connection parameters, the changes are not seen because the cached connection was established using the old data source definition. The session that caches the back-end connection is discarded after approximately 5 minutes of inactivity against the data source.
When you configure a data source for OData, it is recommended that OData session caching be disabled. Once you are satisfied with the OData configuration for the data source, enable the parameter to regain the performance benefit of caching the connection to the back-end data source.
When set to 1, session caching is enabled. This provides better performance for production.
When set to 0, session caching is disabled. Use this value when you are configuring the data source.
Determines the number of entities returned on each page when paging is controlled on the server side. On the client side, requests can use the $top and $skip parameters to control paging. In most cases, server-side paging works well for large data sets. Client-side pagination works best with smaller data sets, where it is not as expensive to fetch subsequent pages.
Valid Values: 0 | n
where n is an integer from 1 to 10000.
When set to 0, the server default of 2000 is used.
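To illustrate the client-side alternative, a client paging through results might compute $top and $skip like this (a minimal sketch, not part of the product API):

```python
# Compute OData client-side paging parameters for a 1-based page number.
def paging_params(page: int, page_size: int) -> dict:
    # $top limits the page size; $skip skips the rows of earlier pages.
    return {"$top": page_size, "$skip": (page - 1) * page_size}

print(paging_params(3, 2000))  # {'$top': 2000, '$skip': 4000}
```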
Controls what happens when you fetch the first page of a cached result when using client-side paging (that is, when $skip is omitted or set to 0). You can use the cached copy of that first page, or you can re-execute the query to get a new result, discarding the previously cached result. Re-executing the query is useful when the data being fetched may change between two requests for the first page. Using the cached result is useful if you are paging back and forth through results that are not expected to change.
When set to 0, the OData service caches the first page of results.
When set to 1, the OData service re-executes the query.
Inline Count Mode
Specifies how the connectivity service satisfies requests that include the $inlinecount parameter when it is set to allpages. These requests require the connectivity service to include the total number of entities that are defined by the OData query request. The count must be included in the first page in server-driven paging and must be included in every page when using client-driven paging.
The optimal setting depends on the data store and the size of results. The OData service can run a separate query using the count(*) aggregate to get the count, before running the query used to generate the entities. In very large results, this approach can often lead to the first page being returned faster. Alternatively, the OData service can fetch the entire result before returning the first page. This approach works well for small results and for data stores that cannot optimize the count(*) aggregate; however, it may have a longer initial response time for the first page if the result is large.
When set to 1, the connectivity service runs a separate count(*) aggregate query to get the count of entities before executing the query to return results. In very large results, this approach can often lead to the first page being returned faster.
When set to 2, the connectivity service fetches all entities before returning the first page. For small results, this approach is always faster. However, the initial response time for the first page may be longer if the result is large.
Indicates how requests typically use $top and $skip for client side pagination, allowing the service to better anticipate how to process queries.
Set to 0 when the application generally uses $top to limit the size of the result and rarely attempts to get additional entities by combining $top and $skip.
Set to 1 when the application uses $top as part of client-driven paging and generally combines $top and $skip to page through the result.
OData Read Only
Controls whether write operations can be performed on the OData service. Write operations generate a 405 Method Not Allowed response if this option is enabled.
Existing OData-enabled data sources are read-only (write operations are disabled). To enable write operations for an existing OData-enabled data source, clear the OData Read Only option on the OData tab. Then, on the Data Sources tab, regenerate the OData model for the data source by clicking the OData model icon.
true | false
When the check box is selected (set to true), OData access is restricted to read-only mode.
When the check box is not selected (set to false), write operations can be performed on the OData service.
Audit Columns
The following table describes the valid values for the Audit Columns parameter and the audit columns that Hybrid Data Pipeline adds.
The default value for Audit Columns is All.
In a typical Salesforce instance, not all users are granted access to the Audit or MasterRecordId columns. If Audit Columns is set to a value other than None and if Hybrid Data Pipeline cannot include the columns requested, the connection fails and an exception is thrown.
Create Map
Determines whether the Veeva CRM table mapping files are to be (re)created.
The Hybrid Data Pipeline connectivity service automatically maps data store objects and fields to tables and columns the first time that it connects to the data store. The map includes both standard and custom objects and includes any relationships defined between objects.
Data stores treat the creation of standard and custom objects differently. Objects you create in your organization are called custom objects, and the objects already created for you by the data store administrator are called standard objects.
When you create custom objects such as tables and columns, the data store appends a custom suffix, __c (two underscores immediately followed by a lowercase "c"), to the name.
For example, Salesforce will create a table named emp__c if you create a new table using the following statement:
CREATE TABLE emp (id int, name varchar(30))
When you expose external objects, Salesforce appends an __x extension (two underscores immediately followed by a lowercase "x"). This extension is treated in the same way as the __c extension for custom objects.
You might expect to be able to query the table using the name you gave it, emp in the example. Therefore, by default, the connectivity service strips off the suffix, allowing you to make queries without adding the suffix "__c" or "__x". The Map Options field allows you to specify a value for CustomSuffix to control whether the map includes the suffix or not:
If set to include, the map uses the "__c" or "__x" suffix; you must therefore use it in your queries.
If set to strip, the suffix is removed in the map. Your queries should not include the suffix when referring to custom fields.
The default value for CustomSuffix is strip.
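The effect of the two CustomSuffix settings can be sketched as a simple name-mapping rule (an illustration only, not the actual mapping code):

```python
# Illustrative CustomSuffix rule: "strip" removes the __c / __x suffix
# from mapped names; "include" keeps it.
def map_name(name: str, custom_suffix: str = "strip") -> str:
    if custom_suffix == "strip":
        for suffix in ("__c", "__x"):
            if name.endswith(suffix):
                return name[: -len(suffix)]
    return name

print(map_name("emp__c"))                            # emp
print(map_name("emp__c", custom_suffix="include"))   # emp__c
```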
The first time you save and test a connection, a map for that data store is created. Once a map is created, you cannot change the map options for that Data Source definition unless you also create a new map. For example, if a map is created with Custom Suffix set to include and you later change the Custom Suffix value to strip, you will get an error saying the configuration options do not match. Change the value of the Create Map option to force creation of a new map.
Keyword Conflict Suffix
The SQL standard and Hybrid Data Pipeline both define keywords and reserved words. These have special meaning in context, and may not be used as identifier names unless typed in uppercase letters and enclosed in quotation marks.
For example, the Case object is a standard object present in most Veeva CRM organizations but CASE is also an SQL keyword. Therefore, a table named Case cannot be used in a SQL statement unless enclosed in quotes and entered in uppercase letters:
Execution of the SQL query Select * from Case will return the following:
Error: [DataDirect][DDCloud JDBC Driver][Salesforce]Unexpected token: CASE in statement [select * from case]
Execution of the SQL query Select * from "Case" will return the following:
Error: [DataDirect][DDCloud JDBC Driver][Salesforce]Table not found in statement [select * from "Case"]
Execution of the SQL query, Select * from "CASE" will complete successfully.
To avoid using quotes and uppercase for table or column names that match keywords and reserved words, you can instruct Hybrid Data Pipeline to add a suffix to such names. For example, if Keyword Conflict Suffix is set to TAB, the Case table will be mapped to a table named CASETAB. With such a suffix appended in the map, the following queries both work:
Select * from CASETAB
Select * from casetab
The default value is an empty string.
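The suffixing rule can be sketched as follows (the keyword set here is a small illustrative subset, not the full list of SQL keywords and reserved words):

```python
# Illustrative Keyword Conflict Suffix rule: append the configured suffix
# to any mapped name that collides with a SQL keyword.
SQL_KEYWORDS = {"CASE", "SELECT", "TABLE", "ORDER", "GROUP"}

def resolve_conflict(name: str, suffix: str) -> str:
    if suffix and name.upper() in SQL_KEYWORDS:
        return name.upper() + suffix
    return name

print(resolve_conflict("Case", "TAB"))     # CASETAB
print(resolve_conflict("Account", "TAB"))  # Account
```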
Optional name of the map definition that the Hybrid Data Pipeline connectivity service uses to interpret the schema of the data store. The Hybrid Data Pipeline service automatically creates a name for the map.
If you want to name the map yourself, enter a unique name.
Map System Column Names
By default, when mapping Veeva CRM system fields to columns in a table, Hybrid Data Pipeline changes system column names to make it evident that the column is a system column. System columns include those for name and id. If the system column names are not changed and you create a new table with id and name columns, the map will need to append a suffix to your columns to differentiate them from the system columns, even if the map option is set to strip suffixes.
If you do not want to change the names of system columns, set this parameter to 0.
Valid values for Map System Column Names are described in the following table.
The default value is 1.
Number Field Mapping
In addition to the primitive data types, Hybrid Data Pipeline also defines custom field data types. The Number Field Mapping parameter defines how Hybrid Data Pipeline maps fields defined as NUMBER (custom field data type). The NUMBER data type can be used to enter any number with or without a decimal place.
Hybrid Data Pipeline type casts NUMBER data type to the SQL data type DOUBLE and stores the values as DOUBLE.
This type casting can cause problems when the precision of the NUMBER field is greater than the precision of a SQL data type DOUBLE value.
By default, Hybrid Data Pipeline maps NUMBER values with a precision of 9 or less and scale 0 to the SQL data type INTEGER type, and also maps all other NUMBER fields to the SQL data type DOUBLE. Precision is the number of digits in a number. Scale is the number of digits to the right of the decimal point in a number. For example: The number 123.45 has a precision of 5 and a scale of 2.
Valid values are described in the following table.
The default value for Number Field Mapping is emulateInteger.
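The default (emulateInteger) rule described above can be sketched as:

```python
# Illustrative emulateInteger rule: NUMBER fields with precision 9 or
# less and scale 0 map to INTEGER; all other NUMBER fields map to DOUBLE.
def map_number_field(precision: int, scale: int) -> str:
    if precision <= 9 and scale == 0:
        return "INTEGER"
    return "DOUBLE"

print(map_number_field(9, 0))   # INTEGER
print(map_number_field(5, 2))   # DOUBLE (e.g. 123.45: precision 5, scale 2)
print(map_number_field(18, 0))  # DOUBLE
```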
Specifies whether the connectivity service attempts to refresh the schema when an application first connects.
When the check box is selected (set to true), the connectivity service attempts to refresh the schema.
When the check box is not selected (set to false), the connectivity service does not attempt to refresh the schema.
Uppercase Identifiers
Defines how Hybrid Data Pipeline maps identifiers. By default, all unquoted identifier names are mapped to uppercase. Identifiers are object names. Classes, methods, variables, interfaces, and database objects, such as tables, views, columns, indexes, triggers, procedures, constraints, and rules, can have identifiers.
When the check box is selected (set to true), the connectivity service maps all identifier names to uppercase.
When the check box is not selected (set to false), Hybrid Data Pipeline maps identifiers to the mixed case name of the object being mapped. If mixed case identifiers are used, those identifiers must be quoted in SQL statements, and the case of the identifier must exactly match the case of the identifier name.
Note: When object names are passed as arguments to catalog functions, the case of the value must match the case of the name in the database. If an unquoted identifier name was used when the object was created, the value passed to the catalog function must be uppercase because unquoted identifiers are converted to uppercase before being used. If a quoted identifier name was used when the object was created, the value passed to the catalog function must match the case of the name as it was defined. Object names in results returned from catalog functions are returned in the case that they are stored in the database.
For example, if the Uppercase Identifiers check box is not selected, to query the Account table you would need to specify:
SELECT "id", "name" FROM "Account"
Bulk Load Threshold
Sets a threshold (number of rows) that, if exceeded, triggers bulk loading for insert, update, delete, or batch operations. The default is 4000.
Enable Bulk Load
Specifies whether to use the bulk load protocol for insert, update, delete, and batch operations. Bulk load increases the number of rows that the Hybrid Data Pipeline connectivity service sends to the data store in a single operation, reducing the number of network round trips.
The default value is false.
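The interaction between the two bulk load settings can be sketched as a simple decision (an illustration of the described behavior, not product code):

```python
# Bulk load is used only when it is enabled and the row count exceeds
# the Bulk Load Threshold (default 4000).
def use_bulk_load(row_count: int, enable_bulk_load: bool = False,
                  bulk_load_threshold: int = 4000) -> bool:
    return enable_bulk_load and row_count > bulk_load_threshold

print(use_bulk_load(5000, enable_bulk_load=True))  # True
print(use_bulk_load(5000))                         # False (disabled by default)
print(use_bulk_load(4000, enable_bulk_load=True))  # False (threshold not exceeded)
```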
Specifies a semicolon-separated list of connection options and their values. Use this configuration option to set the value of undocumented connection options that are provided by Progress DataDirect technical support. You can include any valid connection option in the Extended Options string.
If the Extended Options string contains option values that are also set in the setup dialog, the values of the options specified in the Extended Options string take precedence.
Valid Values: string
Default: empty string
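The format of the string can be illustrated with a small parser (the option names below are hypothetical placeholders, not documented options):

```python
# Parse an Extended Options string of the form "Opt1=value1;Opt2=value2"
# into a dictionary. Empty segments are ignored.
def parse_extended_options(s: str) -> dict:
    options = {}
    for pair in filter(None, (p.strip() for p in s.split(";"))):
        key, _, value = pair.partition("=")
        options[key.strip()] = value.strip()
    return options

print(parse_extended_options("OptionA=1;OptionB=someValue"))
# {'OptionA': '1', 'OptionB': 'someValue'}
```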
A semicolon delimited set of commands to be executed on the data store after Hybrid Data Pipeline has established and performed all initialization for the connection. If the execution of a SQL command fails, the connection attempt also fails and Hybrid Data Pipeline returns an error indicating which SQL commands failed.
Valid Values: SQLcommand[; SQLcommand]... where SQLcommand is a SQL command. Multiple commands must be separated by semicolons.
The default is an empty string.
The amount of time, in seconds, to wait for a connection to be established before timing out the connection request.
If set to 0, the Hybrid Data Pipeline connectivity service does not time out a connection request.
The default value is 0.
Max Pooled Statements
The maximum number of prepared statements to cache for this connection. If the value of this property is set to 20, the connectivity service caches the last 20 prepared statements that are created by the application.
The default value is 0.
Sets the connection to read-only mode. Indicates that the data store can be read but not updated.
The default value is false.
Web Service Call Limit
The maximum number of Web service calls allowed to the cloud data store for a single SQL statement or metadata query.
The default value is 0.
Web Service Retry Count
The number of times to retry a timed-out Select request. Insert, Update, and Delete requests are never retried. The Web Service Timeout parameter specifies the period between retries. A value of 0 for the retry count prevents retries. A positive integer sets the number of retries. The default value is 0.
Web Service Timeout
The time, in seconds, to wait before retrying a timed-out Select request. This parameter applies only when the value of Web Service Retry Count is greater than zero. If set to 0, the connectivity service waits indefinitely for the response to a Web service request; there is no timeout. A positive integer is used as the default timeout for any statement created by the connection. The default value is 120.
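The retry behavior described by these two parameters can be sketched as a generic retry loop (a simplified model of the described behavior, not the service's implementation):

```python
import time

# Retry a timed-out Select up to retry_count times, waiting `timeout`
# seconds between attempts. Insert, Update, and Delete are never retried.
def call_with_retry(execute_select, retry_count: int, timeout: int):
    attempts = retry_count + 1  # the initial call plus the retries
    for attempt in range(attempts):
        try:
            return execute_select()
        except TimeoutError:
            if attempt == attempts - 1:
                raise
            time.sleep(timeout)
```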