HDPREADME.TXT
     Progress(R) DataDirect(R) Hybrid Data Pipeline
     Hybrid Data Pipeline Server
     Release 4.3
     March 2018

***********************************************************************
Copyright (c) 1990-2018 Progress Software Corporation and/or its 
subsidiaries or affiliates. All Rights Reserved.

***********************************************************************

CONTENTS

Changes for Release 4.3
Product Features
Product Components
Server Requirements
Installation Directory
Notes, Known Problems, and Restrictions
Documentation
Installed Files
Third-Party Acknowledgments


     Changes for Release 4.3

LDAP authentication 
-------------------
Support has been added for integration with Active Directory for user
authentication using the LDAP protocol. Customers can create an LDAP
authentication configuration by providing the details of the LDAP server, and
can then configure users to use LDAP authentication instead of the default
authentication.

Permissions feature
-------------------
Support for a permissions API has been added. The Permissions API enables
administrators to manage permissions through the Users, Roles, and DataSource APIs.
In addition, the Permissions API allows administrators to create data sources on
behalf of users and manage end user access to data source details.
Administrators can also specify whether to expose change password functionality
and SQL editor functionality in the Web UI.

Password policy
---------------
Support for a password policy has been added.

Tomcat Upgrade
--------------
The Hybrid Data Pipeline server and On-Premises Connector have been upgraded to
install and use Tomcat 8.5.28.

Support for OData functions
---------------------------
Added OData Version 4 function support for IBM DB2 and Microsoft SQL Server
data stores. If the data stores contain stored functions, they can be exposed
using an OData Version 4 service. As part of OData function support, the OData
schema map version has been changed. The Web UI will automatically migrate the
existing OData schema map to a newer OData schema map version when the OData
schema is modified for OData Version 4 data sources.

The following aspects of OData Version 4 functions are supported:

* Functions that are unbound (static operations)

* Function imports

* Functions that return primitive types 

* Function invocation with OData system query option $filter

Note that the following aspects of OData Version 4 functions are currently 
NOT supported:

* Functions that return result sets

* Functions that are bound to entities

* Built-in functions

* Functions with OUT/INOUT parameters

* Overloaded functions

* Function invocation as part of $select and $orderby

* Function invocation as part of parameter value

* Parameter aliases. As a result, function invocation with function parameters
  passed as URL query parameters is not supported.
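
For example, a supported unbound function exposed as a function import can be
invoked directly in the request URL. The server, data source, function, and
parameter names below are hypothetical:

  https://<myserver>:<port>/api/odata4/SQLSERVER_FUNC/GET_ORDER_TOTAL(ORDER_ID=1234)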

Installation procedures and response file
-----------------------------------------
The Hybrid Data Pipeline service has two default users, "d2cadmin" and
"d2cuser". The installer now prompts you to enter passwords for each default
user. When generating a response file to perform a silent installation, the
installer will not include values for the corresponding properties. Hence, you
will need to add the passwords manually to the response file before proceeding
with a silent installation. Also, note that a password policy is not enforced
during the installation process. The installer only ensures that a value has
been specified.

The password configuration properties for the GUI installer:
D2C_ADMIN_PASSWORD
D2C_USER_PASSWORD

The password configuration properties for the console installer:
D2C_ADMIN_PASSWORD_CONSOLE
D2C_USER_PASSWORD_CONSOLE
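
For example, assuming the key=value format of the generated response file, the
manually added entries would look like the following, with placeholder values
(the first pair applies to a GUI-generated response file, the second pair to a
console-generated response file):

  D2C_ADMIN_PASSWORD=<admin_password>
  D2C_USER_PASSWORD=<user_password>

  D2C_ADMIN_PASSWORD_CONSOLE=<admin_password>
  D2C_USER_PASSWORD_CONSOLE=<user_password>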

Web UI
------
* If you are using an evaluation version of the product, the Web UI now
  displays the evaluation timeout information as 'xx Days Remaining'.

* The product version information now includes details about the license type.
  This can be seen under the version information section of the UI. The license
  type is also returned when you query for version information via the version API.

Apache Hive data store enhancements
-----------------------------------
* Enhanced to optimize the performance of fetches.

* Enhanced to support the Binary, Char, Date, Decimal, and Varchar data types.
  
* Enhanced to support HTTP mode, which allows you to access Apache Hive data
  sources using HTTP/HTTPS requests. HTTP mode can be configured using the new
  Transport Mode and HTTP Path parameters.
  
* Enhanced to support cookie-based authentication for HTTP connections.
  Cookie-based authentication can be configured using the new
  Enable Cookie Authentication and Cookie Name parameters.
  
* Enhanced to support Apache Knox. 

* Enhanced to support Impersonation and Trusted Impersonation using the 
  Impersonate User parameter.
  
* The Batch Mechanism parameter has been added. When Batch Mechanism is set to
  multiRowInsert, the driver executes a single insert for all the rows contained
  in a parameter array. MultiRowInsert is the default setting and provides
  substantial performance gains when performing batch inserts (see the JDBC
  sketch after this list).

* The Catalog Mode parameter allows you to determine whether the native catalog
  functions are used to retrieve information returned by DatabaseMetaData
  functions. In the default setting, Hybrid Data Pipeline employs a combination
  of native functions and driver-discovered information for the optimal balance
  of performance and accuracy when retrieving catalog information.

* The Array Fetch Size parameter improves performance and reduces out-of-memory
  errors. Array Fetch Size can be used to increase throughput or, alternatively,
  to improve response time in Web-based applications.

* The Array Insert Size parameter provides a workaround for memory and server
  issues that can sometimes occur when inserting a large number of rows that
  contain large values.
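
The following is a minimal JDBC sketch of the kind of batch insert that benefits
from the multiRowInsert batch mechanism. The connection URL, credentials, and
the EMP table are placeholders, not actual product values:

  import java.sql.*;

  public class HiveBatchInsertSketch {
      public static void main(String[] args) throws SQLException {
          String url = "<Hybrid Data Pipeline JDBC connection URL>"; // placeholder
          try (Connection con =
                   DriverManager.getConnection(url, "<user>", "<password>");
               PreparedStatement ps = con.prepareStatement(
                   "INSERT INTO EMP (ID, NAME) VALUES (?, ?)")) {
              for (int i = 0; i < 1000; i++) {
                  ps.setInt(1, i);
                  ps.setString(2, "name" + i);
                  ps.addBatch();    // rows accumulate in the parameter array
              }
              // With Batch Mechanism=multiRowInsert, the batched rows are sent
              // as a single multi-row insert instead of one insert per row.
              ps.executeBatch();
          }
      }
  }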

Apache Hive certifications
--------------------------
* Certified with Hive 2.0.x, 2.1.x 

* Apache Hive data store connectivity has been certified with the following
  distributions:
  - Cloudera (CDH) 5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 5.10, 5.11, 5.12
  - Hortonworks (HDP) 2.3, 2.4, 2.5
  - IBM BigInsights 4.1, 4.2, 4.3
  - MapR 5.2

Apache Hive version and distribution support
--------------------------------------------
* Hive versions 1.0 and higher are supported. Support for earlier versions has
  been deprecated.

* The HiveServer2 protocol and higher is supported. As a result:
  - Support for the HiveServer1 protocol has been deprecated.
  - The Wire Protocol Version parameter has been deprecated.

* Support has been deprecated for the following distributions:
  - Amazon Elastic MapReduce (Amazon EMR) 2.1.4, 2.24-3.1.4, 3.2-3.7
  - Cloudera's Distribution Including Apache Hadoop (CDH) 4.0, 4.1, 4.2, 4.5,
    5.0, 5.1, 5.2, 5.3
  - Hortonworks (HDP), versions 1.3, 2.0, 2.1, 2.2
  - IBM BigInsights 3.0
  - MapR Distribution for Apache Hadoop 1.2, 2.0
  - Pivotal Enterprise HD 2.0.1, 2.1

IBM DB2 Data Store
------------------
* Certified with DB2 V12 for z/OS
* Certified with dashDB (IBM Db2 Warehouse on Cloud)

Oracle Marketing Cloud (Oracle Eloqua)
--------------------------------------
Data type support has changed. The following data types are supported for the
Oracle Eloqua data store.
  - BOOLEAN
  - DECIMAL
  - INTEGER
  - LONG
  - LONGSTRING
  - STRING

Oracle Sales Cloud
------------------
Data type support has changed. The following data types are supported for the
Oracle Sales Cloud data store.
  - ARRAY
  - BOOLEAN
  - DATETIME
  - DECIMAL
  - DURATION
  - INTEGER
  - LARGETEXT
  - LONG
  - TEXT
  - URL


     Product Features

Progress DataDirect Hybrid Data Pipeline is a data access platform that provides
simple, secure access to cloud and on-premises data sources, such as RDBMS, Big
Data, and NoSQL. Hybrid Data Pipeline allows business intelligence tools and
applications to use ODBC, JDBC, or OData to access data from supported
data sources. Hybrid Data Pipeline can be installed in the cloud or behind a
firewall. Hybrid Data Pipeline can then be configured to work with applications
and data sources in nearly any business environment.

* Supports access to over 20 data sources through a single, unified interface.

* Supports secure access to data on-premises or in the cloud.

* Can be hosted in the cloud or on premises.

* Supports SaaS, SQL, NoSQL, and Big Data data sources.

* Supports ODBC, JDBC, and OData APIs.


     Product Components

Progress DataDirect Hybrid Data Pipeline consists of four primary, separately
installed components.

* The Hybrid Data Pipeline server provides access to multiple data sources
  through a single, unified interface. The server can be hosted on premises or
  in the cloud.

* The On-Premises Connector enables Hybrid Data Pipeline to establish a
  secure connection from the cloud to an on-premises data source.

* The ODBC driver enables ODBC applications to communicate with a data source
  through the Hybrid Data Pipeline server.

* The JDBC driver enables JDBC applications to communicate with a data source
  through the Hybrid Data Pipeline server.


     Server Requirements

Hybrid Data Pipeline must be installed on a 64-bit Linux machine (4 core, 8 GB
RAM minimum) running one of the following operating systems:

* CentOS Linux x64, version 4.0 and higher

* Ubuntu Linux version 16 and higher

* Oracle Linux x64, version 4.0 and higher

* Red Hat Enterprise Linux x64, version 4.0 and higher

* SUSE Linux Enterprise Server, Linux x64, version 10.x, 11, 12, and 13


     Installation Directory

The default installation directory for the Hybrid Data Pipeline server is:

  /opt/Progress/DataDirect/Hybrid_Data_Pipeline/Hybrid_Server

  Note: If you do not have access to "/opt", your user's home directory will
  take the place of this directory.


     Notes, Known Problems, and Restrictions

The following are notes, known problems, or restrictions with
the Hybrid Data Pipeline server.

FIPS compliance with the On-Premises Connector
----------------------------------------------
The On-Premises Connector is not currently FIPS compliant. Therefore, any
connections made to an on-premises data source through an On-Premises Connector
will not be fully FIPS compliant.

The use of wildcards in SSL server certificates
-----------------------------------------------
The Hybrid Data Pipeline service will not by default connect to a backend data
store that has been configured for SSL when a wildcard is used to identify the
server name in the SSL certificate. If a server certificate contains a wildcard,
the following error will be returned.
     There is a problem connecting to the DataSource. SSL handshake failed:
     sun.security.validator.ValidatorException: PKIX path building failed:
     sun.security.provider.certpath.SunCertPathBuilderException: unable to find
     valid certification path to requested target
To work around this issue, the exact string (with wildcard) in the server
certificate can be specified with the Host Name in Certificate option when
configuring your data source through the Hybrid Data Pipeline user interface or
management API.
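
For example, if the server certificate identifies the server with the
hypothetical wildcard string *.mycompany.com, specify that exact value for
Host Name in Certificate:

     *.mycompany.com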

Load balancer port limitation
-----------------------------
The following requirements and limitations apply to the use of non-standard
ports in a cluster environment behind a load balancer.
* For OData connections, the load balancer must supply the X-Forwarded-Port
  header.
* In the Web UI, the OData tab will not display the correct port in the URL.
* For JDBC connections, a non-standard port must be specified with the
  PortNumber connection property. The connection URL syntax is:
      //<host_name>:<port_number>;<key>=<value>...
* For ODBC connections, a non-standard port must be specified with the
  PortNumber connection option.
* If you are using the On-Premises Connector, a non-standard port must be
  specified with the On-Premises Connector Configuration Tool.

Web UI
------
* When a data source is configured with OData Version 4, the OData Schema Map
  version is 'odata_mapping_v2', and the map does not contain an
  "entityNameMode" property, any further editing of the OData Schema Map
  upgrades the OData Schema Map version to 'odata_mapping_v3' and adds
  "entityNameMode":"pluralize". This affects how entity names are referred to
  in OData queries. To avoid this, set the entityNameMode to the preferred mode
  whenever a data source is created or edited. Alternatively, if you want to
  use the default "Guess" mode, remove the "entityNameMode" property from the
  OData schema map JSON when saving the data source.

* If an administrator creates a user with a password that contains a 
  percentage mark (%), the new user may face issues while trying to login. 
  In addition, Hybrid Data Pipeline functionality may not work as expected.

* When an administrator tries to add new users using the Add Users window, the
  Password and Confirm Password fields occasionally do not appear properly in
  the popup window.

* 'COPY DETAILS' functionality is not currently working in Internet Explorer 11
  due to a limitation with the third-party plugin Clipboard.js on bootstrap
  modals. More details on this can be found at
  https://github.com/zenorocha/clipboard.js/wiki/Known-Issues.

Management API
--------------
* When the Limits API is used to set a row limit and
  createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE) is being used, a
  row-limit-exceeded error is returned at the row limit instead of one row
  beyond the limit. For example, if a row limit of 45 rows is set and a
  scrollable, insensitive result set that exceeds the limit is returned, the
  connectivity service returns the following error on the 45th row rather than
  the expected 46th row: "The limit on the number of rows that can be returned
  from a query -- 45 -- has been exceeded." (See the JDBC sketch at the end of
  this section.)

* If an administrator creates a user with a password that contains a 
  percentage mark (%), the new user may face issues while trying to login. 
  In addition, Hybrid Data Pipeline functionality may not work as expected.
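
The following is a minimal JDBC sketch of the row limit scenario described
above. It assumes a row limit of 45 has been set with the Limits API; the
connection URL, credentials, and ORDERS table are placeholders:

  import java.sql.*;

  public class RowLimitScrollSketch {
      public static void main(String[] args) throws SQLException {
          String url = "<Hybrid Data Pipeline JDBC connection URL>"; // placeholder
          try (Connection con =
                   DriverManager.getConnection(url, "<user>", "<password>");
               Statement stmt = con.createStatement(
                   ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
               ResultSet rs = stmt.executeQuery("SELECT * FROM ORDERS")) {
              while (rs.next()) {
                  // With a row limit of 45, the row-limit-exceeded error is
                  // raised while fetching the 45th row rather than the 46th.
              }
          }
      }
  }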

OData
-----
* Functions are not currently supported for $orderby.

* OData functions are not supported with the On-Premises Connector.

* Functions with default parameters do not work.

* For DB2, the BOOLEAN data type does not work with functions in OData.

* For SQL Server and DB2, the OData data types Edm.Date and Edm.TimeOfDay do
  not work in Power BI if the function is selected from the list of function
  imports and parameter values are provided. However, Power BI allows 'Edm.Date'
  and 'Edm.TimeOfDay' types for function imports when passed directly in the
  OData feed. A workaround is available for the Edm.TimeOfDay type: map the
  columns that are exposed as Edm.TimeOfDay as "TimeAsString" in the
  ODataSchemaMap. In this case, Power BI works as expected.

* In a load balancer setup, when invoking a Function Import (and not a
  Function) that takes datetimeoffset as a parameter, the colon (:) characters
  in the time parameter must be URL encoded. As a result, the following request
  returns an error:
  http://NC-HDP-U13/api/odata4/D2C_ORACLE_ODATAv4_FUNCT/ODATA_FUNC_GTABLE_DATE
  (DATEIN=1999-12-31T00:00:00Z,INTEGERIN=5)
  The correctly URL-encoded request looks like the following:
  http://NC-HDP-U13/api/odata4/D2C_ORACLE_ODATAv4_FUNCT/ODATA_FUNC_GTABLE_DATE
  (DATEIN=1999-12-31T00%3A00%3A00Z,INTEGERIN=5)

* When invoking a Function Import (and not a Function) that returns null using
  Power BI, a data format error is returned. The resolution to this issue is
  being discussed internally as well as with Microsoft.

* OData 4.0 support for $expand does not work with the following data stores:
  Salesforce, Dynamics CRM, SugarCRM, Rollbase, Google Analytics, and Oracle
  Service Cloud.

* $expand is supported only one level deep.
  For example, with the entity hierarchy:
  Customers
  |-- Orders
  | |-- OrderItems
  |-- Contacts

  The following queries are supported:
  Customers?$expand=Orders
  Customers?$expand=Contacts
  Customers?$expand=Orders,Contacts

  However, this query is not supported:
  Customers?$expand=Orders,OrderItems

  OrderItems is a second level entity with respect to Customers. To query Orders
  and OrderItems, the query must be rooted at Orders. For example:
  Orders?$expand=OrderItems
  Orders(id)?$expand=OrderItems

* When manually editing the ODataSchemaMap value, the table names and column
  names specified in the value are case-sensitive. The case of the table and
  column names must match the case of the table and column names reported by
  the data source.
  NOTE: It is highly recommended that you use the OData Schema Editor to
  generate the value for the ODataSchemaMap data source option. The Schema
  Editor takes care of table and column name casing and other syntactic details.

* The $expand clause is not supported with OpenEdge data sources when filtering
  for more than a single table.

* The day, endswith, and cast functions do not work when specified in a
  $filter clause when querying a DB2 data source.
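
  For example, a query such as the following, with a hypothetical entity and
  column name, is affected:
  Orders?$filter=day(OrderDate) eq 15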

All data stores
---------------
* It is recommended that Login Timeout not be disabled (set to 0) for a Data
  Source.

* Using setByte to set parameter values with the Hybrid Data Pipeline JDBC
  driver fails when the data store does not support the TINYINT SQL type. Use
  setShort or setInt to set the parameter value instead of setByte.
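
  A minimal JDBC sketch of this workaround follows. The connection URL,
  credentials, and SCORES table are placeholders:

    import java.sql.*;

    public class SetShortSketch {
        public static void main(String[] args) throws SQLException {
            String url = "<Hybrid Data Pipeline JDBC connection URL>"; // placeholder
            try (Connection con =
                     DriverManager.getConnection(url, "<user>", "<password>");
                 PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO SCORES (ID, RATING) VALUES (?, ?)")) {
                ps.setInt(1, 1);
                // Bind the small integer with setShort (or setInt) rather than
                // setByte, which fails when the data store has no TINYINT type.
                ps.setShort(2, (short) 5);
                ps.executeUpdate();
            }
        }
    }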

Google Analytics
----------------
* A validation message is not shown when a user enters a Start Date value less
  than the End Date value on the Create/Update Google Analytics page.

* Once a Google Analytics OAuth profile is created for a specific Google
  account, changing the Google account associated with the profile results in
  the error "the configuration options used to open the database do not match
  the options used to create the database" being returned for any existing data
  sources.

Microsoft Dynamics CRM
----------------------
* Executing certain queries against MS Dynamics CRM with the JDBC driver may
  result in a "Communication failure. Protocol error."

* Testing has shown the following two errors from Microsoft Dynamics CRM Online
  when executing queries against the ImportData and TeamTemplate tables:
  - Attribute errortype on Entity ImportData is of type picklist but has Child
    Attributes Count 0
  - Attribute issystem on Entity TeamTemplate is of type bit but has Child
    Attributes Count 0
  NOTE: We have filed a case with Microsoft and are waiting to hear back about
  the cause of the issue.

* The initial on-premises connection when the relational map is created can
  take some time. It is even possible to receive a "504: Gateway Timeout" error.
  When this happens, Hybrid Data Pipeline continues to build the map in the
  background such that subsequent connection attempts are successful and have
  full access to the relational map.

OpenEdge 10.2b
--------------
* Setting the MaxPooledStatements data source option in an OpenEdge data store
  to a value other than zero can cause "statement not prepared" errors to be
  returned in some situations.

Oracle Marketing Cloud (Oracle Eloqua)
--------------------------------------
* Data store issues
  - There are known issues with batch operations.
  - The Update/Delete implementation can update only one record at a time.
    Because of this, the number of API calls executed depends on the number of
    records that get updated or deleted by the query plus the number of API
    calls required to fetch the IDs for those records.
  - Lengths of certain text fields are reported as higher than the actual
    lengths supported in Oracle Eloqua.

* Oracle Eloqua REST API issues. We are currently working with Oracle to resolve
  the following issues.
  - AND operators that involve different columns are optimized. In other cases,
    the queries are only partially optimized.
  - OR operators on the same column are optimized. In other cases, the queries
    are completely post-processed.
  - The data store cannot explicitly insert or update a NULL value in any
    field.
  - The data store is unable to update certain fields; they are always reported
    as NULL after an update.
  - Oracle Eloqua uses a double colon (::) as an internal delimiter for
    multivalued Select fields. Hence, when a value containing the semicolon
    character (;) is inserted or updated into a multivalued Select field, the
    semicolon character gets converted into the double colon character.
  - Query SELECT count (*) from template returns incorrect results.
  - Oracle Eloqua APIs do not populate the correct values in CreatedBy and
    UpdatedBy fields. Instead of user names, they contain a Timestamp value.
  - Only equality filters on ID fields are optimized. All other filter
    conditions do not work correctly with the Oracle Eloqua APIs, so the data
    store post-processes such filters.
  - Filters on non-ID Integer fields and Boolean fields do not work correctly.
    Hence, the driver needs to post-process these queries.
  - The data store does not distinguish between NULL and empty string.
    Therefore, null fields are often reported back as empty strings.
  - Values with special characters such as curly braces ({,}), backslash (\),
    colon (:), slash star (/*), and star slash (*/) are not supported in WHERE
    clause filter values.

Oracle Sales Cloud
------------------
* Currently, passing filter conditions to Oracle Sales Cloud works only for
  simple, single column conditions. If there are multiple filters with 'AND'
  and 'OR', only partial or no filters are passed to Oracle Sales Cloud.

* Oracle Sales Cloud reports the data type of String and Date fields as String.
  Therefore, when such fields are filtered or ordered in Hybrid Data Pipeline,
  they are treated as String values. However, when filter conditions are passed
  to Oracle Sales Cloud, Oracle Sales Cloud can distinguish between the actual
  data types and apply Date specific comparisons to Date fields. Therefore,
  query results can differ depending on whether filters have been passed down to
  Oracle Sales Cloud or processed by Hybrid Data Pipeline.

* There appears to be a limitation with the Oracle Sales Cloud REST API
  concerning the >=, <=, and != comparison operators when querying String
  fields. Therefore, Hybrid Data Pipeline has not been optimized to pass these
  comparison operators to Oracle Sales Cloud. We are working with Oracle on this
  issue.

* There appears to be a limitation with the Oracle Sales Cloud REST API
  concerning queries with filter operations on Boolean fields. Therefore, Hybrid
  Data Pipeline has not been optimized to pass filter operations on Boolean
  fields to Oracle Sales Cloud. We are working with Oracle on this issue.

* The drivers currently report ATTACHMENT type fields in the metadata but do not
  support retrieving data for these fields. These fields are set to NULL.

* Join queries between parent and child tables are not supported.

* Queries on child tables whose parent has a composite primary key are not
  supported. For example, the children of ACTIVITIES_ACTIVITYCONTACT and
  LEADS_PRODUCTS are not accessible.

* Queries on the children of relationship objects are not supported. For
  example, the children of ACCOUNTS_RELATIONSHIP, CONTACTS_RELATIONSHIP, and
  HOUSEHOLDS_RELATIONSHIP are not accessible.

* Queries on grandchildren with multiple sets of Parent IDs and Grand Parent IDs
  used in an OR clause are not supported. For example, the following query is
  not supported.
     select * from ACCOUNTS_ADDRESS_ADDRESSPURPOSE
        where (ACCOUNTS_PARTYNUMBER = 'OSC_12343' AND
               ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2XZKGH')
           or (ACCOUNTS_PARTYNUMBER = 'OSC_12344' AND
               ACCOUNTS_ADDRESS_ADDRESSNUMBER = 'AUNA-2YZKGH')

* When querying documented objects like "CATALOGPRODUCTITEMS" and
  "CATEGORYPRODUCTITEMS", no more than 500 records are returned, even when
  more records may be present. This behavior is also seen with some custom
  objects. We are currently working with Oracle support to resolve this issue.

* A query on OPPORTUNITIES_CHILDREVENUE_PRODUCTS or LEADS_PRODUCTGROUPS with
  a filter on the primary key column returns 0 records even when more records
  are present. We are currently working with Oracle support to resolve this
  issue.

* Queries that contain subqueries returning more than 100 records are not
  supported. For example, the following query is not supported.
     select * from ACCOUNTS_ADDRESS
     where ACCOUNTS_PARTYNUMBER
     in (select top 101 PARTYNUMBER from ACCOUNTS)

* When you create custom objects, your Oracle Sales Cloud administrator must
  enable these objects for REST API access through Application Composer.
  Otherwise, you will not be able to query against these custom objects.

Oracle Service Cloud
--------------------
* When you create a custom object, your Oracle Service Cloud administrator must
  enable all four columns of the Object Fields tab of the Object Designer, or
  you cannot query against the custom objects.

* The initial connection when the relational map is created can take some time.
  It is even possible to receive a "504: Gateway Timeout" error. When this
  happens, Hybrid Data Pipeline continues to build the map in the background
  such that subsequent connection attempts are successful and have full access
  to the relational map.

SugarCRM
--------
* Data sources that are using the deprecated enableExportMode option will still
  see a problem until they are migrated to the new data source configuration.

* Data source connections by default now use Export Mode to communicate with
  the SugarCRM server, providing increased performance when querying large sets
  of data. Bulk export mode causes NULL values for currency columns to be
  returned as the value 0. Because of this, there is no way to differentiate
  between a NULL value and 0 when operating in export mode. This can be a
  problem when using currency columns in SQL statements, because Hybrid Data
  Pipeline must satisfy filter conditions on queries that use operations such
  as =, <>, >, >=, <, <=, IS NULL, and IS NOT NULL. For example, suppose a
  currency column in a table in SugarCRM has 3 NULL values and 5 values that
  are 0. When a query is executed to return all NULL values (SELECT * FROM
  <table> WHERE <currency column> IS NULL), then 3 rows are returned. However,
  if a query is executed to return all rows where the column performs an
  arithmetic operation (SELECT * FROM <table> WHERE <currency column> + 1 = 1),
  then all 8 records are returned because the 3 NULL values are seen as 0.

On-Premises Connector
---------------------
* See the hdpopcreadme.txt file for the latest notes when using the On-Premises
  Connector. This file is located in the installation directory for the
  On-Premises Connector.

JDBC Driver
-----------
* See the hdpjdbcreadme.txt file for the latest notes when accessing Hybrid Data
  Pipeline through the JDBC driver. This file is located in the installation
  directory for the JDBC driver.

ODBC Driver
-----------
* See the hdpodbcreadme.txt file for the latest notes when accessing Hybrid Data
  Pipeline through the ODBC driver. This file is located in the installation
  directory for the ODBC driver.


     Documentation

Hybrid Data Pipeline documentation consists of the following guides and readmes.

* PROGRESS DATADIRECT HYBRID DATA PIPELINE INSTALLATION GUIDE
  Available online at
  https://documentation.progress.com/output/DataDirect/hybridpipeinstall

* PROGRESS DATADIRECT HYBRID DATA PIPELINE QUICK START
  Available online at
  https://documentation.progress.com/output/DataDirect/hybridpipestart

* PROGRESS DATADIRECT HYBRID DATA PIPELINE USER'S GUIDE
  Available online at
  https://documentation.progress.com/output/DataDirect/hybridpipeline

* The Hybrid Data Pipeline readme file: hdpreadme.txt
  Installed file and available online at
  https://documentation.progress.com/output/DataDirect/hdpreadmes/hdpreadme.htm

* The On-Premises Connector readme file: hdpopcreadme.txt
  Installed file and available online at
  https://documentation.progress.com/output/DataDirect/hdpreadmes/hdpopcreadme.htm

* The JDBC Driver readme file: hdpjdbcreadme.txt
  Installed file and available online at
  https://documentation.progress.com/output/DataDirect/hdpreadmes/hdpjdbcreadme.htm

* The ODBC Driver readme file: hdpodbcreadme.txt
  Installed file and available online at
  https://documentation.progress.com/output/DataDirect/hdpreadmes/hdpodbcreadme.htm

* The OpenAccess Server readme file: hdpoaserverreadme.txt
  Installed file and available online at
  https://documentation.progress.com/output/DataDirect/hdpreadmes/hdpoaserverreadme.htm


     Installed Files

When you install the Hybrid Data Pipeline server, the installer creates logs
and scripts that can be used to evaluate and troubleshoot issues. These files
can be found in the following locations, where INSTALL_DIR is the installation
directory for the Hybrid Data Pipeline server.

INSTALL_DIR/ddcloud/: 
---------------------
deploy.log                 Log file that provides deployment details

deploy.sh                  Shell script to update existing deployment or echo
                           all output to stdout

error.log                  Log file that provides list of server errors

getlogs.sh                 Shell script that creates a compressed tar file with
                           all server logs

setlogginglevel.sh         Shell script to specify level of detail written to 
                           log files

In addition, the installation process creates four configuration files that are
needed to integrate Hybrid Data Pipeline components. These files are located in
the "redist" subdirectory of the "Key location" directory specified during
installation.

<Key location>/redist/: 
-----------------------
config.properties          File that contains branding information

ddcloud.pem                File for the self-signed certificate

ddcloudTrustStore.jks      The Java keystore

OnPremise.properties       File with server and port information


     Third-Party Acknowledgments

Refer to the following Web page:
https://www.progress.com/legal/hybridpipe-third-party


March 2018
~~~~~~~~~~~~~
End of README