Corticon's alternatives for data integration

Corticon provides three techniques for data integration. Which techniques are right for you depends on your use case; you can assemble the right mix to suit your needs.

When to use the Enterprise Data Connector (EDC)

Corticon EDC is designed to augment data when processing a discrete Decision Service request. Rule authors use it for its user-friendly, intuitive way of modeling data access and persistence in their Rulesheets: it conditionally enriches transactional data and draws on reference data for rule processing, all against a single backend relational database.
For example, a single request to adjudicate an insurance claim tells Corticon to retrieve all required and related data from a database to service the request. Corticon performs well in this scenario, including persisting the claim-processing result back to the database.
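As a minimal sketch of this pattern, the request below sends only a claim identifier and lets EDC retrieve the rest from the configured database. The endpoint path, Decision Service name, and payload field names are illustrative assumptions, not Corticon's exact REST contract; check your server's service contract for the real shapes.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ClaimRequest {
    public static void main(String[] args) throws Exception {
        // With EDC, the request only needs to identify the root entity;
        // Corticon retrieves the claim's related records from the database.
        // The payload shape and attribute names here are assumptions.
        String payload = """
            { "Objects": [ { "claimId": "CLM-1001" } ] }""";

        // Hypothetical endpoint and Decision Service name for illustration.
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8850/decisions/ClaimAdjudication"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(payload))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());   // enriched result, e.g. claim status
    }
}
```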
Corticon EDC is built on Hibernate and is tied to Hibernate's object-relational mapping (ORM) and transaction models. This introduces query and data-processing overhead when reading data from a database with related tables. The biggest performance factor is that each updated object is committed to the database as a distinct update instead of one large, single-pass, multi-record update.
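To make that difference concrete, here is a hedged JDBC sketch (not Corticon source code) contrasting the two write styles: per-object statements with a commit per row, versus one batched, single-pass update. It assumes a connection with autoCommit disabled, and the table and column names are hypothetical.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;

public class UpdateStyles {
    record Claim(long id, String status) {}

    // EDC/ORM style: each updated object becomes its own statement and
    // commit, so N changed rows cost N round trips.
    static void discreteUpdates(Connection conn, List<Claim> claims) throws Exception {
        try (PreparedStatement ps =
                 conn.prepareStatement("UPDATE claim SET status = ? WHERE id = ?")) {
            for (Claim c : claims) {
                ps.setString(1, c.status());
                ps.setLong(2, c.id());
                ps.executeUpdate();
                conn.commit();              // one commit per updated row
            }
        }
    }

    // Single-pass style: the same rows written as one batched update.
    static void batchedUpdate(Connection conn, List<Claim> claims) throws Exception {
        try (PreparedStatement ps =
                 conn.prepareStatement("UPDATE claim SET status = ? WHERE id = ?")) {
            for (Claim c : claims) {
                ps.setString(1, c.status());
                ps.setLong(2, c.id());
                ps.addBatch();
            }
            ps.executeBatch();              // one round trip for all rows
            conn.commit();                  // one commit for the whole set
        }
    }
}
```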
EDC's read and update performance makes it a good choice for single Decision Service request scenarios with limited amounts of data. Examples are individual claims processing, single-client eligibility requests, and single-transaction validation requests. Many Corticon users run EDC deployments in production every day.
EDC's limitation is its performance in large, data-intensive operations, where large chunks of data are loaded into Corticon for processing and then written back to the connected database.

When to use the Advanced Data Connector (ADC)

ADC provides features that improve on EDC. First, ADC is very efficient with larger transactional data sets; second, ADC enables connections to disparate relational databases. Both read and write performance are much better than EDC's when processing larger data sets in a Decision Service request. Without the constraints of Hibernate, ADC adds minimal processing overhead to a Decision Service. ADC can read related data from the database in a few passes, where EDC requires discrete queries to fetch data, and ADC can write data back in chunks, where EDC writes data as discrete updates.
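Below is a hedged JDBC sketch of the single-pass read pattern, assuming a hypothetical claim/claim_procedure schema. It illustrates what ADC-style access achieves, not ADC's internal implementation.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ReadStyles {
    // Single-pass read: one join fetches a month of claims together with
    // their related procedure rows, instead of one query per claim (the
    // discrete-query pattern an ORM tends to produce).
    static void singlePassRead(Connection conn, String month) throws Exception {
        String sql = """
            SELECT c.id, c.status, p.code, p.amount
              FROM claim c
              JOIN claim_procedure p ON p.claim_id = c.id
             WHERE c.claim_month = ?""";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, month);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // hand each row to the Decision Service's working memory
                }
            }
        }
    }
}
```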
Both ADC and EDC are single-threaded: a single Decision Service request is executed by a single Decision Service reactor, which has access to the full collection of data included in the request payload and in the database-exposed entities selected by the filter criteria in the Rulesheets (EDC) or by ADC queries.
Using ADC, you get great performance when processing a large dataset through a single Decision Service request. You then have access to all the data for operations such as aggregation and clustering, and you can build rules that operate on the full collection of data. For example, you can quickly adjudicate all medical claims in a month, approve specific procedures across all hospitals in a specified region, or calculate sales prices for all items in stock. In some situations, it is imperative to have access to the full collection of data for your rules to work properly: for example, when sales prices are calculated with clustering rules in which the sales price of every product in a cluster is based on the average purchase price of that cluster.
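The following sketch shows why that clustering example needs the whole dataset: the cluster averages can only be computed once every product is in memory, so processing products one at a time could not price them correctly. The Product record and markup parameter are hypothetical.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ClusterPricing {
    record Product(String sku, String cluster, double purchasePrice) {}

    // The sales price of each product depends on the average purchase price
    // of its whole cluster, so the full collection must be available at once.
    static Map<String, Double> salesPrices(List<Product> all, double markup) {
        Map<String, Double> clusterAvg = all.stream()
            .collect(Collectors.groupingBy(Product::cluster,
                     Collectors.averagingDouble(Product::purchasePrice)));

        return all.stream()
            .collect(Collectors.toMap(Product::sku,
                     p -> clusterAvg.get(p.cluster()) * markup));
    }
}
```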

When to use batch processing

In some scenarios, it is better and more efficient to leverage the concurrent Decision Service processing capability of Corticon Server by splitting the transactions into batches. If your rules don't need access to the full collection of transactions to make decisions on individual transactions, you might want to use ADC through a batch configuration. Batch processing can handle huge amounts of data, performing billions of transactions remarkably fast.
A requirement for batch processing is that each transaction stands on its own, not needing access to the full collection of data to make decisions on single transactions. Because only so much data can be loaded into Corticon's working memory at once, the data is fed to the rules engine in chunks, and the chunks are then processed concurrently based on resource capacity. Note that there are no return payloads in batch processing; the result of all the rule processing is persisted in the database. A conceptual sketch of this pattern follows.
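This is a minimal sketch of the batch pattern, assuming independent transactions and a hypothetical processChunk worker that reads a chunk of rows, executes the Decision Service, and persists results; it is not Corticon's batch implementation.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BatchRunner {
    // Transactions are independent, so the id range is split into chunks
    // sized to fit working memory and the chunks run concurrently.
    // Nothing is returned to a caller; workers persist their own results.
    public static void main(String[] args) {
        List<long[]> chunks = List.of(
            new long[]{1, 50_000}, new long[]{50_001, 100_000},
            new long[]{100_001, 150_000}, new long[]{150_001, 200_000});

        ExecutorService pool = Executors.newFixedThreadPool(4); // match resource capacity
        for (long[] range : chunks) {
            pool.submit(() -> processChunk(range[0], range[1]));
        }
        pool.shutdown();
    }

    static void processChunk(long fromId, long toId) {
        // hypothetical worker: read rows fromId..toId, execute the
        // Decision Service, and write the results back to the database
        System.out.printf("processed ids %d..%d%n", fromId, toId);
    }
}
```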
Batch processing usually runs against the same input source to process large volumes of data, so it is typically set to run on a schedule that can range from once every minute to once a year.
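As a simple illustration of such scheduling, here is a hedged in-process sketch using the JDK scheduler; a production deployment would more likely use an external scheduler such as cron or the batch configuration itself.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class BatchSchedule {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
        // e.g. run the batch once a day; intervals from minutes upward work
        // the same way, while yearly runs suit an external scheduler better
        scheduler.scheduleAtFixedRate(BatchSchedule::runBatch, 0, 24, TimeUnit.HOURS);
    }

    static void runBatch() {
        // kick off the chunked batch run sketched above
    }
}
```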