
Cluster configuration

The recommendations above also hold true in clustered environments, with the following clarifications:

CPUs and wrappers

Because wrappers are typically located on the Main Cluster Instance while Reactors run on the cluster machines, the direct relationship between CPUs and wrappers is less straightforward in clustered environments. The key relationship becomes the number of CPUs on a cluster machine versus the maximum pool size of any given Decision Service deployed to that machine. If cluster machine A has 4 CPUs, then the maximum pool size for any Decision Service deployed to cluster machine A should not exceed 4.
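
As a minimal sketch (not part of the Corticon API), the sizing rule amounts to comparing a configured maximum pool size against the CPUs the JVM reports on the cluster machine; the class name and the configuredMaxPoolSize value here are hypothetical:

    // Hypothetical illustration: verify a Decision Service's configured
    // maximum pool size against the CPUs available on this cluster machine.
    public final class PoolSizeCheck {
        public static void main(String[] args) {
            int cpus = Runtime.getRuntime().availableProcessors();
            int configuredMaxPoolSize = 4; // taken from the Decision Service's deployment settings
            if (configuredMaxPoolSize > cpus) {
                System.out.println("Warning: max pool size " + configuredMaxPoolSize
                        + " exceeds " + cpus + " CPUs on this machine; reduce the pool size.");
            }
        }
    }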

Wrappers and pools

The wrapper count on the Main Cluster Instance should be greater than or equal to the sum of a given Decision Service's maximum pool sizes across all cluster machines. For example:
*Cluster machine A has Decision Service #1 deployed with min/max pool settings of 4/4.
*Cluster machine B has Decision Service #1 deployed with min/max pool settings of 6/6.
*Cluster machine C has Decision Service #1 deployed with min/max pool settings of 2/2.
Based on this example, the Main Cluster Instance should have at least 12 wrapper instances deployed (4 + 6 + 2) to make the most efficient use of the 12 Reactors available in Decision Service #1's clustered pool.
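
A minimal sketch of the same arithmetic, using the machine names and pool sizes from the example above (the map and class here are illustrative, not Corticon APIs):

    import java.util.Map;

    // Illustrative only: the minimum wrapper count for one Decision Service
    // is the sum of its maximum pool sizes across all cluster machines.
    public final class WrapperSizing {
        public static void main(String[] args) {
            Map<String, Integer> maxPoolSizeByMachine = Map.of(
                    "machineA", 4,   // Decision Service #1, max pool 4
                    "machineB", 6,   // Decision Service #1, max pool 6
                    "machineC", 2);  // Decision Service #1, max pool 2
            int minWrappers = maxPoolSizeByMachine.values().stream()
                    .mapToInt(Integer::intValue)
                    .sum();
            System.out.println("Deploy at least " + minWrappers + " wrappers"); // prints 12
        }
    }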

Shared directories and unique sandboxes

While sharing certain directories across multiple cluster machines is good practice, the nodes in a cluster should not share the same CcServerSandbox directory. Different instances are likely to get out of sync with ServerState.xml, causing instability across all instances. Each cluster member should have its own CcServerSandbox with its own ServerState.xml, yet share the same Deployment Directory (/cdd). Then, when a .cdd or a RuleAsset changes, each node handles its own updates and its own ServerState.xml file.
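
As an illustration, a layout consistent with this guidance for two cluster machines might look like the following; all paths here are hypothetical, only the split between the shared /cdd directory and the per-node sandboxes is the point:

    /shared/cdd/                    (Deployment Directory, shared by all nodes)
        DecisionService1.cdd
    /nodeA/CcServerSandbox/         (private to cluster machine A)
        ServerState.xml
    /nodeB/CcServerSandbox/         (private to cluster machine B)
        ServerState.xml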