Corticon Server: Integration & Deployment Guide : Inside Corticon Server : Multi-threading, concurrency reactors, and server pools


Multi-threading, concurrency reactors, and server pools

Requests to Decision Services are placed in a queue for processing. Server-level thread pooling is implemented by default, using built-in Java concurrent pooling mechanisms to control the number of concurrent executions. This design allows the server to determine how many requests are processed concurrently at any one time.

Implementation of an Execution Queue

Each thread coming into the Server gets an available Reactor from the Decision Service, and then the thread is added to the Server's Execution Thread Pooling Queue (the Execution Queue, for short). The Execution Queue ensures that threads do not overload the cores of the machine: it allows a specified number of threads to start executing while holding back the others. Once an executing thread completes, the next thread in the Execution Queue starts executing.
The Server will discover the number of cores on a machine and, by default, limit the number of concurrent executions to that many cores, but a property can be set to specify the number of concurrent executions. Most use cases will not need to set this property. However, if you have multiple applications running on the same machine as Corticon Server, you might want to set this property lower to limit the system resources Corticon uses. While this tactic might slow down Corticon processing under a heavy load of incoming threads, it helps ensure that Corticon does not monopolize the system. Conversely, if you have Decision Services that make calls to external services (such as EDC to a database), you may want to set this property higher so that a core is not idle while a thread is waiting for a response.
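The core-detection and concurrency-limiting behavior described above can be sketched with standard Java concurrency primitives. This is an illustrative sketch, not Corticon's actual implementation; the class name and the 0-means-auto-detect convention are assumptions modeled on the description in this guide.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CoreBoundedQueue {
    // Resolve the concurrency limit the way the guide describes:
    // 0 means "auto-detect the number of cores on the machine".
    static int concurrencyLimit(int configured) {
        return configured > 0 ? configured : Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) throws InterruptedException {
        int limit = concurrencyLimit(0);
        // A fixed pool admits at most 'limit' tasks at once; the rest wait
        // in the pool's internal FIFO queue, analogous to the Execution Queue.
        ExecutorService pool = Executors.newFixedThreadPool(limit);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < limit * 4; i++) {
            pool.submit(completed::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(completed.get() + " tasks ran with at most " + limit + " concurrent");
    }
}
```

A `Semaphore` bounding admission to an unbounded pool would model the same policy; a fixed pool is simply the most direct analogue of "at most N executing, the rest queued first-in-first-out."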

Ability to Allocate Execution Threads

The Corticon Server takes each thread (regardless of which Decision Service the thread is executing against) and adds it to the Execution Queue in a first-in-first-out strategy. While that satisfies most use cases, you might want more control over which Decision Services get priority over others. For that, you first set the Server property com.corticon.server.decisionservice.allocation.enabled to true, and then set the maximum number of execution threads (maxPoolSize) for each specified Decision Service in the Execution Queue. Once you set the allocation on every Decision Service, the Server will try to maintain the corresponding allocations of execution threads from the Decision Services inside the Execution Queue.
Once the property is set to true, the Decision Services will allocate based on the maxPoolSize that was assigned when the Decision Service was deployed. You can then dynamically change a Decision Service's maxPoolSize, depending on how you deployed that Decision Service:
*If deployed using the API method ICcServer.addDecisionService(..., int aiMaxPoolSize, …), then the maxPoolSize can be updated using ICcServer.modifyDecisionServicePoolSize(int aiMaxPoolSize).
*If deployed using a CDD file, then change the value of max-size in the CDD. When the CcServerMaintenanceThread detects the change, it will update the Decision Service.
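One common way to implement per-service allocations like those described above is a counting semaphore per Decision Service, where maxPoolSize is the number of permits. The sketch below is an illustration of that idea only; ServiceAllocator and its methods are hypothetical names, not Corticon APIs.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

public class ServiceAllocator {
    // One permit pool per Decision Service name; maxPoolSize caps how many
    // execution threads for that service may occupy the Execution Queue at once.
    private final Map<String, Semaphore> allocations = new ConcurrentHashMap<>();

    // Analogous to setting maxPoolSize at deployment (or changing it later).
    public void setMaxPoolSize(String serviceName, int maxPoolSize) {
        allocations.put(serviceName, new Semaphore(maxPoolSize));
    }

    // Returns true if an execution thread for this service may enter now;
    // a service with no allocation set is admitted unconditionally.
    public boolean tryEnter(String serviceName) {
        Semaphore s = allocations.get(serviceName);
        return s == null || s.tryAcquire();
    }

    // Called when an execution thread completes, freeing its slot.
    public void exit(String serviceName) {
        Semaphore s = allocations.get(serviceName);
        if (s != null) s.release();
    }
}
```

With permits exhausted, further threads for that service must wait until one of its executing threads completes, while threads for other services are unaffected.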

Memory management

Allocation means that you could allocate hundreds of execution threads for one Decision Service. Because of the way Reactors are maintained in each Decision Service, the Server can reuse cached data across all Reactors for the Decision Service. Runtime performance should reveal only modest differences in memory utilization between a Decision Service that contains just one Reactor and another that contains hundreds of Reactors. Because each Reactor reuses cached data, the Server can dynamically create a new Reactor per execution thread (rather than creating Reactors and holding them in memory). Even when an allocation is set to 100, the Server only creates a new Reactor (with cached data) for each incoming execution thread, up to 100. If there are only 25 execution threads against Decision Service 1, then there are just 25 Reactors in memory. Large request payloads are more of a concern than the number of concurrent executions or the number of Reactors.
Note: In prior releases, you set minimum and maximum pool sizes that the Server would use based on load. As load increased, the Server would allocate more Reactors to the Decision Service Pool (up to Max Pool Size); then, as load decreased, the Server would remove Reactors (down to Min Pool Size). This mechanism attempted to throttle the Server so that it would not run out of memory. Starting in this release, there is no need to decrease the number of Reactors in the Pool because extra Reactors are not actually sitting in the Pool. A new Reactor is created for every execution thread, and, when the execution is done, the Reactor is not put back into the Pool for reuse (as it was in previous versions); it simply drops out of scope, and garbage collection releases its memory.
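The Reactor-per-thread model described above (lightweight Reactors sharing cached data, created on demand and garbage-collected after use, with no pool) can be illustrated as follows. All class and method names here are hypothetical; this is a sketch of the memory pattern, not Corticon code.

```java
import java.util.Map;

public class ReactorFactory {
    // Read-only data built once per Decision Service deployment and shared
    // by every Reactor (standing in for compiled rules, metadata, etc.).
    private final Map<String, String> sharedCache;

    public ReactorFactory(Map<String, String> sharedCache) {
        this.sharedCache = sharedCache;
    }

    // Each Reactor is cheap: it holds a reference to the shared cache,
    // not a copy, so per-Reactor memory cost stays small.
    class Reactor {
        String lookup(String key) {
            return sharedCache.get(key);
        }
    }

    // A new Reactor per execution; when execute() returns, the Reactor
    // drops out of scope and is garbage-collected -- nothing is pooled.
    public String execute(String key) {
        Reactor reactor = new Reactor();
        return reactor.lookup(key);
    }
}
```

Because each Reactor only references the shared cache, 25 concurrent executions mean 25 small Reactor objects in memory, consistent with the guide's example.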

Related Server properties

The following Server properties let you adjust server executions and allocations:
*Determines how many concurrent executions can occur across the Server. Ideally, this value is set to the number of cores on the machine. By default, this value is 0, which means the Server auto-detects the number of cores on the machine.
*The timeout setting for how long an execution thread can be inside the Execution Queue. The time starts when the execution thread enters the Execution Queue and ends when it completes executing against a Decision Service. A CcServerTimeoutException is thrown if the execution thread fails to complete in the allotted time. The value is in milliseconds. Default value is 180000 (180,000 ms = 3 minutes).
*In some cases, you might want to enable Decision Service level allocations to control the number of Decision Service instances that can be added to the queue at a particular time. This lets you prioritize one Decision Service over another by making more resources available to it. To do this, set the property's value to true. Default value is false.
*Once Decision Service allocation is turned on, prioritization of one Decision Service may occur in the Execution Queue. If a particular Decision Service is fully allocated in the Execution Queue, other execution threads for that Decision Service have to wait until one of the allocated execution threads completes its execution. The wait time for getting into the Execution Queue varies based on load and other Decision Service allocations. You can allow those waiting threads to time out if they wait longer than specified. A CcServerTimeoutException is thrown if the execution thread fails to complete in the allotted time. The value is in milliseconds. Default value is 180000 (180,000 ms = 3 minutes).
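The wait-then-timeout behavior these properties describe can be sketched as a timed semaphore acquisition: a thread waits for an allocation slot up to the configured timeout and then gives up with an exception. The class and method names below are hypothetical; only the exception name CcServerTimeoutException comes from this guide, and it is modeled here as a plain RuntimeException for illustration.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class TimedQueueEntry {
    // Stand-in for the exception the guide says is thrown on timeout.
    static class CcServerTimeoutException extends RuntimeException {
        CcServerTimeoutException(String message) { super(message); }
    }

    private final Semaphore slots;
    private final long timeoutMillis;

    // 'allocation' plays the role of a Decision Service's maxPoolSize;
    // 'timeoutMillis' plays the role of the wait-timeout property.
    TimedQueueEntry(int allocation, long timeoutMillis) {
        this.slots = new Semaphore(allocation);
        this.timeoutMillis = timeoutMillis;
    }

    // Wait up to the configured timeout for an allocation slot, then fail.
    void enterOrTimeout() throws InterruptedException {
        if (!slots.tryAcquire(timeoutMillis, TimeUnit.MILLISECONDS)) {
            throw new CcServerTimeoutException(
                "execution thread waited longer than " + timeoutMillis + " ms");
        }
    }

    // Called when an execution completes, freeing the slot for a waiter.
    void leave() { slots.release(); }
}
```

A default of 180000 in this sketch corresponds to the documented 3-minute default.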

API methods that set and maintain queue allocation

The following CcServer API methods let you specify the queue allocation on a Decision Service:
*addDecisionService methods that have the aiMaxPoolSize parameter.
Note: The addDecisionService methods that previously had int aiMinPoolSize, int aiMaxPoolSize parameters were deprecated and replaced by corresponding addDecisionService methods that set only the int aiMaxPoolSize parameter. Existing calls that use the old signatures remain valid but will ignore the aiMinPoolSize parameter. Check any such existing usage to ensure that the aiMaxPoolSize value is appropriate as the allocation queue value.
*modifyDecisionServicePoolSize methods that have the int aiMaxPoolSize parameter.
Note: The modifyDecisionServicePoolSizes methods (note the plural Sizes) that had the int aiMinPoolSize, int aiMaxPoolSize parameters were deprecated.