BPM Events User's Guide

Event caching and indexing

The event cache provides fast access to past events for event correlation purposes; it is not required if no rules perform event correlation. Any event notified to BPM Events that matches at least one rule activation clause is automatically added to the cache. The cache is not only a memory structure: it also has a persistent image in the database. When recovering from a shutdown, the in-memory cache is rebuilt from this persistent image.
The event cache supports two indexing schemes:
*type ID indexing. All events that are posted to the database event queue, whether from Business Process Server (BP Server) or from external sources, are indexed by their type ID. The type ID for BP Server events is <evttype>::<evtvalue>. The type ID for XML messages is XMLEvent::<evttype>::<evttype>Event.
*group ID indexing. All events of type BP Server are also automatically indexed based on their PROCESSINSTANCEID attribute. In other words, each entry of this index contains the list of all events generated by a single process instance. This indexing allows for fast event correlation within a single process instance.
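To make the two type-ID forms concrete, here is a minimal sketch (not the product's API; the class, method names, and event names are illustrative) of how the strings described above are composed:

```java
// Illustrative only: composes type-ID strings in the two formats
// described above. The sample event type/value names are hypothetical.
public class TypeIds {

    // BP Server events: <evttype>::<evtvalue>
    static String bpServerTypeId(String evtType, String evtValue) {
        return evtType + "::" + evtValue;
    }

    // XML messages: XMLEvent::<evttype>::<evttype>Event
    static String xmlTypeId(String evtType) {
        return "XMLEvent::" + evtType + "::" + evtType + "Event";
    }
}
```

For example, an XML message type PurchaseOrderWithId would be indexed under XMLEvent::PurchaseOrderWithId::PurchaseOrderWithIdEvent.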
For XML messages, you can define a group ID for each XML message type by implementing the public function get_group_id(): int in the corresponding XMLEvent BPM Events class. For instance, suppose a Purchase Order XML message contains a customer ID, and you want to use this ID to index purchase orders of this type. The XSD follows:
<xsd:complexType name="PurchaseOrderWithIdType">
        <xsd:sequence>
                <xsd:element name="custId" type="xsd:decimal"/>
                <xsd:element name="shipTo" type="USAddress"/>
                <xsd:element name="billTo" type="USAddress"/>
        </xsd:sequence>
</xsd:complexType>
In the <BPM Events>\ebmsapps\XMLEvent\rules\PurchaseOrderWithId class, add the get_group_id function:
public fun get_group_id(): int
        return read_jobj().getCustId().intValue();
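The intent of the rule above can be illustrated in plain Java: extract the customer ID from the message body and return it as the group ID. This is a hedged sketch using standard JAXP parsing; the class and method names (GroupIdDemo, getGroupId) are hypothetical and not part of the BPM Events API, which instead exposes the parsed message through read_jobj():

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class GroupIdDemo {

    // Parses a PurchaseOrderWithId message and returns its custId,
    // which would serve as the group ID for cache indexing.
    static int getGroupId(String purchaseOrderXml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(
                            purchaseOrderXml.getBytes(StandardCharsets.UTF_8)));
            String custId = doc.getElementsByTagName("custId")
                    .item(0).getTextContent().trim();
            return Integer.parseInt(custId);
        } catch (Exception e) {
            throw new RuntimeException("cannot extract custId", e);
        }
    }
}
```

With such a function in place, all purchase orders from the same customer land in the same cache entry, so rules correlating them need only that one entry in memory.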
We call the set of events associated with one of these index entries a "cache entry". Cache entries are automatically flushed once the cache reaches its size limit. This flushing is based on a least-recently-used (LRU) algorithm. If a swapped-out cache entry is later required for event correlation, the cache manager automatically reloads it into memory.
The bpmevents.engine.eventcachesize property in the bpmevents.conf file specifies the maximum number of events the in-memory cache stores at any time, not the maximum number of events the cache manager can handle. If the cache size is too small for the number of events actually needed for correlation, only performance is affected, because entries are swapped to and from the database more often.
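For example, a bpmevents.conf entry of the following form sets the limit (the value shown is illustrative, not a recommended default):

```
bpmevents.engine.eventcachesize=5000
```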