SonicMQ API

progress.message.jclient.mp
Interface PacerThread


public interface PacerThread

Title: SonicMQ Adaptive Pacing (Managed Client Pacer Thread)

Description: The Pacer Thread is a background thread created by Message Producers or Consumers that use Adaptive Pacing. It is not exposed as part of the client API and is provided here solely for documentation purposes. The thread uses wait/notify primitives to race its commit timer against a filling message batch. The thread loops, waiting up to AdaptivePacingTimer milliseconds for its commit interval to expire. If during this interval the message buffer fills to its AdaptivePacingBatchSize, the batched message operations (send/receive) are acknowledged and all timers for the session are reset. Otherwise, when the timer expires, a session-level commit is triggered and the in-wait message counter is reset for all Producers or Consumers.
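
To make the loop concrete, the following is a minimal sketch of the pacer loop in Java. It is not the actual PacerThread implementation; the names (PacerLoop, inWaitCount, onMessageOperation(), commitBatch(), and the constructor parameters standing in for AdaptivePacingBatchSize and AdaptivePacingTimer) are illustrative assumptions.

    // A hypothetical sketch of the pacer loop (not the actual PacerThread code).
    class PacerLoop implements Runnable {
        private final Object lock = new Object();
        private final int batchSize;      // stands in for AdaptivePacingBatchSize
        private final long timerInterval; // stands in for AdaptivePacingTimer (millis)
        private int inWaitCount;          // message operations buffered since last commit
        private volatile boolean running = true;

        PacerLoop(int batchSize, long timerInterval) {
            this.batchSize = batchSize;
            this.timerInterval = timerInterval;
        }

        // Called by the paced Producer/Consumer for each send/receive.
        void onMessageOperation() {
            synchronized (lock) {
                inWaitCount++;
                if (inWaitCount >= batchSize) {
                    lock.notify(); // buffer is full: wake the pacer to commit early
                }
            }
        }

        public void run() {
            while (running) {
                synchronized (lock) {
                    try {
                        lock.wait(timerInterval); // race: timer expiry vs. a full buffer
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                    if (inWaitCount > 0) {
                        commitBatch();   // session-level commit of the batched operations
                        inWaitCount = 0; // reset the in-wait counter (and, in effect, the timer)
                    }
                }
            }
        }

        void shutdown() {
            running = false;
            synchronized (lock) { lock.notify(); }
        }

        private void commitBatch() {
            // In the real client this acknowledges the batched send/receive
            // operations, e.g. via Session.commit() on a transacted session.
        }
    }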

One pacer thread is created for each Producer or Consumer enabled with Adaptive Pacing. The reason for this design is to ensure that each message processor can individually trigger commit operations. At the high end of the performance spectrum, with high message rates across multiple message processors in a session, this results in a sliding batch window. Since several processors may each hold a number of in-wait messages, a commit will acknowledge a message set larger than the batch size of any individual processor. This is the desired behavior for sessions with multiple message processors, since it dynamically grows the batch size and allows the session to adapt to the load without adding complexity to session-level logic. Creating such constructs at the session level would have produced the inverse behavior: session-level batch control would have been split between multiple message processors, reducing the size of an individual batch as the number of processors increased and thereby adversely affecting performance. Processor (producer/consumer) level monitor threads, while a bit more complex, ensure that batch sizes grow with the number of processors and with their associated message traffic. Note that setting large values for Buffer Sizes or Timer Interval will result in under-committed operations and will introduce latency that may be unacceptable for Real-Time applications.
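
The sliding batch window can be illustrated with a small sketch: several processors record in-wait messages against one session, and whichever pacer fires first commits the whole set. The class and its members are hypothetical; only Session.commit() is standard JMS.

    import javax.jms.JMSException;
    import javax.jms.Session;

    // Hypothetical illustration of the sliding batch window: counters are
    // per-processor, but a commit acknowledges every in-wait message on the session.
    class SessionWindow {
        private final Object sessionLock = new Object();
        private final Session session; // a transacted JMS session
        private int totalInWait;       // in-wait messages across all processors

        SessionWindow(Session session) { this.session = session; }

        // Each paced Producer/Consumer calls this per message operation.
        void record() {
            synchronized (sessionLock) { totalInWait++; }
        }

        // Any processor's pacer may trigger this once its own batch fills or its
        // timer expires; the commit covers the combined window, which may be
        // larger than any single processor's batch size.
        void commitAll() throws JMSException {
            synchronized (sessionLock) {
                session.commit();
                totalInWait = 0; // resets the in-wait count for all processors
            }
        }
    }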

At the low end of the performance spectrum, where message traffic is lighter, multiple processors may cause over-commit conditions due to low Timer Interval settings. Over-commit conditions can flood the connection with commit operations and slow down performance by introducing additional acknowledgements. It is suggested that processor-level settings be used and set to reflect expected processor-level message operations. However, if traffic patterns across paced Producers or Consumers on the same session are inconsistent and vary greatly across the spectrum, it is recommended that such message processors be broken out into separate sessions, as shown in the sketch below.
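
A minimal sketch of that split, using standard JMS (the queue names are illustrative):

    import javax.jms.*;

    // Give processors with very different traffic rates their own sessions so
    // each session's pacing reflects only its own load.
    class SplitSessions {
        static void splitProcessors(Connection connection) throws JMSException {
            Session fastSession = connection.createSession(true, Session.SESSION_TRANSACTED);
            Session slowSession = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageProducer fast = fastSession.createProducer(fastSession.createQueue("orders.fast"));
            MessageProducer slow = slowSession.createProducer(slowSession.createQueue("audit.slow"));
            // ... tune pacing for each processor independently and send on each session ...
        }
    }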

Adaptive Pacing, Batching and Latency

As previously mentioned, Real-Time systems may have high throughput requirements that mandate low latency. In such environments there is typically a concern that batching will dramatically increase latency to levels unacceptable in a Real-Time environment. Let's take a moment to consider the actual numbers as they apply to general batching and Adaptive Pacing.

Latency is an average over a set of contributing variables. In general, latency is the lag time or overhead (wall clock time) associated with message delivery from a Producer to a Consumer. In a JMS-compliant messaging system, per-message latency must factor in the various Quality of Service and Delivery options, which affect the number of network operations that must occur during message creation and delivery. Guaranteed message delivery, for instance, requires that every message produced by a Sender or Publisher be acknowledged by the message broker before the next message is sent, resulting in synchronous, RPC-style behavior by message producers. Although TCP/IP provides a number of optimization techniques for streaming transmissions, such as a tunable maximum TCP window size, an adjustable Maximum Transmission Unit (MTU), and TCP buffer sizing, most of these settings have no effect on synchronous message publishing.
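
That RPC-style behavior is visible in a plain JMS send loop. A minimal sketch, assuming PERSISTENT (guaranteed) delivery; the queue and payloads are illustrative:

    import javax.jms.*;

    // With PERSISTENT delivery each send() blocks until the broker acknowledges
    // the message, so message N+1 waits on the acknowledgement of message N.
    class GuaranteedSender {
        static void sendGuaranteed(Session session, Queue queue, String[] payloads)
                throws JMSException {
            MessageProducer producer = session.createProducer(queue);
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
            for (String payload : payloads) {
                producer.send(session.createTextMessage(payload)); // blocks for the broker ack
            }
        }
    }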

Raw (socket) tests of this type of guaranteed message production have been run by the Sonic team and by customers across a variety of hardware, with nearly identical results. This type of synchronous message production accounts for the largest performance impact on message producers and consumers alike. When considering groups of messages, synchronous delivery acknowledgement imposes an artificial latency on the message following the one being acknowledged. As such, for synchronous message producers, latency and throughput are linked. This is an important (if not intuitive) fact.

Burst message tests intended to emulate latency scenarios aim to illustrate per-message latency under optimal conditions. Independent tests show that average, non-optimized networks with multiple segments (hops) experience an average latency of ~25 milliseconds. Optimized latency tests using a high-resolution chronometer and a C# test harness have shown that latency may be reduced below 1 millisecond on direct networks, and to 1-2 milliseconds on well-tuned networks using Java applications.
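
A rough Java equivalent of such a latency probe can be sketched with the standard JMS QueueRequestor and System.nanoTime(); the echo queue (served by a responder that replies to each request) and the sample count are assumptions:

    import javax.jms.*;

    // Measures the average request/reply round trip; halving it approximates
    // one-way latency.
    class LatencyProbe {
        static double averageRoundTripMillis(QueueSession session, Queue echoQueue, int samples)
                throws JMSException {
            QueueRequestor requestor = new QueueRequestor(session, echoQueue);
            long totalNanos = 0;
            for (int i = 0; i < samples; i++) {
                long start = System.nanoTime();
                requestor.request(session.createTextMessage("ping")); // blocks for the reply
                totalNanos += System.nanoTime() - start;
            }
            return (totalNanos / (double) samples) / 1_000_000.0; // millis per round trip
        }
    }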

However, such tests usually do not take into consideration the aggregate latency that accrues over time when a higher Quality of Service, such as Fault Tolerant connections, is introduced. For example, 100,000 synchronous messages sent in sequence will accrue a significant amount of latency even at the low per-message rates illustrated above. In this extreme example, 100,000 synchronous messages sent as a logical group impose significant latency (100+ milliseconds) on the messages towards the end of the logical sequence. Hence, if the ratio of messages to their acknowledgements is reduced, the aggregate latency of a message stream is also reduced and total throughput is increased. This can be achieved by organizing message operations into small groups, that is, by batching.
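
The arithmetic behind the ratio argument can be sketched directly; the ~1 millisecond per-acknowledgement cost below is an assumption taken from the optimized figures above:

    // Back-of-envelope arithmetic: batching N messages per acknowledgement
    // cuts the acknowledgement count, and hence the aggregate ack overhead,
    // by a factor of N.
    class AckOverhead {
        public static void main(String[] args) {
            int messages = 100_000;
            double ackMillis = 1.0; // assumed cost of one acknowledgement round trip
            for (int batchSize : new int[] { 1, 10, 20 }) {
                int acks = messages / batchSize; // acknowledgements required
                System.out.printf("batch=%2d -> %6d acks, ~%.0f ms of ack overhead%n",
                        batchSize, acks, acks * ackMillis);
            }
        }
    }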

The opposing argument to batching points out that as batch buffers build, latency begins to accrue towards the front of the batch sequence, since messages accumulate while waiting for a commit. It should be noted, however, that by reducing the number of costly message-operation acknowledgements through batching, the latency associated with QOS is dramatically reduced. Additional network tuning is also unnecessary, as batched operations are better at taking advantage of standard TCP/IP optimization mechanisms. Adaptive Pacing makes use of batching by controlling operation batch size and by allowing the Pacer Thread to perform independent, self-clocked commit operations, similar to the way the Nagle Algorithm paces small-packet traffic at the lower layers of TCP/IP. Assembling batches into message groups of 10-20 can eliminate a significant amount of QOS latency and dramatically increase throughput.

Fast message processors gain the benefit of reduced acknowledgement traffic. While some latency is incurred when compared to asynchronous (unacknowledged) message processing, no significant latency is observed when comparing paced clients with synchronous message processors. For slow message processors, latency may be tuned by setting the Pacer Thread timer's interval. In general, this results in predictable behavior and configurable latency. Another key benefit of this approach is that batch operations mitigate a sender's risk by allowing users to configure the number of outstanding messages between acknowledgements. Unacknowledged (uncommitted) messages may be cached by producers and re-applied after a failure, or re-delivered by the Broker. As such, the benefits associated with batched operations clearly outweigh the issues that arise under specific circumstances.
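
A minimal sketch of the batched commit pattern on a standard transacted JMS session; Adaptive Pacing automates this clocking internally, and the names here are illustrative:

    import javax.jms.*;

    // One commit acknowledges the whole group, cutting the message-to-ack ratio;
    // uncommitted messages remain recoverable after a failure.
    class BatchedSender {
        static void sendBatched(Connection connection, String queueName,
                                String[] payloads, int batchSize) throws JMSException {
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageProducer producer = session.createProducer(session.createQueue(queueName));
            int pending = 0;
            for (String payload : payloads) {
                producer.send(session.createTextMessage(payload));
                if (++pending >= batchSize) {
                    session.commit(); // one acknowledgement covers the entire batch
                    pending = 0;
                }
            }
            if (pending > 0) {
                session.commit(); // flush the final partial batch
            }
        }
    }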

Copyright: Copyright (c) 2006

Company: Progress Software

Version:
2.2
Author:
Dmitry Lelchuk



Copyright © 1999-2011 Progress Software Corporation. All Rights Reserved.