Developing AppServer Applications
Understanding the session-free queuing model

For a given logical connection to an application service, asynchronous requests do not execute in any determined order, nor do their event procedures execute in any determined order. Unlike requests sent over physical connections, which are handled sequentially, requests sent over logical connections are handled in parallel and complete in an order that depends entirely on available AppServer resources.
The send queue holds the requests that the client submits for execution, in the order that the client submits them. The response queue holds the responses received on a given application service connection for requests that have completed execution. Finally, the ABL event queue holds the PROCEDURE-COMPLETE event for each completed request as it is received from the response queue. However, it is the availability of AppServer resources in the application service connection pool that determines when and how these requests are actually processed.
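For example, the following ABL sketch shows one way a client might submit several asynchronous requests over a session-free connection. The application service (OrderSvc), host, port, remote procedure (getOrderTotal.p), and event procedure (orderDone) are hypothetical, and the exact connection parameters depend on your AppServer configuration.

/* Minimal sketch: submitting session-free asynchronous requests.
   Service, host, port, and procedure names are hypothetical. */
DEFINE VARIABLE hServer    AS HANDLE  NO-UNDO.
DEFINE VARIABLE hRequest   AS HANDLE  NO-UNDO.
DEFINE VARIABLE iOrder     AS INTEGER NO-UNDO.
DEFINE VARIABLE iOrderEcho AS INTEGER NO-UNDO.
DEFINE VARIABLE dTotal     AS DECIMAL NO-UNDO.

CREATE SERVER hServer.
/* -sessionModel Session-free requests a logical (session-free) connection */
hServer:CONNECT("-AppService OrderSvc -H svchost -S 5162 -sessionModel Session-free").

/* Each asynchronous RUN places a request on the send queue and returns
   immediately. The OUTPUT values are not returned to the variables named
   here; they are delivered later to the event procedure as INPUT parameters. */
DO iOrder = 1 TO 9:
    RUN getOrderTotal.p ON SERVER hServer ASYNCHRONOUS
        SET hRequest
        EVENT-PROCEDURE "orderDone" IN THIS-PROCEDURE
        (INPUT iOrder, OUTPUT iOrderEcho, OUTPUT dTotal).
END.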
The following figure shows how this works, with nine asynchronous requests submitted on the client. In this example, the requests are assumed to be sent for a single application service binding that is supported by two AppServer instances (A and B).
Figure 3. Session-free asynchronous request queuing
For simplicity, the example assumes that each AppServer has only one agent to execute requests. However, the principle is the same for one AppServer running two agents to execute the requests. The main difference is that two AppServers are likely to provide higher general availability for the application service. AsyncRequest refers to the execution of an asynchronous RUN statement, and EventProc refers to the execution of an event procedure in response to the PROCEDURE-COMPLETE event handled from the ABL event queue. The requests are numbered in the order that the client submits them, that is, the order in which the asynchronous RUN statements execute on the client.
On a given application service binding, if an asynchronous request is submitted for execution when all AppServer resources are unavailable, that request is queued on the send queue until all previously submitted asynchronous requests for that connection have been sent to an AppServer for execution. Such is the case for AsyncRequests 8 and 9. The most recent asynchronous request (9) is sent to an AppServer for execution only after AsyncRequest 8 has been submitted to an available AppServer for execution. All earlier asynchronous requests have either completed execution (1, 2, 4, 5, and 6) or are executing on an AppServer (3 and 7).
Note that although the client submitted all of these requests in order from 1 to 9, AsyncRequest 3 is still executing, while AsyncRequests 4 through 6 have already completed. If this were a session-managed application, AsyncRequests 4 through 7 would still be in the send queue, waiting for 3 to complete. Because the application is session-free, AsyncRequest 3 has taken so long to complete execution on AppServer B that AsyncRequests 4 through 6 have already completed execution on AppServer A, and AsyncRequest 7 is still executing on AppServer A.
Assuming that 3 completes while 7 is still running, 3's results appear in the response queue ahead of 7's, and AsyncRequest 8 will likely begin executing on AppServer B. Note that when this series of requests began, AsyncRequests 1 and 2 would have executed at the same time on AppServers A and B, but 2 clearly completed before 1 and passed its results back ahead of it.
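Continuing the earlier hypothetical sketch, one way for the client to drain the ABL event queue is to wait until every outstanding request on the connection has completed; the PROCEDURE-COMPLETE events are handled in the order the responses arrive, not the order the requests were submitted. This assumes the hServer handle from the previous sketch.

/* Block until all outstanding asynchronous requests on this connection
   have completed. Each PROCEDURE-COMPLETE event runs its event procedure
   in arrival order, regardless of submission order. */
DO WHILE hServer:ASYNC-REQUEST-COUNT > 0:
    WAIT-FOR PROCEDURE-COMPLETE OF hServer.
END.

hServer:DISCONNECT().
DELETE OBJECT hServer.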
So, while results might return faster in a session-free application, the application must be written to handle them in any order. For example, the results of each request might be used to populate a temp-table that the application later reads in index order, so the order in which the results arrive is irrelevant.
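Continuing the same hypothetical sketch, the event procedure below inserts each result into a temp-table keyed by order number. Because the table is read back in index order, it does not matter which AppServer executes a request or when its response arrives; the remote procedure is assumed to echo the order number back as an OUTPUT parameter so the event procedure can identify the result.

/* Temp-table keyed by order number; results can arrive in any order. */
DEFINE TEMP-TABLE ttResult NO-UNDO
    FIELD OrderNum AS INTEGER
    FIELD Total    AS DECIMAL
    INDEX idxOrder IS PRIMARY UNIQUE OrderNum.

/* Event procedure run for each PROCEDURE-COMPLETE event. The OUTPUT
   parameters of the remote procedure arrive here as INPUT parameters. */
PROCEDURE orderDone:
    DEFINE INPUT PARAMETER piOrderNum AS INTEGER NO-UNDO.
    DEFINE INPUT PARAMETER pdTotal    AS DECIMAL NO-UNDO.

    CREATE ttResult.
    ASSIGN ttResult.OrderNum = piOrderNum
           ttResult.Total    = pdTotal.
END PROCEDURE.

Once the WAIT-FOR loop shown earlier confirms that all requests have completed, the client can read the results with FOR EACH ttResult BY ttResult.OrderNum, regardless of the order in which the responses arrived.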
Note again that, in the previous figure, if AppServers A and B were actually two agents of a single AppServer, the order of completion might be exactly the same, especially if the two-AppServer scenario balances load evenly, because a state-free AppServer distributes its requests evenly among all of its available agents. The one factor that could well change the order of completion between the two-AppServer and one-AppServer scenarios is load balancing: if the two-AppServer configuration used load balancing with a significantly different weight assigned to each AppServer, requests would no longer be distributed evenly between them.