Next: The Twisted Framework Up: Implementing OPC Server Functionality Previous: Implementing OPC Server Functionality   Contents

Handling Concurrent Requests

The server must also be able to maintain concurrent client connections, so that one long-running operation does not starve others. Moreover, a single OPC message may contain multiple requests; one ReadRequest message, for instance, may request several items at once. In addition, OPC requests may carry a so-called ``Deadline'', which specifies how long a single item request may take. Handling such multi-item messages by issuing sequential ESD commands is therefore inappropriate, as one long-running command could starve all others.

Figure 63 shows two possible solutions to this problem. The left diagram solves the problem by issuing one ESD command per requested item, while the right diagram implements a connection pool which maintains multiple open connections to the ESD server.

Figure 63: Handling Concurrent ESD Requests

In this implementation, the solution shown in the left diagram is chosen, mainly for its simplicity: setting up and maintaining a connection pool would add considerable complexity. The disadvantage is that many concurrent client connections may lead to high memory consumption in the OPC server and a high load on the ESD server136.
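The per-item approach of the left diagram can be sketched as follows, using Python's asyncio purely for illustration (the names `read_items`, `fetch` and the timeout handling are hypothetical and do not represent the actual server code): one asynchronous ESD read is issued per requested item, and items that exceed the deadline are reported as timed out instead of blocking the rest.

```python
import asyncio

async def read_items(item_names, deadline_s, fetch):
    # Issue one asynchronous ESD read per requested item, so a slow
    # item cannot starve the others. Items still pending when the
    # deadline expires are reported as None (timed out).
    async def read_one(name):
        try:
            value = await asyncio.wait_for(fetch(name), timeout=deadline_s)
        except asyncio.TimeoutError:
            value = None
        return name, value

    results = await asyncio.gather(*(read_one(n) for n in item_names))
    return dict(results)
```

With a fetch function that answers one item quickly and another slowly, the fast item is returned normally while the slow one is cut off at the deadline, mirroring the per-item semantics described above.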

There are essentially three methods of achieving concurrency in a server:

1. Multiple Processes:
Each connection is handled in a separate operating system process. The operating system maintains the concurrency of these processes.

2. Multithreading:
Every connection is handled by a separate thread, which is managed by the threading framework.

3. Event-driven Design:
This technique uses non-blocking function calls, making the program asynchronous. Functions signal their readiness to a scheduler, which decides when each function is executed.
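The event-driven design can be illustrated with a minimal scheduler sketch (all names here are hypothetical, chosen only for this example): functions register interest in a named event, firing the event marks them ready, and the scheduler then executes the ready functions one after another.

```python
from collections import deque

class EventLoop:
    # Minimal event-driven scheduler sketch: callbacks register
    # interest in a named event; firing the event marks them ready,
    # and run() executes all ready callbacks in order.
    def __init__(self):
        self._waiting = {}     # event name -> list of callbacks
        self._ready = deque()  # (callback, payload) pairs ready to run

    def wait_for(self, event, callback):
        self._waiting.setdefault(event, []).append(callback)

    def fire(self, event, payload=None):
        # The event occurred: move its callbacks to the ready queue.
        for cb in self._waiting.pop(event, []):
            self._ready.append((cb, payload))

    def run(self):
        # Execute ready callbacks sequentially; no locking is needed,
        # since only one callback runs at a time.
        results = []
        while self._ready:
            cb, payload = self._ready.popleft()
            results.append(cb(payload))
        return results
```

A handler registered with `loop.wait_for("request", handle)` is only executed once `loop.fire("request", message)` signals that a message has arrived, which is exactly the readiness-signalling behaviour described above.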

Each of these concepts has its strengths and weaknesses. For this design, an event-driven concept seems most appropriate for the following reasons:

Ease of use:
Compared to threading, an event-driven program does not have to be thread-safe. Therefore data can easily be shared among functions and there is no need for mutexes or similar concepts.
Good for Network Programming:
In an event-driven system, functions are non-blocking: they are chained together and wait for a specific event, and when an event triggers such a chain, the functions are executed one after another. This concept fits network servers well, as server functions typically wait for messages from a peer, which then trigger such a chain of functions.
High Level of Control:
In event-driven programs, the programmer can carefully choose the order in which functions are executed. Compared to multiple processes, this offers a higher level of control.
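The chaining of non-blocking functions described above can be sketched as a simple callback chain in the style of Twisted's Deferred (a simplified, hypothetical re-implementation for illustration, not Twisted's actual class): each function in the chain receives the previous function's result, and the whole chain only runs once the awaited event delivers a value.

```python
class CallbackChain:
    # Simplified sketch of a chain of functions waiting for one event:
    # callbacks are queued until fire() delivers a value, then run
    # one after another, each receiving the previous result.
    def __init__(self):
        self._callbacks = []
        self._fired = False
        self.result = None

    def add_callback(self, fn):
        if self._fired:
            self.result = fn(self.result)  # event already arrived
        else:
            self._callbacks.append(fn)
        return self

    def fire(self, value):
        # The awaited event (e.g. a message from a peer) occurred:
        # execute the chained functions sequentially.
        self.result = value
        self._fired = True
        for fn in self._callbacks:
            self.result = fn(self.result)
        return self.result
```

For example, a chain `add_callback(parse).add_callback(handle)` set up on an incoming connection stays idle until a message arrives and `fire(message)` runs both steps in order, matching the behaviour of chained server functions waiting for a peer.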

The disadvantage of an event-driven design is that it is unsuitable where computation-intensive tasks must be parallelized. Moreover, the design has to be asynchronous from the beginning; existing blocking functions cannot be parallelized later. However, these two disadvantages do not matter for this design, as there are no computation-intensive functions and the OPC server is modeled with the event-driven concept in mind from the start.

ZSI does not support concurrent programming very well. It offers dispatching of SOAP methods in a CGI-like style; however, this is inappropriate for this design, as a CGI program requires a separate web server. ZSI also provides a simple HTTP server with very limited functionality, but its dispatching is based solely on the top-level SOAP element and can therefore not be used for OPC XML-DA.

Hermann Himmelbauer 2006-09-27