
Scalability and Performance of the IGUANA OPC Server

The next step was to measure the performance of the IGUANA OPC server in different scenarios. The test setup consisted of the OPC server under test, a client that generates different loads on the server, and the ESD simulation server that provides appropriate fieldbus data to the server under test.

In order to obtain unbiased test results, the OPC server under test was placed on a different machine than the test helper programs, i.e. the client and the ESD server. This way, the measurements of the server performance could not be distorted by other CPU-demanding processes. Moreover, the test was set up with server hardware slower than the client hardware, so that the server would never idle while waiting for a slow client.

Two OPC operations were chosen for the test: ``GetStatus'', which does not request data from the ESD server, and ``Browse'', which retrieves fieldbus data via an ESD ``NODELIST'' command. Each test consisted of 500 requests of one of these operations. The requests were first issued in a pipelined manner, i.e. one after another; then multiple simultaneous requests were issued, ranging from 5 up to 500 concurrent requests.
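The original test client is not reproduced in this section; the following is a minimal sketch of such a load generator, assuming the server speaks OPC XML-DA over HTTP. The endpoint URL, port and the SOAP request body are placeholders, not the original test parameters.

    # Minimal load-generator sketch (not the original test client).
    # SERVER_URL and GETSTATUS_BODY are placeholders; a real test would
    # send a valid OPC XML-DA SOAP envelope for ``GetStatus''.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    SERVER_URL = "http://testserver:8000/"          # hypothetical endpoint
    GETSTATUS_BODY = b"<!-- SOAP envelope omitted -->"
    TOTAL_REQUESTS = 500

    def one_request(_):
        req = urllib.request.Request(
            SERVER_URL, data=GETSTATUS_BODY,
            headers={"Content-Type": "text/xml; charset=utf-8"})
        with urllib.request.urlopen(req) as resp:
            resp.read()

    def run_test(concurrency):
        # Issue TOTAL_REQUESTS requests with at most `concurrency` in
        # flight; concurrency=1 corresponds to the pipelined case.
        start = time.monotonic()
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            list(pool.map(one_request, range(TOTAL_REQUESTS)))
        elapsed = time.monotonic() - start
        print(f"{concurrency:4d} concurrent: {TOTAL_REQUESTS / elapsed:.1f} requests/s")

    for c in (1, 5, 10, 50, 100, 500):
        run_test(c)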

It was observed that the timing results for up to 50 concurrent requests were quite consistent, while 100 and more concurrent requests led to erratic results. The reason for this behavior was unclear at first, but it was noticed that there were short periods where both the client and the server were idle, so a problem with the network was suspected. Therefore a network I/O test was performed with a network analyzer tool. The resulting diagrams are depicted in figure 74.

Figure 74: Networking I/O Diagram During Server Testing
[Figure: graphics/perf-concurrency.eps]
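The analyzer tool itself is not identified in this excerpt. Conceptually, such a measurement can be reproduced by sampling the per-interface byte counters that Linux exposes in /proc/net/dev, as in the following sketch; the interface name eth0 and the one-second sampling interval are assumptions.

    # Sketch of a simple network I/O sampler: prints the rx/tx rate of
    # IFACE once per second, based on the Linux /proc/net/dev counters.
    import time

    IFACE = "eth0"   # assumed interface name

    def read_bytes(iface):
        # Return (rx_bytes, tx_bytes) for `iface` from /proc/net/dev.
        with open("/proc/net/dev") as f:
            for line in f:
                name, _, rest = line.partition(":")
                if name.strip() == iface:
                    fields = rest.split()
                    return int(fields[0]), int(fields[8])
        raise ValueError(f"interface {iface!r} not found")

    prev = read_bytes(IFACE)
    for _ in range(60):                 # sample for one minute
        time.sleep(1.0)
        cur = read_bytes(IFACE)
        print(f"rx {cur[0] - prev[0]:>10} B/s   tx {cur[1] - prev[1]:>10} B/s")
        prev = cur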

The first two diagrams show the network load for a pipelined query, where requests are issued one after another, and for a scenario with 10 concurrent requests. Multiple test runs always led to very similar I/O diagrams. However, the two diagrams on the right, which both result from server tests with 100 concurrent connections, clearly differ. It is very likely that this inconsistent network load led to the erratic test results noted above. The cause of this problem could not be located, but it likely lies either in the Twisted framework or in the TCP stack of the operating system.

Due to these problems, the test setup was changed so that all test programs resided on the same machine. This way, the client CPU load influences the test results, but this influence was observed to be minimal and can be neglected. Figure 75 depicts the test results.

Figure 75: Server Throughput
[Figure: graphics/perf-server1.eps]

It can be seen that the IGUANA OPC server scales well up to 100 concurrent requests. Above this value, the throughput begins to decrease. However, an IGUANA gateway will probably never encounter such high loads, so the maximum number of simultaneously allowed connections may simply be limited to 100. This way, the server always operates in the range where it provides optimal performance.
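Since the IGUANA server is built on Twisted, one possible way to enforce such a limit is Twisted's connection-limiting factory from the policies module. The sketch below uses a trivial echo protocol as a stand-in for the actual OPC server protocol; port 8000 is an assumption.

    # Sketch of capping concurrent connections with Twisted's
    # LimitTotalConnectionsFactory. EchoProtocol stands in for the
    # real OPC server protocol.
    from twisted.internet import reactor, protocol
    from twisted.protocols.policies import LimitTotalConnectionsFactory

    class EchoProtocol(protocol.Protocol):
        def dataReceived(self, data):
            self.transport.write(data)

    factory = LimitTotalConnectionsFactory()
    factory.protocol = EchoProtocol      # served while under the limit
    factory.connectionLimit = 100        # the 101st connection is refused
    # factory.overflowProtocol could serve a "busy" reply instead

    reactor.listenTCP(8000, factory)
    reactor.run()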

From this graph the average throughput can be deduced, which is at minimum 20 Browse requests per second and 26 GetStatus requests per second. As the IGUANA gateway hardware is approximately ten times slower than the testing hardware, it can be estimated that the gateway could handle about two OPC operations per second, which is reasonable performance for a fieldbus gateway.
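As a back-of-the-envelope check, taking the slower of the two measured rates together with the stated factor of ten between testing and gateway hardware:

\[
  T_{\mathrm{gateway}} \approx \frac{\min(20, 26)\ \mathrm{requests/s}}{10} = 2\ \mathrm{requests/s}
\]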

