
Simple Client-Server Pattern  

This example illustrates the basic elements of the PPL and PMO in the context of a simple client-server pattern, which reflects current conventional benchmarking practice. The PPL script corresponding to this scenario is listed in Figure 2. The client and server in this case are the Sender and Receiver, respectively. The parameters required for testing between these two CORBA objects are given in the object blocks of the PPL script, and the relationship between the objects is specified in its relation block. Execution of the pattern is specified by the one-line schedule.


  
Figure 2: Simple Client-Server Pattern
\begin{figure}
\centerline{\psfig{file=figs/cli-ser.eps,width=2.5in}}
\scriptsize
\begin{verbatim}
...
...on Schedule */

CUBIT-TESTS;
\end{verbatim}
\normalsize
\vspace*{-7mm}
\end{figure}
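The listing in Figure 2 did not survive conversion intact, so the following sketch only suggests the overall shape of such a PPL script. The block keywords and field names here are illustrative assumptions, not the paper's exact grammar; only the host names, orb_name = TAO, and the one-line CUBIT-TESTS; schedule appear in the paper itself.

```
/* Object blocks: test parameters for each CORBA object (field names assumed) */
object Sender   { host = marcus; orb_name = TAO; }
object Receiver { host = zeno;   orb_name = TAO; }

/* Relation block: Sender acts as a client of Receiver (syntax assumed) */
relation client_server (Sender, Receiver);

/* Execution Schedule */
CUBIT-TESTS;
```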

The PPL compiler takes the script as input, analyzes the object definitions and relations, and generates the NetSpec PMO script shown in Figure 3. The first thing to note is that the PMO script has two major sections: one defines the client as a corba daemon running on the machine marcus, and the other defines the server as a corba daemon running on the machine zeno. The second point is that the parameter block is specified explicitly for each daemon. The main point is that the PMO script defines each object separately, so the relations among the objects are more difficult to discern in the PMO language than in the PPL.


  
Figure 3: Corresponding Client-Server PMO NetSpec Script
\begin{figure}
\scriptsize
\begin{verbatim}
cluster {
 corba marcus {
 NameOfORB...
 ...e = eth, port = 22222);
 }
}
\end{verbatim}
\normalsize
\vspace*{-7mm}
\end{figure}
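Only the opening lines of the PMO script in Figure 3 survive in this version, so the following hedged reconstruction indicates its overall shape. Apart from the cluster/corba nesting, the machine names, and the NameOfORB field, which appear in the paper, everything here is an assumption, with elided parameters left as ellipses.

```
cluster {
    corba marcus {              /* client daemon, explicit parameter block */
        NameOfORB = ...;
        ...
    }
    corba zeno {                /* server daemon, explicit parameter block */
        NameOfORB = ...;
        ...
    }
}
```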


  
Figure 4: Client-Server Cubit for OmniORB and TAO on Linux
\begin{figure}
\centerline{
\psfig {file=plots/figs/cubitLinuxNew.ps,width=3in}
}\end{figure}

Figure 4 shows the performance of the client-server pattern supporting the cubit test type for OmniORB and TAO on a Linux platform, while Figure 5 shows the results for OmniORB, TAO, and CORBAplus on a Solaris platform. (The CORBAplus ORB is not currently available for Linux, but should be soon.) The flexibility of the script-driven approach is demonstrated by the fact that the TAO-based tests were repeated for OmniORB simply by replacing orb_name = TAO with orb_name = OmniORB in the PPL script. The cubit test emphasizes basic communication performance because it involves small packets and a simple computation (cubing a number) on the server side. The cubit behavior thus focuses on the time each packet spends in the system layers and the middleware during a CORBA call invocation. The results shown are averages over 250 invocations of the basic operation for each of several CORBA data types, and are presented in terms of calls per second.
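As a concrete illustration of how such a calls-per-second figure is obtained, the sketch below times repeated invocations of a stand-in operation. This is a minimal Python illustration, not the paper's actual NetSpec measurement code; the local cube operation merely stands in for the remote CORBA cubit call.

```python
import time

def cubit_calls_per_second(invoke, iterations=250):
    """Average calls/sec over `iterations` invocations, as the cubit test reports.

    `invoke` is any callable standing in for a remote CORBA cubit call.
    """
    start = time.perf_counter()
    for _ in range(iterations):
        invoke()
    elapsed = time.perf_counter() - start
    return iterations / elapsed

# Local stand-in for the server-side operation: cube a number.
rate = cubit_calls_per_second(lambda: 7 ** 3)
```

In the real test the per-invocation time is dominated by ORB marshalling and system-layer costs rather than the cube computation, which is why the rate varies by CORBA data type.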

There are several points of interest in these results. First is the fact that even such a simple test reveals differences between ORB implementations and between operating system platforms. The most striking difference is that while TAO performance is essentially constant on both Linux and Solaris, OmniORB performance on Solaris is roughly double that on Linux for many, though not all, data types. Another observation is that OmniORB generally outperforms the other ORBs, but its performance for the ``llse'' (long long sequence) data type is substantially below that of TAO on Linux.


  
Figure 5: Client-Server Cubit for OmniORB, TAO and CORBAplus on Solaris
\begin{figure}
\centerline{
\psfig {file=plots/figs/cubitSolaris.ps,width=3in}
}\end{figure}


  
Figure 6: Client-Server Throughput for TAO on Linux
\begin{figure}
\centerline{
\psfig {file=plots/newplots/ThruLinuxTAO_CS.ps,width=3in}
}\end{figure}


  
Figure 7: Client-Server Throughput for OmniORB on Linux
\begin{figure}
\centerline{
\psfig {file=plots/newplots/ThruLinuxOmni_CS.ps,width=3in}
}\end{figure}

Determining why these observed behaviors occur will take further study, but this demonstrates an important point: our compact PPL script describes a test that can be run automatically in a matter of seconds, revealing significant differences in ORB behavior and providing a convenient and efficient foundation for further experimentation. The flexibility of the PPL approach is further illustrated by changing the test type from cubit to throughput in the client-server pattern, producing the Linux throughput results shown in Figure 6 for TAO and in Figure 7 for OmniORB. A data file, essentially of the CORBA ``char'' data type, ranging from 1 MB to 64 MB is sent using buffer sizes ranging from 512 bytes to 16 KB. This test shows that throughput for both ORBs is constant with respect to the total amount of data, but is significantly affected by the buffer size used for each data transfer session. TAO throughput increased with buffer size, indicating that the packet transfer rate, not the packet size, was the limiting factor. OmniORB throughput varied in a much less obvious way, and was significantly greater for 4 KB buffers. Determining why OmniORB performance varies so irregularly with buffer size would require gathering data from the operating system layer, as discussed in Section 4.3. The throughput tests for a single client-server pair ran under NetSpec control in an elapsed time of approximately 15 minutes.
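The observation that TAO's throughput grows with buffer size but not with total data can be illustrated with a simple cost model: each buffer-sized transfer pays a fixed per-call cost (ORB dispatch, marshalling, system calls) plus time on the wire. The sketch below is an illustrative model under assumed parameter values, not measured data from these experiments.

```python
def model_throughput(total_bytes, buffer_bytes,
                     per_call_overhead_s=1e-4, wire_bytes_per_s=10e6):
    """Modelled bytes/sec when each buffer-sized transfer pays a fixed
    per-call cost plus wire time. The overhead and bandwidth values are
    illustrative assumptions, not measurements."""
    calls = total_bytes / buffer_bytes
    elapsed_s = calls * (per_call_overhead_s + buffer_bytes / wire_bytes_per_s)
    return total_bytes / elapsed_s

small = model_throughput(1 << 20, 512)        # 1 MB sent in 512-byte buffers
large = model_throughput(1 << 20, 16 * 1024)  # 1 MB sent in 16 KB buffers

# While the per-call cost dominates, throughput in this model grows with
# buffer size and is independent of the total amount of data sent --
# qualitatively matching the TAO behavior described above.
```

OmniORB's irregular variation with buffer size does not fit such a simple model, which is consistent with the need for operating-system-level data noted above.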


Sridhar Nimmagadda
3/23/1999