
Implementation  

We have implemented an integrated, tool-based approach for measuring ORB endsystem performance. The single most important aspect of our system is that it measures performance within the target environment, rather than relying on published data that may be inaccurate, or that accurately describes performance only under a different environment. The main features of this approach are:

1. A script-based approach to specifying performance tests, which makes experiments more expressive.
2. The ability to study the performance of CORBA objects in the context of different operating system loads and network traffic.
3. The ability to study the influence of the different components of the CORBA endsystem, including the middleware, the operating system, and the network, on the performance of CORBA objects.
4. The ability to measure the performance of objects in heterogeneous distributed systems from a single point of control.
5. The flexibility and scalability to specify a wide range of distributed tests and behavior patterns, including scalability in time, in the number of objects, and in the number of hosts supporting the pattern.
6. The ability to measure latency, throughput, and missed deadlines, among a wide range of performance metrics.
7. An automated, highly scalable framework for performance measurement. This is a crucial feature because it makes much larger benchmarking suites practical than non-automated approaches allow.

The performance metrics that best predict application performance depend, in part, on the properties of the application. This is one reason why a pattern-based and automated framework is required. The pattern orientation lets the user describe scenarios with a rich and varied set of behaviors and requirements, closely matching the proposed application architecture. Automation enables testing on a large scale, so the user can examine a wide range of parameters under a wide range of conditions and thus avoid many potentially unjustified assumptions about which aspects of the application, ORB, and endsystem determine performance.
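To make the scale concrete: even a modest experiment space grows quickly once several parameters are varied together, which is why hand-driven testing breaks down. The sketch below is purely illustrative, using hypothetical parameter names rather than anything from PPL or NetSpec, and simply enumerates such a grid.

// Hypothetical illustration of the experiment space an automated framework
// can sweep; none of these names come from PPL or NetSpec.
#include <cstdio>
#include <vector>

struct Config {
    int hosts;         // number of hosts participating in the pattern
    int objects;       // CORBA objects per host
    int msg_bytes;     // request payload size
    int bg_load_pct;   // background load level
};

int main() {
    std::vector<int> hosts     = {2, 4, 8};
    std::vector<int> objects   = {1, 10, 100};
    std::vector<int> msg_bytes = {64, 1024, 8192};
    std::vector<int> bg_load   = {0, 25, 50, 75};

    std::vector<Config> runs;
    for (int h : hosts)
        for (int o : objects)
            for (int m : msg_bytes)
                for (int b : bg_load)
                    runs.push_back({h, o, m, b});   // one benchmark run each

    // 3 * 3 * 3 * 4 = 108 distinct experiment configurations.
    std::printf("%zu configurations to execute\n", runs.size());
}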

The metrics that will be crucial for important classes of applications include throughput, latency, scalability, reliability, and memory use. The system parameters that can affect application performance with respect to these metrics include multi-threading, marshalling and demarshalling overhead, demultiplexing and dispatching overhead, operating system scheduling, the integration of I/O and scheduling, and network latency. Our approach currently lets us examine the influence of many of these aspects of the system on performance, and further development will enable us to handle all of them.
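As a simple illustration of the first three metrics, the following sketch (ours for exposition only, not the PMO implementation) times repeated invocations of a placeholder operation standing in for a CORBA two-way call and reports mean latency, throughput, and the number of missed deadlines.

#include <chrono>
#include <cstddef>
#include <cstdio>
#include <functional>
#include <thread>

struct RunStats {
    double mean_latency_us;   // average round-trip latency in microseconds
    double throughput_per_s;  // completed calls per second
    std::size_t missed;       // calls whose latency exceeded the deadline
};

// Times repeated invocations of `invoke`, which stands in for the CORBA
// two-way call under test, and accumulates the three example metrics.
RunStats run_trial(const std::function<void()>& invoke,
                   std::size_t calls, double deadline_us) {
    using clk = std::chrono::steady_clock;
    double sum_us = 0.0;
    std::size_t missed = 0;

    auto start = clk::now();
    for (std::size_t i = 0; i < calls; ++i) {
        auto t0 = clk::now();
        invoke();                                  // the operation being measured
        auto t1 = clk::now();
        double us = std::chrono::duration<double, std::micro>(t1 - t0).count();
        sum_us += us;
        if (us > deadline_us) ++missed;            // per-call deadline accounting
    }
    double total_s = std::chrono::duration<double>(clk::now() - start).count();

    return { sum_us / calls, calls / total_s, missed };
}

int main() {
    // Stand-in workload: sleep briefly instead of making a remote call.
    auto dummy_call = [] {
        std::this_thread::sleep_for(std::chrono::microseconds(200));
    };
    RunStats s = run_trial(dummy_call, 1000, 500.0 /* deadline in microseconds */);
    std::printf("mean latency %.1f us, throughput %.0f calls/s, %zu missed\n",
                s.mean_latency_us, s.throughput_per_s, s.missed);
}

A real measurement object would record per-call timestamps on both client and server sides rather than a single running sum, but the metric definitions are the same.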

Figure 1 shows our integrated benchmarking framework supporting performance evaluation tests. The experiment description, expressed as a PPL script, is parsed by the PPL compiler, which emits a PMO-based NetSpec script implementing the specified experiment. The NetSpec parser processes this script and instructs the NetSpec controller daemon to create the specified sets of daemons on each host used by the distributed experiment. Note that Figure 1 illustrates a generic set of daemons, rather than those supporting a specific test. The PMO daemon interfaces the CORBA-based objects on its host to the NetSpec controller daemon. Because CORBA objects can be created dynamically, an additional PMO object is sometimes used, which communicates with the PMO daemon. The line between the PMO objects represents their CORBA-based interaction, which is the focus of the experiment. The DSKI measurement daemon, if present, gathers performance data from the operating system; it is a generic daemon and is not CORBA based. The traffic daemon is also not CORBA based; it creates the context of system load and background traffic within which the CORBA objects operate.
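To make the traffic daemon's role concrete, the sketch below shows one way background network load can be generated; it is an illustrative constant-rate UDP sender with arbitrary parameters, not NetSpec's traffic daemon.

// Illustrative only: a minimal constant-rate UDP sender sketching the kind of
// background network load a traffic daemon could impose. The host, port,
// rate, payload size, and duration below are arbitrary.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <chrono>
#include <thread>
#include <vector>

int main(int argc, char** argv) {
    const char* host     = (argc > 1) ? argv[1] : "127.0.0.1";
    const int port       = 9000;    // assumed sink port, purely illustrative
    const int rate_pps   = 1000;    // datagrams per second of background load
    const int payload    = 1024;    // bytes per datagram
    const int duration_s = 60;      // how long to sustain the load

    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(port);
    inet_pton(AF_INET, host, &dst.sin_addr);

    std::vector<char> buf(payload, 0);
    const auto interval = std::chrono::microseconds(1000000 / rate_pps);
    auto next = std::chrono::steady_clock::now();

    for (int i = 0; i < rate_pps * duration_s; ++i) {
        sendto(sock, buf.data(), buf.size(), 0,
               reinterpret_cast<const sockaddr*>(&dst), sizeof(dst));
        next += interval;                   // pace sends to the target rate
        std::this_thread::sleep_until(next);
    }
    close(sock);
}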


  
[Figure 1: The Integrated Benchmarking Framework]

Our approach integrates several existing tools and adds significant new capabilities specifically to support CORBA. The tools integrated under this framework are NetSpec [12,16], the Data Stream Kernel Interface (DSKI) [1], the Performance Measurement Object (PMO) [10,9], and the Performance Pattern Language (PPL). The rest of this section discusses each component in greater detail.



 