
Introduction

The Common Object Request Broker Architecture (CORBA) [15] is emerging as an important open standard for distributed-object computing, especially in heterogeneous computing environments combining multiple platforms, networks, applications, and legacy systems [27]. Although the CORBA specifications define the features of a compliant ORB, they do not specify how those features are to be implemented. As a result, the performance of a given application supported by ORBs from different vendors can differ greatly, as can the performance of different applications supported by the same ORB.

A number of efforts have been made to measure the performance of ORBs, often by comparing one ORB with another [23]. These efforts generally measure only specific aspects of ORB performance in isolation. While the performance of specific ORB functions is important, superior results in a few simple tests do not ensure that the aggregate performance of ORB A is better than that of ORB B for a particular application object architecture. The performance of an ORB-based application implemented as a set of objects is greatly influenced by the application context and by the architecture and performance of the ORB endsystem, which consists of the ORB middleware, the operating system, and the underlying network. An application's performance is determined by how well these components cooperate to meet its particular needs.
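
As a concrete illustration of the kind of isolated measurement such efforts report, the sketch below times repeated two-way invocations on a single object, in the style of a typical CORBA latency test. The Pinger interface, its ping() operation, and the stub header name are assumptions introduced here for illustration and do not come from the cited studies.

    // Minimal sketch of an isolated ORB latency test. The Pinger
    // interface and its ping() operation are hypothetical.
    #include <iostream>
    #include <sys/time.h>
    #include "PingerC.h"  // client stub for the hypothetical Pinger IDL
                          // (header naming is ORB-dependent)

    int main(int argc, char* argv[])
    {
      try {
        CORBA::ORB_var orb = CORBA::ORB_init(argc, argv);

        // Obtain the target object from a stringified IOR on the command line.
        CORBA::Object_var obj = orb->string_to_object(argv[1]);
        Pinger_var target = Pinger::_narrow(obj.in());

        const int iterations = 10000;
        timeval start, stop;
        gettimeofday(&start, 0);
        for (int i = 0; i < iterations; ++i)
          target->ping();                 // one two-way request per iteration
        gettimeofday(&stop, 0);

        double usec = (stop.tv_sec - start.tv_sec) * 1e6
                    + (stop.tv_usec - start.tv_usec);
        std::cout << "mean round-trip latency: "
                  << usec / iterations << " usec" << std::endl;
      }
      catch (const CORBA::Exception&) {
        std::cerr << "CORBA exception during test" << std::endl;
        return 1;
      }
      return 0;
    }

A loop of this kind yields one number for one code path under one fixed condition, which is precisely why such results say little about how a multi-object application architecture will perform on the same endsystem.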

Current benchmark suites and methods tend to concentrate on a specific part of the endsystem. Operating system benchmarks concentrate on individual operating system operations, but may say comparatively little about how well the operating system will support ORB middleware. ORB benchmarks concentrate on individual operations of the middleware, but are less effective at pinpointing problems at the application, operating system, and network layers. Developers considering non-trivial ORB-based applications need the ability to evaluate, in some detail, how well a given ORB and endsystem combination can support candidate application object architectures, and they need this information before implementing a significant portion of the application. Such developers should begin with a set of standard performance benchmark suites exercising various aspects of the ORB endsystem under realistic application scenarios, but they also require the ability to create test scenarios that specifically model their candidate application architectures and behavior in the endsystem context.

Current benchmarking methods and test suites do not adequately solve the problem developers actually face because they concentrate on only one part of the application and endsystem in isolation, and thus do not let the developer consider how implementation decisions at different levels interact. An effective and efficient tool set supporting an integrated performance evaluation methodology should support ORB-, endsystem-, and application-oriented tests, should be automated, and should make it easy for the user to extend and modify the set of tests performed. Only such an integrated tool set and benchmark suite, supporting realistic application scenarios and capable of collecting information from all layers of the endsystem, can enable developers to evaluate candidate application object architectures effectively before implementation.

This paper describes how a combination of tools developed at the University of Kansas (KU) can address this challenge. This integrated tool set represents a significant advance in support for performance evaluation of ORB-based applications: it increases the range and complexity of tests a benchmark suite can contain, it broadens the types of performance information that can be gathered during an individual test, and its support for automated test execution significantly increases the number of tests a practical benchmark suite can contain. The NetSpec tool provides a control framework for script-driven automation of distributed performance tests. The Data Stream Kernel Interface (DSKI) provides the ability to gather time-stamped events and a variety of other performance data from the operating system as part of a NetSpec experiment. The Performance Measurement Object (PMO) provides the ability to conduct NetSpec-based experiments involving CORBA objects, and the Performance Pattern Language (PPL) provides a higher-level language for describing NetSpec-based experiments involving sets of CORBA objects more succinctly.

NetSpec has been used by a number of research projects at KU and elsewhere. It provides the automation and script-based framework supporting experiments that include a wide range of conditions, component behaviors, and data collection [12,16]. NetSpec is designed to be extended and modified by the user through the implementation of daemons. Test daemons support basic network performance tests and supply background traffic in other NetSpec-based experiments. Measurement daemons gather information during an experiment but contribute no traffic or behavior beyond that required to gather data.

The DSKI is a pseudo-device driver which enables a NetSpec experiment, through the DSKI measurement daemon, to specify and collect the set of operating-system-level events of interest which occur during the experiment [1]. The PMO is a NetSpec test daemon designed to support CORBA-based performance experiments. A NetSpec PMO script can specify the creation of CORBA objects, their execution-time behavior, and the relations that hold among the objects. Using the existing traffic-related NetSpec test daemons, the DSKI, and the PMO, a user can write a script that specifies a set of interacting objects and a set of background network traffic providing a context within which the objects operate, and that gathers operating-system-level information about the network and operating system events affecting performance.
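
To make the DSKI's role more concrete, the following sketch shows how a measurement daemon might interact with such a pseudo-device: open it, declare the operating system events of interest, and read back time-stamped event records while the experiment runs. The device path, request code, and record layout below are assumptions introduced for illustration; the actual DSKI interface is documented in [1].

    // Illustrative only: the device node, request code, and record layout
    // are hypothetical stand-ins for the actual DSKI interface.
    #include <cstdio>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>

    const unsigned long DSKI_ENABLE_EVENTS = 0;   // placeholder request code

    struct event_record {
      unsigned long long timestamp;   // time stamp assigned by the kernel
      unsigned int       event_id;    // which operating system event occurred
      unsigned int       data;        // event-specific datum
    };

    int main()
    {
      int fd = open("/dev/dski", O_RDONLY);       // hypothetical device node
      if (fd < 0) { std::perror("open"); return 1; }

      // Declare the operating system events this experiment should log;
      // the ids stand in for events such as context switches or packet
      // arrivals.
      unsigned int events[] = { 1, 2 };
      ioctl(fd, DSKI_ENABLE_EVENTS, events);

      // During the experiment the measurement daemon drains the event
      // stream and returns it to the NetSpec controller for analysis.
      event_record rec;
      while (read(fd, &rec, sizeof(rec)) == (ssize_t) sizeof(rec))
        std::printf("%llu: event %u, data %u\n",
                    rec.timestamp, rec.event_id, rec.data);

      close(fd);
      return 0;
    }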

A practical drawback of the NetSpec PMO support is that the language is defined at a low level of detail, so PMO scripts for scenarios with many objects are long and repetitive. The PPL addresses this by defining a higher-level language for describing application-level object interaction scenarios more compactly; these scenarios abstract the performance aspects of commonly used implementation strategies. We call these scenarios performance patterns to draw a direct analogy to design patterns, which the definitive book Design Patterns defines on page 3 as "descriptions of communicating objects and classes that are customized to solve a general design problem in a particular context" [6].

A performance pattern is a set of objects exhibiting a set of behaviors, relationships, and interactions typical of an application architecture or class of application architectures. This pattern can be customized through parameter specification or user extension to match the intended application behaviors and architecture as closely as required. The PPL compiler emits NetSpec PMO scripts implementing the specified performance pattern.

It is important to realize that the PPL approach is quite general and is not ORB-specific or even CORBA-specific. The PPL could easily be used to create object-based performance scenarios given support from a NetSpec daemon of the appropriate type. The PMO is CORBA-specific, but it would be straightforward to implement an analogous NetSpec daemon for DCE- or DCOM-based performance evaluation. The PMO is not tied to a particular ORB and has been ported with minimal effort to four ORBs: The ACE ORB (TAO) [20], OmniORB [23], ExperSoft's CORBAplus [5], and ILU [11]. We currently focus on TAO, OmniORB, and CORBAplus for project-specific reasons. The range of experiments which can be supported is a function, in part, of the set of object behaviors supported by the PMO. PMO behaviors are implemented by routines linked into the PMO, which has been designed to make adding new behaviors simple, thus supporting user extension.
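
To suggest what adding a behavior might involve, the sketch below defines a CPU-bound behavior routine and registers it under a name that a script could reference. The pmo_register_behavior hook, the BehaviorParams structure, and the cpu_load name are assumptions introduced for illustration only; the actual extension interface is part of the PMO implementation described in Section 3.

    // Hypothetical sketch of a user-supplied PMO behavior routine; the
    // registration hook and parameter structure are illustrative only.
    struct BehaviorParams {
      long iterations;    // amount of synthetic work, taken from the script
    };

    // A behavior executed when a PMO object services a request: here a
    // CPU-bound loop standing in for application processing time.
    void cpu_load_behavior(const BehaviorParams& params)
    {
      volatile double x = 1.0;
      for (long i = 0; i < params.iterations; ++i)
        x = x * 1.0000001 + 0.000001;   // deterministic busy work
    }

    // Hypothetical hook, linked into the PMO at build time, that makes
    // the behavior available to scripts under the given name.
    extern void pmo_register_behavior(const char* name,
                                      void (*routine)(const BehaviorParams&));

    void register_user_behaviors()
    {
      pmo_register_behavior("cpu_load", cpu_load_behavior);
    }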

The rest of the paper first discusses related work in Section 2, and then describes the implementation of the PMO and PPL in Section 3. Section 4 presents examples of PMO and PPL use, while Section 5 presents our conclusions and discusses future work.

