
Performance Pattern Language  

The PPL was designed as a higher-level language for describing application-level object interaction scenarios [14,6] in terms of performance patterns. Within each pattern, the user describes objects, object behaviors, test types, and relations among the objects that influence the performance of the pattern as a whole. For convenience, the PPL also lets the user define parameter blocks describing aspects of object behavior; object definitions that use the same set of parameters can then reference a single block. After the patterns associated with an experiment have been specified, a schedule for their execution is given. Currently the only supported schedule is simple sequential execution of one pattern at a time; however, we are extending this to permit flexible pattern composition and dynamic, time-dependent behavior in order to better support application-scenario-based testing. The correspondence between PPL constructs and the PMO-level scripts produced by the PPL compiler is illustrated by the examples in Section 4.
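The concrete PPL grammar appears with the examples in Section 4; purely as an illustration of the constructs named above, a pattern specification might be organized along the following lines. All keywords and names in this sketch are hypothetical, not the actual PPL syntax:

    # Hypothetical PPL fragment; the real syntax is shown in Section 4.
    params LatencyParams {              # reusable parameter block
        request_size = 1024;            # bytes per invocation
        iterations   = 10000;           # calls per measurement
    }

    pattern ClientServerLatency {
        object server : Echo;                        # server-side object
        object client : Invoker uses LatencyParams;  # shares the block above
        relation client -> server;                   # client invokes server
        test     oneway_latency;                     # test type for this pattern
    }

    schedule {
        run ClientServerLatency;        # patterns currently run sequentially
    }

The sketch shows the separation the section describes: per-pattern object and relation definitions, parameter blocks factored out for reuse, and a schedule given after all patterns are specified.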

The combination of the PMO and the PPL provides a powerful and efficient way for developers to describe and conduct a wide range of application-scenario-based performance evaluation experiments for CORBA systems. The method is applicable to any ORB and has been ported with minimal effort to four ORBs: The ACE ORB (TAO) [20], OmniORB2 [23], ExperSoft's CORBAplus [5], and ILU [11]. The range of experiments that can be supported is determined, in part, by the set of object behaviors the PMO provides. PMO behaviors are implemented by routines linked into the PMO, which has been designed so that adding new behaviors is simple.
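The paper does not show the PMO's internal interface, but a minimal C++ sketch suggests how linked-in behavior routines could be kept easy to extend, assuming a registry-style design in which the PMO dispatches script-named behaviors through a lookup table. All class and function names here are assumptions for illustration, not the actual PMO API:

    // A minimal sketch, assuming a registry-style behavior interface;
    // the real PMO internals are not described in this paper.
    #include <map>
    #include <string>

    // One linked-in behavior routine, e.g. "consume CPU" or "echo data".
    class Behavior {
    public:
        virtual ~Behavior() {}
        virtual void execute(long iterations) = 0;
    };

    // Example behavior: burn CPU cycles to emulate server-side work.
    class BusyLoopBehavior : public Behavior {
    public:
        void execute(long iterations) {
            for (volatile long i = 0; i < iterations; ++i) {}
        }
    };

    // The PMO could resolve behavior names from its scripts via a table,
    // so adding a behavior is one subclass plus one registration call.
    class BehaviorTable {
        std::map<std::string, Behavior*> table_;
    public:
        void add(const std::string& name, Behavior* b) { table_[name] = b; }
        Behavior* find(const std::string& name) {
            std::map<std::string, Behavior*>::iterator it = table_.find(name);
            return it == table_.end() ? 0 : it->second;
        }
    };

Under this kind of design, porting to a new ORB leaves the behavior routines untouched, which is consistent with the minimal porting effort reported above.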

The scalability of our method is important in two ways. First, script-driven automation makes it fairly easy to describe tests at a scale representative of the final application. Second, it makes it possible to conduct an acceptably large and comprehensive set of tests in an acceptably short period of time. For example, the sets of tests producing the graphs discussed in Section 4 are fully automated and execute in periods ranging from a few seconds to almost an hour. Scalability matters because the number of ORB properties that can significantly affect the performance of a particular application is large, requiring a large test suite for adequate evaluation.

