

Conclusions and Future Work

In our previous work, we addressed the scalability limitations of the default Linux scheduler (DSS). We proposed a Multi Queue Scheduler (MQS), which uses per-CPU runqueues instead of a single global runqueue. However, to maintain strict functional equivalence with DSS, MQS continues to examine all runqueues, albeit intelligently, to make global scheduling decisions. In this paper, we take that work one step further and present a Pooled Multi Queue Scheduler (PMQS) based on MQS. The processors of an SMP are divided into pools for the purpose of scheduling decisions, reducing the number of remote CPU runqueues that must be examined. As this can lead to load imbalances, we complement PMQS with a number of load balancers.
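To make the pooling idea concrete, the following sketch (plain userspace C, not the actual kernel patch) shows how a scheduling decision on one CPU could be restricted to the runqueues of its own pool; the names pool_of, rq_best_prio_val and pick_runqueue are ours and purely illustrative.

/*
 * Illustrative sketch (not the PMQS patch itself): a scheduling
 * decision on CPU `cpu` scans only the runqueues of CPUs in the
 * same pool, rather than all NR_CPUS runqueues as MQS does.
 * The names pool_of[], rq_best_prio_val[] and pick_runqueue()
 * are hypothetical.
 */
#include <stdio.h>

#define NR_CPUS   16
#define POOL_SIZE  4                      /* e.g. one pool per NUMA node */

static int pool_of[NR_CPUS];              /* pool_of[cpu] = cpu / POOL_SIZE */
static int rq_best_prio_val[NR_CPUS];     /* best "goodness" value per runqueue */

/* Pick, within this CPU's pool only, the runqueue holding the best candidate. */
static int pick_runqueue(int cpu)
{
    int best_cpu = cpu;
    int best = rq_best_prio_val[cpu];

    for (int c = 0; c < NR_CPUS; c++) {
        if (pool_of[c] != pool_of[cpu])   /* remote pools are not examined */
            continue;
        if (rq_best_prio_val[c] > best) {
            best = rq_best_prio_val[c];
            best_cpu = c;
        }
    }
    return best_cpu;
}

int main(void)
{
    for (int c = 0; c < NR_CPUS; c++) {
        pool_of[c] = c / POOL_SIZE;
        rq_best_prio_val[c] = c % 7;      /* arbitrary demo values */
    }
    printf("CPU 5 schedules from runqueue %d\n", pick_runqueue(5));
    return 0;
}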

We evaluated the performance of PMQS and the different load balancers against MQS and DSS on a 4x4-way NUMA system and on an 8-way SMP using two benchmarks. The throughput-oriented Mkbench benchmark benefited overall from PMQS, while the Chat benchmark did not. We believe that Mkbench is more representative of server workloads, as it consists of largely unrelated tasks running for short time periods. Chat is more of a microbenchmark, with very strong interactions between a large number of tasks leading to a very high rate of scheduling decisions. The two benchmarks also led to different conclusions about the relative performance of the load balancers.

The pooling scheduler and load balancers chosen for study are preliminary implementations of the general concept of subdividing processors into pools and regulating load across them. The choice of these implementations was dictated by simplicity and a desire to make incremental changes to MQS.

The performance evaluation offers some lessons for future work. First, pooling does show benefits over and above those seen with multi queue scheduling alone. Second, aggressive load balancing is generally counterproductive: it tends to overcorrect for load imbalances and leads to excessive task migrations.

As part of our future work, we will look at load balancing algorithms that try to balance loads asymptotically. We will also take a fresh look at the approach of running a load balancing module periodically; it might be better to integrate load balancing functionality directly into the scheduler code.
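As a rough illustration of such an asymptotic approach, the userspace C sketch below migrates only half of a pool's excess load per balancing period, so pool loads converge toward the average instead of overshooting; pool_load and balance_period_tick are hypothetical names and are not part of our implementation.

/*
 * Illustrative sketch of "asymptotic" balancing: each period, a pool
 * above the average load gives up only half of its excess, so the
 * system converges gradually rather than overcorrecting.  The
 * structure and names are hypothetical, not taken from PMQS.
 */
#include <stdio.h>

#define NR_POOLS 4

static int pool_load[NR_POOLS] = { 12, 2, 3, 3 };   /* runnable tasks per pool */

static void balance_period_tick(void)
{
    int total = 0;
    for (int p = 0; p < NR_POOLS; p++)
        total += pool_load[p];
    int avg = total / NR_POOLS;

    /* Migrate only half of each pool's excess toward the lightest pool. */
    for (int p = 0; p < NR_POOLS; p++) {
        int excess = pool_load[p] - avg;
        if (excess <= 1)
            continue;
        int to_move = excess / 2;

        int lightest = 0;
        for (int q = 1; q < NR_POOLS; q++)
            if (pool_load[q] < pool_load[lightest])
                lightest = q;

        pool_load[p]        -= to_move;
        pool_load[lightest] += to_move;
    }
}

int main(void)
{
    for (int tick = 0; tick < 4; tick++) {
        balance_period_tick();
        printf("after tick %d:", tick);
        for (int p = 0; p < NR_POOLS; p++)
            printf(" %d", pool_load[p]);
        printf("\n");
    }
    return 0;
}

Moving only a fraction of the excess each period damps oscillation and limits task migrations, at the cost of slower convergence toward a balanced state.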

Overall, we believe that PMQS is a very flexible extension to MQS, and both have shown small to significant performance improvements over DSS. The pooling approach has shown promise and merits further investigation.

