

Scheduling for Multimedia

Multimedia applications often perform poorly under Linux when CPU-bound processes are running. For this experiment, we ran a multimedia application alongside a varying number of CPU-bound processes, each a distributed genetic algorithm client. mpeg_play was used to view an MPEG file, and the frame rate was recorded for different numbers of genetic algorithm clients. Under Linux, the observed frame rates were 15.5, 9.8, 7.0 and 5.6 frames per second for 1, 3, 5 and 7 clients respectively.

Even with one client, at 15.5 frames per second, the movie was jerky. This is a consequence of how Linux schedules processes. Initially both mpeg_play and the client are allocated 200 ms of CPU time (i.e., count = 20). Once mpeg_play is scheduled, it requires service from the X server and goes to sleep. The client is scheduled next, and it uses up its full 200 ms, since it never sleeps. mpeg_play is then scheduled again and again sleeps waiting for the X server, so the X server and mpeg_play run alternately until their remaining CPU quanta are exhausted (i.e., count becomes 0). After this, the whole cycle repeats. The movie is jerky because the client runs once per major cycle, stalling playback for 200 ms each time.
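The major cycle described above can be sketched as a toy simulation (assumed numbers: 10 ms scheduler ticks and an initial count of 20 for every process; this only illustrates the described behaviour, it is not the kernel's code):

```python
QUANTUM = 20  # ticks of 10 ms each, i.e. count = 20 -> 200 ms

def major_cycle():
    """One major scheduling cycle: mpeg_play runs briefly and sleeps
    on the X server, the CPU-bound client then burns its whole
    quantum, and finally the X server and mpeg_play ping-pong in
    one-tick bursts until their remaining quanta are exhausted."""
    timeline = []
    count = {"mpeg_play": QUANTUM, "X": QUANTUM, "client": QUANTUM}
    # mpeg_play decodes for one tick, then sleeps waiting on X
    timeline.append("mpeg_play"); count["mpeg_play"] -= 1
    # the client never sleeps, so it consumes its entire quantum at once
    timeline += ["client"] * count["client"]; count["client"] = 0
    # X and mpeg_play now alternate one-tick bursts until count hits 0
    while count["X"] or count["mpeg_play"]:
        if count["X"]:
            timeline.append("X"); count["X"] -= 1
        if count["mpeg_play"]:
            timeline.append("mpeg_play"); count["mpeg_play"] -= 1
    return timeline

cycle = major_cycle()
```

The client's 20 ticks form one contiguous 200 ms block per cycle, during which no frame is decoded or displayed; that stall is the visible jerk.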

Next, we ran mpeg_play after creating two resource containers with fixed CPU shares and binding the mpeg_play and X server processes to them, one each. We tried CPU shares from 10% to 60% for mpeg_play and allocated 30% to 50% of the CPU to the X server, but the maximum frame rate only reached 7.0 frames per second.

The problem in this case was that even though the scheduler allocated enough CPU time to both mpeg_play and the X server, neither could use its allocation, because each needed a service from the other. After mpeg_play generated a frame, it required the X server to display it; after displaying the frame, the X server needed mpeg_play to be scheduled to generate the next frame.

We then ran mpeg_play after creating a single resource container with a fixed CPU share and binding both the mpeg_play and X server processes to that container. We tested it by allocating different amounts of CPU to this resource container and running the distributed genetic algorithm with different numbers of clients.
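The difference between the two bindings can be illustrated with a toy slot-reservation model (a sketch under assumed numbers, not the actual scheduler): each container owns fixed slots in a 10-slot period, a slot whose owning process is blocked is lost to playback, and each frame needs one mpeg_play slot (decode) followed by one X server slot (display).

```python
def frames_per_period(schedule):
    """Count frames produced over one period of slot reservations.
    'mm' denotes the single container holding both mpeg_play and X,
    so an 'mm' slot can serve whichever pipeline stage is due."""
    frames, stage = 0, "decode"
    for owner in schedule:
        if owner in ("mpeg_play", "mm") and stage == "decode":
            stage = "display"                        # mpeg_play decodes a frame
        elif owner in ("X", "mm") and stage == "display":
            stage, frames = "decode", frames + 1     # the X server displays it
        # any other slot is blocked time or client time: no pipeline progress
    return frames

# Separate containers: 40% to mpeg_play, 40% to X, 20% to the clients.
separate = ("mpeg_play",) * 4 + ("X",) * 4 + ("client",) * 2
# Single container: the same 80% given jointly to mpeg_play and X.
combined = ("mm",) * 8 + ("client",) * 2
```

With separate reservations most slots arrive while their owner is blocked on the other process, so only one frame comes out per period; the combined container turns every one of its slots into pipeline progress and yields four.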

With this binding of processes, we obtained significantly better performance for CPU shares above 60%, and playback was smooth (no jerks) even at the lower frame rates. The resulting frame rates are shown in Table 1. The results show that the frame rate is independent of the number of clients running in the system and depends only on the CPU allocated to the resource container.
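The claim that the frame rate tracks only the container's CPU share can be sketched with a deterministic proportional-share toy scheduler (an illustration with made-up numbers, not the authors' implementation): each slot goes to the container lagging furthest behind its share, and the multimedia container alternately decodes and displays.

```python
def frames_played(slots, mm_share):
    """mm_share is the fraction of CPU given to the container holding
    both mpeg_play and the X server; the clients share the remainder,
    and their number does not matter, since they only split that
    leftover share among themselves."""
    shares = {"mm": mm_share, "clients": 1.0 - mm_share}
    used = {"mm": 0, "clients": 0}
    frames, stage = 0, "decode"
    for t in range(1, slots + 1):
        # give the slot to the container furthest behind its share
        pick = max(shares, key=lambda c: shares[c] * t - used[c])
        used[pick] += 1
        if pick == "mm":
            if stage == "decode":
                stage = "display"                    # mpeg_play produced a frame
            else:
                stage, frames = "decode", frames + 1 # the X server displayed it
    return frames
```

Frames scale with the container's share and nothing else: over the same interval an 80% share yields exactly twice the frames of a 40% share.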


Mansoor Alicherry 2001-05-01