

4.2 Shared Memory Issues

Having debugged the new implementation and ensured that the system is stable with the new page-allocation method, we evaluate PAVM's effectiveness at reducing the energy footprints of processes. We expect the active node set, $\alpha$, of each task to localize to the task's preferred node set, $\rho$. However, this is far from what we observe.
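For concreteness, the following C sketch (our own illustration, not part of PAVM) shows how a process's active node set can be derived from per-node page counts. The page-count array is a hypothetical stand-in for the kernel's per-process bookkeeping; the sample data are syslog's counts from Table 2.

    #include <stdio.h>

    #define NNODES 16   /* our system has 16 nodes, 0..15 */

    /* alpha_i = { n : process i has at least one page on node n }.
     * Prints each active node as n(pages) and returns |alpha_i|. */
    static int active_node_set(const int pages_on_node[NNODES])
    {
        int size = 0;
        for (int n = 0; n < NNODES; n++)
            if (pages_on_node[n] > 0) {
                printf("%d(%d) ", n, pages_on_node[n]);
                size++;
            }
        printf("\n");
        return size;
    }

    int main(void)
    {
        /* Per-node page counts for syslog from Table 2; rho = {14}. */
        int syslog[NNODES] = {0};
        syslog[0] = 3;  syslog[8] = 5;  syslog[9] = 51; syslog[10] = 1;
        syslog[11] = 1; syslog[13] = 3; syslog[14] = 76;

        printf("|alpha| = %d\n", active_node_set(syslog));  /* 7, vs |rho| = 1 */
        return 0;
    }

For syslog this yields $\vert\alpha\vert = 7$ against $\vert\rho\vert = 1$, the discrepancy discussed below.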

Table 2 shows a partial snapshot of the processes in a running system and, for each process $i$, indicates the nodes in sets $\rho_i$ and $\alpha_i$, as well as the number of pages allocated on each node. It is clear from the snapshot that each process $i$ has a large set of active nodes, with $\vert\alpha_i\vert$ much larger than the corresponding $\vert\rho_i\vert$. This gives each process a significantly larger energy footprint than we originally anticipated. Nevertheless, since most pages are allocated on the preferred nodes, and no process uses all nodes in the system, we still consider this a working system that provides opportunities to put nodes into low-power modes and conserve energy. However, it is not as effective as we would like, because each process has a set of pages scattered across a large number of nodes.


Table 2: A snapshot of processes' node usage patterns under the initial version of PAVM. The number in parentheses beside each active node indicates the number of pages the corresponding process is currently using on that node. Recall that our system has 16 nodes, denoted 0, 1, ..., 15, each containing 256 Mbits (or 8192 4-KB pages).

Process   $\rho$   $\alpha$
syslog    14       0(3) 8(5) 9(51) 10(1) 11(1) 13(3) 14(76)
login     11       0(12) 8(7) 9(112) 11(102) 12(5) 14(20) 15(1)
startx    13       0(21) 7(12) 8(3) 9(7) 10(12) 11(25) 13(131) 14(43)
X         12       0(125) 7(23) 8(47) 9(76) 10(223) 11(19) 12(1928) 13(82) 14(77) 15(182)
sawfish   10       0(180) 7(5) 8(12) 9(1) 10(278) 13(25) 14(5) 15(233)
vim       10,15    0(12) 9(218) 10(5322) 14(22) 15(4322)
...       ...      ...


To understand this ``scattering'' effect, we need to investigate how memory is used in the system. In most systems, the majority of system memory is occupied by user processes, and most of those pages hold memory-mapped files, including the binary images of processes, dynamically-loaded libraries (DLLs), and memory-mapped data files. To reduce the size of executable binaries on disk and of the processes' core images in memory, DLLs are used extensively in Linux and most other modern operating systems. The scattering effect we observe is a result of this extensive use of DLLs combined with the behavior of the kernel-controlled page cache.

The page cache buffers blocks previously read from the disk, so subsequent accesses can be served without going to the disk, greatly reducing file-access latencies. When a process requests a block that is already in the page cache, the kernel simply maps the cached page into the requesting process's address space without allocating a new page. Since the block may have been previously requested by any arbitrary process, it can reside on any arbitrary node, increasing the requesting process's memory footprint. Unfortunately, this behavior is not limited to shared data files; it also applies to DLLs, since these are essentially treated as memory-mapped, read-only files. The pages used for DLLs are loaded lazily, through demand paging. So, when two processes with disjoint preferred node sets access the same library, the library's pages scatter across the union of the two preferred node sets, depending on the processes' access patterns and on which process first incurred the page fault that loaded a particular portion of the library into the page cache.
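To make this first-touch behavior concrete, the following minimal user-space simulation sketches it in C. The toy page cache, the node numbers, and the library size are our own illustrative assumptions; this is not kernel code or part of PAVM.

    #include <stdio.h>

    #define LIB_PAGES 8          /* pages in one shared library (illustrative) */
    #define NOT_CACHED (-1)

    /* Simulated page cache: the node holding each library page, if cached. */
    static int page_node[LIB_PAGES];

    /* Demand paging with first-touch placement: a cached page is reused
     * wherever it already lives; a miss is filled on the faulting
     * process's preferred node. */
    static int fault_page(int index, int preferred_node)
    {
        if (page_node[index] == NOT_CACHED)
            page_node[index] = preferred_node;   /* first toucher places it */
        return page_node[index];                 /* later sharers just map it */
    }

    int main(void)
    {
        for (int i = 0; i < LIB_PAGES; i++)
            page_node[i] = NOT_CACHED;

        /* Process A (rho = {10}) happens to touch even pages first,
         * process B (rho = {15}) touches odd pages first. */
        for (int i = 0; i < LIB_PAGES; i += 2) fault_page(i, 10);
        for (int i = 1; i < LIB_PAGES; i += 2) fault_page(i, 15);

        /* Both processes now map library pages on the union {10, 15}. */
        for (int i = 0; i < LIB_PAGES; i++)
            printf("page %d -> node %d\n", i, page_node[i]);
        return 0;
    }

Both processes end up with active pages on the union of their preferred nodes, which is exactly the kind of footprint growth visible in Table 2.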

In the following sections, we describe the incremental changes we make to reduce the memory/energy footprint of each process using DLL aggregation and page-migration techniques, and then discuss how to reduce the overhead of these new techniques.

