The VMD layer is ideally placed to monitor both the resource demands placed by the VMs and the resource supplies offered by the underlying physical network, with minimal active participation from either.
Each VMD sees every packet entering or leaving each of the VMs that it hosts. Given a matrix representation of the application topology, call it A, in which entry A(i,j) records the traffic between VM i and VM j, a VMD hosting VM i knows the ith row and ith column of A. Collectively, the VMDs know all of A, so a reduction could be used to give each one a full copy of A. Hence, without modifying the applications or any operating systems, the VMDs can recover the application topology.
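One way such a reduction could work is sketched below in Python. The sketch is illustrative, not the system's actual protocol: each VMD contributes a partial matrix containing only the rows and columns of the VMs it hosts, and an elementwise-max merge over all partial views reconstructs the full matrix. The function and variable names are hypothetical.

```python
def partial_view(full, hosted, n):
    """The partial traffic matrix a VMD can observe: only entries in
    rows or columns of the VMs it hosts; everything else is zero."""
    return [[full[i][j] if (i in hosted or j in hosted) else 0
             for j in range(n)] for i in range(n)]

def merge(a, b):
    """Elementwise-max merge of two partial traffic matrices."""
    n = len(a)
    return [[max(a[i][j], b[i][j]) for j in range(n)] for i in range(n)]

def all_reduce(partials):
    """Naive fold of all partial views into one full matrix. A real
    deployment would use a tree- or ring-structured exchange so every
    VMD ends up with the result."""
    full = partials[0]
    for p in partials[1:]:
        full = merge(full, p)
    return full
```

Because every VM is hosted by some VMD, every matrix entry appears in at least one partial view, so the merged result is the complete topology.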
Each VMD is also well placed to infer the resource demands of the VMs that it hosts: the bandwidth and latency demands of each communication edge, the computational demand of each VM, and the total size of each VM image. Again, a reduction would give each VMD a global picture of the resource demands. Beyond inference, this information could also be provided directly by a developer or administrator, again without any modifications to the applications or operating systems.
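As one hypothetical way a VMD might infer per-edge bandwidth demand, the sketch below timestamps each forwarded packet and maintains an exponentially weighted moving average (EWMA) of the byte rate on each (source, destination) edge. The class name, the smoothing constant, and the interface are all assumptions for illustration.

```python
class EdgeDemandEstimator:
    """Illustrative per-edge bandwidth-demand estimator for a VMD."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha    # EWMA smoothing factor (assumed value)
        self.rate = {}        # (src, dst) -> bytes/sec estimate
        self.last_seen = {}   # (src, dst) -> timestamp of last packet

    def observe(self, src, dst, nbytes, now):
        """Record one forwarded packet of nbytes at time now."""
        edge = (src, dst)
        if edge in self.last_seen:
            dt = now - self.last_seen[edge]
            if dt > 0:
                sample = nbytes / dt              # instantaneous rate
                old = self.rate.get(edge, sample)
                self.rate[edge] = (1 - self.alpha) * old + self.alpha * sample
        self.last_seen[edge] = now

    def demand(self, src, dst):
        """Current bandwidth-demand estimate for an edge, in bytes/sec."""
        return self.rate.get((src, dst), 0.0)
```

A similar smoothed estimator could track per-VM computational demand from observed CPU consumption.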
VMDs transfer packets on behalf of the VMs they host. An outgoing packet from a VM is routed through its hosting VMD, then through zero or more transit VMDs, then through the host VMD of the destination VM, and finally to the destination VM. When such a message is transferred from one VMD to another, the transfer time is a free measurement of the corresponding path in the underlying network. From collections of such measurements, the two VMDs can derive the latency and available bandwidth of that path using known techniques. A VMD can also periodically measure the available compute rate of its host using known techniques [6,39].
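The derivation from timing measurements can be illustrated with a simple model that is an assumption here, not necessarily the cited technique: transfer time = latency + size / bandwidth, which is linear in message size. An ordinary least-squares line fit over (size, time) samples then yields latency as the intercept and bandwidth as the reciprocal of the slope.

```python
def fit_path(samples):
    """Fit time = latency + size / bandwidth to (size_bytes, seconds)
    samples of message transfers over one path; returns (latency,
    bandwidth). Assumes the simple linear cost model above."""
    n = len(samples)
    mean_x = sum(s for s, _ in samples) / n
    mean_y = sum(t for _, t in samples) / n
    sxx = sum((s - mean_x) ** 2 for s, _ in samples)
    sxy = sum((s - mean_x) * (t - mean_y) for s, t in samples)
    slope = sxy / sxx                    # seconds per byte
    latency = mean_y - slope * mean_x    # intercept: fixed per-message cost
    bandwidth = 1.0 / slope              # bytes per second
    return latency, bandwidth
```

In practice the samples are noisy, so a robust fit or a filter over repeated measurements would replace this plain least-squares sketch.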
Network monitoring tools such as Remos and NWS can, in some cases, determine the physical-layer topology and measure its links, paths, and hosts.