The LISA '02 WIP session went very well. We had some interesting submissions! Amr Awadallah created a lot of excitement with his vMatrix presentation. However, the crowd gave the LISA '02 WIP Whip to Jeremy Mates for his "Improving Productivity" (by reading your daily cartoons) presentation. In order of presentation, here are the contestants' paragraphs.
Peg Schafer, Work-in-Progress Coordinator
SysAdmins of the world unite! Are you tired of being stepped on by others? Have you ever been in a situation where there was a serious problem and you were out-shouted in the problem resolution process by an unknowledgeable person? Or where the process was driven by politics rather than by solutions based on causal data or reason? If the answer to these or similar questions is YES, then you are not alone! Alva Couch has encouraged me to write a paper on this topic, including case studies and a manual of formal problem resolution practices, complete with a "Robert's Rules"-style set of rules that should preempt such time-wasting, stressful activity.
After being encouraged by some seemingly nameless party, I've been conned into doing this. So, you get to hear about it. This WIP pulls together various information, all available online, on getting a copy of data from a TiVo onto a Mac OS X machine. It covers connecting a TiVo to a TCP/IP network, enabling external control of the TiVo via HTTP and a Web browser, and how the heck to export data from a TiVo to a Mac/UNIX system. I am also designing a GUI for all of this with a collaborator (who just happens to work at Apple :).
The vMatrix is a network of virtual machine monitors that allows fluid server mobility between real machines. By building the servers inside virtual machines, we can easily move them around. The applications that we are targeting are: Dynamic Content Distribution, Server Switching, and Warm Standbys. This is research work that I am doing with Prof. Mendel Rosenblum at Stanford. More info (papers, presentations) at: https://www.thevmatrix.com
SysNav started out as a closed-source project for managing servers via a portal infrastructure. It stores configuration information about machines and the components they would like managed. This information is held in LDAP and translated into cfengine files and configuration files by the middle layer; the backend then takes these configuration files and executes them via cfengine. This framework will install, upgrade, and configure components automatically based on the information stored in LDAP. SysNav is going through a transition: CommNav's goal is to take the backend and the middle layer and form an open source meta-project. We at CommNav feel the community will benefit from the project and from the other sub-projects that will be generated by taking SysNav open source. Collaboration has already begun internally, and the code will be released after the new year. Please see sysnav.commnav.com for more information.
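The flow described above — per-host records in LDAP, translated by a middle layer into cfengine input — can be sketched roughly as follows. The attribute names and output format here are illustrative assumptions, not SysNav's actual schema:

```python
# Hypothetical sketch of a SysNav-style middle layer: turn a per-host
# record (as it might be stored in LDAP) into a cfengine-style stanza.
# Attribute names ("cn", "managedComponents") and the generated format
# are assumptions for illustration only.

def to_cfengine(host_record):
    """Render a simple cfengine 'copy' section listing each managed
    component's master config and its destination on the host."""
    lines = ["copy:", f"    # generated for {host_record['cn']}"]
    for comp in host_record["managedComponents"]:
        lines.append(f"    /master/{comp}/config  dest=/etc/{comp}.conf")
    return "\n".join(lines)

record = {"cn": "web01", "managedComponents": ["sshd", "ntp"]}
print(to_cfengine(record))
```

A backend would then hand the generated file to cfengine for execution on the target host.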
The Configuration Monitoring and Reporting Environment (CMRE) is designed to collect configuration data from all our systems and then correlate and report on that information. This lets us know the exact state of our systems: OS levels and hardware, installed software and patches, security and audit problems, and standards conformance. We collect this data and save it for historical tracking (via CVS), and we upload a significant portion to a database for reporting across the company at various levels of detail. We also combine this with our performance monitoring to identify the most over- and under-utilized systems.
Following on research presented in the refereed papers track, Adam has founded a company to build decision support software for IT departments to find the optimal time to install patches, maximizing their uptime and reliability. Adam is interested in talking to IT managers who measure uptime and security.
I'll talk briefly about why I'm writing yet another backup application and why this one will be newer, better, different. So cool, that hopefully you'll forget why you thought digital watches were a pretty neat idea.
The conserver application was developed by Tom Fine in 1990 to allow multiple users to watch a serial console at the same time. Despite its indispensability, many sysadmins aren't aware of it. Conserver can log console output, allows users to take write access to a console (one at a time), and has a variety of bells and whistles to accentuate that basic functionality. The idea is that conserver will log all your serial traffic so you can go back and review why something crashed, look at changes (if made on the console), or tie the console logs into a monitoring system. With multi-user capabilities you can work on equipment remotely, collaborate with others, and mentor junior admins. (See Fine and Romig, LISA IV Conference Proceedings, 97-100.)
Since then, many enhancements have been added. The current conserver.com version (7.2.4) also includes basic SSL support so that, assuming you have a network connection, you can securely interact with any of the equipment from home or wherever. The next version will have yet another slew of enhancements, including complete SSL support and a new config file format. In this WIP, I'll give you the scoop on the latest features and solicit additional cool ideas for the code and a possible future paper.
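The core idea — everything logged, many watchers, one writer at a time — can be illustrated with a toy model. This is a conceptual sketch only, not conserver's actual code or protocol:

```python
# Toy model of the conserver idea: console output is appended to a log
# and fanned out to every attached watcher, while at most one client at
# a time holds write access. Class and method names are invented here.
import queue

class Console:
    def __init__(self):
        self.log = []          # full history, as conserver logs to disk
        self.watchers = []     # one queue per attached client
        self.writer = None     # at most one client may write

    def attach(self):
        q = queue.Queue()
        self.watchers.append(q)
        return q

    def take_write(self, client):
        if self.writer is None:
            self.writer = client
            return True
        return False           # someone else already has the console

    def output(self, line):
        self.log.append(line)  # kept for post-crash review
        for q in self.watchers:
            q.put(line)        # every watcher sees it live

con = Console()
alice_view, bob_view = con.attach(), con.attach()
assert con.take_write("alice") and not con.take_write("bob")
con.output("panic: zero")      # both watchers receive it; log keeps it
```

The real conserver does this over the network for dozens of serial consoles at once, which is what makes remote collaboration and mentoring practical.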
rcs.mgr is a basic, self-contained configuration manager that wraps the RCS process for textual configuration files and manages their installation, including setting ownerships and permissions and running any post-installation commands necessary to activate the changes. The script has been in production for 1.5 years. Future developments will include better handling of unauthorized changes and support for per-file editors, allowing the management of non-textual files.
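The per-file flow such a wrapper manages — check the change into RCS, install with the right ownership and mode, then run a post-install command — might look like this. The `ci` and `install` tools are real, but the flow, options, and function below are an illustrative assumption, not the actual rcs.mgr script:

```python
# Minimal sketch of an rcs.mgr-style wrapper's plan for one managed
# file: record the revision in RCS, install the file with the desired
# owner/group/mode, then activate the change. Illustrative only.

def plan(src, dest, owner, group, mode, post_cmd):
    """Return, in order, the commands the wrapper would run."""
    return [
        ["ci", "-l", "-m", "managed update", src],  # check in, keep lock
        ["install", "-o", owner, "-g", group, "-m", mode, src, dest],
        post_cmd,                                   # e.g. reload a daemon
    ]

for cmd in plan("/config/ntp.conf", "/etc/ntp.conf",
                "root", "wheel", "0644",
                ["service", "ntpd", "reload"]):
    print(" ".join(cmd))
```

Keeping the whole sequence in one place is what makes the process repeatable and leaves an RCS history to diff against when detecting unauthorized changes.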
I have a multicast distribution program that has been "under development" for about two years now. It puts a sequence number on each datagram and uses selective retransmission from the receiver to the sender to recover missing sequence numbers. It also uses PGP signatures on each whole "package" for authenticity and integrity; this also allows building a web of trust. The "file" program is used to determine how to process the received item on each receiving host. Each "distrib item" is signed with PGP and multicast to all listening clients on a well-defined port. Responses can be collated in many different ways: syslog, mail, TCP socket, file, etc. The software provides distribution and an extensible framework upon which to build. A distribution server can also be used as a generic request repository, so a peer-to-peer network of senders and requestors can be built easily.
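The sequence-number-plus-selective-retransmission mechanism can be sketched as follows. The wire format here is an invented example for illustration, not the author's actual protocol:

```python
# Sketch of selective retransmission: each datagram carries a sequence
# number; the receiver tracks what arrived and requests only the gaps.
# The 4-byte header format below is an assumption for illustration.
import struct

def frame(seq, payload):
    # 4-byte big-endian sequence number, then the payload bytes
    return struct.pack("!I", seq) + payload

def parse(datagram):
    (seq,) = struct.unpack("!I", datagram[:4])
    return seq, datagram[4:]

def missing(received_seqs, highest):
    # Sequence numbers the receiver should ask the sender to resend
    return sorted(set(range(highest + 1)) - set(received_seqs))

# Datagrams 0..4 were sent; 1 and 3 were lost in transit
got = [parse(frame(s, b"chunk"))[0] for s in (0, 2, 4)]
print(missing(got, 4))   # -> [1, 3]
```

In the real program this retransmission loop runs per receiver over UDP multicast, and the PGP signature covers the reassembled package as a whole rather than individual datagrams.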
The talk is available at https://www.sial.org/talks/productivity/ and includes pointers to the script.
Suppose you had a host-based and a network-based IDS on every computer in your enterprise. How would you manage them? The "self-secure devices" we are developing are disk drives and NICs that include security measures such as monitoring for changes to system files and for virus traffic. I am in the early stages of designing the user interface for a system administrator to configure, monitor, and control self-secure devices.