2006 USENIX Annual Technical Conference
TRAINING


Tuesday, May 30, 2006
Full-Day Tutorials
T1 Advanced Perl Programming
Tom Christiansen, Consultant

Who should attend: Anyone with a journeyman-level knowledge of Perl programming who wants to hone Perl skills. This class will cover a wide variety of advanced topics in Perl, including many insights and tricks for using these features effectively. After completing this class, attendees will have a much richer understanding of Perl and will be better able to make it part of their daily routine.

Topics include:

  • Symbol tables and typeglobs
    • Symbolic references
    • Useful typeglob tricks (aliasing)
  • Modules
    • Autoloading
    • Overriding built-ins
    • Mechanics of exporting
    • Function prototypes
  • References
    • Implications of reference counting
    • Using weak references for self-referential data structures
    • Autovivification
    • Data structure management, including serialization and persistence
    • Closures
  • Fancy object-oriented programming
    • Using closures and other peculiar referents as objects
    • Overloading of operators, literals, and more
    • Tied objects
  • Managing exceptions and warnings
    • When die and eval are too primitive for your taste
    • The use warnings pragma
    • Creating your own warnings classes for modules and objects
  • Regular expressions
    • Debugging regexes
    • qr// operator
    • Backtracking avoidance
    • Interpolation subtleties
    • Embedding code in regexes
  • Programming with multiple processes or threads
    • The thread model
    • The fork model
    • Shared memory controls
  • Unicode and I/O layers
    • Named Unicode characters
    • Accessing Unicode properties
    • Unicode combined characters
    • I/O layers for encoding translation
    • Upgrading legacy text files to Unicode
    • Unicode display tips
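The named-character and combining-character items above are language-neutral ideas; a minimal sketch, in Python rather than Perl purely for illustration:

```python
import unicodedata

# "é" as one precomposed code point vs. base letter plus combining accent
composed = "\N{LATIN SMALL LETTER E WITH ACUTE}"
decomposed = "e\N{COMBINING ACUTE ACCENT}"

assert composed != decomposed   # different code point sequences...
assert unicodedata.normalize("NFC", decomposed) == composed  # ...same character
print(unicodedata.name(composed))
```

Perl spells the same ideas with the charnames pragma's \N{...} escapes and the Unicode::Normalize module.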
Tom Christiansen (T1) has been involved with Perl since day zero of its initial public release in 1987. Author of several books on Perl, including The Perl Cookbook and Programming Perl from O'Reilly, Tom is also a major contributor to Perl's online documentation. He holds undergraduate degrees in computer science and Spanish and a Master's in computer science. He now lives in Boulder, Colorado.

 
T2 Solaris 10 Performance, Observability, & Debugging
James Mauro and Richard McDougall, Sun Microsystems

Who should attend: Anyone who supports or may support Solaris 10 machines.

This one-day tutorial will cover the tools and utilities available in Solaris 10 for understanding system and application behavior. An overview of the various tools will be followed by a drill-down on the uses of and methodology for applying the tools to resolve performance issues and pathological behavior, or simply to understand the system and workload better.

Topics include:

  • Solaris 10 features overview
  • Solaris 10 tools and utilities
    • The conventional stat tools (mpstat, vmstat, etc.)
    • The procfs tools (ps, prstat, pmap, pfiles, etc.)
    • lockstat and plockstat
    • Using kstat
    • DTrace, the Solaris dynamic tracing facility
    • Using mdb in a live system
  • Understanding memory use and performance
  • Understanding thread execution flow and profiling
  • Understanding I/O flow and performance
  • Looking at network traffic and performance
  • Application and kernel interaction
  • Putting it all together
James Mauro (T2) is a Senior Staff Engineer in the Performance and Availability Engineering group at Sun Microsystems. Jim's current interests and activities are centered on benchmarking Solaris 10 performance, workload analysis, and tool development. This work includes Sun's new Opteron-based systems and multicore performance on Sun's Chip Multithreading (CMT) Niagara processor. Jim resides in Green Brook, New Jersey, with his wife and two sons. He spent most of his spare time in the past year working on the second edition of Solaris Internals. Jim co-authored the first edition of Solaris Internals with Richard McDougall and has been writing about Solaris in various forums for the past eight years.

Richard McDougall (T2), had he lived 100 years ago, would have had the hood open on the first four-stroke internal combustion gasoline-powered vehicle, exploring new techniques for making improvements. He would be looking for simple ways to solve complex problems and helping pioneering owners understand how the technology works to get the most from their new experience. These days, McDougall uses technology to satisfy his curiosity. He is a Distinguished Engineer at Sun Microsystems, specializing in operating systems technology and system performance. He is co-author of Solaris Internals (Prentice Hall PTR, 2000) and Resource Management (Sun Microsystems Press, 1999).

T3 Building a Logging Infrastructure and Log Analysis for Security NEW!
Abe Singer, San Diego Supercomputer Center

Who should attend: System, network, and security administrators who want to be able to separate the wheat of warning information from the chaff of normal activity in their log files.

This tutorial will show the importance of log files for maintaining system security and general well-being, offer some strategies for building a centralized logging infrastructure, explain some of the types of information that can be obtained for both real-time monitoring and forensics, and teach techniques for analyzing log data to obtain useful information.

The devices on a medium-sized network can generate millions of lines of log messages a day. Although much of the information is normal activity, hidden within that data can be the first signs of an intrusion, denial of service, worms/viruses, and system failures. Getting a handle on your log files can help you run your systems and networks more effectively and can provide forensic information for post-incident investigation.

Topics include:

  • Problems, issues, and scale of handling log information
  • Generating useful log information: improving the quality of your logs
  • Collecting log information
    • syslog and friends
    • Building a log host
    • Integrating MS Windows into a UNIX log architecture
  • Storing log information
    • Centralized log architectures
    • Log file archiving
  • Log analysis
    • Log file parsing tools
    • Data analysis of logfiles (e.g., baselining)
    • Attack signatures and other interesting things to look for in your logs
  • Legal issues
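As a small taste of the parsing-and-baselining material, the sketch below counts classic BSD syslog messages per host and program. The regex and field names are illustrative, not taken from the course:

```python
import re
from collections import Counter

# Classic BSD syslog format: "MMM dd HH:MM:SS host program[pid]: message"
SYSLOG_RE = re.compile(
    r"^(?P<ts>\w{3}\s+\d+\s[\d:]{8})\s"
    r"(?P<host>\S+)\s"
    r"(?P<prog>[^\[:\s]+)(?:\[(?P<pid>\d+)\])?:\s"
    r"(?P<msg>.*)$"
)

def baseline(lines):
    """Count messages per (host, program): a crude activity baseline."""
    counts = Counter()
    for line in lines:
        m = SYSLOG_RE.match(line)
        if m:
            counts[(m.group("host"), m.group("prog"))] += 1
    return counts

sample = [
    "Mar 12 04:02:11 gw sshd[4123]: Failed password for root from 10.0.0.5",
    "Mar 12 04:02:13 gw sshd[4123]: Failed password for root from 10.0.0.5",
    "Mar 12 04:05:01 web cron[991]: (root) CMD (run-parts /etc/cron.hourly)",
]
for (host, prog), n in baseline(sample).items():
    print(host, prog, n)
```

Comparing today's counts against an established baseline is one simple way to surface the anomalies (a sudden spike in sshd failures, say) hiding in millions of routine lines.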
Abe Singer (T3, R4) is a Computer Security Researcher in the Security Technologies Group at the San Diego Supercomputer Center. In his operational security responsibilities, he participates in incident response and forensics and in improving the SDSC logging infrastructure. His research is in pattern analysis of syslog data for data mining. He is co-author of the SAGE booklet Building a Logging Infrastructure and author of a forthcoming O'Reilly book on log analysis.

T4 Hands-on Linux Security Class: From Hacked to Secure in Two Days (Day 1 of 2)
Rik Farrow, Security Consultant

Who should attend: System administrators of Linux and other UNIX systems; anyone who runs a public UNIX server.

We will work with systems that have been hacked and include hidden files, services, and evidence of the intrusion. You will learn how to uncover exploited systems and properly secure them. You will perform hands-on exercises with dual-use tools to replicate what intruders do, as well as tools dedicated to security. The tools vary from the ordinary, such as find and strings, to less familiar but important ones such as lsof, various scanners, sniffers, and the Sleuth Kit.

The lecture portion of this class covers the background you need to understand UNIX security principles, TCP/IP, scanning, and popular attack strategies, as well as defenses for networks and individual systems. The class will end with a discussion of the use of patching and vulnerability assessment tools.

Day 1 topics begin with a quick assessment of a system, looking for obvious signs of intrusion. We will then cover TCP/IP and how it relates to different types of attacks and scanning, to learn what an attacker can "see" from the network and the limitations of certain styles of attack. The inner workings of buffer overflows, with examples, graphically illustrate how these attacks work, and what defenses against them exist. Day 1 concludes with an examination of a buggy Web script, how to audit CGI scripts quickly, and what can be done to prevent this attack from succeeding.

Day 2 begins with a look at passwords, including a quick spin with John the Ripper. We examine suid files as potential backdoors and show how to bypass the common defense against these backdoors. Network services provide the necessary access for attackers, so we practice determining exactly what services are necessary and how UNIX systems should be hardened. Tools that look for rootkits, often the most subtle way for an attacker to maintain a presence, have their weak points. We learn about rootkits and how to search for them. Then we look at the output of Sleuth Kit to discover what happened, and when, on a poorly secured system. Finally, we look at other defensive software, including firewalls (netfilter), patching, and vulnerability scanning.

Class exercises require that you have an x86-based laptop computer that can be booted from a CD. Students will receive a live CD (KNOPPIX) that contains the tools, files, and exercises required for the course. You can download KNOPPIX (v3.9) yourself to see whether your laptop is supported. Some people have come without laptops and teamed up with friendly laptop users.

Exercises include:

DAY 1:

  • Searching for hidden files
  • TCP/IP and its relation to probes and attacks
  • Uses of ARP and ethereal
  • hping2 probes
  • nmap (connect and SYN scans)
  • Buffer overflows in sample C programs
  • Weaknesses in Web scripts (using a Perl example)
DAY 2:
  • John the Ripper, password cracking
  • Misuses of suid shells, finding backdoors
  • Disabling network services
  • Searching for evidence of rootkits
  • Sleuth Kit (looking at intrusion timelines)
  • netfilter
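The suid-backdoor exercise boils down to sweeping the filesystem for set-uid files. The class uses standard tools such as find; the sketch below shows the same idea in Python, purely for illustration:

```python
import os
import stat

def find_suid(root):
    """Walk a directory tree, yielding regular files with the set-uid bit."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)   # lstat: do not follow symlinks
            except OSError:
                continue              # vanished or unreadable; skip it
            if stat.S_ISREG(st.st_mode) and st.st_mode & stat.S_ISUID:
                yield path

# Rough equivalent of: find /usr/bin -type f -perm -4000
for p in find_suid("/usr/bin"):
    print(p)
```

Comparing the sweep's output against a known-good list from a freshly installed system is what turns the raw listing into evidence of a backdoor.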
Rik Farrow (T4, W4) provides UNIX and Internet security consulting and training. He has been working with UNIX system security since 1984 and with TCP/IP networks since 1988. He has taught at the IRS, Department of Justice, NSA, NASA, US West, Canadian RCMP, Swedish Navy, and for many U.S. and European user groups. He is the author of UNIX System Security, published by Addison-Wesley in 1991, and System Administrator's Guide to System V (Prentice Hall, 1989). Farrow is the editor of ;login: and a network security columnist for Network magazine. Rik lives with his family in the high desert of northern Arizona and enjoys hiking and mountain biking when time permits.

T5 Reliable, Fast, Lightweight Data Storage: A Berkeley DB Tutorial NEW!
Margo Seltzer, Sleepycat Software

Who should attend: Application architects and developers involved in the design and implementation of data storage solutions for real-world applications. Software development and program managers who are interested in understanding the services provided by Berkeley DB will also benefit from this tutorial. Engineers familiar with Berkeley DB, as well as those who have not yet used this product, will come away from this tutorial with useful, practical information, sample code, and Berkeley DB performance tuning tips.

Berkeley DB is the most widely used developer database in the world. It is used by many open source and mission-critical proprietary applications that require a lightweight, fast, reliable, recoverable, scalable data storage solution. Short-circuit your learning curve or increase the effectiveness of your existing use of Berkeley DB by attending this tutorial.

We will review the underlying concepts, API, and implementation recommendations for application developers using Berkeley DB. Additionally, we will focus on performance tuning options and usage scenarios.

Topics include:

  • Databases and access methods
  • Managing key/data pairs
  • Using cursors
  • Using secondary indices
  • Database environments
  • Application design
  • Transactions, locking, logging, backup, and recovery
  • Berkeley DB database administration
  • Application tuning
  • Replication overview
  • Berkeley DB XML overview
  • Berkeley DB Java Edition overview
  • Berkeley DB access methods
    • Btree
    • Queue
    • Recno
    • Hash
  • Berkeley DB subsystems
    • Logging
    • Locking
    • Transactions
  • Performance tuning
    • Setting page and cache sizes
    • Tuning logging and locking parameters
    • Managing deadlock detection methods and timeout values
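The key/data-pair model in the list above can be previewed with Python's standard dbm module, a much simpler cousin of Berkeley DB's hash access method. This is not the Berkeley DB API itself; names below are illustrative:

```python
import dbm
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo_db")

# "c" opens the database, creating it if it does not exist
with dbm.open(path, "c") as db:
    db[b"user:1001"] = b"alice"   # keys and values are byte strings
    db[b"user:1002"] = b"bob"

# Data persists across opens, like a tiny on-disk hash table
with dbm.open(path, "r") as db:
    print(db[b"user:1001"])
    print(b"user:1002" in db)
```

Berkeley DB itself layers the cursors, secondary indices, transactions, and recovery covered in the course on top of this basic model.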
Margo Seltzer (T5) is a Herchel Smith Professor of Computer Science, the Associate Dean for Computer Science and Engineering, and a Harvard College Professor in the Division of Engineering and Applied Sciences at Harvard University. Her research interests include file systems, databases, and transaction processing systems. She is the author of several widely used software packages, including database and transaction libraries and the 4.4BSD log-structured file system. Dr. Seltzer is also a founder and CTO of Sleepycat Software, the makers of Berkeley DB. She is a Sloan Foundation Fellow in Computer Science and a Bunting Fellow, and was the recipient of the 1996 Radcliffe Junior Faculty Fellowship and the University of California Microelectronics Scholarship. She is recognized as an outstanding teacher and won the Phi Beta Kappa teaching award in 1996 and the Abrahmson Teaching Award in 1999. Dr. Seltzer received an A.B. degree in applied mathematics from Harvard/Radcliffe College in 1983 and a Ph.D. in computer science from the University of California, Berkeley, in 1992.

Wednesday, May 31, 2006
Full-Day Tutorials
W1 Ajax and Advanced Responsive WebApp Development NEW!
Alex Russell, JotSpot, Inc. (and Project Lead for the Dojo Toolkit)

Who should attend: Developers who are familiar with basic Web application development practice and standards (HTML, CSS, and basic JavaScript) who are looking to add Ajax and other rich and responsive interface elements to new or existing Web applications. Familiarity with HTTP, the DOM, and XML is welcome but not required. The tutorial will include many code examples of professional-quality idiomatic JavaScript, which will be explained.

Web application development techniques have continued to advance at a rapid clip in server-side environments, but for several years the apparent limitations of browser technology have hindered the creation of richer experiences for end users. "Ajax" is a Web development technique for employing the latent capabilities of modern browsers, which can now be fully exploited as legacy browsers are retired. These capabilities let developers provide better user interactions, expand the uses of the browser as a platform, and make new types of in-browser apps possible without plugins.

This tutorial focuses on augmenting existing applications with Ajax, and the trade-offs involved. A strong emphasis is given to testing, interoperability, and debugging. Tools for easing development and debugging are also presented and used.

Attendees will leave this tutorial with a firm grasp of the underlying technologies of Ajax and where to turn when the going gets rough in developing Ajax applications. Examples and the slides will be made available online.

Topics include:

  • Ajax: Good? Bad? Irrelevant?
  • Ajax basics
    • HTTP fundamentals
    • The XMLHTTP object, and alternatives to it
  • What to send over the wire, and when: HTML, XML, JSON, or plain text?
  • JavaScript and the DOM
  • Defensive client-side development
    • Accessibility concerns and techniques
    • Browser support "gotchas"
  • Debugging
  • Dissection of common Ajax apps with debugging tools
  • Toolkits to help
  • Advanced JS and visual effects
    • Drag-and-drop
    • Animation
    • JSON-RPC
    • Web services
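Under the hood, the XMLHTTP item above is nothing exotic: the browser issues an ordinary HTTP request and the server answers with a small payload (HTML, XML, JSON, or plain text). A toy JSON endpoint, sketched in Python here only because the client side is what the course itself covers; names and the payload are illustrative:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    """Toy endpoint an XMLHttpRequest poll could hit for a JSON payload."""

    def do_GET(self):
        body = json.dumps({"status": "ok", "unread": 3}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# To serve: HTTPServer(("127.0.0.1", 8000), StatusHandler).serve_forever()
```

The client side would pair this with an XMLHttpRequest (or a toolkit wrapper such as Dojo's) and parse the JSON response into a live object.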
Alex Russell (W1) is a Senior Software Engineer at JotSpot and Project Lead for the Dojo Toolkit. Dojo is an Open Source library that helps Web application developers spend more time building great experiences and less time fighting browser quirks. Prior to assisting in the development of Dojo, Alex was primary author of the netWindows DHTML framework. He has been wrestling browsers into relative submission since the late '90s.

W2 Introduction to VMware ESX Server NEW!
John Gannon and John Arrasjid, VMware

Who should attend: x86 sysadmins who want to dramatically improve the way they manage systems.

Do any of these complaints sound familiar?

  • Our datacenter is out of power/space/network infrastructure and adding new servers is a struggle.
  • Our developers ask us for new servers constantly and we can't keep up with the demand.
  • It takes us days or weeks to procure, rack, stack, and configure a new box.
  • Our yearly disaster recovery simulations are hardly ever successful because the DR site has a different hardware configuration than the production site.
  • Our DR site is too expensive to operate because it is an exact replica of our production environment.
  • We can only do hardware upgrades late at night and on the weekends.

If yes, VMware ESX Server can help by:

  • reducing your x86 server count by up to 90%
  • supporting up to 80 x86-based OS instances running simultaneously (Linux, FreeBSD, NetWare, and Windows) on a single physical machine
  • freeing up valuable rack space, SAN, and networking ports
  • providing instantaneous rollback to a "known good configuration" to assist in software development and testing
  • allowing you to provision a new x86 server in minutes instead of weeks
  • enabling Disaster Recovery despite having different hardware (and less of it) at your DR site
  • eliminating downtime traditionally associated with hardware maintenance
In this tutorial, we will provide an overview of virtual machine technology as well as the features and functionality of ESX Server. Installation, configuration, and best practices will be the focus of the session.

Topics include:

  • Virtual infrastructure and ESX Server overview
  • ESX Server installation and configuration
  • Virtual Machine (VM) creation and operation
    • Installing VMs from scratch
    • Using templates and cloning to provision VMs in minutes
  • Operations and administration
    • Sizing the environment
    • Automating tasks via scripting
    • Operations best practices
  • Enabling disaster recovery and business continuity with ESX Server
  • Migration strategies and the P2V process (Physical-to-Virtual)
  • Advanced configuration
    • SAN
    • Networking
    • Performance Tuning
    • Security
John Gannon (W2, R5) has over ten years of experience architecting and implementing UNIX, Linux, and Windows infrastructures. John has worked in network engineering, operations, and professional services roles with various companies including Sun Microsystems, University of Pennsylvania, Scient Corporation, and FOX Sports. John's current work at VMware involves delivering server consolidation, disaster recovery, and virtual infrastructure solutions to FORTUNE 500 clients.

John Arrasjid (W2, R5) has 20 years of experience in the computer science field. His experience includes work with companies such as AT&T, Amdahl, 3Dfx Interactive, Kubota Graphics, Roxio, and his own company, WebNexus Communications, where he developed consulting practices and built a cross-platform IT team. John is currently a senior member of the VMware Professional Services Organization as a Consulting Architect. John has developed a number of PSO engagements, including Performance, Security, and Disaster Recovery and Backup.

W3 Implementing [Open]LDAP Directories
Gerald Carter, Samba Team/Hewlett-Packard
9:00 a.m.–5:00 p.m.

Who should attend: Both LDAP directory administrators and architects. The focus is on integrating standard network services with LDAP directories. The examples are based on UNIX hosts and the OpenLDAP directory server and will include actual working demonstrations throughout the course.

System administrators today run a variety of directory services, although these are referred to by names such as DNS and NIS. The Lightweight Directory Access Protocol (LDAP) is the successor to the X.500 directory and has the promise of allowing administrators to consolidate multiple existing directories into one.

Topics include:

  • Replacing NIS domains
  • Integration with Samba file and print servers
  • Integrating MTAs such as Sendmail and Postfix
  • Creating address books for mail clients
  • Managing user access to HTTP and FTP services
  • Integrating with DHCP and DNS servers
  • Scripting with the Net::LDAP Perl module
  • Defining custom attributes and object classes
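To make the NIS-replacement and custom-schema items concrete, a single /etc/passwd entry maps onto an LDIF posixAccount entry along these lines (the DIT layout and names are illustrative, not taken from the course):

```ldif
# /etc/passwd line:  jdoe:x:1001:100:Jane Doe:/home/jdoe:/bin/bash
dn: uid=jdoe,ou=People,dc=example,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
cn: Jane Doe
uid: jdoe
uidNumber: 1001
gidNumber: 100
homeDirectory: /home/jdoe
loginShell: /bin/bash
```

Clients configured with nss_ldap (or an equivalent) then resolve users against entries like this instead of NIS maps.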
Gerald Carter (W3, R7) has been a member of the Samba Development Team since 1998. He has been developing, writing about, and teaching on open source since the late '90s. Currently employed by Centeris as a Samba and open source developer, Gerald has written books for SAMS Publishing and for O'Reilly Publishing.

 

W4 Hands-On Linux Security Class: From Hacked to Secure in Two Days (Day 2 of 2)
Rik Farrow, Security Consultant
9:00 a.m.–5:00 p.m.

We will work with systems that have been hacked and include hidden files, services, and evidence of the intrusion. You will learn how to uncover exploited systems and properly secure them. You will perform hands-on exercises with dual-use tools to replicate what intruders do, as well as tools dedicated to security. The tools vary from the ordinary, such as find and strings, to less familiar but important ones such as lsof, various scanners, sniffers, and the Sleuth Kit.

The lecture portion of this class covers the background you need to understand UNIX security principles, TCP/IP, scanning, and popular attack strategies, as well as defenses for networks and individual systems. The class will end with a discussion of the use of patching and vulnerability assessment tools.

Day 2 begins with a look at passwords, including a quick spin with John the Ripper. We examine suid files as potential backdoors and show how to bypass the common defense against these backdoors. Network services provide the necessary access for attackers, so we practice determining exactly what services are necessary and how UNIX systems should be hardened. Tools that look for rootkits, often the most subtle way for an attacker to maintain a presence, have their weak points. We learn about rootkits and how to search for them. Then we look at the output of Sleuth Kit to discover what happened, and when, on a poorly secured system. Finally, we look at other defensive software, including firewalls (netfilter), patching, and vulnerability scanning.

Class exercises require that you have an x86-based laptop computer that can be booted from a CD. Students will receive a live CD (KNOPPIX) that contains the tools, files, and exercises required for the course. You can download KNOPPIX (v3.9) yourself to see whether your laptop is supported. Some people have come without laptops and teamed up with friendly laptop users.

Exercises include, for Day 2:

  • John the Ripper, password cracking
  • Misuses of suid shells, finding backdoors
  • Disabling network services
  • Searching for evidence of rootkits
  • Sleuth Kit (looking at intrusion timelines)
  • netfilter
Rik Farrow (T4, W4) provides UNIX and Internet security consulting and training. He has been working with UNIX system security since 1984 and with TCP/IP networks since 1988. He has taught at the IRS, Department of Justice, NSA, NASA, US West, Canadian RCMP, Swedish Navy, and for many U.S. and European user groups. He is the author of UNIX System Security, published by Addison-Wesley in 1991, and System Administrator's Guide to System V (Prentice Hall, 1989). Farrow is the editor of ;login: and a network security columnist for Network magazine. Rik lives with his family in the high desert of northern Arizona and enjoys hiking and mountain biking when time permits.

W5 Issues in UNIX Infrastructure Design
Lee Damon, University of Washington

Who should attend: Anyone who is designing, implementing, or maintaining a UNIX environment with 2 to 20,000+ hosts. System administrators, architects, and managers who need to maintain multiple hosts with few admins.

This intermediate class will examine many of the background issues that need to be considered during the design and implementation of a mixed-architecture or single-architecture UNIX environment. It will cover issues from authentication (single sign-on) to the Holy Grail of single system images.

This class won't implement a "perfect solution," as each site has different needs. It will try to raise all the questions you should ask (and answer) while designing the solution that will meet your needs. We will look at some freeware and some commercial solutions, as well as many of the tools that exist to make a workable environment possible.

Topics include:

  • Administrative domains: Who is responsible for what, and what can users do for themselves?
  • Desktop services vs. farming: Do you do serious computation on the desktop, or do you build a compute farm?
  • Disk layout: How do you plan for an upgrade? Where do things go?
  • Free vs. purchased solutions: Should you write your own, or hire a consultant or company?
  • Homogeneous vs. heterogeneous: Homogeneous is easier, but will it do what your users need?
  • The essential master database: How can you keep track of what you have?
  • Policies to make life easier
  • Push vs. pull
  • Getting the user back online in 5 minutes
  • Remote administration: Lights-out operation; remote user sites; keeping up with vendor patches, etc.
  • Scaling and sizing: How do you plan on scaling?
  • Security vs. sharing: Your users want access to everything. So do the crackers . . .
  • Single sign-on: How can you do it securely?
  • Single system images: Can users see just one environment, no matter how many OSes there are?
  • Tools: The free, the purchased, the homegrown

Lee Damon (W5) has a B.S. in Speech Communication from Oregon State University. He has been a UNIX system administrator since 1985 and has been active in SAGE since its inception. He assisted in developing a mixed AIX/SunOS environment at IBM Watson Research and has developed mixed environments for Gulfstream Aerospace and QUALCOMM. He is currently leading the development effort for the Nikola project at the University of Washington Electrical Engineering department. He is past chair of the SAGE Ethics and Policies working groups and he chaired LISA '04.

W6 Linux System Administration
Joshua Jensen, Cisco Systems Inc.

Who should attend: System administrators who plan to implement Linux in a production environment. Attendees should understand the basics of system administration in a UNIX/Linux environment, i.e., user-level commands and TCP/IP networking. Both novice admins and gurus should leave the tutorial having learned something.

From a single server to a network of workstations, maintaining a Linux environment can be a daunting task for administrators knowledgeable in other platforms. Starting with a single server and ending with a multi-server, 1000+-user environment, this tutorial will provide practical information on how to use Linux in the real world. Attendees should leave the tutorial confident in their ability to set up and manage a secure Linux server and services. The tutorial will be conducted in an open manner that allows for question-and-answer interruptions.

Topics include (with an emphasis on security):

  • Installation issues
  • Boot loaders and system startup
  • Disk partitioning and LVM
  • Software RAID
  • The RPM package system
  • Networking
  • User management
  • Automated system installation
  • Network-based authentication
  • User accounts and management
  • Network services and xinetd
  • SSH: port tunneling, keys, tricks
  • New developments

Joshua Jensen (W6) has worked for IBM and Cisco Systems, and was Red Hat's first instructor, examiner, and RHCE. He worked with Red Hat for four and a half years, during which he wrote and maintained large parts of the Red Hat curriculum: Networking Services and Security, System Administration, Apache and Secure Web Server Administration, and the Red Hat Certified Engineer course and exam. Joshua has been working with Linux since 1996 and finds himself having come full circle: he recently left IBM to work with Red Hat Linux for Cisco Systems. In his spare time he dabbles in cats, fish, boats, and frequent flyer miles.

Thursday, June 1, 2006
Full-Day Tutorials
R1 Administering Linux in Production Environments
Æleen Frisch, Exponential Consulting

Who should attend: Both current Linux system administrators and administrators from sites considering converting to Linux or adding Linux systems to their current computing resources. We will be focusing on the administrative issues that arise when Linux systems are deployed to address a variety of real-world tasks and problems arising from both commercial and research-and-development contexts.

Topics include:

  • Recent kernel developments
  • High-performance I/O
    • Advanced filesystems and logical volumes
    • Disk striping
    • Optimizing I/O performance
  • Advanced compute-server environments
    • Beowulf
    • Clustering
    • Parallelization environments/facilities
    • CPU performance optimization
  • High availability Linux: fault-tolerance options
  • Enterprise-wide authentication
  • Fixing the security problems you didn't know you had (or, what's good enough for the researcher/hobbyist won't do for you)
  • Automating installations and other mass operations
  • Linux in the office environment
Æleen Frisch (R1, F1) has been a system administrator for over 20 years. She currently looks after a pathologically heterogeneous network of UNIX and Windows systems. She is the author of several books, including Essential System Administration (now in its 3rd edition).

 

R2 Solaris 10 Security Features Workshop
Peter Baer Galvin, Corporate Technologies, Inc.

Who should attend: Solaris systems managers and administrators interested in the new security features in Solaris 10 (and features in previous Solaris releases that they may not be using).

This course covers a variety of topics surrounding Solaris 10 and security. Solaris 10 includes many new features, and there are new issues to consider when deploying, implementing, and managing Solaris 10. This will be a workshop featuring instruction and practice/exploration. Each student should have a laptop with wireless access for remote access into a Solaris 10 machine.

Topics include:

  • Overview
  • N1 Grid Containers (a.k.a. Zones) (lab)
  • RBAC (lab)
  • Privileges (lab)
  • NFSv4
  • Flash archives and live upgrade
  • Moving from NIS to LDAP
  • DTrace
  • FTP client and server enhancements
  • PAM enhancements
  • Auditing enhancements
  • BSM
  • Service Management Facility (lab)
  • Solaris Cryptographic Framework
  • Smartcard interfaces and APIs
  • Kerberos enhancements
  • Packet filtering
  • BART
Peter Baer Galvin (R2, A2) is the Chief Technologist for Corporate Technologies, Inc., a systems integrator and VAR, and was the Systems Manager for Brown University's Computer Science Department. He has written articles for Byte and other magazines. He wrote the "Pete's Wicked World" and "Pete's Super Systems" columns at SunWorld. He is currently contributing editor for Sys Admin, where he manages the Solaris Corner. Peter is co-author of the Operating System Concepts and Applied Operating System Concepts textbooks. As a consultant and trainer, Peter has taught tutorials on security and system administration and has given talks at many conferences and institutions on such topics as Web services, performance tuning, and high availability.

Thursday Morning Half-Day Tutorials
R3 Wide Area Storage Networking: Server Consolidation and Data Protection Over the WAN NEW!
Michael Cucchi, Cambridge Computer Services
9:00 a.m.–12:30 p.m.

Who should attend: System administrators, IT managers, and enterprise architects who are concerned with disaster recovery, data protection, server consolidation, and resource sharing over a WAN. This tutorial is a survey of the types of solutions on the market today, with examples of when to choose one approach over another.

Storage networking over the WAN? Impossible, right? Bandwidth is too expensive. WAN latency kills the applications. SAN replication and remote backup are too expensive.

That was yesterday. This is today. New technologies have emerged to address all of these challenges and the results are much better than you might think. This tutorial is a survey of the various technologies available for moving and accessing storage over the WAN. It is divided into three sections. The first section focuses on backup and restore over the WAN and describes solutions for sending backups off-site over the WAN and for managing backups of branch offices. The second section focuses on the various technologies for replicating live data between sites. The third section focuses on accessing live storage over the WAN, covering technologies such as Wide Area File Services (WAFS) and WAN accelerators.

Topics include:

  • Remote site backup techniques
  • Continuous Data Protection (CDP)
  • Storage encryption
  • Capacity optimized storage devices and WAN accelerators
  • Host-based vs. SAN-based vs. fabric-based replication
  • Filesystem vs. volume-level vs. application-level replication
  • Application fail-over
  • The impact of latency on storage-intensive applications
  • Compensating for WAN latencies
  • WAN accelerators
  • Wide Area File Services (WAFS)
  • Email server consolidation
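The latency topic above is easy to quantify: with a fixed TCP window, a single stream's throughput is capped at window size divided by round-trip time, no matter how much bandwidth you buy. This is exactly the ceiling WAN accelerators attack. A back-of-the-envelope sketch (the window and RTT figures are illustrative):

```python
# Single-stream TCP throughput is bounded by window_size / RTT,
# regardless of link capacity.

def max_throughput_mbps(window_bytes, rtt_seconds):
    """Upper bound on single-stream TCP throughput in megabits/sec."""
    return (window_bytes * 8) / rtt_seconds / 1_000_000

# A classic 64 KB window over a 50 ms coast-to-coast link:
print(max_throughput_mbps(64 * 1024, 0.050))  # ~10.5 Mbps, even on a Gigabit circuit
```

Doubling the RTT halves the ceiling, which is why remote backup over the WAN felt "impossible" before window scaling, protocol spoofing, and capacity optimization came along.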
Michael Cucchi (R3) has over 13 years of IT experience. He spent seven of those years as a senior Linux/UNIX/Windows system admin and lead system administrator for a major Federal Department of Transportation data center. Michael did a two-year stint as a solution engineer for Ammasso, where he helped launch the first RDMA Ethernet NIC. Mike is currently a consultant for Cambridge Computer, a national integrator of data protection and storage networking technologies.

R4 Security Without Firewalls NEW!
Abe Singer, San Diego Supercomputer Center
9:00 a.m.–12:30 p.m.

Who should attend: Administrators who want or need to explore strong, low-cost, scalable security without firewalls.

Good, possibly better, network security can be achieved without relying on firewalls. The San Diego Supercomputer Center does not use firewalls, yet managed to go almost 4 years without an intrusion. Our approach defies some common beliefs, but it seems to work, and it scales well.

"Use a firewall" is the common mantra of much security documentation, and are the primary security "solution" in most networks. However, firewalls don't protect against activity by insiders, nor do firewalls provide protection against any activity that is allowed through the firewall. And, as is true for many academic institutions, firewalls just don't make sense in our environment. Weighting internal threats equally with external threats, SDSC has built an effective, scalable, host-based security model. The keys parts to our model are: centralized configuration management; regular and frequent patching; and strong authentication (no plaintext passwords). This model extends well to many environments beyond the academic.

Of course, we're not perfect, and last year we had a compromise as part of a security incident that spanned numerous institutions. However, firewalls would have done little, if anything, to mitigate that attack, and we believe our approach to security reduced the scope of the compromise and helped us recover faster than some of our peers.

In addition to a good security model and faster recovery, our system administration costs scale well. The incremental cost of adding a host to our network (beyond the cost of the hardware) is negligible, as is the cost of reinstalling a host.

Topics include:

  • The threat perspective from a data-centric point of view
  • How to implement and maintain centralized configuration management using cfengine, and how to build reference systems for fast and consistent (re)installation of hosts
  • Secure configuration and management of core network services such as NFS, DNS, and SSH
  • Good system administration practices
  • Implementing strong authentication and eliminating use of plaintext passwords for services such as POP/IMAP
  • A sound patching strategy
  • An overview of last year's compromise, how we recovered, and what we learned
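The centralized-configuration piece is what makes the per-host cost negligible: one policy, distributed to every host, keeps configurations converged. A minimal sketch in cfengine 2 syntax (the hostname, file paths, and restart command here are illustrative, not SDSC's actual policy):

```
# cfagent.conf fragment -- pull one vetted file from a master host
control:
   actionsequence = ( copy shellcommands )

copy:
   # Replace the local sshd_config with the master copy; if it
   # changed, raise the restart_sshd class for the next stanza.
   /masterfiles/etc/ssh/sshd_config
      dest=/etc/ssh/sshd_config
      server=config-master
      mode=0600
      define=restart_sshd

shellcommands:
   restart_sshd::
      "/etc/init.d/sshd restart"
```

Scaled across every host, the same mechanism distributes patches and security-sensitive configuration, and reinstalling a host becomes a matter of rerunning the policy against a reference system.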
Abe Singer (T3, R4) is a Computer Security Researcher in the Security Technologies Group at the San Diego Supercomputer Center. In his operational security responsibilities, he participates in incident response and forensics and in improving the SDSC logging infrastructure. His research is in pattern analysis of syslog data for data mining. He is co-author of the SAGE booklet Building a Logging Infrastructure and author of a forthcoming O'Reilly book on log analysis.

R5 Server Consolidation and Containment Methodology for Intel Environments NEW!
John Gannon and John Arrasjid, VMware
9:00 a.m.–12:30 p.m.

Who should attend: System administrators/architects and IT managers who will be responsible for planning and/or implementing server consolidation in an x86 environment. The tutorial assumes basic familiarity with server virtualization and software-based virtual machine technology.

Server sprawl is a serious problem in Wintel environments. Due to a variety of factors, enterprises find themselves with large server farms that operate at extremely low utilization rates, use an excessive amount of datacenter resources, and are increasingly costly to manage and support.

A compelling way to address these issues is by server consolidation and containment through use of server virtualization technology. Server consolidation and containment initiatives have significant impact on an organization's people, process, and technology. Proper assessment, planning, and execution of these initiatives are critical to making them successful.

Topics include:

  • Introduction
    • Flavors of consolidation
    • Motivations for consolidation (business and technical)
    • Risks inherent in consolidation
  • Assessment
    • Defining metrics for success
    • Tools and best practices for gathering performance and inventory data
    • Identifying good and bad candidates for consolidation
  • Planning
    • Selecting the ideal target hardware platform(s)
    • High availability and disaster recovery considerations
    • Virtual machine allocation best practices
    • Development of documentation and test plans
  • Building
    • Building momentum and mindshare for the consolidation
    • Tools for migrating from Physical to Virtual (P2V)
  • Managing
    • Virtual infrastructure management tools
    • Ongoing management considerations
    • Server containment policies
John Gannon (W2, R5) has over ten years of experience architecting and implementing UNIX, Linux, and Windows infrastructures. John has worked in network engineering, operations, and professional services roles with various companies, including Sun Microsystems, the University of Pennsylvania, Scient Corporation, and FOX Sports. John's current work at VMware involves delivering server consolidation, disaster recovery, and virtual infrastructure solutions to FORTUNE 500 clients.

John Arrasjid (W2, R5) has 20 years of experience in the computer science field. His experience includes work with companies such as AT&T, Amdahl, 3Dfx Interactive, Kubota Graphics, Roxio, and his own company, WebNexus Communications, where he developed consulting practices and built a cross-platform IT team. John is currently a senior member of the VMware Professional Services Organization as a Consulting Architect. John has developed a number of PSO engagements, including Performance, Security, and Disaster Recovery and Backup.

Thursday Afternoon Half-Day Tutorials
R6 UNIX on My Mind NEW!
Bill Cheswick, Lumeta
1:30 p.m.–5:00 p.m.

Who should attend: Anyone developing tools for UNIX or Linux, who wants to gain a greater understanding of "the way things are done, and why that is the right way." This is not a tutorial for novices, but is for people who want to fill out their understanding of the topics with philosophy, examples, and war stories.

Attendees should have used UNIX, at least a bit, and have some programming experience. They will gain facility in using UNIX as a powerful and efficient prototyping tool for a number of classes of problems and will gain insight into the simplicity and power of the original design of UNIX.

Attendees will benefit more if they bring their own computers running some version of UNIX or Linux; those who don't can team up with someone else.

Topics include:

  • The UNIX style of prototyping
  • Small instructional tasks in sysadmin, network admin, small languages, data mining of Internet mapping data, and HTML generation
  • Bourne shell and bash programming, awk, sed, etc.
  • Why real cats don't have options
  • Why /dev/null shouldn't be special
  • Why man pages are so important, and so hard to get right
Bill Cheswick (R6, F3) logged into his first computer in 1968. Seven years later, in 1975, he graduated from Lehigh University with a degree resembling Computer Science. Cheswick has worked on (and against) operating system security for over 35 years. He has worked at Lehigh University and the Naval Air Development Center in system software and communications. At the American Newspaper Publishers Association/Research Institute he shared his first patent for a hardware-based spelling checker, a device clearly after its time. For several years he consulted at a variety of universities, doing system management, software development, communications design and installation, PC evaluations, etc. In 1998, Ches started the Internet Mapping Project with Hal Burch. This work became the core technology of a Bell Labs spin-off, Lumeta Corporation, which explores the extent of corporate and government intranets and checks for host leaks that violate perimeter policies. Ches has pinged an active-duty US nuclear attack submarine (distance, 66ms). Ches has a wide interest in science and medicine. In his spare time he reads technical journals, hacks on MythTV and his home, and develops exhibit software for science museums. He eats very plain food—boring by even American standards.

R7 Ethereal and the Art of Debugging Networks NEW!
Gerald Carter, Samba Team/Hewlett-Packard
1:30 p.m.–5:00 p.m.

Who should attend: System and network administrators who are interested in learning more about the TCP/IP protocol and how network traffic monitoring and analysis can be used as a debugging, auditing, and security tool.

The focus of this course is using the Ethereal protocol analyzer as a debugging and auditing tool for TCP/IP networks. System logs can turn out to be incomplete or incorrect when you're trying to track down network application failures. Sometimes the quickest, or the only, way to find the cause is to look at the raw data on the wire. This course is designed to help you make sense of that data.

Topics include:

  • Introduction to Ethereal for local and remote network tracing
  • TCP/IP protocol basics
  • Analysis of popular application protocols such as DNS, DHCP, HTTP, NFS, CIFS, and LDAP
  • Security
  • How some kinds of network attacks can be recognized
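The heart of the course is reading raw bytes on the wire against a protocol specification. As a small taste of that, here is a sketch that unpacks the fixed 12-byte DNS header (field layout per RFC 1035; the sample bytes are fabricated):

```python
import struct

def parse_dns_header(data):
    """Unpack the fixed 12-byte DNS header (RFC 1035, section 4.1.1)."""
    ident, flags, qdcount, ancount, nscount, arcount = struct.unpack("!6H", data[:12])
    return {
        "id": ident,
        "is_response": bool(flags & 0x8000),  # QR bit
        "rcode": flags & 0x000F,              # response code
        "questions": qdcount,
        "answers": ancount,
    }

# A fabricated response header: id=0x1234, QR=1, 1 question, 2 answers
sample = struct.pack("!6H", 0x1234, 0x8180, 1, 2, 0, 0)
print(parse_dns_header(sample))
```

Ethereal does this decoding for hundreds of protocols at once; knowing what the dissector is doing underneath is what turns its output into a debugging tool rather than a wall of hex.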

Gerald Carter (W3, R7) has been a member of the Samba Development Team since 1998. He has been developing, writing about, and teaching on open source since the late '90s. Currently employed by Centeris as a Samba and open source developer, Gerald has written books for SAMS Publishing and for O'Reilly Publishing.

 

R8 Practical Project Management for Sysadmins and IT Professionals NEW!
Strata Rose Chalup, Project Management Consultant
1:30 p.m.–5:00 p.m.

Who should attend: System administrators who want to stay hands-on as team leads or system architects and need a new set of skills with which to tackle bigger, more complex challenges. No previous experience with project management is required. Participants will get a no-nonsense grounding in methods that work without adding significantly to one's workload. After completing this tutorial, participants will be able to take an arbitrarily daunting task and reduce it to a plan of attack that will be realistic, lend itself to tracking, and have functional, documented goals. They will be able to give succinct and useful feedback to management on overall project viability and timelines and easily deliver regular progress reports.

People who have been through traditional multi-day project management courses will be shocked, yet refreshed, by the practicality of our approach. To get the most out of this tutorial, participants should have some real-world project or complex task in mind for the lab sections.

This tutorial focuses on complementing your own organizational style (or lack thereof) with a toolbox of ways to organize and manage complex tasks without drowning in paperwork or clumsy, meeting-intensive methodologies. Also emphasized is how to bridge the gap between ad-hoc methods and the kinds of tracking and reporting traditionally trained managers will understand.

Topics include:

  • Quick basics of project management
    • The essentials you need to know
    • How to map the essentials onto real-world projects
  • Skill sets
    • Defining success
    • Chunking and milestoning
    • Delegating
    • Tracking
    • Reporting
  • Problem areas
    • Teams, interactions among people
    • The albatross project
    • When to go deep and when to get "pointy-haired"
    • When disaster strikes, should you scrap, or salvage?
  • Project management tools
    • What tools should do for you
    • Leveraging the command line: UNIX PM
    • Freeware PM tool options
    • The only 15 minutes of MS Project you'll ever need
Strata Rose Chalup (R8, F5) began as a fledgling sysadmin in 1983 and has been leading and managing complex IT projects for many years, serving in roles ranging from Project Manager to Director of Network Operations. She has written a number of articles on management and working with teams and has applied her management skills on various volunteer boards, including BayLISA and SAGE. Strata has a keen interest in network information systems and new publishing technologies and built a successful consulting practice around being an avid early adopter of new tools, starting with ncsa_httpd and C-based CGI libraries in 1993 and moving on to wikis, RSS readers, and blogging. Another MIT dropout, Strata founded VirtualNet Consulting in 1993.

Friday, June 2, 2006
Full-Day Tutorials
F1 Beyond Shell Scripts: 21st-Century Automation Tools and Techniques NEW!
Æleen Frisch, Exponential Consulting

Who should attend: System administrators who want to explore new ways of automating administrative tasks. Shell scripts are appropriate for many jobs, but more complex operations will often benefit from sophisticated tools.

Topics include:

  • Automating installations
    • Vendor-supplied tools
    • Alternative approaches
    • State-of-the-art package control
    • Heterogeneous environments
  • Cfengine
    • Basic and advanced configurations
    • Examples
      • Installations and beyond
      • "Self-healing" system configurations
      • Data collection
      • More
    • Cfengine limitations: when not to use it
  • Other Tools
    • Expect: automating interactive processes
      • What to Expect . . .
      • Using Expect with other tools
      • Security issues
    • Bacula, an enterprise backup management facility
      • Prerequisites
      • Configuration
      • Getting the most from Bacula
    • Nagios: monitoring network and device performance
      • How it works
      • Sample configurations
      • Extending Nagios
    • RRDTool: examining retrospective system data
      • Basic operation
      • Advanced graphing
      • Options for data collection
    • Other tools of interest
      • Webmin
      • Oscar
    Æleen Frisch (R1, F1) has been a system administrator for over 20 years. She currently looks after a pathologically heterogeneous network of UNIX and Windows systems. She is the author of several books, including Essential System Administration (now in its 3rd edition).

     

    F2 Network Security Monitoring with Open Source Tools
    Richard Bejtlich, TaoSecurity.com

    Who should attend: Anyone who wants to know what is happening on their network. I assume command-line knowledge of UNIX and familiarity with TCP/IP. Anyone with duties involving intrusion detection, security analysis, incident response, or network forensics will like this tutorial.

    This tutorial will show that there is more to network security monitoring (NSM) than Snort and Ethereal. In fact, we won't talk about either, unless it's to mention something you might not have seen before! NSM involves collecting the statistical, session, full content, and alert data you need to discover normal, malicious, and suspicious network events. You will leave this tutorial immediately able to implement numerous new techniques and tools. Past participants have discovered intrusions during the class, using concepts learned in a few hours. The instructor bases his teaching on his books, his professional consulting experience, and the latest security research.

    Students with VMware Player installed will be able to follow along with the technique and tool demonstrations, using an NSM VMware image provided by the instructor.

    Topics include:

    • NSM theory
    • Building and deploying NSM sensors
    • Accessing wired and wireless traffic
    • Full content tools: Tcpdump, Ethereal/Tethereal, Snort as packet logger
    • Additional data analysis tools: Tcpreplay, Tcpflow, Ngrep, Netdude
    • Session data tools: Cisco NetFlow, Fprobe, Flow-tools, Argus, SANCP
    • Statistical data tools: Ipcad, Trafshow, Tcpdstat, Cisco accounting records
    • Sguil (sguil.sf.net)
    • Case studies, personal war stories, and attendee participation
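Session data, one of the data types above, is just who talked to whom, when, and how much. A toy sketch of the summarization that tools like Argus and flow-tools perform (the flow records are fabricated):

```python
from collections import Counter

# Fabricated flow records: (src_ip, dst_ip, dst_port, bytes)
flows = [
    ("10.0.0.5", "192.0.2.10", 443, 5200),
    ("10.0.0.5", "192.0.2.10", 443, 4100),
    ("10.0.0.9", "198.51.100.7", 22, 900),
    ("10.0.0.5", "203.0.113.2", 6667, 150),  # IRC from a workstation: worth a look
]

# Total bytes sent per source address
bytes_by_talker = Counter()
for src, dst, port, nbytes in flows:
    bytes_by_talker[src] += nbytes

# Top talkers, descending by volume
for src, total in bytes_by_talker.most_common():
    print(src, total)
```

Real session tools keep timestamps, flags, and both directions of the conversation, but the analytical move is the same: reduce traffic to records you can sort, count, and eyeball for the odd one out.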
    Richard Bejtlich (F2) is founder of TaoSecurity (http://www.taosecurity.com), a company that helps clients detect, contain, and remediate intrusions using network security monitoring (NSM) principles. Richard was previously a principal consultant at Foundstone, performing incident response, emergency NSM, and security research and training. He has created NSM operations for ManTech International Corporation and Ball Aerospace & Technologies Corporation. From 1998 to 2001 then-Captain Bejtlich defended global American information assets in the Air Force Computer Emergency Response Team (AFCERT), performing and supervising the real-time intrusion detection mission. Formally trained as an intelligence officer, Richard is a graduate of Harvard University and the United States Air Force Academy. He authored the critically acclaimed Tao of Network Security Monitoring: Beyond Intrusion Detection in 2004 and Extrusion Detection: Security Monitoring for Internal Intrusions in 2005. Richard co-authored Real Digital Forensics and contributed to Hacking Exposed, 4th Ed., Incident Response, 2nd Ed., and several Sys Admin Magazine articles. He holds the CISSP, CIFI, and CCNA certifications. Richard writes for his Web log (www.taosecurity.blogspot.com) and teaches at USENIX conferences.

    Friday Morning Half-Day Tutorials
    F3 Defense Against the Dark Arts: Repelling the Wily Hacker NEW!
    Bill Cheswick, Lumeta
    9:00 a.m.–12:30 p.m.

    Who should attend: Anyone interested in learning practical security from a person whose job is to think both offensively and defensively. System administrators, security consultants, and software developers will all benefit from this class. This is not a tutorial for novices, but is for people who want to fill out their understanding of the topics with philosophy, examples, and war stories.

    System and network administrators with some experience administering UNIX systems will gain an understanding of and confidence in designing sites that are highly resistant to network attacks.

    Topics include:

    • How to nail down freshly installed UNIX systems
    • How to jail important server and client applications
    • Software safety and resistance against outside attacks
    • Demo of securing and then probing systems with common hacking tools
    • How to secure a community of systems

    Bill Cheswick (R6, F3) logged into his first computer in 1968. Seven years later, in 1975, he graduated from Lehigh University with a degree resembling Computer Science. Cheswick has worked on (and against) operating system security for over 35 years. He has worked at Lehigh University and the Naval Air Development Center in system software and communications. At the American Newspaper Publishers Association/Research Institute he shared his first patent for a hardware-based spelling checker, a device clearly after its time. For several years he consulted at a variety of universities, doing system management, software development, communications design and installation, PC evaluations, etc. In 1998, Ches started the Internet Mapping Project with Hal Burch. This work became the core technology of a Bell Labs spin-off, Lumeta Corporation, which explores the extent of corporate and government intranets and checks for host leaks that violate perimeter policies. Ches has pinged an active-duty US nuclear attack submarine (distance, 66ms). Ches has a wide interest in science and medicine. In his spare time he reads technical journals, hacks on MythTV and his home, and develops exhibit software for science museums. He eats very plain food—boring by even American standards.

    F4 Disk-to-Disk Backup and Eliminating Backup System Bottlenecks NEW!
    Jacob Farmer, Cambridge Computer Corp.
    9:00 a.m.–12:30 p.m.

    Who should attend: System administrators involved in the design and management of backup systems and policymakers responsible for protecting their organization's data. A general familiarity with server and storage hardware is assumed. The class focuses on architectures and core technologies and is relevant regardless of what backup hardware and software you currently use. Students will leave this lecture with immediate ideas for effective, inexpensive improvements to their backup systems.

    The data protection industry is going through a mini renaissance. In the past few years, the cost of disk media has dropped to the point where it is practical to use disk arrays in backup systems, thus minimizing and sometimes eliminating the need for tape. In the first incarnations of disk-to-disk backup (disk staging and virtual tape libraries), disk was used as a direct replacement for tape media. While this compensates for the mechanical shortcomings of tape drives, it fails to address other critical bottlenecks in the backup system, and thus many disk-to-disk backup projects fell short of expectations. Meanwhile, many early adopters of disk-to-disk backup are discovering that the long-term costs of disk staging and virtual tape libraries are prohibitive.

    The good news is that in the next generation, disk-enabled data protection solutions have reached a level of maturity where they can assist—and sometimes even replace—conventional enterprise backup systems. These new D2D solutions leverage the random-access properties of disk devices to use capacity much more efficiently and to obviate many of the hidden backup system bottlenecks which are not addressed by first-generation solutions. The challenge to the backup system architect is to cut through the industry hype, sort out all of these new technologies, and figure out how to integrate them into an existing backup system.

    This tutorial identifies the major bottlenecks in conventional backup systems and explains how to address them. The emphasis is placed on the various roles for inexpensive disk in your data protection strategy; however, attention is given to SAN-enabled backup, the current state and future of tape drives, and iSCSI.

    Topics include:

    • Identifying and eliminating backup system bottlenecks
    • Conventional disk staging
    • Virtual tape libraries
    • Removable disk media
    • Incremental forever and synthetic full backup strategies
    • Block- and object-level incremental backups
    • Information lifecycle management and nearline archiving
    • Data replication
    • CDP (Continuous Data Protection)
    • Snapshots
    • Current and future tape drives
    • Capacity Optimization (Single-Instance File Systems)
    • Minimizing and even eliminating tape drives
    • iSCSI
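The "incremental forever and synthetic full" strategy listed above rests on a simple idea: never take another full backup; instead, overlay the incrementals on the last full, with the newest copy of each file winning. A minimal sketch (file names and versions are fabricated):

```python
def synthesize_full(full, incrementals):
    """Overlay incrementals (oldest first) onto a full backup.

    The newest copy of each file wins, yielding a 'synthetic full'
    without ever re-reading unchanged data from the client.
    """
    synthetic = dict(full)
    for inc in incrementals:
        synthetic.update(inc)
    return synthetic

full = {"a.txt": "v1", "b.txt": "v1"}
incrementals = [
    {"a.txt": "v2"},                  # Monday: a.txt changed
    {"c.txt": "v1", "a.txt": "v3"},   # Tuesday: new file, a.txt changed again
]
print(synthesize_full(full, incrementals))  # {'a.txt': 'v3', 'b.txt': 'v1', 'c.txt': 'v1'}
```

The merge is cheap precisely because disk is random-access; doing the same consolidation on tape would mean streaming every cartridge end to end, which is why this strategy had to wait for disk-enabled backup targets.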

    Jacob Farmer (F4, F7) is a well-known figure in the data storage industry. He has authored numerous papers and articles and is a regular speaker at trade shows and conferences. In addition to his regular expert advice column in the "Reader I/O" section of InfoStor Magazine, the leading trade magazine of the data storage industry, Jacob also serves as the publication's senior technical advisor. Jacob has over 18 years of experience with storage technologies and is the CTO of Cambridge Computer Services, a national integrator of data storage and data protection solutions.

    F5 RSS vs. Information Overload NEW!
    Strata Rose Chalup, Project Management Consultant
    9:00 a.m.–12:30 p.m.

    Who should attend: People who want to manage incoming information streams and go "on beyond Slashdot"; people who never heard of RSS before Microsoft announced it was going to do an embrace/extend/exterminate on it.

    There are so many sources of information out there that keeping up can be a big challenge. Wading through folders of postings to various lists, even quickly scanning the digest version, is fundamentally not scalable. What if I told you there's a tool out there designed for such things, which can publish headlines of articles, aggregate them into a reading interface, and even be used to fetch (or pre-fetch) the content?

    Better yet, these tools are a natural fit for managing some kinds of system information. And, like any hammer, RSS and its cohorts will undoubtedly be used to pound on things that were never nails. Look at the uses the Web is put to nowadays simply because it is a robust, simple, well-defined protocol, although it was never intended or designed for them. RSS is in the same boat.

    After completing this tutorial, participants will have an understanding of how to harness RSS feeds for information management, the tradeoffs among various publishing methods, and the toolkits available for working with RSS. We'll discuss methods whereby RSS can augment traditional system logging tools such as syslog and swatch, as well as hook into conventional distribution tools such as mailman and majordomo. Class materials will include pointers to RSS clients for a wide range of platforms.
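Under the hood, an RSS 2.0 feed is plain XML, so pulling headlines out of one needs nothing beyond a standard XML parser. A minimal sketch in Python (the feed content is fabricated):

```python
import xml.etree.ElementTree as ET

FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Sysadmin Feed</title>
    <item><title>Disk /var at 90% on web01</title><link>http://example.com/1</link></item>
    <item><title>Nightly backup completed</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def headlines(feed_xml):
    """Return the item titles from an RSS 2.0 document."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(headlines(FEED))  # ['Disk /var at 90% on web01', 'Nightly backup completed']
```

Point a generator like this at syslog or MRTG output and an ordinary aggregation client becomes a lightweight monitoring console, which is the "applied RSS" angle covered below.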

    Topics include:

    • RSS basics
      • Origins and standards
      • Growing pains: Tim, Dave, and a cast of hundreds
      • RSS 2.0: a new beginning?
    • RSS in context
      • XML, DHTML, and RSS
      • Where does Tibco fit in?
      • The mod_pubsub model
    • Weed 'n' feed
      • Publishing basics
      • Reputation communities (Syndic8 et al.)
      • Atom: RSS on steroids, or annoyance?
      • Bonus: what's this "tagging" stuff, and do I give a damn?
    • Getting the goodies
      • Aggregation clients
      • Pre-fetch or post-fetch?
      • Archiving feeds
    • If I had a hammer . . .
      • Toolkits and libraries
      • Server-side fun for everyone
      • Client building blocks
    • Applied RSS
      • syslog and MRTG: the low-hanging fruit
      • Filtering and tagging
      • Bugzilla and Wiki hooks
      • Augmenting ticket systems
    • Next generation
      • Proposed RSS extensions
      • Microsoft gets on the bandwagon
    • Malice aforethought
      • Scaling aspects to consider
      • The coming deluge: spamvertising via RSS
      • Security caveats

    Strata Rose Chalup (R8, F5) began as a fledgling sysadmin in 1983 and has been leading and managing complex IT projects for many years, serving in roles ranging from Project Manager to Director of Network Operations. She has written a number of articles on management and working with teams and has applied her management skills on various volunteer boards, including BayLISA and SAGE. Strata has a keen interest in network information systems and new publishing technologies and built a successful consulting practice around being an avid early adopter of new tools, starting with ncsa_httpd and C-based CGI libraries in 1993 and moving on to wikis, RSS readers, and blogging. Another MIT dropout, Strata founded VirtualNet Consulting in 1993.

    Friday Afternoon Half-Day Tutorials
    F6 Databases: What You Need to Know NEW!
    John Sellens, SYONEX
    1:30 p.m.–5:00 p.m.

    Who should attend: System and application administrators who need to support databases and database-backed applications.

    Databases used to run almost exclusively on dedicated database servers, with one or more database administrators (DBAs) dedicated to their care. These days, with the easy availability of database software such as MySQL and PostgreSQL, databases are popping up in many more places, and are used by many more applications.

    As a system administrator, you need to understand databases and their care and feeding.

    Attendees will leave the tutorial with a better understanding of databases and their use and will be ready to deploy and support common database software and database-backed applications.

    Topics include:

    • An introduction to database concepts
    • The basics of SQL (Structured Query Language)
    • Common applications of databases
    • Berkeley DB and its applications
    • MySQL installation, configuration, and management
    • PostgreSQL installation, configuration, and management
    • Security, user management, and access controls
    • Ad-hoc queries with standard interfaces
    • ODBC and other access methods
    • Database access from other tools (Perl, PHP, sqsh, etc.)
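The SQL basics can be tried without installing a database server at all: SQLite is embedded in most scripting languages. A small sketch in Python (the table and rows are fabricated):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE hosts (name TEXT, os TEXT, ram_gb INTEGER)")
conn.executemany(
    "INSERT INTO hosts VALUES (?, ?, ?)",
    [("web01", "Linux", 4), ("db01", "Solaris", 16), ("file01", "Linux", 8)],
)

# An ad-hoc query: Linux hosts, largest first
rows = conn.execute(
    "SELECT name, ram_gb FROM hosts WHERE os = ? ORDER BY ram_gb DESC", ("Linux",)
).fetchall()
print(rows)  # [('file01', 8), ('web01', 4)]
```

The same CREATE/INSERT/SELECT vocabulary carries over directly to MySQL and PostgreSQL; what changes is the connection setup, user management, and the operational care and feeding this tutorial covers.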

    John Sellens (F6, A3) has been involved in system and network administration since 1986 and is the author of several related USENIX papers, a number of ;login: articles, and the SAGE Short Topics in System Administration booklet #7, System and Network Administration for Higher Reliability. He holds an M.Math. in computer science from the University of Waterloo and is a chartered accountant. He is the proprietor of SYONEX, a systems and networks consultancy. From 1999 to 2004, he was the General Manager for Certainty Solutions in Toronto. Prior to joining Certainty, John was the Director of Network Engineering at UUNET Canada and was a staff member in computing and information technology at the University of Waterloo for 11 years.

    F7 Next Generation Storage Networking NEW!
    Jacob Farmer, Cambridge Computer Services
    1:30 p.m.–5:00 p.m.

    Who should attend: Sysadmins running day-to-day operations and those who set or enforce budgets. This tutorial is technical in nature, but it does not address command-line syntax or the operation of specific products or technologies. Rather, the focus is on general architectures and various approaches to scaling in both performance and capacity. Since storage networking technologies tend to be costly, there is some discussion of the relative cost of different technologies and of strategies for managing cost and achieving results on a limited budget.

    There has been tremendous innovation in the data storage industry over the past few years. Proprietary, monolithic SAN and NAS solutions are beginning to give way to open-system solutions and distributed architectures. Traditional storage interfaces such as parallel SCSI and Fibre Channel are being challenged by iSCSI (SCSI over TCP/IP), SATA (serial ATA), SAS (serial attached SCSI), and even Infiniband. New filesystem designs and alternatives to NFS and CIFS are enabling high-performance file sharing measured in gigabytes (yes, "bytes," not "bits") per second. New spindle management techniques are enabling higher-performance and lower-cost disk storage. Meanwhile, a whole new set of efficiency technologies is allowing storage protocols to flow over the WAN with unprecedented performance. This tutorial is a survey of the latest storage networking technologies, with commentary on where and when these technologies are most suitably deployed.

    Topics include:

    • Fundamentals of storage virtualization: the storage I/O path
    • Shortcomings of conventional SAN and NAS architectures
    • In-band and out-of-band virtualization architectures
    • The latest storage interfaces: SATA (serial ATA), SAS (serial attached SCSI), 4Gb Fibre Channel, Infiniband, iSCSI
    • Content-Addressable Storage (CAS)
    • Information Life Cycle Management (ILM) and Hierarchical Storage Management (HSM)
    • The convergence of SAN and NAS
    • High-performance file sharing
    • Parallel file systems
    • SAN-enabled file systems
    • Wide-area file systems (WAFS)

    Jacob Farmer (F4, F7) is a well-known figure in the data storage industry. He has authored numerous papers and articles and is a regular speaker at trade shows and conferences. In addition to his regular expert advice column in the "Reader I/O" section of InfoStor Magazine, the leading trade magazine of the data storage industry, Jacob also serves as the publication's senior technical advisor. Jacob has over 18 years of experience with storage technologies and is the CTO of Cambridge Computer Services, a national integrator of data storage and data protection solutions.

    F8 Time Management: Getting It All Done and Not Going (More) Crazy!
    1:30 p.m.–5:00 p.m.
    Tom Limoncelli, Google

    Who should attend: IT people, sysadmins, and other busy people who want to improve their time-management skills, gain more control over their time, and follow through better on assignments. If you feel overloaded, miss appointments, or forget deadlines and tasks, this class is for you.

    Do any of these statements sound like you?

    • I don't have enough time to get all my work done.
    • I don't have control over my schedule.
    • I'm spending all my time mopping the floor; I don't have time to fix the leaking pipe.
    • My boss says I don't work hard enough, but I'm always working my —— off!

    Based on a new book from O'Reilly, this tutorial will help you get more done in less time. You'll miss fewer deadlines, be more relaxed at work, and have more fun in your social life. If you think you don't have time to take this tutorial, you really need to take this tutorial!

    Topics include:

    • Why typical "time management" books don't work for sysadmins
    • How to delegate tasks effectively
    • A way to keep from ever forgetting a user's request
    • Why "to do" lists fail and how to make them work
    • Prioritizing tasks so that users think you're a genius
    • Getting more out of your Palm Pilot
    • Having more time for fun (for people with a social life)
    • How to leave the office every day with a smile on your face

    Tom Limoncelli (F8), author of O'Reilly's Time Management for System Administrators and co-author of The Practice of System and Network Administration from Addison-Wesley, is Director of IT Services at Cibernet Corp. A sysadmin and network wonk since 1987, he has worked at Dean for America, Lumeta, Bell Labs/Lucent, Mentor Graphics, and Drew University. He is a frequent presenter at LISA conferences.
     

    Saturday, June 3, 2006
    Full-Day Tutorials
    A1 Learning PHP 5 NEW!
    David Sklar, Ning Inc.

    Who should attend: Programmers and system administrators who want to use PHP to build database-backed Web applications. Students in the tutorial should be handy with HTML, familiar with programming in another language such as Perl, C, or Python, and not total strangers to SQL and relational databases.

    Instead of a dry walkthrough of keywords and syntax, this tutorial presents an applied introduction to PHP 5 by building a complete database-backed Web application: a social bookmarks manager that can be used by a community to tag and share interesting URLs.

    Code examples in the tutorial highlight new capabilities of PHP 5, such as the improved object-oriented programming model, iterators, overloading, and XML processing.

    Topics include:

    • Displaying and processing HTML forms
    • Talking to a database
    • Identifying users with sessions and cookies
    • Securing your code against common hacker attacks
    • Manipulating numbers, strings, and arrays
    • Debugging and troubleshooting problems in your code
    • Cleaning up your URL structure
    • Using PEAR modules such as HTML_QuickForm and Mail_mime
    • Generating dynamic images with GD
    • Caching and tuning your application's performance
    • Interfacing PHP with client-side Javascript
    • Working with the Zend Framework

    David Sklar (A1) is a Software Architect at Ning. He is also the author of Learning PHP 5 (O'Reilly), Essential PHP Tools (Apress), and PHP Cookbook (O'Reilly). After discovering PHP as a solution to his Web programming needs in 1996, he created the PX, which enables PHP users to exchange programs. Since then, he has continued to rely on PHP for personal and professional projects. When away from the computer, Sklar eats mini-donuts, plays records, and likes to cook. He lives in New York City and has a degree in computer science from Yale University.

    A2 Solaris 10 Administration Workshop NEW!
    Peter Baer Galvin, Corporate Technologies, Inc.

    Who should attend: Solaris system managers and administrators interested in learning the new administration features in Solaris 10 (and features in previous Solaris releases that they may not be using).

    This tutorial covers a variety of topics concerning Solaris 10. Solaris 10 includes many new features, and there are new issues to consider when deploying, implementing, and managing it. The session is a workshop, combining instruction with hands-on practice and exploration. Each student should bring a laptop with wireless capability for remote access to a provided Solaris 10 machine.

    Note that, except for a few instances, Solaris 10 security is not covered in this workshop.

    Topics include:

    • Overview
    • Solaris releases (official, Solaris Express, OpenSolaris, others)
    • Installing and upgrading to Solaris 10
      • Planning your installation, filesystem layout, post-installation steps
      • Installing (and removing) patches and packages
    • Advanced features of Solaris 10
      • Flash archives and live upgrade
    • Patching
    • Service Management Facility (lab)
    • The kernel
      • Update
      • /etc/system
    • Crash and core dumps
      • Management and analysis
    • Cool commands
    • ZFS (lab)
    • N1 Grid Containers (a.k.a. Zones) (lab)
      • Installation
      • Management
      • Resource management
    • DTrace
    • FMA
    • Performance
      • Commands
      • CacheFS
    • Networking
      • IP multipathing
    • Sysadmin best practices

    Peter Baer Galvin (R2, A2) is the Chief Technologist for Corporate Technologies, Inc., a systems integrator and VAR, and was the Systems Manager for Brown University's Computer Science Department. He has written articles for Byte and other magazines. He wrote the "Pete's Wicked World" and "Pete's Super Systems" columns at SunWorld. He is currently a contributing editor for Sys Admin, where he manages the Solaris Corner. Peter is co-author of the Operating Systems Concepts and Applied Operating Systems Concepts textbooks. As a consultant and trainer, Peter has taught tutorials on security and system administration and has given talks at many conferences and institutions on such topics as Web services, performance tuning, and high availability.

    A3 System and Network Monitoring: Tools in Depth
    John Sellens, SYONEX

    Who should attend: Network and system administrators ready to implement comprehensive monitoring of their systems and networks using the best of the freely available tools. Participants should have an understanding of the fundamentals of networking, familiarity with computing and network components, UNIX system administration experience, and some understanding of UNIX programming and scripting languages.

    This tutorial will provide in-depth instruction in the installation and configuration of some of the most popular and effective system and network monitoring tools, including Nagios, Cricket, MRTG, and Orca.

    Participants should expect to leave the tutorial with the information needed to immediately implement, extend, and manage popular monitoring tools on their systems and networks.

    Topics include, for each of Nagios, Cricket, MRTG, and Orca:

    • Installation—Basic steps, prerequisites, common problems, and solutions
    • Configuration, setup options, and how to manage larger and non-trivial configurations
    • Reporting and notifications—proactive and reactive
    • Special cases—how to deal with interesting problems
    • Extending the tools—how to write scripts or programs to extend the functionality of the basic package
    • Dealing effectively with network boundaries and remote sites
    • Security concerns and access control
    • Ongoing operation
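As a hypothetical illustration of the "extending the tools" topic above: a Nagios plugin is simply a program that prints a one-line status and exits with code 0 (OK), 1 (WARNING), 2 (CRITICAL), or 3 (UNKNOWN). The check below, including its name and thresholds, is invented for this sketch:

```python
# Sketch of a custom Nagios check: report free disk space on a path.
# Nagios reads the first output line and the exit status; the exit
# status determines whether a notification is sent.
import os

# Standard Nagios plugin exit codes
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def check_disk(path, warn_pct=20, crit_pct=10):
    """Return (exit_code, status_line) based on free-space percentage."""
    st = os.statvfs(path)
    free_pct = 100 * st.f_bavail / st.f_blocks
    if free_pct < crit_pct:
        return CRITICAL, "DISK CRITICAL - %.0f%% free on %s" % (free_pct, path)
    if free_pct < warn_pct:
        return WARNING, "DISK WARNING - %.0f%% free on %s" % (free_pct, path)
    return OK, "DISK OK - %.0f%% free on %s" % (free_pct, path)

code, line = check_disk("/")
print(line)
# A real plugin would finish with sys.exit(code) so Nagios sees the status.
```

The same shape works for any of the tools' extension points: a small script that gathers one measurement and reports it in the format the monitoring package expects.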

    John Sellens (F6, A3) has been involved in system and network administration since 1986 and is the author of several related USENIX papers, a number of ;login: articles, and the SAGE Short Topics in System Administration booklet #7, System and Network Administration for Higher Reliability. He holds an M.Math. in computer science from the University of Waterloo and is a chartered accountant. He is the proprietor of SYONEX, a systems and networks consultancy. From 1999 to 2004, he was the General Manager for Certainty Solutions in Toronto. Prior to joining Certainty, John was the Director of Network Engineering at UUNET Canada and was a staff member in computing and information technology at the University of Waterloo for 11 years.

    A4 Measuring Security NEW!
    Dan Geer, Geer Risk Services

    Who should attend: Operations and security managers who need to design or interpret a metric structure for security risk management.

    "You cannot manage what you cannot measure": every business school says this, so it must be true. "Cyber security is about risk management": almost everyone believes this, and for good reason. The sum of the two says that with respect to computer-related security we are hosed if we don't get on the ball and design some decent security metrics. So far, so good, but what in tarnation is a decent security metric? "Ay, there's the rub," as Hamlet would say. This tutorial makes a healthy stab in the direction of security metrics and hopes that its students soon surpass their teacher, which may not be all that hard, as security metrics design is somewhere between infancy and toddlerhood.

    Topics include:

    • Where You Stand Depends on Where You Sit: What management texts/schools mean when they say, "Measure what you manage"
    • Good Artists Create, Great Artists Steal: Styles and methods of measurements used in other fields that are applicable to security risk, and how to steal them
    • Modeling: Is there any point in lifecycle or other models of how security works; is there any unifying abstraction worth using?
    • Large Numbers: The state of the world and how to compare yourself to it
    • Information Sharing: Data fusion is dangerously powerful but essential (with a sidebar on de-identification as a pre-sharing safety mechanism)
    • Where to Begin: How to roll your own, and a few pitfalls to avoid, assuming that decision support is your real deliverable
    • How to Communicate What You Find: Being simple without being simplistic

    Topics do not include:

    • Secure coding standards, disaster recovery planning, firewall log analysis, or anything else that is already a solved problem or a side effect of low/no discipline

    Dan Geer (A4)—Milestones: The X Window System and Kerberos (1988), the first information security consulting firm on Wall Street (1992), convenor of the first academic conference on electronic commerce (1995), the "Risk Management Is Where the Money Is" speech that changed the focus of security (1998), the presidency of the USENIX Association (2000), the first call for the eclipse of authentication by accountability (2002), principal author of and spokesman for Cyberinsecurity: The Cost of Monopoly (2003), and co-founder of SecurityMetrics.Org (2004).

    A7 Inside the Linux Kernel (Updated for Version 2.6)
    Theodore Ts'o, IBM

    Who should attend: Application programmers and kernel developers. You should be reasonably familiar with C programming in the UNIX environment, but no prior experience with the UNIX or Linux kernel code is assumed.

    This tutorial will give you an introduction to the structure of the Linux kernel, the basic features it provides, and the most important algorithms it employs.

    The Linux kernel aims to achieve conformance with existing standards and compatibility with existing operating systems; however, it is not a reworking of existing UNIX kernel code. The Linux kernel was written from scratch to provide both standard and novel features, and it takes advantage of the best practices of existing UNIX kernel designs.

    Although the material will focus on the latest release version of the Linux kernel (v. 2.6), it will also address aspects of the development kernel codebase (v. 2.7) where its substance differs from 2.6. It will not contain any detailed examination of the source code but will, rather, offer an overview and roadmap of the kernel's design and functionality.

    Topics include:

    • How the kernel is organized (scheduler, virtual memory system, filesystem layers, device driver layers, networking stacks)
      • The interface between each module and the rest of the kernel
      • Kernel support functions and algorithms used by each module
      • How modules provide for multiple implementations of similar functionality
    • Ground rules of kernel programming (races, deadlock conditions)
    • Implementation and properties of the most important algorithms
      • Portability
      • Performance
      • Functionality
    • Comparison between Linux and UNIX kernels, with emphasis on differences in algorithms
    • Details of the Linux scheduler
      • Its VM system
      • The ext2fs filesystem
    • The requirements for portability between architectures

    Theodore Ts'o (A7) has been a Linux kernel developer since almost the very beginnings of Linux: he implemented POSIX job control in the 0.10 Linux kernel. He is the maintainer and author of the Linux COM serial port driver and the Comtrol Rocketport driver, and he architected and implemented Linux's tty layer. Outside of the kernel, he is the maintainer of the e2fsck filesystem consistency checker. Ted is currently employed by the IBM Linux Technology Center.

Need help? Use our Contacts page.

Last changed: 3 May 2006 ch