

Large-Scale System Administration of Windows NT Workshop

SEATTLE, WA

August 14-16, 1997


KEYNOTE ADDRESS

From UNIX into Windows NT ­ A Long Day's Journey Into Night

David Korn, AT&T Laboratories


Summary by Gus Hartmann

David Korn is well known for his work on the Korn shell. Less well known is his work to make porting code between UNIX and Windows NT easier for programmers. By his own admission, he has never administered a computer system, so his dealings with Windows NT have been from a programming perspective. Ideally, he would like to see code ported from Windows NT to UNIX and vice versa based on the needs of the program, and to that end, he has been working to increase the ease of migrating code. [Editor's note: Korn has his own article on page 28 in this issue.]

Before discussing Windows NT, Korn gave a brief overview of his computer experience in terms of platforms and programming languages. His list of platforms was divided into two sections; the first ranged from an IBM 650 and machine language to a VAX 780 running UNIX, and the second from Apollo Aegis and C to MVS Open Edition and ANSI C.

As a brief overview of Windows NT as a programming platform, Korn gave lists of the good and the bad qualities of Windows NT.

The good:

  • handles and file handle duplication
  • model
  • interface
  • order for DLLs
  • Compiler
  • performance

The bad:

  • complexity
  • programming interface
  • of a shell
  • documentation
  • model
  • scheduling
  • mentality
  • registry
  • UNICODE
  • reboots

Korn debunked several common myths about Windows NT during his address; he has investigated each and found it to be without basis in fact. Among these were the following:

  • software works on all the Windows platforms.
  • Everyone uses Windows.
  • is less expensive to purchase initially.
  • UNIX is going to die.
  • runs faster than Windows.
  • experts are plentiful.

TECHNICAL SESSION

Administration I ­ Enterprise Management


Summary by Steve Hillman

Three papers focusing on various aspects of administering a truly large-scale NT installation were presented in this session.

The first paper, "Domain Engineering for Mission Critical Environments," was presented by Chris Rouland of Lehman Brothers. Lehman Brothers is a large investment firm with offices worldwide and about 8,500 employees. It has roughly 3,200 SUNs, 7,000 PCs, and 400 NT and Novell servers. It runs both UNIX and NT servers, and still has some legacy Novell servers running.

According to Rouland, when setting up domains for NT servers, there are two models you can follow, and which one you choose is dictated primarily by the size of your network. A single-master domain model works in smaller environments. Lehman Brothers uses a multiple-master domain model, with a total of three masters - one in North America, one in Europe, and one in Asia. Multiple-master models have higher resource requirements, but function better in very large organizations.

Lehman Brothers found that it was good to throw money at problems. With lots of servers, it was able to separate functions onto different servers. This also made it easier to delegate authority for managing services. Sysadmins would be given access only to a server that ran a service they were responsible for.

Rouland stressed the need for highly available Domain Controllers. All of Lehman Brothers' DCs run on Compaq servers with RAID arrays, ECC DRAM, failover CPUs, etc.

One interesting point that Rouland raised: consider a service pack update to be an OS upgrade. Most SPs make changes to the kernel files and hence should be treated as OS upgrades, rather than simply as patches. Lehman Brothers found that its Domain Controllers are not happy running at different SP levels, so SP and hotfix updates must be synchronized across the DCs as much as possible.

Rouland also talked about namespace. There are four primary methods of managing names under NT:

  • WINS
  • LMHOSTS
  • DNS
  • Lanman Browsing

Lanman Browsing is broadcast-based and, in large networks, does not propagate well and gets confused. Microsoft DNS "works." The only realistic choice in a large NT installation is WINS; however, it still has its problems. One of these is that it has no decent administrative interface. Lehman Brothers found only one command-line interface into the WINS database, and it was a poor one. People there are working on their own WINS service based on the Samba code. [Editor's note: see Chris Hertel's article on page 23 about the University of Minnesota's extensions to Samba and WINS.]

The first question of the session was how Lehman Brothers handles account synchronization between UNIX and NT. Rouland responded that, currently, it doesn't. Accounts on each system are given the same username, but no password synchronization occurs. They're looking at a Kerberos solution in the future. As far as file space goes, people at Lehman Brothers have set up an NFS server on some of their NT servers and provided limited cross-platform support that way.

Lyle Meier of Texaco detailed a problem he was having with users in the US getting authenticated across a slow link from a PDC in Kuwait. He wanted to know whether Lehman Brothers had experienced similar problems. Rouland responded that it had. The short-term solution was to turn off net logon at the remote site. The long-term solution that they hope to implement is a rewritten WINS service that keeps track of hop counts for Domain Controllers and authenticates to the "closest" one, rather than the first one to answer.

Rouland had mentioned earlier that the Microsoft DNS server stored all of its information in the registry, preventing you from finding any named.boot or similar files. Eric Pearce of O'Reilly commented that, in his experience, the MS DNS could be set up to use files. According to documentation I've read, the MS DNS server can be set up to use files, but when doing so, the DNS Manager cannot be used; and if you switch to using the registry, you can't switch back.

Jeff Hoover of Cisco asked how Lehman Brothers intends to manage its browsing environment. Rouland said that people there are still trying to figure out how to deal with browsing issues in general ­ everything from turning it off completely, to using Samba, to turning it off on enough servers that they know which servers will act as browse masters.

The second paper, "In Search of a Complete and Scalable Systems Administration Suite," presented by Joop Verdoes of Shell International Exploration and Production, detailed his frustration in trying to find a suite that really managed SIEP's machines. This paper wasn't about NT management specifically; in fact, most of the examples given were UNIX-based. Unfortunately, SIEP has not met with much success so far.

Verdoes summarized many of the inadequacies he'd found in the management suites he'd dealt with and then listed features he believes a good suite should have. They include:

  • managing, and distributing config files ­ not just the basic config files, but all necessary config files
  • configuration management - a mechanism for obsoleting a workstation's old config files at boot time and automatically fetching the current ones, falling back to the obsoleted ones if the fetch fails (a rough sketch of this follows the list)
  • logging
  • version control ­ simple backout to a previous OS version if the current version has problems
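
Of these items, the config-refresh mechanism lends itself to a concrete illustration. Below is a minimal sketch, in Python, of the fetch-then-fall-back behavior Verdoes asks for; the server URL, directories, and file names are hypothetical, and a real tool would also need versioning, logging, and atomic replacement.

    import shutil
    import urllib.request
    from pathlib import Path

    CONFIG_SERVER = "http://config.example.com/configs"   # hypothetical distribution point
    CONFIG_DIR = Path("/etc/managed")
    OBSOLETE_DIR = Path("/etc/managed.old")

    def refresh_config(name):
        """At boot: set aside the old config, fetch the current one,
        and fall back to the old copy if the fetch fails."""
        current = CONFIG_DIR / name
        obsolete = OBSOLETE_DIR / name
        OBSOLETE_DIR.mkdir(parents=True, exist_ok=True)
        if current.exists():
            shutil.copy2(current, obsolete)        # obsolete the old version, but keep it
        try:
            with urllib.request.urlopen(f"{CONFIG_SERVER}/{name}", timeout=30) as resp:
                current.write_bytes(resp.read())   # install the freshly fetched config
        except OSError:
            if obsolete.exists():
                shutil.copy2(obsolete, current)    # fetch failed: restore the obsoleted copy

    for cfg in ("resolv.conf", "ntp.conf"):
        refresh_config(cfg)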

On the bright side, Verdoes provided some good lines for the conference. Among them were "Have you ever heard of drag-and-undrop?", referring to his frustration with the lack of an audit trail in most system management suites, and "Iconized management is a fake!", referring to windows full of several hundred icons, all of which look exactly the same.

The final paper of this session, "Large Scale NT Operations Management," was presented by Steven Hearn of Westpac Bank in Australia. Westpac has about 33,000 employees and 10,000 PCs spread out across 1,100 bank branches. Each branch runs its own NT 3.51 server, and each server speaks IP over 9600 bps modem links back to the headquarters. There's no NT expertise at each branch, so all server management must be done remotely over the 9600 bps links.

Because there's very limited bandwidth to each branch and limited staff at the operations center, Westpac has been focusing on trying to automate as much as possible. It's also been adding more applications at the branches, increasing complexity.

Hearn summarized some of the key challenges that his operations center is facing:

  • (NT automation, capacity and performance reviews, and coordination of large data transfers to all remote sites overnight)
  • (coordinating data at remote branches)
  • centralized security and account management (Westpac currently uses a separate domain for every branch.)
  • the NT footprint (With every service and management tool added to each server, the resource requirements grow. With over 1,000 servers in place all over Australia, constant hardware upgrading is not an option.)
  • change management and data currency (By "change management," Hearn is referring to some sort of audit trail that allows config changes to be tracked.)

Some of these challenges are already being worked on. Hearn's group is now using the User Manager, Print Manager, and Server Manager as remote management tools to manage their branch servers centrally. This provides a very basic level of management. They're presently working on the security of this ­ tightening up controls for who can remotely connect and what they can do.

For NT event logging, they're using SNA Server and passing events into NetMaster for a centralized view of NT alerts. They've had this system in place for nearly two years now, and it has helped them learn a lot about the volume of NT events that can be expected and what events should be passed along to their help desks for immediate action.

Westpac is currently using FTP for remote data distribution. Each night, the NT servers connect to a central mainframe via FTP to pick up updates. They're currently able to disseminate up to 2Mb to all 1,000 branches each night.
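
The paper does not spell out the transfer mechanics; the following is a minimal sketch of such a nightly FTP pull using Python's ftplib. The host name, credentials, file name, and the simple size check standing in for Westpac's validation process are all assumptions.

    import ftplib
    from pathlib import Path

    MAINFRAME = "updates.example.com"            # hypothetical central host
    UPDATE_FILE = "branch_update.pkg"            # hypothetical nightly update package
    DEST = Path(r"C:\updates") / UPDATE_FILE

    def fetch_nightly_update():
        """Pull tonight's update from the central host and confirm it arrived intact."""
        with ftplib.FTP(MAINFRAME) as ftp:
            ftp.login("branch", "secret")        # hypothetical credentials
            ftp.voidcmd("TYPE I")                # binary mode, so SIZE reports byte counts
            expected = ftp.size(UPDATE_FILE)
            with open(DEST, "wb") as out:
                ftp.retrbinary(f"RETR {UPDATE_FILE}", out.write)
        ok = expected is not None and DEST.stat().st_size == expected
        if not ok:
            DEST.unlink(missing_ok=True)         # discard a short or failed transfer
        return ok

    if __name__ == "__main__":
        print("update ok" if fetch_nightly_update() else "update failed")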

The operations center is now reviewing several performance-monitoring software packages and working on in-house automation software that will alert the operations center when the automation fails. Their help desk training is also being improved to provide more problem resolution on the "front lines."

Questions started with Rob Wilkins of Dupont asking Hearn how he managed to remote boot his clients over their 9600 bps lines. Hearn explained that all booting happens within the branch. The PCs are diskless and connect via 10BaseT to the NT server at the branch. They get their disk images from the local server.

Till Poser of DESY asked why they chose to have a separate domain at each branch and how their centralized account management works with that many domains. According to Hearn, their remote-boot architecture required them to have separate domains at each location. They currently have no centralized accounts ­ accounts exist only at each branch, and when the servers are being managed from the operations center, the administrators must log in to each server.

Derek Simmel of CERT wanted to know what measures had been put in place to protect the confidentiality and integrity of data. Hearn answered that, for the most part, he couldn't say much under confidentiality agreements with the bank, but he did say that their remote FTP transfers go through a validation process to make sure the remote server is a valid server and the data have arrived intact.

TECHNICAL SESSION

Administration II ­ Integration


Summary by Steve Hillman

This session began with a paper entitled "Integrating Windows 95/NT into an existing UNIX Enterprise Network," presented by Cam Luerkens of Rockwell Collins Cedar Rapids Division.

In 1996, Rockwell Collins Cedar Rapids had roughly 25 UNIX-based application servers, 400 UNIX workstations, and 3,500 PCs running Windows 3.1 and PC-NFS. When it came time to transition to Windows 95 or NT on the desktop, management decided that the UNIX-based servers should stay on and that some form of PC-NFS client should continue to be used unless another solution could be proven to be better.

The transition team had several problems to consider:

  • PC-NFS 5.0 was a DOS-based NFS client. A new solution for file sharing would need to be found.
  • Authentication was strictly NIS based before; now NT domains were potentially being added.
  • Printing

To solve the first problem, eight NFS and two SMB products were evaluated. Based on evaluation in several key areas, Hummingbird's Maestro NFS client narrowly proved to be the best product and was adopted sitewide.

Authentication was handled by having the PCs authenticate to an NT domain. Initially, the NT and UNIX passwords were just set to the same thing and then given to the users. Hummingbird then agreed to write password synchronization routines to sync 95 and NT passwords to NIS. The NT domain routines were added last April, and the 95 routines were finished just before this conference and are currently being tested.

To handle printing, the NT workstations were set up to print to NT servers. Because NT connection licenses weren't purchased for them, the Windows 95 machines were set up to print to UNIX via Maestro.

A Perl login script is run at the Win95/NT workstation whenever a user logs in. This handles setting up the user's environment, ensuring that, regardless of where the user logs in, the environment will be the same. Additionally, a Rockwell programs group was added to the Start menu. Each icon in the group actually points to a Perl script that sets up the environment before running the application ­ if the application has never been run on the workstation before, appropriate installation steps are taken automatically.
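
Rockwell's wrappers were written in Perl; purely to illustrate the install-on-first-use idea behind each Start menu icon, here is a rough Python sketch. The application name, paths, and installer command line are made up.

    import subprocess
    from pathlib import Path

    APP_EXE = Path(r"C:\Apps\acme-cad\acme.exe")                        # hypothetical install location
    INSTALLER = [r"\\appserver\install\acme-cad\setup.exe", "/quiet"]   # hypothetical network installer

    def ensure_installed():
        """If the application has never been run on this workstation, install it first."""
        if not APP_EXE.exists():
            subprocess.run(INSTALLER, check=True)

    def launch():
        ensure_installed()
        # Per-user environment setup (drive mappings, environment variables, ...) would also go here.
        subprocess.Popen([str(APP_EXE)])

    if __name__ == "__main__":
        launch()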

When implementing the rollout, the transition team found that the first 300 users went without any major problems. When they hit 300-500 users, they started having problems with file locking on the NFS servers when running the Perl login scripts. This turned out to be a Maestro issue and was eventually resolved. At 700-800 users, they had problems with load: all users were running their Perl scripts from the same server. Using a combination of NIS tables and more Perl, they were able to load balance the clients across several servers based on what subnet they were connecting from.

The question period began with Rob Elkins of Dupont asking why Samba was not used. Luerkens answered that management required that any product they use have available support they can buy. The transition team would also have had to show that Samba was significantly better than an NFS solution, because NFS was already in place and working well.

Steven Hearn of Westpac Bank asked about virus scanning. Luerkens said that the PCs use realtime virus scanning to block most viruses, including macro-viruses. Rockwell is also running a version of McAfee's antivirus software on its UNIX servers. This gets run each night and automatically generates email notification to any users found to have a virus within a file in their file space.

Ian Alderman from Cornell University asked for some more information on Maestro. According to Luerkens, Maestro consists of two parts ­ the client, which runs on NT and 95, and the NFS daemon, which runs on the UNIX servers. The daemon ties into NIS for authentication. The Win95 password-changing program is currently being tested. It will allow a password changed on a Win95 client (using the standard Win95 password-changing interface) to be changed in both the NT Domain Controller and the UNIX NIS maps.

Jim Dennis wondered why file locking was an issue if the Perl scripts were mounted on a read-only volume. As was mentioned earlier, this was a bug with Maestro ­ the NFS daemon was locking files on reads as well as writes.

The second half of the technical session consisted of a panel discussion entitled "Management Integration/Politics." The panel moderator was Boris Netis. Panel members were Paul Evans and Phil Scarr of Synopsys and Chris Rouland of Lehman Brothers. This panel focused on the politics of bringing Windows NT into an organization.

In the case of Synopsys, a vocal group of senior VPs has been pushing for several years to have the largely Mac-based company switched to PCs. When Win95 was released, the sysadmins were able to head off this push, but when Apple's stock started to slip in early 1996, and with the pending release of NT 4.0, the company started planning the switch (much to the protest of the Mac support staff!). In the early stages of planning, the CEO asked why they couldn't just come in one weekend and replace all of the Macs with PCs running NT 4.0. The MIS director nearly leapt across the table and throttled him.

At Lehman Brothers, NT crept in through the back door. It started out as a pilot project. The first business unit to get NT on the desktop was the brokerages. This was in part due to the relatively simple requirements of the brokerage users. From there, they started planning how they would introduce it into the other business units. At the same time, Microsoft came up with a true 32-bit version of Office. One of the driving forces behind the NT rollout was the richness of the applications available for it. With the release of a 32-bit Office, the drive to get NT onto every desktop increased. The behind-the-scenes database servers that keep everything running are still largely UNIX based.

Evans touched upon the politics involved in replacing users' desktop environments. Users may tend to hold you responsible even if it wasn't your decision to replace their environment. Some thought needs to be given to how the changes will affect the user.

Joop Verdoes asked why it was necessary to pick a single platform for the entire company instead of giving the users what they want. The consensus from the panel was that most companies can't afford to support this. At Synopsys, it was a requirement that the support staff know exactly what was on everyone's desk to ensure that support was actually feasible. Rouland explained that at Lehman Brothers, each business unit is responsible for choosing its desktop. They don't decide technical details, but they decide whether to stick with their legacy Win3.1/Netware system or upgrade to NT. If they choose NT, Lehman's support staff handles all other aspects, asking the end-user/managers only whether they need a "fast computer, really fast computer, or really, really fast computer."

John Holmwood of Nova Gas commented that he's being asked by senior management to justify why they need to worry about integrating NT and UNIX. He asked the panel if they'd faced the same questions. Netis commented that there are good reasons for integrating the two platforms. One is avoiding duplication of services: there's no need to waste effort setting services up under NT if they already work perfectly well under UNIX. A second reason is sharing file space. If an organization needs to move files between the two platforms or has a large investment in UNIX disk space, it makes sense to try to integrate the two.

Scarr commented that it would be good if integration could happen at a political level as well as a technical level. Synopsys has a completely separate NT administration group that doesn't even work in the same state. The NT group has no idea what the UNIX group is doing. When the UNIX group installed NISGina (a freeware product that allows an NT machine to authenticate to a NIS server at login time), the NT group wasn't interested.

A comment was made by Crystal Harris of the University of California at San Diego that not all environments are so homogenous. In a university environment, in most cases, the users dictate what platforms get used, often resulting in virtually every platform being used! She is very interested in seeing programs that will help integrate such a heterogeneous environment.

Steve Hearn asked the panel to comment on the integration issue of a single logon. Rouland responded that Lehman Brothers has been looking, so far to no avail, for a commercial solution to this problem. His group is considering attacking it from the other side - dropping in LDAP-NIS gateways and making the UNIX boxes authenticate against the NT Domain Controllers, rather than the other way around. Scarr expects that, within 18 months, Synopsys will be almost entirely NT based, and this will be a nonissue.

Dave Ellis of Sandia National Labs asked if senior management was aware of the benefits they were getting with NT, such as increased security. Evans responded that, in his opinion, no. The move to NT was largely market driven ­ "Everyone else is, so we have to too." The Gartner Group's Total Cost of Ownership reports are also motivating management to move to NT. The subtleties of the various operating systems are lost on them.

Joop Verdoes wanted to find some decent capacity-planning tools that would allow him to determine how many servers he'd need to deploy. Netis said that Microsoft had some simple tools that would help. Rouland said that he had seen no decent tools. He just grossly overengineers his network. Scarr asked for a clarification of what a "server" is. At Synopsys, most of the servers are legacy servers and are UNIX based.

A participant then brought up the issue of cloning NT boxes to facilitate quick rollouts. He currently spends about eight hours on each NT machine he sets up, because the Security ID issue prevents him from simply cloning the hard drive. Apparently, the panel was not aware of this issue, because they've all been cloning machines at their sites. As was explained by several people in the audience, if you duplicate the SID, the effects can be quite subtle at first, but duplication can cause problems with SMS and with browsing on large networks.

It will also wreak havoc if the machines are upgraded to NT 5.0, because NT 5 uses the SID as an identifier in its directory services tree. New SIDs are not created when a machine is upgraded. The unique SID is not generated until the machine is rebooted after the initial text-based installation. If you switch the machine off, rather than reboot, and then clone that hard drive, you'll have SID-less PCs that are ready to be configured. This still leaves a lot of steps in the installation process that must be completed on every clone.

Kimbol Soques of AMD has spent a fair amount of time working on this problem and has a lot of information accumulated from various sources. She offered to email the info to anyone interested. Her address is <kimbol.soques@amd.com>.

John Crane asked the room how many shops had separate UNIX and NT system administrators. The vast majority had separate administrators.

Vince Kluth of GDE Systems compared computing models to types of government: a university is like a democracy, with each user in control of their own machine, while a corporation is more like a dictatorship, where a single voice decides what machines shall be used everywhere.

The last question of the session concerned experience with Mac support on NT servers. Netis commented that he'd had some experience with it and found it to be mediocre. Scarr relayed the all-too-common story where a user sets up an NT server, tries to set up a queue for an Appletalk printer, selects "yes" to the question "Do you want to capture this printer," and then wonders why the printer disappears.

Rouland added that one of Lehman Brothers' departments, Creative Services, has Macs, and when an NT server with Mac File and Print services was set up for them, they loved it.

INVITED TALK

MS/Zero Administration Windows

Dan Plastina, Microsoft, Inc.


Summary by Gus Hartmann

In the next version of Windows NT, Microsoft is attempting to reduce the total cost of ownership by lowering the administrative costs. This would ideally involve remote management of software, users, and hardware profiles. Dan Plastina of Microsoft presented an overview of the new administration features that will be coming in Windows 98 and Windows NT 5.0.

One of the key new features is Intellimirror functionality. Essentially, this consists of mirroring every write to the local disk to a network disk, which allows complete re-creation of the local disk in the event of a catastrophic hardware failure. It depends on the presence of a boot ROM containing a DHCP client, part of the new motherboard standards that Microsoft is developing with other industry players. Upon booting, the client machine connects to a Remote Boot server and updates the local hard drive as necessary to make it identical to the user's home drive. This updating could involve installing certain user applications or even installing a different version of the operating system.

Seamless roaming will also be enabled, allowing users to sit at any machine and, as far as hardware permits, have a re-creation of their normal environment on the remote machine. This will include any applications that would be installed locally, as well as user documents.

As a means of installing applications, remote installation based on document type will also be supported. For example, a user receives via email a document in a format that the user cannot open with the software that is installed locally. The client machine would indicate to the server that an application to deal with that file type is needed, and the server would remote install a suitable application, providing support to the user without human intervention. [Editor's note: The server would then make an electronic funds transfer to pay for the new application license to Microsoft. Just kidding!]

Another important part of this initiative is the Windows Management Services, such as Web-Based Enterprise Management (WBEM) and Windows Management Interface (WMI). WBEM allows sharing of management data between network devices, desktops, and systems.

Also featured will be new modular administrative tools, called "snap-ins." By selecting only the tools necessary for their site, administrators can build a tool set suited to their needs. Application deployment will be improved for administrators, enabling a system administrator to push applications onto a client machine by "assigning" the application to a given user. Other software packages can be made available without forcing installation by making them "published" for a given user or group of users. "Published" applications can be added via the "Add/Remove Programs" control panel.

TECHNICAL SESSION

Tools


Summary by Steve Hillman

This session contained three papers. The first paper, "U/WIN ­ UNIX for Windows," was written and presented by David Korn. U/WIN is a set of libraries and header files that allow UNIX source code to be compiled and run on a Windows NT or 95 system with little or no modification. The software was developed by the Software Engineering Research Department at AT&T Labs, with most of the work being done by Korn. The binaries are freely available for noncommercial use.

In writing the software, the team's goals were

  • few or no source code changes required
  • minimal performance overhead (compared to the native NT function calls)
  • the ability to mix and match UNIX and NT functions
  • both NT and 95 support
  • free availability

These goals were met, either partially or completely. Some of the problems they had to overcome included

  • filenames and filename limitations in general
  • syntax
  • <nl> vs. <cr><nl>
  • handling
  • interface
  • mapping

Korn did not go into all of the technical details on how some of these problems were overcome (he only had 30 minutes!), but a lot more detail can be found in his paper.

U/WIN was written using the Win32 API. Because this API is present on both Windows NT and Windows 95, most code compiled to use U/WIN will run on either platform (note that some Win32 API calls are not available on Win95 and hence can't be made available via U/WIN on Win95). Virtually all UNIX calls were implemented in U/WIN. Additional features include:

  • calls based on sfio
  • C library used for remainder of calls
  • for hard/symbolic links
  • mapping and dynamic linking
  • System V shared memory
  • sockets implemented on top of Winsock
  • setuid/setgid support on Windows

The current version of U/WIN, 1.30, was released just before the conference. It includes a C compiler wrapper for Visual C/C++. To date, over 170 tools have been ported to NT using U/WIN, including ksh-93 and vi. Development team testing has produced the following results:

  • no loss in I/O performance using U/WIN
  • three times slower (than on Linux running on the same machine)
  • ~ 30% slower
  • 25% slower
  • two times slower (than deletes done in native NT)

The U/WIN project is still being actively worked on. In the future, Korn hopes to add these features:

  • case-sensitive file names
  • a simulated mount table
  • rest of the inet daemons
  • file support
  • POSIX pthreads API
  • registry filesystem (making it possible to cd to a folder in the registry)
  • and deet debugger (Currently, debugging can be done only from the NT console.)
  • I18N based on UTF8 (Unicode)
  • n-DFS
  • documentation
  • SCO/Linux binaries

Korn provided several URLs relevant to this paper.

Phil Scarr asked Korn if he'd compiled Gcc under U/WIN yet. He said he hadn't, but that it should be a trivial task. In his opinion, most programs should compile with virtually no modifications, and he's hoping "we" will get to work on that!

Jim Dennis asked if he'd ported rdist yet. Korn explained that he started by porting just the minimum required to make a usable system. Not being an administrator, he hasn't ported rdist. He's eager to have someone else do it though.

An audience member asked Korn to elaborate on his comment in his paper that "there appear to be few if any technical reasons to move from UNIX to Windows NT." Korn responded that this may be more of a personal preference than anything else ­ he likes the Linux environment for developing software and doesn't need the GUI that NT offers. Without the GUI, he said, there's little other reason to move to NT.

The second talk, "Utilizing Low-Level NTFS Disk Structures for Rapid Disk Searching," was by Michael Fredrick of Pencom. There was no paper associated with this talk.

Fredrick started with a description of the problem: come up with a method for doing fast file searches that can search on various criteria such as date, file size, or file owner (for quotas). His primary interest in coming up with these routines is to develop a quota system for NT. There are products on the market that do quotas on NT, but apparently they work strictly on a file's position in the tree, not on the file's owner, making it impossible to determine how much space a particular user is using.

According to Fredrick, filesystems normally have special "header" files that store file attributes such as name, owner, size, and security. A search algorithm can use these headers for fast searching on attributes.

NTFS uses the MFT (Master File Table) to store this info. Unfortunately, the MFT is a database file, not a sequential file. It's also completely undocumented and cannot be opened with the Win32 API. This makes it rather difficult to use it for fast searches. To view the MFT at all, Fredrick had to write routines to open an NTFS volume in raw mode and seek to the location of the MFT on the disk (it always starts at the same place), then read it in raw.

Fredrick explained that rebuilding a map of the volume from the MFT is not a trivial task. It uses a very elaborate (and highly undocumented) structure for storing the layout of the directories. Additionally, not all information is resident in the MFT ­ it can contain pointers to other locations on the disk. In fact, NT treats everything to do with a file as an attribute ­ the name, date, security ACL, even the data itself. If an attribute is too large to fit in the MFT, the MFT just contains a pointer to it elsewhere on the disk. A side effect of this is that very small files (typically under 1k) can fit entirely within the MFT and do not require any additional disk access at all.
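
Fredrick's code was not shown. As a rough sketch of the "open the volume raw and find the MFT" step, the following uses the standard published NTFS boot-sector fields (not anything from the talk) to compute the MFT's byte offset; it must run with administrator rights, and error handling is omitted.

    import struct

    VOLUME = r"\\.\C:"   # raw volume handle; requires administrator rights

    def locate_mft(volume=VOLUME):
        """Return the byte offset of the start of the MFT on an NTFS volume."""
        with open(volume, "rb") as raw:
            boot = raw.read(512)                              # NTFS boot sector
        bytes_per_sector, = struct.unpack_from("<H", boot, 0x0B)
        sectors_per_cluster = boot[0x0D]
        mft_cluster, = struct.unpack_from("<Q", boot, 0x30)   # MFT starting cluster number
        return mft_cluster * sectors_per_cluster * bytes_per_sector

    def read_first_mft_record(volume=VOLUME):
        """Read the MFT's own record (record 0) in raw mode."""
        offset = locate_mft(volume)
        with open(volume, "rb") as raw:
            raw.seek(offset)
            record = raw.read(1024)                           # MFT records are typically 1 KB
        assert record[:4] == b"FILE", "unexpected MFT record signature"
        return record

    if __name__ == "__main__":
        print("MFT starts at byte offset", locate_mft())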

Unfortunately, this is more of a work in progress than a finished product. Fredrick does not yet have any usable routines to do this fast searching.

Fredrick listed some references for anyone interested in more information about the NT filesystem:

  • Inside Windows NT by Helen Custer. By all reports, this is an excellent reference and the only one released by Microsoft. The only thing this book lacks is the numbers (such as offsets, sector locations, etc.).
  • Martin von Lowis's Web site: <https://tiger.informatik.hu-berlin.de/~loewis>. He helped develop the NTFS driver for Linux, which contains a fair amount of documentation within it. You can get the source from here, as well as some pointers to other useful sites.

In response to Fredrick's comment about having to rebuild pathnames by walking backwards through the MFT, Rik Farrow commented that it might be worth considering the algorithm that UNIX's ncheck uses where it builds a directory tree as it scans through the i-nodes. The same approach could be used while doing a search through the MFT.
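
Farrow's suggestion amounts to indexing every record once and resolving each path a single time, instead of walking backwards through the MFT for every file. A small sketch of that idea, using a made-up record structure in place of real MFT parsing:

    from dataclasses import dataclass

    @dataclass
    class Record:
        number: int      # this record's index in the MFT
        parent: int      # MFT index of the containing directory
        name: str

    ROOT = 5             # on NTFS, MFT record 5 is the root directory

    def build_paths(records):
        """One pass to index records by number, then resolve each full path once."""
        by_number = {r.number: r for r in records}
        cache = {ROOT: ""}

        def path_of(num):
            if num not in cache:
                rec = by_number[num]
                cache[num] = path_of(rec.parent) + "\\" + rec.name
            return cache[num]

        return {r.number: path_of(r.number) for r in records}

    # Example: reconstructs \users\alice\report.doc
    recs = [Record(40, ROOT, "users"), Record(41, 40, "alice"), Record(42, 41, "report.doc")]
    print(build_paths(recs)[42])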

The last paper, entitled "Adding Response Time Measurements of CIFS File Server Performance to NetBench," was presented by Karl Swartz of Network Appliance.

NetBench is a software package for testing throughput of CIFS (aka SMB) fileservers. It doesn't include response time measurements though, so Network Appliance set about adding them. This was done using hardware ­ a packet capture device was added to capture a trace of a client talking to the server. Then the trace was analyzed to figure out how long it took the server to respond under various loads.
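
The analysis side is not described in detail in the summary; in essence it reduces to arithmetic over matched (request, response) timestamps taken from the capture, as in this small sketch with made-up numbers.

    def summarize(pairs):
        """pairs: (request_time, response_time) tuples, in seconds, from a packet trace."""
        times = sorted(resp - req for req, resp in pairs)
        n = len(times)
        return {
            "mean_ms": 1000 * sum(times) / n,
            "median_ms": 1000 * times[n // 2],
            "p95_ms": 1000 * times[int(0.95 * (n - 1))],
        }

    # Made-up trace excerpt: each SMB request matched with the server's reply
    trace = [(0.000, 0.004), (0.010, 0.013), (0.021, 0.060), (0.070, 0.074)]
    print(summarize(trace))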

Not surprisingly, when Swartz compared a Network Appliance F630 to a Compaq ProLiant 5000 with hardware RAID, the F630 performed better. A detailed comparison can be found in the paper.

Mark Maxwell asked if there was any write caching being done on the Compaq server. Swartz responded that the intelligent RAID controller they were using does some write caching. Also, because the Compaq was using NT as its OS, NT does a substantial amount of write caching itself.

Phil Scarr asked if they were really doing a fair test when they were comparing a 500MHz Alpha box to a quad-Pentium Pro-200 box. Swartz didn't think the servers were processor bound, though, and because the rest of the hardware was very similar, he felt it made for a fair comparison. Scarr still had some concerns and suggested that a much better comparison would have been between a slower Network Appliance and a faster Compaq. This would truly have shown that CPU speed was irrelevant.

WORKS-IN-PROGRESS SESSION

Summary by Steve Hillman

The Works-In-Progress (WIP) session is made up of a number of very short presentations detailing projects that aren't quite ready to be presented in the form of a paper yet. A total of eight WIPs was presented.

The first WIP, "DHCP in Eight Days," was presented by Phil Scarr of Synopsys. Scarr chose to use ISC's dhcpd running on UNIX. DHCP leases are keyed by the Ethernet addresses of the clients. ISC's DHCP server stores its lease information, including the name of the client (which gets sent to the server during the DHCP handshake) in a text file. With this text file, the DNS can automatically be stuffed with the IP address and hostname of machines that have leases. Scripts that run every few minutes were set up to check for changes in the leases text file and push them into the DNS.
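
Scarr's actual scripts were not shown; the sketch below illustrates the idea of scraping ISC dhcpd's leases file and emitting DNS A records. The lease-file path, domain name, and output format are assumptions, and a real script would also handle expired leases, removals, and reloading the name server.

    import re

    LEASES = "/var/db/dhcpd.leases"      # ISC dhcpd lease database (location varies)
    DOMAIN = "example.com"               # hypothetical zone

    LEASE_RE = re.compile(r'lease\s+(?P<ip>[\d.]+)\s*{(?P<body>.*?)}', re.DOTALL)
    HOST_RE = re.compile(r'client-hostname\s+"(?P<name>[^"]+)"')

    def leases_to_records(text):
        """Yield (hostname, ip) for every lease that reported a client hostname."""
        for m in LEASE_RE.finditer(text):
            host = HOST_RE.search(m.group("body"))
            if host:
                yield host.group("name").lower(), m.group("ip")

    if __name__ == "__main__":
        with open(LEASES) as f:
            for name, ip in leases_to_records(f.read()):
                # Emit A records; pushing them into the DNS and pruning stale names is left out.
                print(f"{name}.{DOMAIN}.\tIN\tA\t{ip}")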

One drawback to this approach is that every PC must be uniquely named to prevent DNS conflicts. Because the engineers insisted on having administrator access to their personal machines, it was impossible to guarantee that machine names didn't change.

An audience member asked how Scarr dealt with the Network Operators staff who generally want every machine to have a static IP address. Scarr answered that he just told them, "Tough. The NT project needs DHCP." With the automatic updating of the DNS, though, it becomes more or less a nonissue ­ the PCs are named using a specific naming convention, and with the name in the DNS, it is easy to identify the location of problem machines.

For more information on ISC's dhcpd, check their Web site at <https://www.isc.org/>.

The second WIP was on WINS and was presented by Chris Rouland of Lehman Brothers. Rouland's group has found that the Microsoft WINS server has a fairly high failure rate. The MS WINS server is based on a JET database, and there's no programmatic interface to it. Because Microsoft provides no good tools for managing WINS, if the WINS server crashes and corrupts the database, you either have to restore from tape or wipe the database and start from scratch. For these reasons and others, people at Lehman Brothers are working on writing their own WINS server that's much more like DNS ­ a single primary server that pushes changes out to secondary servers, with all servers being read-only to clients. Their implementation is based on nmbd, a program that's included with the Samba distribution.

Chris Rouland tagged off to Chris Hertel of the University of Minnesota [Editor's note: An article about Hertel's project appears on page 23 in this issue.], who also did a WIP on WINS. The University of Minnesota has a DNS with 40,000 entries and is currently running Samba. When Hertel and his team tried to load the 40,000 DNS entries into Samba's LMHosts file, it took over an hour to launch Samba. They decided to build a back-end database for Samba's nmbd instead. They're also working on external references - telling nmbd to, for example, check with the DNS before allowing a WINS entry into the database. Finally, because the University of Minnesota, like most universities, is directly on the Internet, they're working on adding filters to nmbd to control who can and can't make queries and generate WINS entries.

Someone asked how each speaker dealt with dhcp putting "dhcp-" in front of all hostnames. Hertel said he wasn't familiar with this problem. Rouland said he hadn't experienced that and suspected it may be a configuration issue.

Todd Williams of MacNeal Schwendler asked all three speakers how they dealt with "renegade" DHCP servers on their networks. At the University of Minnesota, this has not been a big problem; routers are configured to automatically forward DHCP requests to a specific machine, so renegade machines can affect only their local segment. A good relationship with departmental LAN administrators helps here. At Lehman Brothers they use network stormtroopers. At Synopsys, they can black-hole the renegade's packets at the routers, but this doesn't protect the segment that the renegade is on.

The next WIP was on cloning NT workstations and was presented by Kimbol Soques of AMD. Her group needs to roll out over 6,000 workstations during the next three years. To do this, they developed a system based on cloning the hard drive of a completely configured machine. Then they discovered that this is not supported by Microsoft because it results in identical security IDs (SIDs) on every machine. This will mess up SMS on large networks and will also prevent NT 5.0 from working properly. This is not well documented.

They then discovered that the SID is assigned during the installation process when you switch from character-based to graphical-based install. Essentially, NT copies all necessary files to the disk, then wants you to reboot and proceeds to come up into the GUI where the configuration happens. If, instead of rebooting, you shut down the computer and remove the hard drive, you have a SID-less half-installed NT disk that you can clone.

Rob Elkins asked if one could use Ghost to clone the SID-less disks. Kimbol responded that Ghost or any other duplicating tool could be used.

Ken Rudman asked how applications were installed in this process. Kimbol said that several tools, including sysdiff, unattended install, and SMS installer, were used to install applications. No one way works for every application. Some applications, such as MS Office (surprise!), refuse to be installed in this manner. They had to resort to a network install for Office.

Ian Alderman commented that Microsoft Knowledge Base article Q162001 contains a summary of this information. Kimbol mentioned that the Resource Kit tool getsid prints out the SID of the local machine and allows you to determine whether you have duplicate SIDs. She also said that she had 20 or 30 other KB articles that referenced this problem. She'll email all the info to anyone who wants it. Her email address is <kimbol.soques@amd.com>.

Chris Kulpa asked if they were using KiXtart to do registry modifications as part of their remote installations. They are. For more info on KiXtart, check out <https://netnet.net/~swilson/kix.html>.

The next WIP, on Samba, was presented by Jeremy Allison of Whistle Communications. Allison is now the chief architect of Samba and filled us in on the features of release 1.9.17 (which should now be available) and on the features to come soon. [Editor's note: See Allison's article on page 16 of this issue.]

Features of release 1.9.17 include:

  • stability
  • speed with shared memory
  • browsing
  • internationalization
  • documentation
  • roaming profiles

Features that Jeremy hopes to have added soon include:

  • config tools
  • locking
  • blocking locks
  • loading of client code pages
  • for large-scale name databases (see the previous WIPs by Rouland and the article by Hertel)
  • security integrated

Features that will take longer to add include:

  • Unicode support
  • support
  • signing
  • replication between Samba servers
  • Domain Controller protocols (even further out)

Till Poser of DESY presented the next WIP, entitled "Designing Application Support Infrastructure for a Heterogeneous Scientific Environment." He wins the "longest WIP title" award. DESY is the German High-Energy Physics Lab. There is a very mixed environment there, with predominantly UNIX and Win3.1 workstations and NT workstations just emerging. DESY is made up of three user groups: engineers, physicists, and administrators. The engineers are, for the most part, the most demanding and are becoming the early adopters of NT. Till, working in the Computing Services Department, is finding that with the proliferation of powerful desktop machines, CS is struggling to hang on to its central control of services and is having to rethink its service structure.

To help control things, they've adopted a three-tier, color-coded computer scheme that was originally developed by Remy Evard at Argonne Labs. A "green" PC is fully administered by CS and is suitable for most administrative staff. A "yellow" PC is built, installed, and configured by CS, but is left to the individual to administer. A "red" PC is under the sole control of the end-user. CS neither administers nor supports it.

CS is using a product called NetInstall to handle automatic installation of applications on demand. User accounts are centrally administered, and users have roaming profiles that allow them to log on from different workstations. NetInstall deals well with machines that are not all identically configured, which makes it a nice fit in their rather anarchic environment.

CS is also experimenting with using SMS installers to push out critical updates such as OS updates and service packs. Most of these services are strictly for the "green" machines to allow CS to maintain central control. CS is not sure yet whether they'll be able to apply these techniques to the "yellow" machines.

An audience member asked what the current distribution was among the three colors and what they expected the distribution to be in the future. Poser answered that, currently, all machines are "yellow" - most administrative staff are still on Win3.1. Poser expects about 25-33% will eventually be "green" machines. Very few will be "red" because "red" machines are not allowed to participate in the NT network.

The next WIP, "Integrating NT Workstations with Solaris," was presented by Gerald Carter of Auburn University. When NT was released, Carter's group's goal was to integrate it into the university's environment without having to install any NT servers. This meant that there had to be some way to handle NT's authentication requirements. They either had to duplicate their existing UNIX accounts on the NT workstations or somehow have the NT boxes authenticate via UNIX. They chose the latter. Luckily, there was already a freeware product called NISGina that did exactly that. NISGina is a DLL that replaces MSGINA.DLL and allows an NT workstation to function as a NIS client. NISGina adds the ability to pull home directory info and registry settings from the NIS maps during login. The login procedure goes like this:

  • The user presses ctrl-alt-del to log in.
  • The user enters username and password.
  • NIS maps are searched for username. If it's found, passwords are compared.
  • If the password matches, a LOCAL account is created on the system with registry settings pulled from the NIS maps. The user is then logged in under that account.
  • If the username is not found in the NIS maps, NISGina checks the local administrators group to see if this username belongs to it. If it does, the user is let in (assuming the password is OK).
  • If the username is not in the NIS maps or local admin group, the user is denied access.

Password synchronization is one-way. If you use the NT password-changing mechanism, NISGina changes the password on the account that was created locally and then uses yppasswd to change the NIS password. Unfortunately, if users change their passwords on a UNIX box, the local password does not get updated. Logins still work, because NISGina always looks to NIS for passwords at logon, but apparently this causes problems with shares because NT looks locally for those passwords.
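
NISGina itself is a Windows DLL; purely as an illustration of the decision flow described above, here is a Python sketch using ypmatch and a crypt-style comparison as stand-ins. The account creation and local-group checks are reduced to comments and a placeholder set, and all names are made up.

    import crypt          # POSIX-only; stands in for whatever hashing the real DLL uses
    import subprocess

    LOCAL_ADMINS = {"ntadmin"}     # stand-in for the workstation's local Administrators group

    def nis_password_hash(user):
        """Look the user up in the NIS passwd map; return the password hash or None."""
        try:
            out = subprocess.run(["ypmatch", user, "passwd.byname"],
                                 capture_output=True, text=True, check=True).stdout
            return out.split(":")[1]      # user:hash:uid:gid:gecos:home:shell
        except subprocess.CalledProcessError:
            return None                   # not in the NIS maps

    def allow_logon(user, password):
        hashed = nis_password_hash(user)
        if hashed is not None:
            # Found in NIS: compare passwords; on success a LOCAL account would be
            # created/updated from the NIS maps and the user logged in under it.
            return crypt.crypt(password, hashed) == hashed
        # Not in NIS: fall back to the local administrators group only.
        return user in LOCAL_ADMINS       # real code would also verify the local password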

Phil Scarr commented that even if you don't use NISGina for authentication, you can use its tools to query NIS maps.

The final WIP, "Measuring NT Performance," was presented by Steve Hearn of Westpac Bank. Essentially, Hearn was charged with accurately measuring NT performance to allow for capacity planning. With over 1,000 servers deployed, this was not an easy task. Hearn's group had to determine how many servers to baseline, at what frequency to poll them, and how many of the hundreds of performance parameters were actually useful. When selecting performance-measuring products to evaluate, they wanted to make sure that each product

  • had little or no impact on the network
  • had a 32-bit architecture
  • could easily export the data
  • made it easy to customize which metrics to measure
  • was easy to use

In the end, they chose two products to evaluate: Performance Works for NT by Landmark Systems and BEST/1 by BGS Systems. They chose a small, basic subset of NT's performance parameters to baseline and decided that they would baseline them again after three months to look for significant changes.

Hearn then asked the audience if anyone else had done anything like this. A spokesman for HP said that they had internally used a product that they marketed called Measureware. This is apparently a snap-in for HP Openview that runs agents on each monitored server (both UNIX and NT). The agent gathers and analyzes data locally and then ships the results back to the Openview module.

Alan Epps asked which product was performing better in the evaluation so far. Hearn said that Landmark's Performance Works was working out really well. However, he noted that both companies were relatively new to the NT field and that these two products were still the best two he'd found so far.

After 90 minutes of very speedy talking, the WIP session came to an end. It will be interesting to return next year and hear these WIPs as full papers after the authors have worked out all the kinks.

INVITED TALK

MS/Security in Windows NT 5.0

Peter Brundrett, Microsoft Corporation


Summary by Gus Hartmann

Peter Brundrett, a program manager at Microsoft, spoke about the future of security. Windows NT 5.0 should feature highly improved security, including data privacy on the desktop and on the wire, single enterprise logons, and decentralized administration for large domains. To implement these features, Microsoft will be utilizing strong network authentication, public-key credentials, and standard protocols for interoperability. Security administration will be greatly simplified, with unified account information for each account in a domain, fine-grain access controls, and per-property auditing. Most importantly, the new directory services will allow greater ease of administration through the hierarchical layering of domains.

Most notably, Kerberos 5 will be replacing NTLM as the authentication method for NT-based domains. However, the interoperability between existing Kerberos 5 distributions and NT has not been clarified. Kerberos 4 will not be supported. [Editor's note: See Ts'o's article on page 8.]

Another important new feature closely associated with the hierarchical layering of domains will be the "nesting" of groups within other groups. Circular groups will be allowed, and error-checking will prevent endless looping when checking for group membership. Local groups will also be eliminated, being made redundant by the new group format.
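
The summary notes that circular nesting will be allowed but that membership checks must not loop forever; the usual way to guarantee that is to remember which groups have already been expanded, as in this small sketch (the group data are made up).

    def is_member(user, group, members):
        """members maps a group name to its direct members (users or other groups).
        A 'seen' set keeps circular group nesting from causing an endless loop."""
        seen = set()
        stack = [group]
        while stack:
            g = stack.pop()
            if g in seen:
                continue                  # already expanded; this breaks cycles
            seen.add(g)
            direct = members.get(g, set())
            if user in direct:
                return True
            stack.extend(m for m in direct if m in members)   # descend into nested groups
        return False

    # Example with circular nesting: Staff contains Engineers, Engineers contains Staff
    groups = {"Staff": {"Engineers"}, "Engineers": {"Staff", "alice"}}
    print(is_member("alice", "Staff", groups))   # True, and terminates despite the cycle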

PANEL

Windows NT Security


Summary by Rik Farrow

The security panel, moderated by Mike Masterson of Taos Mountain Software, consisted of Bridget Allison of Network Appliance, Jeremy Allison of Whistle Communications (Samba), Peter Kochs of Digitivity, and Peter Brundrett, Microsoft program manager.

The panel started with a quick survey of the audience. Ten percent of the audience used NT 50% of the time or more, while 80% used UNIX 50% or more. This served to identify the audience's background. Most of the audience claimed some experience with NT, and only 2 (out of approximately 320 people) admitted to using Microsoft's DNS.

Bridget Allison began by passing out "Ten Things Every NT Administrator Should Know About Security," summarized here:

  • By default, NT assigns Full Control to Everyone for newly created shares (Full Control is similar to owning a file in UNIX).
  • Password hashes in NT have no equivalent of UNIX's salt; the hash algorithms (DES and RC4) are faster than the UNIX algorithm, making password cracking easier.
  • No file is safe from Backup/Restore (user) rights.
  • The FTP server distributed with the TCP/IP tools exports the entire disk partition (not restricted, as in UNIX anonymous FTP).
  • Registry settings, as delivered with NT, may have weak or inappropriate (for true security) access control lists (ACLs). Also, prevent remote registry modifications by setting the following binary key to 1:
    HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurePipeServers\WinReg
  • Disable the Guest account (automatically enabled on NT 3.x anytime someone fails to log in three times!).
  • Default permissions on %systemroot%\system32 and %systemroot%\system are inadequate (similar to having / and /etc 777 mode under UNIX). Of course, fixing this will break some applications, such as MS Office.
  • Configure NT RAS (Remote Access Service) to use CHAP with DES (for authentication) and RC4 for link encryption.
  • Use NT auditing and monitor the resulting log files (be careful to select which events to audit). Frank Heyne's event logging tools are available at
    <https://rcswww.urz.tu-dresden.de/~fh/nt/eventlog/index.html>.
  • For Internet accessible NT servers, unbind Server, Workstation, and NetBios (involved in SMB file sharing) from the TCP/IP protocol, and block ports 135 (MS RPC, very dangerous), 137 (WINS), 138 (SMB over UDP), and 139 (SMB over TCP).

I admit to ad-libbing by adding comparisons to UNIX.

Peter Kochs talked a little about his company's (Digitivity's) product: a server-based sandbox that runs Java and ActiveX applets while displaying the output on, and fetching user events from, desktop browsers. The applets run within a controlled environment on the server, isolating the desktop's browser from security lapses and providing central control of applet execution.

Jeremy Allison mentioned that man-in-the-middle attacks against the SMB protocol are fixed with SP3 (service pack 3) for NT 4.0, but this breaks backward compatibility. Essentially, by capturing and retransmitting an SMB authentication (prior to SP3), you can easily discover the password used by a remote client. The patch (SP3) means that older systems (Windows 3.1 and Windows 95) will not work with patched servers.

Allison also said that the system call interface to NT is completely undocumented. This point had been made by other speakers, including some Microsoft employees. The reason given by Microsoft representatives is that the NT kernel changes faster than documentation writers can keep up with. Essentially, the source, some 16-20 million lines of code, is the documentation (the contents of 18 CD-ROMs).

Peter Brundrett of Microsoft mentioned that the conference is part of the process of improving NT. He suggested staying on top of service packs for security. He also said that there are some 2,600 system service APIs (UNIX has 157 system calls in 4.4 BSD). Security was not designed to be a plug-and-play feature, and there are security APIs that have not been documented, said Brundrett.

After this introduction, questions from the audience were accepted, starting with Chris Hertel of the University of Minnesota. Hertel stated that he had heard that the level of cooperation with MIT on Microsoft's implementation of Kerberos 5 was "not so rosy" and that his university planned to block access to Microsoft services at the router. Brundrett responded by saying, "If you are interested in open systems, that is a decision for you to make. . . . Our implementation will interoperate with Kerberos 5."

Hertel then said that he would like to point an NT workstation at a UNIX Kerberos server. Brundrett said this was working in Microsoft's labs. I contacted Ted Ts'o, who works on the Kerberos project at MIT. His response appears on page 8 in this issue, but the disagreement centers on Microsoft extending the defining RFC (1510) as a result of some vagueness in the text. In particular, how Microsoft will distribute information about groups appears to be in dispute.

Dave Ellis of Sandia Labs asked about DCOM (a distributed object system) configuration and got an answer that Microsoft was working on the problem.

Someone complained about the lack of interoperability between Windows 95 and NT (NT RAS cannot change password, and Windows 95 users cannot include a password with the "net use" command). Todd Needham, a Microsoft evangelist, said that this was a valid point. However, you need to choose the product that meets your requirements. "Windows 95 gives you all the security you deserve," said Needham.

Derek Simmel of CERT/CC said that they planned to start producing advisories for NT but that CERT does not have the level of expertise to support NT. CERT has been working on this for six months. His question was about validity and verification testing of the NT kernel. Brundrett responded by saying that Microsoft has tools to do this. Simmel asked if there are probe tools such as SATAN for NT. Brundrett said that someone else should do this (are you listening, Farmer and Venema?).

Jim Duncan, a systems manager at Penn State, said that Sun, IBM, SGI, and most other vendors worked with FIRST (Forum of Incident Response and Security Teams), but Microsoft has been glaringly absent. Brundrett said that he would see about working with CERT and FIRST.

Someone asked if a user's role can be modified in NT 5. Brundrett said there will be an su command in NT 5.

Another person asked if it would be possible in NT 5 to prevent certain accounts from being added to particular groups (a way of preventing some object access exposures). Brundrett said it is an interesting idea. (Users administer the groups they create, which can lead to vulnerabilities in NT.)

The same person asked about control over encryption in network links. Brundrett replied that Microsoft is considering the notion of zones that control the security policy for encryption. Finally, the person asked if the Management Console was an extra cost option. The answer is no, but snap-ins (add-ons) are customizable extras, and you can buy them.

Like the panel that discussed source licensing of NT, the security panel produced a lot of heat, with most of it directed at Microsoft. Security and vendor secrecy do not mix well. But the Microsoft representatives did make an effort to answer questions and at least said that they would look into making changes, such as better participation in generating security advisories or working with FIRST. Altogether, it was an interesting and useful panel.

T.J. Fiske of Qualcomm said that his company had hundreds of "blasted" (cloned or ghosted) NT installations with the same machine SID and asked what happens when they are upgraded. Brundrett said that the SIDs do not change during an upgrade to NT 5.

Bridget Allison closed the panel by saying that she relies far too much on volunteered information for NT security and that formalizing the information would be very welcome. She also stated that more public documentation would be welcome.

This session, like many others, had Microsoft employees speaking openly, and not defensively. Only the issue of Kerberos 5 standards was not openly discussed ­ not surprising, because Microsoft will be creating its own "standard" in this area.

Something that did not come up in this workshop, but did in the previous one, has an impact on security. Todd Needham, while speaking on a panel about the need for NT source, said that the RPC mechanism in NT has some very serious security flaws and that he doesn't want too many people poring over the source until these problems have been fixed. I found that interesting, because I had heard similar rumors about vulnerabilities from the hacker community this summer. Let's hope Microsoft handles this one, or perhaps we may yet see the "Microsoft worm."

TECHNICAL SESSION

Administration III: Miscellaneous


Summary by Steve Hillman

This session saw three papers presented. The first, "Implementing Large NT Networks in a Heterogeneous Computing Environment," was given by Freddie Chow of the Stanford Linear Accelerator Center (SLAC).

SLAC's network consists of roughly 700 UNIX boxes, 700 Macs, 200 VMS stations, 200 Win3.1 stations, and 500 NT stations. The NT boxes were all installed very recently.

SLAC's computing support group quickly discovered that the existing support model, consisting mostly of casual employees hired by each department, was not adequate for managing an increasingly complex network. Their new support model calls for centralized server and service support and centralized coordination of client support with departmental system administrators. They also purchase software for the site centrally, which both gets them discounted prices and gives them at least some control over the software that goes on client machines.

SLAC has established several sitewide computing policies: there is a single sitewide NT logon ­ any user can log on from any workstation ­ and no Windows 95 on site. Additionally, several domain policies were established, including:

  • All NT workstations must belong to a domain. A catchall domain was set up for workstations in departments that don't have enough machines to justify their own domain.
  • For departments running their own domains, there must be adequate hardware for the Domain Controllers.
  • Every domain must have a PDC and two BDCs. They must be on a UPS and located in a secure area.
  • Domain system administrators are to be technically proficient and must carry pagers.

The computing support group realized it couldn't ignore Win95 completely. A large research site such as SLAC will have visiting researchers with notebook computers. To deal with this, they developed a Win95 policy that states:

  • Win95 is not supported.
  • Visitors need to report their machines to departmental sysadmins upon arrival.
  • Departmental sysadmins will check the machines for proper network configuration.
  • Visitors should revert their machines to their original configuration at departure, if possible, and the assigned IP address is reclaimed.

When looking at management software, Chow compared Microsoft's SMS to NICE/NT ­ a management package developed at CERN. NICE/NT offered support for Novell and NDS as well as NT, but SMS offered remote application installation and OS updates. Because SLAC has no Novell, they decided to use SMS. From talks with other sites running SMS and from their own testing, they found that a practical configuration for an SMS machine is a dual Pentium Pro-200 station with 256MB of RAM.

SLAC is also running NTrigue to allow non-NT users to run NT applications.

Till Poser of DESY asked how SLAC was generating its NT workstations. Chow replied that they were using cloning. He didn't elaborate, but in light of the SID issues that came up at the conference earlier, he may have to rethink that process.

Ken Rudman elaborated on the dual-Pentium requirement of SMS. SMS uses a SQL server to store all its information. Normally, the SMS server and SQL server are run on two separate machines, and the SQL server can benefit from the dual-Pentium configuration. SMS by itself should not need such a powerful machine.

Will Figueroa of Mentor Graphics asked for more information on the NTrigue server hardware and how many users it can handle. The server at SLAC is a dual Pentium Pro with 512MB of RAM and 100baseT. They run very few people on it and so haven't really stressed it, but according to colleagues at Stanford University, a similarly configured server was running 32 concurrent users without significant slowdown.

Steve Hearn asked what SLAC's SMS hierarchy was. Chow responded that currently they're only deploying it on a small scale with just a single site. Bill Evans of Nortel commented that they'd installed a number of standalone SMS sites and then installed a master on top to pull all the sites together into a hierarchy.

Till Poser asked what trust relationships were in place for the 12 domains that they had. Chow said they had a single-master domain model with a single domain for accounts. The 12 domains are resource domains with between 50 and 150 workstations each.

The second paper, "Effective Usage of Individual User NT Profiles with Software Distribution," was presented by Mark Murray and Troy Roberts of Interprovincial Pipe Line Inc. (IPL). IPL runs an oil pipeline that stretches across most of Canada. Along the pipeline, they have over 100 field offices, roughly 30 of which are manned. Their WAN reaches most offices, and they have a mixture of Win3.1/95, Windows NT, VMS, and UNIX workstations spread through their offices.

In devising their infrastructure, they kept several design goals in mind:

  • Universal logon (A user should be able to log on from anywhere and, in some cases, from any platform type.)
  • Network basis (Services should be network based, rather than platform or host based ­ i.e., all services available to all platforms. Relocation of services should be transparent to the user.)
  • Centralized administration
  • Automated patch and software distribution (independent of platform type)
  • Consistent naming across all platforms
  • A single workstation platform per office (Applications for other OSs should be available through Telnet, NTrigue, etc.)
  • Identical software loads (Wherever possible, all machines run exactly the same software. This makes machine restoration easy.)

To accomplish most of these goals, IPL used NT profiles for users. They divided the profile into a user profile, which contains all of the settings relevant to a particular user, and a machine profile, which contains machine and global software settings. They also added the concept of a file space to the profiles. Profiles normally deal only with registry settings, but adding file space allowed them to specify not only what registry settings would be set, but also what files should be present. This was done using combinations of shares to make the appropriate files (both user data and application binaries) accessible to the user at a consistent location.

Software installation is accomplished by splitting software packages into two separate packages ­ a user package and a machine package. The user package gets installed into a user's profile and follows that user around. The machine package gets installed into each machine that requires that application and stays with the machine.

Not all machines run every application, and not all users need every application. To manage all of this, a database was set up. Every time a user logs in to a workstation, the database is queried to determine what apps the user should have and what apps the machine is allowed to run. The database has a Web-based front end that allows help desk staff to assign applications to users and machines. New assignments take effect the next time a user logs in or a machine is logged into.
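To make the mechanism concrete, here is a minimal sketch of that login-time lookup in Python. The table layout, column names, user and machine names, and use of sqlite3 are illustrative assumptions only; the paper does not say what database engine or tooling IPL actually used, and the real system also merges the split profiles and maps the file-space shares, which this sketch omits.

    # Minimal sketch (assumed schema): which assigned apps may run on this machine?
    import sqlite3

    def apps_to_activate(db_path, user, machine):
        """Return applications assigned to the user that the machine is allowed to run."""
        conn = sqlite3.connect(db_path)
        try:
            cur = conn.execute(
                """
                SELECT ua.app
                FROM user_apps ua
                JOIN machine_apps ma ON ma.app = ua.app
                WHERE ua.username = ? AND ma.machine = ?
                """,
                (user, machine),
            )
            return [row[0] for row in cur.fetchall()]
        finally:
            conn.close()

    # At logon, the result would drive which user packages are merged into the
    # profile and which applications still need a just-in-time install.
    for app in apps_to_activate("assignments.db", "jsmith", "FIELD-WS17"):
        print("activate:", app)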

This is an extremely sophisticated software installation infrastructure. For a lot more detail on how it works and what was done, refer to their paper.

Questions started with the most common question of the conference: how, if at all, did they integrate passwords between NT and UNIX boxes? Unfortunately, they haven't tackled this problem yet. Passwords are still stored in two separate places, and no synchronization is done.

Todd Williams asked them to comment on the amount of time it took to implement this project and whether it was justified. Roberts estimated that it had taken them about 18 months from conception to completion. During the actual rollout at the end, they had a staff of about 12 working on the project. He said that this project has drastically reduced their total cost of ownership. It's extremely easy to add new machines and to support existing machines because the machines are always in a known state.

Ian Alderman commented that NT normally leaves behind the profile of the last user who logged in. He wondered if they'd seen this problem. Roberts responded that, in general, this hasn't been a problem. For one thing, users tend to stick to using their own machines (for now) and, for another, because their profiles completely rebuild the machine's environment when a user logs in, any old information is completely flushed out.

An audience member commented on a product called P-synch, which runs on a Windows machine and allows users to change their passwords on every system at once. It's available from M-tech, <https://www.m-tech.ab.ca/psynch>.

Rob Elkins asked exactly how applications are downloaded to machines when users log in to different machines. Murray responded that, in general, they try to avoid this. Most applications are preinstalled on systems. Apps such as Office are installed on every machine. Other large packages, such as AutoCAD, are installed on selected machines and will not be installed onto other machines. Small applications are Just-In-Time installed if a machine doesn't have them and a user who's supposed to have them logs in. This installation is done at login time.

John Holmwood of NOVA Gas Transmission asked how a user's profile gets updated when new applications are installed if the user is currently logged in. Murray responded that, because applications are split into two separate packages, the machine-specific part of the package can be installed immediately, and the user-specific package can be installed the next time the user logs in.

Someone asked what tools were used to implement this system. Roberts said that WinInstall was used to create the separate packages, and Perl was used behind the scenes on the servers.

The last paper of the session, entitled "System Administration of Scientific Computing on Windows NT," was presented by Remy Evard from Argonne National Lab.

Evard and his group got the idea that they might be able to build a supercomputing architecture around a farm of Pentium Pro-based PCs for a fraction of the cost of the existing supercomputers. They based this on tests that show that Pentium Pro-200-based systems compare favorably to supercomputers when matched processor to processor.

Their plan, quite simply, was:

  • Build a cluster of NT workstations.
  • Figure out how to make it work.
  • Turn it over to the computer scientists to port tools.
  • Turn it over to the physicists to put it to work.

To start with, they built a cluster of several Pentium Pro 200 PCs interconnected with 100baseT. The main problem they faced was getting remote access to the PCs. NT is designed to be used from the console, and in their environment, that was rarely practical.

They found that Ataman's RSH offered a reasonable remote-access solution. It still wasn't perfect ­ it didn't load in the user's registry settings to allow for personalized drive mappings and such ­ but it permitted users to schedule jobs remotely.

They also set up a machine to run NTrigue on. They contemplated putting it on each node in the cluster, but that doesn't scale well because it requires a user to make a connection to every node one at a time. Instead, the NTrigue machine serves as a sort of "front end" to the cluster, allowing users who don't have their own NT workstations to access NT resources and build programs.

The cluster has been up for several months now, and they're calling it a qualified success. According to Evard, it works, but it's slow. Although there are still bugs to work out and tools to develop, the hardware is available cheap off the shelf, making expansion of the system relatively painless.

Lyle Meier asked if they'd considered using Timbuktu for remote access. Evard responded that, again, it doesn't scale to hundreds or thousands of nodes. It could be used to connect to the front end, which is where NTrigue is being used now.

Till Poser asked if they'd looked at LSF to handle scheduling of jobs. Evard said that they had purposely avoided dealing with scheduling yet. They still hadn't perfected scheduling on their IBM supercomputer, so they didn't want to complicate the NT experiment. When it comes time to look at scheduling, they'll look at LSF as well as others.

Poser also asked Evard to expand on his "it's slow but it works" statement. Evard explained that it appeared to be a software problem involving an intercommunication library that was being used ­ the code ran faster on a single machine than on two, and on four, it pretty much stopped. They're convinced that interprocess communication will always be the most difficult obstacle to overcome. The Pentium chip itself will have no problems competing on a number-crunching basis.

PRESENTATIONS AND PANEL

Tips, Tricks, and Gurus


Summary by Gus Hartmann

This panel consisted of Mike Carpenter, Mike Frederick, Bruce Schrock, and Paul Pavlov from Pencom and Eric Pearce from O'Reilly & Associates. Mike Carpenter is the manager of the answer desk at Pencom; he has 11 years of UNIX experience. Mike Frederick urged everyone in attendance to obtain the Resource Kits. Bruce Schrock has been with Pencom for a year and a half and has been working with Windows NT since version 3.1. Paul Pavlov comes from a mostly UNIX background. Eric Pearce is a half-time network administrator and half-time author at O'Reilly & Associates, with 12 years of UNIX experience, mostly on Suns, and 2 years of Windows NT experience.

After this brief introduction, the panel began taking questions.

When browsing, the client has a fixed limitation on the number of shares displayed. Why?

Each subnet has a master browser. Every machine broadcasts its presence. The master browser picks up the broadcast and synchronizes with the Primary Domain Controller. Based on the version of the client and the master browser, there are some limitations on the number of shares displayed.

Is there a scriptable setup for Windows NT print servers?

It can and has been done, but the script is proprietary code and can't be released.

Is there an equivalent to sudo for Windows NT?

No.

Are there any books or tutorials analogous to "UNIX for VMS users" to assist UNIX users when dealing with Windows NT?

Not in any single unified source.

Is there a passive way to set the system time over the network?

No, there are only active commands such as net time /set /y.

IP anomalies are occurring while multi-homing Windows NT. What can be done to prevent or minimize this?

Don't multi-home Windows NT unless absolutely necessary.

INVITED TALK

MS/Systems Management Server (SMS)

David Hamilton, Microsoft Corporation


Summary by Steve Hillman

The final session of the workshop was an invited talk by Microsoft on SMS. The session nearly didn't happen ­ news came down on Thursday that the speaker had had to cancel. This did not go over well with the audience, and one has to wonder what transpired at Microsoft when, on Friday, the audience was informed that the session would happen after all.

The talk was presented by David Hamilton, who is the product manager for Systems Management at Microsoft. Hamilton emphasized that SMS is just one product that the Systems Management group puts out, but it is their key package. SMS is currently at release 1.2.

SMS is primarily focused on three key areas: inventory, software distribution, and diagnostics. Inventory deals with the hardware and software on each workstation. Hamilton said that SMS is very heavily inventory focused; in his opinion, you have to have good inventory tracking to be able to perform either of the other two functions. The second function, software distribution, is the function most people buy SMS for. It allows you to distribute applications and OS updates to every workstation on the network. The last function, diagnostics, allows you to diagnose remote workstations. One of the most used tools is the remote-control tool that allows you to see and control a workstation's desktop.

Although SMS has a Windows NT focus, it will also manage Netware, LAN Manager, LAN Server, MS-DOS, Windows 3.1, Windows 95, Macs, and OS/2 clients.

SMS is geared toward enterprise networks with hundreds or thousands of workstations. It is designed to be distributed ­ multiple-site organizations can have a site server at each location. Each site server becomes the point of communication both into that site and back to the master server. Site servers do not have to be Windows NT servers. If an organization has Netware servers already deployed, SMS has agents that can run on those servers and provide basic SMS functionality.

SMS is also bandwidth aware. It can be configured to use only a certain percentage of available WAN bandwidth for software distribution, diagnostics, or management. It can also be configured to use that bandwidth only at certain times during the day.

Hamilton admitted that SMS 1.2 is weak in the area of remote software installation (a key feature that many people buy it for), but Microsoft has recently released a couple of add-ins that promise to greatly improve it. The first product, Package Command Manager (PCM), runs as a service on each NT workstation and allows installations that must run with administrative privileges to run regardless of what user (if any) is logged in at the workstation.

The second add-on is the SMS Installer, an extremely sophisticated package installer that uses snapshotting to assemble packages. This is useful for applications that either aren't designed for remote installation or can't be automatically installed at all. Essentially, a snapshot is taken of a "test" machine before installation; then the application is installed manually, and another snapshot is taken afterwards. The two snapshots are compared, and a package containing all of the differences ­ both files and registry settings ­ is assembled. A script is also put together to install the package automatically at each workstation.
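As a rough illustration of the snapshot-diffing idea only (not of SMS Installer's actual formats), here is a minimal Python sketch that treats a snapshot as a flat mapping of file paths and registry-style settings to their contents:

    def diff_snapshots(before, after):
        """Return items added or changed by the install, plus items it removed."""
        package = {k: v for k, v in after.items() if before.get(k) != v}
        removed = [k for k in before if k not in after]
        return package, removed

    # Two tiny snapshots of the same "test" machine, before and after a manual install.
    before = {r"C:\app\old.dll": "v1"}
    after = {r"C:\app\old.dll": "v1",
             r"C:\app\new.exe": "v2",
             r"HKLM\Software\App\Path": r"C:\app"}

    package, removed = diff_snapshots(before, after)
    print("package contents:", package)    # what must be delivered to each workstation
    print("removed by install:", removed)  # what the install script must delete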

It's claimed that SMS Installer makes intelligent decisions about what does and does not need to be installed at each workstation (in the cases where remote workstations are configured differently than the "test" machine). By doing before and after snapshots of the workstation, the installer provides the capability to do an uninstall or even a full rollback in case of difficulty. SMS Installer also supports signing of packages to increase security.

On the subject of NT 5.0, Hamilton explained that, although it makes a lot of advancements in the area of manageability, NT 5.0 does not overlap much with SMS's capabilities. Both inventory management and diagnostics are strictly the domain of SMS. Where there is some overlap is with automated software installation. NT 5.0 includes a product currently named Darwin that attempts to simplify rollout of applications in an enterprise network. The SMS Installer has been designed to interact with Darwin, handing off to Darwin on the workstations when it's present.

Microsoft's Systems Management group is spearheading another new development: Web-Based Enterprise Management (WBEM). The name is somewhat misleading, because WBEM has nothing to do with the Web. Essentially, WBEM is to systems management what SNMP is to network management, although WBEM will not be constrained to just systems. The goal is to provide a common layer and interface for getting at information about a system. That information could be software config info, user info, hardware info, network stats, or performance data. It doesn't matter; the WBEM layer will bring it all together and translate it if necessary. WBEM is being included in both Windows 98 and NT 5.0 and promises to greatly improve the manageability of these platforms by products such as SMS.

Kristina Harris asked Hamilton to clarify what software is available today and what platforms it runs on. SMS is currently at revision 1.2 and will run on either NT 3.51 or NT 4.0. Several add-ons have been produced; most are bundled in the SMS Service Pack 2. The two add-ins, PCM and SMS Installer, are separately downloadable. All of these add-ins work under both NT 3.51 and 4.0.

An audience member asked for clarification on whether SMS needed SQL Server and a PDC or not. Hamilton said that SMS requires SQL Server (either 6.0 or 6.5). It does not work with any other database software. SMS must run on a Primary or Backup Domain Controller; however, it does not have to run on the same machine as the SQL Server (in fact, for performance reasons, it's better if it does not).

Another audience member asked if documented APIs are available for WBEM. There are, and Hamilton referred him to a Web site where it's all available: <https://wbem.freerange.com>.

Someone asked if SMS Installer's rollback capability consumes a lot of disk space. Hamilton said that it does, because it has to save a copy of everything it replaces. But the installer includes sophisticated scripts that allow you to uninstall by walking backwards through the script. This isn't the same as a full rollback, but it uses little or no disk space.
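The "walking backwards" idea can be shown with a small sketch: each recorded install step has an inverse, and an uninstall replays the inverses in reverse order. The step representation below is purely hypothetical and only illustrates the shape of the technique:

    # Hypothetical recorded install script: (action, argument) pairs.
    install_script = [
        ("copy_file", r"C:\app\new.exe"),
        ("set_registry", (r"HKLM\Software\App\Path", r"C:\app")),
    ]

    # Inverse of each action; here they only print what a real uninstaller would do.
    INVERSE = {
        "copy_file":    lambda path: print("delete file", path),
        "set_registry": lambda kv:   print("remove registry value", kv[0]),
    }

    def uninstall(script):
        # Undo the steps in reverse order, without needing saved copies of old files.
        for action, arg in reversed(script):
            INVERSE[action](arg)

    uninstall(install_script)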

Remy Evard asked what resources are available for understanding and successfully deploying SMS and how users can get feedback to the development team on areas that need improvement. Hamilton said that there's a lot of information online on the Microsoft Web page, including planning guides for rolling out SMS and MS Office using SMS; see <https://www.microsoft.com/smsmgmt>. For feedback, there are two mailing lists, <smswish@microsoft.com> and <manageit@microsoft.com>, for SMS-specific and general management suggestions, respectively.

Someone asked if Microsoft intends to support UNIX with SMS. Hamilton said that they worked with DEC quite a while ago to develop UNIX management. They ended up handing the work over to DEC, and it's been released under the title Assetworks.

Jonathon Ott of HP asked if there is a way to do SMS's remote control from the Windows 95 platform. Hamilton said that a company called Computing Edge makes an SMS add-on that allows the SMS admin program to run under Windows 95.

An audience member noted that SMS lacks support for "distribution by courier." [Editor's note: Courier may be the Tivoli Systems software distribution product.] Hamilton said they are working on functionality that will allow you to transfer a software package on a CD-ROM via courier and just send the commands over the WAN to install the package. This functionality will appear in the next full release of SMS.

Derek Simmel of CERT asked what security information the inventory manager could gather and also what security measures are being used on software distribution. Hamilton, admitting that he isn't a security expert, said that the inventory manager can gather any information that a process running on the remote workstation has access to ­ such as registry settings or file information. As far as software distribution is concerned, he said they're not introducing any new holes, but are just using the existing network structure that's already in place (although he had said earlier that SMS Installer supports package signing).

David Blank-Edelman from Northeastern University commented that he'd been doing systems management for at least 15 years and had come across a huge wealth of information, some of which was in the form of conference proceedings and such. He wondered if Microsoft had access to that information and was using it in its design philosophies. Hamilton said that he spends a lot of his time talking with customers to see what they want in the products, but he had no idea whether Microsoft had any of those proceedings.

Chris Kulpa of Michigan State University asked how the HKEY_CURRENT_USER hive in the NT registry gets updated when a package is installed and a user is not logged on. Hamilton said that currently it isn't. What really needs to happen is that packages need to be split apart into a user section and a system section and each section installed separately. The user section would be installed the next time the user logs on. Microsoft plans to have this in the next release. Note that this is the same functionality that IPL wrote (see the Administration III session summary on page 62).

Bill Evans asked what can be done about software that requires software keys or serial numbers to be entered at the workstation as it's installed. Hamilton said it's possible to put the numbers required into the registry as part of the distribution and then pull out the desired number as part of the install.
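A minimal sketch of that approach, assuming an invented registry location (the key path and value name are hypothetical, and the standard-library winreg module merely stands in for whatever mechanism SMS actually uses):

    import winreg  # Windows-only standard library module

    KEY_PATH = r"Software\ExampleVendor\Distribution"   # hypothetical location

    def stage_serial(serial):
        # The distribution job writes the product key before the package runs.
        key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH)
        winreg.SetValueEx(key, "SerialNumber", 0, winreg.REG_SZ, serial)
        winreg.CloseKey(key)

    def read_serial():
        # The package's install script reads the key back when the installer asks for it.
        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH)
        value, _type = winreg.QueryValueEx(key, "SerialNumber")
        winreg.CloseKey(key)
        return value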

Rik Farrow said that most UNIX administrators are nervous about turning an installation script with administrative privileges loose on a system. Hamilton replied that you can "test drive" a script first and get a printout of exactly what changes the script will make to the system without it actually making the changes. This allows you to avoid things like installation of back doors.
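That "test drive" is essentially a dry run. A minimal sketch of the pattern, with a step representation that is invented for illustration and unrelated to SMS's real script format:

    def run_script(steps, dry_run=True):
        """Print each planned change; apply it only when dry_run is False."""
        for description, apply_change in steps:
            print(("WOULD: " if dry_run else "DOING: ") + description)
            if not dry_run:
                apply_change()

    steps = [
        ("copy app.exe to C:\\Program Files\\App", lambda: None),               # stand-in action
        ("set registry value HKLM\\Software\\App\\InstallPath", lambda: None),  # stand-in action
    ]

    run_script(steps, dry_run=True)    # review the printout first
    # run_script(steps, dry_run=False) # then let it make the changes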

T.J. Fisk of Qualcomm commented that his company has a software librarian who handles distribution of software to individuals. Currently the packages are mailed out, taking a week or more to be delivered. He wanted to know if it is possible in SMS to automate this, making it easy to deliver a single package to a single person. Hamilton responded that currently this is possible with SMS, but it's difficult. There are rule-based algorithms for deciding what machines to deliver software to, but one needs a lot of training to write those rules. With the introduction of Microsoft Management Console in NT 5.0, this should get a lot easier.

Scott Fadden of Sequent Computers asked if there is a database schema available for the SMS database. Hamilton said this is probably the single most frequent request they get, but they can't provide one because their database changes too much from release to release. Instead, they supply a utility called SMS Views that provides a series of ODBC interfaces into the database.
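For example, a report could go through such a view over ODBC rather than touching the release-specific tables. The DSN, credentials, and view name below are invented for illustration, and pyodbc merely stands in for any ODBC client:

    import pyodbc  # generic ODBC client, used here only for illustration

    conn = pyodbc.connect("DSN=SMS_Views;UID=report;PWD=secret")  # hypothetical DSN
    cursor = conn.cursor()

    # Query a hypothetical inventory view instead of the underlying tables.
    cursor.execute("SELECT MachineName, OSVersion FROM v_MachineInventory")
    for machine_name, os_version in cursor.fetchall():
        print(machine_name, os_version)

    conn.close()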

An audience member pointed out that throughout the conference, there's been a strong interest in getting more command line support into NT. He asked if SMS would support more command line functionality. Hamilton responded that they're working on that slowly, but they still won't have everything accessible from the command line by NT 5.0.

This marked the last session of the workshop. Phil Scarr concluded with a few words. When he asked who would attend a workshop like this next year, virtually all hands went up. The program committee had already agreed that another workshop would be a good idea.

The Large Installation System Administration of Windows NT Conference is scheduled for August 5-7, 1998, in Seattle, WA.

Great Workshop Quotes (from Phil Scarr's Closing Remarks)

Have you ever heard of drag and UN-drop?

Iconic systems management is a fake!

Human efforts alone are insufficient. . .

Last time I checked, Office 97 was a productivity app, NOT an operating system!

Now booting the registry. . . please wait. . .

This really isn't as bad as it sounds. . .

NT 5.0 . . . I'm running it today. What I run, you don't want!

Has anyone noticed that [cable TV] Channel 29 is dedicated to the Windows 95 Blue Screen of Death?

Sysadmins always know what should be in the next revision. . .

Windows 95 gives you all the security you deserve. . .

My day job is installing and configuring Exchange at Boeing!

Common Themes and Observations (from Phil Scarr's Closing Remarks)

NT is far too/more complicated (than UNIX)

Management demands rapid deployment

No good management tools

NT is too "closed"

Support departments are given far too few resources to manage NT

There are not enough NT "experts" yet

Proposed NT enterprise infrastructure doesn't scale

NT is becoming instantly mission critical

User namespace integration is hard

Microsoft is surprisingly willing to listen

Integration issues are becoming better understood

Samba is cool

Everybody is having the same problems I am

