

NTnix 98 . . . You Are There

by George M. Jones

Let's get the truth out of the way up front. I haven't written a line of code for NT in about three years, and I haven't run it in roughly six months, since shortly after measuring a cold-boot-to-useful-work time of five minutes and 20 seconds. My main system, a Linux box, hasn't needed a reboot in 397 days and counting; it was last rebooted only because of a hardware failure.

That being said, the point of this article is to report on experiences and observations at two recent events sponsored by USENIX. The two events, the Second USENIX Windows NT Symposium and the Large Installation System Administration of Windows NT Conference, were held back-to-back in Seattle August 3-8, 1998, separated by a day of tutorials. I refer to these combined events as "the conference."

While much of my interest lies in technical areas, my primary reasons for attending these events were social. Having been a member of USENIX and the UNIX community for 11 years, I wanted to connect with my professional peers and friends on issues related to NT. Many of them have embraced "the dark side." As yet, I haven't been able to bring myself to do so. In the end, I would say that I met my goals of obtaining information and perspective.

The events drew together both senior Microsoft technical people and long-time members of the USENIX/UNIX community in a way that no other event could. While there was some not-so-thinly veiled animosity toward "the Borg," I think that this conference came about as close as you can to mixing oil and water successfully.

Now, on to specific observations.

NT/UNIX Integration

This being a USENIX conference, a major focus was on UNIX/NT integration and migration. There were at least six different systems presented in papers or discussed at BOFs that aimed at easing some form of integration or migration between Windows NT and UNIX.

Samba. Samba is the granddaddy of all UNIX/NT integration projects. It enables UNIX systems to act as CIFS/SMB (Windows) fileservers. It was (and continues to be) reverse-engineered to be "bug compatible" with Microsoft file servers by watching the bits on the wire. Many of the other integration efforts use Samba as their starting point. There was a Samba BOF where Samba issues and futures were discussed. Some of the interesting work is aimed at enabling Samba to function as either a primary or backup domain controller. There is an interesting paper, "NT 3.5/4.0 Domains for UNIX," in the LISA-NT proceedings. See <https://samba.anu.edu.au/> for further information. Oh, yeah, and here's a mind-bending rumor: there's apparently been at least talk of porting Samba to NT.
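For flavor, here is a minimal smb.conf of the sort such a server might use; the workgroup name and share settings are illustrative, not a recommendation:

    [global]
       workgroup = WORKGROUP
       security = user

    [homes]
       comment = Home directories
       browseable = no
       writable = yes

With little more than that, a UNIX box shows up to Windows clients as a file server and exports each user's home directory under the user's own credentials.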

Network Appliance. Network Appliance (<https://www.netapp.com>) makes a line of fileservers that "bat both ways": they act as both NFS fileservers and CIFS/SMB fileservers. Netapp presented two papers about their attempts to reconcile the UNIX and NT security models (permissions vs. ACLs) and user identities (UID/GID vs. SIDs). The papers do a good job of outlining the issues and potential solutions. Not surprisingly, some of these same issues were discussed at the Samba BOF.
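To give a flavor of the identity problem: the mapping ultimately comes down to a table pairing NT accounts with UNIX accounts. Netapp filers keep such a table in a usermap.cfg file, with entries roughly like the following (the domain and user names here are made up):

    ENGINEERING\jsmith == jsmith
    ENGINEERING\Administrator => root

The hard cases the papers wrestle with are the ones no such table can express cleanly, such as an NT ACL that has no sensible rendering in UNIX mode bits.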

UWIN. David Korn ran a BOF to discuss his UWIN "product." UWIN is fundamentally a porting layer that allows UNIX code to work on top of Win32. It allows you to use a full set of your old "friends," including, of course, the Korn Shell, on Windows NT. A paper on UWIN was presented at last year's conference. See <https://www.research.att.com/sw/tools/uwin/> for complete information.

Cygwin32. Geoffrey Noer of Cygnus Solutions presented a paper about Cygwin32. Cygwin32 is another UNIX-calls-on-top-of-Win32 porting layer. It is fundamentally similar to UWIN in that both allow UNIX code to be compiled and run with few or no modifications on Win32 systems. The biggest nontechnical difference is that source for Cygwin32 is available for free under the GPL. Cygwin32 has been used to port the GNU tools, compilers, and many other popular programs to Win32. It is available for download from <https://www.cygnus.com/misc/gnu-win32>.
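To make the porting-layer idea concrete, here is the sort of strictly POSIX C program that UWIN and Cygwin32 aim to compile and run unchanged; under either layer, the fork() and waitpid() calls below are emulated on top of Win32 rather than handled by NT directly. (A minimal sketch of my own, not taken from either product.)

    /* posix_hello.c -- builds unchanged on UNIX, UWIN, or Cygwin32 */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();            /* emulated on Win32 by the porting layer */

        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {                /* child */
            printf("child: pid %d\n", (int)getpid());
            _exit(0);
        }
        waitpid(pid, NULL, 0);         /* parent waits for the child */
        printf("parent: child %d done\n", (int)pid);
        return 0;
    }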

Interix: Another way to crack a nut. Stephen Walli of Softway Systems ran a BOF to discuss the Interix product (formerly OpenNT, also presented at last year's conference). Interix takes a different approach. Rather than layering UNIX calls in a library on top of Win32, Softway Systems wrote a new POSIX subsystem for NT (one that, unlike Microsoft's, is functional). The benefit is that UNIX apps run "native": Interix implements POSIX interfaces that make direct calls to the NT executive/kernel layer without the overhead of going through two layers of libraries. The downside (or feature, depending on whom you talk to) is that one cannot write mixed-mode applications with Interix, i.e., one cannot mix POSIX and Win32 calls in the same application. Information about Interix can be found at <https://www.interix.com>.
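A small illustration of that boundary, assuming nothing beyond what was described at the BOF:

    /* native.c -- lives entirely in the Interix POSIX subsystem */
    #include <unistd.h>

    int main(void)
    {
        write(STDOUT_FILENO, "hello, subsystem\n", 17);
        /* Adding #include <windows.h> and a MessageBox() call here is
         * exactly what Interix disallows: POSIX and Win32 calls cannot
         * share one executable. */
        return 0;
    }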

Microsoft's "Services for UNIX." Last but not least in the UNIX/NT integration arena, we must mention Microsoft's "Services for UNIX," announced during the NT futures session. Herein lies a story. Greg Sullivan, the Microsoft product manager for "Services for NT," gets up to describe the product. They're including the Intergraph NFS client and server, "telnet clients and servers that [really work]," and some UNIX command line tools and shells. People asked what the source of the tools and shells was. "MKS." So, a guy goes up to the microphone and starts pointing out where the MKS Korn shell deviates from the specs for the various versions of the Korn shell. To which Mr. Sullivan replies, "Well, do you know of anything better?" To which someone else in the audience replies, "That's David Korn." Can you say "setup"? Can you say "lion's den"?

NT in the Real World

The Microsoft Corporate Rollout of NT 5.0. There were two sharply contrasting stories of the corporate rollout of Windows NT. Curtis Cummings of Microsoft's ITG group described their plan for and progress with rolling out NT 5. It was clear from several comments that he is "one of us" and does not merely toe the party line. He described their careful plan to roll out NT 5 in a phased approach, with the rollout to be completed sometime after the product release. When his management/marketing team told him to move the schedule up by six months or so in order to be fully converted before product rollout, he said, they calmly reverted to their backup plan — "Dear God, Help!"

He also made some less-than-flattering references to a well-known system management product by a large Seattle-based software developer (and indicated that they have a growing group of "really good Perl coders"), in part because he can't get development interested in developing the types of tools he needs to manage a large environment. There was strong sentiment that Microsoft should get some of their Perl people to next year's conference to describe what they're doing and why.

One of Microsoft/ITG's goals is to have the whole company running on NT. They are very close. While this is understandable from a marketing/sales perspective, it's not clear that attaining this goal will allow Microsoft to truly "feel the pain" of the mixed environment that many of their customers live in and will continue to live in for the foreseeable future.

The Lucent/Bell Labs Non-rollout of NT. Thomas A. Limoncelli of Lucent Bell Labs gave a provocatively entitled paper, "Providing Reliable NT Desktop Services by Avoiding NT Server," in which he described how Bell Labs provides desktop services in a mixed NT and UNIX environment largely without running NT Server. He makes the case that everyone is better off if the focus is on what services should be provided, rather than on what server or client software you should run. Once the required services are identified, the means of providing them can be selected. They chose to follow two principles: (1) Use open standards wherever possible. (2) Keep it simple, stupid. Rather than saying "we're going to use the foo mail server" or the "bar mail client," they chose to use open protocol standards such as SMTP and IMAP. This allows users to use clients of their choosing and allows administrative staff to choose servers that they can manage well, and to dump vendors of poorly functioning back-end software. What back end do you want to run today?
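The wire makes the argument for them: any client that can utter the handful of lines below can hand mail to any SMTP server, which is exactly what lets the back end be swapped without users noticing. An illustrative session (addresses invented):

    HELO client.example.com
    MAIL FROM:<user@example.com>
    RCPT TO:<friend@example.org>
    DATA
    Subject: hello

    Swap the server tomorrow; this conversation doesn't change.
    .
    QUIT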

Installing Lots of Stuff

Four papers and one entire session dealt with one aspect or another of updating software/operating systems on large numbers of client machines. This tells me one of two things: either the tools provided by Microsoft to accomplish these tasks are inadequate, or LISA people just can't quit hacking together their own systems to solve problems they find interesting. I suspect the former (see Microsoft ITG resorting to training a group of Perl hacks).

sendmail and BIND for NT

Bo Ahlberg of Metainfo (<https://www.metainfo.com>) presented lessons learned from porting BIND and sendmail to NT. They started with a Win32-ified BIND and wound up developing their own porting layer, one that sounds suspiciously similar to Cygwin32 and UWIN. (How many people have invented that wheel?) They reported particular problems with fork(2) and select(2) semantics (as did Cygnus). He claimed that BIND is essentially single-threaded, whereas sendmail is "the forking daemon from hell" (quoted from memory — the words may be wrong, but the sentiment is correct). They had to get the fork semantics working fairly well to make sendmail work.
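To see why fork() dominated the effort, recall the shape of a fork-per-connection daemon like sendmail. A stripped-down sketch of the pattern (generic names, error handling omitted) shows how much of it leans on process semantics Win32 lacks natively:

    /* forkd.c -- skeleton of a fork-per-connection daemon, sendmail style */
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <netinet/in.h>
    #include <unistd.h>

    static void handle(int conn) { /* ... talk SMTP on conn ... */ close(conn); }

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(2525);       /* illustrative port, not 25 */
        bind(s, (struct sockaddr *)&addr, sizeof addr);
        listen(s, 5);

        for (;;) {
            int conn = accept(s, NULL, NULL);
            if (fork() == 0) {             /* child inherits the socket... */
                close(s);
                handle(conn);              /* ...and serves one connection */
                _exit(0);
            }
            close(conn);                   /* parent keeps listening */
            while (waitpid(-1, NULL, WNOHANG) > 0)
                ;                          /* reap finished children */
        }
    }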

Since they were working with the standard ISC code (<https://www.isc.org>), they had a strong desire to minimize source code changes. They wanted to be able both to take new code from the owner and to hand code back for incorporation into the main source tree without going through #ifdef hell. The porting layer (UNIX system calls on top of Win32 again) was a big win in achieving this goal. It was claimed that getting new ISC releases of BIND working on NT now takes only a day or two, whereas the first port took months.
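One way to picture how such a layer keeps #ifdef hell out of the main tree is to confine every platform difference to a single header, so the shared sources include it and otherwise speak plain POSIX. This header is hypothetical, invented for illustration:

    /* uport.h -- hypothetical porting-layer header: all platform
     * mess lives here, and nowhere else in the shared source tree. */
    #ifndef UPORT_H
    #define UPORT_H

    #ifdef _WIN32
    #include "uport_win32.h"   /* the layer's own fork()/select() emulation */
    #else
    #include <sys/types.h>     /* real UNIX: the system calls, as-is */
    #include <sys/select.h>
    #include <unistd.h>
    #endif

    #endif /* UPORT_H */

New ISC code then drops in with at most a new #include, which squares with the claimed day-or-two turnaround.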

Of Keynotes and Architecture

I view myself as mostly a pragmatic implementor. I make things work, software things mostly. But I also like to keep an eye out to know what's coming down the road in the research and hardware areas. This is where USENIX conferences are unique. No other organization, to my knowledge, has such a strong focus on "research that works" or would bring together sysadmins and researchers in a context that allows each to benefit from the others' insights.

Justin Rattner of Intel gave the opening keynote. He spent some time talking about Intel processor offerings, including the new Pentium II Xeon, multiprocessor offerings, etc. He also spent some time talking about the Virtual Interface Architecture (VIA). This is an architecture that permits "fully protected, direct user-level access to the network interface, the illusion of a dedicated network interface and application transfer of data to/from the network without operating system assistance (i.e., zero data copy)." VIA apparently has broad industry support and will be coming soon to high-performance servers and clusters in your area. Tell your friends. It looks like a Very Good Thing.
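I couldn't resist sketching what "zero data copy" looks like from the application's chair. The vi_* names below are hypothetical placeholders, not the real VIA interface; the point is what is absent, namely any system call on the data path:

    /* via_sketch.c -- the shape of VIA-style user-level networking.
     * Every vi_* name is hypothetical; no read()/write()/send()
     * system call ever appears on the data path. */
    #include <stddef.h>

    typedef struct vi_nic  vi_nic;     /* opaque handles (hypothetical) */
    typedef struct vi_conn vi_conn;

    void *vi_register(vi_nic *nic, size_t len);            /* pin buffer for NIC DMA */
    void  vi_post_send(vi_conn *c, void *buf, size_t len); /* queue descriptor to NIC */
    int   vi_send_done(vi_conn *c);    /* poll completion in user memory */

    void send_zero_copy(vi_nic *nic, vi_conn *conn, size_t len)
    {
        void *buf = vi_register(nic, len); /* once, at setup: map memory so
                                              the NIC can read it directly  */
        /* ... fill buf with the message ... */
        vi_post_send(conn, buf, len);      /* post straight from user space */
        while (!vi_send_done(conn))
            ;                              /* completion lands in user
                                              memory; still no kernel entry */
    }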

I was amused to see that one of the keynotes (Forest Baskett, SGI) was about the inevitability of NUMA (non-uniform memory access) shared-memory multiprocessors. I was a systems admin/programming consultant for a research NUMA machine 11 years ago. NUMA machines had, and still have, a lot of promise. The big question in my mind is still programming complexity. Can we build tools to hide it from average programmers while allowing them to take advantage of the architecture, or can we train them to think effectively about things like process and memory placement? Sure, the hardware guys can build NUMA machines, but can we program them?

Another interesting concept that was touched on in several talks and poster sessions was the idea of System Area Networks (SANs). These are networks that allow systems to be interconnected at roughly bus speeds. Again, there are profound implications for operating systems and applications.

At the End of the Day

I think a conversation that I had with my boss upon returning from the conference puts things in perspective with respect to "the Microsoft question." There are two poles. At one pole are the people who refuse to run Microsoft (or proprietary) products. At the other are the people who think that unless Microsoft invented (or "embraced and extended") it, it doesn't exist/matter ("Who needs that Internet thing anyhow?" "Nobody ever got fired for choosing . . .").

In the end, as usual, wisdom seems to be found in moderation and communication. Putting one's head in the sand, at either pole, is a mistake. The Microsoft crowd needs to realize that "the gospel according to the technical evangelists" may not be the whole truth. The UNIX/open systems crowd needs to realize that, like it or not, to a whole lot of people, computers and Microsoft/Windows are the same thing, and that lots of useful work gets done using Microsoft products. The USENIX NT conferences are a vital step toward opening up communication between the two crowds. I believe that everyone will benefit from the continued dialogue that they make possible.

