

The Inktomi Search Engine

By Eric Brewer, University of California, Berkeley

Summary by Jerry Peek

Inktomi is one of several popular search engines on the World Wide Web. Like other search engines, Inktomi lets Web users hunt for Web pages that contain a word or phrase. The world of search engines is competitive, and Eric Brewer didn't miss many chances to say where Inktomi is superior to other search engines, especially to AltaVista. His talk also had plenty of technical information, including some about AltaVista that wasn't covered in the AltaVista session. [Also see the next session report.]

In general, search engines have three parts. A crawler searches the Web for pages (documents) by following hypertext links from other pages. A database stores information about the pages. And an HTTP interface (basically, a Web server) handles users' queries on the database.
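
For concreteness, here is a minimal Python sketch of those three parts; the crawler is reduced to a stub that feeds one hand-written page into the index, and all names (add_document, QueryHandler, the in-memory dictionary) are illustrative assumptions rather than anything from the talk.

    # Toy illustration of the three parts of a search engine; not Inktomi's code.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    index = {}  # word -> set of URLs: a stand-in for the database

    def add_document(url, text):
        """Called by the crawler for every page it fetches."""
        for word in text.lower().split():
            index.setdefault(word, set()).add(url)

    class QueryHandler(BaseHTTPRequestHandler):
        """The HTTP interface: answers ?q=word queries from the index."""
        def do_GET(self):
            q = parse_qs(urlparse(self.path).query).get("q", [""])[0].lower()
            hits = "\n".join(sorted(index.get(q, set())))
            self.send_response(200)
            self.end_headers()
            self.wfile.write(hits.encode())

    add_document("http://example.com/a", "commodity clusters scale well")
    HTTPServer(("", 8080), QueryHandler).serve_forever()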

Inktomi now indexes 54 million full-text documents. Its crawler reads 10 million documents a day. The number of user queries is growing rapidly: 4% or more per week (some weeks more than 10%).

Eric gave four metrics for a search engine. First, completeness: what fraction of the Web is indexed? Second, freshness: how up-to-date is the index? Third, speed. Fourth, power: how well can a user focus the search?

As everyone has heard, the Web is exploding. Eric cited an IDC projection of 805 million documents on the Web by 1999; the Web is growing faster than hardware speeds. This growth raises two main issues. First, to handle the exponential growth of users and documents, a server must be scalable: because it's hard to predict far in advance how big you'll need to be, it's best to be able to grow as you need to. Second, the number of documents outside the US is growing very rapidly, but Internet connections between continents are slow and congested. So Eric believes that, to be efficient and avoid adding to the congestion, each continent should have its own crawler, with the crawlers' data merged into one big database.
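
A rough Python sketch of that merge step, under the assumption that each regional crawler emits a simple word-to-URLs mapping (the format and the function name merge_indexes are invented for illustration):

    # Hypothetical merge of per-continent crawl results into one index.
    def merge_indexes(regional_indexes):
        """Each regional index maps word -> set of URLs; duplicate URLs
        collapse automatically because the values are sets."""
        merged = {}
        for region in regional_indexes:
            for word, urls in region.items():
                merged.setdefault(word, set()).update(urls)
        return merged

    us = {"search": {"http://example.com/us1"}}
    asia = {"search": {"http://example.jp/a1"}, "engine": {"http://example.jp/a2"}}
    combined = merge_indexes([us, asia])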

Eric's key idea has been to build a "supercomputer" (he used the word loosely) from commodity building blocks. This has several advantages over big, super-fast, leading-edge hardware. There are many vendors, with lots of competition, so the cost/performance ratio is great and new hardware can be obtained, fully configured, in 48 hours. Because you aren't on the bleeding edge, reliability is excellent. The nodes are heterogeneous: you can add newer hardware alongside older, slower nodes and simply tune the load balancing to match each node's capacity. Inktomi also uses off-the-shelf 4 GB disks (400 of them, in a RAID configuration), so disk bandwidth is multiplied. The nodes (about 50 of them in January 1997) are tied together by a custom Myrinet network: 160 MB/second bandwidth and a 25-microsecond user-level-to-user-level round trip. For fault tolerance, disks and CPUs are linked in pairs: if one half of a pair fails, the other takes over. There is no replication except within the RAID.
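
The capacity-matched load balancing and paired failover can be pictured with a short Python sketch; the node names, weights, and buddy assignments below are made up for illustration, not taken from the talk.

    import random

    # Hypothetical capacity-weighted dispatch: faster nodes (higher weight)
    # receive proportionally more work, and each node has a "buddy" that
    # takes over if it fails.
    capacity = {"node-a": 1.0, "node-b": 1.0, "node-c": 2.5}
    buddy = {"node-a": "node-b", "node-b": "node-a", "node-c": "node-a"}
    alive = {"node-a", "node-b", "node-c"}

    def pick_node():
        names = list(capacity)
        weights = [capacity[n] for n in names]
        node = random.choices(names, weights=weights, k=1)[0]
        return node if node in alive else buddy[node]

    alive.discard("node-c")   # simulate a failure
    print(pick_node())        # node-c's share now lands on its buddy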

The database is spread across the nodes: each node's disks and memory hold a share of it. Documents are distributed randomly across the nodes, so every node takes part in each search, and the inter-node network is crucial. Eric says his network is nowhere near capacity. All the nodes at his site run the same software and speak HTTP to handle user queries.
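
One way to picture the scatter/gather this implies is a small Python sketch; the shard count, thread pool, and function names are assumptions for illustration.

    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical scatter/gather: documents are placed on nodes essentially at
    # random, so a query fans out to every node and the partial hits are merged.
    NUM_NODES = 4
    shards = [dict() for _ in range(NUM_NODES)]   # per-node word -> set of URLs

    def store(url, text):
        shard = shards[hash(url) % NUM_NODES]      # effectively random placement
        for word in text.lower().split():
            shard.setdefault(word, set()).add(url)

    def search(word):
        with ThreadPoolExecutor(NUM_NODES) as pool:
            partials = pool.map(lambda s: s.get(word, set()), shards)
        return set().union(*partials)

    store("http://example.com/a", "commodity cluster search")
    store("http://example.com/b", "distributed search nodes")
    print(search("search"))   # hits gathered from every shard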

In contrast to Inktomi's distributed architecture, Eric said, other search engines (like Lycos and AltaVista) use a more traditional architecture: big SMP machines with lots of memory. Databases are stored on RAID disks and paged into shared memory. Processors handle queries independently on a shared database. To scale to more users, these sites simply replicate the database. Replication also provides fault tolerance, which is effective, but expensive. (There's no fault tolerance, though, if the engine has only a single SMP node.)

Inktomi also has partner sites in Australia and Japan. Because of the scalability of the design, partners can grow to fit their demand: starting out small and growing as their part of the Web grows. Growing incrementally is also an advantage because a site doesn't need to make a huge initial investment (in a big SMP box), so there's less risk. Each partner site also runs a crawler on its geographic part of the Web. Data are transferred between sites on magnetic tape, because shipping magtape delivers more bandwidth at lower cost than sending the data over congested intercontinental links.
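
A back-of-the-envelope comparison shows why tape can win; the tape capacity, box size, and shipping time below are assumed numbers for illustration, not figures from the talk.

    # Illustrative only: effective bandwidth of shipping a box of tapes.
    tape_gb = 20             # assumed capacity per cartridge, in GB
    tapes_per_box = 20
    shipping_hours = 48      # assumed courier transit time

    bits = tape_gb * tapes_per_box * 8e9
    effective_bps = bits / (shipping_hours * 3600)
    print(f"~{effective_bps / 1e6:.0f} Mbit/s sustained")   # roughly 19 Mbit/s
    # A shared 1997-era intercontinental link delivers far less to any one user.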

The database uses extreme compression to get as many documents into memory as possible. The database is updated by Web crawlers. The crawler reads the Web (54 million documents) as fast as possible, but without overwhelming international networks or individual Web servers. It is multithreaded and can crawl 10,000 sites at once, but to limit the impact it hits any one site only about once every two minutes. The crawling rate is also adjustable per site: the New York Times is hit daily because its data change every day, while people's home pages that don't change often might be hit infrequently. The hit rate is chosen by tracking how often each site has changed in the past.
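
The politeness and adaptive-revisit policies can be sketched in a few lines of Python; the two-minute delay comes from the talk, while the starting interval, bounds, and halving/doubling rule are assumptions for illustration.

    import time

    POLITE_DELAY = 120          # seconds between requests to the same site

    last_hit = {}               # site -> time of the last request
    revisit_interval = {}       # site -> seconds between full recrawls

    def may_fetch(site, now=None):
        """Politeness check: has enough time passed since we last hit this site?"""
        if now is None:
            now = time.time()
        return now - last_hit.get(site, 0) >= POLITE_DELAY

    def record_crawl(site, changed, now=None):
        """Adapt the revisit rate to how often the site has actually changed."""
        if now is None:
            now = time.time()
        last_hit[site] = now
        interval = revisit_interval.get(site, 24 * 3600)
        # Recrawl changing sites sooner, stable sites later (bounded at one hour).
        revisit_interval[site] = max(3600, interval / 2) if changed else interval * 2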

Inktomi is at https://www.hotbot.com.

Originally published in ;login: Vol. 22, No.2, April 1997.

