Tuesday, May 20, 2008

Introduction to Google Search Quality



Search Quality is the name of the team responsible for the ranking of Google search results. Our job is clear: a few hundred million times a day, people ask Google questions, and within a fraction of a second Google needs to decide which among the billions of pages on the web to show them -- and in what order. Lately, we have been doing other things as well. But more on that later.

For something that is used so often by so many people, surprisingly little is known about ranking at Google. This is entirely our fault, and it is by design. We are, to be honest, quite secretive about what we do. There are two reasons for it: competition and abuse. Competition is pretty straightforward. No company wants to share its secret recipes with its competitors. As for abuse, if we make our ranking formulas too accessible, we make it easier for people to game the system. Security by obscurity is never the strongest measure, and we do not rely on it exclusively, but it does prevent a lot of abuse.

The details of the ranking algorithms are in many ways Google's crown jewels. We are very proud of them and very protective of them. By some estimate, more than one thousand programmer/scientist years have gone directly into their development, and the rate of innovation has not slowed down.

But being completely secretive isn’t ideal, and this blog post is part of a renewed effort to open up a bit more than we have in the past. We will try to periodically tell you about new things, explain old things, give advice, spread news, and engage in conversations. Let me start with some general pieces of information about our group. More blog posts will follow.

I should take a moment to introduce myself. My name is Udi Manber, and I am a VP of engineering at Google in charge of Search Quality. I have been at Google for over two years, and I have been working on search technologies for almost 20 years.

The heart of the group is the team that works on core ranking. Ranking is hard, much harder than most people realize. One reason for this is that languages are inherently ambiguous, and documents do not follow any set of rules. There are really no standards for how to convey information, so we need to be able to understand all web pages, written by anyone, for any reason. And that's just half of the problem. We also need to understand the queries people pose, which are on average fewer than three words, and map them to our understanding of all documents. Not to mention that different people have different needs. And we have to do all of that in a few milliseconds.
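To make the task concrete, here is a deliberately tiny sketch of the basic machinery: an inverted index over a handful of made-up documents, and a crude scoring function that maps a short query onto them. The documents, the scoring rule, and the function names are illustrative assumptions, not anything Google uses; a real system combines vastly more signals.

```python
from collections import defaultdict

# Toy illustration only: index a few made-up documents, then map a short
# free-text query onto them. The score (number of matched query terms, ties
# broken by term frequency) is a crude stand-in for real ranking signals.
documents = {
    "doc1": "how to change a bicycle tire",
    "doc2": "bicycle repair and tire maintenance guide",
    "doc3": "history of the bicycle",
}

index = defaultdict(list)  # term -> list of (doc_id, term frequency)
for doc_id, text in documents.items():
    terms = text.lower().split()
    for term in set(terms):
        index[term].append((doc_id, terms.count(term)))

def search(query: str):
    scores = defaultdict(lambda: [0, 0])  # doc_id -> [matched terms, total tf]
    for term in query.lower().split():
        for doc_id, tf in index.get(term, []):
            scores[doc_id][0] += 1
            scores[doc_id][1] += tf
    return sorted(scores, key=lambda d: scores[d], reverse=True)

print(search("bicycle tire"))  # doc1 and doc2 match both terms and outrank doc3
```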

The most famous part of our ranking algorithm is PageRank, an algorithm developed by Larry Page and Sergey Brin, who founded Google. PageRank is still in use today, but it is now part of a much larger system. Other parts include language models (the ability to handle phrases, synonyms, diacritics, spelling mistakes, and so on), query models (it's not just the language, it's how people use it today), time models (some queries are best answered with a 30-minute-old page, and some are better answered with a page that has stood the test of time), and personalized models (not all people want the same thing).
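For readers unfamiliar with it, here is a minimal sketch of PageRank as originally published, computed by power iteration over a toy link graph in Python. The damping factor, iteration count, and graph are illustrative assumptions; this is the textbook version, not the much larger system described above.

```python
# Minimal PageRank sketch: power iteration over a tiny link graph.
# The graph, damping factor, and iteration count are illustrative assumptions.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}

    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

toy_web = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```

The intuition is the one Page and Brin described: a page's importance is the chance that a "random surfer" who keeps following links (and occasionally jumps to a random page, with probability 1 minus the damping factor) ends up on that page.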

Another team in our group is responsible for evaluating how well we're doing. This is done in many different ways, but the goal is always the same: improve the user experience. This is not the main goal; it is the only goal. There are automated evaluations every minute (to make sure nothing goes wrong), periodic evaluations of our overall quality, and, most importantly, evaluations of specific algorithmic improvements. When an engineer gets a new idea and develops a new algorithm, we test the idea thoroughly. We have a team of statisticians who look at all the data and determine the value of the new idea. We meet weekly (sometimes twice a week) to go over those new ideas and approve new launches. In 2007, we launched more than 450 new improvements, about 9 per week on average. Some of these improvements are simple and obvious -- for example, we fixed the way Hebrew acronym queries are handled (in Hebrew an acronym is denoted by a quote mark (") before the last character, so IBM is written IB"M), and some are very complicated -- for example, we made significant changes to the PageRank algorithm in January. Most of the time we look for improvements in relevancy, but we also work on projects where the sole purpose is to simplify the algorithms. Simple is good.
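To illustrate the flavor of the Hebrew acronym fix, here is a minimal sketch, assuming only the rule described above (a quote mark or gershayim before the last character marks an acronym). The function name and regular expression are hypothetical, not Google's actual normalization code.

```python
import re

# Hypothetical sketch: treat a quote or gershayim just before the last character
# of a token as an acronym marker and strip it, so a query for IB"M can also
# match documents containing IBM (and vice versa).
GERSHAYIM = "\u05F4"  # Hebrew punctuation gershayim; a plain " is often typed instead

def normalize_acronym(token: str) -> str:
    """Strip a gershayim or quote that appears just before the last character."""
    return re.sub(f'["{GERSHAYIM}](?=.$)', "", token)

assert normalize_acronym('IB"M') == "IBM"
assert normalize_acronym("IBM") == "IBM"
```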

International search has been one of our key focus areas in the past two years. This means all spoken languages, not just the major ones. Last year, for example, we made major improvements in Azerbaijani, a language spoken by about 8 million people. In the past few months, we launched spell checking in Estonian, Catalan, Serbian, Serbo-Croatian, Ukrainian, Bosnian, Latvian, Filipino Tagalog, Slovenian and Farsi. We organized a network of people all over the world who provide us with feedback, and we have a large set of volunteers from all parts of Google who speak different languages and help us improve search.

Another team is dedicated to new features and new user interfaces. Having a great engine is necessary for a great car, but it is not sufficient. The car has to be comfortable and easy to drive. The Google search user interface is quite simple. Very few of our users ever read our help pages, and they can do very well without them (but they're good reading nevertheless, and we're working to improve them). When we add new features we try to ensure that they will be intuitive and easy to use for everyone. One of the most visible changes we made in the past year was Universal Search. Others include the Google Notebook, Custom Search Engines, and of course, many improvements to iGoogle. The UI team is helped by a team of usability experts who conduct user studies and evaluate new features. They travel all over the world, and they even go to people's homes to see users in their natural habitat. (Don't worry, they do not come unannounced or uninvited!)

There is a whole team that concentrates on fighting webspam and other types of abuse. That team works on a variety of issues, from hidden text to off-topic pages stuffed with gibberish keywords, plus many other schemes that people use in an attempt to rank higher in our search results. The team spots new spam trends and works to counter those trends in scalable ways; like all other teams, they do it internationally. The webspam group works closely with the Google Webmaster Central team, so they can share insights with everyone and also listen to site owners.
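As a toy illustration of the simplest of these tricks, the sketch below flags white text declared in an inline style on an assumed white page background. The regular expression and function are hypothetical stand-ins; real spam detection is far more sophisticated and adversarial, and this is not Google's method.

```python
import re

# Highly simplified, hypothetical illustration of one classic hidden-text trick:
# white text on a white background. Only inline styles are considered here.
WHITE_TEXT = re.compile(r'style="[^"]*color:\s*(?:#f{6}|#fff|white)\b', re.IGNORECASE)

def looks_like_hidden_text(html: str, page_background: str = "white") -> bool:
    """Flag inline styles that color text white on an (assumed) white page."""
    return page_background == "white" and bool(WHITE_TEXT.search(html))

spammy = '<p style="color:#ffffff">cheap tickets cheap tickets cheap tickets</p>'
print(looks_like_hidden_text(spammy))  # True
```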

There are other teams devoted to particular projects. In general, our organizational structure is quite informal. People move around, and new projects start all the time.

One of the key things about search is that users' expectations grow rapidly. Tomorrow's queries will be much harder than today's queries. Just as Moore's law governs the doubling of computing speed every 18 months, there is a hidden, unwritten law that doubles the complexity of our most difficult queries in a short time. This is impossible to measure precisely, but we all feel it. We know we cannot rest on our laurels; we have to work hard to meet the challenge. As I mentioned earlier, we will continue providing you with updates on search quality in the coming months, so stay tuned.
