Language models like GPT-3 could herald a new kind of search engine

Now, a team of Google researchers has published a proposal for a radical overhaul that discards the ranking approach and replaces it with a single large AI language model, such as BERT or GPT-3, or a future version of them. The idea is that instead of searching for information in a long list of web pages, users would ask questions and have a language model trained on those pages answer them directly. The approach could change not only how search engines work, but also what they do and the way we interact with them.

Search engines have become faster and more accurate, even as the size of the web has exploded. AI is now used to rank results, and Google uses BERT to better understand search queries. Yet beneath these adjustments, all mainstream search engines still operate the same way they did 20 years ago: web pages are indexed by crawlers (software that reads the web non-stop and maintains a list of everything it finds), results that match a user’s query are gathered from that index, and those results are ranked.
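The crawl, index, retrieve, and rank pipeline described above can be sketched as a toy program. This is my illustration, not Google's implementation: the pages, the inverted index, and the term-overlap scoring are all simplified stand-ins for what production systems do at scale.

```python
from collections import defaultdict

# 1. "Crawled" pages: in reality a crawler reads the web non-stop.
pages = {
    "page1": "language models answer questions in natural language",
    "page2": "search engines rank web pages by relevance",
    "page3": "crawlers index web pages for search engines",
}

# 2. Build an inverted index: term -> set of pages containing it.
index = defaultdict(set)
for url, text in pages.items():
    for term in text.split():
        index[term].add(url)

def search(query):
    """Retrieve pages matching any query term, ranked by term overlap."""
    scores = defaultdict(int)
    for term in query.split():
        for url in index.get(term, ()):
            scores[url] += 1
    # 3. Rank: pages matching the most query terms come first.
    return sorted(scores, key=scores.get, reverse=True)

print(search("search engines rank pages"))  # page2 matches all four terms
```

Note that the user still receives a ranked list of documents, not an answer; that limitation is exactly what the proposal targets.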

“This model of index retrieval and then ranking has stood the test of time and has rarely been questioned or seriously rethought,” write Donald Metzler and his colleagues at Google Research.

The problem is that even the best search engines today still respond with a list of documents containing the requested information, not with the information itself. Search engines are also poor at handling queries that require answers drawn from multiple sources. It’s like asking your doctor for advice and receiving a list of articles to read instead of a straightforward response.

Metzler and his colleagues envision a search engine that behaves like a human expert. It would produce natural-language answers synthesized from more than one document, and back up those answers with references to supporting evidence, as Wikipedia articles do.
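The expert behavior described above can be sketched as a toy retrieval step that keeps track of which documents contributed to the answer. This is my illustration of the general idea, not the paper's method: the documents, the keyword-match relevance test, and the sentence-joining "synthesis" are hypothetical simplifications.

```python
documents = {
    "doc1": "BERT is a language model developed by Google.",
    "doc2": "GPT-3 is a large language model trained on much of the web.",
}

def answer_with_citations(query, docs):
    """Assemble an answer from every relevant document and return the
    ids of the documents used, mimicking a Wikipedia-style answer
    backed by references."""
    supporting, sources = [], []
    for doc_id, text in docs.items():
        # Crude relevance check: any query word appears in the document.
        if any(word.lower() in text.lower() for word in query.split()):
            supporting.append(text)
            sources.append(doc_id)
    return " ".join(supporting), sources

answer, refs = answer_with_citations("language model", documents)
print(answer)  # sentences drawn from more than one document
print(refs)    # the provenance: which documents backed the answer
```

The key property, and the one today's language models lack, is the second return value: every claim in the answer can be traced back to a source.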

Large language models get us part of the way there. Trained on most of the web and hundreds of books, GPT-3 draws on information from multiple sources to answer questions in natural language. The problem is that it does not keep track of those sources and cannot provide evidence for its answers. There’s no way to tell whether GPT-3 is parroting trustworthy information or misinformation, or just spitting out nonsense of its own making.

Metzler and his colleagues call such language models dilettantes: “They are perceived to know a lot, but their knowledge is skin deep.” The solution, they say, is to build and train future BERTs and GPT-3s to keep records of the origin of their words. No such model is yet able to do this, but it is possible in principle, and preliminary work is underway in this direction.

There have been decades of progress in different areas of research, from question answering to document summarization to structuring information, says Ziqi Zhang of the University of Sheffield in the UK, who studies information retrieval on the web. But none of these technologies has revolutionized search, because each addresses a specific problem and does not generalize. The exciting premise of this paper is that large language models can do all of these things at the same time, he says.

Still, Zhang notes that language models perform poorly on technical or specialist subjects because there are fewer examples in the text they are trained on. “There are probably hundreds of times more e-commerce data on the web than there is quantum mechanics data,” he says. Today’s language models are also skewed towards English, which would leave non-English-speaking parts of the web underserved.

Still, Zhang welcomes the idea. “This has not been possible in the past, because large language models have only recently taken off,” he says. “If it works, it would transform our search experience.”
