Google fixes 2 annoying quirks in its voice assistant

“Today when people want to talk to any digital assistant, they’re thinking about two things: what do I want to get done, and how should I phrase my command to get it done,” says Subramanya. “I think that’s very unnatural. There is a huge cognitive burden when people talk to digital assistants; natural conversation is one way to alleviate that burden.”

Making conversations with Assistant more natural means improving its reference resolution: its ability to link a phrase to a specific entity. For example, if you say “Set a timer for 10 minutes” and then say “Change it to 12 minutes”, the voice assistant needs to understand and resolve what you are referring to when you say “it”.
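The mechanism can be illustrated with a toy sketch (this is an illustration of the general idea only, not Google's implementation): the assistant tracks the most recently mentioned entity in the conversation and resolves a pronoun like “it” against it.

```python
# Toy reference resolution: "it" in a follow-up command is resolved
# against the most recently mentioned entity in the conversation.
# All names here are hypothetical, for illustration only.

class Timer:
    def __init__(self, minutes):
        self.minutes = minutes

class Assistant:
    def __init__(self):
        self.last_entity = None  # most recent entity, for resolving "it"

    def handle(self, utterance):
        words = utterance.lower().split()
        if words[:2] == ["set", "a"] and "timer" in words:
            # "Set a timer for N minutes" creates a new entity
            minutes = int(words[words.index("for") + 1])
            self.last_entity = Timer(minutes)
        elif words[:2] == ["change", "it"]:
            # "it" refers back to the last entity mentioned
            self.last_entity.minutes = int(words[words.index("to") + 1])
        return self.last_entity.minutes

assistant = Assistant()
assistant.handle("Set a timer for 10 minutes")
print(assistant.handle("Change it to 12 minutes"))  # -> 12
```

A real system has to pick among many candidate referents and handle far messier phrasing, which is where the learned models described below come in.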

The new NLU models are powered by machine learning technology, in particular bidirectional encoder representations from transformers, or BERT. Google unveiled the technique in 2018 and first applied it to Google Search. Earlier language-understanding technology processed each word in a sentence individually, but BERT models the relationships between all of the words in a sentence, dramatically improving its ability to identify context.
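The bidirectional idea can be sketched in a few lines (a deliberately simplified, untrained toy, not BERT itself): with self-attention, every token’s representation becomes a weighted mixture of all tokens in the sentence, to its left and its right.

```python
# Minimal self-attention sketch: each token attends to every other
# token in the sentence, left and right. Toy random embeddings only;
# a real BERT model has learned weights, many heads, and many layers.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(embeddings):
    # scores[i, j]: how strongly token i attends to token j (all pairs)
    d = embeddings.shape[1]
    scores = embeddings @ embeddings.T / np.sqrt(d)
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ embeddings          # contextual representations

rng = np.random.default_rng(0)
tokens = ["parking", "on", "a", "hill", "with", "no", "curb"]
emb = rng.normal(size=(len(tokens), 8))  # one toy vector per token
ctx = self_attention(emb)
print(ctx.shape)  # -> (7, 8): each token now mixes in sentence-wide context
```

The contrast with older pipelines is that no token is processed in isolation: the vector for “parking” is already shaped by “no curb” before any downstream decision is made.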

One example of how BERT improved Search is the query “parking on a hill with no curb”. Before, the results still included hills with curbs. With BERT enabled, Google Search surfaced a website that advised drivers to point their wheels toward the side of the road. BERT has not been without its problems, however. Studies by Google researchers showed that the model associated sentences referring to disabilities with negative language, prompting the company to be more careful with its natural language processing projects.

With the BERT models now used for timers and alarms, Subramanya says Assistant can respond to related queries, like the adjustments mentioned above, with nearly 100 percent accuracy. But that superior contextual understanding isn’t available everywhere just yet: Google says it is slowly working to incorporate the updated models into more tasks, like reminders and controlling smart home devices.

William Wang, director of UC Santa Barbara’s Natural Language Processing group, says Google’s improvements are dramatic, especially since applying the BERT model to spoken language understanding is “not a very easy thing to do”.

“In the whole field of natural language processing, after 2018, with Google introducing this BERT model, everything changed,” says Wang. “BERT actually understands what follows naturally from one sentence to the next and what the relationship between sentences is. You’re learning a contextual representation of words, phrases, and also sentences, so compared to the work before 2018, it’s much more powerful.”

Most of these upgrades are currently limited to timers and alarms, but you will see a general improvement in the voice assistant’s ability to understand context. For example, if you ask it the weather in New York and then follow up with questions such as “What is the tallest building there?” and “Who built it?”, Assistant will keep providing answers, knowing which city you are referring to. That’s not entirely new, but the update makes Assistant even more adept at resolving these follow-up questions.

Teaching Assistant names


Assistant is now also better at understanding unique names. If you’ve tried calling or texting someone with an unusual name, there’s a good chance it took several tries or didn’t work at all, because Google Assistant didn’t know the correct pronunciation.

