For Google Assistant to genuinely help users with daily tasks, it needs to understand the person speaking to it. That means not only recognizing the words being said but also the meaning behind them. For this reason, the Assistant should adapt to each user's speech style rather than requiring commands to be voiced with specific words in a specific order.
Comprehending spoken language is challenging, however, because the context surrounding a phrase can vary widely from one person to another. Another obstacle is words that are spelled the same way but pronounced differently, as is often the case with names. Google has begun to tackle this issue by teaching the Assistant to identify unique names.
In particular, Google aims to solve a common frustration: a user asks the Assistant to reach one of their contacts, and the Assistant struggles to pick the right one. Instead, Google wants the Assistant to recognize and accurately pronounce people’s names as often as possible, especially less common names.
Within the next few days, users will be able to teach Google Assistant how to properly pronounce and identify the names of their contacts by saying those names aloud. The Assistant will memorize the pronunciation without keeping a recording of the utterance, and will use it to better understand those names when users say them in the future. Initially, the feature will be available in English, with Google hoping to add more languages soon.
Google has also announced an improvement to the Assistant’s timer feature. The change lets users set multiple timers at once and set or cancel a timer using varied wording, so they no longer need to phrase their commands the same way every time. Once again, Google stresses that it is prioritizing context recognition to save users time when using the Assistant. Once the tool can comprehend commands despite that variation, the AI can support user needs better than it ever has.
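As an illustration only, not Google's system, the sketch below shows one way differently worded requests can be mapped onto the same timer action, using an off-the-shelf zero-shot classifier from the Hugging Face transformers library. The model name, candidate actions, and example utterances are assumptions chosen for the example.

```python
# A minimal sketch, not Google's implementation: map varied phrasings
# onto the same timer action with an off-the-shelf zero-shot classifier.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Hypothetical timer actions for illustration.
actions = ["set a timer", "cancel a timer"]

utterances = [
    "Start a ten minute countdown for the pasta",
    "Scrap the pasta timer",
    "Never mind the countdown, just turn it off",
]

for utterance in utterances:
    result = classifier(utterance, candidate_labels=actions)
    # The highest-scoring label is the action the phrasing most likely intends.
    print(f"{utterance!r} -> {result['labels'][0]}")
```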
On the technical side, Google engineers have completely rebuilt the Assistant’s natural-language-understanding (NLU) models to help the tool better understand context and to improve its “reference resolution,” the ability to work out what an issued command is actually referring to. The new development builds on machine learning technology Google created back in 2018 and implemented for Google Search, which enabled comprehension of all the words in a query in relation to one another rather than one word at a time.
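For a sense of what reading words “in relation to one another” means in practice, here is a minimal sketch that assumes a publicly available BERT-style model rather than Google's production system: the same word receives a different vector depending on the sentence around it.

```python
# A minimal sketch, assuming a public BERT-style model (not Google's system):
# the same word gets a different representation depending on the words around
# it, because the model encodes the whole utterance at once.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding of `word` inside `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]

# "minutes" is represented differently in each sentence, rather than being
# looked up one word at a time.
a = word_vector("set a timer for ten minutes", "minutes")
b = word_vector("the meeting ran over by ten minutes", "minutes")
print(torch.cosine_similarity(a, b, dim=0).item())
```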
Thanks to these improvements, Google Assistant can now respond with nearly 100 percent accuracy to both alarm and timer tasks.