Google has started to implement a new AI spelling algorithm to better understand misspelled queries

Janet McNeil

Last year, the Google search engine switched to BERT (Bidirectional Encoder Representations from Transformers), a technology that gives the search engine the ability to understand natural human language. As part of Search On 2020, the company announced a number of further improvements for interpreting user queries more accurately.

According to Prabhakar Raghavan, Google's head of Search and Assistant, about 15% of the queries Google sees each day are entirely new. This means the company must continually work to improve its search results.

Part of the challenge comes from misspelled, error-prone queries: according to Cathy Edwards, vice president of engineering at Google, one in ten search queries contains a spelling error. Google has long addressed misspelled queries with its "Did you mean" feature, which suggests a corrected spelling. By the end of the month, this feature will receive a major update in the form of a new neural-network spell checker with 680 million parameters. It runs in under three milliseconds after each query, and the company promises that the new algorithm will offer even more accurate suggestions for misspelled words. According to a Google blog post, this single change improves spelling more than all of Google's spelling improvements over the past five years combined.
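Google's production spell checker is a 680-million-parameter neural network, which is far beyond a short sketch. The underlying task, though — mapping a misspelled query term to a likely intended word — can be illustrated with a toy candidate-matching approach based on string similarity. The vocabulary and queries below are invented for illustration; this is not Google's model:

```python
import difflib

# Tiny stand-in vocabulary; a real system would score candidates
# with a learned model over a vastly larger vocabulary.
VOCABULARY = ["weather", "wedding", "keyboard", "restaurant", "exercise"]

def suggest(word, vocabulary=VOCABULARY):
    """Return the closest known word, or None if nothing is similar enough."""
    matches = difflib.get_close_matches(word.lower(), vocabulary, n=1, cutoff=0.7)
    return matches[0] if matches else None

print(suggest("wether"))     # → "weather"
print(suggest("restarant"))  # → "restaurant"
print(suggest("zzzz"))       # → None (no plausible correction)
```

The `cutoff` parameter plays the role of a confidence threshold: below it, the system keeps the query as typed rather than risk a wrong "correction".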

Another innovation: Google Search can now index not just entire web pages but individual passages within them. By understanding the relevance of specific passages rather than whole pages, the search engine can surface the information you are looking for more easily. Google says that once this technology launches next month, it will improve roughly 7% of search queries across all languages, and it will roll out worldwide.
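The shift described here is from scoring whole pages to scoring individual passages. A minimal sketch of that idea, using a naive term-overlap score instead of Google's actual ranking model (the pages, passages, and scoring function are all invented for illustration):

```python
def score(text, query_terms):
    """Fraction of query terms that appear in the text (toy relevance score)."""
    words = set(text.lower().split())
    return sum(term in words for term in query_terms) / len(query_terms)

def best_passage(pages, query):
    """Return the single best-matching passage across all pages,
    rather than the best-matching page as a whole."""
    terms = query.lower().split()
    passages = [p for page in pages for p in page]
    return max(passages, key=lambda p: score(p, terms))

# Invented example: each page is a list of passages.
pages = [
    ["general article about home fitness trends",
     "compact exercise equipment that fits a small apartment"],
    ["history of the treadmill and other training machines"],
]
print(best_passage(pages, "exercise equipment small apartment"))
```

Even in this toy version, a highly relevant passage buried in an otherwise general page can win, which is the point of passage-level indexing.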

Google will also use neural networks to understand the subtopics of a search query. This will provide a greater variety of results when the query is broad: a search for exercise equipment, for example, could surface home equipment designed for small apartments rather than only general information about training gear.

Finally, Google is also starting to use computer vision and speech recognition to understand the deep semantics of videos and automatically highlight their key moments. Automatic tags will split a video into parts, much like the chapters of a book: cooking videos or sports recordings, for example, can be analyzed and divided into chapters automatically. This lets you jump straight to the part that interests you without watching or scrolling through the whole recording. Google has begun testing the video-highlighting technology and expects 10% of all searches to use it by the end of the year.
