Google responds to Microsoft with multiple search, new maps and more AI in apps


Google holds 92% of the worldwide search engine market, compared with Bing’s 2%. Is ChatGPT integration into Microsoft products a threat? The Google executives who took the stage a few hours after Microsoft’s announcement must have asked themselves that question obsessively. And what happened in Paris was a show of strength from a company convinced it leads in search and AI and has no intention of looking outdated in its own field.

Forty minutes of announcements about Google’s AI improvements on smartphones: from immersive maps to computer vision to searches combining text and images. Together, these technologies should make search more natural and visual. Technically it is not a chatbot like ChatGPT: it does not (yet) respond in natural language with summaries. It does, however, aim to understand context and grasp what we mean, then reply with the best link, text or image. Let’s take a closer look at what changes.

What is multi-search?

With multisearch, you can search with an image and text at the same time. Multisearch is now available globally on mobile, in all languages and countries where Lens is available. For example, you might search for “modern living room ideas,” see a coffee table you like, but prefer a different shape, say a rectangle instead of a circle: you can use multisearch to add the text “rectangular” and find the style you are looking for. You can also take a photo and add “near me” to find what you need nearby, whether you want to support local businesses or simply need something quickly. This “near me” variant is currently available in English in the US and will expand globally in the coming months.
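Google has not published how multisearch works internally, but this kind of image-plus-text retrieval is typically framed as blending an image embedding and a text embedding into a single query vector, then ranking candidates by cosine similarity. Below is a purely illustrative Python sketch of that idea: the tiny 3-dimensional embeddings, the averaging strategy, and the catalog entries are all invented for the example, not Google’s actual method.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def combine(image_vec, text_vec, alpha=0.5):
    """Blend the two modality embeddings into one query vector."""
    return [alpha * i + (1 - alpha) * t for i, t in zip(image_vec, text_vec)]

def rank(query, catalog):
    """catalog: list of (name, embedding); return entries best-first."""
    return sorted(catalog, key=lambda item: cosine(query, item[1]), reverse=True)

# Toy embeddings: the photographed round table plus the text refinement.
image_round_table = [0.9, 0.1, 0.0]   # visual style of the table in the photo
text_rectangular  = [0.0, 0.0, 1.0]   # the added text "rectangular"

query = combine(image_round_table, text_rectangular)
catalog = [
    ("round coffee table, same style",       [0.9, 0.1, 0.1]),
    ("rectangular coffee table, same style", [0.8, 0.1, 0.9]),
    ("rectangular desk, different style",    [0.1, 0.2, 0.9]),
]
for name, _ in rank(query, catalog):
    print(name)
```

With these toy numbers the combined query ranks the rectangular table in the same style first, capturing the article’s coffee-table example: the image supplies the style, the text supplies the shape.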

What’s new in Google Lens

Google Lens is Google’s computer vision app, used more than 10 billion times a month by people searching for what they see with their camera or in pictures. The update announced today adds “search your screen” across the Android world. What does that mean? You will be able to search for what you see in photos or videos on websites and in apps without having to switch apps.


The new immersive maps

The third novelty had already been announced, but this time it is explained. The new immersive maps, which recreate streets and squares in 3D, are built with neural radiance fields (NeRF), an advanced artificial-intelligence technique that turns ordinary photos into 3D representations. With NeRF, Google can accurately recreate the entire context of a place, including the lighting, the texture of materials and what is in the background. That makes it possible to tell whether a bar’s moody lighting sets the right atmosphere for a date night, or whether a café’s view makes it the ideal place for lunch with friends. Immersive view launches today in London, Los Angeles, New York, San Francisco and Tokyo, and will reach other cities, including Amsterdam, Dublin, Florence and Venice, in the coming months.
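Google has not detailed its production pipeline, but the heart of the original NeRF technique is volume rendering: a neural network predicts a density and a color for points sampled along each camera ray, and those samples are composited into a single pixel color. The sketch below implements only that compositing step in plain Python, with hand-picked sample values standing in for real network output.

```python
from math import exp

def render_ray(samples):
    """Composite (density, color, step) samples along one camera ray.

    samples: list of (sigma, rgb, delta) where sigma is volume density,
    rgb the predicted color, and delta the distance to the next sample.
    Implements NeRF's discrete rendering sum
        C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    with transmittance T_i = exp(-sum_{j<i} sigma_j * delta_j).
    """
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light still unabsorbed
    for sigma, rgb, delta in samples:
        alpha = 1.0 - exp(-sigma * delta)        # opacity of this segment
        weight = transmittance * alpha
        color = [c + weight * ch for c, ch in zip(color, rgb)]
        transmittance *= exp(-sigma * delta)     # light surviving the segment
    return color

# Empty space, then a dense red surface: the ray comes back mostly red.
ray = [
    (0.0, [0.0, 0.0, 1.0], 0.5),   # zero density: contributes nothing
    (8.0, [1.0, 0.0, 0.0], 0.5),   # dense red segment absorbs the ray
]
print(render_ray(ray))
```

Training a NeRF means adjusting the network so that rays rendered this way reproduce the input photos; once trained, the same compositing yields views from any new camera position, which is what makes the immersive fly-throughs possible.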
