In addition to a new gesture-powered search feature for Android devices, Google today also announced an AI-powered addition to its visual search capabilities in Google Lens. Starting today, users will be able to point their phone's camera or upload a photo or screenshot to Lens, then ask a question about what they're seeing to get answers via generative AI.
The feature is an update to the multisearch capability in Lens, which allows users to search using both text and images at the same time. Previously, these kinds of searches would take users to other visual matches, but with today's launch, you'll receive AI-powered results that offer insights as well.
As one example, Google suggests the feature could be used to learn more about a plant by snapping a photo of it, then asking "When do I water this?" Instead of just showing the user other images of the plant, it identifies the plant and tells the user how often it should be watered, e.g. "every two weeks." The feature relies on information pulled from the web, including information found on websites, product sites and in videos.
The feature also works with Google's new search gesture, dubbed Circle to Search. That means you're able to kick off these generative AI queries with a gesture, then ask a question about the item you've circled, scribbled on or otherwise indicated you're interested in learning more about.
However, Google clarified that while the Lens multisearch feature offers generative AI insights, it's not the same product as Google's experimental GenAI search, SGE (Search Generative Experience), which remains opt-in only.
The AI-powered overviews for multisearch in Lens are launching for everyone in the U.S. in English, starting today. Unlike some of Google's other AI experiments, the feature is not limited to Google Labs. To use it, you'll just tap on the Lens camera icon in the Google search app for iOS or Android, or in the search box on your Android phone.
Similar to Circle to Search, the addition aims to maintain Google Search's relevance in the age of AI. While today's web is cluttered with SEO-optimized garbage, Circle to Search and this adjacent AI-powered capability in Lens aim to improve search results by tapping into a web of knowledge, including many web pages in Google's index, but delivering the results in a different format.
Still, leaning on AI means that the results may not always be accurate or relevant. Web pages are not an encyclopedia, so the answers are only as accurate as the underlying source material and the AI's ability to answer a question without "hallucinating" (coming up with false answers when actual answers aren't available).
Google notes that its GenAI products, like its Search Generative Experience, will cite their sources to allow users to fact-check their answers. And though SGE will remain in Labs, Google said it will begin to introduce generative AI advances more broadly, when relevant, as it's doing now with multisearch results.
The AI overviews for multisearch in Lens arrive today, while the gesture-based Circle to Search arrives on January 31.