Google Lens Gets Smarter With AI
At its Google I/O conference, Google announced new AI features for Google Lens that help users find out more about the world around them.
The first new feature is smart text selection, which connects the words you see with the ‘answers and actions you need’. So if you’re out at a restaurant and see the name of a dish you don’t recognise, Google Lens will show you a picture of that dish.
Google says this requires recognising not just the shapes of letters, but also the meaning and context behind the words.
Google Lens is also getting style match: a user can hover their camera over an item of clothing or home decor, and Lens will show matching items and similar styles.
Lens will now work in real time, giving users information instantly and anchoring it to the things they see. Google credits this to state-of-the-art machine learning that uses both on-device intelligence and cloud TPUs to identify billions of words, phrases, places, and things in a split second.
Lens will also be available directly in the camera app on supported devices from LGE, Motorola, Xiaomi, Sony Mobile, HMD/Nokia, Transsion, TCL, OnePlus, BQ, Asus, and Google Pixel.
These new Lens features will start rolling out over the next couple of weeks.