The research giant uses machine learning to generate descriptions for millions of images. Images dominate the Internet, and social media in particular, since so much online content is visual.
Not everyone can see these images. Blind and visually impaired users rely on screen readers or braille displays, but those devices depend on website developers remembering to write so-called alternative text (alt text), a short description of what is in a picture.
While many large websites include alt text, smaller sites often do not. Alt text is also frequently missing on social media, where images spread faster than some systems can keep up.
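To make the alt-text gap concrete, here is a minimal sketch of how a site audit might flag images that a screen reader cannot describe. It uses only Python's standard-library `html.parser`; the class name, sample markup, and file names are illustrative, not part of any real tool mentioned in the article.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects <img> tags that lack a non-empty alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            # An absent or empty alt attribute leaves a screen reader
            # with only the file name (or nothing) to announce.
            if not attr_map.get("alt"):
                self.missing.append(attr_map.get("src", "(no src)"))

# Hypothetical page fragment: one described image, one undescribed.
page = """
<img src="cat.jpg" alt="A tabby cat asleep on a windowsill">
<img src="IMG_20191010_1234.jpg">
"""

checker = MissingAltChecker()
checker.feed(page)
print(checker.missing)
```

The second image has no alt attribute, so the checker reports its file name, which is exactly the unhelpful string a screen reader would otherwise read aloud.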
Google’s new feature builds on the same technology that lets users search for images by keyword; the image descriptions are generated automatically.
“There are still millions and millions of unlabeled images on the web,” said Laura Allen, senior program manager on the Chrome accessibility team, who understands the problem firsthand because she has low vision.
“When you reach one of those images with a screen reader or a braille display, you’ll hear ‘image,’ ‘unlabeled graphic,’ or a very long string of numbers: the file name, which tells you nothing,” she said.
The descriptions are not perfect, and if the algorithm is unsure about an image, it will not attempt to describe it at all. Even so, the tool labeled more than 10 million images within a few months of testing.
The feature is being rolled out to users gradually, with Chrome specifically prompting people who use screen readers to try it out.
The feature is available only to users whose screen readers output spoken feedback or braille. The image descriptions are read aloud by the screen reader but do not appear visually on the screen.