People whose vision is impaired increasingly use AI to assess their own appearance, raising questions about the psychological impact of AI models that are trained on conventional standards of beauty.
What’s new: Milagros Costabel, a blind freelance journalist, wrote about her experiences using a vision-language model as a virtual mirror. Her article on BBC.com explores challenges and potential pitfalls of relying on AI to judge personal qualities that are largely subjective and individual.
How it works: Costabel uses Be My Eyes, a smartphone app that provides a voice chatbot based on GPT-4 Vision. (Users can request to speak with a human volunteer to address critical or difficult issues.) She acknowledges the benefit of greater independence but highlights the challenge for blind people, who have little choice but to trust AI’s interpretation of what it sees. “For many blind people interviewed for this article, the experience feels both empowering and disorienting at once,” she writes.
- Using the app to apply skin-care products, Costabel finds that it “does more than simply describe an image — [it offers] critical feedback.” For instance, it said her skin “definitely doesn’t look like the almost perfect example of reflective skin,” and “maybe if your jaw was less elongated . . . your face would look a little more like what is objectively considered beautiful in your culture.”
- A similar app developed by Envision AI includes an AI assistant to handle tasks like checking calendar entries, reading product labels, and describing surroundings. The company’s CEO says he was “surprised by the number of customers who use it to do their makeup or coordinate their outfits. Often the first question they ask is how they look.”
- Costabel interviewed a blind 20-year-old man who used an AI model to help him select photos for a dating profile. However, he found that the model’s descriptions didn’t match his own understanding of his hair color and facial expressions. “This kind of thing can make you feel insecure,” he said.
- Psychologists worry that AI-generated assessments of physical beauty can contribute to depression and anxiety. Blind people, who can’t independently evaluate AI’s judgments about visual input, may be especially vulnerable. “AI not only allows blind people to . . . [compare] themselves to descriptions of photos of other human beings, but also to what AI might consider the perfect version of them,” said Helena Lewis-Smith, a psychologist at the University of Bristol.
Behind the news: A number of products aim to use vision-language models to assist visually impaired users. In addition to Be My Eyes and Envision AI, offerings include Microsoft Seeing AI, Aira Explorer, and navigation app Oko. Such apps increasingly connect with wearable devices. For instance, Envision Glasses and Ray-Ban Meta Smart Glasses provide hands-free, real-time narration that describes surroundings, reads documents, and identifies specific faces.
Why it matters: AI applications that serve visually impaired users should provide interpretations of visual input that are as objective and factual as feasible. More broadly, truly accessible AI products must accommodate users who have no way to verify their output. This may require further technology development; in the meantime, developers can keep humans in the loop (as Be My Eyes, Aira Explorer, and others do) or provide confidence scores that help users calibrate their trust in the model’s output.
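To make the trust-calibration idea concrete, here is a minimal, hypothetical sketch of how an assistive app might route a model’s description based on a confidence score. The `Description` type, the `present` function, and the specific thresholds are all assumptions for illustration, not any real app’s API; they assume the underlying model exposes a usable confidence value, which in practice is itself a hard problem.

```python
# Hypothetical sketch: surface a vision model's confidence in plain language,
# and fall back to a human volunteer when confidence is too low.
from dataclasses import dataclass


@dataclass
class Description:
    text: str
    confidence: float  # assumed model-reported score in [0, 1]


def present(desc: Description, human_threshold: float = 0.5) -> str:
    """Return the description with a plain-language trust cue, or suggest
    a human volunteer (as apps like Be My Eyes offer) when confidence is low."""
    if desc.confidence < human_threshold:
        return ("I'm not confident about this one. "
                "Would you like to ask a human volunteer?")
    if desc.confidence < 0.8:
        # Hedged phrasing helps users weigh the answer appropriately.
        return f"I think, but I'm not certain: {desc.text}"
    return desc.text


print(present(Description("The shirt is dark blue with white buttons.", 0.92)))
print(present(Description("Your hair appears light brown.", 0.35)))
```

The key design choice is that uncertainty is communicated in words rather than raw numbers, since a spoken interface serving blind users needs cues they can act on immediately.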
We’re thinking: Building products of any kind requires empathy with users, but building AI products that help people overcome sensory and other impairments requires exceptional empathy. Extensive testing in the real world and careful revisions based on user feedback will go a long way toward making products that help people both do and feel their best.