StreetReaderAI: Towards making street view accessible via context-aware multimodal AI

We introduce StreetReaderAI, a new accessible street view prototype using context-aware, real-time AI and accessible navigation controls.

Interactive streetscape tools, available today in every major mapping service, have revolutionized how people virtually navigate and explore the world — from previewing routes and inspecting destinations to remotely visiting world-class tourist locations. To date, however, screen readers have not been able to interpret street view imagery, which also lacks alt text. With multimodal AI and image understanding, we now have an opportunity to redefine this immersive streetscape experience to be inclusive for all. This could eventually allow a service like Google Street View, which has over 220 billion images spanning 110+ countries and territories, to be more accessible to people in the blind and low-vision community, offering an immersive visual experience and opening up new possibilities for exploration.

In “StreetReaderAI: Making Street View Accessible Using Context-Aware Multimodal AI”, presented at UIST’25, we introduce StreetReaderAI, a proof-of-concept accessible street view prototype that uses context-aware, real-time AI and accessible navigation controls. StreetReaderAI was designed iteratively by a team of blind and sighted accessibility researchers, drawing on previous work in accessible first-person gaming and navigation tools, such as Shades of Doom, BlindSquare, and SoundScape. Key capabilities include:

  • Real-time AI-generated descriptions of nearby roads, intersections, and places (see the sketch after this list).
  • Dynamic conversation with a multimodal AI agent about scenes and local geography.
  • Accessible panning and movement between panoramic images using voice commands or keyboard shortcuts.
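
To make the first capability concrete, here is a minimal, hypothetical sketch of how a context-aware description request might be assembled: a panorama image is sent to a multimodal model together with geographic context such as the camera heading and nearby places. This is not StreetReaderAI’s actual implementation; the `GeoContext` schema and `describe_scene` helper are illustrative names, and the example assumes the public google-genai Python SDK, a Gemini API key in the environment, and a pre-fetched panorama JPEG.

```python
# Illustrative sketch only -- not StreetReaderAI's implementation.
# Assumes: pip install google-genai, and GEMINI_API_KEY set in the environment.

from dataclasses import dataclass

from google import genai
from google.genai import types


@dataclass
class GeoContext:
    """Hypothetical geographic context sent alongside the panorama image."""
    heading_deg: float        # direction the virtual camera is facing
    nearby_places: list[str]  # e.g., results of a nearby-places lookup


def describe_scene(client: genai.Client, pano_jpeg: bytes, ctx: GeoContext) -> str:
    """Ask a multimodal model for a screen-reader-friendly scene description."""
    prompt = (
        "You are assisting a blind user exploring street view imagery. "
        f"The camera faces {ctx.heading_deg:.0f} degrees. "
        f"Nearby places: {', '.join(ctx.nearby_places)}. "
        "Briefly describe the roads, intersections, and places visible ahead, "
        "prioritizing information useful for pedestrian navigation."
    )
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=[
            types.Part.from_bytes(data=pano_jpeg, mime_type="image/jpeg"),
            prompt,
        ],
    )
    return response.text


if __name__ == "__main__":
    client = genai.Client()  # reads the API key from the environment
    with open("pano.jpg", "rb") as f:
        description = describe_scene(
            client,
            f.read(),
            GeoContext(heading_deg=90, nearby_places=["Main St & 3rd Ave", "cafe"]),
        )
    print(description)
```

In a full system, the same geographic context could be threaded through a persistent chat session, so that follow-up questions in a dynamic conversation about the scene retain that grounding rather than relying on the image alone.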