Researchers at Pennsylvania State University have developed a smartphone app that uses artificial intelligence to help visually impaired people locate everyday objects in real time.
The tool, called NaviSense, uses large language models and vision-language models to recognise objects from spoken prompts, without requiring preloaded templates.
A key feature is real-time “hand guidance.” In tests with 12 participants, NaviSense identified objects faster and more accurately than two commercial visual-aid apps, and users reported a better overall experience. The team is now refining battery usage and model efficiency ahead of potential commercial release.