A Large Vision-Language Model based Environment Perception System for Visually Impaired People

Source: arXiv
Abstract

Perceiving the surrounding environment is a challenging task for visually impaired people due to the complexity of natural scenes, and their personal and social activities are thus highly limited. This paper introduces a Large Vision-Language Model (LVLM) based environment perception system that helps them better understand their surroundings: a wearable device captures the scene in front of the user, who then retrieves the analysis results through the device. Users can acquire a global description of the scene by long-pressing the screen to activate the LVLM output, retrieve the categories of the objects in the scene produced by a segmentation model by tapping or swiping the screen, and get a detailed description of an object of interest by double-tapping the screen. To help visually impaired people perceive the world more accurately, this paper proposes incorporating the segmentation result of the RGB image as external knowledge into the input of the LVLM to reduce the LVLM's hallucination. Technical experiments on POPE, MME, and LLaVA-QA90 show that the system provides more accurate scene descriptions than Qwen-VL-Chat, and exploratory experiments show that the system effectively helps visually impaired people perceive their surrounding environment.
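The abstract's key mechanism is injecting segmentation output as external knowledge into the LVLM prompt. The following is a minimal sketch of that idea; the function name, prompt wording, and the way categories are serialized are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch: prepend segmentation-model detections to the LVLM text prompt
# as external knowledge, so generated descriptions stay grounded in
# objects actually present in the scene (reducing hallucination).
# All names and prompt phrasing here are hypothetical.

from collections import Counter
from typing import List


def build_prompt_with_segmentation(categories: List[str], question: str) -> str:
    """Compose an LVLM prompt that lists the segmentation model's
    detected object categories before the user's question."""
    counts = Counter(categories)
    knowledge = ", ".join(
        f"{n} {name}" if n > 1 else name for name, n in counts.items()
    )
    return (
        "External knowledge from an image segmentation model "
        f"(objects detected in the scene): {knowledge}.\n"
        "Only describe objects consistent with this list.\n"
        f"Question: {question}"
    )


if __name__ == "__main__":
    # Example: categories a segmentation model might return for one RGB frame.
    detected = ["person", "person", "traffic light", "crosswalk"]
    print(build_prompt_with_segmentation(detected, "Describe the scene ahead."))
```

The resulting text, paired with the RGB image, would form the LVLM input; the design choice is to constrain the model with a verified object list rather than letting it describe the image unaided.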

Zezhou Chen, Zhaoxiang Liu, Kai Wang, Kohou Wang, Shiguo Lian

DOI: 10.1109/IROS58592.2024.10801813

Subjects: Computing and Computer Technology; Automation Technology and Equipment; Remote Sensing Technology

Zezhou Chen, Zhaoxiang Liu, Kai Wang, Kohou Wang, Shiguo Lian. A Large Vision-Language Model based Environment Perception System for Visually Impaired People [EB/OL]. (2025-04-24) [2025-05-23]. https://arxiv.org/abs/2504.18027
