
Reading Recognition in the Wild


Source: arXiv

Abstract

To enable egocentric contextual AI in always-on smart glasses, it is crucial to keep a record of the user's interactions with the world, including during reading. In this paper, we introduce a new task of reading recognition: determining when the user is reading. We first introduce the first-of-its-kind large-scale multimodal Reading in the Wild dataset, containing 100 hours of reading and non-reading videos in diverse and realistic scenarios. We then identify three modalities (egocentric RGB, eye gaze, head pose) that can be used to solve the task, and present a flexible transformer model that performs the task using these modalities, either individually or combined. We show that these modalities are relevant and complementary to the task, and investigate how to efficiently and effectively encode each modality. Additionally, we demonstrate the usefulness of this dataset for classifying types of reading, extending current reading-understanding studies conducted in constrained settings to greater scale, diversity, and realism.
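The abstract describes a flexible model that fuses any subset of the three modalities (egocentric RGB, eye gaze, head pose). One common way to achieve such flexibility is to project each available modality into a shared token space, concatenate the tokens, and attention-pool them for classification. The sketch below illustrates that general pattern in numpy; all dimensions, projections, and the single-query pooling are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 32  # hypothetical shared token dimension (not from the paper)

# Hypothetical per-modality feature sizes: frame embeddings, 2-D gaze
# direction, and 6-DoF head pose. Random projections stand in for
# learned modality encoders.
P = {
    "rgb":  rng.normal(size=(128, D)),
    "gaze": rng.normal(size=(2, D)),
    "head": rng.normal(size=(6, D)),
}
w_head = rng.normal(size=D)  # stand-in for a learned classification head

def attention_pool(tokens):
    """Single-query softmax attention over tokens -> one pooled vector."""
    q = tokens.mean(axis=0)                 # parameter-free query for the sketch
    scores = tokens @ q / np.sqrt(D)
    w = np.exp(scores - scores.max())       # numerically stable softmax
    w /= w.sum()
    return w @ tokens

def classify(modalities):
    """Fuse whichever modalities are present and return P(reading)."""
    tokens = np.concatenate(
        [feats @ P[name] for name, feats in modalities.items()], axis=0
    )
    logit = attention_pool(tokens) @ w_head
    return 1.0 / (1.0 + np.exp(-logit))     # sigmoid

# The same model runs on all modalities or any subset, mirroring the
# "individually or combined" evaluation setting described in the abstract.
clip = {
    "rgb":  rng.normal(size=(8, 128)),   # 8 frame tokens
    "gaze": rng.normal(size=(30, 2)),    # 30 gaze samples
    "head": rng.normal(size=(30, 6)),    # 30 head-pose samples
}
p_all  = classify(clip)
p_gaze = classify({"gaze": clip["gaze"]})  # gaze-only inference
```

Because fusion happens by token concatenation before pooling, dropping a modality only shortens the token sequence; no architectural change is needed per input configuration.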

Charig Yang, Samiul Alam, Shakhrul Iman Siam, Michael J. Proulx, Lambert Mathias, Kiran Somasundaram, Luis Pesqueira, James Fort, Sheroze Sheriffdeen, Omkar Parkhi, Carl Ren, Mi Zhang, Yuning Chai, Richard Newcombe, Hyo Jin Kim

Subject: Computing Technology, Computer Technology

Charig Yang, Samiul Alam, Shakhrul Iman Siam, Michael J. Proulx, Lambert Mathias, Kiran Somasundaram, Luis Pesqueira, James Fort, Sheroze Sheriffdeen, Omkar Parkhi, Carl Ren, Mi Zhang, Yuning Chai, Richard Newcombe, Hyo Jin Kim. Reading Recognition in the Wild[EB/OL]. (2025-05-30)[2025-06-28]. https://arxiv.org/abs/2505.24848.
