GlassNage: Layout Recognition for Dynamic Content Retrieval in Multi-Section Digital Signage
We report our approach to supporting dynamic content transfer from publicly available large-display digital signage to users' private displays, specifically Glass-like wearable devices. We aim to address issues concerning dynamic multimedia signage in which the content is divided into several sections; this type of signage has become increasingly popular because it maximizes content exposure. In contrast to prior research, our approach excludes computer-vision-based object recognition and instead identifies how content is laid out on the digital signage. We incorporate techniques to recognize basic layout features, including corners, lines, edges, and line segments, obtained from a camera frame captured by the user with their own device. These layout features are then combined to generate a signage layout map, which is compared to a pre-learned layout map for position detection and perspective correction using homography estimation. To retrieve a specific piece of content, users choose a section within the captured layout using the device's interface, which in turn sends a request to the content server for the corresponding content information based on a timestamp and a unique section ID. In this paper, we describe implementation details, report user study results, and conclude with a discussion of our implementation experiences as well as directions for future work.
Keywords: Digital signage · Public display · Public-to-private · Multi-section · Layout recognition · Computer vision · Visual features · Line segment · User study.