IEEE Winter Conference on Applications of Computer Vision (WACV), Hawaii, USA, January 2015 (won the Best Paper Award)
Vinay Bettadapura | Google, Inc. | College of Computing, Georgia Tech | vinay [at] gatech.edu
Irfan Essa | Google, Inc. | College of Computing, Georgia Tech | irfan [at] cc.gatech.edu
Caroline Pantofaru | Google, Inc. | cpantofaru [at] google.com
We present a technique that uses images, videos, and sensor data taken from first-person point-of-view devices to perform egocentric field-of-view (FOV) localization. We define egocentric FOV localization as capturing the visual information from a person's field of view in a given environment and transferring this information onto a reference corpus of images and videos of the same space, hence determining what a person is attending to. Our method matches images and video taken from the first-person perspective against the reference corpus and refines the results using the first person's head orientation, obtained from the device sensors. We demonstrate single- and multi-user egocentric FOV localization in different indoor and outdoor environments, with applications in augmented reality, event understanding, and studying social interactions.
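The two-stage idea in the abstract, that is, visual matching against a reference corpus refined by sensor-derived head orientation, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the precomputed match scores, and the fixed yaw tolerance are all hypothetical assumptions.

```python
def localize_fov(match_scores, ref_headings, device_yaw, yaw_tol_deg=30.0):
    """Pick the reference view that best matches the query frame.

    match_scores: {ref_index: visual similarity score} from some image
                  matcher (assumed precomputed; higher is better).
    ref_headings: {ref_index: compass heading in degrees} of each
                  reference view.
    device_yaw:   the wearer's head orientation (degrees) from the
                  device's sensors, used to prune visual false matches.
    """
    best_idx, best_score = None, float("-inf")
    for idx, score in match_scores.items():
        # Absolute angular difference, wrapped to the range [0, 180].
        diff = abs((ref_headings[idx] - device_yaw + 180.0) % 360.0 - 180.0)
        # Keep only candidates whose heading agrees with the sensors.
        if diff <= yaw_tol_deg and score > best_score:
            best_idx, best_score = idx, score
    return best_idx

# A strong visual match (index 1) facing the opposite direction is
# rejected; the best orientation-consistent candidate (index 2) wins.
scores = {0: 0.4, 1: 0.9, 2: 0.7}
headings = {0: 10.0, 1: 200.0, 2: 15.0}
print(localize_fov(scores, headings, device_yaw=12.0))  # → 2
```

The orientation filter is what makes the refinement step cheap: it rules out reference views behind the wearer before any score comparison.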
@inproceedings{Bettadapura:2015:EgocentricLocalization,
  author    = {Vinay Bettadapura and Irfan Essa and Caroline Pantofaru},
  title     = {Egocentric Field-of-View Localization Using First-Person Point-of-View Devices},
  booktitle = {IEEE Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2015}
}
The documents contained in these directories are included by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without explicit permission of the copyright holder. |