4th Workshop on Egocentric (First-Person) Vision

Call for Participants

By taking advantage of the first-person point-of-view paradigm, there have been recent advances in areas such as visual grasp analysis, personalized video summarization, understanding concepts of social saliency, activity analysis with inside-out cameras (an inward-looking camera that captures eye gaze paired with an outward-looking scene camera), recognizing human interactions, and modeling focus of attention. However, in many ways we as a community are only beginning to understand the full potential (and limitations) of the first-person paradigm.

Presenter Instructions

Please follow these instructions when preparing your presentation.

  • Accepted papers / abstracts will be presented as spotlight or poster presentations
  • Each spotlight presenter will have a 2.5-minute oral presentation (with 20 seconds of transition time) to highlight the main contributions and novelties of their paper
  • Spotlight presenters must bring their own laptops for the presentation
  • Two poster sessions are allocated (morning / afternoon), each lasting 45 minutes
  • The maximum poster size is 8' × 4'

Program Committee

Leonid Sigal
(Disney Research)
Sameh Khamis
(Microsoft Research)
Gregory Rogez
(INRIA)
Mohammad Moghimi
(Cornell Tech)
Nick Rhinehart
(CMU)
Minjie Cai
(Univ. of Tokyo)
Yang Wu
(NAIST)
Kai Kunze
(Keio Univ.)
Alejandro Betancourt
(Eindhoven Univ. of Technology)
Serge Belongie
(Cornell Tech)
Hamed Pirsiavash
(Univ. of Maryland, Baltimore County)
Walterio Mayol-Cuevas
(Univ. of Bristol)
Yumi Iwashita
(Kyushu Univ.)
Bo Xiong
(UT Austin)
Tae-Kyun Kim
(Imperial College London)
Chao-Yeh Chen
(UT Austin)
Sung Ju Hwang
(Ulsan Nat. Inst. of Science and Tech.)
Samson Cheung
(Univ. of Kentucky)
Yoichi Sato
(Univ. of Tokyo)
Yusuke Sugano
(MPI)
Giuseppe Serra
(Univ. of Modena and Reggio Emilia)
Rita Cucchiara
(Univ. of Modena and Reggio Emilia)
Remi Megret
(Univ. of Puerto Rico at Mayagüez)
Hironobu Fujiyoshi
(Chubu Univ.)
Krishna Kumar Singh
(UC Davis)
Hironobu Takagi
(IBM)
Tatsuya Ishihara
(IBM)
Takaaki Shiratori
(Oculus VR)
Hyun Soo Park
(UPenn)
Gunhee Kim
(Seoul Nat. Univ.)
Atsushi Nakazawa
(Univ. of Kyoto)
Adriana Kovashka
(Univ. of Pittsburgh)
Javier Civera
(Univ. of Zaragoza)

Program

Start Time | Paper / Talk Title | Author / Speaker
09:20 Welcome
09:30 Keynote Talk Kristen Grauman (UT Austin)
10:00 Keynote Talk Shmuel Peleg (The Hebrew Univ. of Jerusalem)
10:30 Coffee Break
10:45 Spotlight Presentation 1
[PAPER] Body Part Based Re-identification from an Egocentric Perspective F. Fergnani, S. Alletto, G. Serra, J. de Mira, R. Cucchiara
[PAPER] Discovering Objects of Joint Attention via First-Person Sensing H. Kera, R. Yonetani, K. Higuchi, Y. Sato
KrishnaCam: Using a Longitudinal, Single-Person, Egocentric Dataset for Scene Understanding Tasks K. K. Singh, K. Fatahalian, A. A. Efros
Egocentric Place Recognition for AR Wayfinding Assistive Technology J. S. Zelek, C. Azzi
Towards Social Interaction Detection in Egocentric Photo-streams M. Aghaei, M. Dimiccoli, P. Radeva
Embedding of Egocentric Action Videos in Semantic-Visual Graph D. Moltisanti, M. Wray, W. Mayol-Cuevas, D. Damen
Where did I leave my phone? C. Reyes, E. Mohedano, K. McGuinness, N. E. O’Connor, X. Giro-i-Nieto
Segmenting Egocentric Videos to Highlight Personal Locations of Interest A. Furnari, G. Maria Farinella, S. Battiato
Finding Egocentric Image Topics through Convolutional Neural Network Based Representations K. Zhen, D. Crandall
Text Detection in Stores from a Wearable Camera Using a Repetition Prior B. Xiong, K. Grauman
From Egocentric to Top-view S. Ardeshir, A. Borji
11:15 Poster Session 1
12:00 Lunch (on your own)
13:30 Keynote Talk David Crandall (Indiana Univ.)
14:00 Spotlight Presentation 2
[PAPER] A Pointing Gesture Based Egocentric Interaction System: Dataset, Approach and Application Y. Huang, X. Liu, X. Zhang, L. Jin
[PAPER] Multimodal Multi-stream Deep Learning for Egocentric Activity Recognition S. Song, V. Chandrasekhar, B. Mandal, L. Li, J. Lim, G. S. Babu, P. San, N. Cheung
Generating Notifications for Missing Actions: Don’t forget to turn the lights off! B. Soran, A. Farhadi, L. Shapiro
Egocentric Multi-Modal Dataset with Visual and Physiological Signals K. Nakamura, A. Alahi, S. Yeung, L. Fei-Fei
Head Gesture Recognition in Spontaneous Human Conversations: A Benchmark W. Yang, K. Akiyama, K. Kitani, L. Jeni, Y. Mukaigawa
Attention-driven Egocentric Computer Vision for Robotic Wheelchair Navigation H. Li, P. Mordohai, G. Hua
EgoMemNet: Visual Memorability Adaptation to Egocentric Images M. Carné, X. Giro-i-Nieto, P. Radeva, C. Gurrin
Efficiently Creating 3D Training Data for Fine Hand Pose Estimation M. Oberweger, G. Riegler, P. Wohlhart, V. Lepetit
Recovering 6D Object Pose and Predicting Next-Best-View in the Crowd A. Doumanoglou, R. Kouskouridas, S. Malassiotis, T. Kim
14:25 Poster Session 2
15:10 Keynote Talk Wenyi Zhao, Chris Broaddus (DAQRI)
15:40 Coffee Break
16:00 Keynote Talk James Rehg (Georgia Tech)
16:30 Closing Remarks