Active Vision, Attention, and Learning

In Conjunction with ICDL-Epirob 2018


September 17th, 2018

Waseda University Ono Auditorium
1 Chome-1-103 Totsukamachi,
Shinjuku-ku, Tōkyō-to 169-0071
http://icdl-epirob2018.ogata-lab.jp/venue/
For more information, please contact aval-2018@googlegroups.com

Children are active learners who explore the world, creating the experiences and collecting the statistical data from which they learn words and visual objects. Recent research shows that toddlers use body movements such as gaze shifts, head turns, and hand actions to actively select some visual experiences over others; through this process, their active vision system supports selective attention and successful learning. Meanwhile, much work in robotics examines how a robot's bodily actions create just-in-time data for solving challenging real-time control tasks, for example by exploiting the dependence of visual experience on the structure of the body and its actions. From low-level sensorimotor processes to high-level cognitive learning, many components of cognitive and learning systems benefit from being “active.” This workshop focuses on three specific topics:

Active Vision and Egocentric Vision

Humans and robots must process visual information in real time. Lightweight wearable cameras can collect video from a human's first-person (egocentric) perspective, helping reveal how humans learn to actively perceive the world. Egocentric vision also connects naturally to robots, which, just like human toddlers, use visual feedback to perceive the world and control their motion.

Active Attention

Attention plays a pivotal role in cognitive and learning systems, connecting low-level visual processes with high-level learning. In both natural and artificial intelligence research, understanding real-time attention mechanisms is key to understanding learning.

Active Learning

Curiosity-driven active learning has become an important topic in the ICDL-Epirob community. Just like a curious child, a robot learner can autonomously and actively select and order its own learning experiences, creating its own curriculum for acquiring new skills with minimal supervision.


Call for Papers and Extended Abstracts

We welcome extended abstracts on topics broadly related to active learning, egocentric vision, and attention. Extended abstracts may describe preliminary, ongoing, or published work; they should use the ICDL-Epirob paper template and be no more than two pages long, including references. Accepted submissions will be invited for poster presentation at the workshop, and the extended abstracts will be published and archived on the workshop website. Please submit extended abstracts to aval-2018@googlegroups.com by August 15, 2018.


Keynote Speakers

Yukie Nagai
National Institute of Information and Communications Technology

Yusuke Sugano
Osaka University

Yoichi Sato
University of Tokyo

Michael Spranger
Sony Computer Science Laboratories Inc.

Jochen Triesch
Frankfurt Institute of Advanced Studies


Emerging Areas Session Speakers

Ajaz Bhat
University of East Anglia

Aaron Buss
University of Tennessee

Sho Tsuji
PSL Research University


Organizers

Chen Yu
Indiana University

David Crandall
Indiana University


Program

9:00   Welcome
Talk Session 1: Active Vision
9:05   Jochen Triesch
9:45   Yoichi Sato
10:25  David Crandall
10:50  Coffee break
Talk Session 2: Active Attention
11:15  Yusuke Sugano
11:55  Yukie Nagai
12:35  Lunch
Talk Session 3: Emerging Areas
14:00  Sho Tsuji
14:25  Aaron Buss
14:50  Ajaz Bhat
15:15  Poster Session
Talk Session 4: Active Learning
16:10  Michael Spranger
16:35  Chen Yu
17:15  Discussion and wrap-up

Website hosted by the Indiana University Computer Vision Lab and created by Ryder McMinn