Active Vision, Attention, and Learning

In Conjunction with ICDL-Epirob 2018


September 17th, 2018

Waseda University Building 14, Room 408
Shinjuku-ku, Tōkyō-to 169-0071
http://icdl-epirob2018.ogata-lab.jp/venue/
For more information, please contact aval-2018@googlegroups.com

Children are active learners who explore the world, creating the experiences and collecting the statistical data from which they learn words and visual objects. Recent research shows that toddlers use body movements such as gaze shifts, head turns, and hand actions to actively select some visual experiences over others, and that this active vision process enables selective attention and successful learning. Meanwhile, in robotics, much work examines how the bodily actions a robot generates create just-in-time data for solving challenging real-time control tasks, for example by exploiting the dependence of visual experience on the structure of the body and its actions. From low-level sensorimotor processes to high-level cognitive learning processes, multiple components of cognitive and learning systems benefit from being “active.” This workshop focuses on three specific topics:

Active Vision and Egocentric Vision

Humans and robots must process visual information in real time. Lightweight wearable cameras can collect video from humans’ first-person (egocentric) perspectives, helping to reveal how humans learn to actively perceive the world. Egocentric vision also connects naturally to robots, which, like human toddlers, use visual feedback to perceive the world and control their motion.

Active Attention

Attention plays a pivotal role in cognitive and learning systems, connecting low-level visual processes with high-level learning processes. In both natural and artificial intelligence research, understanding real-time attention mechanisms is key to understanding learning.

Active Learning

Curiosity-driven active learning has become an important topic in the ICDL-Epirob community. Just like curious children, a robot learner can autonomously and actively select and order its own learning experiences, creating its own curriculum to acquire new skills with minimal supervision.


Call for Papers and Extended Abstracts

We welcome extended abstracts on topics broadly related to active learning, egocentric vision, and attention. Extended abstracts may describe preliminary, ongoing, or published work; they should use the ICDL-Epirob paper template and be no more than two pages long, including references. Accepted submissions will be invited for poster presentation at the workshop, and the extended abstracts will be published and archived on this workshop website. Please submit extended abstracts to aval-2018@googlegroups.com by August 15, 2018.


Keynote speakers

Yukie Nagai
National Institute of Information and Communications Technology

Yusuke Sugano
Osaka University

Yoichi Sato
University of Tokyo

Michael Spranger
Sony Computer Science Laboratories Inc.

Jochen Triesch
Frankfurt Institute of Advanced Studies


Emerging areas session speakers

Ajaz Bhat
University of East Anglia

Aaron Buss
University of Tennessee

Sho Tsuji
PSL Research University


Organizers

Chen Yu
Indiana University

David Crandall
Indiana University


Program

9:00  Welcome
Talk Session 1: Active Vision
9:05  Jochen Triesch - Learning where to look: infants and robots
9:45  Yoichi Sato - Attention and behavior from first-person views
10:25  David Crandall
10:50  Coffee break
Talk Session 2: Active Attention
11:15  Yusuke Sugano - Appearance-based gaze estimation for daily-life unconstrained attention sensing
11:55  Yukie Nagai - Where and Why Infants Look: A computational account for the development of visual attention
12:35  Lunch
Talk Session 3: Emerging Areas
14:00  Sho Tsuji - The role of contingent reactivity in the absence of a human teacher on early word learning
14:25  Aaron Buss - A unified theory of dimensional attention development: Integrating implicit and explicit attention
14:50  Ajaz Bhat
15:15  Short Talk: Stefan Heinrich, Matthias Kerzel, Erik Strahl, Stefan Wermter - Embodied Multi-modal Interaction in Language Learning: the EMIL data collection
15:27  Short Talk: Pablo Lanillos - Active Attention Applications in Robotics
15:39  Short Talk: Tanakan Pramot, Zeynep Yucel, Akito Monden, Pattara Leelaprute - Effect of motivation on gaze behavior over time
15:55  Coffee break
Talk Session 4: Active Learning
16:10  Michael Spranger - Learning to Communicate
16:35  Chen Yu
17:15  Discussion and wrap-up

Website hosted by the Indiana University Computer Vision Lab and created by Ryder McMinn