Children are active learners who explore the world to create experiences and collect statistical data to learn words and visual objects. Recent research shows that toddlers use body movements such as gaze shifts, head turns, and hand actions to actively select some visual experiences over others. This process allows their active vision system to generate selective attention and support successful learning. Meanwhile, in robotics, much work examines how bodily actions generated by a robot create just-in-time data for solving challenging real-time control tasks: for example, how visual experience depends on the structure of the body and its actions. From low-level sensorimotor processes to high-level cognitive learning processes, there are multiple components in cognitive and learning systems that benefit from being “active.” This workshop focuses on three specific topics:
Active Vision and Egocentric Vision
Humans and robots must process visual information in real time. Lightweight wearable cameras can collect video from humans’ first-person (egocentric) perspectives and help reveal how humans learn to actively perceive the world. Egocentric vision also naturally connects to robots, who, just like human toddlers, use visual feedback to perceive the world and control their motion.
Attention
Attention plays a pivotal role in cognitive and learning systems, as it connects low-level visual and high-level learning processes. In both natural and artificial intelligence research, understanding real-time attention mechanisms is key to understanding learning.
Curiosity-Driven Active Learning
Curiosity-driven active learning has become an important topic in the ICDL-Epirob community. Just like curious children, a robot learner can autonomously and actively select and order its own learning experiences, creating its own curriculum to acquire new skills with minimal supervision.
Call for Papers and Extended Abstracts
We welcome extended abstracts on topics broadly related to active learning, egocentric vision, and attention. Extended abstracts may be about preliminary, ongoing, or published work, and should be formatted with the ICDL-Epirob paper template and be no more than 2 pages in length (including references). Accepted submissions will be invited for poster presentation at the workshop, and the extended abstracts will be published and archived on this workshop website. Please submit extended abstracts to email@example.com by August 15, 2018.
Invited speakers
National Institute of Information and Communications Technology
University of Tokyo
Sony Computer Science Laboratories Inc.
Frankfurt Institute of Advanced Studies
Emerging areas session speakers
University of East Anglia
University of Tennessee
PSL Research University
Talk Session 1: Active Vision

|Time|Speaker and Title|
|---|---|
|9:05|Jochen Triesch - Learning where to look: infants and robots|
|9:45|Yoichi Sato - Attention and behavior from first-person views|

Talk Session 2: Active Attention

|Time|Speaker and Title|
|---|---|
|11:15|Yusuke Sugano - Appearance-based gaze estimation for daily-life unconstrained attention sensing|
|11:55|Yukie Nagai - Where and Why Infants Look: A computational account for the development of visual attention|

Talk Session 3: Emerging areas

|Time|Speaker and Title|
|---|---|
|14:00|Sho Tsuji - The role of contingent reactivity in the absence of a human teacher on early word learning|
|14:25|Aaron Buss - A unified theory of dimensional attention development: Integrating implicit and explicit attention|
|15:15|Short Talk: Stefan Heinrich, Matthias Kerzel, Erik Strahl, Stefan Wermter - Embodied Multi-modal Interaction in Language learning: the EMIL data collection [abstract]|
|15:27|Short Talk: Pablo Lanillos - Active Attention Applications in Robotics|
|15:39|Short Talk: Tanakan Pramot, Zeynep Yucel, Akito Monden, Pattara Leeaprute - Effect of motivation on gaze behavior over time|

Talk Session 4: Active learning

|Time|Speaker and Title|
|---|---|
|16:10|Michael Spranger - Learning to Communicate|
|17:15|Discussion and wrap-up|