2023

  • Weslie Khoo, Long-Jing Hsu, Kyrie Jig Amon, Pranav Vijay Chakilam, Wei-Chu Chen, Zachary Kaufman, Agness Lungu, Hiroki Sato, Erin Seliger, Manasi Swaminathan, et al., "Spill the Tea: When Robot Conversation Agents Support Well-being for Older Adults," Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, 2023. [ bibtex]
  • Feng Cheng, Xizi Wang, Jie Lei, David Crandall, Mohit Bansal, Gedas Bertasius, "VindLU: A Recipe for Effective Video-and-Language Pretraining," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. [ bibtex]
  • Sam Goree, Weslie Khoo, David Crandall, "Correct for Whom? Subjectivity and the Evaluation of Personalized Image Aesthetics Assessment Models," AAAI Conference on Artificial Intelligence, 2023. [ bibtex]
  • Tianfei Zhou, Fatih Porikli, David Crandall, Luc Van Gool, Wenguan Wang, "A Survey on Deep Learning Techniques for Video Segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2023. (impact factor = 17.861, accepted, to appear) [ bibtex]
  • Sam Goree, David Crandall, Norman Su, "'It Was Really All About Books': Speech-like Techno-Masculinity in the Rhetoric of Dot-Com Era Web Design Books," ACM Transactions on Computer-Human Interaction, 2023. (impact factor = 3.147, accepted, to appear) [ bibtex]
  • Jane Yang, Linda Smith, David Crandall, Chen Yu, "Using Manual Actions to Create Visual Saliency: An Outside-in Solution to Sustained Attention and Joint Attention," Annual Conference of the Cognitive Science Society (CogSci), 2023. [ bibtex]
  • Zheng Chen, Zhengming Ding, David Crandall, Lantao Liu, "Polyline Generative Navigable Space Segmentation for Autonomous Visual Navigation," IEEE Robotics and Automation Letters (RA-L), 2023. (impact factor = 3.741, accepted, to appear) [ bibtex]

2022

  • David Leake, "Case-Based Explanation: Making the Implicit Explicit," Proceedings of XCBR-22: Fourth Workshop on Case-Based Reasoning for the Explanation of Intelligent Systems, ICCBR-22 Workshop Proceedings, 2022. (in press) [ Paper] [ bibtex]
  • Ziwei Zhao, David Leake, Xiaomeng Ye, David Crandall, "Generating Counterfactual Images: Toward a C2C-VAE Approach," International Conference on Case-Based Reasoning Workshop on Case-Based Reasoning for the Explanation of Intelligent Systems, 2022. [ Paper] [ bibtex]
  • Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Christian Fuegen, Abrham Gebreselasie, Cristina Gonzalez, James Hillis, Xuhua Huang, Yifei Huang, Wenqi Jia, Weslie Khoo, Jachym Kolar, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Yunyi Zhu, Pablo Arbelaez, David Crandall, Dima Damen, Giovanni Maria Farinella, Bernard Ghanem, Vamsi Krishna Ithapu, Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik, "Ego4D: Around the World in 3,000 Hours of Egocentric Video," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. [ Paper] [ bibtex]
  • Yu Yao, Xizi Wang, Mingze Xu, Zelin Pu, Yuchen Wang, Ella Atkins, David Crandall, "DoTA: Unsupervised Detection of Traffic Anomaly in Driving Videos," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2022. (impact factor = 17.861, accepted, to appear) [ bibtex]
  • Zehua Zhang, David Crandall, "Hierarchically Decoupled Spatial-Temporal Contrast for Self-supervised Video Representation Learning," IEEE Winter Conference on Applications of Computer Vision (WACV), 2022. (35.0% acceptance rate) [ Paper] [ bibtex]
  • Zhenhua Chen, Chuhua Wang, David Crandall, "Semantically Stealthy Adversarial Attacks against Segmentation Models," IEEE Winter Conference on Applications of Computer Vision (WACV), 2022. (35.0% acceptance rate) [ Paper] [ bibtex]
  • Satoshi Tsutsui, Yanwei Fu, David Crandall, "Reinforcing Generated Images via Meta-learning for One-Shot Fine-Grained Visual Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2022. (impact factor = 17.861, accepted, to appear) [ bibtex]
  • Xiankai Lu, Wenguan Wang, Jianbing Shen, David Crandall, Jiebo Luo, "Zero-Shot Video Object Segmentation with Co-Attention Siamese Networks," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2022. (impact factor = 17.861) [ bibtex]
  • Junbo Yin, Jianbing Shen, Xin Gao, David Crandall, Ruigang Yang, "Graph Neural Network and Spatiotemporal Transformer Attention for 3D Video Object Detection from Point Clouds," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2022. (impact factor = 17.861, accepted, to appear) [ bibtex]
  • Chuhua Wang, Yuchen Wang, Mingze Xu, David Crandall, "Stepwise Goal-Driven Networks for Trajectory Prediction," IEEE Robotics and Automation Letters (RA-L), 2022. (impact factor = 3.741, accepted, to appear) [ bibtex]
  • David Leake, Zachary Wilkerson, David Crandall, "Extracting Case Indices from Convolutional Neural Networks: A Comparative Study," International Conference on Case-Based Reasoning (ICCBR), 2022. [ bibtex]
  • Xiaomeng Ye, David Leake, David Crandall, "Case Adaptation with Neural Networks: Capabilities and Limitations," International Conference on Case-Based Reasoning (ICCBR), 2022. [ bibtex]
  • Satoshi Tsutsui, Xizi Wang, Guangyuan Weng, Yayun Zhang, David Crandall, Chen Yu, "Action Recognition Based on Cross-Situational Action-Object Statistics," IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL), 2022. [ bibtex]
  • Zehua Zhang, David Crandall, Michael Proulx, Sachin Talathi, Abhishek Sharma, "Can Gaze Inform Egocentric Action Recognition?," ACM Symposium on Eye Tracking Research and Applications (ETRA), 2022. [ bibtex]
  • Vibhas Vats, David Crandall, "Controlling the Quality of Distillation in Response-Based Network Compression," AAAI International Workshop on Practical Deep Learning in the Wild, 2022. [ Paper] [ bibtex]
  • Xiaomeng Ye, Ziwei Zhao, David Leake, David Crandall, "Generation and Evaluation of Creative Images from Limited Data: A Class-to-Class VAE Approach," International Conference on Computational Creativity (ICCC), 2022. [ bibtex] [ Video]

The IU Computer Vision Lab's projects and activities have been funded, in part, by grants and contracts from the Air Force Office of Scientific Research (AFOSR), the Defense Threat Reduction Agency (DTRA), Dzyne Technologies, EgoVid, Inc., ETRI, Facebook, Google, Grant Thornton LLP, IARPA, the Indiana Innovation Institute (IN3), the IU Data to Insight Center, the IU Office of the Vice Provost for Research through an Emerging Areas of Research grant, the IU Social Sciences Research Commons, the Lilly Endowment, NASA, National Science Foundation (IIS-1253549, CNS-1834899, CNS-1408730, BCS-1842817, CNS-1744748, IIS-1257141, IIS-1852294), NVidia, ObjectVideo, Office of Naval Research (ONR), Pixm, Inc., and the U.S. Navy. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.