News Archive

Congratulations to Dr. Stefan Lee and Dr. Sven Bambach!

Congratulations to our lab’s two newest alumni!

Dr. Stefan Lee successfully defended his dissertation, Data-driven Computer Vision for Science and the Humanities, on July 20. He now joins the Machine Learning and Perception Group at Virginia Tech as a postdoc. Congratulations Stefan!

Dr. Sven Bambach defended his dissertation, Analyzing Hands with First-Person Computer Vision, on August 22. He simultaneously received not one but two degrees — a PhD in Computer Science, and a PhD in Cognitive Science. We’re very pleased that he’ll stay at IU as our very first lab postdoc. Congratulations Sven!


Welcome new IU faculty member Prof. Michael Ryoo!

The School of Informatics and Computing is pleased to welcome Prof. Michael Ryoo! Michael joins IU from the NASA Jet Propulsion Lab, where he was a Research Technologist. He received his B.S. degree from KAIST and his Ph.D. from the University of Texas at Austin. His research interests are in computer vision and robotics, with a particular focus on human activity recognition in video. He’s teaching a graduate computer vision seminar in Fall 2015 and an undergraduate computer vision course in Spring 2016.


Congratulations Dr. Mohammed Korayem!

Congratulations to Dr. Mohammed Korayem for successfully defending his thesis, Social and Egocentric Image Classification for Scientific and Privacy Applications. Mohammed’s thesis applies deep learning-based image classification techniques to mining social media and analyzing first-person image collections. Dr. Korayem will join CareerBuilder as a Data Scientist.


New NSF grant to study first-person camera privacy


Vision lab PI David Crandall, IU Privacy Lab PI Apu Kapadia, and Denise Anthony at Dartmouth College have been awarded a four-year National Science Foundation grant to study privacy in first-person and wearable camera devices. From the award abstract:

Cameras are now pervasive on consumer devices, including smartphones, laptops, tablets, and new wearable devices like Google Glass and the Narrative Clip lifelogging camera. The ubiquity of these cameras will soon create a new era of visual sensing applications, for example, devices that collect photos and videos of our daily lives, augmented reality applications that help us understand and navigate the world around us, and community-oriented applications, e.g., where cameras close to a crisis are tasked with obtaining a real-time “million-eye view” of the scene to guide first responders in an emergency. These technologies have significant implications for individuals and society, offering potential benefits for individuals and communities but also posing significant hazards, including privacy invasion for individuals and, if unchecked, a chilling effect in the public square as surveillance becomes pervasive. This research couples a sociological understanding of privacy with an investigation of technical mechanisms to address these needs. Issues such as context (e.g., capturing images for public use may be okay at a public event, but not in the home) and content (are individuals recognizable?) will be explored on both technical and sociological fronts: What can we determine about images, what does this mean in terms of privacy risk, and how can systems protect against risks to privacy?

Specific research challenges to be addressed include formulating technical means, through image and context analysis, to improve the privacy of people captured in images; exploring the unique privacy needs of camera owners and how image and contextual analysis can improve privacy; and developing image transformations that afford privacy while also enabling novel applications using the cloud and crowdsourcing. Companion sociological studies will examine how context affects privacy perceptions, how new technologies affect those perceptions, and image-sharing behavior. These studies will guide each other, ensuring that mechanisms such as image transformation/privatization, non-visual transformations (e.g., altering or obscuring image metadata), and other techniques improve privacy protection against automated analysis, and clarifying how they affect individual perceptions of the invasiveness of the technology. Through a deeper understanding of the privacy implications of such technologies from both a social and technical perspective, the proposed research has the potential for profound and positive societal impact by laying a foundation for privacy-sensitive visual sensing techniques for a society where cameras are ubiquitous.

Check out our recent work in this and other areas on our lab projects page.


Congratulations Haipeng!


Left to Right: Prof. David Crandall, Prof. David Leake, Haipeng, Prof. Johan Bollen, Prof. YY Ahn

Congratulations to the newest vision lab alum, Dr. Haipeng Zhang, on successfully defending his Ph.D. dissertation, Analyzing the Dynamics between User-sensed Data and the Real World. Haipeng’s work has studied how to use online and mobile data to estimate properties of and find connections in the physical world, demonstrated with applications to predicting consumer behavior, ecological events, mobile user properties, and concept relationships.


Congratulations Kun!


Left to Right: Prof. David Crandall, Prof. Devi Parikh (via Skype), Prof. David Leake, Prof. Russell Lyons (via Skype), Kun, and Prof. Kris Hauser.

Congratulations to Dr. Kun Duan on successfully defending his Ph.D.! His thesis, Conditional Random Field Models for Structured Visual Object Recognition, presents novel CRF-based solutions for three computer vision applications: human pose recognition, large-scale multimodal image clustering, and local attribute discovery for object recognition.


Best paper at CVPR workshop


Congratulations to Stefan Lee, Sven Bambach, and David Crandall for receiving the best paper award at the 3rd Workshop on Egocentric (First-person) Vision in conjunction with CVPR 2014! Check out the paper, This Hand Is My Hand: A Probabilistic Approach to Hand Disambiguation in Egocentric Video.


Google Research Award

Google has awarded a grant to the IU Vision Lab and the IU Privacy Lab to investigate computer vision-based privacy-preserving technologies for wearable cameras. Read more about our joint Vision for Privacy project, or check out our recent NDSS 2013 and NDSS 2014 papers.


PlaceAvoider Featured in MIT Technology Review

Congratulations to lab member Mohammed Korayem and lab PI David Crandall, as well as collaborators Robert Templeman and Apu Kapadia from the IU Privacy Lab, for their PlaceAvoider project being featured in the MIT Technology Review. Read the full article here.

In response to the rise of ubiquitous computing devices, like Google Glass, that can record images and video, the team developed “…PlaceAvoider, a technique for owners of first-person cameras to ‘blacklist’ sensitive spaces (like bathrooms and bedrooms). PlaceAvoider recognizes images captured in these spaces and flags them for review before the images are made available to applications. PlaceAvoider performs novel image analysis using both fine-grained image features (like specific objects) and coarse-grained, scene-level features (like colors and textures) to classify where a photo was taken.”
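To give a flavor of the coarse-grained, scene-level half of that idea only, here is a minimal Python sketch that represents each photo by a global color histogram plus simple texture statistics and trains a toy classifier to flag photos that look like a blacklisted room. This is not the PlaceAvoider implementation (which also uses fine-grained, object-level features); all file names, labels, and thresholds below are hypothetical.

```python
# Toy sketch of coarse scene-level classification -- not the actual PlaceAvoider code.
import cv2
import numpy as np
from sklearn.svm import SVC

def coarse_scene_features(image_path):
    """Global HSV color histogram plus simple gradient-based texture statistics."""
    img = cv2.imread(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Color: normalized 8x8x8 HSV histogram.
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [8, 8, 8],
                        [0, 180, 0, 256, 0, 256]).flatten()
    hist /= (hist.sum() + 1e-6)
    # Texture: mean and standard deviation of gradient magnitudes.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return np.concatenate([hist, [mag.mean(), mag.std()]])

# Hypothetical training images: photos of a sensitive space vs. everything else.
train_paths = ["bathroom_01.jpg", "bathroom_02.jpg", "kitchen_01.jpg", "office_01.jpg"]
train_labels = [1, 1, 0, 0]  # 1 = blacklisted space, 0 = other

X = np.array([coarse_scene_features(p) for p in train_paths])
clf = SVC(kernel="rbf", gamma="scale").fit(X, train_labels)

# Flag a new first-person photo for review if it resembles the blacklisted space.
if clf.predict([coarse_scene_features("new_photo.jpg")])[0] == 1:
    print("Flagged for review before release to applications.")
```

See the paper linked below for the actual approach and evaluation.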

Full Text Paper [PDF]


2013 in Review

Now that 2013 has come to a close, it’s a good opportunity to pause and look back at the progress we’ve made in the past year. By any metric, it has been a busy year with:

  • 20 papers submitted, of which
  • roughly 10 were accepted (with many still under review),
  • 2 best paper awards,
  • over 25 talks,
  • at least 5 poster presentations,
  • around 10 proposals submitted with 4 successfully funded,
  • at least 6 PhD exams,
  • 2 great REUs (Research Experiences for Undergraduates),
  • and countless credit-hours of successful teaching and mentoring.

With continued effort, 2014 is sure to be even better.