Enhancing Lifelogging Privacy by Detecting Screens

Mohammed Korayem, Robert Templeman, Dennis Chen, David Crandall, Apu Kapadia

Low-cost, lightweight wearable cameras let us record (or "lifelog") our lives from a first-person perspective for purposes ranging from fun to therapy. But they also capture deeply private information that users may not want recorded, especially if images are stored in the cloud or visible to other people. For example, recent studies suggest that computer screens may be lifeloggers' single greatest privacy concern, because many people spend so much time in front of devices that display sensitive information. In this paper, we investigate using computer vision to automatically detect computer monitors in photo lifelogs. We evaluate our approach on an existing dataset from an in-situ user study of 36 people who wore cameras for a week, and show that our technique could help manage privacy in the upcoming era of wearable cameras.

Papers and presentations

BibTeX entries:

@inproceedings{screenavoider2016chi,
    title = {Enhancing Lifelogging Privacy by Detecting Screens},
    author = {Mohammed Korayem and Robert Templeman and Dennis Chen and David Crandall and Apu Kapadia},
    booktitle = {ACM CHI Conference on Human Factors in Computing Systems (CHI)},
    year = {2016}
}

@techreport{screenavoider2014arxiv,
    title = {ScreenAvoider: Protecting Computer Screens from Ubiquitous Cameras},
    author = {Mohammed Korayem and Robert Templeman and Dennis Chen and David Crandall and Apu Kapadia},
    institution = {arXiv.org},
    number = {arXiv:1412.0008},
    year = {2014}
}

Downloads

You can download our model files to test our classifiers on your own data. You will first need to download and install Caffe, an open-source deep learning toolkit created by Berkeley AI Research (BAIR). Then, for each classifier you're interested in, you'll need to download two files: the Caffe prototxt file, which defines the structure of the convolutional neural network, and the model file, which contains the learned parameters. A minimal usage sketch follows.
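For example, here is a minimal pycaffe sketch for running one of the classifiers on a single image. It assumes the deploy prototxt takes an input blob named 'data' and produces a softmax output blob named 'prob', as in standard Caffe classification networks; the filenames are placeholders for whichever classifier's prototxt and model files you downloaded.

import caffe

caffe.set_mode_cpu()  # or caffe.set_mode_gpu() if CUDA is available

# Load the network structure (prototxt) and learned weights (caffemodel).
# Both filenames below are placeholders for the downloaded files.
net = caffe.Net('screen_classifier_deploy.prototxt',
                'screen_classifier.caffemodel',
                caffe.TEST)

# Standard Caffe preprocessing: channels-first layout, BGR order, 0-255 scale.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))
transformer.set_raw_scale('data', 255)
transformer.set_channel_swap('data', (2, 1, 0))
# If a mean file accompanies the model, also call transformer.set_mean('data', ...).

# Classify one image.
image = caffe.io.load_image('photo.jpg')  # float RGB in [0, 1]
net.blobs['data'].data[...] = transformer.preprocess('data', image)
probs = net.forward()['prob'][0]
print('Predicted class: %d (p = %.3f)' % (probs.argmax(), probs.max()))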

Acknowledgements

This project was supported in part by the National Science Foundation, Google, NVIDIA, the Lilly Endowment, the IU Pervasive Technology Institute, and the IU Office of the Vice Provost for Research.
The IU Computer Vision Lab's projects and activities have been funded, in part, by grants and contracts from the Air Force Office of Scientific Research (AFOSR), the Defense Threat Reduction Agency (DTRA), Dzyne Technologies, EgoVid, Inc., ETRI, Facebook, Google, Grant Thornton LLP, IARPA, the Indiana Innovation Institute (IN3), the IU Data to Insight Center, the IU Office of the Vice Provost for Research through an Emerging Areas of Research grant, the IU Social Sciences Research Commons, the Lilly Endowment, NASA, National Science Foundation (IIS-1253549, CNS-1834899, CNS-1408730, BCS-1842817, CNS-1744748, IIS-1257141, IIS-1852294), NVidia, ObjectVideo, Office of Naval Research (ONR), Pixm, Inc., and the U.S. Navy. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.