Enhancing Lifelogging Privacy by Detecting Screens

Mohammed Korayem, Robert Templeman, Dennis Chen, David Crandall, Apu Kapadia

Low-cost, lightweight wearable cameras let us record (or "lifelog") our lives from a first-person perspective for purposes ranging from fun to therapy. But they also capture deeply private information that users may not want recorded, especially if images are stored in the cloud or visible to other people. For example, recent studies suggest that computer screens may be lifeloggers' single greatest privacy concern, because many people spend so much time in front of devices that display sensitive information. In this paper, we investigate using computer vision to automatically detect monitors in photo lifelogs. We evaluate our approach on an existing dataset from an in-situ user study of 36 people who wore cameras for a week, and show that our technique could help manage privacy in the upcoming era of wearable cameras.


Downloads

You can download our model files to test our classifiers on your own data. You will first need to download and install Caffe, an open-source deep learning toolkit created by the Berkeley AI Research (BAIR) lab. Then, for each classifier you are interested in, download two files: the Caffe prototxt file, which defines the structure of the convolutional neural network, and the model file, which contains its learned parameters.
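As a rough sketch of how a downloaded prototxt/model pair can be run on a single image with Caffe's Python interface (pycaffe): the file names deploy.prototxt, screen_classifier.caffemodel, and photo.jpg, and the output blob name prob, are placeholders here, not the actual names used in our released files.

    import caffe

    # Placeholder file names -- substitute the prototxt/model pair you downloaded.
    PROTOTXT = 'deploy.prototxt'
    MODEL = 'screen_classifier.caffemodel'
    IMAGE = 'photo.jpg'

    caffe.set_mode_cpu()  # or caffe.set_mode_gpu() if a CUDA-capable GPU is available
    net = caffe.Net(PROTOTXT, MODEL, caffe.TEST)

    # Standard pycaffe preprocessing: HWC -> CHW, RGB -> BGR, [0,1] -> [0,255].
    transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
    transformer.set_transpose('data', (2, 0, 1))
    transformer.set_channel_swap('data', (2, 1, 0))
    transformer.set_raw_scale('data', 255)

    image = caffe.io.load_image(IMAGE)
    net.blobs['data'].data[...] = transformer.preprocess('data', image)

    # Forward pass; 'prob' is the conventional name of the softmax output blob,
    # which may differ in the released prototxt.
    output = net.forward()
    print(output['prob'])

The printed vector gives the classifier's class probabilities for the input image; which index corresponds to "screen present" depends on how the particular classifier's labels are defined.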

Acknowledgements

National Science Foundation
Google
Nvidia
Lilly Endowment
IU Pervasive Technology Institute
IU Vice Provost for Research