Interested in artificial intelligence, machine learning, robotics, computer vision, natural language processing, and/or broadly related areas? All are welcome! Most talks for Spring 2018 will be held on Mondays from 2:30pm to 3:30pm in 0117 Luddy Hall. Subscribe to our mailing list by sending a blank email to list@list.indiana.edu with the subject line: “subscribe iis-seminar-l”. Contact David Crandall, djcran@indiana.edu, with questions or suggested speakers.
Spring 2018
Prof. Karl MacDorman, IUPUI School of Informatics and Computing
Monday January 29, 2:30pm
107 Informatics West
Computer-modeled characters resembling real people sometimes elicit cold, eerie feelings. This effect, called the uncanny valley, has been attributed to uncertainty about whether the character is human, living, or real. Uncertainty, however, explains neither why anthropomorphic characters lie in the uncanny valley nor their characteristic eeriness. We propose that realism inconsistency causes anthropomorphic characters to appear unfamiliar, despite their physical similarity to real people, owing to perceptual narrowing. We further propose that their unfamiliar, fake appearance elicits cold, eerie feelings, motivating threat avoidance. In our experiment, 365 participants categorized and rated objects, animals, and humans whose realism was manipulated along consistency-reduced and control transitions. These data were used to quantify a Bayesian model of categorical perception. In hypothesis testing, we found that reducing realism consistency made animals and humans, but not objects, appear less familiar, thereby eliciting cold, eerie feelings. Next, structural equation models elucidated the relations among realism inconsistency (measured objectively in a two-dimensional Morlet wavelet domain inspired by the primary visual cortex), realism, familiarity, eeriness, and warmth. The fact that reducing realism consistency elicited cold, eerie feelings only toward anthropomorphic characters, and only when it lessened familiarity, indicates the role of perceptual narrowing in the uncanny valley.
Karl F. MacDorman is an associate professor in the Indiana University School of Informatics and Computing, Indianapolis, where he is also a program director and associate dean. He completed a B.A. at U.C. Berkeley in 1988 and a Ph.D. at Cambridge University in 1997, both in computer science. MacDorman was previously an associate professor (2003–2005) and assistant professor (1997–2000) at Osaka University. He has published more than 100 papers in human-computer interaction, robotics, machine learning, and cognitive science, accessible from macdorman.com.
Prof. Minje Kim, IU Intelligent Systems Engineering
Monday February 5, 2:30pm
0117 Luddy Hall
Advances in the performance of machine learning and deep learning on challenging pattern recognition tasks are encouraging the use of AI models in applications with limited resources. Hence, the efficiency of machine learning models, especially at test time, is becoming more important. This talk introduces two streamlined machine learning models: bitwise matrix factorization and psychoacoustically weighted cost functions for network compression. Bitwise matrix factorization recasts a dictionary-based matrix factorization problem in a binarized feature space, so that the posterior probabilities are calculated in a bitwise fashion; it shows promising performance in denoising applications. Next, a psychoacoustically weighted cost function is introduced to lead deep neural networks to more relaxed local minima. Because the network can focus on perceptually important sound components rather than inaudible ones, it can produce perceptually equivalent speech enhancement results with a less complex network topology.
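To make the bitwise idea concrete, here is a minimal sketch (not the speaker's actual algorithm) of matching a noisy frame against a binarized dictionary using Hamming similarity, so that the core comparison reduces to bitwise operations; the dictionary, thresholds, and sizes are all illustrative assumptions.

```python
# A minimal sketch: nearest-atom search over a binarized dictionary using Hamming
# similarity, so the comparison could be done with XNOR + popcount on hardware.
import numpy as np

rng = np.random.default_rng(0)

def binarize(X, thresholds):
    """Binarize features by comparing each dimension against a threshold."""
    return (X > thresholds).astype(np.uint8)

D = rng.standard_normal((256, 64))      # hypothetical dictionary: 256 atoms, 64 dims
x = rng.standard_normal(64)             # a noisy test frame (e.g., a spectrogram slice)

t = np.median(D, axis=0)                # per-dimension thresholds (an assumption)
D_bits, x_bits = binarize(D, t), binarize(x, t)

# Hamming similarity = number of agreeing bits per atom
similarity = (D_bits == x_bits).sum(axis=1)
best_atoms = np.argsort(similarity)[-5:][::-1]   # top-5 matching atoms
print(best_atoms, similarity[best_atoms])
```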
Prof. Luis Rocha, IU Informatics & Cognitive Science
Monday February 12, 2:30pm
0117 Luddy Hall
Social media, electronic health records, and mobile application data enable population-level observation tools with the potential to speed translational research. I will discuss ongoing work in our group on this front. First, I will demonstrate Instagram’s importance for public-health surveillance of drug interactions. Our methodology is based on the longitudinal analysis of social media user timelines at different timescales: day, week, and month. Weighted graphs are built from the co-occurrence of terms from various biomedical dictionaries (drugs, symptoms, natural products, side effects, and sentiment) at various timescales. We showed that spectral methods, shortest paths, and distance closures [2,3] reveal relevant drug-drug and drug-symptom pairs, as well as clusters of terms and drugs associated with the complex pathology of depression [1].
Another important component of our complex systems approach to public health is the lexical sentiment analysis of longitudinal social media content, which provides a useful tool to quantify the emotional states associated with collective social behavior. We have recently used these methods to provide strong evidence that the cyclic sexual and reproductive behavior of human populations is mostly driven by culture. This was done by measuring interest in sex at a planetary scale, via birth and Google Trends data, as well as independent measurement of collective moods on Twitter. We showed that interest in sex is independent of geographical location and is instead correlated with specific moods characteristic of major cultural and religious celebrations [4]. This work shows how computational social science techniques can be used to test novel hypotheses of public-health relevance. In this case, we were able to provide strong evidence that a reigning biological hypothesis — that human reproductive cycles are an adaptation to the seasonal, hemisphere-dependent solar cycles — is incompatible with newly available planetary data of online behavior. Indeed, the cultural hypothesis — that human sexual cycles are driven by collective moods associated with cultural and religious celebrations — is more likely. I will discuss the methodology behind this analysis, introducing a new sentiment analysis technique based on the singular value decomposition [5].
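As background on the SVD step, here is a toy sketch (not the authors' pipeline) of decomposing a mood-term-by-week count matrix so that the leading singular vectors summarize dominant collective mood cycles; the matrix and all names are illustrative assumptions.

```python
# A toy sketch: SVD of a (mood term x week) count matrix; the first right singular
# vector is the dominant temporal mode of collective mood.
import numpy as np

rng = np.random.default_rng(1)
n_terms, n_weeks = 50, 104                 # e.g., 50 mood-related terms over 2 years
M = rng.poisson(5, size=(n_terms, n_weeks)).astype(float)

M -= M.mean(axis=1, keepdims=True)         # center each term's time series
U, s, Vt = np.linalg.svd(M, full_matrices=False)

# Vt[0] is the dominant temporal mode; U[:, 0] weights each term on that mode.
explained = s**2 / np.sum(s**2)
print("variance explained by first mode:", round(explained[0], 3))
```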
Finally, I will discuss upcoming work where we integrate social media analysis with other biomedical data such as electronic health records and genomic regulation using temporal multiplex network analysis. We exemplify the approach with an 18-month study of drug interaction occurrence in Blumenau, SC—a medium-sized city in southern Brazil—using city-wide drug dispensing data from both primary and secondary care, via the city’s Health Information System (HIS) [6].
[1] R.B. Correia, L. Li, L.M. Rocha [2016]. Pac. Symp. Biocomp. 21:492-503.
[2] T. Simas and L.M. Rocha [2015]. Network Science, 3(2):227-268.
[3] G.L. Ciampaglia, P. Shiralkar, L.M. Rocha, J. Bollen, F. Menczer, A. Flammini [2015]. PLoS One, 10(6):e0128193.
[4] I.B. Wood, P.L. Varela, J. Bollen, L.M. Rocha, J. Gonçalves-Sá [2017]. Scientific Reports, 7:17973.
[5] M. Wall, A. Rechtsteiner, and L.M. Rocha [2003]. "Singular value decomposition and principal component analysis." In: A Practical Approach to Microarray Data Analysis. Springer US, 91-109.
[6] Correia, Araujo, Mattos, Wild & Rocha [2018]. In preparation.
Prof. David Landy, IU Psychological and Brain Sciences
Monday February 19, 2:30pm
0117 Luddy Hall
Reasoning about abstract relational structures is hard. Even such a simple task as computing the value of arithmetic expressions (e.g., 2×3+8) requires the ability to combine symbols according to formal syntactic rules to generate ever more complex representations—but how can concrete, body-bound agents such as ourselves instantiate formal syntactic computations? External formal notations can play a central role in this instantiation by providing stable physical environments that are easily interpreted by powerful but domain-limited perceptual and motor processes—that is, by serving as diagrams of abstract structures. However, these notations serve multiple goals, and were developed under very different technological limitations than we currently face. I will present a theoretical approach to the use of notations in formal reasoning, experiments demonstrating the influence of these notations, and innovative user interfaces to algebra that may facilitate learning and teaching. Recognizing the importance of the physical structure of symbolic environments urges the construction of a mathematics pedagogy that puts perceptual-motor interactions with dynamic notations at the heart of syntactic understanding.
Mingze Xu, IU Computer Science
Monday March 5, 2:30pm
0117 Luddy Hall
Ground-penetrating radar on planes and satellites now makes it practical to collect 3D observations of the subsurface structure of the polar ice sheets, providing crucial data for understanding and tracking global climate change. But converting these noisy readings into useful observations is generally done by hand, which is impractical at a continental scale. Deep learning methods have surpassed the performance of traditional techniques on a wide range of problems in computer vision, but nearly all of this work has studied consumer photos, where precisely correct output is often not critical. It is less clear how well these techniques may apply on structured prediction problems where fine-grained output with high precision is required, such as in scientific imaging domains. Here we consider the problem of segmenting echogram radar data collected from the polar ice sheets, which is challenging because segmentation boundaries are often very weak and there is a high degree of noise. We propose a multi-task spatiotemporal neural network that combines 3D ConvNets (C3D) and Recurrent Neural Networks (RNNs) to estimate ice surface boundaries from sequences of tomographic radar images.
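As a rough illustration of the kind of architecture described (the layer sizes and pooling choices below are assumptions, not the authors' configuration), here is a PyTorch sketch that combines 3D convolutions over a stack of radar images with a recurrent layer that emits one boundary estimate per slice:

```python
# A minimal sketch of a 3D-conv + RNN pipeline for per-slice boundary estimation.
import torch
import torch.nn as nn

class IceBoundaryNet(nn.Module):
    def __init__(self, width=64):
        super().__init__()
        # C3D-style feature extractor over (channels, slices, height, width)
        self.c3d = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),   # pool spatial dims, keep slices
        )
        # RNN over the slice dimension predicts one boundary vector per slice
        self.rnn = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, width)          # per-slice boundary estimate

    def forward(self, x):                         # x: (batch, 1, slices, H, W)
        f = self.c3d(x)                           # (batch, 32, slices, 1, 1)
        f = f.squeeze(-1).squeeze(-1).transpose(1, 2)   # (batch, slices, 32)
        h, _ = self.rnn(f)
        return self.head(h)                       # (batch, slices, width)

net = IceBoundaryNet()
out = net(torch.randn(2, 1, 8, 64, 64))
print(out.shape)                                  # torch.Size([2, 8, 64])
```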
Prof. Rich Shiffrin, IU Psychological and Brain Sciences
Monday March 19, 2:30pm
0117 Luddy Hall
The so-called ‘reproducibility crisis’ concerns the reliability of reported data patterns. Bayesian inference has focused more on models and model selection than on data. We have developed an extended form of Bayesian inference that focuses on data (model comparison is a special case), and will present this framework in the first part of the talk. We then use this framework to deal with the issues of reproducibility. However, it is validity, not reproducibility, that is the true target of science: there are uncountable cases of replication and reproduction of invalid results by a single set of investigators or by different investigators who desire to affirm the original report. We therefore take a different approach. We suppose the interest is in the validity of some statistic based on and extracted from the data (usually one-dimensional, such as the value of a mean, interaction, or contrast). We use our inference system to produce a Bayesian posterior estimate of the size of this statistic based on three components: the reported data, prior knowledge of the likely size of this statistic, and beliefs about the likely distortions (experimenter induced or otherwise) that affect the reported statistic. We provide an example based on published reports claiming ESP.
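To fix ideas only (this is my own toy simplification, not the framework presented in the talk), here is a grid-based posterior for a true effect size that combines a reported statistic, a skeptical prior, and an assumed reporting bias; every number is made up.

```python
# A toy posterior for a true effect theta, given a reported statistic distorted by
# an assumed average inflation (e.g., selection bias).
import numpy as np

reported, se = 0.45, 0.15          # hypothetical reported effect and standard error
theta = np.linspace(-1, 1, 2001)   # grid over candidate true effect sizes

prior = np.exp(-0.5 * (theta / 0.2) ** 2)             # prior: true effects tend to be small
bias = 0.2                                             # assumed average inflation in reports
like = np.exp(-0.5 * ((reported - (theta + bias)) / se) ** 2)

w = prior * like
w /= w.sum()                                           # discrete posterior weights on the grid
print("posterior mean:", round(float(np.sum(theta * w)), 3))
```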
Adithya Vadapalli, IU Computer Science
Monday March 26, 2:30pm
0117 Luddy Hall
We present the first massively parallel computation (MPC) algorithms and hardness-of-approximation results for computing Single-Linkage Clustering of $n$ input $d$-dimensional vectors under Hamming, $\ell_1$, $\ell_2$, and $\ell_\infty$ distances. All our algorithms run in $O(\log n)$ rounds of MPC for any fixed $d$ and achieve a $(1+\epsilon)$-approximation for all distances (except Hamming, for which we give an exact algorithm). We also show constant-factor inapproximability results for $o(\log n)$-round algorithms under standard MPC hardness assumptions (for sufficiently large dimension, depending on the distance used). We demonstrate the efficiency of our Apache Spark implementation through experiments on the largest available vector datasets from the UCI machine learning repository, exhibiting speedups of several orders of magnitude.
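For reference, here is a sequential baseline (not the MPC algorithm from the talk): single-linkage clustering of $n$ $d$-dimensional vectors under the $\ell_2$ distance using SciPy's hierarchical clustering; the data and cluster count are illustrative.

```python
# A sequential single-linkage baseline via SciPy, on two synthetic blobs.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 4)),
               rng.normal(3, 0.3, (50, 4))])         # two well-separated blobs

Z = linkage(X, method="single", metric="euclidean")  # O(n^2) sequential baseline
labels = fcluster(Z, t=2, criterion="maxclust")      # cut into 2 clusters
print(np.bincount(labels))                           # roughly [0, 50, 50]
```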
Prof. Linda Smith, IU Psychological and Brain Sciences
Monday April 2, 2:30pm
0117 Luddy Hall
New efforts are using head cameras and eye-trackers worn by infants to capture everyday visual environments from the point of view of the infant learner. From this vantage point, the training sets for statistical learning develop as the sensorimotor abilities of the infant develop, yielding a series of ordered datasets for visual learning that differ in content and structure between timepoints but are highly selective at each timepoint. These changing visual environments may constitute a developmentally ordered curriculum that optimizes learning across many domains. Future advances in computer vision (and perhaps machine learning more generally) might benefit from understanding what computational mechanisms could exploit the changing regularities.
Prof. Michael Ryoo, IU Computer Science
Monday April 9, 2:30pm
0117 Luddy Hall
This talk presents (1) how machines learn to perceive actions in videos and (2) whether they can learn to execute their own actions from videos. We first present computational models to learn representations optimized for recognizing human activities from videos. We introduce video-based convolutional neural networks (CNNs) capturing activities’ latent sub-events and super-events, and discuss how they can be used for understanding videos, such as annotating actions in sports (e.g., MLB) videos. Next, we design and compare multiple CNN architectures to learn robot action policies. Specifically, the robot learns to choose and execute actions given its frame input in order to achieve a goal. We describe this process with an imitation learning formulation in which robots learn actions from videos of human expert examples, and discuss our initial findings.
* The talk will be given by Prof. Michael Ryoo and his lab members, AJ Piergiovanni and Alan Wu.
Madhavun Candadai Vasu, IU Cognitive Science
Monday April 23, 2:30pm
0117 Luddy Hall
Information theory allows us to quantify the encoding and flow of information in biological as well as computational models of adaptive behavior. In this talk, I will discuss three fronts along which we apply information theory to computational models of brain-body-environment systems to better understand adaptive behavior. First, we generalize the Information Bottleneck principle, shedding light on the distinctive information flow patterns of dynamical perception-action tasks versus static classification-like tasks. Second, we demonstrate that as neural networks are optimized to perform a behavior, they acquire predictive information about future stimuli. Such acquisition of predictive coding is in accordance with the free-energy principle. Lastly, extending from single tasks to multiple tasks, we show that the dynamics of a neural network optimized to perform multiple tasks lead to the emergence of a unique informational path through the network for each task. These paths represent the “effective network” associated with each task, which has the potential to yield insights into the reuse of neural resources across tasks. These studies will also illustrate the methodological benefits of using computational models to study complex biological systems.
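To illustrate the notion of predictive information in the simplest possible setting (a plug-in estimate on a toy Markov chain, not the estimators used in this work):

```python
# Mutual information between present and next states of a discretized time series.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
# a sticky 3-state Markov chain, so the present is informative about the future
P = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])
s = [0]
for _ in range(5000):
    s.append(rng.choice(3, p=P[s[-1]]))
s = np.array(s)

bits = mutual_info_score(s[:-1], s[1:]) / np.log(2)   # I(present; next) in bits
print(round(bits, 3))
```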
Fall 2017
Prof. Rob Goldstone, IU Psychological and Brain Sciences
Monday September 18, 2:30pm
130 Informatics East
According to the phenomenon of categorical perception, we tend to perceive our world in terms of the categories that we have formed. Our perceptions are warped such that differences between objects that belong in different categories are accentuated, and differences between objects that fall into the same category are deemphasized. We describe a neural network model that accommodates several empirical findings related to categorical perception. The model is based on the assumption that perceptual categorization involves a set of lower level detectors tuned, with varying degrees of specificity, to various combinations of stimulus features. This tuning is modeled as circular or oval consequential regions in stimulus parameter space, with inputs falling within the zone eliciting a detector response. These detectors provide input to a higher level categorization process, which exerts a reciprocal, descending influence on the detectors, strengthening those most useful for categorization.
Prof. Whitney Yu, IUPUI School of Engineering and Technology
Monday September 25, 2:30pm
130 Informatics East
Patient-specific blood flow simulation in human arteries has emerged as a powerful research tool for quantification of complete 4-D (space+time) velocity and pressure fields and of the wall shear stress (WSS) distribution on the inner wall. Its attractive advantages include (1) the low cost of facilities, personnel, and supplies; (2) full human-subject protection; (3) the amenability to parametric analysis; and (4) direct human-subject results. Radiological scanning and animal model experimentation cannot compete with these advantages to achieve similar results with the same investment. We have recently developed a unique computational platform, named InVascular, for patient-specific, non-invasive diagnosis of the severity of vascular abnormalities and assessment of the necessity of vascular treatment. InVascular integrates advanced CFD modeling based on the patient's clinical CT/MRI imaging data with emerging GPU (graphics processing unit) parallel computing technology to enable fast quantification of velocity and pressure fields and massive numerical analysis to assess the severity of vascular abnormalities. InVascular uses unified mesoscale modeling, i.e. the lattice Boltzmann method (LBM), for both image segmentation and fluid dynamics. The LBM first solves a level-set equation to extract the 3-D morphological geometry and boundary orientation from clinical imaging data. The obtained morphological information is then seamlessly fed to the next step for solving unsteady pulsatile flow. From CT/MRI images to 4-D in vivo flow, no data transformation or intermediate software is involved, so the computation can be efficiently accelerated by GPU technology. It is estimated that a typical cardiac simulation of blood flow in a human artery can be completed within 30 minutes. This talk will focus on two ongoing clinical projects to demonstrate how engineering analysis can contribute to precision medicine: (1) noninvasive assessment of the severity of renal stenosis (hypertension) and (2) design of alternatives to the left ventricular assist device (LVAD) for minimal invasion (heart transplant).
Prof. Eduardo Izquierdo, IU SICE and Cognitive Science
Monday October 2, 2:30pm
130 Informatics East
One of the grand scientific challenges of this century is to understand how behavior is grounded in the interaction between an organism’s brain, its body, and its environment. Although much attention and many resources are focused on understanding the human brain, I will argue that the study of simpler organisms is an ideal place to begin to address this challenge. I will introduce the nematode worm Caenorhabditis elegans, which has just 302 neurons, the only fully reconstructed connectome at the cellular level, and a rich behavioral repertoire that we are still discovering. I will describe a computational approach to address this grand challenge, and lay out some of the advantages of expressing our understanding in equations and computational models rather than just words. I will describe our unique methodology for exploring the unknown biological parameters of the model through the use of evolutionary algorithms. We train the neural networks on what they should do, with little or no instruction on how to do it. The effort is then to analyze and understand the evolved solutions as a way to generate novel, often unexpected, hypotheses. As an example, I will focus on how the rhythmic pattern of locomotion is both generated and propagated along the body. If time permits, I will discuss parallel efforts to transfer our methodology to training biologically inspired artificial neural networks for classical machine learning problems.
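A minimal sketch of the kind of evolutionary search described: a population of candidate parameter vectors is scored by a fitness function and the best are mutated to form the next generation. The fitness function here is a stand-in, not the lab's locomotion simulation.

```python
# A bare-bones evolutionary algorithm with elitism and Gaussian mutation.
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    # placeholder: in practice this would simulate a neural circuit + body + environment
    return -np.sum((params - 0.5) ** 2)

pop = rng.uniform(-1, 1, size=(50, 10))          # 50 candidates, 10 parameters each
for gen in range(100):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-10:]]        # keep the 10 best
    children = elite[rng.integers(0, 10, 40)] + rng.normal(0, 0.05, (40, 10))
    pop = np.vstack([elite, children])           # elitism + mutation

best = max(pop, key=fitness)
print("best fitness:", fitness(best))
```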
Dr. Sven Bambach, IU SICE
Monday October 9, 2:30pm
130 Informatics East
Recent advances in wearable camera technology have led many cognitive psychologists to study the development of the human visual system by recording the field of view of infants and toddlers. Meanwhile, the vast success of deep learning in computer vision is driving researchers in both disciplines to aim to benefit from each other’s understanding. Towards this goal, I set out to explore how deep learning models could be used to gain developmentally relevant insight from such first-person data. In this talk, I will focus on a dataset that consists of egocentric videos collected by toddlers and parents as they jointly and freely play with a set of toys. I will present three different approaches of training Convolutional Neural Network (CNN) models to recognize the toy objects based on the head-camera data. In each case, my goal is to use deep learning as a data analysis tool that casts light on why and when the appearance statistics of objects in the toddler’s view facilitate visual object learning.
Distantly Supervised Road Segmentation
Satoshi Tsutsui, IU SICE
Monday October 16, 2:30pm
130 Informatics East
A key problem in the automatic analysis and understanding of scientific papers is extracting semantic information from non-textual paper components such as figures, diagrams, and tables. Much of this work requires an important first preprocessing step: decomposing compound multi-part figures into individual subfigures. Previous work in compound figure separation has been based on manually designed features and separation rules, which often fail for less common figure types and layouts. Moreover, few implementations for compound figure decomposition are publicly available. This work proposes a data-driven approach that separates compound figures using modern deep Convolutional Neural Networks (CNNs), training the separator in an end-to-end manner. CNNs eliminate the need for manually designing features and separation rules, but require a large amount of annotated training data. We overcome this challenge using transfer learning as well as by automatically synthesizing training exemplars. We evaluate our technique on the ImageCLEF Medical dataset, achieving 85.9% accuracy and outperforming previous techniques. We have released our implementation as an easy-to-use Python library, aiming to promote further research in scientific figure mining.
We present an approach for road segmentation that only requires image-level annotations at training time. We leverage distant supervision, which allows us to train our model using images that are different from the target domain. Using large publicly available image databases as distant supervisors, we develop a simple method to automatically generate weak pixel-wise road masks. These are used to iteratively train a fully convolutional neural network, which produces our final segmentation model. We evaluate our method on the Cityscapes dataset, where we compare it with a fully supervised approach. Further, we discuss the trade-off between annotation cost and performance. Overall, our distantly supervised approach achieves 93.8% of the performance of the fully supervised approach, while using orders of magnitude less annotation work.
Marlena Fraune, IU Cognitive Science
Monday October 23, 2:30pm
130 Informatics East
As robots become more prevalent in our world, researchers and funders alike expect that humans and robots will be able to “symbiotically coexist” and collaborate. However, little research has examined whether and how the differences we commonly see between individual and intergroup interactions among humans will affect human-robot interaction. I will discuss my research, which illustrates that group effects do occur in human robot interaction, but also depend on the type of robots, how the robots interact with each other, and the context of interaction. These studies raise ethical questions about how scholars should design robots for interaction with humans and how humans might be negatively affected in circumstances under which people favor robots over humans.
Prof. Franco Pestilli, IU Psychological and Brain Sciences
Monday November 6, 2:30pm
130 Informatics East
The ability to map brain networks in living individuals is fundamental in efforts to chart the relation between brain and behavior in health and disease. We present a framework to encode brain connectomes and diffusion-weighted magnetic resonance data into multidimensional arrays. The framework goes beyond current methods by integrating the relation between connectome nodes, edges, white matter fascicles and diffusion data. We demonstrate the utility of the framework for in vivo white matter mapping and anatomical computing by evaluating more than 3,000 connectomes across thirteen tractography methods and four data sets in normal and clinical populations.
We show that this framework allows mapping connectivity matrices, edge anatomy, and microstructural properties of the white matter tissue in each connectome edge. The framework is based on statistical evaluation principles introduced with the Linear Fascicle Evaluation and virtual lesions methods (LiFE; Pestilli et al., 2014). In short, instead of building networks by relying uniquely on the terminations of fascicles into the cortex, we exploit the full measured signal available for each connectome edge by extracting a forward-prediction of the biological tissue properties of the edge. We validated the framework by comparing results with standard connectome measures (fiber count and density). To do so, we generated ten repeated-measures connectomes in each individual brain in various datasets, using different tracking methods. For each connectome estimated in an individual, we computed the mean network clustering coefficient across repeated measures. We demonstrate high reliability of the clustering coefficients. We also demonstrate profound differences in connectomes across brains, beyond what can be captured using standard measures (fiber density).
Prof. Amanda Mejia, IU Statistics
Monday November 13, 2:30pm
130 Informatics East
Cortical surface functional magnetic resonance imaging (cs-fMRI) has recently experienced a rise in popularity relative to traditional 3-dimensional volumetric fMRI. Cs-fMRI offers dimension reduction, removal of extraneous tissue types, improved alignment of cortical areas across subjects, and better spatial smoothing. Additionally, cs-fMRI is more compatible with common assumptions of spatial Bayesian models, unlike volumetric fMRI data, which exhibits a complex spatial dependence structure due to cortical folding and the presence of multiple tissue types. However, since no spatial Bayesian model has yet been developed for cs-fMRI data, most analyses continue to employ the classical general linear model (GLM), in which a linear regression model is fit separately at each location in the brain, relating the observed fMRI time series to the expected neuronal response to a set of tasks or stimuli. At each location, a hypothesis test is then performed on the model coefficients to determine whether that location is “activated”. This presents a massive multiple-comparisons problem that remains the subject of debate and controversy today. The classical GLM approach also fails to properly account for spatial dependence in the activation amplitudes of neighboring voxels. In this work, we propose a Bayesian GLM approach to estimate task activation using cs-fMRI data, which employs a class of sophisticated spatial processes to flexibly model latent fields of task activation. To perform the Bayesian computation, we use the integrated nested Laplace approximation (INLA), a highly accurate and computationally efficient alternative to Markov chain Monte Carlo. To identify regions of activation, we propose a novel joint posterior probability map (PPM) method, which eliminates the problem of multiple comparisons. Finally, we extend the existing spatial model from the single-subject to the multi-subject case, thus facilitating group-level inference. The method is validated and compared to the classical GLM through simulation studies and a motor-task fMRI study from the Human Connectome Project.
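For readers unfamiliar with the classical baseline being contrasted here, a bare-bones mass-univariate GLM on simulated data (this is the standard approach the talk improves upon, not the proposed Bayesian model; all numbers are made up):

```python
# Classical voxel/vertex-wise GLM: regress each time series on a task regressor
# and test the coefficient; the V separate tests are the multiple-comparisons problem.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
T, V = 200, 1000                                  # time points, cortical locations
task = (np.arange(T) % 40 < 20).astype(float)     # toy boxcar task regressor
X = np.column_stack([np.ones(T), task])           # design matrix: intercept + task

beta_true = np.zeros(V); beta_true[:100] = 0.8    # 100 truly active locations
Y = task[:, None] * beta_true + rng.normal(0, 1, (T, V))

beta, rss, _, _ = np.linalg.lstsq(X, Y, rcond=None)
dof = T - X.shape[1]
se = np.sqrt((rss / dof) * np.linalg.inv(X.T @ X)[1, 1])
tvals = beta[1] / se
pvals = 2 * stats.t.sf(np.abs(tvals), dof)
print("locations passing Bonferroni:", int(np.sum(pvals < 0.05 / V)))
```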
Prof. Predrag Radivojac, IU SICE
Monday November 27, 2:30pm
130 Informatics East
We propose a new class of metrics on sets, vectors, and functions that can be used in various stages of data mining, including exploratory data analysis, learning, and result interpretation. These new distance functions unify and generalize some of the popular metrics, such as the Jaccard and bag distances on sets, the Manhattan distance on vector spaces, and the Marczewski-Steinhaus distance on integrable functions. We show that the new metrics are complete for integrable functions and prove useful relationships with f-divergences for probability distributions. To further extend our approach to structured objects such as concept hierarchies and ontologies, we introduce information-theoretic metrics on directed acyclic graphs drawn according to a fixed probability distribution. We conduct an empirical investigation to demonstrate the intuitive interpretation of the new metrics and their effectiveness on real-valued, high-dimensional, and structured data. Extensive comparative evaluation demonstrates that the new metrics outperform multiple similarity and dissimilarity functions traditionally used in data mining, including the Minkowski family, the fractional Lp family, several f-divergences, cosine distance, and two correlation coefficients. Finally, we argue that the new class of metrics is particularly appropriate for rapid processing of high-dimensional and structured data in distance-based learning.
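For context, two of the classical distances this family generalizes, written out directly (this is background only, not the metrics proposed in the talk):

```python
# Jaccard distance on sets and Marczewski-Steinhaus distance on non-negative vectors.
import numpy as np

def jaccard_distance(A, B):
    """Jaccard distance between finite sets A and B."""
    A, B = set(A), set(B)
    if not A and not B:
        return 0.0
    return 1.0 - len(A & B) / len(A | B)

def marczewski_steinhaus(f, g):
    """1 - sum(min(f, g)) / sum(max(f, g)) for sampled non-negative functions."""
    f, g = np.asarray(f, float), np.asarray(g, float)
    denom = np.maximum(f, g).sum()
    return 0.0 if denom == 0 else 1.0 - np.minimum(f, g).sum() / denom

print(jaccard_distance({1, 2, 3}, {2, 3, 4}))        # 0.5
print(marczewski_steinhaus([1, 0, 2], [0, 1, 2]))    # 1 - 2/4 = 0.5
```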
Prof. Michael Trosset, IU Statistics
Monday December 4, 2:30pm
130 Informatics East
Pairwise information about the proximity of a pair of objects is usually expressed in one of two ways. Large similarity indicates that two objects are much alike; large dissimilarity indicates the reverse. Dissimilarity generalizes the mathematical concept of distance, which can be used to construct intuitive representations of dissimilarity data. It is less obvious how to represent similarity data. The mathematical concept of an inner product is often used to model similarity, but such constructions are less intuitive and the corresponding transformations from similarity to dissimilarity are often misconstrued. For example, cosine similarity is widely used in text mining and other disciplines, but the entirely plausible narrative that is invariably used to motivate cosine similarity specifies a quite natural measure of dissimilarity that is virtually never used in practice. This talk attempts to dispel some popular misconceptions about transformations from similarity to dissimilarity.
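As a concrete example of the similarity-to-dissimilarity transformations at issue, cosine similarity between vectors corresponds naturally to the angle between them, or to the Euclidean distance between their unit-normalized versions; the functions below are standard textbook definitions, not the specific construction the speaker will argue for.

```python
# Cosine similarity and two natural dissimilarities derived from it.
import numpy as np

def cosine_similarity(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def angular_distance(x, y):
    """Angle between x and y, a metric on directions."""
    c = np.clip(cosine_similarity(x, y), -1.0, 1.0)
    return float(np.arccos(c))

def chordal_distance(x, y):
    """Euclidean distance between unit-normalized vectors: sqrt(2 - 2 cos)."""
    c = np.clip(cosine_similarity(x, y), -1.0, 1.0)
    return float(np.sqrt(2.0 - 2.0 * c))

x, y = [1.0, 2.0, 0.0], [2.0, 1.0, 1.0]
print(cosine_similarity(x, y), angular_distance(x, y), chordal_distance(x, y))
```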
Dr. Jeffrey Kane Johnson, Maeve Automation
Monday December 11, 2:30pm
130 Informatics East
In industries as varied as mining, agriculture, health care, and automated driving, many practical applications in robotics involve interacting with intelligent agents while navigating dynamic environments. While impressive results have been demonstrated in these domains, there are still basic types of interacting navigation problems for which robust and general solutions have remained elusive. One such problem type is efficient navigation in the presence of non-cooperative and non-adversarial agents. This is the kind of problem pedestrians face when navigating crowded sidewalks or drivers face when navigating crowded roadways. Two primary reasons for difficulties addressing this problem are that the problem models used tend to exhibit prohibitive computational complexity and the problem formulations tend to have difficult-to-satisfy requirements for problem input and representations. This talk will present recent work that provides more efficient problem models for this problem, as well as new, vision-based problem formulations that seek to significantly simplify problem input and representation requirements.
Spring 2017
Dr. Kaya de Barbaro, School of Interactive Computing, Georgia Institute of Technology
Monday January 23, 2:30pm
130 Informatics East
Developments in mobile sensing technology are opening new possibilities for the study of human activity. Motivated by embodied and distributed cognition, the emerging field of “computational behavioral science” aims to characterize rich multimodal and dynamic trajectories of activity and interaction “in the wild”. The talk will outline theoretical foundations for this approach as well as detail my past, ongoing, and future projects applying these unique methods to study fundamental questions in infant development, ranging from the sensorimotor contributions to the emergence of joint attention to dynamic relations between infant arousal and attention. In my current work I am developing a mobile sensor paradigm to examine the contribution of day-to-day experiences of infant distress episodes to mental health risks for mothers and their infants. This project will combine high-density quantitative markers of mother and infant bio-behavioral activity with daily surveys of maternal mood, social support, and sense of parenting confidence in an effort to unravel the mechanisms that contribute to individual differences in infant social-emotional development and emerging risks for mental health. The ultimate goal of this work is to develop technologically-enhanced interventions to support infants and mothers at risk of depression and anxiety.
Dr. Kaya de Barbaro’s background is in cognitive science, an interdisciplinary field bridging psychology, neuroscience, and computer science. Her research has focused on developmental science, spanning the domains of infant social, cognitive, sensorimotor, and physiological development. Across these domains, she has characterized moment-to-moment multimodal dynamics of infants’ bio-behavioral activity—as they look, touch, and express patterns of arousal and affect— using video and specialized sensors in free-flowing interactions.
Prashant Shiralkar, IU School of Informatics and Computing
Monday January 30, 2:30pm – 3:00pm
130 Informatics East
We present RelSifter, a supervised learning approach to the problem of assigning relevance scores to triples expressing type-like relations such as ‘profession’ and ‘nationality.’ To provide additional contextual information about individuals and relations, we supplement the data provided as part of the WSDM 2017 Triple Score contest with Wikidata and DBpedia, two large-scale knowledge graphs (KG). Our hypothesis is that any type relation, i.e., a specific profession like ‘actor’ or ‘scientist,’ can be described by the set of typical “activities” of people known to have that type relation. For example, actors are known to star in movies, and scientists are known for their academic affiliations. In a KG, this information is to be found on a properly defined subset of the second-degree neighbors of the type relation. This form of local information can be used as part of a learning algorithm to predict relevance scores for new, unseen triples. When scoring ‘profession’ and ‘nationality’ triples, our experiments based on this approach result in an accuracy of 73% and 78%, respectively. These performance metrics are roughly equivalent to, or only slightly below, the state of the art prior to the present contest. This suggests that our approach can be effective for evaluating facts, despite the skewness in the number of facts per individual mined from KGs.
AJ Piergiovanni, IU School of Informatics and Computing
Monday January 30, 3:00pm – 3:30pm
130 Informatics East
In this paper, we introduce the concept of temporal attention filters and describe how they can be used for human activity recognition from videos. Many high-level activities are composed of multiple temporal parts (e.g., sub-events) with different durations and speeds, and our objective is to make the model explicitly learn such temporal structure using multiple attention filters and benefit from them. Our temporal filters are designed to be fully differentiable, allowing end-to-end training of the temporal filters together with the underlying frame-based or segment-based convolutional neural network architectures. This paper presents an approach for learning a set of optimal static temporal attention filters to be shared across different videos, and extends this approach to dynamically adjust attention filters per test video using recurrent long short-term memory networks (LSTMs). This allows our temporal attention filters to learn latent sub-events specific to each activity. We experimentally confirm that the proposed temporal attention filters benefit activity recognition, and we visualize the learned latent sub-events.
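A minimal sketch of one way such a differentiable temporal attention filter can be written (a learnable Gaussian over time that pools per-frame features); the parameterization and sizes are assumptions, not the paper's exact design.

```python
# Learnable Gaussian attention over time: gradients flow into the filter centers
# and widths, so the filters can be trained end-to-end with the backbone.
import torch
import torch.nn as nn

class GaussianTemporalAttention(nn.Module):
    def __init__(self, n_filters=3):
        super().__init__()
        self.center = nn.Parameter(torch.rand(n_filters))      # centers in [0, 1]
        self.log_width = nn.Parameter(torch.zeros(n_filters))  # log of filter widths

    def forward(self, feats):                     # feats: (batch, T, dim)
        T = feats.shape[1]
        t = torch.linspace(0, 1, T, device=feats.device)
        w = torch.exp(-0.5 * ((t[None, :] - self.center[:, None])
                              / torch.exp(self.log_width[:, None])) ** 2)
        w = w / (w.sum(dim=1, keepdim=True) + 1e-8)            # (filters, T)
        # weighted temporal pooling: one pooled feature vector per filter
        return torch.einsum("ft,btd->bfd", w, feats)

att = GaussianTemporalAttention()
pooled = att(torch.randn(4, 30, 128))
print(pooled.shape)                               # torch.Size([4, 3, 128])
```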
Alexander Gates, IU School of Informatics and Computing
Monday February 6, 2:30pm
130 Informatics East
One of the most fundamental approaches for understanding complex data is clustering; for example, in network science, communities capture central organizing principles of the link structure and are critical for understanding the dynamical processes that operate on networks. Throughout many clustering problems, such as evaluating clustering methods, identifying consensus clusterings, and tracking the evolution of clusters over time, the most basic task is quantitatively comparing clusterings. Most existing methods focus on comparing the clusters, either by measuring statistical independence, matching similar clusters, or counting co-clustered element pairs. Yet all common measures have critical biases, and no measure accommodates both overlapping and hierarchical clusterings. Here, in collaboration with Ian Wood and Y.Y. Ahn, I demonstrate how standard clustering similarity measures fail to meet common-sense expectations, and propose a new framework that not only addresses such biases but also unifies the comparison of overlapping and hierarchically structured clusterings. Furthermore, we demonstrate that our framework can provide detailed insights into how clusterings differ. We apply our method to neuroscience, handwriting, and social network datasets to illustrate the strengths of our framework and reveal new insights into these datasets. The universality of clustering across disciplines suggests the far-reaching impact of our framework across all areas of science.
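To make the setting concrete, here are two of the standard comparison scores that serve as baselines in this line of work (this snippet only illustrates the task of comparing clusterings, not the proposed framework):

```python
# Comparing a candidate partition against a reference one with NMI and ARI.
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

truth     = [0, 0, 0, 1, 1, 1, 2, 2, 2]
candidate = [0, 0, 1, 1, 1, 1, 2, 2, 0]

print("NMI:", round(normalized_mutual_info_score(truth, candidate), 3))
print("ARI:", round(adjusted_rand_score(truth, candidate), 3))
```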
Alexander Gates is currently a doctoral candidate at Indiana University pursuing a joint degree in Informatics (complex systems track) and Cognitive Science. His academic research fuses mathematical and computational methods to study complex systems in biology, neuroscience, and sociology. Some of his recent contributions include a systematic quantification of control in gene regulatory networks, a dynamical protocell model for autopoiesis, and a novel framework for comparing overlapping and hierarchical clusters in human connectomes. Before studying at IU, Alex received a BA in mathematics from Cornell University and an MSc in complex systems modeling from King's College London. In June 2017, Alex will join the Center for Complex Networks at Northeastern University as a post-doctoral scholar.
Haley MacLeod, IU School of Informatics and Computing
Monday February 13, 2:30pm – 3:00pm
130 Informatics East
Computational systems are now capable of automatically generating captions describing objects, people, and scenery in images. While these systems vary in accuracy, they are prominent enough that we are beginning to see them integrated into social media platforms (e.g., on Facebook). One group that stands to benefit from these advancements is blind and visually impaired people (BVIP), who have expressed frustration with increasingly visual content on social media. Automatic captioning tools have the potential to empower BVIPs to know more about these images without having to rely on human-authored alt text (which is often missing) or asking a sighted person (which can be time-consuming or burdensome). These solutions are typically evaluated using standardized metrics measuring the similarity between a machine’s output and that of a sighted human. These metrics help compare various algorithms when they are run on common datasets. Researchers also often conduct user studies, asking sighted individuals to rate the quality of a caption for a given image, or asking them to choose the best of a series of captions to assess the quality difference between human-authored captions and machine-generated captions. This makes sense given that most work on automated captioning is not focused on generating alt text for BVIPs, but rather is motivated by scenarios like providing metadata to improve image search. The evaluation criteria for caption quality and the cost/benefit tradeoffs for different types of errors are different from what they would be if designed with accessibility as the primary scenario. The fact that such systems are now being repurposed for accessibility purposes requires a reexamination of their fundamental assumptions, such as what makes a good caption or what the relative risks are in the precision/recall tradeoff.
In this paper, we explore how blind and visually impaired people experience automatically generated captions on social media. Using a contextual inquiry approach, we find that BVIPs place a great deal of trust in these captions, often filling in details to resolve differences between a tweet’s text and an incongruent caption (where the image caption does not seem to match the content or context of the tweet). We build on these findings by conducting an online experiment to explore this phenomenon on a larger scale and investigate the role of caption phrasing in encouraging trust or skepticism. Our findings suggest that captions worded in a way that emphasize the probability of error, rather than correctness, encourage BVIPs to attribute incongruence to an incorrect caption rather than to missing details.
Prof. Chung-chieh Shan, IU School of Informatics and Computing
Monday February 13, 3:00pm – 3:30pm
130 Informatics East
Bayesian inference, of posterior knowledge from prior knowledge and observed evidence, is typically defined by Bayes’s rule, which says the posterior multiplied by the probability of an observation equals a joint probability. But the observation of a continuous quantity usually has probability zero, in which case Bayes’s rule says only that the unknown times zero is zero. To infer a posterior distribution from a zero-probability observation, the statistical notion of disintegration tells us to specify the observation as an expression rather than a predicate, but does not tell us how to compute the posterior. We present the first method of computing a disintegration from a probabilistic program and an expression of a quantity to be observed, even when the observation has probability zero. Because the method produces an exact posterior term and preserves a semantics in which monadic terms denote measures, it composes with other inference methods in a modular way without sacrificing accuracy or performance.
Introducing Shiny Interactive Web Applications with R
Dr. Olga Scrivner, IU Department of Computational Linguistics
Monday February 20, 2:30pm
130 Informatics East
Shiny is an R package that is used to build web applications for data analysis and visualization. Using R you create a user interface and a server, while Shiny compiles and executes R code on the backend. For example, you can make a web app that will run interactive statistics, data mining or machine learning methods. In this hands-on workshop you will learn the basics of R+Shiny structure and how to create and deploy your first app onto the web.
Resources for the workshop:
- To see what is possible with Shiny, check out the Shiny Showcase Gallery and this gallery.
- Here’s a machine learning example.
- Shiny is a Data Scientist’s best friend!
- Required downloads: Base R, R Studio.
- Workshop materials are here.
- Dr. Olga Scrivner also conducts a weekly R reading group. People who are interested in attending the reading group are welcome to contact her (obscrivn@indiana.edu) so that she can include them in her Canvas site, which also provides R slides and material.
Dr. Andrew J. Womack, IU Department of Statistics
Monday February 27, 2:30pm
130 Informatics East
We consider the question of forming estimators for the marginal probability of the observed data using the output from MCMC algorithms driven by Hamiltonian Dynamics. The estimators are based on the method of Chib & Jeliazkov (JASA 2001). We will review the background on marginal computation using MCMC draws as well as Hamiltonian Monte Carlo algorithms. Because the differential equations implied by Hamiltonian Dynamics are often not explicitly solvable, HMC takes advantage of discrete symplectic integrators that give rise to the need to solve implicit equations. While the L step integrator leads to L-1 uncoupled implicit equations for taking a draw, we show that it gives rise to 2L coupled implicit equations for estimating the marginal. Thus, though HMC is very promising for obtaining draws, its use might be limited in model comparison.
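For reference, here is the standard explicit leapfrog update used in HMC when the Hamiltonian is separable (a textbook sketch included only to ground the discussion; the implicit equations at issue in the talk arise beyond this simple case):

```python
# L leapfrog steps for Hamiltonian dynamics with a unit mass matrix.
import numpy as np

def leapfrog(q, p, grad_log_post, eps, L):
    q, p = np.copy(q), np.copy(p)
    p += 0.5 * eps * grad_log_post(q)          # half step in momentum
    for _ in range(L - 1):
        q += eps * p                           # full step in position
        p += eps * grad_log_post(q)            # full step in momentum
    q += eps * p
    p += 0.5 * eps * grad_log_post(q)          # final half step in momentum
    return q, p

# example: standard normal target, so grad log p(q) = -q
q, p = np.array([1.0]), np.array([0.5])
q_new, p_new = leapfrog(q, p, lambda q: -q, eps=0.1, L=20)
print(q_new, p_new)
```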
Shujon Naja, IU School of Informatics and Computing
Monday March 6, 2:30pm – 3:00pm
130 Informatics East
In this talk we will discuss different aspects of zero-shot learning and see solutions for three challenging visual recognition problems: (1) unknown object recognition from images, (2) novel action recognition from videos, and (3) unseen object segmentation. In all three problems, we have two different sets of classes: the “known classes,” which are used in the training phase, and the “unknown classes,” for which there are no training instances. My proposed approach exploits the available semantic relationships between known and unknown object classes and uses them to transfer appearance models from known to unknown object classes in order to recognize unknown objects. I also discuss an approach to recognize novel actions from videos by learning a joint model that links videos and text. Finally, I will present a ranking-based approach for zero-shot object segmentation. We represent each unknown object class as a semantic ranking of all the known classes and use this semantic relationship to extend the segmentation model of known classes to segment unknown-class objects.
Katherine Metcalf, IU School of Informatics and Computing
Monday March 6, 3:00pm – 3:30pm
130 Informatics East
Segmenting observations from an input stream is an important capability of human cognition. Evidence suggests that humans refine this ability through experience with the world. However, few models address the unsupervised development of event segmentation in artificial agents. This paper presents work towards developing a computational model of how an intelligent agent can independently learn to recognize meaningful events in continuous observations. In this model, the agent’s segmentation mechanism starts from a simple state and is progressively refined. The agent’s interactions with the environment are unsupervised and driven by its expectation failures. The learning task is to reduce the model’s prediction error by identifying when one event transitions into another. Reinforcement learning drives the mechanism that identifies event boundaries by reasoning over the expectation failures of a predictive gated-recurrent neural network. Our experimental results support that reinforcement learning can enable the detection of event boundaries in continuous observations based on a gated-recurrent neural network’s prediction error.
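A compact sketch of the predictive component of such a model: a GRU predicts the next observation and spikes in its prediction error mark candidate boundaries. The reinforcement-learning mechanism that actually learns where to segment is omitted, and all sizes are illustrative.

```python
# Next-step prediction with a GRU; large per-step errors flag candidate boundaries.
import torch
import torch.nn as nn

class NextStepPredictor(nn.Module):
    def __init__(self, dim=16, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, dim)

    def forward(self, x):                  # x: (batch, T, dim)
        h, _ = self.rnn(x)
        return self.out(h)                 # prediction of x[:, t+1] from x[:, :t+1]

model = NextStepPredictor()
x = torch.randn(1, 50, 16)                 # a stream of observations
pred = model(x[:, :-1])                    # predict steps 1..49
err = ((pred - x[:, 1:]) ** 2).mean(dim=-1).squeeze(0)   # per-step prediction error
boundaries = (err > err.mean() + 2 * err.std()).nonzero().flatten()
print(boundaries)
```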
Dr. Manohar Swaminathan, Senior Researcher, Microsoft Research India
Monday March 20, 2:30pm
130 Informatics East
The current audio channels from computers to humans are restricted to a single synthesized voice, possibly combined with a variety of non-speech audio signals, such as earcons and audiocons. This is a gross under-utilization of the astonishing capabilities of the average human brain in discerning and discriminating details from complex audio environments. We propose a novel audio user interface ReSAC, a responsive spatial audio cloud, to bridge this mismatch and to dramatically enhance the richness of communication from the machines to humans through audio.
The transformation from simple text terminals to sophisticated graphical user interfaces has occurred through sustained research and market evolution over several decades. We believe we are at the earliest stages of a similar transformation in audio user interfaces. In this talk we describe ReSAC, the application of ReSAC to provide immersive audio interfaces for the visually impaired, and the research directions that the comparison with GUIs is opening up.
This is ongoing work with Swapna Joshi at SOIC, IUB, Sujeath Pareddy and Abhay Agarwal at MSR India.
Brief Bio:
Manohar Swaminathan is a senior researcher at Microsoft Research India, where he is part of the Technologies for Emerging Markets group. Manohar is an academic-turned technology entrepreneur-turned researcher with a driving passion to deploy technology for positive social impact. He has a PhD in CS from Brown University, worked through the ranks to become a Professor at the Indian Institute of Science, and has co-founded, managed, advised, and angel-funded several technology startups in India.
Marlena Fraune, IU Cognitive Science and Psychology
TBA
130 Informatics East
As robots become more prevalent in our world, researchers and funders alike expect that humans and robots will be able to “symbiotically coexist” and collaborate. However, little research has examined whether and how the differences we commonly see between individual and intergroup interactions among humans will affect human-robot interaction. I will discuss my research, which illustrates that group effects do occur in human robot interaction, but also depend on the type of robots, how the robots interact with each other, and the context of interaction. These studies raise ethical questions about how scholars should design robots for interaction with humans and how humans might be negatively affected in circumstances under which people favor robots over humans.
Dr. Michael Ryoo, IU School of Informatics and Computing
Monday April 3, 2:30pm
130 Informatics East
Privacy protection from unwanted video recordings is an important societal challenge. For example, we desire a computer vision system (e.g., a robot) that can recognize human activities and assist our daily life, yet ensure that it is not recording video that may invade our privacy. In this talk, we discuss computer vision approaches for privacy-preserving recognition of human activities from extreme low-resolution (e.g., 16×12) anonymized videos. These approaches are designed to avoid processing privacy-intruding data (i.e., high-resolution videos with human faces) when performing the recognition, thereby minimizing the risk of hackers recording or stealing sensitive videos from your device. We exploit the fact that multiple different low-resolution images can originate from a single high-resolution image, and take advantage of this property for reliable recognition of activities from anonymized videos.
Andreas Bueckle and Dr. Katy Borner, IU School of Informatics and Computing
Monday April 10, 2:30pm
130 Informatics East
As the built environment becomes increasingly more complex and integrated with new technologies—including the emerging Internet of Things (IoT)—there is an urgent need to understand how embedded technologies affect the experience of individuals that inhabit these spaces and how these technologies can be most appropriately used to improve occupant experience, comfort, and well-being. In addition, the IoT provides an opportunity as well as a challenge when it comes to helping users understand how these intelligent systems gather and process information such as sensor data and internal feedback loops.
By visualizing data streams from living architecture projects, we aim to help system architects, designers, and general audiences understand the inner workings of tightly coupled sensor-actuator systems that interlink machine and human intelligence. Our project aims to empower many to master basic concepts related to the operation and design of complex dynamical systems and the IoT. Specifically, we use architectural blueprints of living architecture installations together with real-time data streams to generate augmented reality visualizations of the operation of living architecture installations to improve data visualization literacy in the visitors of those sentient architectures.
Brief Bio:
Andreas Bueckle is a PhD student in Information Science at Indiana University as well as a videographer and photographer. His academic interests revolve around information visualization, more specifically interactive and augmented reality visualizations. As a professional videographer and photographer, he has worked on video and photo projects on four continents, with a focus on documentary work, especially social issues, as well as nature photography. Check out work samples at http://andreas-bueckle.com.
Katy Börner is the Victor H. Yngve Distinguished Professor at the School of Informatics and Computing and Adjunct Professor at the Department of Statistics in the College of Arts and Sciences at Indiana University where she directs the Cyberinfrastructure for Network Science Center. Her research focuses on the development of data analysis, modeling, and visualization techniques for improved information access, understanding, and management.
Prof. Ritch Savin-Williams, Cornell University
Monday April 17, 2:30pm
130 Informatics East
Scientists and laypeople alike have recently taken great interest in sexual orientation, especially when the person in question is a parent, friend, or romantic partner. Despite the common belief that assessing sexuality is straightforward, it is a difficult construct to assess. The most traditional method is self-report. Alternative, tech-oriented methods have recently evolved to correct complications: genital arousal, implicit viewing time, fMRI scanning, eye tracking, and pupil dilation. These are briefly reviewed along with consensus findings. However, they fail to distinguish sexual from romantic orientation and to assess the full spectrum of sexuality. Thus, the real lives of individuals are misrepresented. A new sexual identity, “mostly straight,” is used as an illustration.
Brief Bio:
Ritch C. Savin-Williams is a developmental psychology professor of Human Development and Director of the Sex & Gender Lab at Cornell University. He received the Ph.D. from the University of Chicago. His research on differential developmental trajectories attempts to supplant our generic, stage models of identity development with a perspective that explores the similarities of sexual-minority youth with all youth and the ways in which sexual-minority adolescents vary among themselves and from heterosexual youth. He is also a licensed clinical psychologist with a private practice specializing in identity, relationship, and family issues among sexual-minority young adults. He has served as an expert witness on same-sex marriage, gay adoption, and Boy Scout court cases, is on numerous professional review boards, has consulted for MTV, 20/20, the Oprah Winfrey Show, and CNN, and his work has been cited in Newsweek, Time, Rolling Stone, Parent Magazine, Utne Reader, New York Magazine, Fortune, New York Times, Los Angeles Times, Washington Post, USA Today, and Chicago Sun Times. Dr. Savin-Williams received the 2001 Award for Distinguished Scientific Contribution, the 2005 Outstanding Book Award from Division 44 of the American Psychological Association, the 2006 APA Science Directorate’s Master Lecture in developmental psychology, 2009 APA Plenary Address, and fellow status from the Association for Psychological Science.
Dr. Damir Cavar, IU Department of Linguistics
Monday April 24, 2:30pm
130 Informatics East
The Free Linguistic Environment (FLE) engineering project emerged out of a need for a free and open platform for deep NLP that enables research in grammar engineering, as well as the modeling of hybrid systems that combine rule-based, probabilistic, and machine learning techniques. Current open systems, such as Stanford CoreNLP and CoreIE, NLTK, or spaCy, provide basic NLP functionality with shallow linguistic processing; they lack the analytical depth required by precision- and performance-demanding real-world NLP and AI applications.
FLE is geared towards deep linguistic processing up to semantic representations, and pragmatic or discourse modeling that can be used in high performance and big-data processing environments. It is based on ideas and concepts of the Xerox Linguistic Environment (XLE) (Maxwell & Kaplan, 1996). It utilizes a lexical analysis component based on a computational model of two-level morphology using a Finite State Transducer architecture. The syntactic and shallow semantic component implements a parser that uses a Lexical-functional Grammar (LFG) framework (e.g. Bresnan 2001; Dalrymple 2001) for bidirectional NLP, i.e. natural language parsing and generation. While LFG provides the means to generate and analyze syntactic, functional, and shallow semantic properties of natural language, it can only be understood as a preprocessing phase for deep semantic analysis.
FLE is designed to be XLE-compatible. It extends the XLE functionality by making it possible to quantify existing qualitative models of lexical, morphological, syntactic, and functional linguistic properties using the parser itself, or by extracting linguistic rules and distributional properties from corpora. It provides a probabilistic model for lexical feature descriptions, syntactic structures, and functional and semantic representations that enables new kinds of studies of intra- and cross-linguistic variation, as well as efficient broad-coverage NLP for numerous languages.
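To make the morphological analysis component mentioned above a bit more concrete, here is a purely illustrative Python sketch, not the FLE codebase: a toy finite-state-style transducer, encoded as a transition table, that maps surface forms such as "cats" to lexical analyses such as "cat+N+Pl". All lexicon entries, tags, and state names are invented for illustration.

```python
# A purely illustrative finite-state transducer for toy English noun morphology.
# FLE itself uses a full two-level morphology component, not this sketch.
from collections import defaultdict

# Transitions: (state, surface_char) -> list of (next_state, lexical_output).
# '' as surface_char denotes an epsilon move (emit output, consume nothing).
TRANSITIONS = defaultdict(list)

def add_word(lemma, tag):
    """Add a path through the transducer accepting `lemma` and its plural."""
    state = "start"
    for i, ch in enumerate(lemma):
        nxt = lemma[:i + 1]
        TRANSITIONS[(state, ch)].append((nxt, ch))
        state = nxt
    TRANSITIONS[(state, "")].append(("final", f"+{tag}+Sg"))   # singular reading
    TRANSITIONS[(state, "s")].append(("final", f"+{tag}+Pl"))  # plural reading

for lemma in ["cat", "dog", "book"]:
    add_word(lemma, "N")

def analyze(surface):
    """Return all lexical analyses of `surface` by searching transducer paths."""
    results = []
    stack = [("start", 0, "")]           # (state, chars consumed, output so far)
    while stack:
        state, i, out = stack.pop()
        if state == "final" and i == len(surface):
            results.append(out)
            continue
        for nxt, emit in TRANSITIONS.get((state, ""), []):      # epsilon moves
            stack.append((nxt, i, out + emit))
        if i < len(surface):
            for nxt, emit in TRANSITIONS.get((state, surface[i]), []):
                stack.append((nxt, i + 1, out + emit))
    return results

print(analyze("cats"))   # ['cat+N+Pl']
print(analyze("dog"))    # ['dog+N+Sg']
```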
Wen Chen, IU School of Informatics and Computing
Monday May 1, 2:30pm – 3:00pm
130 Informatics East
In art and music, time periods like "classical" and "impressionist" are powerful means for academics and practitioners to compare and contrast artifacts that share aesthetics or philosophies. While web designs have undergone changes for 25 years, we lack theories to describe or explain these changes. In this paper, we take a first step towards identifying and understanding the design periods of websites. Drawing from humanistic HCI methods, subject experts of web design critically analyzed a dataset of prominent websites whose lifetimes span over a decade. These informed judgments reveal a set of key markers that signal shifts in design periods. For instance, advances in display technologies and changes in company strategies help explain how design periods demarcated by particular layout templates and navigation models arise. We suggest that designers and marketers can draw inspiration from website designs curated into design periods.
Hao Peng, IU School of Informatics and Computing
Monday May 1, 3:00pm – 3:30pm
130 Informatics East
Quantitative measurement is the cornerstone of scientific advancement. Here we present a framework for facilitating quantitative inquiries about scientific disciplines. We adapt a popular word embedding technique to the data of scholarly citation trails among 53 million scientific papers to learn continuous vector-space representations of scientific venues. We obtain a high-dimensional map of science from the learnt embeddings. Our map reveals the disciplinary organization of science and, by allowing arithmetic operations between venue vectors, exposes directions and spectra within science. Vector representations of scientific venues also facilitate downstream applications such as recommending similar venues and predicting discipline categories.
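The exact pipeline from the talk is not reproduced here; the sketch below only illustrates the general idea using gensim's Word2Vec (assuming gensim 4 or later), where each "sentence" is a citation trail of venue identifiers. The trails and venue names are invented placeholders.

```python
# Sketch: learn venue embeddings from citation trails with word2vec.
# The trails below are toy placeholders; in the paper, trails are derived
# from citation links among roughly 53 million papers.
from gensim.models import Word2Vec

citation_trails = [
    ["Nature", "Science", "PNAS"],
    ["NeurIPS", "ICML", "JMLR", "AISTATS"],
    ["CVPR", "ICCV", "ECCV", "NeurIPS"],
    # ... millions more trails in practice
]

model = Word2Vec(
    sentences=citation_trails,
    vector_size=100,   # dimensionality of the venue embedding
    window=5,
    min_count=1,
    sg=1,              # skip-gram
    epochs=50,
)

# Downstream uses mentioned in the abstract:
print(model.wv.most_similar("ICML", topn=3))   # recommend similar venues
offset = model.wv["CVPR"] - model.wv["ICML"]   # arithmetic between venue vectors
```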
Fall 2016
Prof. Minje Kim, IU School of Informatics and Computing
Monday October 31, 2:30pm
130 Informatics East
This talk introduces some machine learning algorithms that are designed to process as much data as needed while spending the least possible amount of resources, such as time, energy, and memory. Examples of such applications include, but are not limited to: a large-scale multimedia information retrieval system where both the queries and the database items are noisy signals; collaborative audio enhancement from hundreds of user-created YouTube clips of a music concert; an event detection system running on a small device that has to process various sensor signals in real time; a lightweight custom chipset for speech enhancement on hand-held devices; and an instant music analysis engine running in smartphone apps. In all of these applications, efficient machine learning algorithms must achieve not only good performance but also high resource efficiency. To meet these conflicting requirements at the same time, I have developed various matrix factorization algorithms (or topic models): a topic model that takes sparse landmark representations as input, a latent component sharing technique to analyze a set of crowdsourced audio recordings, and a hashing-based speed-up technique for faster sparse coding in topic modeling. Finally, to describe an extremely optimized deep learning deployment system, Bitwise Neural Networks (BNN) will also be discussed. In BNNs, all the inputs, outputs, and operations are defined with Boolean algebra (e.g., a multiplication between floating-point values is reduced to a single XNOR gate on the two binary inputs). Some preliminary results on the MNIST dataset and speech denoising demonstrate that a straightforward extension of backpropagation can successfully train BNNs whose performance is comparable while requiring vastly fewer computational resources.
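As a small illustration of the "bitwise" idea only, and not the speaker's actual implementation, the sketch below compares a floating-point dot product with its binarized counterpart, where values in {-1, +1} are represented by bits and the multiply-accumulate reduces to XNOR plus a popcount. Vector sizes and random data are arbitrary.

```python
# Sketch: a binarized dot product in the spirit of bitwise neural networks.
# Multiplication of two {-1, +1} values equals XNOR on their sign bits,
# and the sum is recovered from a population count.
import numpy as np

rng = np.random.default_rng(0)
x = np.sign(rng.standard_normal(1024))   # binarized input in {-1, +1}
w = np.sign(rng.standard_normal(1024))   # binarized weights in {-1, +1}

# Reference: ordinary floating-point dot product
ref = float(np.dot(x, w))

# Bitwise version: encode +1 as bit 1, -1 as bit 0
xb = (x > 0)
wb = (w > 0)
agree = np.count_nonzero(~(xb ^ wb))     # XNOR: positions where signs agree
bitwise = 2 * agree - x.size             # (#agree) minus (#disagree)

assert ref == bitwise
print(ref, bitwise)
```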
Prof. Donald S. Williamson , IU School of Informatics and Computing
Monday November 7, 2:30pm
130 Informatics East
Speech is an essential form of human communication and speech processing has a variety of real-world applications. Hearing aids help individuals with hearing impairments understand speech, and voice commands are used to interface with many electronic devices. In realistic environments, background sounds from construction noise, music, or competing talkers are present. The performance of speech processing algorithms degrades substantially in noisy environments, as noise may overlap with and overwhelm the speech signal across time and frequency. Many computational techniques have been proposed to address speech separation in noisy environments, but it is still difficult to produce intelligible and high quality speech estimates, especially at low signal-to-noise ratios.
Traditional speech separation systems operate on the magnitude response of the short-time Fourier transform and leave the phase response unchanged. Recent studies, however, show that the phase response is important for quality. In this talk, I will present an approach that jointly enhances the magnitude and phase of noisy speech by performing time-frequency masking in the complex domain. A deep neural network is used to estimate this complex time-frequency mask. This work has led to substantial improvements in perceptual speech quality.
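A minimal sketch of the masking step itself follows, not the speaker's trained system: apply a complex-valued time-frequency mask to a noisy STFT and resynthesize. Here the "mask" is the oracle ratio of clean to noisy spectra, used purely for illustration; in the approach described above, a deep neural network estimates the complex mask. Signal, sampling rate, and STFT settings are arbitrary.

```python
# Sketch: complex-domain time-frequency masking.
# For illustration, the mask is computed from the clean signal (an oracle);
# the actual approach estimates the complex mask with a deep neural network.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)                     # toy "speech"
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(fs)

_, _, S_noisy = stft(noisy, fs=fs, nperseg=512)
_, _, S_clean = stft(clean, fs=fs, nperseg=512)

# A complex ratio mask modifies both magnitude and phase,
# unlike a real-valued mask applied to the magnitude alone.
mask = S_clean / (S_noisy + 1e-8)
S_enhanced = mask * S_noisy

_, enhanced = istft(S_enhanced, fs=fs, nperseg=512)
print(np.mean((enhanced[:fs] - clean) ** 2))            # reconstruction error
```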
Prof. Martha White , IU School of Informatics and Computing
Monday November 14, 2:30pm
130 Informatics East
The success of prediction algorithms relies heavily on the data representation. Representation learning reduces the need for feature engineering, with notable successes in applications using neural networks and dictionary learning. In this talk, I will discuss new insights into effectively learning representations, particularly through supervised dictionary learning and supervised autoencoders. In particular, I will discuss new results on obtaining globally optimal solutions, and provide simple algorithms that are amenable to incremental estimation. Further, I will highlight how techniques from dictionary learning can inform choices in supervised autoencoders, and lead to a more effective supervised representation learning architecture.
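As a concrete, hedged illustration of the supervised autoencoder idea (the dictionary learning results are not sketched here), the code below adds a supervised prediction head to the bottleneck of a standard autoencoder and trains both losses jointly in PyTorch. The layer sizes, loss weighting, and toy data are arbitrary choices, not those from the talk.

```python
# Sketch: a supervised autoencoder, which trains a reconstruction loss and a
# supervised loss jointly on a shared latent representation.
import torch
import torch.nn as nn

class SupervisedAutoencoder(nn.Module):
    def __init__(self, d_in, d_latent, n_classes):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                                     nn.Linear(64, d_latent))
        self.decoder = nn.Sequential(nn.Linear(d_latent, 64), nn.ReLU(),
                                     nn.Linear(64, d_in))
        self.head = nn.Linear(d_latent, n_classes)   # supervised branch

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.head(z)

model = SupervisedAutoencoder(d_in=20, d_latent=5, n_classes=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
recon_loss, clf_loss = nn.MSELoss(), nn.CrossEntropyLoss()

X = torch.randn(256, 20)                 # toy data
y = torch.randint(0, 3, (256,))

for _ in range(200):
    opt.zero_grad()
    x_hat, logits = model(X)
    # the supervised term shapes the latent space; 0.5 is an arbitrary weight
    loss = recon_loss(x_hat, X) + 0.5 * clf_loss(logits, y)
    loss.backward()
    opt.step()
```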
Prof. Grigory Yaroslavtsev , IU School of Informatics and Computing
Monday November 28, 2:30pm
130 Informatics East
In this talk I will cover some recent advances in theoretical foundations of big data analysis and its applications to distributed data storage, clustering and computer vision. I will discuss new topics in distributed algorithms and communication complexity motivated by advances in systems such as Hadoop/MapReduce/Apache Spark and services provided through cloud infrastructure. I will show how the number of interactive supersteps/rounds plays a crucial role in determining the overall performance and hence also the cost of performing a distributed computation.
I will illustrate this premise through multiple examples including:
— Tradeoffs between the number of rounds and communication required for checking consistency between two large distributed file systems.
— Round-efficient distributed algorithms for clustering and matching problems on multi-dimensional feature vectors.
Nathaniel Rodriguez, IU School of Informatics and Computing
Monday December 5, 2:30pm
130 Informatics East
Recurrent neural networks (RNNs) have been used for a wide range of tasks in machine learning, including signal generation, temporal pattern recognition, natural language processing, robotic control problems, and time-series prediction. Over the last couple of decades, RNNs have steadily improved in scalability, trainability, and performance with the addition of new network elements and architectural design choices. Fundamentally, these additions aim to position the system in a dynamical regime that is more amenable to solving the desired task. By understanding how design choices impact neural network dynamics, and how those dynamics lead to greater computational power, we can make more informed prior decisions about how to design neural networks. We investigate the dynamical properties of a class of RNNs known as reservoir computers, and focus on exploring the impact of community structure, also known as modularity in network science. Community structure has gained a great deal of attention in the brain sciences and computational neuroscience; it is suspected to be a vital component in active memory processing and has been shown to play an important role in controlling information diffusion across networks. We catch a glimpse of how community structure and neuron function can have profound impacts on the computing capabilities of the reservoir on a range of memory and signal processing tasks.
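A minimal numpy sketch of the setting follows: a reservoir computer (echo state network) whose recurrent weight matrix has an explicit two-community structure. The block sizes, sparsities, and spectral-radius scaling are illustrative defaults, not the parameters studied in the talk.

```python
# Sketch: an echo state network whose reservoir has two communities
# (denser connectivity within blocks than between them).
import numpy as np

rng = np.random.default_rng(0)
n, block = 200, 100                       # reservoir size, community size

def sparse_block(rows, cols, density):
    w = rng.standard_normal((rows, cols))
    return w * (rng.random((rows, cols)) < density)

W = np.block([
    [sparse_block(block, block, 0.20), sparse_block(block, block, 0.02)],
    [sparse_block(block, block, 0.02), sparse_block(block, block, 0.20)],
])
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # set spectral radius to 0.9
W_in = rng.uniform(-0.5, 0.5, size=(n, 1))

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in[:, 0] * u_t)
        states.append(x.copy())
    return np.array(states)

u = np.sin(np.linspace(0, 8 * np.pi, 500))          # toy input signal
states = run_reservoir(u)
# A linear readout (e.g., ridge regression from states to targets) would then
# be trained for the memory or signal-processing task of interest.
print(states.shape)                                  # (500, 200)
```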
Spring 2016
Prof. Xiaozhong Liu, IU School of Informatics and Computing
Wednesday January 13, 4:00pm
130 Informatics East
STEM publications, for various reasons, generally do not place a premium on writing for readability, and young scholars/students struggle to understand the scholarly literature available to them. Unfortunately, few efforts have been made to help graduate students and other junior scholars understand and consume the essence of those scientific readings. This talk is based on the hypothesis and pilot evidence that accessing multi-modal Open Data Resources (ODR) about a scholarly publication, including presentation videos, slides, tutorials, algorithm source code, or Wikipedia pages, in a collaborative framework will significantly enhance a student's ability to understand the paper itself. To achieve this goal, I propose a novel learning/reading environment, the ODR-based Collaborative PDF Reader (OCPR), that incorporates innovative text plus heterogeneous graph mining algorithms that can: 1) auto-characterize a student's emerging information need while he/she reads a paper; 2) personalize or communitize students' information needs based on computational user profiles; and 3) enable students to readily access ODRs based on their information need and implicit/explicit feedback.
Based on the information need and various kinds of user feedback, the proposed algorithms generate and select novel ranking features on the heterogeneous graph at low cost for semi-supervised random walk and ODR recommendation. Experiments show that the proposed system can effectively help graduate students and scholars better understand complex publications in both cold-start and context-rich environments, and that the novel algorithms, e.g., personalized edge-type usefulness estimation, can be generalized to other information recommendation/retrieval problems.
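The semi-supervised random walk on a heterogeneous graph is not specified in full in the abstract; as a rough sketch of the core ranking primitive only, below is a personalized random walk with restart on a small toy graph using networkx, where the restart distribution stands in for the user's inferred information need. Node names and edges are invented.

```python
# Sketch: random walk with restart (personalized PageRank) as a ranking
# primitive for recommending open data resources (ODRs). Toy data only.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("paper:A", "video:1"), ("paper:A", "slides:1"), ("paper:A", "wiki:NN"),
    ("paper:B", "video:1"), ("paper:B", "code:repo"), ("wiki:NN", "code:repo"),
])

# Restart ("personalization") mass concentrated on the paper being read;
# in the proposed system this would reflect the student's information need.
personalization = {n: 0.0 for n in G}
personalization["paper:A"] = 1.0

scores = nx.pagerank(G, alpha=0.85, personalization=personalization)

# Rank candidate ODRs (everything except papers) by walk score.
odrs = sorted((n for n in G if not n.startswith("paper:")),
              key=scores.get, reverse=True)
print(odrs)
```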
Prof. Eduardo Izquierdo , IU Cognitive Science
Wednesday January 20, 4:00pm
130 Informatics East
Even relatively simple animals exhibit a remarkable combination of flexibility and robustness in their behavior. One of the grand scientific challenges of this century is to understand how such adaptive behavior arises from the dynamical interaction of the organism's nervous system, its body, and its environment. Although detailed reductionist analyses of the individual molecular, cellular, and organismal components of biological systems have led to a remarkable wealth of data and insights throughout biology, a complementary synthetic approach that reintegrates these components into an understanding of whole systems has been lacking.
My research aims to address this challenge by constructing and analyzing empirically-grounded models of brain-body-environment systems. In this talk, I will introduce the nematode worm Caenorhabditis elegans as a convenient target for such integrated brain-body-environment modeling of a complete animal. I will describe my approach: using artificial evolution to explore the space of unknown electrophysiological parameters of the nervous system necessary to generate organism-like behavior. I will focus on the evolution and analysis of two of the worm’s behaviors: spatial orientation and locomotion, and their integration. I will show how this methodology allows us to begin to address key theoretical challenges in a situated, embodied, and dynamical understanding of cognition.
Prof. Jerome Busemeyer, Psychological and Brain Sciences
Wednesday January 27, 4:00pm
130 Informatics East
The project aims to develop and empirically test a new measurement model based on quantum probability theory, called the Hilbert space multi-dimensional model. The model provides a promising solution to the problems of the violations of joint distribution assumptions faced by complex data. With the striking advancement of modern data collection methods, complex and massive data sets are generated from various sources and contexts that are conceptually connected (e.g., big data). This promises to provide a better understanding of complex social and behavioral phenomena, but also presents unprecedented challenges for the integration and interpretation of the data. When large data sets are collected from different contexts, often they can be summarized by contingency tables. Suppose there are K tables (T1,…,Tk,…TK), each collected in a different context k. Also suppose that each table Tk is a joint frequency table based on a subset of p variables (Y1, …, Yp). For example, the research could involve four variables (Y1, Y2, Y3, Y4), but each table might include only two of the four, so that table T1 might be a 2-way frequency table composed of two variables (Y1, Y2), table T2 might be another 2-way table composed of another two variables (Y1, Y3), and so on. A critical problem arises: How to integrate and synthesize these K tables into a compressed, coherent, and interpretable representation?
Currently, a common solution is to try to construct a p-way joint probability distribution to reproduce the frequency data observed in the K tables. Often Bayesian causal networks are then used to reduce the number of estimated parameters by imposing conditional independence assumptions. Unfortunately, however, in many cases, no such p-way joint distribution exists that can reproduce the observed tables. This occurs because the data tables violate consistency constraints required by classical (Kolmogorov) probability theory that Bayes nets are built upon. Research supported by prior NSF grants has accumulated strong evidence for the violations of the often-assumed classical joint probability idea on complex, contextualized data. The Hilbert space model, as we propose here, is based on quantum probability theory. It provides a promising solution to the problems of the violations of joint distribution assumptions by constructing a single finite state vector that lies within a low-dimensional Hilbert space, and by forming a set of non-commuting measurement operators that represent the p measurements. In this way, we achieve a compressed, coherent, and interpretable representation of the p variables that form the complex collection of K tables even when no p-way joint distribution exists.
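A tiny numpy illustration of the key ingredient follows; it is not the proposed model itself. It shows probabilities derived from a single state vector and two non-commuting measurements, where the order of measurement matters in a way a single classical joint distribution cannot reproduce. The two-dimensional state and bases are invented for illustration.

```python
# Sketch: probabilities from one state vector and two non-commuting
# measurements. The proposed model fits higher-dimensional Hilbert spaces
# and operators to data; this toy only shows the order effect.
import numpy as np

theta = np.pi / 5
psi = np.array([np.cos(theta), np.sin(theta)])            # unit state vector

# Measurement A: projectors onto the standard basis.
PA = [np.outer(e, e) for e in (np.array([1.0, 0.0]), np.array([0.0, 1.0]))]
# Measurement B: projectors onto a rotated basis (does not commute with A).
b0 = np.array([1.0, 1.0]) / np.sqrt(2)
b1 = np.array([1.0, -1.0]) / np.sqrt(2)
PB = [np.outer(b0, b0), np.outer(b1, b1)]

def prob(P, state):
    """Probability of the outcome associated with projector P."""
    return float(state @ P @ state)

def sequential(P_first, P_second, state):
    """P(first outcome, then second): project, renormalize, measure again."""
    collapsed = P_first @ state
    collapsed /= np.linalg.norm(collapsed)
    return prob(P_first, state) * prob(P_second, collapsed)

# The two orders disagree, so no single classical joint distribution can
# reproduce both contingency tables: the motivation for the Hilbert space model.
print(sequential(PA[0], PB[0], psi))   # P(A=0, then B=0)
print(sequential(PB[0], PA[0], psi))   # P(B=0, then A=0)
```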
Prof. Nathan Jacobs, University of Kentucky
Wednesday February 10, 4:00pm
130 Informatics East
Every day billions of images are uploaded to the Internet. Together they provide many high-resolution pictures of the world, from panoramic views of natural landscapes to detailed views of what someone had for dinner. This imagery has the potential to drive discoveries in a wide variety of disciplines, from environmental monitoring to cultural anthropology. Significant research progress has been made in automatically extracting information from such imagery. One of the key remaining challenges is that we often don't know where an image was captured and usually know very little about other geometric properties of the camera, such as orientation and focal length. In other words, most images are not geocalibrated. This talk provides an overview of my work on using novel cues, including partly cloudy days, rainbows, and human faces, to geocalibrate Internet imagery and video.
Praveen Narayanan, IU School of Informatics and Computing
Wednesday February 17, 4:00pm
130 Informatics East
There is a wide gap in the field of machine learning, between the language that is used to describe and share successful ideas, and the code that is used to execute them. While the language of machine learning allows us to reuse a set of concepts, the same cannot be said of the code. For example, if we wanted to query an image processing model by using an inference technique originally written for a speech recognition model, we’d likely have to write an inference method from scratch even if the models shared structural similarities. Instead, we would like small conceptual changes to lead to proportionally small code changes, which is the basis of programming more modularly.
A first step towards modularity is to decouple the code describing the model from the code describing the inference method.
This type of modularity is a direct result of programming in the “generative story” style of probabilistic programs. A second step towards modularity is to compose bigger models from smaller (working) models, and to similarly compose inference methods.
This talk will consider modularity in machine learning programs by surveying the generative story style of programming and examining the tools for model and inference composition provided by Hakaru, a probabilistic programming system being developed at Indiana.
Prof. Weihua An, IU Department of Statistics
Wednesday February 24, 4:00pm
130 Informatics East
Exponential random graph models (ERGMs) have become a standard statistical tool for modeling social networks. In particular, ERGMs provide great flexibility to account for both covariate effects on tie formation and endogenous network formation processes (e.g., reciprocity and transitivity). However, due to the reliance on Markov chain Monte Carlo, it is difficult to fit ERGMs on large networks (e.g., networks composed of hundreds of nodes and edges). This paper describes a series of existing and new methods for estimating ERGMs on large networks and compares their advantages and disadvantages. Selected methods are illustrated through analyzing school friendship networks, etc.
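The specific estimation methods compared in the talk are not reproduced here. As one common baseline only, the sketch below implements maximum pseudolikelihood estimation for a toy ERGM with edge and triangle terms, using logistic regression on dyad-level change statistics (networkx plus scikit-learn); the example network and model terms are illustrative choices.

```python
# Sketch: maximum pseudolikelihood estimation (MPLE) for a toy ERGM with
# edge and triangle statistics. Each dyad contributes one observation whose
# features are the change statistics from toggling that dyad on.
from itertools import combinations
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

G = nx.karate_club_graph()                        # example network

X, y = [], []
for i, j in combinations(G.nodes, 2):
    delta_edges = 1                               # adding (i, j) adds one edge
    delta_triangles = len(list(nx.common_neighbors(G, i, j)))
    X.append([delta_edges, delta_triangles])
    y.append(1 if G.has_edge(i, j) else 0)

# Pseudolikelihood: logistic regression of tie presence on change statistics.
# fit_intercept=False because the edge term already acts as an intercept.
mple = LogisticRegression(fit_intercept=False).fit(np.array(X), np.array(y))
print(dict(zip(["edges", "triangles"], mple.coef_[0])))
```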
Rob Zinkov, IU School of Informatics and Computing
Wednesday March 2, 4:00pm
130 Informatics East
Probabilistic inference procedures are usually coded painstakingly from scratch, for each target model and each inference algorithm. In this talk, I will show how inference procedures can be posed as program transformations which transform one probabilistic model into another probabilistic model. These transformations allow us to generate programs which express exact and approximate inference, and allow us to compose multiple inference procedures over a single model. The resulting inference procedure runs in time comparable to a handwritten procedure.
Erik Weitnauer and Christian Achgill
Wednesday March 9, 4:00pm
130 Informatics East
Formal notations, like algebraic notation, are powerful tools of thinking that can vastly extend our cognitive capabilities. Yet many students don't get much out of using them except frustration. While many of our reasoning tools have changed dramatically with the availability of computers and digital user interfaces, we still mostly write equations using pen and paper.
A few years ago, our team set out to design and implement a digital math notation, inspired by modern user interface design and guided by cognitive research on how people use their visual-motor systems to work with formal notations. Since then, we have built a web-based dynamic algebra notation system we call Graspable Math. With it, users can solve a subset of algebra problems faster and more accurately than on paper, while still working through all transformation steps themselves.
In our talk, we will briefly discuss the research and motivation behind building Graspable Math, demonstrate the system, and share some details about its architecture and our software development strategies. We’ll then discuss our plans for the future and some of the challenges that lie ahead.
(Check out our system at http://graspablemath.com)
Jaimie Murdock, Colin Allen, and Simon DeDeo, IU School of Informatics and Computing
Wednesday March 23, 4:00pm
130 Informatics East
Search in an environment with an uncertain distribution of resources involves a trade-off between exploitation of past discoveries and further exploration. This extends to information foraging, where a knowledge-seeker shifts between reading in depth and studying new domains. To study this cognitive process, we examine the reading choices made by one of the most celebrated scientists of the modern era: Charles Darwin. From the full text of books listed in his chronologically organized reading journals, we generate topic models to quantify his local (text-to-text) and global (text-to-past) reading decisions using Kullback-Leibler divergence, a cognitively validated, information-theoretic measure of relative surprise. Rather than a pattern of surprise-minimization, corresponding to a pure exploitation strategy, Darwin's behavior shifts from early exploitation to later exploration, seeking unusually high levels of cognitive surprise relative to previous eras. These shifts, detected by an unsupervised Bayesian model, correlate with major intellectual epochs of his career as identified both by traditional, qualitative scholarship and Darwin's own self-commentary. In addition to quantifying Darwin's individual-level foraging, our methods allow us to compare his consumption of texts with their publication order. We find Darwin's consumption more exploratory than the culture's production, suggesting that underneath gradual societal changes are the explorations of individual synthesis and discovery. Our quantitative methods advance the study of cognitive search through a framework for testing interactions between individual and collective behavior and between short- and long-term consumption choices. This novel application of topic modeling to characterize individual reading complements widespread studies of collective scientific behavior.
Paper preprint: http://arxiv.org/abs/1509.07175
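A small sketch of the surprise measure itself is shown below: Kullback-Leibler divergence between topic distributions of a new text and a reading history, computed with scipy. The topic mixtures are invented placeholders; in the study they come from topic models fit to the full texts.

```python
# Sketch: "cognitive surprise" as KL divergence between topic distributions.
# The topic mixtures below are toy values, not those from the Darwin corpus.
import numpy as np
from scipy.stats import entropy

current_text = np.array([0.50, 0.30, 0.15, 0.05])   # topic mixture of the new text
recent_past  = np.array([0.45, 0.35, 0.15, 0.05])   # average of recently read texts
distant_past = np.array([0.10, 0.20, 0.30, 0.40])   # average over an earlier era

# entropy(p, q) computes KL(p || q): how surprising the new text is relative
# to a given reading history. Low values suggest exploitation, high exploration.
local_surprise  = entropy(current_text, recent_past)
global_surprise = entropy(current_text, distant_past)
print(local_surprise, global_surprise)
```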
Zachary Tosi (IU CogSci)
Wednesday March 30, 4:00pm
130 Informatics East
Emerging technologies have revealed many features of the behavior and structure of cortical microcircuits. We now know that neuronal firing rates are roughly log-normal and that synaptic connectivity is highly non-random, containing over-represented motifs, hubs, and heavy-tailed distributions in both synaptic efficacy and degree. Recent self-organizing neural models have begun to use known homeostatic mechanisms in combination with other forms of neural plasticity to explain these phenomena. However, while many of these models take homeostasis into account, few give much consideration to how the set-points of that homeostasis are reached. They therefore largely ignore developmental differentiation as a key aspect of self-organization in the brain. The following talk presents a novel neural circuit model wherein the individual set-points of each neuron's homeostatic mechanisms are allowed to self-organize alongside the network's excitatory and inhibitory synaptic growth. Along the same lines, the network is initialized with no pre-defined synaptic connectivity and must grow and then prune its synaptic connections. From there, the network self-organizes a modular/hierarchical structure and rich-club organization, and broadly emulates other qualities of living circuits such as their degree, versatility, synaptic efficacy, and 3-motif distributions. The network also shows an ability to conform optimally around the structure of its inputs, scoring high on measures of pattern separation or generalization depending on which task is favored by the inputs. The results demonstrate that the self-organization of homeostatic set-points in a cortical model can reproduce a plethora of highly non-random network features known to exist in living neural circuits.
Dr. Adam White, IU School of Informatics and Computing
Wednesday April 6, 4:00pm
130 Informatics East
Understanding how an artificial agent may represent, acquire, update, and use large amounts of knowledge has long been an important research challenge in artificial intelligence. This talk explores the predictive approach to knowledge. Predictive knowledge can be maintained without human intervention, and thus its acquisition can potentially scale with available data and computing resources. Unfortunately, technical challenges related to numerical instability, divergence under off-policy sampling, and computational complexity have limited the applicability and scalability of predictive knowledge acquisition in practice.
This talk describes a new approach to representing and acquiring predictive knowledge on a robot. The key idea is that value functions, from reinforcement learning, can be used to represent policy-contingent declarative and goal-oriented predictive knowledge. This talk explores the practicality of making and updating many predictions in parallel, while the agent interacts with the world. I will demonstrate the applicability and scalability of our approach with a demonstration of the psychological phenomenon of nexting, making and updating thousands of predictions from hundreds of thousands of multi-dimensional data samples, in realtime and on a robot—beyond the scalability of related predictive approaches.
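A compact numpy sketch of the core computation behind nexting is given below: many value-function predictions, each with its own discount and target signal, updated in parallel by linear TD(lambda) from a shared feature vector. The random feature stream and cumulants are placeholders; on the real robot the features and signals are, of course, far richer.

```python
# Sketch: updating many general value functions (GVFs) in parallel with
# linear TD(lambda), in the spirit of the "nexting" demonstration.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_predictions, T = 100, 1000, 5000

W = np.zeros((n_predictions, n_features))        # one weight vector per GVF
E = np.zeros_like(W)                             # eligibility traces
gammas = rng.uniform(0.5, 0.99, n_predictions)   # one timescale per GVF
alpha, lam = 0.1 / n_features, 0.9

phi = rng.random(n_features)                     # stand-in for robot features
for t in range(T):
    phi_next = rng.random(n_features)            # next feature vector
    cumulants = rng.random(n_predictions)        # per-GVF target signals
    # TD error for every prediction at once (vectorized over GVFs)
    delta = cumulants + gammas * (W @ phi_next) - (W @ phi)
    E = (gammas * lam)[:, None] * E + phi        # accumulate traces
    W += alpha * delta[:, None] * E              # parallel TD(lambda) update
    phi = phi_next

print(W.shape)                                   # (1000, 100): 1000 predictions
```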
Xiaoran Yan, Indiana University Network Science Institute
Wednesday April 13, 4:00pm
130 Informatics East
In this talk, we highlight the interplay between a dynamic process and the structure of the network on which it is defined. We start by examining the impact of different random walks on quality measures of network clusters and node centrality. We introduce an umbrella framework for defining and characterizing an ensemble of dynamic processes as linear operators. We show that the traditional Laplacian framework for diffusion and random walks is a special case of this framework. Further generalizations will allow us to model epidemic and information diffusion over networks.
Based on this generalized Laplacian framework, we will demonstrate how linear transformations of graphs can represent the flow of different dynamic processes on networks. We will show some empirical examples of how such transformations can be applied in real-world problems where additional data is available alongside the network structure. In the case of multiple graphs, the transformations lead to more principled compositions of multi-layered networks.
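A short numpy/networkx sketch of the special case mentioned above follows: the random-walk Laplacian as a linear operator driving diffusion on a graph. Swapping in a different linear operator would model a different dynamic process in the generalized framework; the example graph and time scale are illustrative.

```python
# Sketch: diffusion driven by the random-walk Laplacian L_rw = I - D^{-1} A,
# the special case of the generalized operator framework.
import networkx as nx
import numpy as np
from scipy.linalg import expm

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)
D_inv = np.diag(1.0 / A.sum(axis=1))
L_rw = np.eye(len(G)) - D_inv @ A            # random-walk Laplacian

# Continuous-time diffusion p(t) = expm(-t * L_rw^T) p(0): probability mass
# from a single seed node spreads over the network (total mass is conserved).
p0 = np.zeros(len(G))
p0[0] = 1.0
p_t = expm(-2.0 * L_rw.T) @ p0               # distribution after t = 2

print(p_t.round(3))
```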
Dr. Sun Sun, University of Toronto
Wednesday April 20, 4:00pm
130 Informatics East
With growing concerns about the environment, the economy, sustainability, and security, more and more renewable energy resources are expected to be integrated into future electrical grids. However, large-scale integration of intermittent renewables such as wind and solar creates noticeable imbalance between supply and demand, hence jeopardizing grid reliability. To combat the variability of renewable generation, energy storage (e.g., batteries, pumped hydro storage, and compressed air energy storage) and flexible loads (e.g., heating, ventilation, and air conditioning) have been suggested for use in many grid-wide services.
In this talk, I will present my work on intelligent control of storage and flexible loads in renewable-integrated electrical grids. First, I introduce an aggregator-storage system that provides service of power balancing to a grid. Both static and dynamic storage (e.g., storage inside electric vehicles) are considered with a wide range of storage characteristics being explicitly modeled. Second, for a substation to maintain phase balance, I suggest intelligent control of storage charging and discharging to balance energy flows. Third, with the inclusion of flexible loads in energy management, I propose the joint optimization of supply, demand, and storage for power balancing in a grid. To improve long-term system performance (e.g., reliability, welfare, and cost effectiveness), in each case I offer efficient centralized algorithms with strong theoretical performance guarantee, and distributed implementation with limited requirement of information exchange.
Prof. Patrick Shih, IU School of Informatics and Computing
Wednesday April 27, 4:00pm
130 Informatics East
The emergence of social media has had a significant impact on how people communicate and socialize. Teens use social media to make and maintain social connections with friends and build their reputation. Research has suggested that teens are more active and engaged than adults on social media. Most such observations, however, have been made through the analysis of limited ethnographic or cross-sectional data. This paper shows the possibility of detecting age information in user profiles by using a combination of textual and facial recognition methods and presents a comparative study of 27K teens and adults on Instagram. We examined how and why the age difference in the behaviors of users on Instagram might have occurred through the lenses of social cognition, developmental psychology, and human-computer interaction. We proposed two hypotheses — teens as digital natives and the need for social interactions — as the theoretical framework for understanding the factors that help explain the behavioral differences. We demonstrate the application of our novel method that shows clear trends of age differences as well as substantiates previous insights in social media. Our computational analysis identified the following novel findings: (1) teens post fewer photos than adults; (2) teens remove more photos based on the number of Likes the photos received; and (3) teens have less diverse photo content. Our analysis was also able to confirm prior ethnographic accounts that teens are more engaged in Liking and commenting, and express their emotions and social interests more than adults. We discuss theoretical and practical interpretations and implications as well as future research directions from the results.
Fall 2015
Prof. Chris Raphael, IU School of Informatics and Computing
Wednesday September 9, 4:00pm
107 Informatics West
I'll discuss my work on musical accompaniment systems, giving both live and video demonstrations as well as presenting the underlying modeling.
Prof. Michael Ryoo, IU School of Informatics and Computing
Wednesday September 16, 4:00pm
107 Informatics West
Can robots understand human activities based on their visual perception? How can we make them do so? This talk discusses Computer Vision algorithms necessary to provide activity-level situation awareness to robots. The activities a robot must recognize not only include simple actions by users in front of it, but also include interactions directly involving the robot such as ‘a person attacking the robot’. The objective is to reliably handle such videos obtained during human/robot activities (which often display significant ego-motion) and associate semantic annotations. The videos taken from a robot’s own perspective are called first-person (or egocentric) videos and we review methods necessary for them. The talk also overviews ongoing work on robot learning of ‘actionable’ activity representations.
Can Liu, IU School of Informatics and Computing
Wednesday September 23, 4:00pm
107 Informatics West
Sentiment analysis, also known as opinion mining, is the task of extracting opinions/emotions from user-generated text, including blogs, conversations, reviews, etc. In this talk, we investigate feature selection methods for sentiment analysis. In particular, we address two questions: 1) Feature selection methods, which cut down high-dimensional feature vectors by identifying salient features, are normally defined for binary classification; we investigate the best way to extend them to a multi-class setting. 2) One issue that is pervasive but usually overlooked in sentiment analysis research is imbalanced data; we investigate whether the selected features are representative of the minority class, and whether we can mitigate the effect of skewing by making the features more general.
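A small scikit-learn sketch of the kind of pipeline at issue is shown below: chi-squared feature selection over bag-of-words features in a multi-class sentiment setting. The toy corpus and the choice of k are placeholders, and the talk's own extensions of binary selection metrics are not reproduced here.

```python
# Sketch: feature selection for multi-class sentiment classification.
# chi2 scores are computed against the class labels and the top-k features
# are kept before training a classifier. Toy corpus only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["loved this movie", "great acting and plot", "terrible and boring",
         "worst film ever", "it was okay", "nothing special, just fine"]
labels = ["pos", "pos", "neg", "neg", "neu", "neu"]     # multi-class sentiment

clf = make_pipeline(
    CountVectorizer(),
    SelectKBest(chi2, k=5),          # keep the 5 most discriminative features
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print(clf.predict(["boring plot but great acting"]))
```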
Marlena Fraune, IU Cognitive Science
Wednesday September 30, 4:00pm
107 Informatics West
In the near future, every house is predicted to have multiple robots: One welcomes you home, another washes your dishes, and a third folds laundry. Researchers and funders alike expect that humans and robots will be able to “symbiotically coexist” and collaborate; however, researchers have yet to study whether and how the differences we commonly see between individual and intergroup interactions among humans will affect human-robot interaction.
I will discuss my research, which begins to examine how interactions with groups of robots in an intergroup context may differ from one-on-one interactions. I will finish by discussing what future directions this research should include. Overall, the goal is to make human-robot collaboration more effective, efficient, and pleasant.
Phillip Odom, School of Informatics and Computing
Wednesday October 7, 4:00pm
107 Informatics West
Building human-in-the-loop intelligent systems that effectively utilize expert feedback to learn robust decision models has been a long-cherished goal of Artificial Intelligence. However, modern decision systems are overly reliant on ideal training data; consequently, any imperfections in the data will be reflected in the final model. It is vital that many real-world decision systems make use of the vast amount of expert knowledge gained in fields such as robotics, navigation, and personal healthcare assistants.
First, in this talk, I will outline our algorithm for exploiting domain knowledge in sequential decision making systems. While the acquisition of domain knowledge is important, many domain experts are not machine learning experts and cannot define the most useful feedback. Ideal algorithms can communicate to the expert where they require assistance. In the second half of the talk, I present a method that, by actively soliciting expert feedback, trades off between the performance of the system and the effort required by the expert. We show empirically the contribution of more expressive advice over traditional learning approaches.
Prof. Pedja Radivojac, IU School of Informatics and Computing
Wednesday October 14, 4:00pm
107 Informatics West
Estimation of class proportions is one of the most fundamental tasks in machine learning, yet this research problem remains surprisingly open in many applications. In this presentation I will discuss supervised, semi-supervised, and unsupervised approaches to these estimation problems, with a particular focus on semi-supervised techniques in which only positive and unlabeled data are available. There, we are interested in non-parametric techniques for class prior estimation. We recently formulated this problem as an estimation of mixing proportions in two-component mixture models. We then showed that estimation of mixing proportions is generally ill-defined and proposed a canonical form to obtain identifiability while maintaining flexibility to model any distribution. We used insights from this theory to elucidate the optimization surface of the class priors and proposed an algorithm for estimating them. The efficacy of the approach was evaluated on univariate and multivariate data. At the end, I hope to show some fun results obtained on estimation problems in bioinformatics where we do not know the truth. This is joint work with Shantanu Jain, Martha White, and Michael Trosset.
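The talk's nonparametric estimator is not reproduced here. As a simpler, widely used point of comparison, the sketch below implements the Elkan & Noto (2008) style estimate of the labeling frequency from positive and unlabeled data, from which the class prior follows; the synthetic data and classifier are illustrative choices.

```python
# Sketch: estimating the class prior from positive and unlabeled data with the
# Elkan & Noto heuristic: train a classifier to separate labeled positives from
# unlabeled examples, estimate c = P(labeled | positive) as the mean score on
# held-out labeled positives, then recover the prior as P(labeled) / c.
# This is a standard baseline, not the nonparametric method from the talk.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
true_prior, n = 0.3, 20000
y = rng.random(n) < true_prior                       # hidden true labels
X = rng.normal(loc=np.where(y, 1.5, -1.5)[:, None], scale=1.0, size=(n, 2))

labeled = y & (rng.random(n) < 0.4)                  # 40% of positives get labeled
s = labeled.astype(int)                              # observed labeled/unlabeled flag

X_tr, X_val, s_tr, s_val = train_test_split(X, s, test_size=0.3, random_state=0)
g = LogisticRegression().fit(X_tr, s_tr)

c = g.predict_proba(X_val[s_val == 1])[:, 1].mean()  # estimate of P(labeled | positive)
prior_est = s.mean() / c                             # P(positive) = P(labeled) / c
print(round(prior_est, 3), "vs true", true_prior)
```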
Shantanu Jain, School of Informatics and Computing
Wednesday October 21, 4:00pm
107 Informatics West
We develop parametric and nonparametric algorithms to estimate the mixing proportion of a two-component mixture given a sample from the mixture and a sample from one of the components. This problem occurs in many domains, and has implications for binary classification in the positive and unlabeled data setting. In the parametric setting, the components are assumed to have a skew normal distribution. In general, the problem is ill-defined, in the sense that the mixing proportion is not identifiable. We develop conditions that lead to identifiability. To handle multi-dimensional data, we give a transform that reduces the data to one dimension while preserving the mixing proportion.
Prof. Daniel McDonald, Department of Statistics
Wednesday October 28, 4:00pm
107 Informatics West
Principal components analysis (PCA) is a classical dimension reduction method that involves projecting the data onto the subspace spanned by the leading eigenvectors of the covariance matrix. This projection can be used either for exploratory purposes or as an input to further analysis, e.g., regression. If the data have billions of entries or more, the computational and storage requirements for saving and manipulating the design matrix in fast memory are prohibitive. Recently, the Nyström and column-sampling methods have appeared in the numerical linear algebra community for the randomized approximation of the singular value decomposition of large matrices. However, their utility for statistical applications remains unclear. We compare these approximations theoretically by bounding the distance between the induced subspaces and the desired, but computationally infeasible, PCA subspace. Additionally, we show empirically, through simulations and a real data example, the trade-off between approximation accuracy and computational complexity.
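A numpy sketch of the column-sampling Nyström idea follows: sample a subset of columns of a large positive semidefinite (e.g., kernel or covariance) matrix, invert only the small sampled block, and compare the induced low-rank approximation to the exact matrix. The RBF kernel, sizes, and number of sampled columns are illustrative choices, not those from the talk's analysis.

```python
# Sketch: Nystrom approximation of a PSD kernel matrix from a random subset
# of its columns: K ~= C @ pinv(W) @ C.T, where C holds the sampled columns
# and W the corresponding small square block.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 10))

def rbf_kernel(A, B, gamma=0.1):
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * np.clip(sq, 0, None))

m = 100                                          # number of sampled columns
idx = rng.choice(len(X), size=m, replace=False)

C = rbf_kernel(X, X[idx])                        # n x m block of sampled columns
W = C[idx]                                       # m x m block among sampled points
K_nystrom = C @ np.linalg.pinv(W) @ C.T          # rank-m approximation of K

K_exact = rbf_kernel(X, X)                       # feasible here only because n is small
rel_err = np.linalg.norm(K_exact - K_nystrom) / np.linalg.norm(K_exact)
print(round(rel_err, 4))
```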
Prof. Seita Koike, Tokyo City University
Wednesday Nov 4, 2015, 4pm
107 Informatics West
Social robotics is one of the main areas of robotics development in Japan, particularly in the domain of personal robots for everyday use. This talk will first report on the current state of the art in Japanese robotics, including commercialized platforms like Aldebaran’s Pepper humanoid, Palro, and the seal-like robot Paro. We will then describe our lab’s work on designing a robot prototype “Mugbot” as an example of how a social robot can be developed through interaction with the community, and how the use of a robot shapes community in turn. Mugbot has been used by children in educational contexts, and has also been presented at several Maker Faires as a DIY robotics platform.
Bio: Seita Koike is a Professor of Informatics at Tokyo City University. He founded and directs the Information Design Laboratory. The lab’s research focuses on information design, social robotics, education, and human-robot interaction. After working for many years with NEC’s personal robot Papero, Dr. Koike developed his own prototype social robot, Mugbot, which won four Blue Ribbon Maker Faire awards in 2012. Dr. Koike got his PhD in Industrial Design from Chiba University in 1986.
Roman Fedorov, Politecnico di Milano
Wednesday, November 11, 4:00pm
Informatics West 107
The environmental monitoring field often suffers from lack of significant and exhaustive input data. On the other hand, the quantity of public content generated by users or by sensors available on the Web nowadays is reaching unprecedented volumes. This massive collection of data contains an enormous amount of latent knowledge, which can be used to improve environmental models. In particular, computer vision techniques can be applied to visual data, in order to extract relevant environmental information. For example, estimating snow cover in mountainous regions, that is, the spatial extent of the earth surface covered by snow, is an important challenge for efficient environmental monitoring and water management. Publicly available visual content, in the form of user generated photographs and image feeds from outdoor webcams, can be leveraged as additional measurement sources, complementing existing ground, satellite and airborne sensor data. Our SnowWatch platform implements two content acquisition and processing pipelines that are tailored to such sources, addressing the specific challenges posed by each of them, e.g., identifying the mountain peaks, filtering out images taken in bad weather conditions, handling varying illumination conditions, and classifying a pixel as snow or non-snow area. The final outcome is summarized in a snow cover index, which indicates for a specific mountain and day of the year, the fraction of visible area covered by snow, possibly at different elevations. Feeding snow cover indexes to real environmental models reveals environmental consistency and utility of the produced data.
Prof. Michael Trosset, IU Department of Statistics
Wednesday, November 18, 4:00pm
Informatics West 107
Isomap, Locally Linear Embedding (LLE), and Laplacian Eigenmaps were originally promoted as techniques for nonlinear dimension reduction. Efforts to develop theories of manifold learning have assumed that data lie on low-dimensional manifolds and that the goal of manifold learning is to identify these manifolds. I will argue that this goal is conceptually distinct from the goal of nonlinear dimension reduction. For example, Hessian eigenmaps can (in theory) recover parametrizations that Isomap cannot, but Isomap can construct low-dimensional Euclidean representations of Riemannian manifolds that elude Hessian eigenmaps.
Sample Efficient Methods for Reinforcement Learning
Prof. Martha White, IU School of Informatics and Computing
Wednesday, December 2, 4:00pm
Informatics West 107
Balancing between computational efficiency and sample efficiency is an important goal in reinforcement learning. Temporal difference learning algorithms stochastically update the value function, with a linear time complexity in the number of features, whereas least-squares temporal difference algorithms are sample efficient but can be quadratic in the number of features. In this talk, I will discuss recent progress towards the goal of better balancing computation and sample efficiency with a new class of algorithms that use incremental matrix approximations.
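A numpy sketch contrasting the two regimes mentioned above is given below: a linear-time TD(0) update versus an LSTD-style solve that accumulates d-by-d statistics and is quadratic in the number of features. The random feature stream and rewards are stand-ins for a real MDP, and the step size and regularizer are arbitrary.

```python
# Sketch: stochastic TD(0), which costs O(d) per step, versus least-squares TD
# (LSTD), which stores d x d statistics and solves a linear system.
import numpy as np

rng = np.random.default_rng(0)
d, T, gamma, alpha = 50, 10000, 0.95, 0.01

w_td = np.zeros(d)                     # TD(0) weights
A = np.zeros((d, d))                   # LSTD statistics
b = np.zeros(d)

phi = rng.random(d)
for t in range(T):
    reward = rng.random()
    phi_next = rng.random(d)           # next state's features (toy stream)

    # TD(0): one cheap stochastic update
    delta = reward + gamma * w_td @ phi_next - w_td @ phi
    w_td += alpha * delta * phi

    # LSTD: accumulate A and b, solve once at the end
    A += np.outer(phi, phi - gamma * phi_next)
    b += reward * phi

    phi = phi_next

w_lstd = np.linalg.solve(A + 1e-3 * np.eye(d), b)
print(np.linalg.norm(w_td - w_lstd))   # compare the two estimates of the fixed point
```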
Effective Statistical Relational Learning
Prof. Sriraam Natarajan, IU School of Informatics and Computing
Wednesday, December 9, 4:00pm
Informatics West 107
Statistical Relational Learning (SRL) models combine the powerful formalisms of probability theory and first-order logic to handle uncertainty in large, complex problems. While they provide a very effective representation paradigm due to their succinctness and parameter sharing, efficient learning is a significant problem in these models. First, I will discuss a state-of-the-art learning method based on boosting that is representation independent. Our results demonstrate that learning multiple weak models can lead to a dramatic improvement in accuracy and efficiency.
One of the key attractive properties of SRL models is that they use a rich representation for modeling the domain that potentially allows for seamless human interaction. However, in current SRL research, the human is restricted to either being a mere labeler or being an oracle who provides the entire model. I will present our recent work that allows for more reasonable human interaction, where the human input is taken as "advice" and the learning algorithm combines this advice with data. Finally, I will discuss our work on employing SRL models for achieving transfer across seemingly unrelated domains, allowing for more efficient learning in data-scarce domains.