Previous Behavior Imaging Seminars

Spring 2014 Schedule

February 5, 2014
Lisa Parr, Emory University, Yerkes National Primate Research Center
TSRB 175 (Auditorium) 12:00-1:00pm

Title: Primate social cognition and its translational relevance

Abstract: I will describe some of my older studies of comparative primate social cognition, particularly face and expression discrimination in chimpanzees and rhesus monkeys. I will also present new data on how these studies have informed our current work on rhesus monkeys to address the effect of oxytocin on social cognition and the development of social attention in infant monkeys.

February 26, 2014
Mubarak Shah, University of Central Florida, Computer Science
TSRB 132 12:00-1:00pm

Title: Counting, Detecting and Tracking of People in Crowded Scenes

Abstract: In this talk, I will first present a new approach for counting people in extremely dense crowds. Our approach relies on multiple sources of information, such as low-confidence head detections, repetition of texture elements (using SIFT), and frequency-domain analysis, to estimate counts, along with the confidence associated with observing individuals, in an image region. In addition, we employ a global consistency constraint on counts using a Markov Random Field, which accounts for disparities in counts across local neighborhoods and scales. Next, I will discuss how we exploit context for human detection in dense crowds in the form of a locally-consistent scale prior, which captures the similarity of scale within local neighborhoods and its smooth variation over the image. Using the scale and confidence of detections obtained from an underlying human detector, we infer scale and confidence priors with a Markov Random Field. In an iterative mechanism, the confidences of detections are modified to reflect consistency with the inferred priors, and the priors are updated based on the new detections. The final set of detections is then reasoned about for occlusion using Binary Integer Programming, where overlaps and relations between parts of individuals are encoded as linear constraints. Finally, I will present a method for tracking in dense crowds using prominence and neighborhood motion concurrence. Our method begins by automatically identifying prominent individuals in the crowd who are easy to track. We then use Neighborhood Motion Concurrence to model the behavior of individuals in a dense crowd, predicting the position of an individual based on the motion of its neighbors.

March 26, 2014
Guillermo Sapiro, Duke University, Electrical & Computer Engineering
MIRC 102a 12:00-1:00pm


Title: Signal and Video Processing for Aiding in the Screening of Child Mental Health

Abstract: In this talk I will discuss our activities in data analysis for aiding in the screening of child mental health. We will cover topics ranging from automatic video analysis to the shortening of lengthy parent questionnaires. I will also describe our interdisciplinary efforts in general, and how a team of engineers and child psychiatrists working together can produce a societal contribution.

April 9, 2014
Wendy Rogers, Georgia Institute of Technology, Psychology
Klaus 1116 East 12:00-1:00pm

Title: Human-Robot Interaction: Understanding the Potential of Robots for Older Adults

Abstract: There is much potential for robots to support older adults in their goal of successful aging with high quality of life. However, for human-robot interactions to be successful, the robots must be designed with user needs in mind. In the Human Factors and Aging Laboratory at the Georgia Institute of Technology we are conducting research in the nascent field of older adult-robot interactions. In this presentation I will provide an overview of the needs, capabilities, preferences, and limitations of older adults. I will then discuss our current and planned research on the design of robots to support older adults and health care providers. Our focus is on understanding the interactions between user characteristics, robot characteristics, and the context of the interactions (e.g., task demands).

April 16, 2014
Nilanjan Sarkar, Vanderbilt University, Mechanical & Computer Engineering
TSRB 132 12:00-1:00pm

Title: Can Robotics Play a Role in Advancing Autism Treatment?

Abstract: The Centers for Disease Control and Prevention estimates that 1 in 68 children in the United States has an Autism Spectrum Disorder (ASD). ASD is associated with enormous individual, familial, and social cost across the lifespan. There is an urgent need for efficacious treatments that can be delivered across resource-strained environments. In this context, robotic technologies may yield intervention platforms with substantial promise for impacting early core symptoms of ASD. This seminar will provide a brief background of robotics research in ASD intervention and then present our work on the design and preliminary testing of robotic intervention platforms and environments for young children with ASD.

September 9, 2014
Andrea Thomaz, Georgia Institute of Technology, School of Interactive Computing
Pettit Microelectronics Building (MiRC) 102 12:00-1:00pm

Title: Socially Interactive Robot Learning

Abstract: The vision of our research is to enable robots to function in dynamic human environments by allowing them to flexibly adapt their skill set via learning interactions with end users. This work explores the ways in which Machine Learning agents can exploit principles of human social learning to build robot systems that can learn from everyday human teachers. We look at standard Machine Learning interactions and redesign interfaces and algorithms to support the collection of learning input from naive humans. This talk covers results on building computational models of reciprocal social interactions, high-level task goal learning, low-level skill learning, and active learning interactions using several humanoid robot platforms.

September 23, 2014
Yaser Sheikh, Carnegie Mellon University, Robotics Institute
Pettit Microelectronics Building (MiRC) 102 12:00-1:00pm

Title: The Panoptic Studio: Perceiving Small Details in Large-scale Social Interaction

Abstract: In this talk, I will present a massively multi-view system designed to capture the subtle details of motion produced by *polyads*, i.e., multiple people engaged in a social interaction. Polyadic interactions present three challenges to motion capture methods: (1) subtle details that encode crucial information about the interaction need to be resolved over a volume large enough to house several people, (2) severe occlusions frequently occur in any small subset of views, and (3) no dominant direction of observation exists. Our system uses 500 synchronized cameras of hybrid resolutions mounted over the surface of a geodesic dome to capture the detailed motion of approximately fifty thousand points over volumes sufficient for polyadic interactions of ten people. Using a dataset with participants involved in polyadic games (to evoke natural interactions), we establish the performance of this large-scale multi-view system by demonstrating robustness to occlusion, increased capture density, and agnosticism to the orientation of activity. I will show raw 3D reconstructions of unprecedented motion detail over durations an order of magnitude longer than have previously been recorded.

Biography of the Speaker: Yaser Sheikh is an Assistant Research Professor at the Robotics Institute, Carnegie Mellon University, with appointments in the Mechanical Engineering Department and the Quality of Life Technology Center. His research interests span computer vision, computer graphics, and robotics, and are focused on the machine perception of social behavior. He has won the Honda Initiation Award (2010), best paper awards at SAP (2012), WACV (2012), SCA (2010), and ICCV THEMIS (2009), and the Hillman Fellowship for Excellence in Computer Science Research (2004). He has over 50 publications in leading conferences in computer vision, computer graphics, and machine learning, and holds four patents on his research. Yaser chairs leading conferences in computer vision, computer graphics, and robotics including ICCP (2011), SIGGRAPH (2013, 2014), CVPR (2014), and ICRA (2014). His research is sponsored by various government research offices, including NSF and DARPA, and several industrial partners including Intel Corporation, the Walt Disney Company, Nissan, Honda, and Samsung Group. Yaser's research has been covered in various media outlets including New Scientist, BBC, Slashdot, and WIRED.

October 6, 2014
Omer Inan, Georgia Institute of Technology, Electrical & Computer Engineering
TSRB 132 12:00-1:00pm

Title: Non-Invasive Physiological Monitoring Technologies for Human Health and Performance

Abstract: By 2030, the American Heart Association projects that 40% of Americans (150 million) will suffer from cardiovascular disease and the annual costs will approach $1 trillion. Cardiovascular monitoring at home could improve the quality of care and life for these millions of patients, and reduce healthcare costs for all Americans. Rather than reacting to catastrophic cardiac events such as heart attacks or strokes in the emergency room, care could be delivered proactively by tailoring treatment strategies to the changing needs of the patients. To accelerate this transition from reactive to proactive care, we need systems-level innovations in multi-modal physiological monitoring and signal interpretation. These solutions would leverage advances in embedded systems and sensor technology to achieve accurate and robust monitoring of clinically relevant parameters in the home.
This talk focuses primarily on one such system: a modified electronic weighing scale designed to monitor the electrical and mechanical health of the heart. With this scale, we measured fluctuations in bodyweight resulting from the movement of blood throughout the vasculature, and combined physiologic insights with feature extraction concepts. Subsequently, we conducted multiple clinical studies with healthy and diseased populations in collaboration with cardiologists in the Medical School and local clinics to demonstrate the efficacy of this device for home health monitoring. 

Biography:
Omer Inan is an Assistant Professor of Electrical and Computer Engineering at the Georgia Institute of Technology, where he researches physiological and biomedical sensing and monitoring. Before joining Georgia Tech, Omer finished his BS, MS, and PhD in electrical engineering in ‘04, ‘05, and ‘09, respectively, from Stanford University, where he was awarded the Lieberman Fellowship in 2008-’09 for outstanding scholarship, teaching, and service. From 2009-‘13, he continued his research at Stanford as a Visiting Scholar while also working as Chief Engineer at Countryman Associates, a high-end professional audio manufacturer. He is an Associate Editor for the IEEE Journal of Biomedical and Health Informatics and the annual IEEE Engineering in Medicine and Biology Conference, and technical program committee member for several international biomedical engineering conferences. During his undergraduate studies at Stanford, Omer competed as a discus and shot put thrower on the Track and Field Team, and was a three-time All-American in the discus throw. 

October 22, 2014
Karen Adolph, New York University, Psychology
TSRB 175 (Auditorium) 2:00-3:00pm

Title: Learning to Move

Abstract: A central issue for research in psychology, biology, and robotics is how movements are generated and controlled. I address this issue by asking how infants solve the problem of moving. My research shows that basic motor skills such as looking, reaching, and walking do not simply appear as the result of maturation. Rather, motor skills are learned over months or years of practice. Learning entails discovering new forms of movements, using perceptual information to select movements appropriately and to modify movements prospectively, and honing motor skills to make them more fluent and efficient. How do babies do it? One clue is that infants acquire immense amounts of variable, distributed practice with basic motor skills. Practicing movements—moving eyes, head, limbs, and body under varying local conditions—teaches infants to control their movements more adaptively, provides new opportunities for learning, and creates new motivations for goal-directed action. A second clue is that the learning process is geared toward flexibility rather than rote performance: Infants are “learning to learn” rather than acquiring fixed solutions. However, because learning to learn is limited to a particular problem space, learning does not transfer across action systems in development.

2013 Behavior Imaging Seminars