Publication


Featured research published by Sven Bambach.


International Conference on Computer Vision (ICCV) | 2015

Lending A Hand: Detecting Hands and Recognizing Activities in Complex Egocentric Interactions

Sven Bambach; Stefan Lee; David J. Crandall; Chen Yu

Hands appear very often in egocentric video, and their appearance and pose give important cues about what people are doing and what they are paying attention to. But existing work in hand detection has made strong assumptions that work well in only simple scenarios, such as with limited interaction with other people or in lab settings. We develop methods to locate and distinguish between hands in egocentric video using strong appearance models with Convolutional Neural Networks, and introduce a simple candidate region generation approach that outperforms existing techniques at a fraction of the computational cost. We show how these high-quality bounding boxes can be used to create accurate pixelwise hand regions, and as an application, we investigate the extent to which hand segmentation alone can distinguish between different activities. We evaluate these techniques on a new dataset of 48 first-person videos of people interacting in realistic environments, with pixel-level ground truth for over 15,000 hand instances.
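As a rough illustration of the kind of pipeline the abstract describes (sampling candidate regions, then scoring them with a CNN-style classifier), here is a minimal sketch. The spatial prior, box size, and `classifier` callable are hypothetical stand-ins, not the authors' models or code.

```python
# Illustrative sketch, not the paper's implementation: sample candidate boxes
# from a 2D spatial probability map and keep the crops a generic classifier
# scores highest as "hand".
import numpy as np

def sample_candidate_boxes(spatial_prior, n_samples, box_size=(80, 80)):
    """Sample box top-left corners from a probability map over pixel locations."""
    h, w = spatial_prior.shape
    flat = spatial_prior.ravel().astype(float)
    flat /= flat.sum()
    idx = np.random.choice(h * w, size=n_samples, p=flat)
    ys, xs = np.unravel_index(idx, (h, w))
    bh, bw = box_size
    xs = np.clip(xs - bw // 2, 0, w - bw)
    ys = np.clip(ys - bh // 2, 0, h - bh)
    return list(zip(xs, ys))

def detect_hands(frame, spatial_prior, classifier, box_size=(80, 80),
                 n_samples=500, keep=10):
    """Score each candidate crop and return the `keep` best boxes."""
    bh, bw = box_size
    boxes = sample_candidate_boxes(spatial_prior, n_samples, box_size)
    crops = [frame[y:y + bh, x:x + bw] for (x, y) in boxes]
    scores = np.array([classifier(c) for c in crops])  # hypothetical: P(hand) per crop
    order = np.argsort(scores)[::-1]
    return [boxes[i] for i in order[:keep]]
```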


Computer Vision and Pattern Recognition (CVPR) | 2014

This Hand Is My Hand: A Probabilistic Approach to Hand Disambiguation in Egocentric Video

Stefan Lee; Sven Bambach; David J. Crandall; John M. Franchak; Chen Yu

Egocentric cameras are becoming more popular, introducing increasing volumes of video in which the biases and framing of traditional photography are replaced with those of natural viewing tendencies. This paradigm enables new applications, including novel studies of social interaction and human development. Recent work has focused on identifying the camera wearer's hands as a first step towards more complex analysis. In this paper, we study how to disambiguate and track not only the observer's hands but also those of social partners. We present a probabilistic framework for modeling paired interactions that incorporates the spatial, temporal, and appearance constraints inherent in egocentric video. We test our approach on a dataset of over 30 minutes of video from six pairs of subjects.
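A minimal sketch of the general idea of combining spatial and appearance evidence into a posterior over hand types (own left/right vs. partner left/right). The likelihood functions and argument names below are hypothetical placeholders, not the paper's actual model.

```python
# Illustrative sketch: Bayesian combination of spatial and appearance evidence
# for one hand detection. `spatial_lik` and `appearance_lik` are assumed,
# user-supplied likelihood functions.
import numpy as np

HAND_TYPES = ["own_left", "own_right", "other_left", "other_right"]

def hand_type_posterior(box, crop, spatial_lik, appearance_lik, prior=None):
    """P(type | evidence) proportional to P(location | type) * P(appearance | type) * P(type)."""
    if prior is None:
        prior = np.full(len(HAND_TYPES), 1.0 / len(HAND_TYPES))
    scores = np.array([
        spatial_lik(box, t) * appearance_lik(crop, t) * p
        for t, p in zip(HAND_TYPES, prior)
    ])
    return dict(zip(HAND_TYPES, scores / scores.sum()))
```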


Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) | 2016

Objects in the center: How the infant's body constrains infant scenes

Sven Bambach; Linda B. Smith; David J. Crandall; Chen Yu

During early visual development, the infant's body and actions both create and constrain the experiences on which the visual system grows. Evidence on early motor development suggests a bias for acting on objects with the eyes, head, trunk, hands, and object aligned at midline. Because these sensory-motor bodies structure visual input, they may also play a role in the development of visual attention: attended objects are at the center of a head- and body-centered scene. In this study, we designed a table-top object exploration task in which infants and parents were presented with novel objects for joint play and examination. Using a head-mounted eye-tracking system, we measured each infant's point of gaze relative to the head when attending to objects. With an additional overhead camera, we recorded the position of each object relative to the infant's body. We show that even during free toy play, infants tend to bring attended objects towards their body's midline and attend to objects with head and eyes aligned, systematically creating images with the attended object at center.
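The two measurements described above (gaze relative to the head-centered image, and object position relative to the body midline) can both be summarized as simple normalized offsets. The sketch below is an assumed illustration of how such offsets might be computed, not the study's analysis code.

```python
# Illustrative sketch: normalized "centeredness" measures for gaze and objects.
import numpy as np

def gaze_offset_from_center(gaze_xy, frame_size):
    """Normalized distance of the gaze point from the image center (0 = perfectly centered)."""
    cx, cy = frame_size[0] / 2.0, frame_size[1] / 2.0
    return np.hypot(gaze_xy[0] - cx, gaze_xy[1] - cy) / np.hypot(cx, cy)

def object_offset_from_midline(object_x, midline_x, table_width):
    """Signed horizontal offset of an object from the body midline, normalized to [-1, 1]."""
    return (object_x - midline_x) / (table_width / 2.0)
```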


International Conference on Multimodal Interfaces (ICMI) | 2015

Viewpoint Integration for Hand-Based Recognition of Social Interactions from a First-Person View

Sven Bambach; David J. Crandall; Chen Yu

Wearable devices are becoming part of everyday life, from first-person cameras (GoPro, Google Glass), to smart watches (Apple Watch), to activity trackers (FitBit). These devices are often equipped with advanced sensors that gather data about the wearer and the environment. These sensors enable new ways of recognizing and analyzing the wearer's everyday personal activities, which could be used for intelligent human-computer interfaces and other applications. We explore one possible application by investigating how egocentric video data collected from head-mounted cameras can be used to recognize social activities between two interacting partners (e.g., playing chess or cards). In particular, we demonstrate that just the positions and poses of hands within the first-person view are highly informative for activity recognition, and present a computer vision approach that detects hands to automatically estimate activities. While hand pose detection is imperfect, we show that combining evidence across first-person views from the two social partners significantly improves activity recognition accuracy. This result highlights how integrating weak but complementary sources of evidence from social partners engaged in the same task can help to recognize the nature of their interaction.
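One simple way to combine evidence across the two partners' views, in the spirit of the abstract, is late fusion of per-view activity probabilities. The sketch below multiplies and renormalizes them; the per-view classifiers and example numbers are hypothetical, and this is not necessarily the exact integration scheme used in the paper.

```python
# Illustrative sketch: late fusion of activity probabilities estimated
# independently from each partner's first-person view.
import numpy as np

def fuse_views(probs_view_a, probs_view_b):
    """Multiply per-activity probabilities from both views and renormalize."""
    joint = np.asarray(probs_view_a, dtype=float) * np.asarray(probs_view_b, dtype=float)
    return joint / joint.sum()

# Example: each view alone is ambiguous, but together they favor activity 1.
p_a = [0.4, 0.5, 0.1]
p_b = [0.2, 0.6, 0.2]
print(fuse_views(p_a, p_b))  # -> [0.2, 0.75, 0.05]
```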


Journal of Visualized Experiments | 2018

A View of Their Own: Capturing the Egocentric View of Infants and Toddlers with Head-Mounted Cameras

Jeremy I. Borjon; Sara E. Schroer; Sven Bambach; Lauren K. Slone; Drew H. Abney; David J. Crandall; Linda B. Smith

Infants and toddlers view the world, at a basic sensory level, in a fundamentally different way from their parents. This is largely due to biological constraints: infants have different body proportions than their parents, and their ability to control their own head movements is less developed. Such constraints limit the visual input available. This protocol aims to provide guiding principles for researchers using head-mounted cameras to understand the changing visual input experienced by the developing infant. Successful use of this protocol will allow researchers to design and execute studies of the developing child's visual environment set in the home or laboratory. From this method, researchers can compile an aggregate view of all the possible items in a child's field of view. This method does not, however, directly measure exactly what the child is looking at. By combining this approach with machine learning, computer vision algorithms, and hand-coding, researchers can produce a high-density dataset to illustrate the changing visual ecology of the developing infant.
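As an example of the kind of aggregate view such recordings enable, the sketch below computes the fraction of frames in which each object category appears, given per-frame labels. The per-frame set-of-labels format is an assumed toy representation, not part of the published protocol.

```python
# Illustrative sketch: summarize a head-camera recording as the fraction of
# frames in which each object category is visible.
from collections import Counter

def category_frame_fractions(frame_detections):
    """frame_detections: list of sets of category labels, one set per frame."""
    counts = Counter()
    for labels in frame_detections:
        counts.update(set(labels))
    n_frames = len(frame_detections)
    return {cat: c / n_frames for cat, c in counts.items()}

# Example with three frames:
frames = [{"face", "toy"}, {"toy"}, {"hand", "toy"}]
print(category_frame_fractions(frames))  # e.g. {'toy': 1.0, 'face': 0.33, 'hand': 0.33}
```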


arXiv: Computer Vision and Pattern Recognition | 2015

A Survey on Recent Advances of Computer Vision Algorithms for Egocentric Video.

Sven Bambach


International Conference on Development and Learning (ICDL) | 2013

Understanding embodied visual attention in child-parent interaction

Sven Bambach; David J. Crandall; Chen Yu


Cognitive Science | 2014

Detecting Hands in Children's Egocentric Views to Understand Embodied Attention during Social Interaction

Sven Bambach; John M. Franchak; David J. Crandall; Chen Yu


Cognitive Science | 2016

Active Viewing in Toddlers Facilitates Visual Object Learning: An Egocentric Vision Approach.

Sven Bambach; David J. Crandall; Linda B. Smith; Chen Yu


Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob) | 2017

An egocentric perspective on active vision and visual object learning in toddlers

Sven Bambach; David J. Crandall; Linda B. Smith; Chen Yu

Collaboration


Sven Bambach's top co-authors and their affiliations:

David J. Crandall, Indiana University Bloomington
Linda B. Smith, Indiana University Bloomington
Stefan Lee, Georgia Institute of Technology
Zehua Zhang, Indiana University Bloomington
Drew H. Abney, University of California
Satoshi Tsutsui, Indiana University Bloomington