Eric Martinson
Toyota
Publications
Featured research published by Eric Martinson.
IEEE Transactions on Haptics | 2015
German H. Flores; Sri Kurniawan; Roberto Manduchi; Eric Martinson; Lourdes Morales; Emrah Akin Sisbot
We propose a vibrotactile interface in the form of a belt for guiding blind walkers. This interface enables blind walkers to receive haptic directional instructions along complex paths without negatively impacting users' ability to listen to and perceive the environment, the way some auditory directional instructions do. The belt interface was evaluated in a controlled study with 10 blind individuals and compared to audio guidance. The experiments were videotaped, and the participants' behaviors and comments were content analyzed. Completion times and deviations from ideal paths were also collected and statistically analyzed. By triangulating the quantitative and qualitative data, we found that the belt resulted in closer path following at the expense of speed. In general, the participants were positive about the use of a vibrotactile belt to provide directional guidance.
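A minimal sketch of how a directional cue might be mapped to a belt motor; the eight-motor layout and the helper name motor_for_heading are assumptions made for illustration, not details taken from the paper.

```python
import math

# Hypothetical 8-motor belt layout: motor 0 faces forward, motors are
# spaced evenly clockwise around the waist.
NUM_MOTORS = 8

def motor_for_heading(desired_heading_deg, current_heading_deg):
    """Pick the belt motor closest to the direction the walker should turn."""
    # Relative bearing in [0, 360): 0 = straight ahead, 90 = to the right.
    relative = (desired_heading_deg - current_heading_deg) % 360.0
    sector = 360.0 / NUM_MOTORS
    return int((relative + sector / 2) // sector) % NUM_MOTORS

# Example: the path requires a heading of 45 degrees while the walker faces 0.
print(motor_for_heading(45.0, 0.0))  # -> motor 1 (front-right)
```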
robot and human interactive communication | 2014
Eric Martinson
Often overlooked in human-robot interaction is the challenge of people detection. For natural interaction, a robot must detect people without waiting for them to face the camera, move far enough away to be fully visible, or center themselves within the field of view. Furthermore, detection must happen without requiring amounts of processing that are impractical for real systems. In this work we focus on person detection in a guidance scenario, where occlusion is particularly prevalent. Using a layered approach with depth images, we can substantially improve detection rates under high levels of occlusion, and enable a robot to detect a target that is moving into and out of the field of view.
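A rough sketch of a layered, depth-band style of candidate detection under occlusion; the band width, thresholds, and the function detect_person_candidates are illustrative assumptions rather than the paper's actual algorithm.

```python
import numpy as np
from scipy import ndimage

def detect_person_candidates(depth_m, band_width=0.4, max_range=6.0,
                             min_pixels=800):
    """Slice a depth image into narrow depth bands and look for person-sized
    blobs in each band independently, so a person partially hidden behind a
    closer object can still surface in their own depth layer."""
    candidates = []
    for near in np.arange(0.5, max_range, band_width):
        band = (depth_m >= near) & (depth_m < near + band_width)
        labels, n = ndimage.label(band)
        for i in range(1, n + 1):
            ys, xs = np.nonzero(labels == i)
            if len(ys) < min_pixels:
                continue  # too small to be a (partially visible) person
            h, w = ys.ptp() + 1, xs.ptp() + 1
            if h > w:  # people are usually taller than wide, even when occluded
                candidates.append((xs.min(), ys.min(), w, h, near))
    return candidates

# Example with a synthetic depth image (meters).
depth = np.full((240, 320), 5.0)
depth[60:200, 140:190] = 2.0        # person-like blob at ~2 m
print(detect_person_candidates(depth))
```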
robot and human interactive communication | 2016
Eric Martinson; V. Yalla
Convolutional neural networks (CNNs), in combination with big data, are increasingly being used to engineer robustness into visual classification systems, including human detection. One significant challenge to using a CNN on a mobile robot, however, is the associated computational cost and detection rate of running the network. In this work, we demonstrate how fusion with a feature-based layered classifier can help. Not only does score-level fusion of a CNN with the layered classifier improve precision/recall for detecting people on a mobile robot, but using the layered system as a pre-filter can substantially reduce the computational cost of running a CNN, reducing the number of objects that need to be classified while still improving precision. The combined real-time system is implemented and evaluated on two robots with very different GPU capabilities.
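A small sketch of the two ideas described above, cheap pre-filtering followed by score-level fusion; the interfaces layered_score and cnn_score, the weights, and the thresholds are assumed for illustration and are not the paper's implementation.

```python
# Sketch under assumed interfaces: `layered_score(roi)` and `cnn_score(roi)`
# each return a person-likelihood in [0, 1]; neither is the paper's actual API.
def classify_objects(rois, layered_score, cnn_score,
                     prefilter_thresh=0.2, w_layered=0.4, w_cnn=0.6):
    """Pre-filter candidate regions with the cheap layered classifier,
    then fuse its score with the CNN score for the survivors."""
    results = []
    for roi in rois:
        s_layer = layered_score(roi)
        if s_layer < prefilter_thresh:
            # Cheap rejection: the expensive CNN is never run on this ROI.
            results.append((roi, s_layer, False))
            continue
        s_cnn = cnn_score(roi)
        fused = w_layered * s_layer + w_cnn * s_cnn  # score-level fusion
        results.append((roi, fused, fused > 0.5))
    return results

# Example with stand-in scorers.
print(classify_objects(["roi_a", "roi_b"],
                       layered_score=lambda r: 0.1 if r == "roi_a" else 0.8,
                       cnn_score=lambda r: 0.9))
```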
intelligent robots and systems | 2016
Eric Martinson; V. Yalla
Deep convolutional neural networks are being increasingly deployed for image classification tasks, as they can learn sensor and environmental independence from large quantities of training data. Most work, however, has focused on classifying uploaded photographs rather than the often occluded images, captured at arbitrary heights and camera angles, that are common in robotic applications. In this work, we examine the performance of the popular AlexNet architecture for detecting people in different robotic scenarios using different sensors and/or environments. Furthermore, we demonstrate how fusing this architecture with the depth-based layered detection system, a more traditional geometric feature-based classifier, leads to significant improvements in classification precision/recall, whether working with depth data alone or a combination of depth and RGB images.
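One common way to hand a single-channel depth map to an RGB-trained network such as AlexNet is to normalize it and replicate it into three channels; the encoding below (the function depth_to_cnn_input, the clipping range, and the input size) is an assumption for illustration, not necessarily what the paper uses.

```python
import numpy as np

def depth_to_cnn_input(depth_m, max_range=6.0, size=(227, 227)):
    """Clip and normalize a depth map, replicate it to three channels, and
    resize it (nearest-neighbour) to the network's input resolution."""
    d = np.clip(depth_m, 0.0, max_range) / max_range          # scale to [0, 1]
    d = (d * 255).astype(np.uint8)
    three = np.stack([d, d, d], axis=-1)                       # H x W x 3
    ys = np.linspace(0, d.shape[0] - 1, size[0]).astype(int)
    xs = np.linspace(0, d.shape[1] - 1, size[1]).astype(int)
    return three[np.ix_(ys, xs)]

# Example: a synthetic 240x320 depth image becomes a 227x227x3 CNN input.
print(depth_to_cnn_input(np.full((240, 320), 2.0)).shape)
```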
intelligent robots and systems | 2014
Eric Martinson
Blind or visually impaired people want to know more about things they hear in the world. They want to know what other people can "see". With its cameras, a robot can fill that role. But how can an individual make requests about arbitrary objects they can only hear? How can people make requests about objects for which they know neither the exact location nor any uniquely identifiable traits? This work describes a solution for querying the robot about unknown, audible objects in the surrounding environment through a combination of computational sound source localization and human input, including pointing gestures and spoken descriptors.
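A toy sketch of combining a sound-source-localization bearing with a pointing-gesture bearing to select a candidate object; the weighting scheme and the helpers fuse_bearings and pick_object are illustrative assumptions, not the paper's method.

```python
import math

def fuse_bearings(ssl_deg, ssl_conf, point_deg, point_conf):
    """Weighted circular mean of the auditory and pointing direction estimates."""
    x = ssl_conf * math.cos(math.radians(ssl_deg)) + \
        point_conf * math.cos(math.radians(point_deg))
    y = ssl_conf * math.sin(math.radians(ssl_deg)) + \
        point_conf * math.sin(math.radians(point_deg))
    return math.degrees(math.atan2(y, x)) % 360.0

def pick_object(candidates, fused_deg):
    """candidates: list of (name, bearing_deg) known to the robot's map."""
    def angular_gap(a, b):
        return abs((a - b + 180.0) % 360.0 - 180.0)
    return min(candidates, key=lambda c: angular_gap(c[1], fused_deg))

# Example: the sound is heard near 30 degrees, the user points toward 45.
fused = fuse_bearings(ssl_deg=30.0, ssl_conf=0.7, point_deg=45.0, point_conf=0.5)
print(pick_object([("radio", 40.0), ("fan", 120.0)], fused))  # -> ('radio', 40.0)
```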
robot and human interactive communication | 2016
Eric Martinson; A. Blasdel; E. Akin Sisbot
Comfortable object handover involves searching for a person, identifying them, and then actually handing them an object. It is a goal of many robotics applications, particularly in healthcare. But people can have a wide variety of physical, perceptual, or cognitive limitations. To address these variable patient conditions, an autonomous robot must adapt to these individual differences. In this work, we first interviewed nurses regarding common issues faced by healthcare providers. Then we designed a system that interfaces with an individual's electronic health record, and adapts its search and handover capabilities to improve the quality of object handover.
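A hypothetical sketch of adapting handover parameters to patient information; the record fields, parameter names, and thresholds below are invented for illustration and do not reflect any real electronic health record schema or the paper's system.

```python
# Hypothetical mapping from (made-up) patient-record flags to handover behavior.
def handover_profile(ehr):
    profile = {
        "approach_speed_mps": 0.5,
        "handover_height_m": 1.0,
        "announce_verbally": False,
    }
    if ehr.get("visual_impairment"):
        profile["announce_verbally"] = True      # say where the object is
        profile["approach_speed_mps"] = 0.3      # approach more gently
    if ehr.get("uses_wheelchair"):
        profile["handover_height_m"] = 0.7       # present the object lower
    if ehr.get("hearing_impairment"):
        profile["announce_verbally"] = False     # rely on visual cues instead
    return profile

print(handover_profile({"visual_impairment": True, "uses_wheelchair": True}))
```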
intelligent robots and systems | 2016
David Kim; Eric Martinson
Spatial affordance can be defined as the functionality a space, or place, lends to human activity. Different places afford different activity possibilities: sleeping is mostly done in the bedroom, and cooking is mostly done in the kitchen. Semantic place labels like kitchen and bedroom, therefore, provide context with which a robot can better infer human activity. Real rooms, however, often defy simple place labels, as they can be multi-purpose, supporting many different types of human activity. The solution is to identify the spatial affordances associated with the current nexus of human activity, a micro-level place labeling. In this paper, we demonstrate how to estimate these local spatial affordances by integrating a deep-learning-based place estimator with human pose estimation. The resulting affordances are then used to improve activity recognition using a Bayesian belief network.
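A toy example of the Bayesian step, combining a place-label distribution with pose-conditioned activity likelihoods; all probability tables below are made-up numbers for illustration, not values from the paper.

```python
PLACE_PRIOR = {"kitchen": 0.6, "bedroom": 0.4}          # P(place | image)
ACTIVITY_GIVEN_PLACE = {                                  # P(activity | place)
    "kitchen": {"cooking": 0.7, "sleeping": 0.05, "reading": 0.25},
    "bedroom": {"cooking": 0.05, "sleeping": 0.6, "reading": 0.35},
}

def activity_posterior(pose_likelihood):
    """pose_likelihood: P(observed pose | activity) from a pose estimator."""
    scores = {}
    for act, like in pose_likelihood.items():
        # Marginalize the place estimate into an activity prior, then weight
        # it by how well the observed pose matches each activity.
        prior = sum(PLACE_PRIOR[p] * ACTIVITY_GIVEN_PLACE[p][act]
                    for p in PLACE_PRIOR)
        scores[act] = like * prior
    total = sum(scores.values())
    return {a: s / total for a, s in scores.items()}

# Example: the detected pose looks most like someone standing at a counter.
print(activity_posterior({"cooking": 0.8, "sleeping": 0.05, "reading": 0.3}))
```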
Archive | 2016
Eric Martinson; Veeraganesh Yalla
Archive | 2014
Eric Martinson
Archive | 2015
Eric Martinson; Emrah Akin Sisbot; Joseph Djugash; Kentaro Oguchi; Yutaka Takaoka; Yusuke Nakano