Publications


Featured research published by Eric Meisner.


International Conference on Robotics and Automation | 2007

Triangulation Based Multi Target Tracking with Mobile Sensor Networks

Seema Kamath; Eric Meisner; Volkan Isler

We study the problem of designing motion-planning and sensor assignment strategies for tracking multiple targets with a mobile sensor network. We focus on triangulation based tracking where two sensors merge their measurements in order to estimate the position of a target. We present an iterative and distributed algorithm for the tracking problem. An iteration starts with an initialization phase where targets are assigned to sensor pairs. Afterwards, assigned sensors relocate to improve their estimates. We refer to the problem of computing new locations for sensors (for given target assignments) as one-step tracking. After observing that one-step tracking is computationally hard, we show how it can be formulated as an energy-minimization problem. This allows us to adapt well-studied distributed algorithms for energy minimization. We present simulations to compare the performance of two such algorithms and conclude the paper with a description of the full tracking strategy. The utility of the presented strategy is demonstrated with simulations and experiments on a sensor network platform.
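
The triangulation step at the heart of this approach, where two sensors merge bearing measurements into a position estimate, can be sketched as follows. This is a minimal 2D version with illustrative names; the paper's full algorithm additionally handles target-to-sensor-pair assignment and sensor relocation.

```python
import math

def triangulate(p1, theta1, p2, theta2):
    """Estimate a target's position from two bearing measurements.

    p1, p2: (x, y) sensor positions; theta1, theta2: bearing angles
    (radians, measured from the +x axis). Returns the intersection of
    the two bearing rays, or None if they are (near-)parallel.
    """
    # Ray i: p_i + t_i * (cos(theta_i), sin(theta_i))
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]   # 2D cross product of ray directions
    if abs(denom) < 1e-9:
        return None                          # degenerate geometry, no fix
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two sensors at (0, 0) and (6, 0) observe a target at (3, 4):
print(triangulate((0, 0), math.atan2(4, 3), (6, 0), math.atan2(4, -3)))
```

The degenerate case is exactly why sensor placement matters: when the two bearing rays are nearly parallel, the estimate becomes unstable, which motivates relocating sensors to improve estimation quality.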


International Conference on Information Processing in Computer-Assisted Interventions | 2011

Visual tracking of surgical tools for proximity detection in retinal surgery

Rogério Richa; Marcin Balicki; Eric Meisner; Raphael Sznitman; Russell H. Taylor; Gregory D. Hager

In retinal surgery, surgeons face difficulties such as indirect visualization of surgical targets, physiological tremor, and lack of tactile feedback. Such difficulties increase the risks of incorrect surgical gestures which may cause retinal damage. In this context, robotic assistance has the potential to overcome current technical limitations and increase surgical safety. In this paper we present a method for robustly tracking surgical tools in retinal surgery for detecting proximity between surgical tools and the retinal surface. An image similarity function based on weighted mutual information is specially tailored for tracking under critical illumination variations, lens distortions, and rapid motion. The proposed method was tested under challenging conditions using a phantom eye and recorded in vivo human data acquired with an ophthalmic stereo microscope.
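
The similarity measure at the core of such a tracker can be illustrated with plain (unweighted) mutual information between two grayscale patches; the paper's weighted variant, tailored to illumination variation, is not reproduced here, and all names below are illustrative.

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information between two equally sized grayscale patches.

    A similarity score in the spirit of the paper's tracking objective;
    the actual method uses a weighted variant whose weighting scheme is
    omitted in this sketch.
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of patch a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of patch b
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
patch = rng.random((32, 32))
noise = rng.random((32, 32))
# A patch scores higher against itself than against unrelated noise:
print(mutual_information(patch, patch) > mutual_information(patch, noise))
```

A tracker built on this score slides a candidate window over the image and keeps the location that maximizes similarity to the tool template.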


IEEE Transactions on Biomedical Engineering | 2012

Vision-Based Proximity Detection in Retinal Surgery

Rogério Richa; Marcin Balicki; Raphael Sznitman; Eric Meisner; Russell H. Taylor; Gregory D. Hager

In retinal surgery, surgeons face difficulties such as indirect visualization of surgical targets, physiological tremor, and lack of tactile feedback, which increase the risk of retinal damage caused by incorrect surgical gestures. In this context, intraocular proximity sensing has the potential to overcome current technical limitations and increase surgical safety. In this paper, we present a system for detecting unintentional collisions between surgical tools and the retina using the visual feedback provided by the ophthalmic stereo microscope. Using stereo images, proximity between surgical tools and the retinal surface can be detected when their relative stereo disparity is small. For this purpose, we developed a system comprising two modules. The first is a module for tracking the surgical tool position in both stereo images. The second is a disparity tracking module for estimating a stereo disparity map of the retinal surface. Both modules were specially tailored to cope with the challenging visualization conditions in retinal surgery. The potential clinical value of the proposed method is demonstrated by extensive testing using a silicone phantom eye and recorded in vivo rabbit data.
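
The disparity-based proximity test described above can be sketched as follows. This is a hypothetical minimal version: the real system tracks the tool in both stereo views and estimates the retinal disparity map from live video, whereas here both are given as inputs.

```python
import numpy as np

def proximity_alert(tool_xy, tool_disparity, retina_disparity_map, threshold=2.0):
    """Flag tool/retina proximity from stereo disparities.

    tool_xy is the (row, col) of the tracked tool tip in the left image;
    disparities are in pixels. When the tool's disparity approaches the
    disparity of the retinal surface beneath it, the two are close in
    depth and an alert is raised.
    """
    r, c = tool_xy
    surface_disparity = retina_disparity_map[r, c]
    # Smaller relative disparity -> smaller depth gap between tool and retina.
    return abs(tool_disparity - surface_disparity) < threshold

# Toy disparity map: retinal surface at ~10 px disparity everywhere.
retina = np.full((480, 640), 10.0)
print(proximity_alert((240, 320), 18.0, retina))  # tool well above the retina
print(proximity_alert((240, 320), 10.5, retina))  # tool near the surface
```

Working in disparity space avoids an explicit metric 3D reconstruction: only the relative disparity between tool and surface matters for the collision test.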


Autonomous Robots | 2008

Controller design for human-robot interaction

Eric Meisner; Volkan Isler; Jeffrey C. Trinkle

Many robotics tasks require a robot to share its workspace with humans. In such settings, it is important that the robot behave in a way that does not cause distress to the humans in the workspace. In this paper, we address the problem of designing robot controllers that minimize the stress caused by the robot while it performs a given task. We present a novel, data-driven algorithm that computes human-friendly trajectories. The algorithm utilizes biofeedback measurements and combines a set of geometric controllers to achieve human friendliness. We evaluate the comfort level of the human using a Galvanic Skin Response (GSR) sensor. We present results from a human tracking task in which the robot is required to stay within a specified distance of the human without causing high stress values.
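
As a rough illustration of how a biofeedback signal might arbitrate between behaviors, the sketch below blends a task controller with a retreat controller according to a normalized GSR reading. This is an assumption-laden simplification: the paper computes trajectories from recorded GSR data over combinations of geometric controllers rather than blending continuously, and the parameter names are invented for this example.

```python
def blend_controllers(task_cmd, retreat_cmd, gsr, gsr_low=1.0, gsr_high=5.0):
    """Blend a task controller with a human-friendly retreat controller.

    gsr_low/gsr_high bracket the observed Galvanic Skin Response range;
    higher readings shift control authority toward the retreat behavior.
    Commands are velocity tuples, combined convexly.
    """
    w = (gsr - gsr_low) / (gsr_high - gsr_low)
    w = min(1.0, max(0.0, w))   # clamp the stress weight to [0, 1]
    return tuple((1 - w) * t + w * r for t, r in zip(task_cmd, retreat_cmd))

print(blend_controllers((0.5, 0.0), (-0.2, 0.0), gsr=1.0))  # calm: pure task
print(blend_controllers((0.5, 0.0), (-0.2, 0.0), gsr=5.0))  # stressed: retreat
```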


Human-Robot Interaction | 2009

ShadowPlay: a generative model for nonverbal human-robot interaction

Eric Meisner; Selma Sabanovic; Volkan Isler; Linnda R. Caporael; Jeffrey C. Trinkle

Humans rely on a finely tuned ability to recognize and adapt to socially relevant patterns in their everyday face-to-face interactions. This allows them to anticipate the actions of others, coordinate their behaviors, and create shared meaning: to communicate. Social robots must likewise be able to recognize and perform relevant social patterns, including interactional synchrony, imitation, and particular sequences of behaviors. We use existing empirical work in the social sciences and observations of human interaction to develop nonverbal interactive capabilities for a robot in the context of shadow puppet play, where people interact through shadows of hands cast against a wall. We show how information theoretic quantities can be used to model interaction between humans and to generate interactive controllers for a robot. Finally, we evaluate the resulting model in an embodied human-robot interaction study. We show the benefit of modeling interaction as a joint process rather than modeling individual agents.


Laryngoscope | 2013

Anatomical reconstructions of pediatric airways from endoscopic images: A pilot study of the accuracy of quantitative endoscopy

Eric Meisner; Gregory D. Hager; Stacey L. Ishman; David A. Brown; David E. Tunkel; Masaru Ishii

To evaluate the accuracy of three-dimensional (3D) airway reconstructions obtained using quantitative endoscopy (QE). We developed this novel technique to reconstruct precise 3D representations of airway geometries from endoscopic video streams. This method, based on machine vision methodologies, uses a post-processing step on the standard videos obtained during routine laryngoscopy and bronchoscopy. We hypothesize that this method is precise and will generate assessments of airway size and shape similar to those obtained using computed tomography (CT).


Computer Vision and Pattern Recognition | 2012

Color-based hybrid reconstruction for endoscopy

Haluk N. Tokgozoglu; Eric Meisner; Michael M. Kazhdan; Gregory D. Hager

Three-dimensional (3D) reconstruction of images acquired during endoscopy presents an enormous opportunity for computer vision; however, reconstructing geometry from such images is challenging due to a lack of features. Shape From Shading (SFS) is an approach to obtain the shape of an object from a single image, but most current methods that are applicable to endoscopy are susceptible to errors caused by surfaces with differing reflectance characteristics. Another weakness of SFS is that while it captures high frequency detail, its shape is inaccurate in the low frequency sense, which makes it difficult to compare to ground truth. Multiview reconstruction (MR), on the other hand, yields reliable shape but lacks detail. In this paper, we propose a novel method to perform SFS using a color projection that minimizes intensity variance caused by differing surface characteristics. We then fuse the resulting reconstruction with a multiview reconstruction obtained from bundle adjustment, using an approach inspired by Laplacian surface editing, to get a reconstruction that is accurate both in detail and in overall shape. We compare our results to ground truth to show improvement over existing approaches.
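
The detail/shape split described above can be illustrated with a minimal frequency-blending sketch on 1D height profiles: keep the low-frequency component of the multiview result and add the high-frequency residual of the SFS result. The paper's actual fusion uses Laplacian surface editing on 3D surfaces, which this stand-in only approximates.

```python
import numpy as np

def blend_reconstructions(sfs, multiview, kernel=15):
    """Combine SFS detail with the gross shape of a multiview result.

    Inputs are 1D height profiles. A box filter separates each profile
    into low-frequency shape and high-frequency detail; the output takes
    shape from the multiview profile and detail from the SFS profile.
    """
    box = np.ones(kernel) / kernel
    low_mv = np.convolve(multiview, box, mode="same")   # reliable overall shape
    low_sfs = np.convolve(sfs, box, mode="same")
    detail = sfs - low_sfs                              # high-frequency detail
    return low_mv + detail

x = np.linspace(0, 2 * np.pi, 200)
true_surface = np.sin(x) + 0.05 * np.sin(40 * x)
sfs = 0.5 * np.sin(x) + 0.05 * np.sin(40 * x)   # wrong low freq, good detail
multiview = np.sin(x)                           # good shape, missing detail
fused = blend_reconstructions(sfs, multiview)
print(np.abs(fused - true_surface).mean() < np.abs(sfs - true_surface).mean())
```

On this toy example the fused profile recovers both the gross sinusoidal shape and the fine ripples, beating either input alone.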


International Conference on Robotics and Automation | 2010

Predictive State Representations for grounding human-robot communication

Eric Meisner; Sanmay Das; Volkan Isler; Jeffrey C. Trinkle; Selma Sabanovic; Linnda R. Caporael

Allowing robots to communicate naturally with humans is an important goal for social robotics. Most approaches have focused on building high-level probabilistic cognitive models. However, research in cognitive science shows that people often build common ground for communication with each other by seeking and providing evidence of understanding through behaviors like mimicry. Predictive State Representations (PSRs) allow one to build explicit, low-level models of the expected outcomes of actions, and are therefore well-suited for tasks that require providing such evidence of understanding. Using human-robot shadow puppetry as a prototype interaction study, we show that PSRs can be used successfully to both model human interactions, and to allow a robot to learn on-line how to engage a human in an interesting interaction.


International Workshop on the Algorithmic Foundations of Robotics | 2009

Probabilistic Network Formation through Coverage and Freeze-Tag

Eric Meisner; Wei Yang; Volkan Isler

We address the problem of propagating a piece of information among robots scattered in an environment. Initially, a single robot has the information. This robot searches for other robots to pass it along. When a robot is discovered, it can participate in the process by searching for other robots. Since our motivation for studying this problem is to form an ad-hoc network, we call it the Network Formation Problem. In this paper, we study the case where the environment is a rectangle and the robots’ locations are unknown but chosen uniformly at random. We present an efficient network formation algorithm, Stripes, and show that its expected performance is within a logarithmic factor of the optimal performance. We also compare Stripes with an intuitive network formation algorithm in simulations. The feasibility of Stripes is demonstrated with a proof-of-concept implementation.
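
The intuition behind a logarithmic bound is that recruited robots multiply the search effort. An idealized round count, ignoring the search time that dominates the real analysis and is not modeled here, looks like this:

```python
def rounds_to_inform(n):
    """Rounds for one informed robot to reach all n robots, in the
    idealized best case where every informed robot recruits exactly one
    uninformed robot per round (so the informed count doubles).
    """
    informed, rounds = 1, 0
    while informed < n:
        informed = min(n, informed * 2)
        rounds += 1
    return rounds

for n in (2, 16, 1000):
    print(n, rounds_to_inform(n))  # 1, 4, and 10 rounds respectively
```

The actual Stripes algorithm must also pay for locating robots at unknown positions, which is why its guarantee is stated as a logarithmic factor of the optimum rather than a plain logarithmic round count.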


National Conference on Artificial Intelligence | 2009

Outside-in Design for Interdisciplinary HRI Research

Selma Sabanovic; Eric Meisner; Linnda R. Caporael; Volkan Isler; Jeffrey C. Trinkle

Collaboration


Dive into Eric Meisner's collaborations.

Top Co-Authors

Volkan Isler
University of Minnesota

Jeffrey C. Trinkle
Rensselaer Polytechnic Institute

Selma Sabanovic
Indiana University Bloomington

Linnda R. Caporael
Rensselaer Polytechnic Institute