Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Martin Jagersand is active.

Publication


Featured research published by Martin Jagersand.


international conference on robotics and automation | 1997

Experimental evaluation of uncalibrated visual servoing for precision manipulation

Martin Jagersand; Olac Fuentes; Randal C. Nelson

We present an experimental evaluation of adaptive and non-adaptive visual servoing in 3, 6, and 12 degrees of freedom (DOF), comparing it to traditional joint feedback control. While the purpose of experiments in most other work has been to show that the particular algorithm presented indeed also works in practice, we do not focus on the algorithm but rather on properties important to visual servoing in general. Our main results are: positioning of a 6-axis PUMA 762 arm is up to 5 times more precise under visual control than under joint control; positioning of a Utah/MIT dextrous hand is better under visual control than under joint control by a factor of 2; and a trust-region-based adaptive visual feedback controller is very robust. For m tracked visual features, the algorithm can successfully estimate the m × 3 (m ≥ 3) image Jacobian J online without any prior information, while carrying out a 3 DOF manipulation task. For 6 and higher DOF manipulation, a rough initial estimate of J is beneficial. We also verified that redundant visual information is valuable: errors due to imprecise tracking and goal specification were reduced as the number of visual features, m, was increased. Furthermore, highly redundant systems allow us to detect outliers in the feature vector and deal with partial occlusion.
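
The online Jacobian estimation described above can be illustrated with a rank-one secant (Broyden-style) update, a standard choice in uncalibrated visual servoing. The sketch below is a minimal illustration of that family of updates, not the paper's exact trust-region estimator; all function names are hypothetical.

import numpy as np

def broyden_update(J, dq, ds, lam=0.9):
    """Rank-one secant update of the m x n image Jacobian J.
    dq: joint displacement (n,); ds: observed feature displacement (m,).
    lam in (0, 1] damps the correction. A hedged sketch, not the paper's
    trust-region estimator."""
    denom = dq @ dq
    if denom < 1e-12:          # ignore negligible motions
        return J
    return J + lam * np.outer(ds - J @ dq, dq) / denom

def servo_step(J, s, s_goal, gain=0.1):
    """One visual-servoing step: pseudo-inverse joint velocity command."""
    e = s_goal - s
    return gain * np.linalg.pinv(J) @ e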


international conference on computer vision | 1995

Saliency maps and attention selection in scale and spatial coordinates: an information theoretic approach

Martin Jagersand

Information measures with respect to the spatial locations and scales of objects in an image are important to image processing and interpretation. They allow us to focus attention on relevant data, saving effort and reducing false positives. In particular, the information content of a man-made scene is typically confined to a small set of scales. We devise a scale-space-based measure of image information: Kullback contrasts between successive resolution lengths give the differential information gain. Experiments show that this measure gives a clear indication of characteristic lengths in a variety of real-world images and is superior to power-spectrum-based measurements. Decomposing the expected information gain into spatial coordinates gives us a saliency map for use by an attention selector. We combine the scale and spatial decompositions into a single information measure, giving both the spatial extent and scale range of interest. The information measure has an efficient implementation, and thus can be used routinely in early vision processing.
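
As a rough sketch of the measure, a Kullback contrast between intensity distributions at successive Gaussian scales can be computed as below. The binning, the 8-bit intensity range, and the scale sampling are assumptions for illustration, not the paper's exact estimator.

import numpy as np
from scipy.ndimage import gaussian_filter

def scale_information_gain(img, sigmas, bins=64):
    """KL divergence between intensity histograms at successive Gaussian
    scales: a sketch of the differential information gain idea."""
    gains, prev = [], None
    for s in sigmas:
        blurred = gaussian_filter(img.astype(float), sigma=s)
        p, _ = np.histogram(blurred, bins=bins, range=(0, 255), density=True)
        p = p + 1e-12                  # avoid log(0)
        p /= p.sum()
        if prev is not None:
            gains.append(np.sum(prev * np.log(prev / p)))  # KL(prev || p)
        prev = p
    return gains   # peaks mark characteristic lengths in the image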


international conference on computer vision | 2007

3D Variational Brain Tumor Segmentation using a High Dimensional Feature Set

Dana Cobzas; Neil Birkbeck; Mark W. Schmidt; Martin Jagersand; Albert Murtha

Tumor segmentation from MRI data is an important but time-consuming task performed manually by medical experts. Automating this process is challenging due to the high diversity in appearance of tumor tissue among different patients and, in many cases, its similarity to normal tissue. Another challenge is how to make use of prior information about the appearance of the normal brain. In this paper we propose a variational brain tumor segmentation algorithm that extends current texture segmentation approaches by using a high-dimensional feature set calculated from MRI data and registered atlases. Using manually segmented data, we learn a statistical model for tumor and normal tissue. We show that using a conditional model to discriminate between normal and abnormal regions significantly improves the segmentation results compared to traditional generative models. Validation is performed by testing the method on several cancer patient MRI scans.
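
The discriminative (conditional) modeling step can be sketched as per-voxel classification over a stacked feature set. The features and the logistic-regression classifier below are illustrative stand-ins for the paper's much richer feature set and learned statistical model.

import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.linear_model import LogisticRegression

def voxel_features(mri, atlas_prior):
    """Stack a few per-voxel features: raw intensity, two smoothed
    intensities, and a registered-atlas tissue prior (all same shape)."""
    feats = [mri, gaussian_filter(mri, 1.0), gaussian_filter(mri, 3.0),
             atlas_prior]
    return np.stack([f.ravel() for f in feats], axis=1)

def train_and_predict(train_X, train_y, test_X):
    """Fit a conditional model on manually labeled voxels, then score new
    voxels; in the full method such scores feed the variational data term."""
    clf = LogisticRegression(max_iter=1000).fit(train_X, train_y)
    return clf.predict_proba(test_X)[:, 1]   # P(tumor | features)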


international conference on robotics and automation | 2010

Robust Jacobian estimation for uncalibrated visual servoing

Azad Shademan; Amir Massoud Farahmand; Martin Jagersand

This paper addresses robust estimation of the uncalibrated visual-motor Jacobian for an image-based visual servoing (IBVS) system. The proposed method does not require knowledge of model or system parameters and is robust to outliers caused by various visual tracking errors, such as occlusion or mis-tracking. Previous uncalibrated methods are not robust to outliers and assume that the visual-motor data belong to the underlying model. In unstructured environments, this assumption may not hold. Outliers to the visual-motor model may deteriorate the Jacobian estimate, which can make the system unstable or drive the arm in the wrong direction. We propose to apply a statistically robust M-estimator to reject the outliers. We compare the quality of the robust Jacobian estimation with least-squares-based estimation. The effect of outliers on the estimation quality is studied through MATLAB simulations and eye-in-hand visual servoing experiments using a WAM arm. Experimental results show that the Jacobian estimated by robust M-estimation remains reliable when up to 40% of the visual-motor data are outliers.
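
The robust estimation step can be sketched as iteratively reweighted least squares (IRLS) with Huber weights over stored visual-motor pairs; IRLS is one common way to realize an M-estimator, and the sketch below only approximates the paper's implementation.

import numpy as np

def huber_weights(r, k=1.345):
    """Huber M-estimator weights for scale-normalized residuals r."""
    a = np.abs(r)
    w = np.ones_like(a)
    w[a > k] = k / a[a > k]
    return w

def robust_jacobian(dQ, dS, iters=10):
    """Estimate J (m x n) from visual-motor pairs: rows of dQ are joint
    displacements (N x n), rows of dS are feature displacements (N x m).
    IRLS with Huber weights; a hedged sketch of robust M-estimation."""
    J = np.linalg.lstsq(dQ, dS, rcond=None)[0].T      # least-squares init
    for _ in range(iters):
        R = dS - dQ @ J.T                             # residuals, N x m
        s = 1.4826 * np.median(np.abs(R), axis=0) + 1e-12   # robust scale
        for i in range(dS.shape[1]):                  # one Jacobian row each
            w = huber_weights(R[:, i] / s[i])
            W = np.sqrt(w)[:, None]
            J[i] = np.linalg.lstsq(W * dQ, np.sqrt(w) * dS[:, i],
                                   rcond=None)[0]
    return J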


international symposium on computer vision | 1995

Visual space task specification, planning and control

Martin Jagersand; Randal C. Nelson

Robot manipulators, some thirty years after their commercial introduction, have found widespread application in structured industrial environments, performing, for instance, repetitive tasks on an assembly line. Successful application in unstructured environments, however, has proven much harder. Yet there are many such tasks where robots would be useful. We present a promising approach to visual (and more generally sensory) robot control that does not require modeling of robot transfer functions or the use of absolute world coordinate systems, and thus is suitable for use in unstructured environments. Our approach codes actions and tasks in terms of desired general perceptions rather than motor sequences. We argue that our vision space approach is particularly suited for easy teaching/programming of a robot; for instance, a task can be taught by supplying an image sequence illustrating it. The resulting robot behavior is robust to changes in the environment, dynamically adjusting the motor control rules in response to environmental variation.
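
The idea of coding a task as desired perceptions can be sketched as servoing through a sequence of feature goal vectors taken from a teaching image sequence. The observe() and move() interfaces below are hypothetical placeholders for the tracker and the arm controller.

import numpy as np

def run_visual_task(goals, observe, move, J, tol=2.0, gain=0.2,
                    max_steps=200):
    """Execute a task specified purely as a sequence of desired feature
    vectors (e.g. extracted from a teaching image sequence). observe()
    returns current features; move(dq) commands a joint displacement.
    A sketch of task specification in visual space, not the paper's code."""
    for s_goal in goals:
        for _ in range(max_steps):
            e = s_goal - observe()
            if np.linalg.norm(e) < tol:   # goal percept reached
                break
            move(gain * np.linalg.pinv(J) @ e)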


Medical Image Analysis | 2012

Tumor invasion margin on the Riemannian space of brain fibers

Parisa Mosayebi; Dana Cobzas; Albert Murtha; Martin Jagersand

Glioma is one of the most challenging types of brain tumors to treat or control locally. One of the main problems is to determine which areas of the apparently normal brain contain glioma cells, as gliomas are known to infiltrate several centimeters beyond the clinically apparent lesion visualized on standard Computed Tomography (CT) scans or Magnetic Resonance Images (MRIs). To ensure that radiation treatment encompasses the whole tumor, including the cancerous cells not revealed by MRI, doctors treat the volume of brain that extends 2 cm out from the margin of the visible tumor. This approach does not consider the varying tumor-growth dynamics in different brain tissues, and thus may kill some healthy cells while leaving cancerous cells alive in other areas. These cells may cause recurrence of the tumor later, which limits the effectiveness of the therapy. Knowing that glioma cells preferentially spread along nerve fibers, we propose the use of a geodesic distance on the Riemannian manifold of brain diffusion tensors to replace the Euclidean distance used in clinical practice and to correctly identify the tumor invasion margin. This mathematical model results in a first-order Partial Differential Equation (PDE) that can be numerically solved in a stable and consistent way. To compute the geodesic distance, we use actual Diffusion Weighted Imaging (DWI) data from 11 patients with glioma and compare our predicted infiltration distance map with actual growth in follow-up MRI scans. Results show improvement in predicting the invasion margin when using the geodesic distance as opposed to the conventional 2 cm Euclidean distance.
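
The geodesic distance under the tensor-induced metric can be approximated on the voxel grid with Dijkstra's algorithm, weighting each hop by its metric length. The paper instead solves an anisotropic eikonal PDE, so the sketch below (with an assumed array layout) only conveys the flavor of the computation.

import heapq
import numpy as np

def tensor_geodesic_distance(D_inv, seed, spacing=(1.0, 1.0, 1.0)):
    """Approximate geodesic distance from a seed voxel under the metric
    induced by inverse diffusion tensors D_inv (shape X x Y x Z x 3 x 3):
    a hop along offset e costs sqrt(e^T D_inv e). Graph approximation of
    the paper's PDE formulation; assumed layout, illustrative only."""
    shape = D_inv.shape[:3]
    dist = np.full(shape, np.inf)
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    offs = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue                      # stale heap entry
        for o in offs:
            u = (v[0] + o[0], v[1] + o[1], v[2] + o[2])
            if not all(0 <= u[i] < shape[i] for i in range(3)):
                continue
            e = np.array(o, float) * spacing
            step = np.sqrt(e @ D_inv[v] @ e)   # metric length of the hop
            if d + step < dist[u]:
                dist[u] = d + step
                heapq.heappush(heap, (d + step, u))
    return dist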


ieee virtual reality conference | 2003

Recent methods for image-based modeling and rendering

Darius Burschka; Gregory D. Hager; Zachary Dodds; Martin Jagersand; Dana Cobzas; Keith Yerex

A long-standing goal in image-based modeling and rendering is to capture a scene from camera images and construct a model sufficient to allow photo-realistic rendering of new views. With the confluence of computer graphics and vision, the combination of research on recovering geometric structure from uncalibrated cameras with modeling and rendering has yielded numerous new methods. Yet many challenging issues remain to be addressed before a sufficiently general and robust system could be built to (for instance) allow an average user to model their home and garden from camcorder video. This tutorial aims to give researchers and students in computer graphics a working knowledge of relevant theory and techniques, covering the steps from real-time vision for tracking and the capture of scene geometry and appearance, to the efficient representation and real-time rendering of image-based models. It also includes hands-on demos of real-time visual tracking, modeling and rendering systems.


european conference on computer vision | 2006

Variational shape and reflectance estimation under changing light and viewpoints

Neil Birkbeck; Dana Cobzas; Peter F. Sturm; Martin Jagersand

Fitting parameterized 3D shape and general reflectance models to 2D image data is challenging due to the high dimensionality of the problem. The proposed method combines the capabilities of classical and photometric stereo, allowing for accurate reconstruction of both textured and non-textured surfaces. In particular, we present a variational method implemented as a PDE-driven surface evolution interleaved with reflectance estimation. The surface is represented on an adaptive mesh allowing topological change. To provide the input data, we have designed a capture setup that simultaneously acquires both viewpoint and light variation while minimizing self-shadowing. Our capture method is feasible for real-world application as it requires a moderate amount of input data and processing time. In experiments, models of people and everyday objects were captured from a few dozen images taken with a consumer digital camera. The capture process recovers a photo-consistent model of spatially varying Lambertian and specular reflectance and a highly accurate geometry.
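
The reflectance half of the interleaved estimation can be sketched, for the Lambertian part, as a per-point least-squares fit of albedo across images taken under varying light. The specular term and the PDE-driven surface evolution are omitted; the interface below is an assumption for illustration.

import numpy as np

def fit_lambertian_albedo(I, L, n):
    """Per-point Lambertian fit from multi-light observations.
    I: (K,) observed intensities at one surface point; L: (K, 3) light
    directions; n: (3,) unit surface normal. Returns the albedo that best
    explains I = albedo * max(n . l, 0) in the least-squares sense.
    A sketch of one reflectance-estimation step, not the full method."""
    shading = np.clip(L @ n, 0.0, None)   # clamped Lambertian shading
    denom = shading @ shading
    return (shading @ I) / denom if denom > 0 else 0.0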


intelligent robots and systems | 2007

Global visual-motor estimation for uncalibrated visual servoing

Amir Massoud Farahmand; Azad Shademan; Martin Jagersand

In this paper, we present two methods for the estimation of a globally valid visual-motor model of a robotic manipulator. In conventional uncalibrated visual servoing, the visual-motor function is approximated locally with a Jacobian. However, for optimal task planning, or for nonlinear controller design with a global stability guarantee, one needs a model that provides some information about the behavior of the system over the whole workspace. Our methods remedy this drawback of uncalibrated visual servoing by incrementally building a global estimator based on the movement history. We implement two such methods. The first is a K-nearest-neighbor regressor over previously estimated local Jacobians. The second stores previous movements and computes an estimate of the Jacobian by solving a local least squares problem. Experimental results show that both methods provide better global estimation quality than the conventional local estimation method, with much lower estimation variance.
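
The second method (local least squares over the movement history) can be sketched as below; the memory layout and the neighborhood size k are assumptions made for illustration.

import numpy as np

def local_jacobian(q, memory, k=12):
    """Estimate the local visual-motor Jacobian at joint configuration q
    from a memory of past movements, stored as tuples (q_i, dq_i, ds_i):
    pick the k samples nearest to q in joint space and solve ds ~= J dq
    by least squares. A sketch of the idea, not the paper's exact code."""
    qs = np.array([m[0] for m in memory])
    idx = np.argsort(np.linalg.norm(qs - q, axis=1))[:k]
    dQ = np.array([memory[i][1] for i in idx])   # k x n joint moves
    dS = np.array([memory[i][2] for i in idx])   # k x m feature moves
    return np.linalg.lstsq(dQ, dS, rcond=None)[0].T   # m x n Jacobian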


international conference on robotics and automation | 2003

Image-based localization with depth-enhanced image map

Dana Cobzas; Hong Zhang; Martin Jagersand

In this paper, we present an incremental image-based robot localization algorithm which uses a panoramic image-based map enhanced with depth from a laser range finder. The image-based map (model) contains both intensity information and sparse 3D geometric features. By assuming motion continuity, a robot can use the depth information in the image model to project the relevant 3D model features of the environment, specifically vertical lines, into its camera coordinate frame. To determine its location, the robot first acquires an intensity image and then matches the 2D geometric features in the image with the projected model features. The first contribution of this research is that we avoid the difficult problem of full 3D reconstruction from images by employing a range sensor registered with respect to the intensity image sensor; second, we provide an algorithm that performs incremental robot localization using only 2D images. Experimental results in indoor map building and localization demonstrate the feasibility of our approach and evaluate the performance of the algorithm.
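
The matching of projected model features to image features can be sketched for vertical lines, which project to (near-)vertical image lines characterized by their column u. All interfaces below are hypothetical, and the paper's full pipeline also updates the robot pose from the matches.

import numpy as np

def match_vertical_lines(model_pts, detected_u, K, R, t, gate=15.0):
    """Project the 3D anchor points of model vertical lines (N x 3) into
    the current camera (intrinsics K, pose R, t) and match each to the
    nearest detected vertical line by image column. detected_u: array of
    detected line columns. Returns (model_idx, detected_idx) pairs within
    a gating threshold. A hedged sketch of the matching step only."""
    P = R @ model_pts.T + t[:, None]     # 3 x N points in camera frame
    uv = K @ P
    u_proj = uv[0] / uv[2]               # projected image columns
    matches = []
    for i, up in enumerate(u_proj):
        j = int(np.argmin(np.abs(detected_u - up)))
        if abs(detected_u[j] - up) < gate:
            matches.append((i, j))
    return matches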

Collaboration


Dive into Martin Jagersand's collaborations.
