
Publication


Featured research published by Maneesh Dewan.


computer vision and pattern recognition | 2004

Multiple kernel tracking with SSD

Gregory D. Hager; Maneesh Dewan; Charles V. Stewart

Kernel-based objective functions optimized using the mean shift algorithm have been demonstrated as an effective means of tracking in video sequences. The resulting algorithms combine the robustness and invariance properties afforded by traditional density-based measures of image similarity, while connecting these techniques to continuous optimization algorithms. This paper demonstrates a connection between kernel-based algorithms and more traditional template tracking methods. There is a well-known equivalence between the kernel-based objective function and an SSD-like measure on kernel-modulated histograms. It is shown that under suitable conditions, the SSD-like measure can be optimized using Newton-style iterations. This method of optimization is more efficient (requires fewer steps to converge) than mean shift and makes fewer assumptions on the form of the underlying kernel structure. In addition, the methods naturally extend to objective functions optimizing more elaborate parametric motion models based on multiple spatially distributed kernels. We demonstrate multi-kernel methods on a variety of examples ranging from tracking of unstructured objects in image sequences to stereo tracking of structured objects to compute full 3D spatial location.
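The kernel-modulated histogram and the SSD-like measure the abstract refers to can be sketched in a few lines. This is an illustrative reconstruction, not the paper's exact formulation: the Epanechnikov profile, the bin count, and the square-root form of the distance are common choices in this literature, and the function names are ours.

```python
import numpy as np

def kernel_weighted_histogram(patch, n_bins=16):
    """Intensity histogram of a patch, modulated by an Epanechnikov kernel
    centered on the patch, so pixels near the center contribute more.
    Assumes intensities are normalized to [0, 1]."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # normalized squared distance of each pixel from the patch center
    r2 = (((ys - (h - 1) / 2) / (h / 2)) ** 2 +
          ((xs - (w - 1) / 2) / (w / 2)) ** 2)
    k = np.maximum(1.0 - r2, 0.0)  # Epanechnikov profile: zero beyond radius 1
    bins = np.clip((patch * n_bins).astype(int), 0, n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=k.ravel(), minlength=n_bins)
    return hist / hist.sum()

def ssd_measure(hist_target, hist_candidate):
    """SSD-like distance between two kernel-modulated histograms,
    in the square-root (Matusita-style) form common in this literature."""
    return np.sum((np.sqrt(hist_target) - np.sqrt(hist_candidate)) ** 2)
```

A tracker built on this would move the candidate window to reduce `ssd_measure` against the target's histogram; the paper's contribution is that, under suitable conditions, that descent can be done with Newton-style iterations rather than mean shift.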


Robotics and Autonomous Systems | 2005

Navigating inner space: 3-D assistance for minimally invasive surgery

Darius Burschka; Jason J. Corso; Maneesh Dewan; William W. Lau; Ming Li; Henry C. Lin; Panadda Marayong; Nicholas A. Ramey; Gregory D. Hager; David Q. Larkin; Christopher J. Hasser

Since its inception about three decades ago, modern minimally invasive surgery has made huge advances in both technique and technology. However, the minimally invasive surgeon is still faced with daunting challenges in terms of visualization and hand-eye coordination. At the Center for Computer Integrated Surgical Systems and Technology (CISST) we have been developing a set of techniques for assisting surgeons in navigating and manipulating the three-dimensional space within the human body. In order to develop such systems, a variety of challenging visual tracking, reconstruction and registration problems must be solved. In addition, this information must be tied to methods for assistance that improve surgical accuracy and reliability but allow the surgeon to retain ultimate control of the procedure and do not prolong time in the operating room. In this article, we present two problem areas, eye microsurgery and thoracic minimally invasive surgery, where computational vision can play a role. We then describe methods we have developed to process video images for relevant geometric information, and related control algorithms for providing interactive assistance. Finally, we present results from implemented systems.


Medical Image Analysis | 2011

Segmentation of pulmonary nodules of various densities with morphological approaches and convexity models

Toshiro Kubota; Anna Jerebko; Maneesh Dewan; Marcos Salganicoff; Arun Krishnan

Accurate segmentation of a pulmonary nodule is an important and active area of research in medical image processing. Although many algorithms have been reported in the literature for this problem, those that are applicable to various density types have not been available until recently. In this paper, we propose a new algorithm that is applicable to solid, non-solid and part-solid types and solitary, vascularized, and juxtapleural types. First, the algorithm separates lung parenchyma and radiographically denser anatomical structures with coupled competition and diffusion processes. The technique tends to derive a spatially more homogeneous foreground map than an adaptive thresholding based method. Second, it locates the core of a nodule in a manner that is applicable to juxtapleural types using a transformation applied on the Euclidean distance transform of the foreground. Third, it detaches the nodule from attached structures by a region growing on the Euclidean distance map followed by a procedure to delineate the surface of the nodule based on the patterns of the region growing and distance maps. Finally, the convex hull of the nodule surface intersected with the foreground constitutes the final segmentation. The performance of the technique is evaluated with two Lung Imaging Database Consortium (LIDC) data sets with 23 and 82 nodules, respectively, and another data set with 820 nodules with manual diameter measurements. The experiments show that the algorithm is highly reliable in segmenting nodules of various types in a computationally efficient manner.
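The core-localization step rests on the Euclidean distance transform of the foreground map: the blob's core sits at the peak of that transform. A minimal sketch, assuming a 2-D binary foreground for clarity (the paper works in 3-D, applies a further transformation for juxtapleural cases, and would use an efficient EDT such as `scipy.ndimage.distance_transform_edt` rather than this naive version; function names are ours):

```python
import numpy as np

def euclidean_distance_map(foreground):
    """Brute-force Euclidean distance transform: for each foreground pixel,
    the distance to the nearest background pixel. Naive O(n*m) version,
    kept dependency-free for illustration."""
    fg = np.asarray(foreground, dtype=bool)
    bg_coords = np.argwhere(~fg)
    dist = np.zeros(fg.shape)
    for y, x in np.argwhere(fg):
        d2 = (bg_coords[:, 0] - y) ** 2 + (bg_coords[:, 1] - x) ** 2
        dist[y, x] = np.sqrt(d2.min())
    return dist

def nodule_core(foreground):
    """Locate a blob's core as the peak of its distance transform,
    illustrating the second step of the pipeline."""
    dist = euclidean_distance_map(foreground)
    return np.unravel_index(np.argmax(dist), dist.shape)
```

The subsequent region growing in the paper then expands outward from this core on the distance map to detach the nodule from attached vessels or pleura.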


intelligent robots and systems | 2007

Kernel-based visual servoing

Vinutha Kallem; Maneesh Dewan; John P. Swensen; Gregory D. Hager; Noah J. Cowan

Traditionally, visual servoing is separated into tracking and control subsystems. This separation, though convenient, is not necessarily well justified. When tracking and control strategies are designed independently, it is not clear how to optimize them to achieve a certain task. In this work, we propose a framework in which spatial sampling kernels - borrowed from the tracking and registration literature - are used to design feedback controllers for visual servoing. The use of spatial sampling kernels provides natural hooks for Lyapunov theory, thus unifying tracking and control and providing a framework for optimizing a particular servoing task. As a first step, we develop kernel-based visual servos for a subset of relative motions between camera and target scene. The subset of motions we consider are 2D translation, scale, and roll of the target relative to the camera. Our approach provides formal guarantees on the convergence/stability of visual servoing algorithms under putatively generic conditions.


computer vision and pattern recognition | 2011

Sparse shape composition: A new framework for shape prior modeling

Shaoting Zhang; Yiqiang Zhan; Maneesh Dewan; Junzhou Huang; Dimitris N. Metaxas; Xiang Sean Zhou

Image appearance cues are often used to derive object shapes, which is usually one of the key steps of image understanding tasks. However, when image appearance cues are weak or misleading, shape priors become critical to infer and refine the shape derived by these appearance cues. Effective modeling of shape priors is challenging because: 1) shape variation is complex and cannot always be modeled by a parametric probability distribution; 2) a shape instance derived from image appearance cues (input shape) may have gross errors; and 3) local details of the input shape are difficult to preserve if they are not statistically significant in the training data. In this paper we propose a novel Sparse Shape Composition model (SSC) to deal with these three challenges in a unified framework. In our method, training shapes are adaptively composed to infer/refine an input shape. The a-priori information is thus implicitly incorporated on-the-fly. Our model leverages two sparsity observations of the input shape instance: 1) the input shape can be approximately represented by a sparse linear combination of training shapes; 2) parts of the input shape may contain gross errors but such errors are usually sparse. Using L1 norm relaxation, our model is formulated as a convex optimization problem, which is solved by an efficient alternating minimization framework. Our method is extensively validated on two real world medical applications, 2D lung localization in X-ray images and 3D liver segmentation in low-dose CT scans. Compared to state-of-the-art methods, our model exhibits better performance in both studies.
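The sparse linear combination at the heart of SSC is a lasso-type problem: represent the input shape `y` as `D x` where `D` stacks training shapes column-wise and `x` is sparse. A minimal ISTA (iterative soft-thresholding) solver sketches this core; the paper's full model additionally includes a sparse gross-error term and shape pre-alignment, both omitted here, and the parameter values are illustrative:

```python
import numpy as np

def ista_lasso(D, y, lam=0.1, n_iter=500):
    """Minimize 0.5*||D x - y||^2 + lam*||x||_1 by iterative
    soft-thresholding. D: (n_points, n_training_shapes) dictionary of
    training shapes; y: input shape vector; returns sparse coefficients."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ x - y)              # gradient of the quadratic term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

The refined shape is then `D @ x`: a composition of only the few training shapes with nonzero coefficients, which is what makes the prior adaptive rather than a fixed parametric distribution.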


medical image computing and computer-assisted intervention | 2011

Deformable segmentation via sparse shape representation

Shaoting Zhang; Yiqiang Zhan; Maneesh Dewan; Junzhou Huang; Dimitris N. Metaxas; Xiang Sean Zhou

Appearance and shape are two key elements exploited in medical image segmentation. However, in some medical image analysis tasks, appearance cues are weak/misleading due to disease/artifacts and often lead to erroneous segmentation. In this paper, a novel deformable model is proposed for robust segmentation in the presence of weak/misleading appearance cues. Owing to the less trustable appearance information, this method focuses on effective shape modeling with two contributions. First, a shape composition method is designed to incorporate shape prior on-the-fly. Based on two sparsity observations, this method is robust to false appearance information and adaptive to statistically insignificant shape modes. Second, shape priors are modeled and used in a hierarchical fashion. More specifically, by using the affinity propagation method, our deformable surface is divided into multiple partitions, on which local shape models are built independently. This scheme facilitates a more compact shape prior modeling and hence a more robust and efficient segmentation. Our deformable model is applied on two very diverse segmentation problems, liver segmentation in PET-CT images and rodent brain segmentation in MR images. Compared to state-of-the-art methods, our method achieves better performance in both studies.


medical image computing and computer assisted intervention | 2004

Vision-Based Assistance for Ophthalmic Micro-Surgery

Maneesh Dewan; Panadda Marayong; Allison M. Okamura; Gregory D. Hager

This paper details the development and preliminary testing of a system for 6-DOF human-machine cooperative motion using vision-based virtual fixtures for applications in retinal micro-surgery. The system makes use of a calibrated stereo imaging system to track surfaces in the environment, and simultaneously tracks a tool held by the JHU Steady-Hand Robot. As the robot is guided using force inputs from the user, a relative error between the estimated surface and the tool position is established. This error is used to generate an anisotropic stiffness matrix that in turn guides the user along the surface in both position and orientation. Preliminary results show the effectiveness of the system in guiding a user along the surface and performing different sub-tasks such as tool alignment and targeting within the resolution of the visual system. The accuracy of surface reconstruction and tool tracking obtained from stereo imaging was validated through comparison with measurements made by an infrared optical position tracking system.
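The anisotropic-stiffness idea can be sketched as a simple admittance law: map the user's force to tool velocity with a high gain along the surface tangent and a low gain across it, so motion is funneled along the surface. This is a 2-D illustration under assumed gains, not the paper's actual 6-DOF formulation:

```python
import numpy as np

def fixture_velocity(force, tangent, k_along=1.0, k_across=0.1):
    """Anisotropic admittance for a virtual fixture (sketch):
    decompose the user's force into components along and across the
    surface tangent, then scale each component by a different gain.
    k_along >> k_across means the tool moves freely along the surface
    but resists motion away from it."""
    t = tangent / np.linalg.norm(tangent)
    f_along = np.dot(force, t) * t      # component along the fixture
    f_across = force - f_along          # component across the fixture
    return k_along * f_along + k_across * f_across
```

In the paper, the equivalent of `tangent` comes from the stereo-reconstructed surface, and the gains form the anisotropic stiffness matrix driven by the tool-to-surface error.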


medical image computing and computer assisted intervention | 2009

Multi-level Ground Glass Nodule Detection and Segmentation in CT Lung Images

Yimo Tao; Le Lu; Maneesh Dewan; Albert Y. C. Chen; Jason J. Corso; Jianhua Xuan; Marcos Salganicoff; Arun Krishnan

Early detection of Ground Glass Nodule (GGN) in lung Computed Tomography (CT) images is important for lung cancer prognosis. Due to its indistinct boundaries, manual detection and segmentation of GGN is labor-intensive and problematic. In this paper, we propose a novel multi-level learning-based framework for automatic detection and segmentation of GGN in lung CT images. Our main contributions are: firstly, a multi-level statistical learning-based approach that seamlessly integrates segmentation and detection to improve the overall accuracy for GGN detection (in a subvolume). The classification is done at two levels, both voxel-level and object-level. The algorithm starts with a three-phase voxel-level classification step, using volumetric features computed per voxel to generate a GGN class-conditional probability map. GGN candidates are then extracted from this probability map by integrating prior knowledge of shape and location, and the GGN object-level classifier is used to determine the occurrence of the GGN. Secondly, an extensive set of volumetric features is used to capture the GGN appearance. Finally, to the best of our knowledge, the GGN dataset used for experiments is an order of magnitude larger than in previous work. The effectiveness of our method is demonstrated on a dataset of 1100 subvolumes (100 containing GGNs) extracted from about 200 subjects.


IEEE Transactions on Medical Imaging | 2011

Robust Automatic Knee MR Slice Positioning Through Redundant and Hierarchical Anatomy Detection

Yiqiang Zhan; Maneesh Dewan; Martin Harder; Arun Krishnan; Xiang Sean Zhou

Diagnostic magnetic resonance (MR) image quality is highly dependent on the position and orientation of the slice groups, due to the intrinsic high in-slice and low through-slice resolutions of MR imaging. Hence, the higher speed, accuracy, and reproducibility of automatic slice positioning make it highly desirable over manual slice positioning. However, imaging artifacts, diseases, joint articulation, variations across ages and demographics, as well as the extremely high performance requirements, prevent state-of-the-art methods, such as volumetric registration, from being an off-the-shelf solution. In this paper, we address all these issues through an automatic slice positioning framework based on redundant and hierarchical learning. Our method has two hallmarks that are specifically designed to achieve high robustness and accuracy. 1) A redundant set of anatomy detectors are learned to provide local appearance cues. These detections are pruned and assembled according to a distributed anatomy model, which captures group-wise spatial configurations among anatomy primitives. This strategy brings about a high level of robustness and works even if a large portion of the target is distorted, missing, or occluded. 2) The detectors are learned and invoked in a hierarchical fashion, with each local detection scheduled and iterated according to its intrinsic invariance property. This iterative alignment process is shown to dramatically improve alignment accuracy. The proposed system is extensively validated on a large dataset including 744 clinical MR scans. Compared to state-of-the-art methods, our method exhibits superior performance in terms of robustness, accuracy, and reproducibility. The methodology is general and can be applied to other anatomies and other imaging modalities.


computer vision and pattern recognition | 2006

Toward Optimal Kernel-based Tracking

Maneesh Dewan

The design and development of methods for tracking targets in visual images has developed rapidly in the past decade. However, in practice the design of tracking algorithms is still largely ad hoc, based on trial and error. As a result, the performance of such algorithms can vary widely based on the properties of the target of interest and the choice of design. The use of spatial sampling kernels on multiple feature spaces has recently emerged as a promising approach to visual target tracking. In particular, it is possible to show that most popular tracking algorithms can be expressed within this framework. As a result, sampling kernels can be viewed as a flexible design space for tracking algorithms. However, in the current approaches, the kernels are placed in an ad hoc fashion at the center of the target with a scale equal to the size of the target. This can lead to sub-optimal tracking results. In this paper, we present results pointing toward the design of optimal and approximately optimal target-specific tracking algorithms. The target tracking problem is formulated in terms of an optimization over a family of kernel-based sampling functions. This optimization is solved to produce an optimal target-specific kernel configuration. Experimental results show greatly improved performance over classical template tracking and naive kernel-based tracking.

Collaboration


Dive into Maneesh Dewan's collaborations.

Top Co-Authors


Shaoting Zhang

University of North Carolina at Charlotte
