Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Abhishek Kolagunda is active.

Publication


Featured research published by Abhishek Kolagunda.


International Symposium on Visual Computing | 2013

Improving Image-Based Localization through Increasing Correct Feature Correspondences

Guoyu Lu; Vincent Ly; Haoquan Shen; Abhishek Kolagunda; Chandra Kambhamettu

Image-based localization provides contextual information based on a query image. Current state-of-the-art methods use a 3D Structure-from-Motion reconstruction model to aid in localizing the query image, either by 2D-to-3D matching or by 3D-to-2D matching. By adding camera pose estimation, the system can localize the image more accurately. However, incorrect feature correspondences between the 2D image and the 3D reconstruction remain the main reason for failures in image localization. In this paper, we introduce a new method that adds feature embedding to reduce incorrect feature correspondences. We perform query expansion to add correspondences whose associated 3D points have a high probability of being observed by the same camera as the seed set. Using the described techniques, registration accuracy can be significantly improved. Experiments on several large image datasets show that our method outperforms most state-of-the-art methods.
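The correspondence-filtering idea can be illustrated with a minimal sketch in plain NumPy. All names here are hypothetical, and the paper's feature-embedding and query-expansion steps are not reproduced; the sketch only shows the standard approach of matching query-image descriptors to SfM model-point descriptors and rejecting ambiguous matches with a distance-ratio test.

```python
import numpy as np

def match_2d_to_3d(query_desc, model_desc, ratio=0.8):
    """Match 2D query-image descriptors to 3D model-point descriptors.

    A match is kept only when the nearest model descriptor is clearly
    closer than the second nearest (distance-ratio test), a common way
    to suppress incorrect 2D-to-3D correspondences. Hypothetical sketch,
    not the paper's feature-embedding method.
    """
    matches = []
    for i, d in enumerate(query_desc):
        dists = np.linalg.norm(model_desc - d, axis=1)
        j, k = np.argsort(dists)[:2]          # nearest and second nearest
        if dists[j] < ratio * dists[k]:       # reject ambiguous matches
            matches.append((i, j))
    return matches
```

In a full pipeline, the surviving correspondences would then feed a PnP-style camera pose estimation step.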


Conference on Multimedia Modeling | 2016

A Fast 3D Indoor-Localization Approach Based on Video Queries

Guoyu Lu; Yan Yan; Abhishek Kolagunda; Chandra Kambhamettu

Image-based localization systems are typically used in outdoor environments, as high localization accuracy can be achieved even near tall buildings where the GPS signal is weak. A weak GPS signal is also a critical issue in indoor environments. In this paper, we introduce a novel, fast 3D Structure-from-Motion (SfM) model-based indoor localization approach that uses videos as queries. Unlike in an outdoor environment, the images captured indoors usually contain people, which often leads to incorrect camera pose estimation. In our approach, we segment out people in the videos by means of an optical flow technique; in this way, we filter people out of the video and complete the captured video for localization. A graph-matching-based verification is adopted to increase both the number of registered images and the accuracy of camera pose estimation. Furthermore, we propose an initial correspondence selection method based on a local feature ratio test instead of traditional RANSAC, which leads to much faster image registration. Extensive experimental results show that our proposed approach has multiple advantages over existing indoor localization systems in terms of accuracy and speed.
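The flow-based person filtering can be sketched roughly as follows: pixels whose optical flow deviates strongly from the dominant (camera-induced) motion are flagged as moving objects. This is a simplified stand-in with hypothetical names and thresholds, not the paper's actual segmentation method.

```python
import numpy as np

def moving_object_mask(flow, k=2.0):
    """Flag pixels whose optical flow deviates from the dominant motion.

    `flow` is an H x W x 2 array of per-pixel flow vectors. The median
    flow approximates camera-induced motion; pixels deviating by more
    than `k` times the median deviation are treated as moving objects
    (e.g. people) to be filtered out. Hypothetical sketch.
    """
    dominant = np.median(flow.reshape(-1, 2), axis=0)   # global motion estimate
    dev = np.linalg.norm(flow - dominant, axis=2)       # per-pixel deviation
    thresh = k * (np.median(dev) + 1e-6)
    return dev > thresh
```

The resulting mask could be used to exclude those pixels from feature extraction before registration against the SfM model.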


Computer Vision and Pattern Recognition | 2015

Material classification with thermal imagery

Philip Saponaro; Scott Sorensen; Abhishek Kolagunda; Chandra Kambhamettu

Material classification is an important area of research in computer vision. Typical algorithms use color and texture information for classification, but they struggle with varying lighting conditions and the diversity of colors within a single material class. In this work, we study the use of long-wave infrared (i.e., thermal) imagery for material classification. Thermal imagery has the benefit of relative invariance to color changes and invariance to lighting conditions, and it can even work in the dark. We collect a database of 21 different material classes with both color and thermal imagery. We develop a set of features that describe water permeation and heating/cooling properties, and test several variations of these methods to obtain our final classifier. The results show that the proposed method outperforms typical color and texture features, and that combining it with color information improves the results further.


Microscopy Research and Technique | 2018

Semiautomated confocal imaging of fungal pathogenesis on plants: Microscopic analysis of macroscopic specimens

Katharine R. Minker; Meredith L. Biedrzycki; Abhishek Kolagunda; Stephen Rhein; Fabiano J. Perina; Samuel S. Jacobs; Michael T. Moore; Tiffany M. Jamann; Qin Yang; Rebecca J. Nelson; Peter J. Balint-Kurti; Chandra Kambhamettu; Randall J. Wisser; Jeffrey L. Caplan

The study of phenotypic variation in plant pathogenesis provides fundamental information about the nature of disease resistance. Cellular mechanisms that alter pathogenesis can be elucidated with confocal microscopy; however, systematic phenotyping platforms—from sample processing to image analysis—to investigate this do not exist. We have developed a platform for 3D phenotyping of cellular features underlying variation in disease development by fluorescence‐specific resolution of host and pathogen interactions across time (4D). A confocal microscopy phenotyping platform compatible with different maize–fungal pathosystems (fungi: Setosphaeria turcica, Cochliobolus heterostrophus, and Cercospora zeae‐maydis) was developed. Protocols and techniques were standardized for sample fixation, optical clearing, species‐specific combinatorial fluorescence staining, multisample imaging, and image processing for investigation at the macroscale. The sample preparation methods presented here overcome challenges to fluorescence imaging such as specimen thickness and topography as well as physiological characteristics of the samples such as tissue autofluorescence and presence of cuticle. The resulting imaging techniques provide interesting qualitative and quantitative information not possible with conventional light or electron 2D imaging. Microsc. Res. Tech., 81:141–152, 2018.


International Conference on Image Processing | 2015

Refractive stereo ray tracing for reconstructing underwater structures

Scott Sorensen; Abhishek Kolagunda; Philip Saponaro; Chandra Kambhamettu

Underwater objects behind a refractive surface pose problems for traditional 3D reconstruction techniques. Scenes where underwater objects are visible from the surface are commonplace; however, the refraction of light causes 3D points in these scenes to project non-linearly. Refractive stereo ray tracing allows for accurate reconstruction by modeling the refraction of light. Our approach uses techniques from ray tracing to compute the 3D position of points behind a refractive surface. This technique aims to reconstruct underwater structures in situations where access to the water is dangerous or cost-prohibitive. Experimental results on real and synthetic scenes show that this technique effectively handles refraction.
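The core building block of any refractive ray tracer is bending a ray at the air-water interface with Snell's law. The sketch below shows that single step in plain NumPy; the function name and default indices of refraction (air to water) are assumptions, and the paper's full stereo triangulation through the refracted rays is not reproduced.

```python
import numpy as np

def refract(d, n, n1=1.0, n2=1.33):
    """Refract direction `d` through a surface with unit normal `n`.

    `n` points toward the incoming ray; `n1`/`n2` are the indices of
    refraction before/after the interface (defaults: air -> water).
    Returns the unit transmitted direction per Snell's law, or None on
    total internal reflection. Minimal sketch of one ray-tracing step.
    """
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    cos_i = -np.dot(n, d)
    r = n1 / n2
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:          # total internal reflection: no transmitted ray
        return None
    return r * d + (r * cos_i - np.sqrt(1.0 - sin2_t)) * n
```

In a stereo setting, each camera's ray would be refracted this way at the water surface, and the underwater 3D point recovered by intersecting (or least-squares triangulating) the two refracted rays.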


Conference on Multimedia Modeling | 2017

A Virtual Reality Framework for Multimodal Imagery for Vessels in Polar Regions

Scott Sorensen; Abhishek Kolagunda; Andrew R. Mahoney; Daniel Zitterbart; Chandra Kambhamettu

Maintaining total awareness when maneuvering an ice-breaking vessel is key to its safe operation. Camera systems are commonly used to augment the capabilities of those piloting the vessel, but these systems are rarely used beyond simple video feeds. To aid visualization for decision making and operation, we present a scheme for combining multiple modalities of imagery into a cohesive Virtual Reality application that provides the user with an immersive, real-scale view of conditions around a research vessel operating in polar waters. The system incorporates imagery from a 360° long-wave infrared camera as well as an optical-band stereo camera system. The Virtual Reality application allows the operator multiple natural ways of interacting with and observing the data, and provides a framework for further inputs and derived observations.


International Conference on Acoustics, Speech, and Signal Processing | 2016

Neural network shape: Organ shape representation with radial basis function neural networks

Guoyu Lu; Li Ren; Abhishek Kolagunda; Chandra Kambhamettu

We propose to represent the shape of an organ using a neural network classifier. The shape is represented by a function learned by a neural network, with a Radial Basis Function (RBF) used as the activation function for each perceptron. The learned implicit function is a combination of radial basis functions, which can represent complex shapes. The organ shape representation is learned using classification methods. Our test results show that the neural network shape provides the best representation accuracy. The use of RBFs provides a rotation-, translation-, and scaling-invariant feature for representing the shape. Experiments show that our method can accurately represent organ shapes.
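The implicit-function idea can be illustrated with a minimal sketch: a shape is encoded as a weighted sum of Gaussian RBFs, and a point is judged inside or outside by thresholding the function value. The centers, weights, and width below are hypothetical placeholders; in the paper they are learned by a neural network classifier, which is omitted here.

```python
import numpy as np

def rbf_implicit(centers, weights, sigma):
    """Build an implicit shape function from Gaussian radial basis functions.

    f(x) = sum_i w_i * exp(-||x - c_i||^2 / (2 * sigma^2)).
    Points where f exceeds a chosen level are "inside" the shape.
    Minimal sketch; the paper learns the parameters via classification.
    """
    def f(x):
        d2 = np.sum((centers - x) ** 2, axis=1)       # squared distances to centers
        return np.dot(weights, np.exp(-d2 / (2 * sigma ** 2)))
    return f
```

Because f depends only on distances to the centers, rigidly rotating or translating the point set and centers together leaves the function values unchanged, which is the invariance the abstract refers to.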


Journal of Visual Communication and Image Representation | 2016

Representing 3D shapes based on implicit surface functions learned from RBF neural networks

Guoyu Lu; Li Ren; Abhishek Kolagunda; Xiaolong Wang; Ismail Baris Turkbey; Peter L. Choyke; Chandra Kambhamettu

Highlights: We propose a 3D shape representation method based on a neural network classifier. The combination of radial basis functions can implicitly represent complex shapes. The use of a neural network can represent the shape with three classes of points. We conduct extensive experiments on medical and non-medical data. Our method can accurately and memory-efficiently represent shapes. We introduce a new prostate dataset.

We propose to represent the shape of 3D objects using a neural network classifier. The 3D shape is learned from a neural network, where a Radial Basis Function (RBF) is applied as the activation function for each perceptron. The implicit function derived from the neural network is a combination of radial basis functions, which can represent complex shapes. The use of RBFs provides a rotation-, translation-, and scaling-invariant feature to represent the shape. We conduct experiments on a new prostate dataset and on public datasets. Our test results show that our neural network-based method can accurately represent various shapes.


Bioinformatics and Biomedicine | 2016

Detection of fungal spores in 3D microscopy images of macroscopic areas of host tissue

Abhishek Kolagunda; Randall J. Wisser; Timothy Chaya; Jeffrey L. Caplan; Chandra Kambhamettu

The measurement of variation in characteristics of an organism is referred to as phenotyping, and using image data to extract phenotypes is a rapidly developing area in biological research. For studying host-pathogen interactions, 3D microscopy data can provide useful information about mechanisms of infection and defense. Performing research on a fungal pathogen of a plant, we recently developed methods to image and combine multiple fields of view of microscopy data across a macroscopic scale. This study was focused on using macroscopic microscopy data to digitally extract the top epidermal cell layer of plant leaves and to count the number of fungal spores on the epidermis. This was achieved using an active surface approach to estimate the 3D position of the epidermis and a shape-template matching approach to detect spores. A compact shape representation is proposed to model spore shapes and generate candidate templates for detecting spores. Our experiments show results that indicate strong promise for the proposed approach in studying plant-fungal interactions.
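Shape-template matching of the kind described above is often implemented as normalized cross-correlation between a template and local image windows. The sketch below is a simplified 2D stand-in (the paper works on 3D microscopy volumes with learned spore-shape templates); all names and the threshold are hypothetical.

```python
import numpy as np

def ncc_match(image, template, thresh=0.9):
    """Slide a template over a 2D image and return high-correlation positions.

    Computes normalized cross-correlation (zero-mean, unit-norm) between
    the template and every same-sized window, returning (row, col) of
    windows scoring above `thresh`. Simplified 2D stand-in for the
    paper's 3D shape-template spore detection.
    """
    th, tw = template.shape
    t = (template - template.mean()).ravel()
    tn = np.linalg.norm(t)
    hits = []
    H, W = image.shape
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            w = image[y:y + th, x:x + tw]
            wc = (w - w.mean()).ravel()
            denom = np.linalg.norm(wc) * tn
            if denom > 1e-12 and np.dot(wc, t) / denom > thresh:
                hits.append((y, x))
    return hits
```

Detected positions would then be counted (after non-maximum suppression in a real pipeline) to quantify spores on the extracted epidermal layer.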


British Machine Vision Conference | 2015

Hierarchical Hybrid Shape Representation for Medical Shapes.

Abhishek Kolagunda; Guoyu Lu; Chandra Kambhamettu

Recently, shape analysis has become of increasing interest in the medical community due to its potential to capture the morphological variation across a population. The high-quality 3D images captured can be used to extract the 3D shape of organs. 3D models of organs can also be used for training personnel, for visualization during image-guided interventions, and in simulations. A compact shape model that has both implicit and explicit forms will aid some of these medical use cases. We propose a compact hybrid shape model as a combination of Extended Superquadrics (ESQ) [1] and a radial basis interpolation function (RBF). The hybrid shape model in its parametric form is given as f(θ, φ) = h(θ, φ) + g(θ, φ), where h is the extended superquadric function and g is the radial basis interpolation function. The points on the surface of the shape are given by

Collaboration


Dive into Abhishek Kolagunda's collaborations.

Top Co-Authors

Guoyu Lu

University of Delaware


Baris Turkbey

National Institutes of Health


Peter A. Pinto

National Institutes of Health


Peter L. Choyke

National Institutes of Health


Sherif Mehralivand

National Institutes of Health
