Anup Basu
University of Alberta
Publication
Featured research published by Anup Basu.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1994
Don Murray; Anup Basu
This paper describes a method for real-time motion detection using an active camera mounted on a pan/tilt platform. Image mapping is used to align images from different viewpoints so that static-camera motion detection can be applied. In the presence of camera position noise, the image mapping is inexact and compensation techniques fail. The use of morphological filtering of motion images is explored to desensitize the detection algorithm to inaccuracies in background compensation. Two motion detection techniques are examined, and experiments to verify the methods are presented. The system successfully extracts moving edges from dynamic images even when the pan/tilt angles between successive frames are as large as 3°.
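The align-then-difference idea in this abstract can be sketched in a few lines. The snippet below is an illustrative OpenCV sketch, not the authors' implementation: the homography, threshold, and kernel size are assumptions, and the paper ultimately extracts moving edges rather than binary motion regions.

```python
# Rough sketch: background compensation by image warping, frame differencing,
# and morphological filtering. H_prev_to_curr is assumed to be derived from the
# known pan/tilt angles between the two frames.
import cv2
import numpy as np

def moving_regions(prev_frame, curr_frame, H_prev_to_curr, diff_thresh=25, kernel_size=5):
    h, w = curr_frame.shape[:2]
    # Align the previous view with the current one (compensate camera motion).
    prev_aligned = cv2.warpPerspective(prev_frame, H_prev_to_curr, (w, h))
    # Static-camera style motion detection on the aligned pair.
    diff = cv2.absdiff(cv2.cvtColor(prev_aligned, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY))
    _, motion = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # Morphological opening suppresses thin residue left by inexact compensation.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    return cv2.morphologyEx(motion, cv2.MORPH_OPEN, kernel)
```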
Pattern Recognition Letters | 1995
Anup Basu; Sergio Licardie
The human visual system can be characterized as a variable-resolution system: foveal information is processed at very high spatial resolution whereas peripheral information is processed at low spatial resolution. Various transforms have been proposed to model spatially varying resolution. Unfortunately, special sensors need to be designed to acquire images according to existing transforms. In this work, two models of the fish-eye transform are presented. The validity of the transformations is demonstrated by fitting the alternative models to a real fish-eye lens.
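The two models themselves are not reproduced in this abstract. As a rough illustration only, the sketch below resamples an ordinary image with a log-style radial mapping of the form r' = s·log(1 + λr), which is one common way to emulate fish-eye-like foveation; the function name and parameter values are assumptions, not the paper's fitted models.

```python
# Illustrative sketch: simulate a variable-resolution (foveated) image by applying
# a log-based radial compression about the image center and resampling with remap.
import cv2
import numpy as np

def fisheye_like_warp(img, lam=0.02):
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    r_max = np.hypot(cx, cy)
    # Choose s so the image corners map to themselves: s = r_max / log(1 + lam*r_max).
    s = r_max / np.log1p(lam * r_max)
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    dx, dy = xs - cx, ys - cy
    r = np.sqrt(dx * dx + dy * dy) + 1e-6
    # Inverse mapping: each output radius r came from source radius (exp(r/s)-1)/lam.
    r_src = (np.exp(r / s) - 1.0) / lam
    scale = r_src / r
    map_x = (cx + dx * scale).astype(np.float32)
    map_y = (cy + dy * scale).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
```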
IEEE Transactions on Biomedical Engineering | 2009
Tao Wang; Irene Cheng; Anup Basu
In this paper, we propose a new approach that we call the "fluid vector flow" (FVF) active contour model to address problems of insufficient capture range and poor convergence for concavities. With the ability to capture a large range and extract concave shapes, FVF demonstrates improvements over techniques like gradient vector flow, boundary vector flow, and magnetostatic active contour on three sets of experiments: synthetic images, pediatric head MRI images, and brain tumor MRI images from the Internet brain segmentation repository.
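FVF, like gradient vector flow, plugs an external force field into a standard parametric active contour. The sketch below shows only that generic contour-evolution step, not the FVF field construction itself; the force fields, step size, and weights are illustrative assumptions.

```python
# Generic parametric active-contour iteration (NOT the FVF model): a closed contour
# is pulled by a precomputed external vector field while internal forces keep it smooth.
import numpy as np

def evolve_contour(points, ext_fx, ext_fy, alpha=0.1, beta=0.05, step=1.0, iters=200):
    """points: (N,2) array of (x,y); ext_fx/ext_fy: HxW external force fields."""
    pts = points.astype(np.float64).copy()
    for _ in range(iters):
        prev_ = np.roll(pts, 1, axis=0)
        next_ = np.roll(pts, -1, axis=0)
        # Internal forces: elasticity (second difference) and bending (fourth difference).
        elastic = prev_ + next_ - 2.0 * pts
        prev2, next2 = np.roll(pts, 2, axis=0), np.roll(pts, -2, axis=0)
        bending = -(prev2 - 4.0 * prev_ + 6.0 * pts - 4.0 * next_ + next2)
        # External force sampled at the nearest pixel of each contour point.
        xi = np.clip(pts[:, 0].round().astype(int), 0, ext_fx.shape[1] - 1)
        yi = np.clip(pts[:, 1].round().astype(int), 0, ext_fx.shape[0] - 1)
        external = np.stack([ext_fx[yi, xi], ext_fy[yi, xi]], axis=1)
        pts += step * (alpha * elastic + beta * bending + external)
    return pts
```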
IEEE Transactions on Image Processing | 2011
Rui Shen; Irene Cheng; Jianbo Shi; Anup Basu
A single captured image of a real-world scene is usually insufficient to reveal all the details due to under- or over-exposed regions. To solve this problem, images of the same scene can be first captured under different exposure settings and then combined into a single image using image fusion techniques. In this paper, we propose a novel probabilistic model-based fusion technique for multi-exposure images. Unlike previous multi-exposure fusion methods, our method aims to achieve an optimal balance between two quality measures, i.e., local contrast and color consistency, while combining the scene details revealed under different exposures. A generalized random walks framework is proposed to calculate a globally optimal solution subject to the two quality measures by formulating the fusion problem as probability estimation. Experiments demonstrate that our algorithm generates high-quality images at low computational cost. Comparisons with a number of other techniques show that our method generates better results in most cases.
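The generalized random walks formulation is not reproduced in the abstract. The snippet below is a deliberately naive stand-in to illustrate the overall goal of exposure fusion: per-pixel weights from local contrast and well-exposedness, normalized across the stack and used for a weighted blend. Weight definitions and parameters are assumptions, not the paper's method.

```python
# Naive multi-exposure fusion for illustration (not the generalized random walks approach).
import cv2
import numpy as np

def simple_exposure_fusion(images, sigma=0.2):
    """images: list of HxWx3 uint8 BGR captures of the same scene at different exposures."""
    stack = [img.astype(np.float32) / 255.0 for img in images]
    weights = []
    for img in stack:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        contrast = np.abs(cv2.Laplacian(gray, cv2.CV_32F))
        # Favor pixels whose intensity is near mid-gray (well exposed).
        exposedness = np.exp(-((gray - 0.5) ** 2) / (2.0 * sigma ** 2))
        weights.append(contrast * exposedness + 1e-12)
    weights = np.stack(weights)                    # (K, H, W)
    weights /= weights.sum(axis=0, keepdims=True)  # normalize across exposures
    fused = sum(w[..., None] * img for w, img in zip(weights, stack))
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)
```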
IEEE Transactions on Multimedia | 2005
Yixin Pan; L. Irene Cheng; Anup Basu
Many factors, such as the number of vertices and the resolution of texture, can affect the display quality of three-dimensional (3-D) objects. When the resources of a graphics system are not sufficient to render the ideal image, degradation is inevitable. It is, therefore, important to study how individual factors will affect the overall quality, and how the degradation can be controlled given limited resources. In this paper, the essential factors determining the display quality are reviewed. We then integrate two important ones, resolution of texture and resolution of wireframe, and use them in our model as a perceptual metric. We assess this metric using statistical data collected from a 3-D quality evaluation experiment. The statistical model and the methodology to assess the display quality metric are discussed. A preliminary study of the reliability of the estimates is also described. The contribution of this paper lies in: 1) determining the relative importance of wireframe versus texture resolution in perceptual quality evaluation and 2) proposing an experimental strategy for verifying and fitting a quantitative model that estimates 3-D perceptual quality. The proposed quantitative method is found to fit closely to subjective ratings by human observers based on preliminary experimental results.
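A quantitative quality model of this kind maps the two resolution factors to a single predicted score. The toy function below only shows the general shape of such a model; the logarithmic form and the weights are hypothetical and are not the coefficients fitted from the paper's subjective experiments.

```python
# Hypothetical illustration of a two-factor perceptual quality predictor.
import math

def predicted_quality(vertex_ratio, texture_ratio, w_geom=0.6, w_tex=0.4):
    """vertex_ratio, texture_ratio in (0, 1]: fraction of full wireframe/texture resolution.
    Returns a score in [0, 1]; weights here are placeholders, not fitted values."""
    geom_term = math.log1p(9.0 * vertex_ratio) / math.log1p(9.0)
    tex_term = math.log1p(9.0 * texture_ratio) / math.log1p(9.0)
    return w_geom * geom_term + w_tex * tex_term

# Example: a model rendered with half the vertices and a quarter of the texture resolution.
print(predicted_quality(0.5, 0.25))
```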
IEEE Conference on Computer Vision and Pattern Recognition | 1993
Anup Basu
Almost all camera calibration techniques use a known calibrating pattern and a static camera. Techniques based on an active camera, which does not need any predefined patterns, are introduced. All that is required is a scene with some strong and stable edges. Two algorithms are presented and analyzed. It is shown that one strategy performs much better in the presence of noise, and thus is preferable in practical situations. Experimental results are shown, demonstrating the validity of the algorithms.
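A back-of-the-envelope sketch of the geometry behind active calibration: for a small pan by a known angle, a feature near the image center shifts horizontally by roughly f·tan(Δθ), so the focal length can be recovered from measured edge displacement. This is only the underlying relation, not the paper's full algorithms or their noise analysis.

```python
# Illustrative relation: focal length from the image shift induced by a known pan.
import math

def focal_length_from_pan(pixel_shift, pan_angle_deg):
    """pixel_shift: horizontal displacement (pixels) of a central feature between views;
    pan_angle_deg: known pan rotation in degrees. Returns focal length in pixels."""
    return pixel_shift / math.tan(math.radians(pan_angle_deg))

# Example: a 42-pixel shift under a 3-degree pan suggests f of roughly 800 pixels.
print(focal_length_from_pan(42.0, 3.0))
```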
IEEE Transactions on Biomedical Engineering | 2013
Rui Shen; Irene Cheng; Anup Basu
Joint analysis of medical data collected from different imaging modalities has become a common clinical practice. Therefore, image fusion techniques, which provide an efficient way of combining and enhancing information, have drawn increasing attention from the medical community. In this paper, we propose a novel cross-scale fusion rule for multiscale-decomposition-based fusion of volumetric medical images taking into account both intrascale and interscale consistencies. An optimal set of coefficients from the multiscale representations of the source images is determined by effective exploitation of neighborhood information. An efficient color fusion scheme is also proposed. Experiments demonstrate that our fusion rule generates better results than existing rules.
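For context, the snippet below shows a baseline multiscale fusion rule on two grayscale slices: build Laplacian pyramids and keep, at each scale, the coefficient of larger magnitude. It is a simplified stand-in only; the paper's cross-scale rule additionally enforces intrascale/interscale consistency, operates on full 3-D volumes, and includes a color fusion scheme.

```python
# Baseline Laplacian-pyramid fusion with a max-absolute-coefficient rule (illustrative).
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = [gauss[i] - cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[1::-1])
           for i in range(levels)]
    return lap + [gauss[-1]]  # band-pass levels plus the coarse residual

def fuse_max_abs(img_a, img_b, levels=4):
    pa, pb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    # At each scale, keep the coefficient with the larger magnitude.
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa, pb)]
    out = fused[-1]
    for band in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=band.shape[1::-1]) + band
    return np.clip(out, 0, 255).astype(np.uint8)
```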
IEEE Transactions on Circuits and Systems for Video Technology | 2004
Victor Sanchez; Anup Basu; Mrinal K. Mandal
A method is proposed to encode multiple regions of interest in the JPEG2000 image-coding framework. The algorithm is based on the rearrangement of packets in the code-stream to place the regions of interest before the background coefficients. In order to improve the quality of the reconstructed image, partial background information is included with the regions of interest. The proposed technique is fully compatible with the current JPEG2000 standard and allows transmission of different regions of interest with different priorities. Experimental results demonstrating the validity of the proposed approach are presented and compared with existing region of interest coding techniques.
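The rearrangement idea can be shown with a toy model that is independent of any real JPEG2000 code-stream. The Packet fields, the priority scheme, and background_fraction below are assumptions made purely for illustration.

```python
# Conceptual sketch: order packets so ROI data comes first, with a small share of
# background packets promoted so the reconstruction keeps coarse context.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    index: int             # original position in the code-stream
    roi_id: Optional[int]  # ROI this packet contributes to; None for background
    layer: int             # quality layer (lower = more important)

def reorder_for_roi(packets, roi_priority, background_fraction=0.1):
    roi_pkts = [p for p in packets if p.roi_id is not None]
    bg_pkts = [p for p in packets if p.roi_id is None]
    # Higher-priority ROIs first; within an ROI, lower quality layers first.
    roi_pkts.sort(key=lambda p: (roi_priority.get(p.roi_id, 10**6), p.layer))
    n_bg_early = int(len(bg_pkts) * background_fraction)
    first_layer = [p for p in roi_pkts if p.layer == 0]
    later_layers = [p for p in roi_pkts if p.layer != 0]
    return first_layer + bg_pkts[:n_bg_early] + later_layers + bg_pkts[n_bg_early:]
```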
IEEE Transactions on Biomedical Engineering | 2010
Rui Shen; Irene Cheng; Anup Basu
Tuberculosis (TB) is a deadly infectious disease, and the presence of cavities in the upper lung zones is a strong indicator that the disease has developed into a highly infectious state. Currently, the detection of TB cavities is mainly conducted by clinicians observing chest radiographs. Diagnoses performed by radiologists are labor-intensive, and health care personnel are often in short supply, especially in remote communities. After assessing existing approaches, we propose an automated segmentation technique, which takes a hybrid knowledge-based Bayesian classification approach to detect TB cavities automatically. We apply gradient inverse coefficient of variation and circularity measures to classify detected features and confirm true TB cavities. By comparing with nonhybrid approaches and classical active contour techniques for feature extraction in medical images, experimental results demonstrate that our approach achieves high accuracy with a low false-positive rate in detecting TB cavities.
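Of the two measures named above, circularity is simple to illustrate: 4π·area / perimeter², close to 1 for round shapes. The sketch below filters candidate contours by this measure only; the gradient inverse coefficient of variation and the hybrid Bayesian classification are not reproduced, and the thresholds are arbitrary.

```python
# Filter candidate contours by circularity = 4*pi*area / perimeter^2 (illustrative).
import cv2
import numpy as np

def circular_candidates(binary_mask, min_circularity=0.75, min_area=50.0):
    """binary_mask: uint8 image with candidate regions as nonzero pixels."""
    # OpenCV 4 return signature: (contours, hierarchy).
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    keep = []
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)
        if area < min_area or perimeter == 0:
            continue
        circularity = 4.0 * np.pi * area / (perimeter * perimeter)
        if circularity >= min_circularity:
            keep.append((c, circularity))
    return keep
```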
IEEE Transactions on Systems, Man, and Cybernetics | 1997
Anup Basu; Kavita Ravi
Three-dimensional vision applications, such as robot vision, require modeling the relationship between two-dimensional images and the three-dimensional world. Camera calibration is a process which accurately models this relationship. The calibration procedure determines the geometric parameters of the camera, such as focal length and center of the image. Most of the existing calibration techniques use predefined patterns and a static camera. Recently, a novel calibration technique for computing the focal length and image center, which uses an active camera, has been developed. This technique does not require any predefined patterns or point-to-point correspondence between images; only a set of scenes with some stable edges is needed. It was observed that the algorithms developed for the image center are sensitive to noise and hence unreliable in real situations. This paper extends those techniques to develop a simpler, yet more robust, method for computing the image center.
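As a generic geometric illustration (not necessarily the method in this paper): under a pure roll of the camera about its optical axis, each image point rotates about the image center, so the center lies on the perpendicular bisector of every (point, rotated point) pair and can be estimated by least squares from noisy correspondences.

```python
# Image-center estimate as the least-squares intersection of perpendicular bisectors
# of matched point pairs under a pure roll rotation (illustrative geometry only).
import numpy as np

def estimate_image_center(points_before, points_after):
    """points_before/points_after: (N,2) arrays of matched image points (x, y), N >= 2."""
    p = np.asarray(points_before, dtype=float)
    q = np.asarray(points_after, dtype=float)
    d = q - p               # displacement direction of each point
    m = (p + q) / 2.0       # midpoint of each pair
    # Center c satisfies d . (c - m) = 0 for every pair  ->  solve d @ c = sum(d*m).
    b = np.sum(d * m, axis=1)
    center, *_ = np.linalg.lstsq(d, b, rcond=None)
    return center           # (cx, cy) in pixels
```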