Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Ali Can is active.

Publication


Featured research published by Ali Can.


International Conference of the IEEE Engineering in Medicine and Biology Society | 1999

Rapid automated tracing and feature extraction from retinal fundus images using direct exploratory algorithms

Ali Can; Hong Shen; James N. Turner; Howard L. Tanenbaum; Badrinath Roysam

Algorithms are presented for rapid, automatic, robust, adaptive, and accurate tracing of retinal vasculature and analysis of intersections and crossovers. This method improves upon prior work in several ways: automatic adaptation from frame to frame without manual initialization/adjustment, with few tunable parameters; robust operation on image sequences exhibiting natural variability and poor, varying imaging conditions, including over/under-exposure, low contrast, and artifacts such as glare; no requirement that the vasculature be connected, so partial views can be handled; and operation efficient enough for unspecialized hardware and amenable to deadline-driven computing, producing a rapidly and monotonically improving sequence of usable partial results. Increased computation can be traded for superior tracing performance. Its efficiency comes from direct processing on gray-level data without any preprocessing, and from processing only a minimally necessary fraction of pixels in an exploratory manner, avoiding low-level image-wide operations such as thresholding, edge detection, and morphological processing. These properties make the algorithm well suited to real-time, on-line (live) processing; it is being applied to computer-assisted laser retinal surgery.
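The exploratory strategy the abstract describes, following the vasculature point by point using directional correlation kernels rather than scanning the whole image, can be sketched roughly as follows. This is an illustrative simplification, not the paper's algorithm: the `directional_response` template, the 16 fixed directions, the step size, and the turn constraint are all assumptions made for the sketch.

```python
import numpy as np

def directional_response(img, y, x, direction, length=5):
    """Sum of intensities along a short line template in the given unit
    direction: a simple stand-in for a directional correlation kernel."""
    dy, dx = direction
    total = 0.0
    for t in range(1, length + 1):
        yi, xi = int(round(y + t * dy)), int(round(x + t * dx))
        if 0 <= yi < img.shape[0] and 0 <= xi < img.shape[1]:
            total += img[yi, xi]
    return total

def trace_vessel(img, seed, init_dir=(0.0, 1.0), n_steps=20, step=3.0):
    """Greedy exploratory tracing: from a seed point, repeatedly step in the
    forward direction whose template response is highest, touching only a
    small fraction of the image (no global thresholding or edge maps)."""
    angles = np.linspace(0.0, 2 * np.pi, 16, endpoint=False)
    directions = [(float(np.sin(a)), float(np.cos(a))) for a in angles]
    y, x = seed
    prev = init_dir
    path = [(y, x)]
    for _ in range(n_steps):
        # restrict to directions roughly consistent with the previous step
        ahead = [d for d in directions if d[0] * prev[0] + d[1] * prev[1] > 0.3]
        best = max(ahead, key=lambda d: directional_response(img, y, x, d))
        y, x = y + step * best[0], x + step * best[1]
        if not (0 <= y < img.shape[0] and 0 <= x < img.shape[1]):
            break
        path.append((y, x))
        prev = best
    return path
```

Seeded on a bright horizontal ridge, the trace follows the ridge until it leaves the image, which is the sense in which only a minimal fraction of pixels is ever examined.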


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2002

A feature-based, robust, hierarchical algorithm for registering pairs of images of the curved human retina

Ali Can; Charles V. Stewart; Badrinath Roysam; Howard L. Tanenbaum

This paper describes a robust hierarchical algorithm for fully-automatic registration of a pair of images of the curved human retina photographed by a fundus microscope. Accurate registration is essential for mosaic synthesis, change detection, and design of computer-aided instrumentation. Central to the algorithm is a 12-parameter interimage transformation derived by modeling the retina as a rigid quadratic surface with unknown parameters. The parameters are estimated by matching vascular landmarks by recursively tracing the blood vessel structure. The parameter estimation technique, which could be generalized to other applications, is a hierarchy of models and methods, making the algorithm robust to unmatchable image features and mismatches between features caused by large interframe motions. Experiments involving 3,000 image pairs from 16 different healthy eyes were performed. Final registration errors less than a pixel are routinely achieved. The speed, accuracy, and ability to handle small overlaps compare favorably with retinal image registration techniques published in the literature.
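The 12-parameter quadratic model at the core of the algorithm gives each output coordinate a linear combination of the six monomials x², xy, y², x, y, 1, so the two coordinates together have 12 coefficients that can be estimated linearly from matched landmarks. A minimal sketch with plain least squares only (the paper wraps this step in a robust, hierarchical scheme):

```python
import numpy as np

def quadratic_basis(x, y):
    """Six monomials of the second-order model; two coordinates times six
    terms gives the 12 parameters."""
    return np.array([x * x, x * y, y * y, x, y, 1.0])

def apply_transform(theta, pts):
    """theta: (2, 6) coefficient matrix; pts: (N, 2) array of (x, y)."""
    basis = np.array([quadratic_basis(x, y) for x, y in pts])  # (N, 6)
    return basis @ theta.T                                     # (N, 2)

def fit_transform(src, dst):
    """Plain least-squares estimate of theta from matched point pairs."""
    basis = np.array([quadratic_basis(x, y) for x, y in src])
    theta, *_ = np.linalg.lstsq(basis, dst, rcond=None)
    return theta.T  # (2, 6)
```

Because the model is linear in its parameters, a handful of well-spread landmark matches determines it exactly in the noise-free case.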


IEEE Transactions on Biomedical Engineering | 2006

Robust detection and classification of longitudinal changes in color retinal fundus images for monitoring diabetic retinopathy

Harihar Narasimha-Iyer; Ali Can; Badrinath Roysam; V. Stewart; Howard L. Tanenbaum; Anna Majerovics; H. Singh

A fully automated approach is presented for robust detection and classification of changes in longitudinal time-series of color retinal fundus images of diabetic retinopathy. The method is robust to: 1) spatial variations in illumination resulting from instrument limitations and changes both within and between patient visits; 2) imaging artifacts such as dust particles; 3) outliers in the training data; 4) segmentation and alignment errors. Robustness to illumination variation is achieved by a novel iterative algorithm to estimate the reflectance of the retina exploiting automatically extracted segmentations of the retinal vasculature, optic disk, fovea, and pathologies. Robustness to dust artifacts is achieved by exploiting their spectral characteristics, enabling application to film-based as well as digital imaging systems. False changes from alignment errors are minimized by subpixel-accuracy registration using a 12-parameter transformation that accounts for unknown retinal curvature and camera parameters. Bayesian detection and classification algorithms are used to generate a color-coded output that is readily inspected. A multiobserver validation on 43 image pairs from 22 eyes involving nonproliferative and proliferative diabetic retinopathies showed a 97% change detection rate, a 3% miss rate, and a 10% false alarm rate. The performance in correctly classifying the changes was 99.3%. A self-consistency metric and an error factor were developed to measure performance over more than two periods. The average self-consistency was 94% and the error factor was 0.06%. Although this study focuses on diabetic changes, the proposed techniques have broader applicability in ophthalmology.
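The illumination-correction idea, fitting a smooth surface to pixels outside the segmented vasculature, optic disk, and pathologies and then dividing it out, can be sketched with a low-order polynomial surface standing in for the paper's iterative reflectance estimate; the polynomial order and the mask handling here are assumptions made for the sketch.

```python
import numpy as np

def illumination_surface(img, mask, order=2):
    """Fit a low-order polynomial surface to the pixels marked True in
    `mask` (in the paper, vessel/disk/pathology pixels are excluded via
    segmentation); return the fitted surface over the whole image."""
    ys, xs = np.nonzero(mask)
    vals = img[ys, xs]
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([xs**i * ys**j for i, j in terms]).astype(float)
    coef, *_ = np.linalg.lstsq(A, vals, rcond=None)
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return sum(c * xx**i * yy**j for c, (i, j) in zip(coef, terms))

def correct_illumination(img, mask):
    """Divide out the fitted illumination, leaving (scaled) reflectance."""
    surf = illumination_surface(img, mask)
    return img / np.maximum(surf, 1e-6)
```

With a slowly varying illumination field, the masked fit recovers the field from background pixels, so dividing normalizes the background to 1 while dark structures keep their relative contrast.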


International Conference of the IEEE Engineering in Medicine and Biology Society | 2003

Median-based robust algorithms for tracing neurons from noisy confocal microscope images

Khalid Al-Kofahi; Ali Can; Sharie Lasek; Donald H. Szarowski; Natalie Dowell-Mesfin; William Shain; James N. Turner; Badrinath Roysam

This paper presents a method to exploit rank statistics to improve fully automatic tracing of neurons from noisy digital confocal microscope images. Previously proposed exploratory tracing (vectorization) algorithms work by recursively following the neuronal topology, guided by responses of multiple directional correlation kernels. These algorithms were found to fail when the data was of lower quality (noisier, less contrast, weak signal, or more discontinuous structures). This type of data is commonly encountered in the study of neuronal growth on microfabricated surfaces. We show that partitioning the correlation kernels in the tracing algorithm into multiple subkernels, and using the median of their responses as the guiding criterion, improves the tracing precision from 41% to 89% for low-quality data, with a 5% improvement in recall. Improved handling was observed for artifacts such as discontinuities and/or hollowness of structures. The new algorithms require slightly higher amounts of computation, but are still acceptably fast, typically consuming less than 2 seconds on a personal computer (Pentium III, 500 MHz, 128 MB). They produce labeling for all somas present in the field, and a graph-theoretic representation of all dendritic/axonal structures that can be edited. Topological and size measurements such as area, length, and tortuosity are derived readily. The efficiency, accuracy, and fully-automated nature of the proposed method make it attractive for large-scale applications such as high-throughput assays in the pharmaceutical industry, and study of neuron growth on nano/micro-fabricated structures. A careful quantitative validation of the proposed algorithms is provided against manually derived tracing, using a performance measure that combines the precision and recall metrics.
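The key idea, splitting one long directional kernel into subkernels and taking the median of their responses so that a single gap or hollow segment cannot dominate the score, can be sketched as follows; the kernel shape, subkernel count, and lengths are illustrative assumptions, not the paper's kernels.

```python
import numpy as np

def subkernel_median_response(img, y, x, direction, n_sub=4, sub_len=3):
    """Split a long directional template into n_sub consecutive short
    subkernels and return the median of their mean responses. A plain sum
    over the full template would be dragged down by one dark gap; the
    median ignores a minority of bad subkernels."""
    dy, dx = direction
    responses = []
    for k in range(n_sub):
        total = 0.0
        for t in range(sub_len):
            s = k * sub_len + t + 1
            yi, xi = int(round(y + s * dy)), int(round(x + s * dx))
            if 0 <= yi < img.shape[0] and 0 <= xi < img.shape[1]:
                total += img[yi, xi]
        responses.append(total / sub_len)
    return float(np.median(responses))
```

On a bright line with a short gap, one subkernel scores zero but the median of the remaining responses is still the full line intensity, so tracing continues across the discontinuity.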


IEEE Transactions on Biomedical Engineering | 1998

Image processing algorithms for retinal montage synthesis, mapping, and real-time location determination

Douglas E. Becker; Ali Can; James N. Turner; Howard L. Tanenbaum; Badrinath Roysam

Although laser retinal surgery is the best available treatment for choroidal neovascularization, the current procedure has a low success rate (50%). Challenges, such as motion-compensated beam steering, ensuring complete coverage and minimizing incidental photodamage, can be overcome with improved instrumentation. This paper presents core image processing algorithms for (1) rapid identification of branching and crossover points of the retinal vasculature; (2) automatic montaging of video retinal angiograms; (3) real-time location determination and tracking using a combination of feature-tagged point-matching and dynamic-pixel templates. These algorithms trade off conflicting needs for accuracy, robustness to noise and to image variations (due to movements and the difficulty of providing steady illumination), and operational speed in the context of available hardware. The algorithm for locating vasculature landmarks performed robustly on a Silicon Graphics workstation at a speed of 16-30 video image frames/s, depending upon the field. The montaging algorithm performed at a speed of 1.6-4 s for merging 5-12 frames. The tracking algorithm was validated by manually locating six landmark points on an image sequence with 180 frames, demonstrating a mean-squared error of 1.35 pixels. It successfully detected and rejected instances when the image dimmed, faded, lost contrast, or lost focus.


Computer Vision and Pattern Recognition | 1999

Robust hierarchical algorithm for constructing a mosaic from images of the curved human retina

Ali Can; Charles V. Stewart; Badrinath Roysam

This paper describes computer vision algorithms to assist in retinal laser surgery, which is widely used to treat leading blindness-causing conditions but has only a 50% success rate, mostly due to a lack of spatial mapping and reckoning capabilities in current instruments. The novel technique described here automatically constructs a composite (mosaic) image of the retina from a sequence of incomplete views. This mosaic will be useful to ophthalmologists for both diagnosis and surgery. The new technique goes beyond published methods in both the medical and computer vision literatures because it is fully automated, models the patient-dependent curvature of the retina, handles large interframe motions, and does not require calibration. At the heart of the technique is a 12-parameter image transformation model derived by modeling the retina as a quadratic surface and assuming a weak-perspective camera and rigid motion. Estimating the parameters of this transformation model requires robustness to unmatchable image features and mismatches between features caused by large interframe motions. The described estimation technique is a hierarchy of models and methods: the initial match set is pruned based on a 0th-order transformation estimated using a similarity-weighted histogram; a 1st-order affine transformation is estimated using the reduced match set and least-median of squares; and the final, 2nd-order 12-parameter transformation is estimated using an M-estimator initialized from the 1st-order results. Initial experimental results show the method to be robust and accurate in accounting for the unknown retinal curvature in a fully automatic manner while preserving image details.
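The least-median-of-squares step in the hierarchy can be illustrated on the simplest case, a 0th-order (translation-only) transformation: hypothesize a translation from a single match, score it by the median squared residual over all matches, and keep the best. Because the score is a median, up to roughly half the matches can be wrong without corrupting the estimate. The trial count and one-match sampling scheme here are assumptions made for the sketch.

```python
import numpy as np

def lmeds_translation(src, dst, n_trials=200, seed=0):
    """Least-median-of-squares translation estimate from matched points.
    Each trial hypothesizes a translation from one randomly chosen match;
    the hypothesis with the smallest median squared residual wins."""
    rng = np.random.default_rng(seed)
    best_t, best_med = None, np.inf
    for _ in range(n_trials):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                        # candidate translation
        resid = np.sum((src + t - dst) ** 2, axis=1)
        med = np.median(resid)
        if med < best_med:
            best_t, best_med = t, med
    return best_t
```

The same score-by-median principle extends to the affine (1st-order) stage, where each hypothesis is fit from a minimal subset of matches instead of a single one.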


Journal of Microscopy | 2003

Attenuation correction in confocal laser microscopes: a novel two‐view approach

Ali Can; Omar Al-Kofahi; S. Lasek; Donald H. Szarowski; James N. Turner; Badrinath Roysam

Confocal microscopy is a three‐dimensional (3D) imaging modality, but the specimen thickness that can be imaged is limited by depth‐dependent signal attenuation. Both software and hardware methods have been used to correct the attenuation in reconstructed images, but previous methods do not increase the image signal‐to‐noise ratio (SNR) using conventional specimen preparation and imaging. We present a practical two‐view method that increases the overall imaging depth, corrects signal attenuation and improves the SNR. This is achieved by a combination of slightly modified but conventional specimen preparation, image registration, montage synthesis and signal reconstruction methods. The specimen is mounted in a symmetrical manner between a pair of cover slips, rather than between a slide and a cover slip. It is imaged sequentially from both sides to generate two 3D image stacks from perspectives separated by approximately 180° with respect to the optical axis. An automated image registration algorithm performs a precise 3D alignment, and a model‐based minimum mean squared algorithm synthesizes a montage, combining the content of both the 3D views. Experiments with images of individual neurones contrasted with a space‐filling fluorescent dye in thick brain tissue slices produced precise 3D montages that are corrected for depth‐dependent signal attenuation. The SNR of the reconstructed image is maximized by the method, and it is significantly higher than in the single views after applying our attenuation model. We also compare our method with simpler two‐view reconstruction methods and quantify the SNR improvement. The reconstructed images are a more faithful qualitative visualization of the specimen's structure and are quantitatively more accurate, providing a more rigorous basis for automated image analysis.
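The two-view reconstruction can be illustrated with a deliberately simple attenuation model, I_obs(z) = I_true(z)·exp(−αz), where each registered view is corrected for its own depth-dependent loss and the two are then blended with weights favoring whichever view is less attenuated at each depth. This is a stand-in for the paper's model-based minimum-mean-squared synthesis; a known, uniform α and perfectly registered stacks are assumptions made for the sketch.

```python
import numpy as np

def combine_two_views(top, bottom, alpha):
    """Blend two opposing confocal stacks under exponential attenuation.
    `top` is attenuated from slice 0 downward; `bottom` (already flipped
    into the same orientation) from the last slice upward. Each slice is
    attenuation-corrected, then averaged with weights proportional to the
    surviving signal fraction, so the deeper (noisier) view counts less."""
    z = np.arange(top.shape[0], dtype=float)
    w_top = np.exp(-alpha * z)[:, None, None]        # signal surviving in top view
    w_bot = np.exp(-alpha * z[::-1])[:, None, None]  # surviving in bottom view
    est_top = top / w_top                            # corrected single views
    est_bot = bottom / w_bot
    return (w_top * est_top + w_bot * est_bot) / (w_top + w_bot)
```

For a uniform specimen the blend recovers the true intensity at every depth, while each single corrected view would amplify noise heavily at its far end.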


Journal of Microscopy | 2003

Algorithms for accurate 3D registration of neuronal images acquired by confocal scanning laser microscopy

Omar Al-Kofahi; Ali Can; S. Lasek; Donald H. Szarowski; James N. Turner; Badrinath Roysam

This paper presents automated and accurate algorithms based on high‐order transformation models for registering three‐dimensional (3D) confocal images of dye‐injected neurons. The algorithms improve upon prior methods in several ways, and meet the more stringent image registration needs of applications such as two‐view attenuation correction recently developed by us. First, they achieve high accuracy (≈ 1.2 voxels, equivalent to 0.4 µm) by using landmarks, rather than intensity correlations, and by using a high‐dimensional affine and quadratic transformation model that accounts for 3D translation, rotation, non‐isotropic scaling, modest curvature of field, distortions and mechanical inconsistencies introduced by the imaging system. Second, they use a hierarchy of models and iterative algorithms to eliminate potential instabilities. Third, they incorporate robust statistical methods to achieve accurate registration in the face of inaccurate and missing landmarks. Fourth, they are fully automated, even estimating the initial registration from the extracted landmarks. Finally, they are computationally efficient, taking less than a minute on a 900‐MHz Pentium III computer for registering two images roughly 70 MB in size. The registration errors represent a combination of modelling, estimation, discretization and neuron tracing errors. Accurate 3D montaging is described; the algorithms have broader applicability to images of vasculature, and other structures with distinctive point, line and surface landmarks.


Computer Vision and Pattern Recognition | 2000

A feature-based technique for joint, linear estimation of high-order image-to-mosaic transformations: application to mosaicing the curved human retina

Ali Can; Charles V. Stewart; Badrinath Roysam; Howard L. Tanenbaum

Methods are presented for increasing the coverage and accuracy of image mosaics constructed from multiple, uncalibrated, weak-perspective views of the human retina. Extending our previous algorithm for registering pairs of images using a non-invertible, 12-parameter, quadratic image transformation model and a hierarchical, robust estimation technique, two important innovations are presented. The first is a linear, non-iterative method for jointly estimating the transformations of all images onto the mosaic. This employs constraints derived from pairwise matching between the non-mosaic image frames. It allows the transformations to be estimated for images that do not overlap the mosaic anchor frame, and results in mutually consistent transformations for all images. This means the mosaics can cover a much broader area of the retinal surface, even though the transformation model is not closed under composition. This capability is particularly valuable for mosaicing the retinal periphery in the context of diseases such as AIDS/CMV. The second innovation is a method to improve the accuracy of the pairwise matches as well as the joint estimation by refining the feature locations and by adding new features based on the transformation estimates themselves. For matching image frames of size 1024 × 1024, this cuts the registration error from the range of 1 to 3 pixels to about 0.55 pixels. The overall transformation error in final mosaic construction is 0.80 pixels based on experiments over a large set of eyes.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2006

Automated change analysis from fluorescein angiograms for monitoring wet macular degeneration

Harihar Narasimha-Iyer; Ali Can; Badrinath Roysam; Jeffrey Stern

Detection and analysis of changes from retinal images is important in clinical practice, quantitative scoring of clinical trials, computer-assisted reading centers, and in medical research. This paper presents a fully-automated approach for robust detection and classification of changes in longitudinal time-series of fluorescein angiograms (FA). The changes of interest here are related to the development of choroidal neo-vascularization (CNV) in wet macular degeneration. Specifically, the changes in CNV regions as well as the retinal pigment epithelium (RPE) hypertrophic regions are detected and analyzed to study the progression of disease and effect of treatment. Retinal features including the vasculature, vessel branching/crossover locations, optic disk and location of the fovea are first segmented automatically. The images are then registered to sub-pixel accuracy using a 12-dimensional mapping that accounts for the unknown retinal curvature and camera parameters. Spatial variations in illumination are removed using a surface fitting algorithm that exploits the segmentations of the various features. The changes are identified in the regions of interest and a Bayesian classifier is used to classify the changes into clinically significant classes. The automated change analysis algorithms were found to have a success rate of 83%.
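The Bayesian classification stage can be illustrated with a minimal Gaussian naive Bayes classifier over per-region change features. The feature choice, the independence assumption across features, and the per-class Gaussian model are stand-ins for illustration, not the paper's classifier.

```python
import numpy as np

def fit_gaussian_classes(features, labels):
    """Per-class mean, variance, and prior for a naive Bayes classifier.
    features: (N, D) array of change descriptors; labels: (N,) class ids."""
    classes = {}
    for c in np.unique(labels):
        f = features[labels == c]
        classes[c] = (f.mean(axis=0), f.var(axis=0) + 1e-9, len(f) / len(labels))
    return classes

def classify(classes, x):
    """Pick the class maximizing log prior plus Gaussian log likelihood."""
    best_c, best_s = None, -np.inf
    for c, (mu, var, prior) in classes.items():
        s = np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        if s > best_s:
            best_c, best_s = c, s
    return best_c
```

In a change-analysis pipeline, each detected change region would contribute one feature vector, and the returned class id would map to a clinically significant category.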

Collaboration


Dive into Ali Can's collaborations.

Top Co-Authors

Charles V. Stewart

Rensselaer Polytechnic Institute


James N. Turner

New York State Department of Health


Donald H. Szarowski

New York State Department of Health


Omar Al-Kofahi

Rensselaer Polytechnic Institute


S. Lasek

Oklahoma State Department of Health


William Shain

New York State Department of Health
