Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sajib Kumar Saha is active.

Publication


Featured research published by Sajib Kumar Saha.


Computational Color Imaging Workshop | 2011

Color correction: a novel weighted Von Kries model based on memory colors

Alejandro Moreno; Basura Fernando; Bismillah Kani; Sajib Kumar Saha; Sezer Karaoglu

In this paper we present an automatic color correction framework based on memory colors. Memory colors for three objects (grass, snow, and sky) are obtained using psychophysical experiments under different illumination levels and later modeled statistically. A supervised image segmentation method detects memory color objects, while a luminance level predictor classifies images as dark, dim, or bright. This information, along with the best-fitting memory color model, is used to perform color correction using a novel weighted Von Kries formula. Finally, a visual experiment is conducted to evaluate the color corrected images. Experimental results suggest that the proposed weighted Von Kries model is an appropriate color correction model for natural images.
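
At its core, a Von Kries correction scales each channel of the image by a diagonal gain; the weighted variant blends those gains with the identity. Below is a minimal Python/NumPy sketch of that idea, assuming illustrative memory-color values and weight; the statistically fitted memory-color models and the exact weighting scheme of the paper are not reproduced here.

```python
# Minimal sketch of a weighted Von Kries correction.
# The weight w and the canonical memory colors are illustrative placeholders.
import numpy as np

def weighted_von_kries(image, observed_color, canonical_color, w=0.7):
    """Scale each RGB channel so the observed memory color moves toward its
    canonical value; w in [0, 1] blends the correction with the identity."""
    image = image.astype(np.float64)
    gains = np.asarray(canonical_color, float) / np.asarray(observed_color, float)
    gains = w * gains + (1.0 - w)                 # weighted Von Kries diagonal
    corrected = image * gains                     # apply per-channel gain
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Example: average colour measured in a detected "sky" region vs. a canonical sky colour
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)   # stand-in image
out = weighted_von_kries(img, observed_color=(120, 140, 200),
                         canonical_color=(135, 170, 230))
```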


Journal of Medical Systems | 2016

A Two-Step Approach for Longitudinal Registration of Retinal Images

Sajib Kumar Saha; Di Xiao; Shaun Frost; Yogesan Kanagasingam

This paper presents a novel two-step approach for longitudinal (over time) registration of retinal images. Longitudinal registration is an important preliminary step in analysing longitudinal changes on the retina, including disease progression. While substantial overlap and minimal geometric distortion are likely in longitudinal images, identifying features that remain reliable over time is a potential challenge for longitudinal registration. Relying on the widely accepted observation that retinal vessels are more reliable over time, the proposed method aims to accurately match bifurcation and cross-over points between images from different timestamps. Binary robust independent elementary features (BRIEF) are computed around bifurcation points and then matched based on Hamming distance. Prior to computing BRIEF descriptors, a preliminary registration is performed relying on SURF key-point matching. Experiments are conducted on different image datasets containing 109 longitudinal image pairs in total. The proposed method produces accurate registration (i.e. registration with zero alignment error) in 97% of cases, which is significantly higher than the other methods in comparison. The paper also reveals that both the number and the distribution of accurately matched key-point pairs are important for successful registration of image pairs.
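
The two-step pipeline (a coarse keypoint-based alignment followed by Hamming matching of binary descriptors at vessel bifurcations) can be sketched roughly with OpenCV as below. SIFT stands in for SURF and ORB's rBRIEF descriptor stands in for plain BRIEF, since SURF and BRIEF live in opencv-contrib; the vessel bifurcation detector is assumed to be provided separately and the images are assumed to be grayscale.

```python
# Rough sketch of the two-step longitudinal registration idea (not the paper's code).
import cv2
import numpy as np

def preliminary_register(moving, fixed):
    """Step 1: coarse alignment from generic keypoint matches (SIFT as a stand-in for SURF)."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(moving, None)
    k2, d2 = sift.detectAndCompute(fixed, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = fixed.shape[:2]
    return cv2.warpPerspective(moving, H, (w, h))

def refine_with_binary_descriptors(moving, fixed, bif_moving, bif_fixed):
    """Step 2: describe vessel bifurcation points (lists of cv2.KeyPoint) with a
    binary descriptor and match them by Hamming distance."""
    orb = cv2.ORB_create()                        # rBRIEF stands in for plain BRIEF
    _, d1 = orb.compute(moving, bif_moving)
    _, d2 = orb.compute(fixed, bif_fixed)
    return cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
```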


Journal of Visual Communication and Image Representation | 2018

A novel method for automated correction of non-uniform/poor illumination of retinal images without creating false artifacts

Sajib Kumar Saha; Alexander Fletcher; Di Xiao; Yogesan Kanagasingam

Retinal images are frequently corrupted by unwanted variations in brightness that occur due to overall imperfections in the image acquisition process. This inhomogeneous illumination across the retina can limit the pathological information that can be gained from the image, and can lead to serious difficulties when performing image processing tasks that require qualitative as well as quantitative analysis of features in the image. From that perspective, we propose a novel two-step approach for non-uniform and/or poor illumination correction in the context of retinal imaging. A subjective experiment was conducted to ensure that the proposed method did not create visually noticeable false color or artifacts in the images, especially in areas that did not suffer from non-uniform/poor illumination prior to correction. An objective experiment on 25,872 retinal images was performed to justify the significance of the proposed method for automated pathology detection/classification.


Biomedical Signal Processing and Control | 2019

Color fundus image registration techniques and applications for automated analysis of diabetic retinopathy progression: A review

Sajib Kumar Saha; Di Xiao; Alauddin Bhuiyan; Tien Yin Wong; Yogesan Kanagasingam

Diabetic retinopathy (DR) is one of the leading causes of visual impairment in the working-age population of the developed world. It is a complication of both types of diabetes mellitus that affects the light-perceiving part of the retina; without timely treatment, patients can lose their sight and eventually become blind. Automated methods for the detection and progression analysis of DR are considered a potential health-care need to halt disease progression and to ensure improved management of DR. For the detection and progression analysis of DR, color fundus photography is considered one of the best candidates for non-invasive imaging of the eye fundus. A number of methods have already been developed to analyse DR-related changes in the retina using color fundus photographs, and in this manuscript we review those automated methods. In order to accurately compare the evolution of DR over time, retinal images that are typically collected on an annual or biennial basis must be perfectly superimposed. In reality, however, for two separate photographic eye examinations the patient is never in exactly the same position, and the camera position may also vary. Therefore, a registration method is applied prior to computing the evolution. Since registration is a fundamental preprocessing step for longitudinal (over time) analysis, we also review state-of-the-art methods for the registration of color fundus images. The review summarizes the achievements so far and identifies potential study areas for further improvement and future research toward more efficient and accurate DR progression analysis.


Journal of Medical Systems | 2018

Performance Evaluation of State-of-the-Art Local Feature Detectors and Descriptors in the Context of Longitudinal Registration of Retinal Images

Sajib Kumar Saha; Di Xiao; Shaun Frost; Yogesan Kanagasingam

In this paper we systematically evaluate the performance of several state-of-the-art local feature detectors and descriptors in the context of longitudinal registration of retinal images. Longitudinal (temporal) registration facilitates tracking the changes in the retina that have happened over time. A wide range of local feature detectors and descriptors exists, and many of them have already been applied to retinal image registration; however, no comparative evaluation has so far analysed their respective performance. In this manuscript we evaluate the performance of widely known and commonly used detectors such as Harris, SIFT, SURF, BRISK, and bifurcation and cross-over points. As descriptors, SIFT, SURF, ALOHA, BRIEF, BRISK, and PIIFD are used. Longitudinal retinal image datasets containing a total of 244 images are used for the experiment. The evaluation reveals several notable findings, including that SURF and SIFT keypoints, when detected on the vessels, are more robust than the commonly used bifurcation and cross-over points. SIFT keypoints can be detected with a reliability of 59% for images without pathology and 45% for images with pathology; for SURF keypoints these values are 58% and 47%, respectively. The ALOHA descriptor is best suited to describing SURF keypoints, ensuring an overall matching accuracy and distinguishability of 83% and 93% for images without pathology, and 78% and 83% for images with pathology, respectively.
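
One ingredient of such an evaluation is keypoint repeatability between a baseline image and a registered follow-up image. The sketch below illustrates that criterion with off-the-shelf OpenCV detectors; the pixel tolerance, the detector set, and the helper name `repeatability` are illustrative assumptions, not the paper's protocol.

```python
# Illustrative repeatability check between two already-registered retinal images.
import cv2
import numpy as np

def repeatability(img_a, img_b, detector, tol=3.0):
    """Fraction of keypoints in img_a that reappear within `tol` pixels in img_b."""
    pts_a = np.array([k.pt for k in detector.detect(img_a, None)])
    pts_b = np.array([k.pt for k in detector.detect(img_b, None)])
    if len(pts_a) == 0 or len(pts_b) == 0:
        return 0.0
    dists = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    return float(np.mean(dists.min(axis=1) <= tol))

detectors = {
    "SIFT": cv2.SIFT_create(),
    "ORB": cv2.ORB_create(),
    "BRISK": cv2.BRISK_create(),
}
# for name, det in detectors.items():
#     print(name, repeatability(baseline_img, followup_img, det))
```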


Journal of Digital Imaging | 2018

A Novel Method for Correcting Non-uniform/Poor Illumination of Color Fundus Photographs

Sajib Kumar Saha; Di Xiao; Yogesan Kanagasingam

Retinal fundus images are often corrupted by non-uniform and/or poor illumination that occurs due to overall imperfections in the image acquisition process. This unwanted variation in brightness limits the pathological information that can be gained from the image. Studies have shown that poor illumination can impede human grading in about 10-15% of retinal images; for automated grading, the effect can be even higher. From this perspective, we propose a novel method for illumination correction in the context of retinal imaging. The method splits the color image into luminosity and chroma (i.e., color) components and performs illumination correction in the luminosity channel based on a novel background estimation technique. Extensive subjective and objective experiments were conducted on publicly available DIARETDB1 and EyePACS images to justify the performance of the proposed method. The subjective experiment confirmed that the proposed method does not create false color/artifacts and at the same time performs better than the traditional method in 84 out of 89 cases. The objective experiment shows an accuracy improvement of 4% in automated disease grading when illumination correction is performed with the proposed method rather than the traditional method.
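
A minimal sketch of the luminosity/chroma idea is given below: the image is converted to LAB, the illumination field is estimated from the L channel, subtracted, and the mean brightness restored, while the chroma channels are left untouched. The large median filter used here as the background estimator is a common stand-in, not the paper's own background-estimation technique.

```python
# Minimal sketch: correct illumination in the luminosity channel only.
import cv2
import numpy as np

def correct_illumination(bgr, ksize=101):
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    L, a, b = cv2.split(lab)
    background = cv2.medianBlur(L, ksize)                 # slowly varying illumination field
    corrected = (L.astype(np.int16) - background.astype(np.int16)
                 + int(background.mean()))                # remove field, restore mean brightness
    L_new = np.clip(corrected, 0, 255).astype(np.uint8)
    return cv2.cvtColor(cv2.merge([L_new, a, b]), cv2.COLOR_LAB2BGR)

# Example usage on a fundus photograph loaded with OpenCV:
# img = cv2.imread("fundus.png")
# out = correct_illumination(img)
```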


Journal of Digital Imaging | 2018

Automated Quality Assessment of Colour Fundus Images for Diabetic Retinopathy Screening in Telemedicine

Sajib Kumar Saha; Basura Fernando; Jorge Cuadros; Di Xiao; Yogesan Kanagasingam

Fundus images obtained in a telemedicine program are acquired at different sites by people with varying levels of experience. This results in a relatively high percentage of images being later marked as unreadable by graders. Unreadable images require a recapture, which is time- and cost-intensive. An automated method that determines image quality during acquisition is an effective alternative. Here we describe such a method for the assessment of image quality in the context of diabetic retinopathy (DR). The method applies machine learning techniques to assess the image and assign it to an 'accept' or 'reject' category; images in the 'reject' category require a recapture. A deep convolutional neural network is trained to grade the images automatically. A large representative set of 7000 colour fundus images, obtained from EyePACS and made available by the California Healthcare Foundation, was used for the experiment. Three retinal image analysis experts were employed to categorise these images into 'accept' and 'reject' classes based on the precise definition of image quality in the context of DR. The network was trained using 3428 images. The method categorises 'accept' and 'reject' images with an accuracy of 100%, which is about 2% higher than the traditional machine learning method. In a clinical trial, the proposed method showed 97% agreement with the human grader. The method can easily be incorporated into the fundus image capturing system at the acquisition centre and can guide the photographer on whether a recapture is necessary.
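
A minimal sketch of a binary accept/reject quality classifier is shown below using Keras; the architecture, input size, and training setup are illustrative assumptions and not the network described in the paper.

```python
# Illustrative binary accept/reject image-quality classifier (not the paper's network).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(224, 224, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(accept)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.2, epochs=10)
```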


Iet Image Processing | 2018

An Automated Method for the Detection and Segmentation of Drusen in Color Fundus Image for the Diagnosis of Age-related Macular Degeneration

Sultan Mohammad Mohaimin; Sajib Kumar Saha; Alve Mahamud Khan; Abu Shamim Mohammad Arif; Yogesan Kanagasingam

Age-related macular degeneration (AMD) is one of the main causes of visual impairment worldwide. Assessing the risk of developing AMD requires reliable detection and quantitative mapping of retinal abnormalities that are considered precursors of the disease. Typical such abnormalities are the so-called drusen, which appear as yellowish spots in the retina. Automated detection and segmentation of drusen provide vital information about the severity of the disease. The authors propose a novel method for the detection and segmentation of drusen in colour fundus images. The method combines colour information of the object with its boundary information for accurate detection and segmentation of drusen. To perform non-uniform illumination correction and to minimise inter-subject variability, a novel colour normalisation method is also proposed. Experiments are conducted on the publicly available STARE and ARIA datasets. The method achieves an overall accuracy of 96.62%, which is about 4% higher than the state-of-the-art method. The sensitivity and specificity of the proposed method are 95.96% and 97.64%, respectively.
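
The idea of combining a colour cue with a boundary cue can be sketched as below: bright, yellowish blobs are taken as colour candidates and kept only if their outlines coincide with image edges. All thresholds and the helper name `drusen_candidates` are illustrative assumptions; the paper's segmentation and colour normalisation are more elaborate.

```python
# Very rough sketch of combining colour and boundary cues for drusen candidates.
import cv2
import numpy as np

def drusen_candidates(bgr):
    """Return a binary mask of yellowish blobs whose outlines are supported by edges."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    colour_mask = cv2.inRange(hsv, (15, 40, 120), (40, 255, 255))   # bright, yellowish pixels
    edges = cv2.dilate(cv2.Canny(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY), 50, 150),
                       np.ones((3, 3), np.uint8))
    n_labels, labels = cv2.connectedComponents(colour_mask)
    keep = np.zeros_like(colour_mask)
    for i in range(1, n_labels):
        blob = np.uint8(labels == i) * 255
        border = cv2.dilate(blob, np.ones((3, 3), np.uint8)) - blob   # one-pixel outline
        if border.any() and (edges[border > 0] > 0).mean() > 0.2:     # boundary backed by edges
            keep |= blob
    return keep
```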


Investigative Ophthalmology & Visual Science | 2016

Deep Learning for Automatic Detection and Classification of Microaneurysms, Hard and Soft Exudates, and Hemorrhages for Diabetic Retinopathy Diagnosis

Sajib Kumar Saha; Basura Fernando; Di Xiao; Mei-Ling Tay-Kearney; Yogesan Kanagasingam


Investigative Ophthalmology & Visual Science | 2017

Deep Learning Based Decision Support System for Automated Diagnosis of Age-related Macular Degeneration (AMD)

Sajib Kumar Saha; Di Xiao; Basura Fernando; Mei-Ling Tay-Kearney; Dong An; Yogesan Kanagasingam

Collaboration


Dive into Sajib Kumar Saha's collaborations.

Top Co-Authors

Yogesan Kanagasingam
Commonwealth Scientific and Industrial Research Organisation

Di Xiao
Commonwealth Scientific and Industrial Research Organisation

Basura Fernando
Australian National University

Shaun Frost
Commonwealth Scientific and Industrial Research Organisation

Jorge Cuadros
University of California

Alexander Fletcher
University of Western Australia