Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jimmy Addison Lee is active.

Publication


Featured research published by Jimmy Addison Lee.


international conference on computer vision | 2009

Robust matching of building facades under large viewpoint changes

Jimmy Addison Lee; Kin Choong Yow; Alex Yong-Sang Chia

This paper presents a novel approach to finding point correspondences between images of building facades under wide viewpoint variations, while also returning a large list of true matches between the images. Such images contain repetitive and symmetric patterns, which render popular algorithms such as SIFT ineffective. Feature descriptors such as SIFT that are based on region patches are also unstable under large viewing-angle variations. In this paper, we integrate both the appearance and geometric properties of an image to find unique matches. First, we extract hypotheses of building facades using a robust line-fitting algorithm. Each hypothesis is defined by a planar convex quadrilateral in the image, which we call a “q-region”, and the four corners of each q-region provide the inputs from which a projective transformation model is derived. Next, a set of interest points is extracted from the images and used to evaluate the correctness of the transformation model. The transformation model with the largest set of matched interest points is selected as the correct model; this model also returns the best pair of corresponding q-regions and the greatest number of point correspondences between the two images. Extensive experimental results demonstrate the robustness of our approach, which achieves a tenfold increase in true matches compared to state-of-the-art techniques such as SIFT and MSER.
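The q-region step above hinges on a standard result: four corner correspondences determine a projective transformation (homography) exactly. The sketch below, with hypothetical point data and a simple inlier count as the model score (the paper's exact scoring may differ), shows how a homography is solved from a quadrilateral pair and used to count matched interest points.

```python
def gauss_solve(A, b):
    # Solve A x = b by Gaussian elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography_from_quad(src, dst):
    # src, dst: four (x, y) corner pairs of corresponding q-regions.
    # Returns the 3x3 projective transform H with H[2][2] fixed to 1,
    # via the direct linear transform: two equations per correspondence.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = gauss_solve(A, b)
    return [h[0:3], h[3:6], [h[6], h[7], 1.0]]

def project(H, p):
    # Apply H to point p in homogeneous coordinates.
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def count_inliers(H, pts_a, pts_b, tol=2.0):
    # Score a candidate model: interest points whose projection lands
    # within `tol` pixels of the corresponding point in the other image.
    hits = 0
    for pa, pb in zip(pts_a, pts_b):
        qx, qy = project(H, pa)
        if ((qx - pb[0]) ** 2 + (qy - pb[1]) ** 2) ** 0.5 <= tol:
            hits += 1
    return hits
```

In the paper's pipeline, one such model is derived per q-region hypothesis and the model with the most inliers wins; the sketch covers only the per-hypothesis computation.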


international symposium on visual computing | 2008

Image-Based Information Guide on Mobile Devices

Jimmy Addison Lee; Kin Choong Yow; Andrzej Sluzek

We present a prototype of an information guide system for outdoor use on our campus. It allows a user to find places of interest (e.g., lecture halls and libraries) using a camera phone. We use a database of panoramic views of campus scenes tagged with GPS locations, which reduces overlap between views. The panoramic views whose locations are closest to the query view are retrieved, and a wide-baseline matching technique is used to match between views. However, due to dissimilar viewpoints and the presence of repetitive structures, a large percentage of matches can be false. We propose a verification model to effectively eliminate such false matches. The true correspondences are then used for pose recovery, and information is projected onto the image. The system is validated by extensive experiments, with images taken in different seasons, weather, and illumination conditions.


international conference on image processing | 2007

Image Recognition for Mobile Applications

Jimmy Addison Lee; Kin Choong Yow

Our paper presents a system for efficient recognition of landmarks captured with camera phones. Information, such as the tutorial rooms within a captured landmark, is returned to the user within seconds. The system matches against a database of images taken from multiple viewpoints. Various navigational aids and sensors are used to optimize accuracy and retrieval time by providing complementary information about the relative position and viewpoint of each query image. This makes our system less sensitive to orientation, scale, and perspective distortion. A multi-scale approach and a reliability-score model are proposed in this application. Our system is validated by several experiments on campus, with images taken by camera phones of different resolutions, from different positions, and at different times of day.


Journal of Psychosomatic Research | 2013

Body mass index and risk of mental disorders in the general population: Results from the Singapore Mental Health Study

Mythily Subramaniam; Louisa Picco; Vincent Yf He; Janhavi Ajit Vaingankar; Edimansyah Abdin; Swapna Verma; Gurpreet Rekhi; Mabel Yap; Jimmy Addison Lee; Siow Ann Chong

OBJECTIVE The aims of the current study were to elucidate the association between body mass index (BMI) and mental disorders and to examine whether these associations are moderated by socio-demographic correlates and comorbid physical disorders. METHODS The Singapore Mental Health Study (SMHS) surveyed adult Singapore residents (Singapore citizens and permanent residents) aged 18 years and above. The survey was conducted from December 2009 to December 2010. Diagnoses of mental disorders were established using the World Mental Health Composite International Diagnostic Interview version 3.0 (CIDI 3.0). BMI was calculated from height and weight self-reported by respondents. The EuroQol-5 Dimensions (EQ-5D) was used to measure health-related quality of life (HRQoL) in the sample. RESULTS A total of 6616 respondents completed the study (response rate: 75.9%), constituting a representative sample of the adult resident population in Singapore. Being underweight was associated with both lifetime (adjusted odds ratio (OR): 2.3) and 12-month obsessive-compulsive disorder (adjusted OR: 4.4). Obesity was associated with 12-month alcohol dependence (adjusted OR: 8.4). There were no significant differences in the EQ-5D indices or the EQ-VAS scores among the four BMI groups in the population. CONCLUSIONS Our findings are somewhat unique and differ from those reported in research from Western countries. There is a need for further cross-cultural research to explore and identify the genetic, metabolic and cultural differences that underlie the interaction between obesity and mental illness.


computer vision and pattern recognition | 2015

A low-dimensional step pattern analysis algorithm with application to multimodal retinal image registration

Jimmy Addison Lee; Jun Cheng; Beng Hai Lee; Ee Ping Ong; Guozhen Xu; Damon Wing Kee Wong; Jiang Liu; Augustinus Laude; Tock Han Lim

Existing feature descriptor-based methods for retinal image registration are mainly based on the scale-invariant feature transform (SIFT) or the partial intensity invariant feature descriptor (PIIFD). Although these descriptors are widely exploited, they do not work well on unhealthy multimodal images with severe diseases. Additionally, the descriptors demand high dimensionality to adequately represent the features of interest, and the higher the dimensionality, the greater the consumption of resources (e.g. memory). To this end, this paper introduces a novel registration algorithm coined low-dimensional step pattern analysis (LoSPA), tailored to achieve low dimensionality while providing sufficient distinctiveness to effectively align unhealthy multimodal image pairs. The algorithm locates hypotheses of robust corner features, mainly formed by vascular junctions, by connecting edges from the edge maps. This method is insensitive to intensity changes, and produces uniformly distributed features with high repeatability across the image domain. The algorithm then describes the corner features in a rotation-invariant manner using step patterns. These customized step patterns are robust to non-linear intensity changes, which makes them well suited for multimodal retinal image registration. Apart from its low dimensionality, the LoSPA algorithm achieves about a two-fold higher success rate in multimodal registration on a dataset of severe retinal diseases when compared to the top score among state-of-the-art algorithms.


International Journal of Mobile Learning and Organisation | 2013

A platform on the cloud for self-creation of mobile interactive learning trails

Yiqun Li; Aiyuan Guo; Jimmy Addison Lee; Gede Putra Kusuma Negara

We present a system to create mobile interactive learning trails. The system includes a web portal running on the Amazon cloud for people without programming skills to create trails for outdoor field-trip learning, and two universal apps, for iOS and Android phones respectively, to run the learning trails. It enables rapid and easy creation of learning trails, within 15 minutes and without mobile app development. The learning contents can be customised by teachers, and are activated by snapping pictures of physical Objects of Interest (OOIs) or by entering a geographic area. Image recognition technology is used to identify which OOI a picture was captured from, and to return the relevant contents pre-associated with that OOI.


international conference on acoustics, speech, and signal processing | 2016

Non-verbal speech analysis of interviews with schizophrenic patients

Yasir Tahir; Debsubhra Chakraborty; Justin Dauwels; Nadia Magnenat Thalmann; Daniel Thalmann; Jimmy Addison Lee

Negative symptoms in schizophrenia are associated with significant burden and functional impairment, especially in speech production. In clinical practice today, there are no robust treatments for negative symptoms, and one obstacle to research is the lack of an objective measure. To this end, we explore non-verbal speech cues as objective measures. Specifically, we extract these cues while schizophrenic patients are interviewed by psychologists. We analyzed interviews of 15 patients enrolled in an observational study on the effectiveness of Cognitive Remediation Therapy (CRT). The subject group (undergoing CRT) and control group (not undergoing CRT) contain 8 and 7 individuals, respectively. The patients were recorded during three sessions while being evaluated for negative symptoms over a 12-week follow-up period. To validate the non-verbal speech cues, we computed their correlation with the Negative Symptom Assessment (NSA-16). Our results suggest a strong correlation between certain measures of the two rating sets. Supervised prediction of the subjective ratings from the non-verbal speech features with leave-one-person-out cross-validation achieves reasonable accuracy of 53-80%. Furthermore, the non-verbal cues can be used to reliably distinguish between subjects and controls, as supervised learning methods classify the two groups with 80-93% accuracy.
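The leave-one-person-out protocol mentioned above can be sketched generically. The nearest-centroid classifier and the feature vectors below are hypothetical stand-ins (the paper does not specify this model); the sketch only illustrates how holding out one person at a time yields an unbiased accuracy estimate on small per-person datasets.

```python
from collections import defaultdict

def nearest_centroid_predict(train, test_x):
    # train: list of (feature_vector, label); classify test_x by the
    # closest class mean under squared Euclidean distance.
    sums, counts = {}, defaultdict(int)
    for x, y in train:
        if y not in sums:
            sums[y] = [0.0] * len(x)
        sums[y] = [s + v for s, v in zip(sums[y], x)]
        counts[y] += 1
    best, best_d = None, float("inf")
    for y, s in sums.items():
        centroid = [v / counts[y] for v in s]
        d = sum((a - b) ** 2 for a, b in zip(test_x, centroid))
        if d < best_d:
            best, best_d = y, d
    return best

def leave_one_person_out_accuracy(data):
    # data: one (feature_vector, label) entry per person, so holding
    # out one entry excludes that person from training entirely.
    hits = 0
    for i, (x, y) in enumerate(data):
        train = data[:i] + data[i + 1:]
        hits += nearest_centroid_predict(train, x) == y
    return hits / len(data)
```

With real data, each person's feature vector would aggregate the non-verbal speech cues across that person's interview sessions.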


wireless, mobile and ubiquitous technologies in education | 2012

Visual Interactive and Location Activated Mobile Learning

Yiqun Li; Aiyuan Guo; Jimmy Addison Lee; Yan Gao; Yii Leong Ling

In this paper we propose a smartphone application for interactive mobile learning. Image recognition technology is used to link physical objects seen through the camera to relevant information, so that visual interactive learning can be realized through the phone's built-in camera. With the GPS sensor, location-activated learning is also possible. Combining the camera and the GPS sensor, multimedia contents can be activated either by snapping a picture of a real-world object or by entering a predefined geographical area. This makes the learning process more interesting and intuitive. A web portal is developed for teachers to create learning trails for different learning objectives, and mobile apps are developed for the iOS and Android platforms. A trial was conducted with school teachers and students, and positive feedback was obtained.


international conference of the ieee engineering in medicine and biology society | 2016

An automatic quantitative measurement method for performance assessment of retina image registration algorithms

Ee Ping Ong; Jimmy Addison Lee; Guozhen Xu; Beng Hai Lee; Damon Wing Kee Wong

This paper presents a novel automatic quantitative measurement method for assessing the performance of image registration algorithms designed for registering retina fundus images. To achieve automatic quantitative measurement, we propose the use of edges and an edge dissimilarity measure for determining the performance of retina image registration algorithms. Our input is a registered pair of retina fundus images obtained using any of the existing retina image registration algorithms in the literature. To compute the edge dissimilarity score, we propose an edge dissimilarity measure which we call the “robustified Hausdorff distance”. We show that our proposed approach is feasible by comparing its results with visual evaluation when tested on images from the DRIVERA and G9 datasets.
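The abstract does not give the formula for the "robustified Hausdorff distance", but a common way to robustify the Hausdorff distance is to replace the max over nearest-neighbour distances with a quantile, so that a few stray edge pixels cannot dominate the score. An illustrative sketch under that assumption (not necessarily the paper's exact definition), with edge maps represented as point sets:

```python
def directed_distance(A, B, q=0.9):
    # For each edge point in A, find the distance to the nearest point
    # in B; return the q-th quantile of those distances instead of the
    # max, so outlier edge pixels cannot dominate the score.
    ds = sorted(min(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                    for bx, by in B)
                for ax, ay in A)
    return ds[min(int(q * len(ds)), len(ds) - 1)]

def robust_hausdorff(A, B, q=0.9):
    # Symmetric edge dissimilarity: a low value means the two edge
    # maps agree, i.e. the registration aligned the vasculature well.
    return max(directed_distance(A, B, q), directed_distance(B, A, q))
```

The brute-force nearest-neighbour search here is O(|A|·|B|); for full-resolution edge maps one would use a distance transform or k-d tree instead.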


asian conference on computer vision | 2016

F-SORT: An Alternative for Faster Geometric Verification

Jacob Chan; Jimmy Addison Lee; Kemao Qian

This paper presents a novel geometric verification approach coined Fast Sequence Order Re-sorting Technique (F-SORT), capable of rapidly validating matches between images under arbitrary viewing conditions. Using a framework that re-sorts image features into local sequence groups along different orientations, we simulate the enforcement of geometric constraints within each sequence group across various views and rotations. While conventional geometric verification (e.g. RANSAC) and state-of-the-art fully affine invariant image matching approaches (e.g. ASIFT) are high in computational cost, our approach is several times less computationally expensive. We evaluate F-SORT on the Stanford Mobile Visual Search (SMVS) and Zurich Buildings (ZuBuD) image databases, comprising 9 image categories in total, and report competitive performance with respect to PROSAC, RANSAC and ASIFT. Out of the 9 categories, F-SORT outperforms PROSAC in all 9, RANSAC in 8 and ASIFT in 7, with reductions in computational cost of over nine-fold, thirty-fold and hundred-fold respectively.
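The principle of checking sequence order along an orientation can be illustrated with a simple hypothetical stand-in, not the paper's exact procedure: project tentative matches onto one orientation, sort by one image's coordinate, and score how much of the other image's sequence preserves that order via a longest-increasing-subsequence ratio. Correct matches under a smooth geometric transform preserve order; random false matches do not.

```python
import bisect
import math

def lis_length(seq):
    # Longest strictly increasing subsequence, O(n log n) patience sort.
    tails = []
    for x in seq:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def order_consistency(matches, angle_deg=0.0):
    # matches: list of ((xa, ya), (xb, yb)) tentative correspondences.
    # Project both images' points onto the chosen orientation, sort by
    # the image-A coordinate, and measure the fraction of the image-B
    # sequence that preserves that order.
    t = math.radians(angle_deg)
    ca, sa = math.cos(t), math.sin(t)
    proj = lambda p: p[0] * ca + p[1] * sa
    order = sorted(matches, key=lambda m: proj(m[0]))
    ranks = [proj(b) for _, b in order]
    return lis_length(ranks) / len(ranks)
```

A verifier in this spirit would evaluate several orientations and keep the matches that participate in the most order-consistent sequences; the cost is a sort per orientation rather than RANSAC's repeated model fitting.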

Collaboration


Dive into Jimmy Addison Lee's collaborations.

Top Co-Authors
Kin Choong Yow

Nanyang Technological University

Jiang Liu

Chinese Academy of Sciences
