Syed Omer Gilani
National University of Singapore
Publications
Featured research published by Syed Omer Gilani.
International Conference on Human-Computer Interaction | 2007
Peng Song; Stefan Winkler; Syed Omer Gilani; ZhiYing Zhou
We designed and implemented a vision-based projected tabletop interface for finger interaction. The system offers a simple, quick setup and an economical design. The projection onto the tabletop provides more comfortable and direct viewing for users, and more natural, intuitive, yet flexible interaction than classical or tangible interfaces. Homography calibration techniques are used to provide geometrically compensated projections on the tabletop. A robust finger tracking algorithm is proposed to enable accurate and efficient interaction with this interface. Two applications have been implemented based on this interface.
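The homography calibration step maps points between the camera view and the projected tabletop surface. As an illustration of the underlying math (not the authors' implementation), here is a minimal numpy sketch that estimates a homography from four point correspondences via the direct linear transform (DLT); in practice a library routine such as OpenCV's `findHomography` would typically be used.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src using the DLT method.

    src, dst: sequences of (x, y) corresponding points, at least 4 pairs
    with no three points collinear.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A, dtype=float)
    # The homography is the null-space vector of A, i.e. the right
    # singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map (N, 2) points through H using homogeneous coordinates."""
    pts = np.asarray(pts, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homo @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

With four correspondences the estimate is exact, which is how a tabletop system can be calibrated by clicking the four corners of the projected area in the camera image.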
IEEE Journal of Selected Topics in Signal Processing | 2014
Dwarikanath Mahapatra; Syed Omer Gilani; Mukesh Kumar Saini
Extracting moving and salient objects from videos is important for many applications like surveillance and video retargeting. In this paper we use spatial and temporal coherency information to segment salient objects in videos. While many methods use motion information from videos, they do not exploit coherency information which has the potential to give more accurate saliency maps. Spatial coherency maps identify regions belonging to regular objects, while temporal coherency maps identify regions with high coherent motion. The two coherency maps are combined to obtain the final spatio-temporal map identifying salient regions. Experimental results on public datasets show that our method outperforms two competing methods in segmenting moving objects from videos.
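The abstract does not specify how the spatial and temporal coherency maps are fused, so the sketch below assumes a simple normalized product as the combination rule; the function names and the threshold value are illustrative, not taken from the paper.

```python
import numpy as np

def normalize(m):
    """Scale a map to [0, 1]; a flat map becomes all zeros."""
    m = np.asarray(m, dtype=float)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def combine_coherency(spatial, temporal):
    """Fuse spatial and temporal coherency maps into one saliency map.

    The elementwise product favors regions that are coherent in BOTH
    maps; this fusion rule is an illustrative assumption.
    """
    s, t = normalize(spatial), normalize(temporal)
    return normalize(s * t)

def segment(saliency, thresh=0.5):
    """Binary mask of salient regions from the fused map."""
    return saliency >= thresh
```

A multiplicative fusion suppresses regions that are coherent only spatially (static textured objects) or only temporally (camera motion), keeping regions that satisfy both cues.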
International Conference on Image Processing | 2013
Syed Omer Gilani; Ramanathan Subramanian; Huang Hua; Stefan Winkler; Shih-Cheng Yen
Image appeal is determined by factors such as exposure, white balance, motion blur, scene perspective, and semantics. All these factors influence the selection of the best image(s) in a typical photo triaging task. This paper presents the results of an exploratory study on how image appeal affected selection behavior and visual attention patterns of 11 users, who were assigned the task of selecting the best photo from each of 40 groups. Images with low appeal were rejected, while highly appealing images were selected by a majority. Images with higher appeal attracted more visual attention, and users spent more time exploring them. A comparison of user eye fixation maps with three state-of-the-art saliency models revealed that these differences are not captured by the models.
Journal of Vision | 2013
Esther Wu; Syed Omer Gilani; Jeroen J. A. van Boxtel; Ido Amihai; Fook K. Chua; Shih-Cheng Yen
Previous studies have shown that saccade plans during natural scene viewing can be programmed in parallel. This evidence comes mainly from temporal indicators, i.e., fixation durations and latencies. In the current study, we asked whether eye movement positions recorded during scene viewing also reflect parallel programming of saccades. As participants viewed scenes in preparation for a memory task, their inspection of the scene was suddenly disrupted by a transition to another scene. We examined whether saccades after the transition were invariably directed immediately toward the center or were contingent on saccade onset times relative to the transition. The results, which showed a dissociation in eye movement behavior between two groups of saccades after the scene transition, supported the parallel programming account. Saccades with relatively long onset times (>100 ms) after the transition were directed immediately toward the center of the scene, probably to restart scene exploration. Saccades with short onset times (<100 ms) moved to the center only one saccade later. Our data on eye movement positions provide novel evidence of parallel programming of saccades during scene viewing. Additionally, results from the analyses of intersaccadic intervals were also consistent with the parallel programming hypothesis.
Electronic Imaging | 2008
Steven Zhiying Zhou; Syed Omer Gilani; Stefan Winkler
Mobile phones have evolved from passive one-to-one communication devices into powerful handheld computers. Today most new mobile phones can capture images, record video, browse the internet, and much more. Exciting new social applications are emerging on the mobile landscape, such as business card readers, sign detectors, and translators. These applications help people quickly capture information in digital form and interpret it without having to carry laptops or tablet PCs. Despite these advancements, however, very little open source software is available for mobile phones. For instance, there are currently many open source OCR engines for the desktop, but, to our knowledge, none is available on mobile platforms. With this in perspective, we propose a complete text detection and recognition system with speech synthesis capability, built from existing desktop technology. In this work we developed a complete OCR framework from open source desktop subsystems: the popular open source OCR engine Tesseract for text detection and recognition, and the Flite speech synthesis module for adding text-to-speech ability.
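The pipeline described here, Tesseract for OCR followed by Flite for speech synthesis, can be sketched as a thin subprocess wrapper. This assumes the `tesseract` and `flite` binaries are installed and on PATH; the helper names are illustrative, not from the paper's framework.

```python
import subprocess
from pathlib import Path

def tesseract_cmd(image, out_base):
    """Command line for the Tesseract OCR engine.

    Tesseract writes the recognized text to '<out_base>.txt'.
    """
    return ["tesseract", str(image), str(out_base)]

def flite_cmd(text_file):
    """Command line for speaking a text file aloud with Flite."""
    return ["flite", "-f", str(text_file)]

def read_image_aloud(image):
    """OCR an image, then synthesize the recognized text as speech.

    Requires the 'tesseract' and 'flite' binaries on PATH.
    """
    out_base = Path(image).with_suffix("")
    subprocess.run(tesseract_cmd(image, out_base), check=True)
    subprocess.run(flite_cmd(out_base.with_suffix(".txt")), check=True)
```

Keeping the command construction separate from the subprocess calls makes the glue logic testable even on machines without the two tools installed.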
Sensors | 2018
Ernest Nlandu Kamavuako; Usman Sheikh; Syed Omer Gilani; Mohsin Jamil; Imran Khan Niazi
People suffering from neuromuscular disorders such as locked-in syndrome (LIS) are left in a paralyzed state with preserved awareness and cognition. In this study, it was hypothesized that changes in local hemodynamic activity, due to the activation of Broca’s area during overt/covert speech, can be harnessed to create an intuitive Brain–Computer Interface (BCI) based on Near-Infrared Spectroscopy (NIRS). A 12-channel square template was used to cover the inferior frontal gyrus, and changes in hemoglobin concentration corresponding to six overtly (aloud) and six covertly (silently) spoken words were collected from eight healthy participants. An unsupervised feature extraction algorithm was implemented with an optimized support vector machine for classification. For all participants, when considering overt and covert classes regardless of words, classification accuracy of 92.88 ± 18.49% was achieved with oxy-hemoglobin (O2Hb) and 95.14 ± 5.39% with deoxy-hemoglobin (HHb) as a chromophore. For the six-active-class problem of overtly spoken words, 88.19 ± 7.12% accuracy was achieved for O2Hb and 78.82 ± 15.76% for HHb. Similarly, for the six-active-class classification of covertly spoken words, 79.17 ± 14.30% accuracy was achieved with O2Hb and 86.81 ± 9.90% with HHb as an absorber. These results indicate that a control paradigm based on covert speech can be reliably implemented in future NIRS-based BCIs.
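As a rough illustration of such a classification pipeline, the sketch below extracts simple per-channel window statistics (mean and slope) from hemoglobin time courses and classifies them with a nearest-centroid rule. Both the features and the classifier are simplified stand-ins for the paper's unsupervised feature extraction and optimized SVM, and all names are illustrative.

```python
import numpy as np

def window_features(trial):
    """Per-channel mean and linear slope of a hemoglobin time course.

    trial: (channels, samples) array of O2Hb or HHb concentration
    changes. Returns a flat feature vector of length 2 * channels.
    """
    t = np.arange(trial.shape[1])
    means = trial.mean(axis=1)
    # Least-squares slope per channel as a crude hemodynamic trend feature.
    slopes = np.array([np.polyfit(t, ch, 1)[0] for ch in trial])
    return np.concatenate([means, slopes])

class NearestCentroid:
    """Minimal stand-in for the paper's SVM classifier."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array(
            [X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Assign each trial to the class with the closest centroid.
        d = np.linalg.norm(
            X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]
```

In a real NIRS BCI the trials would be baseline-corrected and filtered before feature extraction, and the classifier would be validated per participant, as in the study.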
International Journal of Signal Processing, Image Processing and Pattern Recognition | 2017
Hunza Hayat; Syed Omer Gilani; Mohsin Jamil
X-ray image processing is an effective technique to distinguish between the two major types of arthritis: osteoarthritis (OA) and rheumatoid arthritis (RA). Degenerative bone disorders are diagnosed using X-rays, but X-ray scans alone are insufficient to determine the type of arthritis; image processing can aid in improving the diagnosis. Classification is based on differences in region properties and boundary areas that can be identified using MATLAB. These properties were used to better understand the variation in the knee, hand, and neck regions. The study holds potential as a diagnostic tool for arthritis identification through X-rays and could pave the way for differential diagnosis in the future.
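The region properties used here correspond to measures like those MATLAB's `regionprops` returns. Below is a minimal numpy sketch of a few such descriptors for a single binary mask of a segmented bone region; the chosen descriptors are illustrative, not the paper's feature set.

```python
import numpy as np

def region_properties(mask):
    """Basic shape descriptors of a single binary region.

    Mirrors a few of the measures MATLAB's regionprops provides:
    area, centroid, bounding box, and extent.
    """
    mask = np.asarray(mask, dtype=bool)
    rows, cols = np.nonzero(mask)
    area = rows.size
    r0, r1 = rows.min(), rows.max()
    c0, c1 = cols.min(), cols.max()
    bbox_area = (r1 - r0 + 1) * (c1 - c0 + 1)
    return {
        "area": int(area),
        "centroid": (rows.mean(), cols.mean()),
        "bbox": (int(r0), int(c0), int(r1), int(c1)),
        # Extent: fraction of the bounding box the region fills;
        # eroded, irregular joint regions tend to score lower here.
        "extent": area / bbox_area,
    }
```

Such per-region descriptors, computed for segmented joint areas, can then feed a rule-based or learned classifier separating OA from RA patterns.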
Archive | 2008
Steven Zhiying Zhou; Syed Omer Gilani
Multimedia and Ubiquitous Engineering | 2017
Rabia Ijaz; Mohsin Jamil; Syed Omer Gilani
Multimedia and Ubiquitous Engineering | 2017
Maliha Asad; Syed Omer Gilani; Mohsin Jamil