Ranga Rodrigo
University of Moratuwa
Publications
Featured research published by Ranga Rodrigo.
Advanced Robotics | 2007
Zhenhe Chen; Jagath Samarabandu; Ranga Rodrigo
Simultaneous localization and map-building (SLAM) continues to draw considerable attention in the robotics community due to the advantages it can offer in building autonomous robots. SLAM concerns the ability of an autonomous robot, starting in an unknown environment, to incrementally build a map of that environment and simultaneously localize itself within this map. Recent advances in computer vision have contributed a whole class of solutions for the challenge of SLAM. This paper surveys contemporary progress in SLAM algorithms, especially those using computer vision as the main sensing means, i.e., visual SLAM. We categorize and introduce these visual SLAM techniques under four main frameworks: Kalman filter (KF)-based, particle filter (PF)-based, expectation-maximization (EM)-based and set-membership-based schemes. Important topics of SLAM involving the different frameworks are also presented. This article complements other surveys in this field by being current as well as by reviewing a large body of research in the area of vision-based SLAM that has not previously been covered. It clearly identifies the inherent relationship between state estimation via the KF versus the PF and EM techniques, all of which derive from Bayes' rule. In addition to the probabilistic methods found in other surveys, non-probabilistic approaches are also covered.
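As a brief aside on the common Bayesian foundation the survey points to, KF-, PF- and EM-based SLAM can all be viewed as ways of realizing the recursive Bayes filter (a standard formulation, not quoted from the paper):

```latex
\mathrm{bel}(x_t) = \eta \, p(z_t \mid x_t) \int p(x_t \mid x_{t-1}, u_t)\, \mathrm{bel}(x_{t-1})\, dx_{t-1}
```

Here $x_t$ is the joint robot-and-map state, $u_t$ the control input, $z_t$ the measurement and $\eta$ a normalizer; the KF evaluates this recursion in closed form under Gaussian assumptions, while the PF approximates the belief with weighted samples.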
Canadian Conference on Electrical and Computer Engineering | 2006
Ranga Rodrigo; Wenxia Shi; Jagath Samarabandu
Detection of straight lines in an image is a fundamental requirement for many applications in computer vision. We formulate the straight-line detection task as an energy-minimization problem. This formulation enables the detection of lines in a global manner, in contrast to the local detection methods used in conventional algorithms. As a result, the proposed straight-line detection algorithm can handle virtually co-located straight lines, slightly curved lines and edge linking in a unified manner. In addition, due to its global nature, the algorithm is not deceived by image noise into producing spurious line segments. Therefore, the proposed algorithm can robustly detect straight lines. The main component of the algorithm is the formulation of the energy to be minimized. The contribution to this energy function is lower at a pixel that is a good candidate to be a member of an existing line segment, as judged by the directional gradients; for a poor candidate pixel, joining a line segment is costly, but not impossible. This energy optimization is done using dynamic-programming snakes. Since the algorithm is global and no gradient calculations are used for the local motion of nodes, our algorithm is robust. However, the optimization process takes longer than existing straight-line detection algorithms. Results are given for detecting straight lines in indoor environments.
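The abstract does not give the exact energy; a minimal sketch of a directional-gradient-based per-pixel cost of the kind described (all details assumed, not the authors' formulation) might look like:

```python
import numpy as np
from scipy import ndimage

def line_energy(image, theta):
    """Illustrative per-pixel cost for belonging to a line of orientation
    theta: low where the gradient is strong and normal to the line."""
    gx = ndimage.sobel(image.astype(float), axis=1)  # horizontal gradient
    gy = ndimage.sobel(image.astype(float), axis=0)  # vertical gradient
    mag = np.hypot(gx, gy)
    grad_dir = np.arctan2(gy, gx)
    # An edge pixel lying on the line has its gradient along the line normal.
    alignment = np.abs(np.cos(grad_dir - (theta + np.pi / 2)))
    # Good candidates (strong, aligned gradients) contribute little energy;
    # poor candidates are costly, but not impossible, to include.
    return 1.0 / (1.0 + mag * alignment)
```

A dynamic-programming snake would then move its nodes so as to minimize the sum of such terms along the curve.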
Systems, Man and Cybernetics | 2009
Ranga Rodrigo; Mehrnaz Zouqi; Zhenhe Chen; Jagath Samarabandu
Robust feature tracking is a requirement for many computer vision tasks such as indoor robot navigation. However, indoor scenes are characterized by poorly localizable features. As a result, indoor feature tracking without artificial markers is challenging and remains an attractive problem. We propose to solve this problem by constraining the locations of a large number of nondistinctive features with several planar homographies that are strategically computed using distinctive features. We experimentally show the need for multiple homographies and propose an illumination-invariant local-optimization scheme for motion refinement. The use of a large number of nondistinctive features within the constraints imposed by planar homographies allows us to gain robustness. Moreover, the lower computational cost of estimating these nondistinctive features helps maintain the efficiency of the proposed method. Our local-optimization scheme produces subpixel-accurate feature motion. As a result, we are able to achieve robust and accurate feature tracking.
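As a rough sketch of the homography constraint described above (using modern OpenCV primitives rather than the authors' implementation; the input arrays are hypothetical):

```python
import cv2
import numpy as np

def predict_nondistinctive(distinctive_prev, distinctive_curr, nondistinctive_prev):
    """Fit one planar homography to matched distinctive features, then use
    it to predict where the nondistinctive features moved."""
    H, inliers = cv2.findHomography(distinctive_prev, distinctive_curr,
                                    cv2.RANSAC, 5.0)
    pts = nondistinctive_prev.reshape(-1, 1, 2).astype(np.float32)
    predicted = cv2.perspectiveTransform(pts, H)
    return predicted.reshape(-1, 2)  # starting points for local refinement
```

The paper uses several such homographies (one per dominant plane) and refines the predicted locations with an illumination-invariant local optimization; the sketch shows the constraint for a single plane only.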
International Conference on Industrial and Information Systems | 2009
Mahendra Samarawickrama; Ajith Pasqual; Ranga Rodrigo
A single-chip FPGA implementation of a vision core is an efficient way to design fast and compact embedded vision systems from the PCB design level upward. The scope of this research is to design a novel FPGA-based parallel architecture for embedded vision built entirely with on-chip FPGA resources. We designed it by utilizing block RAMs and I/O interfaces on the FPGA. As a result, the system is compact, fast and flexible. We evaluated this architecture on several mid-level neighborhood algorithms using a Xilinx Virtex-2 Pro (XC2VP30) FPGA. The vision core, running at a 100 MHz system clock, supports image processing on low-resolution 128×128-pixel images at up to 200 images per second, and the results are accurate. We have compared our results with existing FPGA implementations. The performance of the algorithms could be substantially improved by applying further parallelism.
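To put the quoted figures in perspective, a back-of-envelope cycle budget follows directly from the numbers in the abstract (the calculation is ours, not the paper's):

```python
clock_hz = 100e6              # 100 MHz system clock
pixels = 128 * 128            # low-resolution frame
fps = 200                     # images per second
pixel_rate = pixels * fps                  # ~3.28 Mpixels/s
cycles_per_pixel = clock_hz / pixel_rate   # ~30 clock cycles per pixel
print(f"{pixel_rate / 1e6:.2f} Mpixel/s, {cycles_per_pixel:.0f} cycles/pixel")
```

Roughly 30 clock cycles per pixel leaves headroom for the mid-level neighborhood operations evaluated, which is consistent with the claim that further parallelism could raise throughput substantially.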
International Conference on Information and Automation | 2006
Ranga Rodrigo; Zhenhe Chen; Jagath Samarabandu
Monocular-vision-based robot navigation requires feature tracking for localization. In this paper, we present a tracking system that uses both discriminative features and less discriminative features. Discriminative features such as SIFT are easily tracked and useful for obtaining initial estimates of transforms such as affinities and homographies. On the other hand, less discriminative features such as Harris corners and manually selected features are not easily tracked in a subsequent frame due to problems in matching. We use SIFT features to obtain estimates of the planar homographies representing the motion of the major planar structures in the scene. The planar-structure assumption is valid for indoor and architectural scenes. The combination of discriminative and less discriminative features is tracked using the predictions given by these homographies. Then, normalized cross-correlation matching is used to find the exact matches. This produces robust matching, and the feature motion can be accurately estimated. We show the performance of our system on real image sequences.
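The normalized cross-correlation step can be sketched with OpenCV's template matching (an illustration under assumed window sizes, not the authors' code; points are assumed to lie away from image borders):

```python
import cv2

def refine_match(prev_frame, curr_frame, prev_pt, predicted_pt, w=8, search=5):
    """Refine a homography-predicted feature location by normalized
    cross-correlation within a small search window."""
    x, y = map(int, prev_pt)
    template = prev_frame[y - w:y + w + 1, x - w:x + w + 1]
    px, py = map(int, predicted_pt)
    region = curr_frame[py - w - search:py + w + search + 1,
                        px - w - search:px + w + search + 1]
    scores = cv2.matchTemplate(region, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(scores)  # location of the highest score
    # Map the best offset within the search window back to image coordinates.
    return (px - search + best[0], py - search + best[1])
```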
International Conference on Robotics and Automation | 2005
Duane J. Jacques; Ranga Rodrigo; Kenneth A. McIsaac; Jagath Samarabandu
In this work, we have taken the first step towards the creation of a computerized seeing-eye guide dog. The system we present extends the development of assistive technology for the visually impaired into a new area: object tracking and visual servoing. The system uses computer vision to provide a kind of surrogate sight for the human user, sensing information from the environment and communicating it through haptic signalling. Our proof-of-concept prototype is a low-cost wearable system that uses a colour camera to analyze a scene and recognize a desired object, then generates tactile cues to steer the wearer's hand towards the object. We have proven the system in trials with random users in an unstructured environment.
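The abstract does not state how the object is recognized; purely as an illustration, a simple colour-threshold detector driving a left/ahead/right cue could look like the following (the HSV range and thresholds are hypothetical):

```python
import cv2
import numpy as np

def steering_cue(frame_bgr, lo=(100, 120, 70), hi=(130, 255, 255)):
    """Illustrative only: segment a coloured target, then map the horizontal
    offset of its centroid to a coarse haptic steering cue."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
    m = cv2.moments(mask)
    if m["m00"] < 1e-3:
        return "no-target"
    cx = m["m10"] / m["m00"]               # centroid column
    offset = cx / frame_bgr.shape[1] - 0.5  # -0.5 (left) .. +0.5 (right)
    return "left" if offset < -0.1 else "right" if offset > 0.1 else "ahead"
```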
International Conference on Mechatronics and Automation | 2005
Ranga Rodrigo; Jagath Samarabandu
The structure and camera pose obtained using multiple-view-geometry-based techniques cannot readily be used for robot localization and mapping. This is because the recovered structure and pose relate to the actual environment and motion only up to a transform. In this paper, a method to localize the robot using monocular vision is presented. The assumptions are that the initial pose of the robot is known and that five or more landmarks (true world points) can be identified. If two or more dissimilar views of at least five non-coplanar feature points are initially available, subsequent robot locations with respect to the landmarks in view can be established. The exploration of the environment can then take place, incorporating new feature points as the robot moves and successive images are acquired. Feature points that are no longer present in the field of view have to be handled, along with occluded ones. In the presented method, the recovered structure and knowledge of the intrinsic parameters of the camera are used to obtain the metric structure. Depending on the number of images considered at a time, the structure recovery can be done using epipolar constraints or using the factorization method. The coordinates of the known landmarks are used to calculate the true 3D world coordinates of the feature points. The current location of the robot is established with respect to these landmarks. The world coordinates of subsequently observed feature points are obtained using the full camera calibration available following robot localization. The proposed method avoids cumbersome stereo-rig calibration. It naturally uses the new feature information that becomes available as the robot moves for incremental localization. The performance of the algorithm is verified with simulated and real results.
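The two-view step of such a pipeline can be sketched with modern OpenCV primitives (the paper predates these functions, so this is only an analogue of the idea; the metric upgrade from known landmarks is reduced here to a single scale factor):

```python
import cv2

def two_view_pose(pts1, pts2, K, known_dist=None, est_dist=None):
    """Recover relative camera pose up to scale from two views, then fix
    the scale using a known landmark distance."""
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    if known_dist is not None and est_dist is not None:
        t = t * (known_dist / est_dist)  # metric scale from a known landmark
    return R, t
```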
International Conference on Information and Automation | 2010
Buddhika Maldeniya; Dinindu Nawarathna; Kanishka Wijayasekara; Tharindu Wijegoonasekara; Ranga Rodrigo
To obtain depth perception in computer vision, pairs of stereo images must be processed. Carrying out this process in real time is computationally challenging, because it requires searching for matches between objects in both images. The search is significantly simplified if the images are rectified. Stereo image rectification involves a matrix transformation which, being computationally demanding, will not produce real-time results when done in software. Therefore, video streaming and the matrix transformation are not usually implemented in the same system. Our product is a stereo camera pair that produces rectified, real-time image output at a resolution of 320×240 and a frame rate of 15 fps, and delivers it via a 100 Mbps Ethernet interface. We use a Spartan-3E FPGA for real-time processing, within which we implement an image-rectification algorithm.
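For reference, the same rectification can be expressed in software with OpenCV (a sketch of the underlying transformation, not the FPGA implementation; the calibration inputs are assumed given):

```python
import cv2

def make_rectify_maps(K1, D1, K2, D2, R, T, size=(320, 240)):
    """Precompute per-pixel remapping tables for a calibrated stereo pair;
    each incoming frame is then warped with a single remap call."""
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    map_l = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_16SC2)
    map_r = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_16SC2)
    return map_l, map_r

# Per frame: rectified_left = cv2.remap(left, *map_l, cv2.INTER_LINEAR)
```

Because the maps are fixed once the cameras are calibrated, the per-frame work is pure memory remapping, which is exactly what makes a streaming FPGA implementation attractive.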
International Conference on Information and Automation | 2010
Ranga Dabarera; Ranga Rodrigo
In elephant management and conservation, it is vital to have non-invasive methods to track elephants. Image-based recognition is a non-invasive mechanism for elephant tracking, but the manual method is inefficient owing to the difficulty of handling large amounts of data from multiple sources. To mitigate the drawbacks of the manual method, we propose a computer-vision-based, automated elephant-recognition mechanism that relies mainly on appearance-based recognition algorithms. We have tested the feasibility of the system running on a web-based interface, which allows researchers and conservationists around the world to participate actively in elephant conservation.
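The abstract names appearance-based recognition without further detail; an eigen-image-style sketch of such a recognizer (all parameters hypothetical) could be:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def train_recognizer(images, labels, n_components=50):
    """images: (N, H*W) array of flattened, aligned grayscale photos of
    known individuals; labels: their identities."""
    pca = PCA(n_components=n_components).fit(images)
    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(pca.transform(images), labels)
    return pca, knn

def identify(pca, knn, query):
    """Project a new photo into the appearance subspace and return the
    identity of its nearest neighbor."""
    return knn.predict(pca.transform(query.reshape(1, -1)))[0]
```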
International Conference on Information and Automation | 2010
Mahendra Samarawickrama; Ranga Rodrigo; Ajith Pasqual
Controlling the data flow between device interfaces, processing blocks and memories in a vision system is a complex aspect of hardware implementation. In this research, a high-level synthesis (HLS) tool is used to design, implement and test a vision system, within the context of the required control, synchronization and parameterization, on a processor-based platform. In addition, both HLS tools and HDL were used to develop the processing cores, and the performance of the two versions was analyzed and compared. The benchmarked vision core consists of a custom vision coprocessor with efficient memory and bus interfaces. Performance properties such as accuracy, throughput and efficiency are measured and presented. A Xilinx XC5VLX110T FPGA was used to prototype the hardware platforms. According to the results, pipeline length and resource utilization comparable to the HDL counterpart were achieved without any complex optimizations. Our image pre-processing architecture, implemented using a high-level language (HLL), is faster than an optimized software implementation on an Intel Core 2 Duo CPU. The development time using AccelDSP was roughly five times shorter than using Verilog. Therefore, the availability of competent high-level synthesis tools can significantly reduce costs and design constraints in embedded image-processing implementations on FPGAs.