
Publications


Featured research published by Arjuna Balasuriya.


IEEE Transactions on Robotics and Automation | 2004

Road-boundary detection and tracking using ladar sensing

Wijerupage Sardha Wijesoma; K.R.S. Kodagoda; Arjuna Balasuriya

Road-boundary detection is an integral and important function in advanced driver-assistance systems and autonomous vehicle navigation systems. A prominent feature of roads in urban, semi-urban, and similar environments, such as theme parks, campus sites, industrial estates, and science parks, is curbs on either side defining the road's boundary. Although vision is the most common and popular sensing modality used by researchers and automotive manufacturers for road-lane detection, it can pose formidable challenges in detecting road curbs under poor illumination, bad weather, and complex driving environments. This paper proposes a novel method based on extended Kalman filtering for fast detection and tracking of road curbs using successive range/bearing readings obtained from a scanning two-dimensional ladar measurement system. Compared with the millimeter-wave radar methods reported in the literature, the proposed technique is simpler and computationally more efficient, and is the first of its kind reported in the literature. Qualitative experimental results from applying the technique in a campus site environment demonstrate its viability, effectiveness, and robustness.
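The heart of this approach is an extended Kalman filter over a geometric curb model. As an illustrative sketch only (the state parameterisation, noise values, and function names below are assumptions, not the paper's formulation), a locally straight curb can be tracked as a polar line (rho, theta), where a ladar return at bearing b and range r lies on the curb when r·cos(b − theta) = rho:

```python
import numpy as np

def ekf_curb_track(readings, x0, P0, r_var=0.02**2, q=(1e-4, 1e-5)):
    """Track a straight curb with an EKF.

    The curb is parameterised in polar line form (rho, theta): the
    predicted ladar range at bearing b is h(x) = rho / cos(b - theta).

    readings : iterable of (bearing, range) pairs from successive scans
    x0, P0   : initial state [rho, theta] and its covariance
    """
    x = np.array(x0, dtype=float)
    P = np.array(P0, dtype=float)
    Q = np.diag(q)
    for b, r in readings:
        P = P + Q                            # curb locally constant; inflate P
        u = b - x[1]
        h = x[0] / np.cos(u)                 # predicted range at this bearing
        H = np.array([[1.0 / np.cos(u),
                       -x[0] * np.sin(u) / np.cos(u) ** 2]])  # Jacobian of h
        S = (H @ P @ H.T).item() + r_var     # innovation variance (scalar)
        K = (P @ H.T) / S                    # Kalman gain, shape (2, 1)
        x = x + K[:, 0] * (r - h)            # state update with innovation
        P = (np.eye(2) - K @ H) @ P          # covariance update
    return x, P
```

Processing each range/bearing reading as a scalar measurement keeps the update cheap, which matches the paper's claim of computational efficiency relative to radar-based methods.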


International Conference on Robotics and Automation | 2010

Spatiotemporal path planning in strong, dynamic, uncertain currents

David R. Thompson; Steve Chien; Yi Chao; Peggy P. Li; Bronwyn Cahill; Julia Levin; Oscar Schofield; Arjuna Balasuriya; Stephanie Petillo; Matt Arrott; Michael Meisinger

This work addresses mission planning for autonomous underwater gliders based on predictions of an uncertain, time-varying current field. Glider submersibles are highly sensitive to prevailing currents, so mission planners must account for ocean tides and eddies. Previous work in variable-current path planning assumes that current predictions are perfect, but in practice these forecasts may be inaccurate. Here we evaluate plan fragility using empirical tests on historical ocean forecasts for which follow-up data is available. We present methods for glider path planning and control in a time-varying current field. A case study scenario in the Southern California Bight uses current predictions drawn from the Regional Ocean Modeling System (ROMS).
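To give a sense of how a current forecast enters path planning, here is a minimal sketch, not the authors' planner: a time-dependent Dijkstra search on a grid, where each edge's travel time depends on the forecast current at the departure cell and time. All names and parameters are illustrative; the search is only optimal when travel times satisfy the usual FIFO (no-overtaking) property.

```python
import heapq
import math

def plan_in_currents(grid_n, start, goal, current, v_veh=1.0, cell=100.0):
    """Time-dependent Dijkstra on a grid_n x grid_n grid of cells.

    current(x, y, t) -> (u, v): forecast current (m/s) at cell, time t.
    Edge travel time uses the ground speed along the edge, v_veh plus the
    current component; edges with no forward headway are impassable.
    Returns (arrival_time, path).
    """
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        t, node = heapq.heappop(pq)
        if node == goal:                       # reconstruct path on arrival
            path = [node]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return t, path[::-1]
        if t > dist.get(node, math.inf):       # stale queue entry
            continue
        x, y = node
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if not (0 <= nx < grid_n and 0 <= ny < grid_n):
                continue
            u, v = current(x, y, t)            # forecast at departure
            g = v_veh + (u * dx + v * dy)      # ground speed along edge
            if g <= 0.05:                      # cannot make headway
                continue
            nt = t + cell / g
            if nt < dist.get((nx, ny), math.inf):
                dist[(nx, ny)] = nt
                prev[(nx, ny)] = node
                heapq.heappush(pq, (nt, (nx, ny)))
    return math.inf, []
```

A uniform 0.5 m/s eastward current makes a 300 m eastward transit take 200 s but the return leg 600 s, which is exactly the asymmetry that makes forecast errors so costly for slow gliders.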


Robotics and Biomimetics | 2005

EOG based control of mobile assistive platforms for the severely disabled

Wijerupage Sardha Wijesoma; Kang Say Wee; Ong Choon Wee; Arjuna Balasuriya; Koh Tong San; Low Kay Soon

Assistive robots are increasingly being used to improve the quality of life of disabled or handicapped people. In this paper, a complete system is presented that can be used by people with extremely limited peripheral mobility who retain the ability for eye motor coordination. The electrooculogram (EOG) signals that result from eye displacement in the orbit of the subject are processed in real time to interpret intent and hence generate appropriate control signals for the assistive device. The effectiveness of the proposed methodology and algorithms is demonstrated using a mobile robot for a limited vocabulary of commands.
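The standing corneo-retinal potential makes the differential EOG roughly proportional to gaze angle, so a small command vocabulary can be obtained by thresholding the two channels. A minimal sketch, assuming a hypothetical command set and threshold (not the paper's signal-processing chain):

```python
def eog_to_command(h_amp, v_amp, thresh=100e-6):
    """Map smoothed horizontal/vertical EOG amplitudes (volts) to one of a
    small vocabulary of platform commands.

    h_amp, v_amp : horizontal and vertical channel amplitudes, where a
                   positive value denotes rightward / upward gaze
    thresh       : dead-band below which gaze is treated as centred
    """
    if abs(h_amp) < thresh and abs(v_amp) < thresh:
        return "stop"                                   # gaze centred
    if abs(h_amp) >= abs(v_amp):                        # dominant channel wins
        return "turn_right" if h_amp > 0 else "turn_left"
    return "forward" if v_amp > 0 else "reverse"
```

In practice the raw EOG drifts and needs filtering and re-calibration before a fixed threshold like this is usable; the sketch only shows the intent-to-command mapping stage.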


IEEE Transactions on Control Systems Technology | 2006

CuTE: curb tracking and estimation

K. R. S. Kodagoda; Wijerupage Sardha Wijesoma; Arjuna Balasuriya

The number of road-accident-related fatalities and damages can be reduced substantially by improving road infrastructure and by enacting and enforcing laws. Further reduction is possible by embedding intelligence in vehicles for safe decision making. Road-boundary information plays a major role in developing such intelligent vehicles. A prominent feature of roads in urban, semi-urban, and similar environments is curbs on either side defining the road's boundary. In this brief, a novel methodology for tracking curbs is proposed. The problem of tracking a curb from a moving vehicle is formulated as tracking a maneuvering target in clutter from a mobile platform using onboard sensors. A curb segment is presumed to be the maneuvering target and is modeled as a nonlinear Markov switching process. The target's (curb's) orientation and location measurements are obtained simultaneously using a two-dimensional (2-D) scanning laser radar (LADAR) and a charge-coupled device (CCD) monocular camera, and are modeled as traditional base-state observations. Camera images are also used to estimate the target's mode, which is modeled as a discrete-time point process. An effective curb-tracking algorithm, known as Curb Tracking and Estimation (CuTE), using multimodal sensor information is thus synthesized in an image-enhanced interacting multiple model filtering framework. The use and fusion of camera vision and LADAR within this framework provide efficient, effective, and robust tracking of curbs. Extensive experiments conducted in a campus road network demonstrate the viability, effectiveness, and robustness of the proposed method.
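At the core of any interacting multiple model (IMM) filter is the mode-probability update that blends the model-matched filters. The generic step can be sketched as follows (a textbook IMM fragment, not the paper's image-enhanced variant; names are illustrative):

```python
import numpy as np

def imm_mode_update(mu, Pi, likelihoods):
    """One mode-probability update of an interacting multiple model filter.

    mu          : prior mode probabilities, shape (M,)
    Pi          : Markov mode-transition matrix, Pi[i, j] = P(mode j | mode i)
    likelihoods : per-model measurement likelihoods produced by the
                  model-matched filters (e.g. Gaussian innovation likelihoods)
    """
    c = Pi.T @ mu                     # predicted mode probabilities
    mu_post = likelihoods * c         # Bayes update with model likelihoods
    return mu_post / mu_post.sum()    # renormalise
```

A measurement that strongly fits the "maneuvering" model quickly shifts probability mass toward that mode, which is what lets the tracker follow a curb through turns.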


OCEANS'10 IEEE Sydney | 2010

Autonomous adaptive environmental assessment and feature tracking via autonomous underwater vehicles

Stephanie Petillo; Arjuna Balasuriya; Henrik Schmidt

In the underwater environment, spatiotemporally dynamic environmental conditions pose challenges to the detection and tracking of hydrographic features. A useful tool in combating these challenges is Autonomous Adaptive Environmental Assessment (AAEA) employed on board Autonomous Underwater Vehicles (AUVs). AAEA is a process by which an AUV autonomously assesses the hydrographic environment it is swimming through in real time, effectively detecting hydrographic features in the area. This feature-detection process leads naturally to the subsequent active/adaptive tracking of a selected feature. Due to certain restrictions in operating AUVs, this detection-tracking feedback loop can rely only on the AUV's self-collected hydrographic data (e.g., temperature, conductivity, and/or pressure readings). With a basic quantitative definition of an underwater feature of interest, an algorithm can be developed to evaluate a data set and detect that feature. One example of feature tracking with AAEA explored in this paper is tracking the marine thermocline. The AAEA process for thermocline tracking is outlined here, from quantitatively defining the thermocline region and calculating thermal gradients through simulation and implementation of the process on AUVs. Adaptation to varying feature properties, scales, and other challenges in bringing feature tracking with AAEA to implementation in field experiments is addressed, and results from two recent field experiments are presented.
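The thermocline-detection step described above, defining the thermocline region by its vertical thermal gradient, can be sketched from a single temperature-depth profile. The gradient threshold below is an illustrative assumption, not a value from the paper:

```python
import numpy as np

def thermocline_region(depth, temp, grad_thresh=0.1):
    """Locate the thermocline in a vertical temperature profile.

    depth       : strictly increasing depths (m)
    temp        : temperatures at those depths (deg C)
    grad_thresh : |dT/dz| (deg C / m) above which water is 'thermocline'

    Returns (z_top, z_bottom, z_max_grad): the depth band where the
    vertical thermal gradient magnitude exceeds the threshold, plus the
    depth of the strongest gradient; None if no band qualifies.
    """
    dTdz = np.gradient(temp, depth)          # central-difference gradient
    mask = np.abs(dTdz) >= grad_thresh
    if not mask.any():
        return None
    idx = np.where(mask)[0]
    z_top, z_bot = depth[idx[0]], depth[idx[-1]]
    z_star = depth[np.argmax(np.abs(dTdz))]  # sharpest part of the cline
    return z_top, z_bot, z_star
```

On board, the AUV would re-run this on each new profile (yo) and adapt its dive envelope to straddle the returned band, which is the adaptive-tracking loop the abstract describes.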


International Conference on Robotics and Automation | 2007

Autonomous Control of an Autonomous Underwater Vehicle Towing a Vector Sensor Array

Michael R. Benjamin; David Battle; Donald P. Eickstedt; Henrik Schmidt; Arjuna Balasuriya

This paper concerns the autonomous control of an autonomous underwater vehicle (AUV) and the particular considerations required for proper control while towing a 100-meter vector sensor array. Mission-related objectives are tempered by the need to consider the effect of a sequence of maneuvers on the motion of the towed array, which is thought not to tolerate sharp bends or twists in its sensitive material. We describe and motivate an architecture for autonomy structured on the behavior-based control model, augmented with a novel approach for performing behavior coordination using multi-objective optimization. We provide detailed in-field experimental results from recent exercises with two 21-inch AUVs in Monterey Bay, California.
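The published coordination scheme solves a multi-objective optimization over each behavior's objective function; the weighted-sum reduction below is only a sketch of the idea on a sampled action space, with hypothetical behaviors (transit versus array protection), not the paper's solver:

```python
def coordinate(behaviors, candidates):
    """Pick the action maximising the weighted sum of behavior utilities.

    behaviors  : list of (weight, utility_fn) pairs; each utility_fn maps a
                 candidate action (here a heading, degrees) to a score
    candidates : iterable of candidate actions to evaluate
    """
    return max(candidates,
               key=lambda a: sum(w * f(a) for w, f in behaviors))

# Two illustrative behaviors: one wants to head to a waypoint at 0 deg,
# one penalises deviating from the current heading (40 deg) to keep the
# towed array from bending sharply. Quadratic penalties force a compromise.
transit = (1.0, lambda h: -(h - 0.0) ** 2)
protect_array = (1.0, lambda h: -(h - 40.0) ** 2)
```

With equal weights the maximiser settles on the midpoint heading, a gentler turn than the transit behavior alone would command, which is exactly the tempering of mission objectives the abstract describes.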


Intelligent Robots and Systems | 2001

Road edge and lane boundary detection using laser and vision

Wijerupage Sardha Wijesoma; K.R.S. Kodagoda; Arjuna Balasuriya; Eam Khwang Teoh

This paper presents a methodology for extracting road-edge and lane information for smart and intelligent navigation of vehicles. The range information provided by a fast laser range-measuring device is processed by an extended Kalman filter to extract the road-edge or curb information. The resultant road-edge information is used to aid the extraction of the lane boundary from a CCD camera image. The Hough transform is used to extract candidate lane-boundary edges, and the most probable lane boundary is determined using an active line model and by minimizing an appropriate energy function. Experimental results are presented to demonstrate the effectiveness of the combined laser and vision strategy for road-edge and lane-boundary detection.


International Conference on Control, Automation, Robotics and Vision | 2002

A laser and a camera for mobile robot navigation

Wijerupage Sardha Wijesoma; K.R.S. Kodagoda; Arjuna Balasuriya

On most urban roads, and in similar environments such as theme parks, campus sites, industrial estates, and science parks, the painted lane markings that exist may not be easily discernible by CCD cameras due to poor lighting, bad weather conditions, and inadequate maintenance. An important feature of roads in such environments is the existence of pavements or curbs on either side defining the road boundaries. These curbs, which are mostly parallel to the road, can be harnessed to extract useful features of the road for implementing autonomous navigation or driver-assistance systems. However, extraction of the curb or road-edge feature from vision data is a formidable task, as the curb is not conspicuous in the vision image, and requires extensive image processing, heuristics, and very favourable ambient lighting. In our approach, the curb data is extracted speedily using range data provided by a 2D laser range-measurement device. This information is then used to extract the mid-line(s) in the vision image using an extended Kalman filtering (EKF) approach. Subsequently, the mid-line data is used for prediction of the road boundaries. Experimental results are presented to demonstrate the viability and effectiveness of the proposed methodology.


International Journal of Distributed Sensor Networks | 2012

Constructing a Distributed AUV Network for Underwater Plume-Tracking Operations

Stephanie Petillo; Henrik Schmidt; Arjuna Balasuriya

In recent years, there has been significant concern about the impacts of offshore oil spill plumes and harmful algal blooms on the coastal ocean environment and biology, as well as on the human populations adjacent to these coastal regions. Thus, it has become increasingly important to determine the 3D extent of these ocean features (“plumes”) and how they evolve over time. The ocean environment is largely inaccessible to sensing directly by humans, motivating the need for robots to intelligently sense the ocean for us. In this paper, we propose the use of an autonomous underwater vehicle (AUV) network to track and predict plume shape and motion, discussing solutions to the challenges of spatiotemporal data aliasing (coverage versus resolution), underwater communication, AUV autonomy, data fusion, and coordination of multiple AUVs. A plume simulation is also developed here as the first step toward implementing behaviors for autonomous, adaptive plume tracking with AUVs, modeling a plume as a sum of Fourier orders and examining the resulting errors. This is then extended to include plume forecasting based on time variations, and future improvements and implementation are discussed.
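The plume model sketched in the abstract, a boundary expressed as a sum of Fourier orders, can be written down directly. The packing of coefficients and the least-squares fit below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def plume_radius(angles, c):
    """Radius of a star-shaped plume boundary as a truncated Fourier series
    r(phi) = a0 + sum_k [a_k cos(k phi) + b_k sin(k phi)], with
    coefficients packed flat as [a0, a1, b1, a2, b2, ...]."""
    r = np.full_like(angles, c[0], dtype=float)
    for k in range(1, (len(c) - 1) // 2 + 1):
        r += c[2 * k - 1] * np.cos(k * angles) + c[2 * k] * np.sin(k * angles)
    return r

def fit_plume(angles, radii, order):
    """Least-squares fit of the Fourier coefficients to sampled boundary
    radii, e.g. boundary crossings logged by the AUV network."""
    cols = [np.ones_like(angles)]
    for k in range(1, order + 1):
        cols += [np.cos(k * angles), np.sin(k * angles)]
    A = np.stack(cols, axis=1)
    c, *_ = np.linalg.lstsq(A, radii, rcond=None)
    return c
```

Truncating the series at a low order is what trades boundary resolution against the number of AUV samples needed, the coverage-versus-resolution aliasing tension the abstract raises.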


Proceedings of 1998 International Symposium on Underwater Technology | 1998

Autonomous target tracking by underwater robots based on vision

Arjuna Balasuriya; Tamaki Ura

This paper proposes and demonstrates the performance of a target-tracking system for underwater robots (URs), based on image data of the target captured by a single CCD camera mounted on the robot. The use of target features to derive navigational commands for the UR is addressed. The proposed system simplifies the underwater navigation problem by introducing additional parameters from the visual image. Underwater positioning is one of the major problems encountered by URs; in the proposed methodology, such positioning is not required, making the navigation problem easier. The behavior of the target object in the vicinity of the UR is observed using CCD data and interpreted into navigational commands. Implementations of this system for underwater cable-tracking and moving-object-following missions, using the UR test-bed available at the University of Tokyo, are presented.

Collaboration


Dive into Arjuna Balasuriya's collaborations.

Top Co-Authors

Zhen Jia, Nanyang Technological University
Wijerupage Sardha Wijesoma, Nanyang Technological University
Henrik Schmidt, Massachusetts Institute of Technology
Tamaki Ura, Kyushu Institute of Technology
K.R.S. Kodagoda, Nanyang Technological University
Bharath Kalyan, Nanyang Technological University
Stephanie Petillo, Massachusetts Institute of Technology
Michael R. Benjamin, Massachusetts Institute of Technology
Yang Fan, Nanyang Technological University