Junaed Sattar
McGill University
Publications
Featured research published by Junaed Sattar.
IEEE Computer | 2007
Gregory Dudek; Philippe Giguère; Chris Prahacs; Shane Saunderson; Junaed Sattar; Luz Abril Torres-Méndez; Michael Jenkin; Andrew German; Andrew Hogue; Arlene Ripsman; James E. Zacher; Evangelos E. Milios; Hui Liu; Pifu Zhang; Martin Buehler; Christina Georgiades
AQUA, an amphibious robot that swims via the motion of its legs rather than using thrusters and control surfaces for propulsion, can walk along the shore, swim along the surface in open water, or walk on the bottom of the ocean. The vehicle uses a variety of sensors to estimate its position with respect to local visual features and to provide a global frame of reference.
Intelligent Robots and Systems | 2005
Gregory Dudek; Michael Jenkin; Chris Prahacs; Andrew Hogue; Junaed Sattar; Philippe Giguère; Andrew German; Hui Liu; Shane Saunderson; Arlene Ripsman; Saul Simhon; Luz Abril Torres; Evangelos E. Milios; Pifu Zhang; Ioannis Rekleitis
We describe recent results obtained with AQUA, a mobile robot capable of swimming, walking and amphibious operation. Designed to rely primarily on visual sensors, the AQUA robot uses vision to navigate underwater using servo-based guidance, and also to obtain high-resolution range scans of its local environment. This paper describes some of the pragmatic and logistic obstacles encountered, and provides an overview of some of the basic capabilities of the vehicle and its associated sensors. Moreover, this paper presents the first-ever amphibious transition from walking to swimming.
Intelligent Robots and Systems | 2008
Junaed Sattar; Gregory Dudek; Olivia Chiu; Ioannis M. Rekleitis; Philippe Giguère; Alec Mills; Nicolas Plamondon; Chris Prahacs; Yogesh A. Girdhar; Meyer Nahon; John-Paul Lobos
Underwater operations present unique challenges and opportunities for robotic applications. These can be attributed in part to limited sensing capabilities, and to locomotion behaviours requiring control schemes adapted to specific tasks or changes in the environment. From enhancing teleoperation procedures, to providing high-level instruction, all the way to fully autonomous operations, enabling autonomous capabilities is fundamental to the successful deployment of underwater robots. This paper presents an overview of the approaches used during underwater sea trials in the coral reefs of Barbados for two amphibious mobile robots and a set of underwater sensor nodes. We present control mechanisms used for maintaining a preset trajectory during enhanced teleoperation and discuss their experimental results. This is followed by a discussion of amphibious data-gathering experiments conducted on the beach. We then present a tetherless underwater communication approach, based purely on vision, for high-level control of an underwater vehicle. Finally, the construction details, together with preliminary results from a set of distributed underwater sensor nodes, are outlined.
Intelligent Robots and Systems | 2005
Junaed Sattar; Philippe Giguère; Gregory Dudek; Chris Prahacs
This paper describes a visual servoing system for an underwater legged robot named AQUA, and reports initial experiments with the system performed in the open sea. A large class of significant applications can be enabled by allowing such a robot to follow a diver or some other moving target. The robot uses a suite of sensing technologies, primarily based on computer vision, to navigate in shallow-water environments. The visual servoing system described here allows the robot to track and follow a given target underwater. The servo package is made up of two distinct parts: a tracker and a feedback controller. The system has been evaluated in open sea water under natural lighting conditions, and with minor modifications it can also be used while the robot is walking on the ground.
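To make the tracker-plus-controller split concrete, here is a minimal sketch of one servoing iteration: the tracker reports the target's bounding box, and a proportional controller steers the vehicle to center it in the image. The gains, the tracker interface, and the send_command function are illustrative assumptions, not the paper's implementation.

```python
K_YAW, K_PITCH = 0.004, 0.003  # proportional gains (assumed values)

def servo_step(frame, tracker, send_command):
    """One visual-servoing iteration: track the target, then steer toward it.

    tracker: any object whose update(frame) returns (ok, (x, y, w, h)),
    e.g. an OpenCV tracker. send_command is a hypothetical vehicle interface.
    """
    ok, (x, y, w, h) = tracker.update(frame)
    if not ok:
        send_command(yaw=0.0, pitch=0.0)   # target lost: hold course
        return
    cx, cy = x + w / 2.0, y + h / 2.0      # target centre in pixels
    img_h, img_w = frame.shape[:2]
    err_x = cx - img_w / 2.0               # horizontal image-plane error
    err_y = cy - img_h / 2.0               # vertical image-plane error
    send_command(yaw=-K_YAW * err_x, pitch=-K_PITCH * err_y)
```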
International Conference on Robotics and Automation | 2006
Junaed Sattar; Gregory Dudek
We consider the use of visual target tracking for autonomous steering of an underwater robot. In this context, we present a comparative study of the underwater performance of three tracking algorithms that are widely used in vision applications for servo control. Variations in illumination, suspended particles, and the resulting reduction in visibility hinder vision systems from performing as well in marine environments as they do in terrestrial (i.e. non-underwater) surroundings. Our work focuses on quantitatively measuring the performance of three color-based tracking algorithms (color blob tracker, color histogram tracker and mean-shift tracker) in tracking objects underwater at different levels of lighting and visibility. We also present results demonstrating the effect of suspended particles underwater, and in conclusion we summarize the three tracking algorithms by comparing their pros and cons.
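As a rough illustration of how two of the compared techniques pair up in practice, the sketch below builds a hue histogram of the target region and uses OpenCV's back-projection plus mean-shift to follow it frame to frame. The bin count and termination criteria are illustrative choices, not the values used in the study.

```python
import cv2

def make_hue_histogram(frame_bgr, bbox):
    """Build a normalized hue histogram of the target region (the tracked model)."""
    x, y, w, h = bbox
    hsv = cv2.cvtColor(frame_bgr[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def mean_shift_step(frame_bgr, hist, window):
    """Back-project the histogram and shift the window toward the density mode."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], scale=1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    _, window = cv2.meanShift(backproj, window, criteria)
    return window
```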
Intelligent Robots and Systems | 2007
Junaed Sattar; Gregory Dudek
We present an algorithm for underwater robots to track mobile targets, and specifically human divers, by detecting periodic motion. Periodic motion is typically associated with propulsion underwater, and specifically with the kicking of human swimmers. By computing local amplitude spectra in a video sequence, we find the location of a diver in the robot's field of view. We use the Fourier transform to extract the temporal responses of intensity variations in the image space, detecting the characteristic low-frequency oscillations of the undulating flipper motion associated with typical swimming gaits. When multiple locations exhibit large low-frequency energy responses, we combine the gait detector with other methods to eliminate false detections. We present results of our algorithm on open-ocean video footage of swimming divers, and also discuss possible extensions and enhancements of the proposed approach for tracking other objects that exhibit low-frequency oscillatory motion.
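A condensed sketch of the frequency-domain cue described above: average the intensities of an image patch over a window of frames, then measure the spectral energy in an assumed flipper-kick frequency band. The band limits and the use of rectangular patches are assumptions made for illustration.

```python
import numpy as np

def low_freq_energy(patch_series, fps, band=(0.25, 1.5)):
    """patch_series: (T,) mean intensities of one image patch over T frames.
    Returns the spectral energy inside the assumed kicking-frequency band."""
    series = np.asarray(patch_series, dtype=float)
    series = series - series.mean()               # drop the DC component
    spectrum = np.abs(np.fft.rfft(series)) ** 2   # power spectrum
    freqs = np.fft.rfftfreq(series.size, d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum()

# The patch with the strongest in-band energy is the candidate diver location.
```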
Canadian Conference on Computer and Robot Vision | 2007
Junaed Sattar; Eric Bourque; Philippe Giguère; Gregory Dudek
In this paper we introduce the Fourier tag, a synthetic fiducial marker used to visually encode information and provide controllable positioning. The Fourier tag is a synthetic target akin to a bar-code that specifies multi-bit information which can be efficiently and robustly detected in an image. Moreover, the Fourier tag has the beneficial property that the bit string it encodes has variable length as a function of the distance between the camera and the target. This follows from the fact that the effective resolution decreases as an effect of perspective. This paper introduces the Fourier tag, describes its design, and illustrates its properties experimentally.
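The exact tag layout is given in the paper; as a loose analogy to its graceful degradation with distance, the toy scheme below carries each bit on a sinusoid of increasing frequency in a 1-D intensity profile, so a distant (low-resolution) view still recovers the low-frequency bits while the high-frequency ones blur away.

```python
import numpy as np

def encode(bits, n_samples=256):
    """Render a 1-D intensity profile whose (k+1)-th harmonic carries bit k."""
    x = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    return sum(b * np.sin((k + 1) * x) for k, b in enumerate(bits))

def decode(profile, n_bits):
    """Recover bits by thresholding the normalized harmonic amplitudes."""
    amps = np.abs(np.fft.rfft(profile)) / (len(profile) / 2.0)
    return [int(amps[k + 1] > 0.5) for k in range(n_bits)]

print(decode(encode([1, 0, 1, 1]), 4))   # -> [1, 0, 1, 1]
```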
Robotics: Science and Systems | 2009
Junaed Sattar; Gregory Dudek
We present an algorithm for underwater robots to visually detect and track human motion. Our objective is to enable human-robot interaction by allowing a robot to follow behind a human moving in (up to) six degrees of freedom. In particular, we have developed a system to allow a robot to detect, track and follow a scuba diver by using frequency-domain detection of biological motion patterns. The motion of biological entities is characterized by combinations of periodic motions which are inherently distinctive. This is especially true of human swimmers. By using the frequency-space response of spatial signals over a number of video frames, we attempt to identify signatures pertaining to biological motion. This technique is applied to track scuba divers in underwater domains, typically with the robot swimming behind the diver. The algorithm is able to detect a range of motions, which includes motion directly away from or towards the camera. The motion of the diver relative to the vehicle is then tracked using an Unscented Kalman Filter (UKF), an approach for non-linear estimation. The efficiency of our approach makes it attractive for real-time applications onboard our underwater vehicle, and in future applications we intend to track scuba divers in real-time with the robot. The paper presents an algorithmic overview of our approach, together with experimental evaluation based on underwater video footage.
Fig. 1. An underwater robot servoing off a colored target carried by a diver.
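As a sketch of the UKF tracking stage, the snippet below uses the filterpy library as a stand-in for the paper's estimator, with a constant-velocity image-plane state (x, y, vx, vy) and position-only measurements. The noise covariances, frame rate, and motion model are all illustrative assumptions.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

DT = 1.0 / 15.0   # assumed frame rate

def fx(state, dt):
    """Constant-velocity motion model (an assumed stand-in)."""
    x, y, vx, vy = state
    return np.array([x + vx * dt, y + vy * dt, vx, vy])

def hx(state):
    """We observe only the position reported by the motion detector."""
    return state[:2]

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=DT, hx=hx, fx=fx, points=points)
ukf.x = np.zeros(4)
ukf.P *= 50.0
ukf.R = np.diag([5.0, 5.0])   # measurement noise (pixels^2, assumed)
ukf.Q = np.eye(4) * 0.1       # process noise (assumed)

def track(detections):
    """Feed per-frame detections (None on missed frames) through the UKF."""
    estimates = []
    for z in detections:
        ukf.predict()
        if z is not None:
            ukf.update(np.asarray(z, dtype=float))
        estimates.append(ukf.x[:2].copy())
    return estimates
```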
The International Journal of Robotics Research | 2009
Junaed Sattar; Philippe Giguère; Gregory Dudek
In this paper, we evaluate a set of core functions that allow an underwater robot to perform surveillance under operator control. Specifically, we are interested in behaviors that facilitate the monitoring of organisms on a coral reef, and we present behaviors and interaction modes for a small underwater robot. In particular, we address some challenging issues arising from the underwater environment: visual processing, interactive communication with an underwater crew and, finally, orientation and motion of the vehicle through a hovering mode. The visual processing consists of target tracking using various techniques (color segmentation, color histogram and mean-shift). Communication underwater is achieved through printed cards with robustly identifiable visual markers on them. Finally, the hovering gait developed for this vehicle relies on the planned motion of six flippers to generate the appropriate forces.
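As a toy illustration of flipper-based station keeping, the sketch below generates phase-offset sinusoidal angle commands for six flippers; every amplitude, frequency, phase, and offset here is an assumed placeholder, not the vehicle's actual gait parameters.

```python
import math

N_FLIPPERS = 6
FREQ_HZ = 1.0                      # assumed oscillation frequency
AMPLITUDE = math.radians(25.0)     # assumed stroke amplitude

def flipper_angles(t, mean_angles, phases):
    """Commanded angle (rad) for each flipper at time t (seconds)."""
    return [m + AMPLITUDE * math.sin(2.0 * math.pi * FREQ_HZ * t + p)
            for m, p in zip(mean_angles, phases)]

# e.g. opposing phases on alternating flippers so lateral forces cancel:
phases = [0.0, math.pi, 0.0, math.pi, 0.0, math.pi]
mean_angles = [0.0] * N_FLIPPERS
print(flipper_angles(0.1, mean_angles, phases))
```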
International Conference on Robotics and Automation | 2008
Anqi Xu; Gregory Dudek; Junaed Sattar
A gesture-based interaction framework is presented for controlling mobile robots. This natural interaction paradigm has few physical requirements, and thus can be deployed in many restrictive and challenging environments. We present an implementation of this scheme in the control of an underwater robot by an on-site human operator. The operator performs discrete gestures using engineered visual targets, which are interpreted by the robot as parametrized actionable commands. By combining the symbolic alphabets resulting from several visual cues, a large vocabulary of statements can be produced. An iterative closest point algorithm is used to recognize these observed motions by comparing them with an established database of gestures. Finally, we present quantitative data collected from human participants indicating the accuracy and performance of our proposed scheme.
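A compact sketch of the ICP-based matching idea: rigidly align the observed gesture trajectory to each database template (nearest-neighbour correspondences plus a Kabsch/SVD rigid fit) and report the template with the lowest residual. The iteration count, the 2-D trajectory representation, and the database layout are assumptions.

```python
import numpy as np

def icp_residual(src, dst, iters=20):
    """Align point set src (N, 2) to dst (M, 2); return mean residual distance."""
    src = np.asarray(src, dtype=float).copy()
    dst = np.asarray(dst, dtype=float)
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force; fine for short gestures)
        dists = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[dists.argmin(axis=1)]
        # best rigid transform via the Kabsch / SVD method
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_m
    dists = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    return dists.min(axis=1).mean()

def classify_gesture(observed, database):
    """database: dict name -> (M, 2) template; returns the best-matching name."""
    return min(database, key=lambda name: icp_residual(observed, database[name]))
```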