Raymond G. Gosine
St. John's University
Publications
Featured research published by Raymond G. Gosine.
Intelligent Robots and Systems | 2005
Momotaz Begum; George K. I. Mann; Raymond G. Gosine
This paper proposes a novel algorithm combining fuzzy logic (FL) and a genetic algorithm (GA) for concurrent mapping and localization (CML) of a mobile robot. First, CML is formulated as a multidimensional informed search problem: the search seeks the robot pose that best accommodates the most recent sensor scan within the currently available map. A fuzzy set-theoretic approach predicts a sample-based representation of the state space of possible robot poses, and a GA is designed to find the globally optimal solution within the predicted pose space. The GA evaluates the fitness of poses using the sensory information and drives the population gradually toward the globally optimal solution even when the fuzzy prediction is inaccurate. The best-fit solution found by the GA offers the most likely continuation of the currently available map. Experiments on synthetic and real data illustrate the robustness of the algorithm.
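The search loop the abstract describes, a fuzzy-predicted sample set of candidate poses refined by a GA against the current map, can be sketched as below. Everything here (the occupancy-grid dictionary, the scan-match fitness, and the specific GA operators) is an illustrative assumption, not the paper's actual implementation:

```python
import random

def fitness(pose, scan, grid):
    """Hypothetical scan-match fitness: the fraction of scan offsets that
    land on occupied cells of the current map when applied at `pose`."""
    x, y = pose
    hits = sum(1 for dx, dy in scan
               if grid.get((round(x + dx), round(y + dy))) == 1)
    return hits / len(scan)

def ga_pose_search(candidates, scan, grid, generations=30, sigma=0.1):
    """Evolve fuzzy-predicted pose samples toward the pose that best
    accommodates the latest scan (elitist selection, blend crossover,
    Gaussian mutation)."""
    pop = list(candidates)
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, scan, grid), reverse=True)
        elite = pop[:len(pop) // 2]          # keep the fittest half
        children = []
        while len(elite) + len(children) < len(pop):
            a, b = random.sample(elite, 2)   # crossover: blend two parents
            children.append(tuple((ai + bi) / 2 + random.gauss(0, sigma)
                                  for ai, bi in zip(a, b)))
        pop = elite + children
    return max(pop, key=lambda p: fitness(p, scan, grid))
```

Because the elite half survives unchanged each generation, a good fuzzy-predicted sample is never lost, while mutation lets the population recover when the prediction is inaccurate; the sketch needs at least four candidate poses so two parents can be sampled.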
Intelligent Robots and Systems | 2008
Momotaz Begum; George K. I. Mann; Raymond G. Gosine; Fakhri Karray
This paper argues that the object- and space-based modes of visual attention can be naturally integrated in a common mathematical framework. In earlier work we proposed a mathematical model of visual attention for robotic systems that exploits knowledge of the visual attention mechanism of primates. This paper investigates the validity of the proposed model through experimentation on a real robot. It sheds light on a number of real-world issues involved in designing a visual attention system for physically embodied robots and explains how the proposed Bayesian model of visual attention addresses them. The object- and space-based modes of visual attention are naturally integrated in the model, and this integration is reflected in the model's sequential Monte Carlo implementation on a real robot.
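In general terms, a sequential Monte Carlo implementation of an attention model maintains a particle set of candidate focus locations that is repeatedly diffused, weighted, and resampled. The step below is a generic particle-filter sketch with an assumed saliency function, not the authors' Bayesian model:

```python
import random

def smc_attention_step(particles, saliency, motion_noise=1.0):
    """One sequential Monte Carlo update of candidate attention-focus
    locations: diffuse each particle, weight it by scene saliency at
    its location, then resample in proportion to the weights."""
    # Predict: perturb each candidate focus location.
    moved = [(x + random.gauss(0, motion_noise),
              y + random.gauss(0, motion_noise)) for x, y in particles]
    # Weight: score each particle with the (assumed) saliency function.
    weights = [saliency(x, y) for x, y in moved]
    if sum(weights) == 0:          # degenerate case: keep the diffused set
        return moved
    # Resample: random.choices normalizes the relative weights itself.
    return random.choices(moved, weights=weights, k=len(moved))
```

Iterating this step concentrates the particle set on salient regions; in an object-based model the saliency function would score whole objects rather than bare locations.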
Archive | 2011
Yuanlong Yu; George K. I. Mann; Raymond G. Gosine
Unlike traditional robotic systems, in which perceptual behaviors are manually designed by programmers for a given task and environment, autonomous perception of the world is one of the challenging issues in cognitive robotics. The selective attention mechanism is known to link the processes of perception, action, and learning (Grossberg, 2007; Tipper et al., 1998). It endows humans with the cognitive capability to learn and think about how to perceive the environment autonomously. This attention-based autonomous perception mechanism involves two aspects: a conscious aspect that directs perception based on the current task and learned knowledge, and an unconscious aspect that directs perception when an unexpected or unusual situation is encountered. The top-down attention mechanism (Wolfe, 1994) is responsible for the conscious aspect, whereas the bottom-up attention mechanism (Treisman & Gelade, 1980) corresponds to the unconscious aspect. This paper therefore discusses how to build an artificial system for autonomous visual perception. Three fundamental problems are addressed. The first concerns pre-attentive segmentation for object-based attention. Attentional selection is known to be either space-based or object-based (Scholl, 2001). The space-based theory holds that attention is allocated to a spatial location (Posner et al., 1980). The object-based theory, however, posits that some pre-attentive processes serve to segment the visual field into discrete objects, after which attention deals with one object at a time (Duncan, 1984).
This paper proposes that object-based attention has three computational advantages: 1) object-based attention is more robust than space-based attention, since attentional activation at the object level is estimated by accumulating the contributions of all components within that object; 2) attending to an exact object provides more useful information (e.g., shape and size) for producing appropriate actions than attending to a spatial location; and 3) the discrete objects obtained by pre-attentive segmentation are required when a global feature (e.g., shape) is selected to guide top-down attention. This paper therefore adopts the object-based visual attention theory (Duncan, 1984; Scholl, 2001). Although a few object-based visual attention models have been proposed (Sun, 2008; Sun & Fisher, 2003), developing a pre-attentive segmentation algorithm remains a challenging issue, as it is an unsupervised process. This issue involves three types of challenges.
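Advantage 1 above, estimating attentional activation at the object level by accumulating the contributions of all components within the object, can be illustrated with a minimal sketch. The per-pixel saliency map and the segmentation labels are assumed inputs (the output of some pre-attentive segmentation), not the chapter's actual method:

```python
def object_attention(saliency, segments):
    """Rank pre-attentively segmented objects by accumulated saliency.

    saliency: 2-D list of per-pixel saliency values.
    segments: 2-D list of object labels (same shape) from a hypothetical
              pre-attentive segmentation.
    Returns the label of the object winning attentional selection.
    """
    totals, counts = {}, {}
    for srow, lrow in zip(saliency, segments):
        for s, label in zip(srow, lrow):
            totals[label] = totals.get(label, 0.0) + s
            counts[label] = counts.get(label, 0) + 1
    # Mean activation per object: averaging over all of an object's
    # components makes the estimate robust to a few noisy pixels,
    # which is the robustness argument of advantage 1.
    return max(totals, key=lambda k: totals[k] / counts[k])
```

A space-based selector, by contrast, would pick the single most salient pixel and could be captured by one noisy value.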
NECEC 2013 | 2013
Thumeera Ruwansiri Wanasinghe Arachchige; George K. I. Mann; Raymond G. Gosine
NECEC 2013 | 2013
Mohamed W. Mehrez; George K. I. Mann; Raymond G. Gosine
NECEC 2012 | 2012
Oscar De Silva; George K. I. Mann; Raymond G. Gosine
NECEC 2012 | 2012
Thumeera Ruwansiri Wanasinghe Arachchige; George K. I. Mann; Raymond G. Gosine
NECEC 2012 | 2012
Thanh Trung Nguyen; George K. I. Mann; Raymond G. Gosine
NECEC 2011 | 2011
Oscar De Silva; George K. I. Mann; Raymond G. Gosine
Archive | 2009
Yuanlong Yu; George K. I. Mann; Raymond G. Gosine