

Publication


Featured research published by A. Fernando Ribeiro.


Robotics and Autonomous Systems | 2001

Omni-directional catadioptric vision for soccer robots

Pedro U. Lima; Andrea Bonarini; Carlos Machado; Fabio M. Marchese; Carlos F. Marques; A. Fernando Ribeiro; Domenico G. Sorrenti

This paper describes the design of a multi-part mirror catadioptric vision system and its use for self-localization and detection of relevant objects in soccer robots. The mirror and associated algorithms have been used in robots participating in the middle-size league of RoboCup, the World Cup of Soccer Robots.


World Conference on Information Systems and Technologies | 2014

Vision-Based Portuguese Sign Language Recognition System

Paulo Trigueiros; A. Fernando Ribeiro; Luís Paulo Reis

Vision-based hand gesture recognition is an area of active research in computer vision and machine learning. As a natural mode of human interaction, it is an area in which many researchers are working, with the goal of making human-computer interaction (HCI) easier and more natural, without the need for extra devices. The primary goal of gesture recognition research is therefore to create systems that can identify specific human gestures and use them, for example, to convey information. To that end, vision-based hand gesture interfaces require fast and extremely robust hand detection, and gesture recognition in real time. Hand gestures are a powerful human communication modality with many potential applications, and in this context we have sign language recognition, the communication method of deaf people. Sign languages are neither standard nor universal, and their grammars differ from country to country. In this paper, a real-time system able to interpret Portuguese Sign Language is presented and described. Experiments showed that the system was able to reliably recognize the vowels in real time, with an accuracy of 99.4% on one dataset of features and 99.6% on a second dataset of features. Although the implemented solution was only trained to recognize the vowels, it is easily extended to the rest of the alphabet, providing a solid foundation for the development of any vision-based sign language recognition user interface system.
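The abstract does not detail the classifier or the feature sets; as a rough illustration of the recognition step, hand-shape feature vectors can be matched to vowel classes with a minimal nearest-centroid rule. The two-dimensional features and the training values below are invented for the sketch:

```python
import math

def train_centroids(samples):
    """Average the feature vectors of each labelled class."""
    sums, counts = {}, {}
    for label, vec in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, vec):
    """Assign vec to the class whose centroid is nearest."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(centroids[label], vec))

# Hypothetical 2-D hand-shape features for three Portuguese vowels.
training = [
    ("A", [0.9, 0.1]), ("A", [0.8, 0.2]),
    ("E", [0.1, 0.9]), ("E", [0.2, 0.8]),
    ("I", [0.5, 0.5]), ("I", [0.6, 0.4]),
]
model = train_centroids(training)
print(classify(model, [0.85, 0.15]))  # → A
```

A real system would replace this rule with a trained classifier over far richer hand descriptors, but the train-then-match structure is the same.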


IEEE International Conference on Autonomous Robot Systems and Competitions | 2014

Generic system for human-computer gesture interaction

Paulo Trigueiros; A. Fernando Ribeiro; Luís Paulo Reis

Hand gestures are a powerful means of human communication, with many potential applications in human-computer interaction. Vision-based hand gesture recognition techniques have proven advantages over traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning that can be used with any interface for human-computer interaction. The proposed solution comprises three modules: a pre-processing and hand segmentation module, a static gesture interface module, and a dynamic gesture interface module. The experiments showed that the core of a vision-based interaction system can be the same for all applications, which simplifies implementation. To test the proposed solutions, three prototypes were implemented. For hand posture recognition, an SVM model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM model was trained for each gesture the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications.
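The per-gesture HMMs described above score an observation sequence and the highest-scoring model wins. The paper's model parameters are not given; this is a minimal sketch of the forward algorithm such an HMM would use, with toy two-state transition and emission tables invented for illustration:

```python
def forward_likelihood(obs, init, trans, emit):
    """Forward algorithm: P(obs | model) for a discrete HMM.

    init[s]      - prior probability of starting in state s
    trans[s][t]  - probability of moving from state s to state t
    emit[s][o]   - probability of state s emitting symbol o
    """
    alpha = {s: init[s] * emit[s][obs[0]] for s in init}
    for o in obs[1:]:
        alpha = {t: emit[t][o] * sum(alpha[s] * trans[s][t] for s in alpha)
                 for t in init}
    return sum(alpha.values())

# Toy 2-state model of a left-to-right "swipe" gesture over
# quantised motion directions L/R (values are made up).
init = {"start": 1.0, "end": 0.0}
trans = {"start": {"start": 0.5, "end": 0.5},
         "end":   {"start": 0.0, "end": 1.0}}
emit = {"start": {"L": 0.9, "R": 0.1},
        "end":   {"L": 0.1, "R": 0.9}}

print(forward_likelihood(["L", "L", "R"], init, trans, emit))  # → 0.243
```

Recognition then amounts to evaluating every gesture's model on the same observation sequence and picking the maximum.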


Robot Soccer World Cup | 2013

Vision Based Referee Sign Language Recognition System for the RoboCup MSL League

Paulo Trigueiros; A. Fernando Ribeiro; Luís Paulo Reis

In the RoboCup Middle Size League (MSL), the main referee uses assistive technology, controlled by a second referee, to convey referee decisions to the robot players over a wireless communication system. In this paper a vision-based system is introduced that interprets the referee's dynamic and static gestures, eliminating the need for the second referee. The gestures are interpreted by the system and sent directly to the Referee Box, which sends the proper commands to the robots. The system is divided into four modules: real-time hand tracking and feature extraction, an SVM (Support Vector Machine) for static hand posture identification, an HMM (Hidden Markov Model) for dynamic unistroke hand gesture recognition, and an FSM (Finite State Machine) to control the transitions between system states. The experimental results showed that the system works very reliably and is able to recognize combinations of gestures and hand postures in real time. For hand posture recognition, the SVM model trained with the selected features achieved an accuracy of 98.2%. The system also has several advantages over the currently implemented one, such as removing the need for a second referee and working in noisy or wireless-jammed environments. The system is easy to implement and train and may be an inexpensive solution.
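The FSM's job is to gate which recogniser outputs are accepted at each point, so a stray posture cannot trigger a command. A table-driven sketch of such a controller follows; the states, events, and the attention/command/confirm protocol are hypothetical, not the paper's actual design:

```python
# Hypothetical referee-signal FSM: a command is only emitted after an
# attention posture, then a command gesture, then a confirmation posture.
TRANSITIONS = {
    ("idle", "attention_posture"): "armed",
    ("armed", "command_gesture"): "pending",
    ("pending", "confirm_posture"): "idle",   # command sent, reset
    ("armed", "timeout"): "idle",
    ("pending", "timeout"): "idle",
}

def run(events, state="idle"):
    """Feed recogniser events through the FSM; collect emitted commands."""
    sent = []
    for event in events:
        nxt = TRANSITIONS.get((state, event))
        if nxt is None:
            continue                # ignore events illegal in this state
        if state == "pending" and event == "confirm_posture":
            sent.append("command_to_referee_box")
        state = nxt
    return state, sent

state, sent = run(["attention_posture", "command_gesture", "confirm_posture"])
print(state, sent)  # → idle ['command_to_referee_box']
```

Encoding the protocol as a lookup table keeps the legal event sequences explicit and makes the controller trivial to extend with new commands.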


Robot Soccer World Cup | 2002

MINHO Robot Football Team for 2001

A. Fernando Ribeiro; Carlos Machado; Sérgio Sampaio; Bruno Martins

This paper describes an autonomous robot football team, work that has been carried out since 1998. It describes the hardware used by the robots, the sensory system and interfaces, and the game strategy. Data acquisition for the perception level is carried out by the vision system, and the image processing system is described. Two cameras are used, requiring sensor fusion. With this architecture, an attempt is made to make the autonomous robots more intelligent in the real world. The robots have a kicker with controlled power, which allows passing the ball to a teammate with controlled distance and direction.


AI Magazine | 2014

RoboCup Soccer Leagues

Daniele Nardi; Itsuki Noda; A. Fernando Ribeiro; Peter Stone; Oskar von Stryk; Manuela M. Veloso

RoboCup was created in 1996 by a group of Japanese, American, and European artificial intelligence and robotics researchers with a formidable, visionary long-term challenge: By 2050 a team of robot soccer players will beat the human World Cup champion team. In this article, we focus on RoboCup robot soccer, and present its five current leagues, which address complementary scientific challenges through different robot and physical setups. Full details on the status of the RoboCup soccer leagues, including league history and past results, upcoming competitions, and detailed rules and specifications are available from the league homepages and wikis.


Industrial Robot: An International Journal | 2012

Autonomous golf ball picking robot design and development

Nino Pereira; A. Fernando Ribeiro; Gil Lopes; Daniel E. Whitney; Jorge Lino

Purpose – The purpose of this paper is to present the methodology and results of the design and development of an autonomous golf-ball-picking robot for driving ranges.
Design/methodology/approach – The strategy followed to develop a commercial product is presented, based on previously identified requirements: picking up golf balls on a driving range in a safe and efficient way.
Findings – A fully working prototype robot has been developed. It uses two driving wheels and a third caster wheel, and pushes a standard gang which collects the balls from the ground. A hybrid information system was implemented to provide a statistically relevant prediction of golf ball locations, optimising the path the robot follows in order to reduce time and cost. Autonomous navigation was developed and tested in a simulation environment.
Research limitations/implications – Preliminary results showed that the new path planning algorithm Twin-RRT* is able to form closed-loop trajectories and im...


Journal of Intelligent and Robotic Systems | 2015

Generic System for Human-Computer Gesture Interaction: Applications on Sign Language Recognition and Robotic Soccer Refereeing

Paulo Trigueiros; A. Fernando Ribeiro; Luís Paulo Reis

Hand gestures are a powerful means of human communication, with many potential applications in human-computer interaction. Vision-based hand gesture recognition techniques have many advantages over traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning that can be used with any interface for real-time human-machine interaction. Its novelty is the integration of different tools for gesture spotting. The proposed solution comprises three modules: a pre-processing and hand segmentation module, a static gesture interface module, and a dynamic gesture interface module. The experiments showed that the core of a vision-based interaction system can be the same for all applications, which simplifies implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained on a centroid distance dataset of 2170 records, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) model was trained for each of the defined gestures the system should recognize, with a final average accuracy of 93.7%. The datasets were built from four different users with 25 gestures per user, totalling 1100 records for model construction. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications. To validate the proposed framework, two applications were implemented: a real-time system able to interpret Portuguese Sign Language, and an online system that helps a robotic soccer referee judge a game in real time.
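The abstract names a "centroid distance" feature set. A common form of this shape descriptor, assumed here since the paper's exact variant is not given, is the sequence of distances from the hand contour's centroid to points sampled along the contour, normalised for scale invariance:

```python
import math

def centroid_distance(contour):
    """Distance from the contour centroid to each contour point,
    normalised by the maximum so the feature is scale-invariant."""
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    d = [math.hypot(x - cx, y - cy) for x, y in contour]
    m = max(d)
    return [v / m for v in d]

# A square contour: every corner is equidistant from the centroid.
square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(centroid_distance(square))  # → [1.0, 1.0, 1.0, 1.0]
```

Resampling each hand contour to a fixed number of points yields fixed-length vectors of this kind, suitable as classifier input.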


Robot Soccer World Cup | 2012

Catadioptric system optimisation for omnidirectional RoboCup MSL robots

Gil Lopes; A. Fernando Ribeiro; Nino Pereira

Omnidirectional RoboCup MSL robots often use catadioptric vision systems to obtain a 360° field of view. Such a system comprises an upright camera facing a convex mirror, commonly spherical, parabolic or hyperbolic, that reflects the entire space around the robot. This technique has been used for more than a decade, in much the same way, by most teams. Teams upgrade their cameras to obtain more and better information from the captured area in pixel quantity and quality, but a large image area outside the convex mirror is black and unusable. The same happens at the image centre, where the robot sees itself. The efficiency of this technique can be improved by the methods presented in this paper, such as developing a new convex mirror and repositioning the camera viewpoint. Using 3D CAD/CAM software for simulation and a CNC lathe for mirror construction, results are presented and discussed.
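The wasted-pixel argument is easy to quantify: with the mirror's circular image inscribed in a rectangular sensor, everything outside the circle, plus a small disc at the centre where the robot sees itself, carries no field information. A quick estimate, assuming an inscribed mirror image and a made-up self-reflection radius:

```python
import math

def usable_fraction(width, height, self_radius_frac=0.1):
    """Fraction of sensor pixels inside the mirror's circular image,
    excluding the central disc where the robot reflects itself.

    self_radius_frac is the self-reflection radius as a fraction of
    the mirror-image radius (hypothetical value, not from the paper).
    """
    r = min(width, height) / 2          # inscribed mirror-image radius
    mirror = math.pi * r * r
    centre = math.pi * (self_radius_frac * r) ** 2
    return (mirror - centre) / (width * height)

# On a 4:3 sensor, under these assumptions only ~58% of the pixels
# carry field information.
print(round(usable_fraction(640, 480), 3))  # → 0.583
```

Under these assumptions roughly four pixels in ten are wasted, which is why reshaping the mirror and repositioning the camera viewpoint can recover meaningful resolution.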


Intelligent Robots and Systems | 2015

Global localization by soft object recognition from 3D Partial Views

A. Fernando Ribeiro; Susana Brandão; João Paulo Costeira; Manuela M. Veloso

Global localization is a widely studied problem, and in essence corresponds to online robot pose estimation based on a given map with landmarks, an odometry model, and the robot's actual sensory observations and motion. In most approaches, the map provides the position of visible objects, which are then recognized to provide the robot pose estimate. Such object recognition with noisy sensory data is challenging. In this paper, we present an effective global localization technique using soft 3D object recognition to estimate the pose with respect to the landmarks in the given map. A depth sensor acquires a partial view of each observed object, from which our algorithm extracts the robot pose relative to the objects, based on a library of 3D Partial View Heat Kernel descriptors. Our approach departs from methods that require classification and registration against complete 3D models, which are prone to errors due to noisy sensory data and object misclassifications in the recognition stage. We experimentally validate our method on different robot paths with different common 3D environment objects. We also show the improvement of our method over a variant that does not use the partial view information.
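The paper's 3D Partial View Heat Kernel descriptors are computed on depth data; as a toy illustration of the underlying heat-kernel idea only, the sketch below diffuses a unit heat impulse on a small graph and reads off how much heat remains at the source, a quantity that distinguishes points by their local connectivity (the graph and parameters are invented, and a graph stands in for the real mesh or point cloud):

```python
def heat_signature(adj, v, t=0.1, steps=50):
    """Heat remaining at vertex v after diffusing a unit impulse,
    approximating exp(-t*L) applied to an indicator vector with
    explicit Euler steps (L is the graph Laplacian)."""
    n = len(adj)
    h = [0.0] * n
    h[v] = 1.0
    dt = t / steps
    for _ in range(steps):
        new = h[:]
        for i in range(n):
            for j in adj[i]:
                new[i] += dt * (h[j] - h[i])   # -L h, edge by edge
        h = new
    return h[v]

# Path graph 0-1-2: heat escapes faster from the middle vertex,
# so its signature is smaller than an endpoint's.
adj = {0: [1], 1: [0, 2], 2: [1]}
print(heat_signature(adj, 0) > heat_signature(adj, 1))  # → True
```

Evaluating this quantity at several diffusion times yields a multi-scale, pose-invariant signature of local shape, which is the property the descriptor library exploits.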

Collaboration


Dive into A. Fernando Ribeiro's collaboration network.

Top Co-Authors


Manuela M. Veloso

Carnegie Mellon University
