Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Luis Goncalves is active.

Publication


Featured research published by Luis Goncalves.


international conference on robotics and automation | 2005

The vSLAM Algorithm for Robust Localization and Mapping

N. Karlsson; E. Di Bernardo; Jim Ostrowski; Luis Goncalves; Paolo Pirjanian; Mario E. Munich

This paper presents the Visual Simultaneous Localization and Mapping (vSLAM™) algorithm, a novel algorithm for simultaneous localization and mapping (SLAM). The algorithm is vision- and odometry-based, and enables low-cost navigation in cluttered and populated environments. No initial map is required, and it satisfactorily handles dynamic changes in the environment, for example, lighting changes, moving objects and/or people. Typically, vSLAM recovers quickly from dramatic disturbances, such as “kidnapping”.
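
To make the approach concrete, here is a minimal Python sketch of the kind of particle-filter pose update that a vision- and odometry-based SLAM system of this type relies on. It is an illustration only, not the vSLAM implementation; the function names, noise levels, and the range-only measurement model are assumptions.

# Minimal sketch of a vision- and odometry-based pose filter (illustration only,
# not the vSLAM implementation; names and noise values are assumptions).
import math
import random

def predict(particles, d_dist, d_theta, noise=(0.02, 0.01)):
    """Propagate each particle with noisy odometry (dead reckoning)."""
    for p in particles:
        p["theta"] += d_theta + random.gauss(0, noise[1])
        step = d_dist + random.gauss(0, noise[0])
        p["x"] += step * math.cos(p["theta"])
        p["y"] += step * math.sin(p["theta"])

def update(particles, landmark_xy, measured_range, sigma=0.15):
    """Reweight particles by how well they explain the range to a recognized
    visual landmark, then resample the population."""
    for p in particles:
        expected = math.hypot(landmark_xy[0] - p["x"], landmark_xy[1] - p["y"])
        p["w"] = 1e-12 + math.exp(-0.5 * ((measured_range - expected) / sigma) ** 2)
    total = sum(p["w"] for p in particles)
    weights = [p["w"] / total for p in particles]
    return [dict(p) for p in random.choices(particles, weights=weights, k=len(particles))]

# Usage: 100 particles at the origin, one odometry step, one landmark sighting.
particles = [{"x": 0.0, "y": 0.0, "theta": 0.0, "w": 1.0} for _ in range(100)]
predict(particles, d_dist=0.5, d_theta=0.0)
particles = update(particles, landmark_xy=(2.0, 0.0), measured_range=1.5)

Recovery from "kidnapping" in such a filter typically comes from injecting fresh particles at poses implied directly by landmark matches; that step is omitted in this sketch.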


international conference on robotics and automation | 2005

A Visual Front-end for Simultaneous Localization and Mapping

Luis Goncalves; E. Di Bernardo; D. Benson; M. Svedman; Jim Ostrowski; N. Karlsson; Paolo Pirjanian

We describe a method of generating and utilizing visual landmarks that is well suited for SLAM applications. The landmarks created are highly distinctive and reliably detected, virtually eliminating the data association problem present in other landmark schemes. Upon subsequent detections of a landmark, a 3-D pose can be estimated. The scheme requires a single camera.
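
For illustration only, the sketch below approximates the recipe the abstract describes using OpenCV rather than the authors' code: a landmark is stored as a set of distinctive descriptors with associated 3-D points, new detections are associated by a ratio test on descriptor distances, and a relative pose is recovered from a single image with PnP. The ratio threshold and minimum match count are assumptions.

# Illustration with OpenCV, not the paper's implementation; thresholds are assumptions.
import cv2
import numpy as np

def match_landmark(landmark_desc, frame_desc, ratio=0.7):
    """Associate frame descriptors with a stored landmark; the ratio test keeps
    only unambiguous matches, which suppresses data-association errors."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(frame_desc, landmark_desc, k=2)
    return [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]

def estimate_pose(landmark_pts3d, frame_kpts, matches, K):
    """Recover the camera pose relative to the landmark's 3-D points (single camera)."""
    if len(matches) < 6:          # PnP needs a handful of correspondences
        return None
    obj = np.float32([landmark_pts3d[m.trainIdx] for m in matches])
    img = np.float32([frame_kpts[m.queryIdx].pt for m in matches])
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, None)
    return (rvec, tvec) if ok else None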


advanced robotics and its social impacts | 2005

Optical sensing for robot perception and localization

Y. Yamamoto; Paolo Pirjanian; Mario E. Munich; E. DiBernardo; Luis Goncalves; Jim Ostrowski; N. Karlsson

Optical sensing, e.g., computer vision, provides a very compelling approach to solving a number of technological challenges for developing affordable, useful, and reliable robotic products. We describe key advancements in the field consisting of three core technologies: visual pattern recognition (ViPR), visual simultaneous localization and mapping (vSLAM), and a low-cost solution for localization using optical beacons (NorthStar). ViPR is an algorithm for visual pattern recognition based on scale-invariant features (SIFT features), which provides a robust and computationally effective solution to fundamental vision problems, including the correspondence problem, object recognition, structure, and pose estimation. vSLAM is an algorithm for visual simultaneous localization and mapping using one camera sensor in conjunction with dead-reckoning information, e.g., odometry. vSLAM provides a cost-effective solution to localization and mapping for cluttered environments and is robust to dynamic changes in the environment. Finally, NorthStar uses IR projections onto a surface to estimate the robot's pose based on triangulation. We give examples of concept prototypes as well as commercial products, such as Sony's AIBO, which have incorporated these technologies in order to improve product utility and value.
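
As a rough illustration of the NorthStar idea (pose by triangulation from projected IR spots), the sketch below recovers a planar pose from two spots whose world positions are known and whose positions are measured in the robot frame. The function name and frame conventions are my assumptions, not the product's algorithm.

# Illustration of pose from two known projected-spot positions; not NorthStar code.
import math

def pose_from_two_spots(world_a, world_b, robot_a, robot_b):
    """Return (x, y, theta) of the robot in the world frame, given the two IR
    spots' known world positions and their measured positions in the robot frame."""
    # Heading: angle between the spot-to-spot direction seen in each frame.
    ang_world = math.atan2(world_b[1] - world_a[1], world_b[0] - world_a[0])
    ang_robot = math.atan2(robot_b[1] - robot_a[1], robot_b[0] - robot_a[0])
    theta = ang_world - ang_robot
    # Position: rotate spot A's robot-frame measurement into the world frame
    # and subtract it from spot A's known world position.
    c, s = math.cos(theta), math.sin(theta)
    x = world_a[0] - (c * robot_a[0] - s * robot_a[1])
    y = world_a[1] - (s * robot_a[0] + c * robot_a[1])
    return x, y, theta

# Usage: spots at (0, 0) and (1, 0) in the world, both seen 1 m ahead of the
# robot and offset half a metre to either side.
print(pose_from_two_spots((0.0, 0.0), (1.0, 0.0), (1.0, -0.5), (1.0, 0.5)))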


IEEE Robotics & Automation Magazine | 2006

SIFT-ing through features with ViPR

Mario E. Munich; Paolo Pirjanian; E. Di Bernardo; Luis Goncalves; N. Karlsson; David G. Lowe

Recent advances in computer vision have given rise to a robust and invariant visual pattern recognition technology that is based on extracting a set of characteristic features from an image. Such features are obtained with the scale invariant feature transform (SIFT), which represents the variations in brightness of the image around the point of interest. Recognition performed with these features has been shown to be quite robust in realistic settings. This paper describes the application of this particular visual pattern recognition (ViPR) technology to a variety of robotics applications: object recognition, navigation, manipulation, and human-machine interaction. The paper also describes the technology in more detail and presents a business case for visual pattern recognition in the field of robotics and automation.
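
For experimentation, the sketch below approximates the recognition step with OpenCV's SIFT implementation rather than the commercial ViPR library; the ratio test and the minimum match count are arbitrary assumptions.

# Approximation using OpenCV's SIFT, not the ViPR library; thresholds are assumptions.
import cv2

def recognize(model_img, query_img, min_matches=10, ratio=0.7):
    """Decide whether the trained object appears in the query image by counting
    unambiguous SIFT descriptor matches."""
    sift = cv2.SIFT_create()
    _, model_desc = sift.detectAndCompute(model_img, None)
    _, query_desc = sift.detectAndCompute(query_img, None)
    if model_desc is None or query_desc is None:
        return False, 0
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(query_desc, model_desc, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) >= min_matches, len(good)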


intelligent robots and systems | 2004

Core technologies for service robotics

N. Karlsson; Mario E. Munich; Luis Goncalves; Jim Ostrowski; E. Di Bernardo; Paolo Pirjanian

Service robotics products are becoming a reality. This paper describes three core technologies that enable the next generation of service robots. They are low cost, make use of low-cost hardware, and allow for a short time-to-market in product development. The first technology is an object recognition system, which can be used by the robot to interact with the environment. The second technology is a vision-based navigation system (vSLAM™), which can simultaneously build a map and localize the robot in the map. Finally, the third technology is a flexible and rich software platform (ERSP™) that assists developers in rapid design and prototyping of robotics applications.
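
ERSP itself is proprietary, so its API is not shown here; the sketch below only illustrates the component-composition idea the abstract describes, with hypothetical module names and a shared blackboard standing in for the platform's infrastructure.

# Generic component pipeline; illustrates the idea only, not the ERSP API.
class Component:
    """A pluggable module that reads and writes a shared state dictionary."""
    def step(self, blackboard):
        raise NotImplementedError

class Recognizer(Component):
    def step(self, blackboard):
        blackboard["objects"] = []            # placeholder for ViPR-style detections

class Navigator(Component):
    def step(self, blackboard):
        blackboard["pose"] = (0.0, 0.0, 0.0)  # placeholder for vSLAM pose output

def run_once(components, blackboard):
    """One pass of the behavior loop: each module updates the shared state."""
    for c in components:
        c.step(blackboard)
    return blackboard

state = run_once([Recognizer(), Navigator()], {})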


Unmanned Ground Vehicle Technology Conference | 2004

vSLAM: vision-based SLAM for autonomous vehicle navigation

Luis Goncalves; N. Karlsson; Jim Ostrowski; Enrico Di Bernardo; Paolo Pirjanian

Among the numerous challenges of building autonomous/unmanned vehicles is that of reliable and autonomous localization in an unknown environment. In this paper we present a system that can efficiently and autonomously solve the robotics SLAM problem, in which a robot placed in an unknown environment must simultaneously localize itself and build a map of the environment. The system is vision-based, and makes use of Evolution Robotics' powerful object recognition technology. As the robot explores the environment, it continuously performs four tasks, using information from acquired images and the drive system odometry. The robot: (1) recognizes previously created 3-D visual landmarks; (2) builds new 3-D visual landmarks; (3) updates the current estimate of its location, using the map; (4) updates the landmark map. In indoor environments, the system can build a map of a 5 m by 5 m area in approximately 20 minutes, and can localize itself with an accuracy of approximately 15 cm in position and 3 degrees in orientation relative to the global reference frame of the landmark map. The same system can be adapted for outdoor, vehicular use.
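
The four-task loop lends itself to a short, runnable sketch; the data structures below are hypothetical stand-ins (landmarks reduced to 2-D points with an integer "signature"), not the Evolution Robotics implementation.

# Simplified stand-in for the per-image loop; not the actual vSLAM implementation.
from dataclasses import dataclass, field

@dataclass
class Landmark:
    x: float
    y: float
    signature: int                  # stand-in for the landmark's visual appearance

@dataclass
class State:
    x: float = 0.0
    y: float = 0.0
    landmarks: list = field(default_factory=list)

def vslam_step(state, image_signature, odom_dx, odom_dy):
    # Dead-reckon the pose forward with the drive-system odometry.
    state.x += odom_dx
    state.y += odom_dy
    # (1) Recognize a previously created landmark by its signature.
    seen = [lm for lm in state.landmarks if lm.signature == image_signature]
    if seen:
        # (3) Update the location estimate using the map (here: a crude average).
        state.x = 0.5 * (state.x + seen[0].x)
        state.y = 0.5 * (state.y + seen[0].y)
        # (4) Update the landmark map with the refined estimate.
        seen[0].x, seen[0].y = state.x, state.y
    else:
        # (2) Build a new visual landmark at the current pose estimate.
        state.landmarks.append(Landmark(state.x, state.y, image_signature))
    return state

# Usage: revisiting the same "view" reduces accumulated odometry drift.
s = vslam_step(State(), image_signature=42, odom_dx=1.0, odom_dy=0.0)
s = vslam_step(s, image_signature=42, odom_dx=0.1, odom_dy=0.0)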


Archive | 2010

Systems and methods for filtering potentially unreliable visual data for visual simultaneous localization and mapping

Luis Goncalves; Enrico Di Bernardo; Paolo Pirjanian; L. Niklas Karlsson


Archive | 2006

Systems and methods for controlling a density of visual landmarks in a visual simultaneous localization and mapping system

Luis Goncalves; L. Karlsson; Paolo Pirjanian; Enrico Di Bernardo


Archive | 2003

Systems and methods for incrementally updating a pose of a mobile device calculated by visual simultaneous localization and mapping techniques

L. Niklas Karlsson; Paolo Pirjanian; Luis Goncalves; Enrico Di Bernardo


Archive | 2006

Systems and methods for merchandise checkout

Jim Ostrowski; Luis Goncalves; Michael Cremean; Alex Simonini; Alec Hudnut

Collaboration


Dive into Luis Goncalves' collaborations.

Top Co-Authors

David G. Lowe

University of British Columbia
