
Publication


Featured research published by Camilo Perez Quintero.


international conference on robotics and automation | 2013

SEPO: Selecting by pointing as an intuitive human-robot command interface

Camilo Perez Quintero; Romeo Tatsambon Fomena; Azad Shademan; Nina Wolleb; Travis Dick; Martin Jagersand

Pointing to indicate a direction or position is one of the intuitive communication mechanisms humans use at all stages of life. Our aim is to develop a natural human-robot command interface using pointing gestures for human-robot interaction (HRI). We propose an interface based on the Kinect sensor for selecting by pointing (SEPO) in real-world 3D situations, where the user points to a target object or location and the interface returns the 3D position coordinates of the target. Using our interface we perform three experiments to study the precision and accuracy of human pointing in typical household scenarios: pointing to a "wall", pointing to a "table", and pointing to a "floor". Our results show that the proposed SEPO interface enables users to point to and select objects with an average 3D position accuracy of 9.6 cm in household situations.
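A common way to realize this kind of pointing selection is to cast a ray through two tracked skeleton joints and intersect it with a known surface plane. Below is a minimal sketch assuming a shoulder-to-hand ray model; the joint choice and plane parameters are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pointing_target(shoulder, hand, plane_point, plane_normal):
    """Intersect the shoulder-to-hand pointing ray with a planar surface.

    All inputs are 3-vectors in the sensor frame (e.g. from Kinect
    skeleton tracking). Returns the 3D intersection point, or None if
    the ray is (near-)parallel to the plane or points away from it."""
    shoulder = np.asarray(shoulder, dtype=float)
    direction = np.asarray(hand, dtype=float) - shoulder
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:              # ray parallel to the plane
        return None
    t = np.dot(plane_normal, np.asarray(plane_point, dtype=float) - shoulder) / denom
    if t < 0:                          # plane lies behind the user
        return None
    return shoulder + t * direction

# Example: pointing at a table surface at height z = 0.75 m.
print(pointing_target(shoulder=[0.0, 0.3, 1.6], hand=[0.2, 0.5, 1.3],
                      plane_point=[0.0, 0.0, 0.75], plane_normal=[0.0, 0.0, 1.0]))
```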


robotics: science and systems | 2013

Realtime Registration-Based Tracking via Approximate Nearest Neighbour Search.

Travis Dick; Camilo Perez Quintero; Martin Jagersand; Azad Shademan

We introduce a new 2D visual tracking algorithm that uses an approximate nearest neighbour search to estimate per-frame state updates. We experimentally demonstrate that the new algorithm is capable of estimating larger per-frame motions than standard registration-based algorithms and that it is more robust in a vision-controlled robotic alignment task.
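The core idea can be illustrated with a toy translation-only tracker: offline, candidate warps of the template are paired with the appearance they produce; online, a nearest-neighbour lookup over those appearances recovers the motion. The paper's tracker handles richer warps and an efficient approximate index; this sketch uses brute-force search and pure translation, so it is a simplification.

```python
import numpy as np

def sample_patch(image, center, half=8):
    """Flatten the (2*half) x (2*half) patch around an (x, y) centre."""
    x, y = int(round(center[0])), int(round(center[1]))
    return image[y - half:y + half, x - half:x + half].ravel().astype(float)

def build_index(image, center, max_shift=6, half=8):
    """Offline: pair every integer translation up to max_shift with the
    patch appearance it produces around the template location."""
    r = range(-max_shift, max_shift + 1)
    shifts = np.array([(dx, dy) for dx in r for dy in r])
    appearances = np.stack([sample_patch(image, center + s, half) for s in shifts])
    return shifts, appearances

def track_step(shifts, appearances, frame, center, half=8):
    """Online: the nearest stored appearance reveals how the target moved
    relative to the current estimate; subtracting that shift re-centres."""
    current = sample_patch(frame, center, half)
    nn = np.argmin(np.sum((appearances - current) ** 2, axis=1))
    return center - shifts[nn]

# Toy demo: a bright square that moves 4 px to the right between frames.
frame0 = np.zeros((64, 64)); frame0[24:40, 24:40] = 1.0
frame1 = np.zeros((64, 64)); frame1[24:40, 28:44] = 1.0
center = np.array([32, 32])
shifts, apps = build_index(frame0, center)
print(track_step(shifts, apps, frame1, center))   # -> [36 32]
```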


international conference on robotics and automation | 2015

Tracking benchmark and evaluation for manipulation tasks

Ankush Roy; Xi Zhang; Nina Wolleb; Camilo Perez Quintero; Martin Jagersand

In this paper we present a public dataset for evaluating trackers used in human and robot manipulation tasks, where both high-DOF motion and high accuracy are needed. We describe in detail both the process of recording the sequences and how ground truth data was generated for the videos. The videos are tagged with the challenges a tracker would face while tracking the object. As an initial example, we evaluate the performance of six published trackers [5], [11], [12], [13], [15], [6] and analyse their results. We also describe a new evaluation metric that tests the sensitivity of trackers to speed. A total of 100 annotated and tagged sequences are reported. All the videos, ground truth data, original tracker implementations and evaluation scripts are made publicly available on the website, so that others can extend the results with their own trackers and evaluations.
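The paper's speed-sensitivity metric is not reproduced here, but a minimal sketch of the per-frame scoring such tracking benchmarks typically build on (mean corner alignment error against annotated ground truth, summarized as a thresholded success rate) looks like the following; the array shapes and threshold are assumptions.

```python
import numpy as np

def alignment_error(pred_corners, gt_corners):
    """Per-frame mean Euclidean distance (pixels) between predicted and
    ground-truth corners. Both arrays have shape (n_frames, 4, 2): the
    four corners of the tracked planar patch in each frame."""
    return np.linalg.norm(pred_corners - gt_corners, axis=2).mean(axis=1)

def success_rate(pred_corners, gt_corners, threshold=5.0):
    """Fraction of frames whose alignment error stays under a pixel
    threshold -- one common summary score for a whole sequence."""
    return float(np.mean(alignment_error(pred_corners, gt_corners) < threshold))

# Toy example: 3 frames; the tracker drifts 2 px on the last frame.
gt = np.tile(np.array([[0, 0], [10, 0], [10, 10], [0, 10]], float), (3, 1, 1))
pred = gt.copy()
pred[2] += [2.0, 0.0]
print(alignment_error(pred, gt))    # [0. 0. 2.]
print(success_rate(pred, gt, 1.0))  # 0.666...
```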


international conference on robotics and automation | 2015

VIBI: Assistive vision-based interface for robot manipulation

Camilo Perez Quintero; Oscar A. Ramirez; Martin Jagersand

People with upper-body disabilities can benefit from using robot arms to perform everyday tasks. However, the adoption of this kind of technology has been limited by the complexity of robot manipulation tasks and the difficulty of controlling a multi-DOF arm with a joystick or similar device. Motivated by this need, we present an assistive vision-based interface for robot manipulation. Our proposal is to replace the direct joystick motor-control interface of a commercial wheelchair-mounted assistive robotic manipulator with a human-robot interface based on visual selection. The scene in front of the robot is shown on a screen, and the user selects an object with our novel grasping interface. We develop computer vision and motion control methods that drive the robot to that object. Our aim is not to replace user control, but to augment user capabilities through different levels of semi-autonomy, while leaving the user with the sense that he or she is in control of the task. Two disabled pilot users were involved at different stages of our research: the first during interface design, together with rehabilitation experts; the second performed user studies alongside an 8-subject control group to evaluate our interface. Our system reduces robot instruction from a 6-DOF task in continuous space to either a 2-DOF pointing task or a discrete selection among objects detected by computer vision.
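The key reduction, from a 6-DOF arm command to a 2-DOF on-screen selection, hinges on back-projecting the selected pixel into a 3D reaching target. A minimal sketch under a pinhole camera model follows; the intrinsics, click coordinates and camera-to-base transform are hypothetical values, not the paper's calibration.

```python
import numpy as np

def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project a selected pixel (u, v) with measured depth (metres)
    into a 3D point in the camera frame, using the pinhole model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Hypothetical RGBD intrinsics; the user selects pixel (400, 260) on an
# object measured 0.8 m from the camera.
point_cam = pixel_to_3d(400, 260, 0.8, fx=525.0, fy=525.0, cx=319.5, cy=239.5)

# An assumed (calibrated) camera-to-robot-base transform turns the 2D
# selection into a 3D reaching target for the arm.
T_base_cam = np.eye(4)
T_base_cam[:3, 3] = [0.1, 0.0, 0.5]   # camera 0.1 m forward, 0.5 m up
target_base = T_base_cam @ np.append(point_cam, 1.0)
print(target_base[:3])
```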


canadian conference on computer and robot vision | 2013

Towards Practical Visual Servoing in Robotics

Romeo Tatsambon Fomena; Camilo Perez Quintero; Mona Gridseth; Martin Jagersand

Visual servoing was introduced in robotics nearly four decades ago. Until now, however, there are still only a handful of known examples of this technique being applied to real-world robotics problems such as disaster response or assistance for elderly or disabled people. As the world moves towards using robotics to improve quality of life, it is time to assess the challenges involved in applying visual servoing to real-world problems. This paper presents an overview of these challenges by asking the question: what are the missing components for practical visual servoing? It also provides possible practical solutions for these components. The challenges and our current practical solutions are illustrated using our 7-DOF Barrett WAM arm.
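For context, the control law at the heart of classical image-based visual servoing maps the feature error to a camera velocity through the pseudo-inverse of the interaction matrix, v = -λ L⁺ (s - s*). A minimal sketch for point features follows; this is the standard textbook formulation, not code from the paper, and the feature values are made up.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalised image point
    (x, y) at depth Z, relating the camera twist to the image velocity."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x ** 2), y],
        [0, -1 / Z, y / Z, 1 + y ** 2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Classic IBVS control law: v = -lambda * L^+ * (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ error

# Four normalised point features slightly off their goal configuration.
s = [(0.12, 0.11), (-0.09, 0.10), (-0.10, -0.12), (0.11, -0.09)]
sd = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
print(ibvs_velocity(s, sd, depths=[0.8] * 4))  # camera twist (vx..wz)
```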


robot and human interactive communication | 2015

Visual pointing gestures for bi-directional human robot interaction in a pick-and-place task

Camilo Perez Quintero; Romeo Tatsambon; Mona Gridseth; Martin Jagersand

This paper explores visual pointing gestures for two-way nonverbal communication while interacting with a robot arm. Such nonverbal instruction is common when humans communicate spatial directions and actions while collaboratively performing manipulation tasks. Using 3D RGBD sensing, we compare human-human and human-robot interaction in solving a pick-and-place task. In the human-human interaction we study both pointing and other types of gestures performed by humans in a collaborative task. For the human-robot interaction we design a system that allows the user to interact with a 7-DOF robot arm using gestures for selecting, picking and dropping objects at different locations. Bi-directional confirmation gestures allow the robot (or human) to verify that the right object is selected. We perform experiments in which 8 human subjects collaborate with the robot to manipulate ordinary household objects on a tabletop. Without confirmation feedback, selection accuracy was 70-90% for both humans and the robot. With feedback through confirmation gestures, both humans and our vision-robotic system could perform the task accurately every time (100%). Finally, to illustrate our gesture interface in a real application, we let a human instruct our robot to make a pizza by selecting different ingredients.
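The accuracy jump from 70-90% to 100% comes from gating execution behind an explicit confirmation step. A tiny sketch of such a protocol as a state machine; the state and event names are illustrative, not the paper's gesture vocabulary.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()        # waiting for a pointing gesture
    SELECTED = auto()    # robot signals which object it believes was chosen
    EXECUTING = auto()   # pick-and-place in progress

def step(state, event):
    """Advance the confirmation protocol: a selection must be explicitly
    confirmed before the robot acts; a reject gesture re-opens selection."""
    transitions = {
        (State.IDLE, "point"): State.SELECTED,
        (State.SELECTED, "confirm"): State.EXECUTING,
        (State.SELECTED, "reject"): State.IDLE,
        (State.EXECUTING, "done"): State.IDLE,
    }
    return transitions.get((state, event), state)

s = State.IDLE
for e in ["point", "reject", "point", "confirm", "done"]:
    s = step(s, e)
    print(e, "->", s.name)
```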


canadian conference on computer and robot vision | 2014

Interactive Teleoperation Interface for Semi-autonomous Control of Robot Arms

Camilo Perez Quintero; Romeo Tatsambon Fomena; Azad Shademan; Oscar A. Ramirez; Martin Jagersand

We propose and develop an interactive, semi-autonomous control scheme for robot arms. Our system supports two interaction modes: (1) the user naturally controls the robot arm through a direct linkage between the arm motion and the tracked human skeleton; (2) an autonomous image-based visual servoing routine can be triggered for precise positioning. Coarse motions are executed by human teleoperation and fine motions by image-based visual servoing. We present a successful application of the proposed interaction on a WAM arm equipped with an eye-in-hand camera.
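The coarse/fine hand-off can be sketched as a two-mode command source: mirror the tracked hand velocity (with a safety clamp) until the user triggers the servoing routine. The gain, clamp and trigger mechanism below are assumptions, not the paper's parameters.

```python
import numpy as np

def teleop_twist(hand_velocity, gain=0.8, v_max=0.25):
    """Map the tracked hand velocity (m/s) to an end-effector velocity,
    with a safety clamp on speed."""
    v = gain * np.asarray(hand_velocity, dtype=float)
    speed = np.linalg.norm(v)
    return v * (v_max / speed) if speed > v_max else v

def arm_command(servo_active, hand_velocity, servo_twist):
    """Coarse/fine hand-off: mirror the human's motion until the user
    triggers the autonomous visual-servoing routine for fine positioning."""
    if servo_active:
        return np.asarray(servo_twist)      # fine: image-based servoing
    return teleop_twist(hand_velocity)      # coarse: skeleton teleoperation

print(arm_command(False, [0.5, 0.0, 0.0], [0.02, 0.0, 0.0]))  # clamped teleop
print(arm_command(True,  [0.5, 0.0, 0.0], [0.02, 0.0, 0.0]))  # servo command
```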


international conference on robotics and automation | 2017

Flexible virtual fixture interface for path specification in tele-manipulation

Camilo Perez Quintero; Masood Dehghan; Oscar A. Ramirez; Marcelo H. Ang; Martin Jagersand

We present the design and implementation of a flexible force-vision-based interface that allows local operators to visually specify a path constraint for a remote robot manipulator online, during teleoperation. Using bilateral and unilateral configurations, we compare our system to direct teleoperation through user studies. Three performance metrics (smoothness, error and execution time) and a subjective evaluation (NASA TLX) were used to quantify user performance. The trials show that our system outperforms direct teleoperation and reduces cognitive load. Our findings show that a unilateral teleoperation configuration with visual-force constraints surpasses a bilateral configuration in terms of displacement error and variance, while also allowing users to complete tasks faster and with smoother trajectories.
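One common way to realize such a path constraint is a soft virtual fixture: admit only the velocity component along the specified path and add a corrective pull back toward it. A minimal sketch over a polyline path; the stiffness and path values are assumptions, not the paper's controller.

```python
import numpy as np

def virtual_fixture(cmd_vel, position, path_points, k_attract=2.0):
    """Soft virtual fixture: keep only the component of the operator's
    commanded velocity along the specified path, plus a corrective term
    pulling the tool back onto the nearest path point."""
    path = np.asarray(path_points, dtype=float)
    i = int(np.argmin(np.linalg.norm(path - position, axis=1)))
    # Local tangent of the polyline around the nearest vertex.
    tangent = path[min(i + 1, len(path) - 1)] - path[max(i - 1, 0)]
    tangent /= np.linalg.norm(tangent)
    along = tangent * np.dot(tangent, cmd_vel)     # admitted motion
    correction = k_attract * (path[i] - position)  # pull back to the path
    return along + correction

path = [[0, 0, 0], [0.1, 0, 0], [0.2, 0, 0], [0.3, 0, 0]]
v = virtual_fixture(cmd_vel=np.array([0.05, 0.03, 0.0]),
                    position=np.array([0.1, 0.02, 0.0]),
                    path_points=path)
print(v)  # lateral command suppressed; tool pulled back toward the line
```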


international conference on robotics and automation | 2015

On-line reconstruction based predictive display in unknown environment

Huan Hu; Camilo Perez Quintero; Hanxu Sun; Martin Jagersand

In tele-robotics, time delay is a significant problem: when video feedback is delayed, operators adopt inefficient move-and-wait strategies and system performance decreases. Predictive display (PD) is an effective way to compensate for delays by rendering graphics of the predicted visual feedback. Using modern computer vision techniques, we implemented a PD system based on online real-time 3D reconstruction from monocular video. This paper describes the client-server system architecture. Experimental results indicate that the system can capture 3D models and render the predicted image in realistic applications covering outdoor rover operation on Earth, the Canadian Space Agency's (CSA) Mars analogue environment, and UAV operation.
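At its core, predictive display reprojects the reconstructed 3D model through a predicted camera pose, synthesizing the frame the delayed video will eventually show. A minimal point-projection sketch; the intrinsics, pose and point cloud are hypothetical, and a real system would render textured geometry rather than bare points.

```python
import numpy as np

def render_predicted(points_world, R, t, fx, fy, cx, cy):
    """Project the reconstructed 3D points through a *predicted*
    world-to-camera pose (R, t): the view the delayed video feed will
    eventually show, rendered immediately for the operator."""
    p_cam = (R @ points_world.T).T + t
    z = p_cam[:, 2]
    front = z > 1e-6                 # keep points in front of the camera
    u = fx * p_cam[front, 0] / z[front] + cx
    v = fy * p_cam[front, 1] / z[front] + cy
    return np.stack([u, v], axis=1)

# Toy reconstruction and a pose predicted 0.1 m ahead along the optical axis.
cloud = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.5], [-0.3, 0.2, 3.0]])
print(render_predicted(cloud, np.eye(3), np.array([0.0, 0.0, -0.1]),
                       fx=500.0, fy=500.0, cx=320.0, cy=240.0))
```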


international conference on robotics and automation | 2016

ViTa: Visual task specification interface for manipulation with uncalibrated visual servoing

Mona Gridseth; Oscar A. Ramirez; Camilo Perez Quintero; Martin Jagersand

We present a human-robot interface (HRI) for semi-autonomous human-in-the-loop control that aims to tackle some of the challenges robotics faces in unstructured environments. Our HRI lets the user specify desired object alignments as geometric overlays on images in an image editor. The HRI is based on the technique of visual task specification [1], which provides a well-studied theoretical framework. Tasks are completed using uncalibrated image-based visual servoing (UVS). Our interface is shown to be effective for a versatile set of tasks that span both coarse and fine manipulation. Using a Barrett WAM arm and hand, we complete tasks such as inserting a marker into its cap, inserting a small cube into a shape sorter, grasping a circular lid, following a line, grasping a screw, cutting along a line, picking and placing a box, and grasping a cylinder.
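Uncalibrated visual servoing avoids camera calibration by estimating the visual-motor Jacobian online, classically with a rank-one Broyden update, and servoing on that estimate. A minimal sketch on a toy linear system; the gains and the synthetic "camera" are assumptions, not the paper's implementation.

```python
import numpy as np

def broyden_update(J, d_feat, d_q, alpha=0.3):
    """Rank-one Broyden update of the estimated visual-motor Jacobian
    from the observed feature change d_feat after joint motion d_q."""
    d_q = d_q.reshape(-1, 1)
    d_feat = d_feat.reshape(-1, 1)
    return J + alpha * (d_feat - J @ d_q) @ d_q.T / (d_q.T @ d_q)

def uvs_step(J, error, lam=0.2):
    """Joint-space step driving the visual task error toward zero."""
    return -lam * np.linalg.pinv(J) @ error

# Toy linear "camera" with an unknown true Jacobian and a feature offset.
J_true = np.array([[1.5, 0.2], [-0.1, 0.9]])
offset = np.array([0.4, -0.3])
J_est, q = np.eye(2), np.zeros(2)     # crude initial Jacobian guess
feat = J_true @ q + offset
for _ in range(20):
    dq = uvs_step(J_est, feat)
    new_feat = J_true @ (q + dq) + offset
    J_est = broyden_update(J_est, new_feat - feat, dq)
    q, feat = q + dq, new_feat
print(feat)   # near zero: the task converges without camera calibration
```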
