
Publications


Featured research published by Gian Luca Mariottini.


IEEE Transactions on Robotics | 2007

Image-Based Visual Servoing for Nonholonomic Mobile Robots Using Epipolar Geometry

Gian Luca Mariottini; Giuseppe Oriolo; Domenico Prattichizzo

We present an image-based visual servoing strategy for driving a nonholonomic mobile robot equipped with a pinhole camera toward a desired configuration. The proposed approach, which exploits the epipolar geometry defined by the current and desired camera views, does not need any knowledge of the 3-D scene geometry. The control scheme is divided into two steps. In the first, using an approximate input-output linearizing feedback, the epipoles are zeroed so as to align the robot with the goal. Feature points are then used in the second, translational step to reach the desired configuration. Asymptotic convergence to the desired configuration is proven in both the calibrated and partially calibrated cases. Simulation and experimental results show the effectiveness of the proposed control scheme.
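
The two-step structure described above can be illustrated on a simple unicycle model. The sketch below is only a toy planar simulation, not the controller from the paper: the bearing to the goal stands in for the epipole coordinate that is zeroed in the first step, and all gains, thresholds, and step sizes are made up for illustration.

```python
import numpy as np

def step(x, y, th, v, w, dt=0.01):
    """One Euler step of the unicycle kinematics."""
    return x + v*np.cos(th)*dt, y + v*np.sin(th)*dt, th + w*dt

def servo_to_goal(x, y, th, gx=0.0, gy=0.0, gth=0.0):
    # Step 1: rotate until the (emulated) epipole vanishes, i.e. the heading
    # is aligned with the line joining the current and desired positions.
    for _ in range(10_000):
        e = np.arctan2(gy - y, gx - x) - th      # bearing error stands in for the epipole
        if abs(e) < 1e-3:
            break
        x, y, th = step(x, y, th, v=0.0, w=2.0*e)
    # Step 2: translate along the baseline toward the desired position.
    for _ in range(10_000):
        if np.hypot(gx - x, gy - y) < 1e-2:
            break
        x, y, th = step(x, y, th, v=0.5, w=0.0)
    # Final rotation to the desired orientation.
    for _ in range(10_000):
        if abs(gth - th) < 1e-3:
            break
        x, y, th = step(x, y, th, v=0.0, w=2.0*(gth - th))
    return x, y, th

print(servo_to_goal(-2.0, 1.0, 0.5))   # ends close to (0, 0, 0)
```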


IEEE Transactions on Robotics | 2009

Vision-Based Localization for Leader–Follower Formation Control

Gian Luca Mariottini; Fabio Morbidi; Domenico Prattichizzo; N. Vander Valk; Nathan Michael; George J. Pappas; Kostas Daniilidis

This paper deals with vision-based localization for leader-follower formation control. Each unicycle robot is equipped with a panoramic camera that only provides the view angle to the other robots. The localization problem is studied using a new observability condition valid for general nonlinear systems and based on the extended output Jacobian. This allows us to identify those robot motions that preserve the system observability and those that render it nonobservable. The state of the leader-follower system is estimated via the extended Kalman filter, and an input-state feedback control law is designed to stabilize the formation. Simulations and real-data experiments confirm the theoretical results and show the effectiveness of the proposed formation control.
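
As a rough illustration of the estimation machinery, the sketch below shows a single EKF update with a bearing-only measurement on a simplified two-dimensional relative-position state. The paper's filter estimates the full leader-follower state with unicycle dynamics; the state definition, noise values, and prior used here are assumptions made for the example.

```python
import numpy as np

def ekf_bearing_update(x, P, z, R=np.deg2rad(2.0)**2):
    """x = [dx, dy]: leader position in the follower frame; z: measured bearing (rad)."""
    dx, dy = x
    h = np.arctan2(dy, dx)                          # predicted bearing
    r2 = dx**2 + dy**2
    H = np.array([[-dy/r2, dx/r2]])                 # Jacobian of atan2(dy, dx)
    y = np.arctan2(np.sin(z - h), np.cos(z - h))    # innovation, wrapped to (-pi, pi]
    S = H @ P @ H.T + R                             # innovation covariance (1x1)
    K = P @ H.T / S                                 # Kalman gain (2x1)
    x_new = x + (K * y).ravel()
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new

x = np.array([2.0, 0.5])            # prior relative position (illustrative)
P = np.eye(2) * 0.5
x, P = ekf_bearing_update(x, P, z=np.deg2rad(20.0))
print(x, np.diag(P))
```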


Conference on Decision and Control | 2005

Vision-based Localization of Leader-Follower Formations

Gian Luca Mariottini; George J. Pappas; Domenico Prattichizzo; Kostas Daniilidis

This paper focuses on the localization problem for a mobile camera network. In particular, we consider leader-follower formations of nonholonomic mobile vehicles equipped with vision sensors that provide only the bearing to the other robots. We prove a sufficient condition for observability and show that recursive estimation enables a leader-follower formation if the leader is not trapped in an unobservable configuration. We employ an extended Kalman filter to estimate each follower's position and orientation with respect to the leader, and we adopt a feedback-linearizing control strategy to achieve a desired formation. Simulation results in a noisy environment are provided.
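
The role of motion in bearing-only observability can be illustrated numerically with an observability-Gramian test on a drastically simplified model (known headings, relative position only). This is a much weaker, purely numerical check than the paper's analytical sufficient condition, and all values below are illustrative.

```python
import numpy as np

def output_jacobians(p0, v_rel, steps=50, dt=0.1):
    """Analytic Jacobian of the bearing output w.r.t. the initial relative position
    along a trajectory with constant relative velocity (motion is a pure shift)."""
    t = np.arange(steps) * dt
    px, py = p0[0] + v_rel[0]*t, p0[1] + v_rel[1]*t
    r2 = px**2 + py**2
    return np.column_stack([-py/r2, px/r2])   # d atan2(py, px) / d p0 at each time step

def gramian_rank(p0, v_rel):
    J = output_jacobians(p0, v_rel)
    return np.linalg.matrix_rank(J.T @ J, tol=1e-9)

p0 = np.array([2.0, 1.0])
print(gramian_rank(p0, np.array([0.0, 0.0])))   # 1 -> range unobservable without relative motion
print(gramian_rank(p0, np.array([0.3, -0.2])))  # 2 -> relative motion restores observability
```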


IEEE Robotics & Automation Magazine | 2005

EGT for multiple view geometry and visual servoing: robotics vision with pinhole and panoramic cameras

Gian Luca Mariottini; Domenico Prattichizzo

The Epipolar Geometry Toolbox (EGT) for MATLAB is a software package targeted at research and education in computer vision and robotic visual servoing. It provides the user with a wide set of functions for designing multicamera systems with both pinhole and panoramic cameras; these include camera placement and visualization as well as the computation and estimation of epipolar-geometry entities, and several epipolar-geometry estimation algorithms have been implemented. The compatibility of EGT with the Robotics Toolbox enables users to address general vision-based control problems. Two applications of EGT to visual servoing tasks are examined in this article.
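
EGT itself is a MATLAB toolbox; as a rough Python sketch of one basic operation it automates, the snippet below builds the fundamental matrix and epipole of two pinhole cameras from their known projection matrices and verifies the epipolar constraint on a synthetic point. The intrinsics, poses, and test point are arbitrary.

```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ u == np.cross(v, u)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def fundamental_from_projections(P1, P2):
    # Camera centre of the first camera = right null vector of P1.
    _, _, Vt = np.linalg.svd(P1)
    C1 = Vt[-1]
    e2 = P2 @ C1                                  # epipole in the second image (homogeneous)
    F = skew(e2) @ P2 @ np.linalg.pinv(P1)
    return F / np.linalg.norm(F), e2 / np.linalg.norm(e2)

K = np.diag([500.0, 500.0, 1.0])                  # illustrative intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[0.2], [0.1], [-0.5]])])   # translated second view

F, e2 = fundamental_from_projections(P1, P2)
X = np.array([0.5, 0.3, 2.0, 1.0])                # a 3-D point visible in both views
x1, x2 = P1 @ X, P2 @ X
print(x2 @ F @ x1)                                # ~0 up to numerical error: epipolar constraint holds
```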


International Conference on Robotics and Automation | 2007

Leader-Follower Formations: Uncalibrated Vision-Based Localization and Control

Gian Luca Mariottini; Fabio Morbidi; Domenico Prattichizzo; George J. Pappas; Kostas Daniilidis

This paper focuses on leader-follower formations of mobile robots equipped with panoramic cameras and extends earlier work in the literature addressing both the vision-based localization and control problems. First, a new sufficient analytical condition for localizability is proved and used to shed light on the geometric meaning of formation localization with uncalibrated vision sensors, here performed with the unscented Kalman filter. Second, we design a feedback control law based on dynamic extension in order to extend the applicability of our control scheme to the case of distant robots.
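
At the core of an unscented Kalman filter is the unscented transform, which propagates a Gaussian through a nonlinear map via sigma points instead of Jacobians. The sketch below applies it to a bearing measurement model on a simplified relative-position state; the parameter choices and all numbers are illustrative, and angle wrapping is ignored for this small example.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate N(mean, cov) through f using 2n+1 sigma points."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + L.T, mean - L.T])       # sigma points (rows)
    wm = np.full(2*n + 1, 1.0 / (2*(n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    ys = np.array([f(s) for s in sigma])
    y_mean = wm @ ys
    y_cov = sum(w * np.outer(y - y_mean, y - y_mean) for w, y in zip(wc, ys))
    return y_mean, y_cov

mean = np.array([2.0, 0.5])                       # relative position of the leader (illustrative)
cov = np.diag([0.2, 0.2])
bearing = lambda p: np.array([np.arctan2(p[1], p[0])])
print(unscented_transform(mean, cov, bearing))    # predicted bearing mean and covariance
```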


Automatica | 2010

Brief paper: Observer design via Immersion and Invariance for vision-based leader-follower formation control

Fabio Morbidi; Gian Luca Mariottini; Domenico Prattichizzo

The paper introduces a new vision-based range estimator for leader-follower formation control based upon the Immersion and Invariance (I&I) methodology. The proposed reduced-order nonlinear observer is simple to implement, easy to tune, and achieves global asymptotic convergence of the observation error to zero. Observability conditions for the leader-follower system are analytically derived by studying the singularity of the extended output Jacobian. The stability of the closed-loop system arising from the combination of the range estimator and an input-state feedback controller is proved by means of Lyapunov arguments. Simulation experiments illustrate the theory and show the effectiveness of the proposed designs.
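
The sketch below is not the paper's I&I observer (which, among other things, avoids differencing the measurement); it is only a naive discrete range estimator meant to show why range becomes recoverable from a bearing measurement once the relative motion is known and sufficiently exciting. The circular trajectory, gain, and step size are all made up.

```python
import numpy as np

dt, R, omega, k = 0.05, 2.0, 0.25, 200.0
rho_hat = 1.0                      # deliberately wrong initial range guess
beta_prev = 0.0                    # bearing measured at t = 0
for i in range(1, 400):
    t = i * dt
    p = R * np.array([np.cos(omega*t), np.sin(omega*t)])              # true relative position
    v = R * omega * np.array([-np.sin(omega*t), np.cos(omega*t)])     # known relative velocity
    beta = np.arctan2(p[1], p[0])                                     # measured bearing
    d_beta = np.arctan2(np.sin(beta - beta_prev), np.cos(beta - beta_prev))
    los = np.array([np.cos(beta), np.sin(beta)])                      # line-of-sight direction
    v_r, v_t = v @ los, v @ np.array([-los[1], los[0]])               # radial / tangential parts
    # Reduced-order range observer: propagate with the radial velocity and
    # correct with the mismatch between v_t*dt and rho_hat*d_beta.
    rho_hat += v_r*dt + k * d_beta * (v_t*dt - rho_hat*d_beta)
    beta_prev = beta

print(rho_hat)   # converges close to the true range R = 2.0
```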


The International Journal of Robotics Research | 2008

Image-based Visual Servoing with Central Catadioptric Cameras

Gian Luca Mariottini; Domenico Prattichizzo

This paper presents an image-based visual servoing strategy for the autonomous navigation of a mobile holonomic robot from a current towards a desired pose, specified only through a current and a desired image acquired by the on-board central catadioptric camera. This kind of vision sensor combines lenses and mirrors to enlarge the field of view. The proposed visual servoing does not require any metrical information about the three-dimensional viewed scene and is mainly based on a novel geometrical property, the auto-epipolar condition, which occurs when two catadioptric views (current and desired) undergo a pure translation. This condition can be detected in real time in the image domain by observing when a set of so-called disparity conics have a common intersection. The auto-epipolar condition and the pixel distances between the current and target image features are used to design the image-based control law. Lyapunov-based stability analysis and simulation results demonstrate the parametric robustness of the proposed method. Experimental results are presented to show the applicability of our visual servoing in a real context.
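
A planar pinhole analogue of the auto-epipolar test is easy to state: under pure translation, the "disparity lines" joining corresponding image points all meet in a common point, much as the disparity conics of two catadioptric views do. The sketch below checks this on synthetic data; it is a simplification of the paper's condition, and the focal length, poses, and point cloud are arbitrary.

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def common_intersection_residual(pts_cur, pts_des):
    """Join corresponding points of the two views by 'disparity lines' and test
    whether all lines share a common point (residual ~0 exactly when they do)."""
    p1 = np.column_stack([pts_cur, np.ones(len(pts_cur))])
    p2 = np.column_stack([pts_des, np.ones(len(pts_des))])
    lines = np.cross(p1, p2)                                # homogeneous line per correspondence
    lines /= np.linalg.norm(lines, axis=1, keepdims=True)
    s = np.linalg.svd(lines, compute_uv=False)
    return s[-1] / s[0]

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (20, 3)) + np.array([0.0, 0.0, 4.0])   # points in front of the camera
f = 300.0
project = lambda Xc: f * Xc[:, :2] / Xc[:, 2:3]                   # simple pinhole projection

pts_des = project(X)
pts_translated = project(X - np.array([0.2, 0.1, -0.5]))          # pure camera translation
pts_rotated = project(X @ rot_z(np.deg2rad(10.0)).T)              # camera rotation instead

print(common_intersection_residual(pts_translated, pts_des))      # ~0: auto-epipolar analogue holds
print(common_intersection_residual(pts_rotated, pts_des))         # clearly nonzero
```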


Pervasive Technologies Related to Assistive Environments | 2011

A survey and comparison of commercial and open-source robotic simulator software

Aaron Staranowicz; Gian Luca Mariottini

Simulators play an important role in robotics research as tools for testing the efficiency, safety, and robustness of new algorithms. This is of particular importance in scenarios that require robots to closely interact with humans, e.g., in medical robotics and assistive environments. Despite the increasing number of commercial and open-source robotic simulation tools, to the best of our knowledge no comprehensive, up-to-date survey has reviewed and compared their features. This paper presents a detailed overview of, and comparison between, the most recent and popular commercial and open-source robotic software for simulation and for interfacing with real robots. A case study is presented, showing how control code can be ported from a simulation to a real robot. Finally, detailed step-by-step documentation on software installation and usage has been made publicly available on the Internet, together with downloadable code examples.
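
As an illustration of the simulation-to-robot porting idea, the sketch below keeps the controller unchanged and relies on the middleware to route the same velocity command either to a simulator or to hardware. ROS is used here purely as an example middleware and the topic name is an assumption; the paper's case study is not necessarily based on this setup.

```python
import rospy
from geometry_msgs.msg import Twist

def drive_forward(topic="/cmd_vel", speed=0.2, duration=3.0):
    """Publish the same forward-velocity command to a simulated or real robot."""
    rospy.init_node("sim_or_real_demo")
    pub = rospy.Publisher(topic, Twist, queue_size=1)
    cmd = Twist()
    cmd.linear.x = speed                      # identical command for simulator and hardware
    rate = rospy.Rate(10)
    end = rospy.Time.now() + rospy.Duration(duration)
    while not rospy.is_shutdown() and rospy.Time.now() < end:
        pub.publish(cmd)
        rate.sleep()
    pub.publish(Twist())                      # stop the robot

if __name__ == "__main__":
    drive_forward()
```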


IEEE Transactions on Medical Imaging | 2013

A Fast and Accurate Feature-Matching Algorithm for Minimally-Invasive Endoscopic Images

Gustavo A. Puerto-Souza; Gian Luca Mariottini

The ability to find image similarities between two distinct endoscopic views is known as feature matching and is essential in many robotic-assisted minimally-invasive surgery (MIS) applications. Unlike feature-tracking methods, feature matching does not make any restrictive assumption about the chronological order of the two images or about the organ motion: it first obtains a set of appearance-based image matches and then removes possible outliers based on geometric constraints. As a consequence, feature-matching algorithms can be used to recover the position of any image feature after unexpected camera events, such as complete occlusions, sudden endoscopic-camera retraction, or strong illumination changes. We introduce the hierarchical multi-affine (HMA) algorithm, which improves over existing feature-matching methods in the number of image correspondences, speed, accuracy, and robustness. We tested HMA on a large, annotated dataset of more than 100 MIS image pairs obtained from real interventions and containing many of the aforementioned sudden events. In all of these cases, HMA outperforms the existing state-of-the-art methods in terms of speed, accuracy, and robustness. In addition, HMA and the image database are made freely available on the Internet.
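
For context, the snippet below shows the generic two-stage pipeline (appearance-based matching followed by geometric outlier rejection) that feature-matching methods of this kind build on, using off-the-shelf OpenCV components. It is not the HMA algorithm, which clusters matches into multiple local affine models rather than fitting a single one, and the image file names are placeholders.

```python
import cv2
import numpy as np

img1 = cv2.imread("endoscopic_view_1.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv2.imread("endoscopic_view_2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Stage 1: appearance-based matching with a ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
knn = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in knn if m.distance < 0.8 * n.distance]

# Stage 2: geometric outlier rejection, here with a single RANSAC-fitted affine model
# (HMA instead builds a hierarchy of local affine models).
src = np.float32([kp1[m.queryIdx].pt for m in good])
dst = np.float32([kp2[m.trainIdx].pt for m in good])
A, inlier_mask = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC, ransacReprojThreshold=5.0)
print(f"{int(inlier_mask.sum())} inlier matches out of {len(good)}")
```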


International Conference on Robotics and Automation | 2010

A Laser-Aided Inertial Navigation System (L-INS) for human localization in unknown indoor environments

Joel A. Hesch; Faraz M. Mirzaei; Gian Luca Mariottini; Stergios I. Roumeliotis

This paper presents a novel 3-D indoor Laser-aided Inertial Navigation System (L-INS) for the visually impaired. An Extended Kalman Filter (EKF) fuses information from an Inertial Measurement Unit (IMU) and a 2-D laser scanner to concurrently estimate the six-degree-of-freedom (d.o.f.) position and orientation (pose) of the person and a 3-D map of the environment. The IMU measurements are integrated to obtain pose estimates, which are subsequently corrected using line-to-plane correspondences between linear segments in the laser-scan data and orthogonal structural planes of the building. Exploiting the orthogonal building planes ensures fast and efficient initialization and estimation of the map features while providing a human-interpretable layout of the environment. The L-INS is experimentally validated by a person traversing a multistory building, and the results demonstrate the reliability and accuracy of the proposed method for indoor localization and mapping.
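
One ingredient that can be sketched compactly is the line-to-plane constraint used in the filter update: a laser-scan line segment, mapped into the global frame through the current pose estimate, should lie inside its matched structural plane. The residuals below are one way to express such a constraint; the full error-state EKF is not shown, and all numbers are illustrative.

```python
import numpy as np

def line_to_plane_residuals(R_wl, p_wl, seg_dir_l, seg_pt_l, plane_n_w, plane_d_w):
    """R_wl, p_wl: laser-to-world rotation/translation from the state estimate.
    seg_dir_l, seg_pt_l: direction and a point of the laser line segment (laser frame).
    plane_n_w, plane_d_w: unit normal and offset of the matched building plane."""
    dir_w = R_wl @ seg_dir_l
    pt_w = R_wl @ seg_pt_l + p_wl
    r_dir = plane_n_w @ dir_w                    # 0 if the segment is parallel to the plane
    r_pt = plane_n_w @ pt_w - plane_d_w          # 0 if the segment lies on the plane
    return np.array([r_dir, r_pt])

# A wall plane x = 3 (normal along x), observed by a horizontal laser.
n_w, d_w = np.array([1.0, 0.0, 0.0]), 3.0
R_wl = np.eye(3)                                  # estimated laser orientation
p_wl = np.array([1.0, 0.5, 1.2])                  # estimated laser position
seg_dir_l = np.array([0.0, 1.0, 0.0])             # extracted wall segment, laser frame
seg_pt_l = np.array([2.0, 0.3, 0.0])
print(line_to_plane_residuals(R_wl, p_wl, seg_dir_l, seg_pt_l, n_w, d_w))  # [0, 0] for a perfect estimate
```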

Collaboration


Dive into Gian Luca Mariottini's collaborations.

Top Co-Authors

Gustavo A. Puerto-Souza, University of Texas at Arlington
Aaron Staranowicz, University of Texas at Arlington
Garrett R. Brown, University of Texas at Arlington
George J. Pappas, University of Pennsylvania
Jeffrey A. Cadeddu, University of Texas Southwestern Medical Center
Kostas Daniilidis, University of Pennsylvania