
Publication


Featured research published by Yasir Latif.


IEEE Transactions on Robotics | 2016

Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age

Cesar Cadena; Luca Carlone; Henry Carrillo; Yasir Latif; Davide Scaramuzza; José L. Neira; Ian D. Reid; John J. Leonard

Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a representation of the environment (the map), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications, and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. The paper serves as a tutorial for the non-expert reader. It is also a position paper: by looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors’ take on two questions that often animate discussions during robotics conferences: do robots need SLAM? Is SLAM solved?
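The "de facto standard formulation" the survey refers to is maximum a posteriori estimation over a factor graph, which under Gaussian noise reduces to nonlinear least squares. As an illustrative sketch (not code from the paper), a toy 1-D pose graph with odometry and one loop-closure factor can be solved as a linear least-squares problem; all values here are made up for illustration:

```python
import numpy as np

# Toy 1-D pose graph: poses x0..x3, with x0 anchored at 0 by a prior.
# Each relative measurement z says x_j - x_i = z; MAP estimation with
# Gaussian noise reduces to the linear least-squares system A x = b.
measurements = [
    (0, 1, 1.1),  # (i, j, measured x_j - x_i): odometry
    (1, 2, 0.9),
    (2, 3, 1.2),
    (0, 3, 3.3),  # loop closure: disagrees slightly with odometry sum
]

n = 4
A = np.zeros((len(measurements) + 1, n))
b = np.zeros(len(measurements) + 1)
for row, (i, j, z) in enumerate(measurements):
    A[row, i], A[row, j], b[row] = -1.0, 1.0, z
A[-1, 0], b[-1] = 1.0, 0.0  # prior factor anchoring x0 = 0

# Least-squares solution spreads the loop-closure error over the chain.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x, 3))
```

Real SLAM back-ends (e.g. pose graphs over SE(3)) solve the same structure iteratively with Gauss-Newton or Levenberg-Marquardt; the linear toy case shows the mechanics.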


International Conference on Computer Vision | 2015

Hierarchical Higher-Order Regression Forest Fields: An Application to 3D Indoor Scene Labelling

Trung Pham; Ian D. Reid; Yasir Latif; Stephen Gould

This paper addresses the problem of semantic segmentation of 3D indoor scenes reconstructed from RGB-D images. Traditionally label prediction for 3D points is tackled by employing graphical models that capture scene features and complex relations between different class labels. However, the existing work is restricted to pairwise conditional random fields, which are insufficient when encoding rich scene context. In this work we propose models with higher-order potentials to describe complex relational information from the 3D scenes. Specifically, we relax the labelling problem to a regression, and generalize the higher-order associative P^n Potts model to a new family of arbitrary higher-order models based on regression forests. We show that these models, like the robust P^n models, can still be decomposed into the sum of pairwise terms by introducing auxiliary variables. Moreover, our proposed higher-order models also permit extension to hierarchical random fields, which allows for the integration of scene context and features computed at different scales. Our potential functions are constructed based on regression forests encoding Gaussian densities that admit efficient inference. The parameters of our model are learned from training data using a structured learning approach. Results on two datasets show clear improvements over current state-of-the-art methods.
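The robust P^n model mentioned in the abstract assigns a clique of pixels zero cost when their labels agree, a cost that grows with the number of disagreeing pixels, and a saturated cost beyond a truncation point. A minimal sketch of that potential (illustrative only; parameter names and values are assumptions, not from the paper):

```python
def robust_pn_potts(labels, gamma=1.0, truncation=0.3):
    """Robust P^n Potts cost for one clique: zero when all labels
    agree, rising linearly with the number of disagreeing pixels,
    and capped at gamma once more than a `truncation` fraction of
    the clique disagrees (the truncation parameter Q)."""
    n = len(labels)
    dominant = max(set(labels), key=labels.count)
    n_disagree = sum(1 for lab in labels if lab != dominant)
    q = max(1, int(truncation * n))
    return gamma * min(n_disagree / q, 1.0)

print(robust_pn_potts(["wall"] * 10))             # consistent clique: 0.0
print(robust_pn_potts(["wall"] * 9 + ["floor"]))  # one outlier: partial cost
```

The key property the paper exploits is that such truncated higher-order costs decompose into pairwise terms via auxiliary variables, keeping inference tractable.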


Intelligent Robots and Systems | 2017

Meaningful maps with object-oriented semantic mapping

Niko Sünderhauf; Trung Pham; Yasir Latif; Michael Milford; Ian D. Reid

For intelligent robots to interact in meaningful ways with their environment, they must understand both the geometric and semantic properties of the scene surrounding them. The majority of research to date has addressed these mapping challenges separately, focusing on either geometric or semantic mapping. In this paper we address the problem of building environmental maps that include both semantically meaningful, object-level entities and point- or mesh-based geometrical representations. We simultaneously build geometric point cloud models of previously unseen instances of known object classes and create a map that contains these object models as central entities. Our system leverages sparse, feature-based RGB-D SLAM, image-based deep-learning object detection and 3D unsupervised segmentation.
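The map described above has object-level entities, each carrying its own point-cloud model, as its central data structure. A hypothetical minimal sketch of such a container (class and method names are illustrative assumptions, not the paper's API):

```python
from dataclasses import dataclass, field

@dataclass
class ObjectEntity:
    """One object-level map entity: a semantic label from the
    detector plus the 3D points segmented for this instance."""
    label: str
    points: list = field(default_factory=list)  # [(x, y, z), ...]

@dataclass
class SemanticMap:
    """A map whose central entities are objects, not raw points."""
    objects: dict = field(default_factory=dict)  # object id -> ObjectEntity

    def add_observation(self, obj_id, label, points):
        # Fuse a new detection: create the object on first sight,
        # otherwise grow its per-object point-cloud model.
        entity = self.objects.setdefault(obj_id, ObjectEntity(label))
        entity.points.extend(points)

m = SemanticMap()
m.add_observation(0, "chair", [(0.10, 0.00, 1.20)])
m.add_observation(0, "chair", [(0.12, 0.01, 1.19)])
print(len(m.objects), len(m.objects[0].points))  # one object, two fused points
```

In the actual system, data association (deciding that two detections are the same instance) is the hard part; here `obj_id` is simply given.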


Intelligent Robots and Systems | 2016

Measuring the performance of single image depth estimation methods

Cesar Cadena; Yasir Latif; Ian D. Reid

We consider the question of benchmarking the performance of methods used for estimating the depth of a scene from a single image. We describe various measures that have been used in the past, discuss their limitations and demonstrate that each is deficient in one or more ways. We propose a new measure of performance for depth estimation that overcomes these deficiencies, and has a number of desirable properties. We show that in various cases of interest the new measure enables visualisation of the performance of a method that is otherwise obfuscated by existing metrics. Our proposed method is capable of illuminating the relative performance of different algorithms on different kinds of data, such as the difference in efficacy of a method when estimating the depth of the ground plane versus estimating the depth of other generic scene structure. We showcase the method by comparing a number of existing single-view methods against each other and against more traditional depth estimation methods such as binocular stereo.
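The "measures that have been used in the past" in single-image depth evaluation typically include absolute relative error, RMSE, and threshold accuracy (the fraction of pixels with max(pred/gt, gt/pred) below 1.25). A sketch of those standard metrics follows; the paper's proposed new measure is not reproduced here:

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard single-image depth-estimation error measures
    (the kind the paper critiques as individually deficient):
    absolute relative error, RMSE, and threshold accuracy d < 1.25."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    ratio = np.maximum(pred / gt, gt / pred)
    delta1 = np.mean(ratio < 1.25)
    return abs_rel, rmse, delta1

# Toy example: one of three depths is off by a metre.
print(depth_metrics([2.0, 4.0, 10.0], [2.0, 5.0, 10.0]))
```

One deficiency is visible even here: abs_rel and delta1 weight near and far pixels very differently from RMSE, so rankings between methods can flip depending on which metric is reported.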


International Conference on Robotics and Automation | 2017

Dense monocular reconstruction using surface normals

Chamara Saroj Weerasekera; Yasir Latif; Ravi Garg; Ian D. Reid

This paper presents an efficient framework for dense 3D scene reconstruction using input from a moving monocular camera. Visual SLAM (Simultaneous Localisation and Mapping) approaches based solely on geometric methods have proven to be quite capable of accurately tracking the pose of a moving camera and simultaneously building a map of the environment in real-time. However, most of them suffer from the 3D map being too sparse for practical use. The missing points in the generated map correspond mainly to areas lacking texture in the input images, and dense mapping systems often rely on hand-crafted priors like piecewise-planarity or piecewise-smooth depth. These priors do not always provide the required level of scene understanding to accurately fill the map. On the other hand, Convolutional Neural Networks (CNNs) have had great success in extracting high-level information from images and regressing pixel-wise surface normals, semantics, and even depth. In this work we leverage this high-level scene context learned by a deep CNN in the form of a surface normal prior. We show, in particular, that using the surface normal prior leads to better reconstructions than the weaker smoothness prior.
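The reason a surface-normal prior helps in texture-less regions is that a predicted normal constrains the local depth gradient, so depth can be propagated where photometric matching gives nothing. A 1-D toy illustration (not the paper's formulation, which solves a joint dense optimisation): a normal (nx, nz) at a pixel implies a depth slope dz/dx = -nx/nz, which can be integrated along a scanline:

```python
import numpy as np

def integrate_depth(z0, normals, dx=1.0):
    """Integrate depth along a scanline from per-pixel 2D normals.
    Each normal (nx, nz) implies a local depth slope dz/dx = -nx/nz,
    so normals alone fill in depth where texture is absent."""
    z = [z0]
    for nx, nz in normals:
        z.append(z[-1] + (-nx / nz) * dx)
    return np.array(z)

# A 45-degree slanted surface: constant normal (-0.707..., 0.707...).
n = 0.7071067811865476
depths = integrate_depth(2.0, [(-n, n)] * 3)
print(np.round(depths, 3))  # depth grows by 1 per pixel step
```

The actual system fuses these normal-derived constraints with sparse SLAM depths in a regularised energy, rather than integrating them open-loop as here.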


International Conference on Robotics and Automation | 2017

RRD-SLAM: Radial-distorted rolling-shutter direct SLAM

Jae-Hak Kim; Yasir Latif; Ian D. Reid

In this paper, we present a monocular direct semi-dense SLAM (Simultaneous Localization And Mapping) method that can handle both radial distortion and rolling-shutter distortion. Such distortions are common in, but not restricted to, situations where an inexpensive wide-angle lens and a CMOS sensor are used, and lead to significant inaccuracy in the map and trajectory estimates if not modeled correctly. The apparently naive solution of simply undistorting the images using pre-calibrated parameters does not apply in this case, since rows in the undistorted image are no longer captured at the same time. To address this we develop an algorithm that incorporates radial distortion into an existing state-of-the-art direct semi-dense SLAM system that takes the rolling shutter into account. We propose a method for finding the generalized epipolar curve for each rolling-shutter radially distorted image. Our experiments demonstrate the efficacy of our approach and compare it favorably with the state-of-the-art in direct semi-dense rolling-shutter SLAM.
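The radial distortion being modelled is the standard polynomial model on normalized image coordinates. A minimal sketch (coefficient values are made up for illustration; the paper's contribution is handling this jointly with the rolling shutter, not the model itself):

```python
def radially_distort(x, y, k1, k2):
    """Standard polynomial radial distortion of a normalized image
    point: x_d = x * (1 + k1*r^2 + k2*r^4), and likewise for y.
    With a rolling shutter, undistorting the image first breaks the
    row-to-capture-time correspondence, which is why the distortion
    must be modelled inside the SLAM system instead."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Barrel distortion (k1 < 0) pulls off-centre points toward the centre.
print(radially_distort(0.5, 0.0, k1=-0.2, k2=0.0))
```

Because distorted pixel position determines which sensor row (and hence which capture time) observed a point, distortion and rolling shutter are coupled, motivating the generalized epipolar curves of the paper.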


International Conference on Robotics and Automation | 2018

Addressing Challenging Place Recognition Tasks Using Generative Adversarial Networks

Yasir Latif; Ravi Garg; Michael Milford; Ian D. Reid


arXiv: Robotics | 2018

Structure Aware SLAM using Quadrics and Planes.

Mehdi Hosseinzadeh; Yasir Latif; Trung Pham; Niko Sünderhauf; Ian D. Reid


arXiv: Robotics | 2018

Real-Time Monocular Object-Model Aware Sparse SLAM.

Mehdi Hosseinzadeh; Kejie Li; Yasir Latif; Ian D. Reid


arXiv: Robotics | 2018

Towards Semantic SLAM: Points, Planes and Objects.

Mehdi Hosseinzadeh; Yasir Latif; Trung Pham; Niko Sünderhauf; Ian D. Reid

Collaboration


Dive into Yasir Latif's collaborations.

Top Co-Authors

Ian D. Reid (University of Adelaide)
Michael Milford (Queensland University of Technology)
Niko Sünderhauf (Queensland University of Technology)
Ravi Garg (University of Adelaide)
John J. Leonard (Massachusetts Institute of Technology)
Jae-Hak Kim (Australian National University)