Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Lili Meng is active.

Publication


Featured research published by Lili Meng.


international conference on computer science and education | 2014

3D visual SLAM for an assistive robot in indoor environments using RGB-D cameras

Lili Meng; Clarence W. de Silva; Jie Zhang

With a growing global aging population, assistive robots are becoming increasingly important. This paper presents an integrated hardware and software architecture for assistive robots. This modular and reusable software framework incorporates perception and navigation capabilities. The paper also presents a system for three-dimensional (3D) vision-based simultaneous localization and mapping (SLAM) using a Red-Green-Blue and Depth (RGB-D) camera, and illustrates its application on an assistive robot. ORB features and depth information are extracted for ego-motion estimation. The Random Sample Consensus (RANSAC) algorithm is used for outlier removal, while the integration of RGB-D data and the iterative closest point (ICP) algorithm is used for alignment. Pose-graph optimization is performed with g2o. Finally, a 3D volumetric map is generated for further navigation.
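
The front end of this pipeline can be illustrated with a short sketch. This is not the authors' implementation; it assumes OpenCV and NumPy, a depth map aligned with the RGB image, and a camera intrinsic matrix K, and it covers only the ORB-plus-RANSAC ego-motion step (ICP refinement, g2o pose-graph optimization, and volumetric mapping are omitted).

```python
# Minimal sketch of RGB-D ego-motion with ORB features and RANSAC (assumed setup).
import cv2
import numpy as np

def estimate_ego_motion(rgb_prev, depth_prev, rgb_curr, K):
    """Estimate the relative pose between two RGB-D frames (rotation, translation)."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(rgb_prev, None)
    kp2, des2 = orb.detectAndCompute(rgb_curr, None)

    # Match binary ORB descriptors with Hamming distance and cross-checking.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    # Back-project keypoints of the previous frame to 3D using the depth map.
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    pts3d, pts2d = [], []
    for m in matches:
        u, v = kp1[m.queryIdx].pt
        z = float(depth_prev[int(v), int(u)])
        if z <= 0:
            continue  # skip pixels with invalid depth
        pts3d.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
        pts2d.append(kp2[m.trainIdx].pt)

    # RANSAC-based PnP rejects outlier correspondences and estimates the pose.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.float32(pts3d), np.float32(pts2d), K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```

In the full system, the resulting frame-to-frame transforms would be refined by ICP and added as constraints to the pose graph optimized with g2o.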


british machine vision conference | 2016

Exploiting Random RGB and Sparse Features for Camera Pose Estimation

Lili Meng; Jianhui Chen; Frederick Tung; James J. Little; Clarence W. de Silva

We address the problem of estimating camera pose relative to a known scene, given a single RGB image. We extend recent advances in scene coordinate regression forests for camera relocalization in RGB-D images to use RGB features, enabling camera relocalization from a single RGB image. Furthermore, we integrate random RGB features and sparse feature matching in an efficient and accurate way, broadening the method to fast sports camera calibration in highly dynamic scenes. We evaluate our method on both static, small-scale and dynamic, large-scale datasets with challenging camera poses. The proposed method is compared with several strong baselines. Experimental results demonstrate the efficacy of our approach, showing performance superior to or on par with the state of the art.
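
The "random RGB feature" used in scene coordinate regression forests can be sketched as a simple pixel-comparison response, as below. The offsets, channels, and usage shown here are illustrative placeholders, not the paper's learned parameters.

```python
# Illustrative sketch of a random RGB pixel-comparison feature (assumed parameters).
import numpy as np

def random_rgb_feature(image, p, offset1, offset2, c1, c2):
    """Split response: difference of two colour samples around pixel p.

    image            -- H x W x 3 RGB array
    p                -- (x, y) pixel location
    offset1, offset2 -- (dx, dy) offsets drawn at random during training
    c1, c2           -- colour channel indices in {0, 1, 2}
    """
    h, w = image.shape[:2]

    def sample(offset, c):
        x = int(np.clip(p[0] + offset[0], 0, w - 1))
        y = int(np.clip(p[1] + offset[1], 0, h - 1))
        return float(image[y, x, c])

    return sample(offset1, c1) - sample(offset2, c2)

# At each internal tree node this response is thresholded to route the pixel
# left or right; leaves store scene-coordinate predictions that are then used
# for pose estimation with RANSAC.
```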


international conference on robotics and automation | 2017

The Raincouver Scene Parsing Benchmark for Self-Driving in Adverse Weather and at Night

Frederick Tung; Jianhui Chen; Lili Meng; James J. Little

Self-driving vehicles have the potential to transform the way we travel. Their development is at a pivotal point, as a growing number of industrial and academic research organizations are bringing these technologies into controlled but real-world settings. An essential capability of a self-driving vehicle is environment understanding: where are the pedestrians, the other vehicles, and the drivable space? In computer and robot vision, the task of identifying semantic categories at the per-pixel level is known as scene parsing or semantic segmentation. While much progress has been made in scene parsing in recent years, current datasets for training and benchmarking scene parsing algorithms focus on nominal driving conditions: fair weather and mostly daytime lighting. To complement the standard benchmarks, we introduce the Raincouver scene parsing benchmark, which to our knowledge is the first scene parsing benchmark to focus on challenging rainy driving conditions during the day, at dusk, and at night. Our dataset comprises half an hour of driving video captured on the roads of Vancouver, Canada, and 326 frames with hand-annotated pixelwise semantic labels.
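
As a concrete illustration of how pixelwise labels in such a benchmark are typically scored, the sketch below computes per-class intersection-over-union between a predicted label map and a hand-annotated ground truth. It is a generic evaluation routine, not the benchmark's official tooling.

```python
# Generic per-class IoU for semantic segmentation (not the official evaluation code).
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """pred, gt: H x W integer label maps; returns one IoU value per class."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        ious.append(inter / union if union > 0 else float('nan'))
    return ious
```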


international conference on advanced robotics | 2017

Towards autonomous exploration with information potential field in 3D environments

Chaoqun Wang; Lili Meng; Teng Li; Clarence W. de Silva; Max Q.-H. Meng

Autonomous exploration is one of the key components of 3D active perception for flying robots. Fast and accurate exploration algorithms are essential for aerial vehicles because of their limited flight endurance. In this paper, we address the problem of exploring the environment and acquiring information with an aerial vehicle within this limited endurance. We propose a method based on an information potential field for autonomous exploration in 3D environments. In contrast to existing approaches that consider only either the distance traveled or the information collected during exploration, our method takes both travel cost and information gain into account. The next best viewpoint is chosen by a multi-objective function that considers the information of several candidate regions and the path cost to reach them. The selected goal attracts the robot, while known obstacles exert repulsive forces; the combined force drives the robot to explore the environment. Unlike planners that use all acquired global information, our planner considers only the selected goal and nearby obstacles, which is more efficient in high-dimensional environments. Furthermore, we present a method to help the robot escape when it becomes trapped. The experimental results demonstrate the efficiency and efficacy of the proposed method.
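
The attractive/repulsive force combination can be sketched as a classical potential-field update, as below. The gains and influence radius are illustrative placeholders rather than the paper's tuned values, and the information-gain term used for goal selection is omitted.

```python
# Minimal potential-field sketch: attraction to the selected goal plus repulsion
# from nearby obstacles (assumed gains and influence radius).
import numpy as np

def potential_field_force(position, goal, obstacles,
                          k_att=1.0, k_rep=0.5, influence_radius=2.0):
    """Combined force on the robot at `position` (3D NumPy arrays)."""
    # Attractive force pulls the robot toward the selected goal.
    force = k_att * (goal - position)

    # Each nearby obstacle adds a repulsive term that grows as the robot
    # approaches it; obstacles outside the influence radius are ignored.
    for obs in obstacles:
        diff = position - obs
        d = np.linalg.norm(diff)
        if 0.0 < d < influence_radius:
            force += k_rep * (1.0 / d - 1.0 / influence_radius) * diff / d**3
    return force
```

Integrating this force at each step moves the robot toward the chosen viewpoint while steering it around known obstacles; the escape mechanism mentioned above would take over when the net force traps the robot in a local minimum.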


national conference on artificial intelligence | 2018

Reversible Architectures for Arbitrarily Deep Residual Neural Networks

Bo Chang; Lili Meng; Eldad Haber; Lars Ruthotto; David Begert; Elliot Holtham


international conference on learning representations | 2018

Multi-level Residual Networks from Dynamical Systems View

Bo Chang; Lili Meng; Eldad Haber; Frederick Tung; David Begert


intelligent robots and systems | 2017

Backtracking regression forests for accurate camera relocalization

Lili Meng; Jianhui Chen; Frederick Tung; James J. Little; Julien P. C. Valentin; Clarence W. de Silva


workshop on applications of computer vision | 2018

Camera Selection for Broadcasting Soccer Games

Jianhui Chen; Lili Meng; James J. Little


international conference on robotics and automation | 2018

Learning Motion Predictors for Smart Wheelchair Using Autoregressive Sparse Gaussian Process

Zicong Fan; Lili Meng; Tian Qi Chen; Jingchun Li; Ian M. Mitchell


arXiv: Computer Vision and Pattern Recognition | 2018

Where and When to Look? Spatio-temporal Attention for Action Recognition in Videos

Lili Meng; Bo Zhao; Bo Chang; Gao Huang; Frederick Tung; Leonid Sigal

Collaboration


Dive into Lili Meng's collaborations.

Top Co-Authors

Frederick Tung, University of British Columbia
Clarence W. de Silva, University of British Columbia
James J. Little, University of British Columbia
Jianhui Chen, University of British Columbia
Bo Chang, University of British Columbia
Eldad Haber, University of British Columbia
Ian M. Mitchell, University of British Columbia
Teng Li, University of British Columbia
Max Q.-H. Meng, The Chinese University of Hong Kong