
Publication


Featured research published by Fangyi Zhang.


Intelligent Robots and Systems | 2015

Visible Light Communication-based indoor localization using Gaussian Process

Kejie Qiu; Fangyi Zhang; Ming Liu

For mobile robots and position-based services, such as healthcare services, precise localization is a fundamental capability, and low-cost localization solutions are in increasing demand with a potentially wide market. This paper proposes a low-cost localization solution for indoor environments based on a novel Visible Light Communication (VLC) system. A number of modulated LED lights are used as beacons to aid indoor localization in addition to providing illumination. A Gaussian Process (GP) is used to model the intensity distributions of the light sources, and a Bayesian localization framework built on the GP results yields precise localization. Path planning is thereby feasible using only the GP variance field rather than a metric map; a path planner based on Dijkstra's algorithm is adopted to cope with practical situations. We demonstrate the localization system through real-time experiments performed on a tablet PC in an indoor environment.
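The pipeline the abstract describes (GP regression over light-intensity samples, then a Bayesian posterior over candidate positions) can be sketched roughly as follows. This is a minimal illustration assuming a squared-exponential kernel and a single intensity reading; all function names and parameters are invented for the sketch, not taken from the paper:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.5, variance=1.0):
    """Squared-exponential covariance between two sets of 2-D positions."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(X_train, y_train, X_query, noise=1e-3):
    """GP regression: predictive mean and variance of light intensity."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_query, X_train)                     # (n_query, n_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    reduction = np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    var = rbf_kernel(X_query, X_query).diagonal() - reduction
    return mean, np.maximum(var, 1e-9)

def bayes_localize(mean, var, observed_intensity):
    """Posterior over candidate positions from one intensity reading,
    using a Gaussian observation model with the GP's predictive variance."""
    log_lik = -0.5 * (observed_intensity - mean) ** 2 / var
    post = np.exp(log_lik - log_lik.max())
    return post / post.sum()
```

The paper fuses multiple modulated sources; a single reading is enough here to show how the GP's predictive mean and variance define the observation likelihood.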


IEEE Robotics & Automation Magazine | 2016

Let the Light Guide Us: VLC-Based Localization

Kejie Qiu; Fangyi Zhang; Ming Liu

We propose to use visible-light beacons for low-cost indoor localization. Modulated light-emitting diode (LED) lights are adapted for localization as well as illumination. The proposed solution consists of two components: light-signal decomposition and Bayesian localization.


International Conference on Robotics and Automation | 2015

Asynchronous blind signal decomposition using tiny-length code for Visible Light Communication-based indoor localization

Fangyi Zhang; Kejie Qiu; Ming Liu

Indoor localization is a fundamental capability for service robots and for indoor applications on mobile devices, and both cost and performance are of great concern. In this paper, we introduce a lightweight signal encoding and decomposition method for a low-cost, low-power Visible Light Communication (VLC)-based indoor localization system. First, a Gold-sequence-based tiny-length code selection method is introduced for light encoding. Then, a correlation-based asynchronous blind signal decomposition method is developed to separate light mixed from multiple modulated light sources. It decomposes a mixed light-signal package in real time, with an average decomposition time of 20 ms per frame. Using the decomposition results, the localization system achieves an accuracy of 0.56 m. These features outperform other existing low-cost indoor localization approaches, such as WiFiSLAM.
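The correlation step can be illustrated with a small sketch. For simplicity it is synchronous and uses Hadamard rows as stand-ins for Gold sequences (Gold codes trade exact orthogonality for good asynchronous cross-correlation, which this sketch does not model); all names here are hypothetical:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction; n must be a power of two, rows are orthogonal."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def decompose(mixed_frame, codes):
    """Estimate each source's amplitude by normalized correlation with its code.
    Exact for orthogonal codes; Gold sequences are only nearly orthogonal,
    which is the price paid for robust asynchronous operation."""
    codes = np.asarray(codes, dtype=float)
    return mixed_frame @ codes.T / (codes ** 2).sum(axis=1)

codes = hadamard(64)[1:4]                 # skip the all-ones row (ambient light)
mixed = 2.0 * codes[0] + 0.5 * codes[1] + 1.0 * codes[2]
amplitudes = decompose(mixed, codes)      # recovers [2.0, 0.5, 1.0] exactly
```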


Computer Vision and Pattern Recognition | 2017

Tuning Modular Networks with Weighted Losses for Hand-Eye Coordination

Fangyi Zhang; Jürgen Leitner; Michael Milford; Peter Corke

This paper introduces an end-to-end fine-tuning method to improve hand-eye coordination in modular deep visuomotor policies (modular networks) in which each module is trained independently. Using weighted losses, the fine-tuning method significantly improves the performance of the policies on a robotic planar reaching task.
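The weighted-loss idea can be pictured with a toy model: two linear modules (perception mapping observations to poses, control mapping poses to actions) fine-tuned jointly on a weighted sum of a pose loss and a task loss, so task gradients flow back through perception. A numpy sketch with hand-derived gradients; all shapes, weights, and data are hypothetical:

```python
import numpy as np

def finetune(w_pose=0.5, w_task=1.0, lr=0.02, steps=500, seed=1):
    """Jointly fine-tune two linear modules with a weighted sum of losses."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(100, 8))        # observations (e.g. image features)
    Q = X @ rng.normal(size=(8, 3))      # ground-truth poses (hidden mapping)
    A = Q @ rng.normal(size=(3, 2))      # expert actions (hidden controller)

    P = 0.1 * rng.normal(size=(8, 3))    # perception module: observation -> pose
    C = 0.1 * rng.normal(size=(3, 2))    # control module: pose -> action

    def combined_loss():
        q = X @ P
        return (w_pose * ((q - Q) ** 2).mean()
                + w_task * ((q @ C - A) ** 2).mean())

    before = combined_loss()
    for _ in range(steps):
        q = X @ P
        a = q @ C
        dq = 2 * w_pose * (q - Q) / q.size    # pose-loss gradient w.r.t. q
        da = 2 * w_task * (a - A) / a.size    # task-loss gradient w.r.t. a
        P -= lr * (X.T @ (dq + da @ C.T))     # task gradient flows through C into P
        C -= lr * (q.T @ da)
    return before, combined_loss()
```

The relative weights matter because the two losses have different scales; weighting them is what lets both modules be adjusted together without one objective drowning out the other.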


Conference on Automation Science and Engineering | 2015

Visible light communication-based indoor environment modeling and metric-free path planning

Kejie Qiu; Fangyi Zhang; Ming Liu

For mobile robots and position-based services, localization is the most fundamental capability, and path planning is an important application built on top of it. This paper proposes a novel localization and path-planning solution for indoor environments based on a low-cost Visible Light Communication (VLC) system. A number of modulated LED lights are used as beacons to aid indoor localization in addition to providing illumination. A Gaussian Process (GP) is used to model the intensity distributions of the light sources. Path planning is thereby feasible using the GP variance field rather than a metric map, and graph-based path planners are introduced to cope with practical situations. We demonstrate the path-planning system through real-time experiments performed on a tablet PC in an indoor environment.
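A graph-based planner over a variance field can be sketched with stdlib Dijkstra on a grid, where a cell's traversal cost is its GP predictive variance, so the planner prefers well-modeled (well-lit) regions. A minimal sketch under those assumptions; the grid layout and function names are hypothetical:

```python
import heapq

def plan_on_variance_field(cost, start, goal):
    """Dijkstra on a 4-connected grid; entering cell (r, c) costs cost[r][c],
    e.g. the GP predictive variance at that cell."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                            # stale queue entry
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    prev[nxt] = node
                    heapq.heappush(queue, (nd, nxt))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```

On a small field with a high-variance column, the returned path detours through the low-variance gap rather than crossing the poorly modeled cells directly.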


International Conference on Robotics and Automation | 2017

The ACRV picking benchmark: A robotic shelf picking benchmark to foster reproducible research

Jürgen Leitner; Adam W. Tow; Niko Sünderhauf; Jake E. Dean; Joseph W. Durham; Matthew Cooper; Markus Eich; Christopher Lehnert; Ruben Mangels; Christopher McCool; Peter T. Kujala; Lachlan Nicholson; Trung Pham; James Sergeant; Liao Wu; Fangyi Zhang; Ben Upcroft; Peter Corke

Robotic challenges like the Amazon Picking Challenge (APC) or the DARPA Challenges are an established and important way to drive scientific progress. They make research comparable on a well-defined benchmark with equal test conditions for all participants. However, such challenge events occur only occasionally, are limited to a small number of contestants, and the test conditions are very difficult to replicate after the main event. We present a new physical benchmark challenge for robotic picking: the ACRV Picking Benchmark. Designed to be reproducible, it consists of a set of 42 common objects, a widely available shelf, and exact guidelines for object arrangement using stencils. A well-defined evaluation protocol enables the comparison of complete robotic systems — including perception and manipulation — instead of sub-systems only. Our paper also describes and reports results achieved by an open baseline system based on a Baxter robot.


arXiv: Learning | 2015

Towards vision-based deep reinforcement learning for robotic motion control

Fangyi Zhang; Jürgen Leitner; Michael Milford; Ben Upcroft; Peter Corke


ARC Centre of Excellence for Robotic Vision; School of Electrical Engineering & Computer Science; Faculty of Science and Technology | 2017

Modular Deep Q Networks for Sim-to-real Transfer of Visuo-motor Policies

Fangyi Zhang; Jürgen Leitner; Michael Milford; Peter Corke


Archive | 2016

Transferring Vision-based Robotic Reaching Skills from Simulation to Real World

Fangyi Zhang; Jürgen Leitner; Ben Upcroft; Peter Corke


Archive | 2015

Environment Modeling-based Indoor Localization using Visible Light Communication

Kejie Qiu; Fangyi Zhang; Ming Liu

Collaboration


Dive into Fangyi Zhang's collaborations.

Top Co-Authors

Jürgen Leitner
Dalle Molle Institute for Artificial Intelligence Research

Peter Corke
Queensland University of Technology

Kejie Qiu
Hong Kong University of Science and Technology

Ming Liu
Hong Kong University of Science and Technology

Ben Upcroft
Queensland University of Technology

Michael Milford
Queensland University of Technology

Adam W. Tow
Queensland University of Technology

Bahareh Nakisa
Queensland University of Technology

Christopher Lehnert
Queensland University of Technology