Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hongkai Wen is active.

Publication


Featured research published by Hongkai Wen.


Information Processing in Sensor Networks | 2014

Lightweight map matching for indoor localisation using conditional random fields

Zhuoling Xiao; Hongkai Wen; Andrew Markham; Niki Trigoni

Indoor tracking and navigation is a fundamental need for pervasive and context-aware smartphone applications. Although indoor maps are becoming increasingly available, there is no practical and reliable indoor map matching solution available at present. We present MapCraft, a novel, robust and responsive technique that is extremely computationally efficient (running in under 10 ms on an Android smartphone), does not require training in different sites, and tracks well even when presented with very noisy sensor data. Key to our approach is expressing the tracking problem as a conditional random field (CRF), a technique which has had great success in areas such as natural language processing, but has yet to be considered for indoor tracking. Unlike directed graphical models like Hidden Markov Models, CRFs capture arbitrary constraints that express how well observations support state transitions, given map constraints. Extensive experiments in multiple sites show how MapCraft outperforms state-of-the-art approaches, demonstrating excellent tracking error and accurate reconstruction of tortuous trajectories with zero training effort. As proof of its robustness, we also demonstrate how it is able to accurately track the position of a user from accelerometer and magnetometer measurements only (i.e. gyro- and WiFi-free). We believe that such an energy-efficient approach will enable always-on background localisation, enabling a new era of location-aware applications to be developed.
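
As a rough illustration of the idea (not the MapCraft implementation), the sketch below runs Viterbi decoding over a small set of map locations, where the pairwise scores simply forbid transitions between unconnected locations; the state space, scores and toy numbers are all assumptions made for this example.

# Minimal sketch of map-constrained sequence decoding in the spirit of a
# linear-chain CRF (illustrative state space and scores, not MapCraft).
import numpy as np

def viterbi_decode(unary, pairwise):
    """unary: (T, S) observation scores; pairwise: (S, S) transition scores
    encoding map constraints (-inf where two locations are not connected).
    Returns the highest-scoring state sequence."""
    T, S = unary.shape
    score = unary[0].copy()
    backptr = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        total = score[:, None] + pairwise + unary[t][None, :]
        backptr[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]

# Toy example: 4 locations along a corridor, only neighbouring cells connected.
S = 4
pairwise = np.full((S, S), -np.inf)
for s in range(S):
    for d in (-1, 0, 1):
        if 0 <= s + d < S:
            pairwise[s, s + d] = 0.0
unary = np.log(np.array([[0.7, 0.1, 0.1, 0.1],
                         [0.2, 0.6, 0.1, 0.1],
                         [0.1, 0.2, 0.6, 0.1],
                         [0.1, 0.1, 0.2, 0.6]]))
print(viterbi_decode(unary, pairwise))  # [0, 1, 2, 3]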


IEEE Transactions on Wireless Communications | 2015

Non-Line-of-Sight Identification and Mitigation Using Received Signal Strength

Zhuoling Xiao; Hongkai Wen; Andrew Markham; Niki Trigoni; Phil Blunsom; Jeff Frolik

Indoor wireless systems often operate under non-line-of-sight (NLOS) conditions that can cause ranging errors for location-based applications. As such, these applications could benefit greatly from NLOS identification and mitigation techniques. These techniques have been primarily investigated for ultra-wide band (UWB) systems, but little attention has been paid to WiFi systems, which are far more prevalent in practice. In this study, we address the NLOS identification and mitigation problems using multiple received signal strength (RSS) measurements from WiFi signals. Key to our approach is exploiting several statistical features of the RSS time series, which are shown to be particularly effective. We develop and compare two algorithms based on machine learning and a third based on hypothesis testing to separate LOS/NLOS measurements. Extensive experiments in various indoor environments show that our techniques can distinguish between LOS/NLOS conditions with an accuracy of around 95%. Furthermore, the presented techniques improve distance estimation accuracy by 60% as compared to state-of-the-art NLOS mitigation techniques. Finally, improvements in distance estimation accuracy of 50% are achieved even without environment-specific training data, demonstrating the practicality of our approach to real world implementations.
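
The sketch below illustrates the general recipe of classifying LOS/NLOS conditions from RSS time-series statistics on synthetic data; the feature set, window length and SVM classifier are illustrative assumptions rather than the exact choices made in the paper.

# Illustrative LOS/NLOS classification from simple RSS window statistics.
import numpy as np
from scipy import stats
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def rss_features(window):
    """Simple statistics of an RSS window (dBm values)."""
    return np.array([window.mean(), window.std(),
                     stats.skew(window), stats.kurtosis(window),
                     window.max() - window.min()])

rng = np.random.default_rng(0)
# Synthetic data: NLOS windows are noisier and more skewed than LOS ones.
los = [rng.normal(-55, 1.5, 50) for _ in range(200)]
nlos = [rng.normal(-70, 5.0, 50) + rng.exponential(2.0, 50) for _ in range(200)]
X = np.array([rss_features(w) for w in los + nlos])
y = np.array([0] * len(los) + [1] * len(nlos))

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print("LOS/NLOS accuracy:", clf.score(Xte, yte))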


International Conference on Robotics and Automation | 2017

DeepVO: Towards end-to-end visual odometry with deep Recurrent Convolutional Neural Networks

Sen Wang; Ronald Clark; Hongkai Wen; Niki Trigoni

This paper studies the monocular visual odometry (VO) problem. Most existing VO algorithms are developed under a standard pipeline including feature extraction, feature matching, motion estimation, local optimisation, etc. Although some of them have demonstrated superior performance, they usually need to be carefully designed and specifically fine-tuned to work well in different environments. Some prior knowledge is also required to recover an absolute scale for monocular VO. This paper presents a novel end-to-end framework for monocular VO by using deep Recurrent Convolutional Neural Networks (RCNNs). Since it is trained and deployed in an end-to-end manner, it infers poses directly from a sequence of raw RGB images (videos) without adopting any module in the conventional VO pipeline. Based on the RCNNs, it not only automatically learns effective feature representation for the VO problem through Convolutional Neural Networks, but also implicitly models sequential dynamics and relations using deep Recurrent Neural Networks. Extensive experiments on the KITTI VO dataset show performance competitive with state-of-the-art methods, verifying that the end-to-end Deep Learning technique can be a viable complement to traditional VO systems.
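
A minimal PyTorch sketch of the recurrent-convolutional pattern described above: a CNN encodes stacked consecutive frames and an LSTM regresses relative 6-DoF poses over the sequence. The layer sizes, loss-free toy usage and input dimensions are placeholders, not the published DeepVO architecture.

# Rough sketch of a recurrent-convolutional pose regressor (placeholder sizes).
import torch
import torch.nn as nn

class RecurrentVO(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        # CNN encodes each pair of stacked consecutive frames (6 channels).
        self.cnn = nn.Sequential(
            nn.Conv2d(6, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten())
        self.rnn = nn.LSTM(32 * 4 * 4, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 6)  # relative pose: 3 translation + 3 rotation

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) RGB video
        pairs = torch.cat([frames[:, :-1], frames[:, 1:]], dim=2)  # stack consecutive frames
        b, t = pairs.shape[:2]
        feats = self.cnn(pairs.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out)  # one 6-DoF relative pose per frame pair

model = RecurrentVO()
video = torch.randn(2, 5, 3, 64, 64)   # toy clip: 2 sequences of 5 frames
poses = model(video)
print(poses.shape)                     # torch.Size([2, 4, 6])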


International Conference on Indoor Positioning and Indoor Navigation | 2014

Robust pedestrian dead reckoning (R-PDR) for arbitrary mobile device placement

Zhuoling Xiao; Hongkai Wen; Andrew Markham; Niki Trigoni

Pedestrian dead reckoning, especially on smart-phones, is likely to play an increasingly important role in indoor tracking and navigation, due to its low cost and ability to work without any additional infrastructure. A challenge, however, is that positioning, both in terms of step detection and heading estimation, must be accurate and reliable, even when the use of the device is so varied in terms of placement (e.g. handheld or in a pocket) or orientation (e.g. holding the device in either portrait or landscape mode). Furthermore, the placement can vary over time as a user performs different tasks, such as making a call or carrying the device in a bag. A second challenge is to be able to distinguish between a true step and other periodic motion such as swinging an arm or tapping when the placement and orientation of the device is unknown. If this is not done correctly, then the PDR system typically overestimates the number of steps taken, leading to a significant long term error. We present a fresh approach, robust PDR (R-PDR), based on exploiting how bipedal motion impacts acquired sensor waveforms. Rather than attempting to recognize different placements through sensor data, we instead simply determine whether the motion of one or both legs impact the measurements. In addition, we formulate a set of techniques to accurately estimate the device orientation, which allows us to very accurately (typically over 99%) reject false positives. We demonstrate that regardless of device placement, we are able to detect the number of steps taken with >99.4% accuracy. R-PDR thus addresses the two main limitations facing existing PDR techniques.
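
The sketch below shows a generic step counter over accelerometer magnitude with a simple regularity check to reject non-step motion; the thresholds and peak-based heuristic are illustrative assumptions and are far simpler than R-PDR's waveform analysis.

# Toy step detection from accelerometer magnitude (illustrative thresholds).
import numpy as np
from scipy.signal import find_peaks

def count_steps(acc, fs=50.0):
    """acc: (N, 3) accelerometer samples in m/s^2, fs: sampling rate in Hz."""
    mag = np.linalg.norm(acc, axis=1) - 9.81          # remove gravity magnitude
    # Candidate steps: peaks with plausible prominence and spacing (>0.3 s).
    peaks, _ = find_peaks(mag, prominence=1.0, distance=int(0.3 * fs))
    if len(peaks) < 2:
        return 0
    # Reject candidates whose spacing is wildly irregular (likely not walking).
    gaps = np.diff(peaks) / fs
    regular = np.abs(gaps - np.median(gaps)) < 0.25
    return int(regular.sum()) + 1

# Synthetic walk: 2 Hz step cadence for 10 s, sampled at 50 Hz.
t = np.arange(0, 10, 1 / 50.0)
acc = np.stack([0.2 * np.random.randn(len(t)),
                0.2 * np.random.randn(len(t)),
                9.81 + 2.0 * np.sin(2 * np.pi * 2.0 * t)], axis=1)
print(count_steps(acc))  # close to 20 steps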


Mobile Ad Hoc and Sensor Systems | 2014

Fusion of Radio and Camera Sensor Data for Accurate Indoor Positioning

Savvas Papaioannou; Hongkai Wen; Andrew Markham; Niki Trigoni

Indoor positioning systems have received a lot of attention recently due to their importance for many location-based services, e.g. indoor navigation and smart buildings. Lightweight solutions based on WiFi and inertial sensing have gained popularity, but are not fit for demanding applications, such as expert museum guides and industrial settings, which typically require sub-meter location information. In this paper, we propose a novel positioning system, RAVEL (Radio And Vision Enhanced Localization), which fuses anonymous visual detections captured by widely available camera infrastructure, with radio readings (e.g. WiFi radio data). Although visual trackers can provide excellent positioning accuracy, they are plagued by issues such as occlusions and people entering/exiting the scene, preventing their use as a robust tracking solution. By incorporating radio measurements, visually ambiguous or missing data can be resolved through multi-hypothesis tracking. We evaluate our system in a complex museum environment with dim lighting and multiple people moving around in a space cluttered with exhibit stands. Our experiments show that although the WiFi measurements are not by themselves sufficiently accurate, when they are fused with camera data, they become a catalyst for pulling together ambiguous, fragmented, and anonymous visual tracklets into accurate and continuous paths, yielding typical errors below 1 meter.
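
The toy sketch below hints at why fusion helps: coarse radio fixes are used to gate anonymous camera detections, falling back to radio when no detection is nearby. This nearest-detection heuristic is an assumption for illustration only and is far simpler than the multi-hypothesis tracking used in RAVEL.

# Simplified radio/camera fusion: radio fixes disambiguate anonymous detections.
import numpy as np

def fuse(wifi_pos, camera_detections, gate=3.0):
    """wifi_pos: (T, 2) coarse radio fixes in metres.
    camera_detections: list of (K_t, 2) arrays of anonymous detections per frame.
    Returns a fused (T, 2) trajectory."""
    fused = []
    for w, dets in zip(wifi_pos, camera_detections):
        if len(dets) == 0:
            fused.append(w)                          # no detection: fall back to radio
            continue
        d = np.linalg.norm(dets - w, axis=1)
        i = int(d.argmin())
        fused.append(dets[i] if d[i] < gate else w)  # take the nearby, accurate detection
    return np.array(fused)

# Toy example: true path along a corridor; WiFi is noisy, the camera is precise
# but also sees a second (anonymous) person.
rng = np.random.default_rng(1)
truth = np.stack([np.linspace(0, 10, 20), np.zeros(20)], axis=1)
wifi = truth + rng.normal(0, 2.0, truth.shape)
other = truth + np.array([0.0, 6.0])
cams = [np.stack([p + rng.normal(0, 0.1, 2), q]) for p, q in zip(truth, other)]
est = fuse(wifi, cams)
print("WiFi RMSE:", np.sqrt(((wifi - truth) ** 2).mean()))
print("Fused RMSE:", np.sqrt(((est - truth) ** 2).mean()))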


Conference on Computer Supported Cooperative Work | 2012

Operational transformation for orthogonal conflict resolution in real-time collaborative 2D editing systems

Chengzheng Sun; Hongkai Wen; Hongfei Fan

Operational Transformation (OT) is commonly used for conflict resolution in real-time collaborative applications, but none of the existing OT techniques is able to solve a special type of conflict - orthogonal conflict, which may occur when concurrent operations are inserting/deleting an arbitrary number of objects in different dimensions of a two-dimensional (2D) workspace, such as spreadsheet documents. This paper is the first to identify and solve the orthogonal conflict problem by extending OT with a new capability of resolving 2D conflicts. Extending OT from one- to two-dimensional conflict resolution is fundamental to the theory and application of OT, and technically challenging as well, because 2D orthogonal conflict is different from but intimately related to the one-dimensional positional shifting conflict and necessitates new and integral solutions for multi-dimensional conflicts. In this paper, we present formal definitions of orthogonal conflict, pseudo-code description, design rationale analysis, and correctness verification and complexity analysis of the 2DOT solution.
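
For flavour, the sketch below transforms a cell edit against concurrent row and column insertions in a 2D grid; the operation types (SetCell, InsertRow, InsertCol) are hypothetical, and it only shows basic positional shifting, not the orthogonal-conflict resolution contributed by 2DOT.

# Toy transform of a cell edit against concurrent row/column insertions.
from dataclasses import dataclass

@dataclass
class SetCell:          # set the value of one cell
    row: int
    col: int
    value: str

@dataclass
class InsertRow:        # insert an empty row before index `row`
    row: int

@dataclass
class InsertCol:        # insert an empty column before index `col`
    col: int

def transform(op: SetCell, against) -> SetCell:
    """Shift op's coordinates so it still targets the same logical cell after
    a concurrent row/column insertion has been applied."""
    if isinstance(against, InsertRow) and against.row <= op.row:
        return SetCell(op.row + 1, op.col, op.value)
    if isinstance(against, InsertCol) and against.col <= op.col:
        return SetCell(op.row, op.col + 1, op.value)
    return op

edit = SetCell(row=2, col=3, value="42")
print(transform(edit, InsertRow(row=1)))   # SetCell(row=3, col=3, value='42')
print(transform(edit, InsertCol(col=5)))   # unchanged: insertion is to the right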


International Conference on Embedded Wireless Systems and Networks | 2013

On assessing the accuracy of positioning systems in indoor environments

Hongkai Wen; Zhuoling Xiao; Niki Trigoni; Phil Blunsom

As industrial and academic communities become increasingly interested in Indoor Positioning Systems (IPSs), a plethora of technologies are gaining maturity and competing for adoption in the global smartphone market. In the near future, we expect busy places, such as schools, airports, hospitals and large businesses, to be outfitted with multiple IPS infrastructures, which need to coexist, collaborate and/or compete for users. In this paper, we examine the novel problem of estimating the accuracy of co-located positioning systems, and selecting which one to use where. This is challenging because 1) we do not possess knowledge of the ground truth, which makes it difficult to empirically estimate the accuracy of an indoor positioning system; and 2) the accuracy reported by a positioning system is not always a faithful representation of the real accuracy. In order to address these challenges, we model the process of a user moving in an indoor environment as a Hidden Markov Model (HMM), and augment the model to take into account vector (instead of scalar) observations, and prior knowledge about user mobility drawn from personal electronic calendars. We then propose an extension of the Baum-Welch algorithm to learn the parameters of the augmented HMM. The proposed HMM-based approach to learning the accuracy of indoor positioning systems is validated and tested against competing approaches in several real-world indoor settings.
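
The sketch below shows only the basic HMM machinery that the paper builds on: a scaled forward pass computing the likelihood of an observation sequence. The two-room model and all numbers are made up, and the paper's augmentations (vector observations, calendar priors, the extended Baum-Welch algorithm) are not shown.

# Minimal scaled forward algorithm for a discrete HMM (made-up numbers).
import numpy as np

def forward(pi, A, B, obs):
    """pi: (S,) initial distribution, A: (S, S) transition matrix,
    B: (S, O) emission probabilities, obs: sequence of observation indices.
    Returns the log-likelihood of the observation sequence."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    for o in obs[1:]:
        alpha = alpha / alpha.sum()      # rescale to avoid numerical underflow
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
    return loglik

# Two rooms, two coarse observation symbols reported by a positioning system.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.8, 0.2], [0.3, 0.7]])   # how often each room yields each symbol
print(forward(pi, A, B, [0, 0, 1, 1, 1]))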


International Conference on Data Engineering | 2012

Ranking Query Answers in Probabilistic Databases: Complexity and Efficient Algorithms

Dan Olteanu; Hongkai Wen

In many applications of probabilistic databases, the probabilities are mere degrees of uncertainty in the data and are not otherwise meaningful to the user. Often, users care only about the ranking of answers in decreasing order of their probabilities or about a few most likely answers. In this paper, we investigate the problem of ranking query answers in probabilistic databases. We give a dichotomy for ranking in the case of conjunctive queries without repeating relation symbols: it is either in polynomial time or NP-hard. Surprisingly, our syntactic characterisation of tractable queries is not the same as for probability computation. The key observation is that there are queries for which probability computation is #P-hard, yet ranking can be computed in polynomial time. This is possible whenever probability computation for distinct answers has a common factor that is hard to compute but irrelevant for ranking. We complement this tractability analysis with an effective ranking technique for conjunctive queries. Given a query, we construct a share plan, which exposes subqueries whose probability computation can be shared or ignored across query answers. Our technique combines share plans with incremental approximate probability computation of subqueries. We implemented our technique in the SPROUT query engine and report on performance gains of orders of magnitude over Monte Carlo simulation using FPRAS and exact probability computation based on knowledge compilation.
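
The sketch below illustrates the general "rank with probability bounds, refine only as needed" idea on made-up numbers: each answer keeps a lower and upper probability bound, and bounds are tightened only until the top answer is separated. The interval-halving refinement stands in for the paper's incremental approximation and is not the SPROUT implementation.

# Toy top-1 ranking with probability bounds, refined lazily.
def rank_top1(answers, exact_prob, tol=1e-3):
    """answers: ids; exact_prob(a): an expensive probability oracle.
    Bounds start at [0, 1] and are narrowed towards the exact value."""
    bounds = {a: [0.0, 1.0] for a in answers}
    while True:
        best = max(answers, key=lambda a: bounds[a][1])
        lo, hi = bounds[best]
        # Done when best's lower bound beats every other answer's upper bound.
        if all(a == best or lo >= bounds[a][1] for a in answers):
            return best
        # Refine the most uncertain answer (halving its interval around the
        # exact value stands in for cheaper incremental approximation steps).
        widest = max(answers, key=lambda a: bounds[a][1] - bounds[a][0])
        p = exact_prob(widest)
        w = (bounds[widest][1] - bounds[widest][0]) / 4
        bounds[widest] = [max(p - w, 0.0), min(p + w, 1.0)]
        if w < tol:
            return best

probs = {"a1": 0.72, "a2": 0.35, "a3": 0.70}
print(rank_top1(list(probs), probs.__getitem__))   # 'a1'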


IEEE Transactions on Mobile Computing | 2015

Indoor Tracking Using Undirected Graphical Models

Zhuoling Xiao; Hongkai Wen; Andrew Markham; Niki Trigoni

Indoor tracking and navigation is a fundamental need for pervasive and context-aware smartphone applications. Although indoor maps are becoming increasingly available, there is no practical and reliable indoor map matching solution available at present. We present MapCraft, a novel, robust and responsive technique that is extremely computationally efficient (running in under 10 ms on an Android smartphone), does not require training in different sites, and tracks well even when presented with very noisy sensor data. Key to our approach is expressing the tracking problem as a conditional random field (CRF), a technique which has had great success in areas such as natural language processing. Unlike directed graphical models like Hidden Markov Models, CRFs capture arbitrary constraints that express how well observations support state transitions, given map constraints. In addition, we show how to further improve tracking accuracy, by tuning the parameters of the motion sensing model using an unsupervised EM-style optimization scheme. Extensive experiments in multiple sites show how MapCraft outperforms state-of-the-art approaches, demonstrating excellent tracking error and accurate reconstruction of tortuous trajectories with zero training effort. As proof of its robustness, we also demonstrate how it is able to accurately track the position of a user from accelerometer and magnetometer measurements only (i.e., gyro- and Wi-Fi-free). We believe that such an energy-efficient approach will enable always-on background localisation, enabling a new era of location-aware applications to be developed.
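
As a loose analogy to the unsupervised, EM-style tuning mentioned above, the sketch below alternates between snapping a dead-reckoned path onto map waypoints and re-fitting a step-length scale from the snapped path; the model, corridor map and numbers are invented for illustration and do not reflect the paper's CRF parameter updates.

# Toy alternating "decode then re-fit" loop for a step-length scale.
import numpy as np

def snap_to_map(positions, waypoints):
    """'E-step' analogue: assign each position to its nearest map waypoint."""
    d = np.linalg.norm(positions[:, None, :] - waypoints[None, :, :], axis=2)
    return waypoints[d.argmin(axis=1)]

def tune_step_scale(raw_steps, heading, waypoints, iters=10):
    scale = 1.0
    for _ in range(iters):
        # Dead-reckon with the current scale, then snap onto the map.
        disp = scale * raw_steps[:, None] * np.stack([np.cos(heading), np.sin(heading)], axis=1)
        path = np.cumsum(disp, axis=0)
        snapped = snap_to_map(path, waypoints)
        # 'M-step' analogue: rescale so the dead-reckoned length matches the snapped path.
        scale *= np.linalg.norm(np.diff(snapped, axis=0), axis=1).sum() / \
                 max(np.linalg.norm(np.diff(path, axis=0), axis=1).sum(), 1e-9)
    return scale

waypoints = np.stack([np.arange(0.0, 21.0), np.zeros(21)], axis=1)  # 20 m corridor, 1 m spacing
heading = np.zeros(25)                   # walking straight along the corridor
# 25 detected steps whose true length is 0.8 m; the initial scale guess is 1.0 m.
print(tune_step_scale(np.ones(25), heading, waypoints))   # roughly 0.8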


Computer Vision and Pattern Recognition | 2017

VidLoc: A Deep Spatio-Temporal Model for 6-DoF Video-Clip Relocalization

Ronald Clark; Sen Wang; Andrew Markham; Niki Trigoni; Hongkai Wen

Machine learning techniques, namely convolutional neural networks (CNN) and regression forests, have recently shown great promise in performing 6-DoF localization of monocular images. However, in most cases image sequences, rather than only single images, are readily available. To this extent, none of the proposed learning-based approaches exploit the valuable constraint of temporal smoothness, often leading to situations where the per-frame error is larger than the camera motion. In this paper we propose a recurrent model for performing 6-DoF localization of video-clips. We find that, even by considering only short sequences (20 frames), the pose estimates are smoothed and the localization error can be drastically reduced. Finally, we consider means of obtaining probabilistic pose estimates from our model. We evaluate our method on openly-available real-world autonomous driving and indoor localization datasets.
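
A minimal PyTorch sketch of the clip-based idea: a per-frame CNN encoder followed by a bidirectional LSTM, so each frame's pose estimate benefits from temporal context within the clip. The layer sizes are placeholders, not the published VidLoc architecture.

# Rough sketch of clip-based 6-DoF relocalization (placeholder sizes).
import torch
import torch.nn as nn

class ClipRelocalizer(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten())
        self.rnn = nn.LSTM(32 * 4 * 4, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 6)   # absolute 6-DoF pose per frame

    def forward(self, clip):
        # clip: (batch, time, 3, H, W), e.g. a 20-frame video snippet
        b, t = clip.shape[:2]
        feats = self.encoder(clip.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)               # each frame sees past and future frames
        return self.head(out)

model = ClipRelocalizer()
clip = torch.randn(1, 20, 3, 64, 64)
print(model(clip).shape)                       # torch.Size([1, 20, 6])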

Collaboration


Dive into Hongkai Wen's collaborations.

Top Co-Authors

Sen Wang (University of Oxford)
Yiran Shen (Harbin Engineering University)
Bowen Du (University of Warwick)
Bo Yang (University of Oxford)