Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Kwang-Ting Tim Cheng is active.

Publication


Featured research published by Kwang-Ting Tim Cheng.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2014

Local Difference Binary for Ultrafast and Distinctive Feature Description

Xin Yang; Kwang-Ting Tim Cheng

The efficiency and quality of a feature descriptor are critical to the user experience of many computer vision applications. However, the existing descriptors are either too computationally expensive to achieve real-time performance, or not sufficiently distinctive to identify correct matches from a large database with various transformations. In this paper, we propose a highly efficient and distinctive binary descriptor, called local difference binary (LDB). LDB directly computes a binary string for an image patch using simple intensity and gradient difference tests on pairwise grid cells within the patch. A multiple-gridding strategy and a salient bit-selection method are applied to capture the distinct patterns of the patch at different spatial granularities. Experimental results demonstrate that compared to the existing state-of-the-art binary descriptors, primarily designed for speed, LDB has similar construction efficiency, while achieving a greater accuracy and faster speed for mobile object recognition and tracking tasks.
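The grid-cell difference tests described above can be sketched as follows. This is an illustrative simplification, not the paper's full pipeline: a single 3×3 gridding, three per-cell features (mean intensity, mean x-gradient, mean y-gradient), and exhaustive cell pairing stand in for LDB's multiple-gridding strategy and salient bit selection.

```python
import numpy as np

def ldb_descriptor(patch, grid=3):
    """Sketch of an LDB-style binary descriptor: split the patch into
    grid x grid cells, compute mean intensity and mean x/y gradients per
    cell, then emit one bit per (cell pair, feature) difference test."""
    h, w = patch.shape
    ch, cw = h // grid, w // grid
    feats = []  # (mean intensity, mean dx, mean dy) for each cell
    for i in range(grid):
        for j in range(grid):
            cell = patch[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw].astype(float)
            dx = np.diff(cell, axis=1).mean()  # mean horizontal gradient
            dy = np.diff(cell, axis=0).mean()  # mean vertical gradient
            feats.append((cell.mean(), dx, dy))
    bits = []
    for a in range(len(feats)):
        for b in range(a + 1, len(feats)):
            for k in range(3):  # intensity test, dx test, dy test
                bits.append(1 if feats[a][k] > feats[b][k] else 0)
    return np.array(bits, dtype=np.uint8)
```

With a 3×3 grid this yields 9 cells, 36 cell pairs, and 108 bits; the real descriptor additionally selects a compact subset of salient bits across several grid granularities.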


IEEE Transactions on Visualization and Computer Graphics | 2014

Learning Optimized Local Difference Binaries for Scalable Augmented Reality on Mobile Devices

Xin Yang; Kwang-Ting Tim Cheng

The efficiency, robustness and distinctiveness of a feature descriptor are critical to the user experience and scalability of a mobile augmented reality (AR) system. However, existing descriptors are either too computationally expensive to achieve real-time performance on a mobile device such as a smartphone or tablet, or not sufficiently robust and distinctive to identify correct matches from a large database. As a result, current mobile AR systems still only have limited capabilities, which greatly restrict their deployment in practice. In this paper, we propose a highly efficient, robust and distinctive binary descriptor, called Learning-based Local Difference Binary (LLDB). LLDB directly computes a binary string for an image patch using simple intensity and gradient difference tests on pairwise grid cells within the patch. To select an optimized set of grid cell pairs, we densely sample grid cells from an image patch and then leverage a modified AdaBoost algorithm to automatically extract a small set of critical ones with the goal of maximizing the Hamming distance between mismatches while minimizing it between matches. Experimental results demonstrate that LLDB is extremely fast to compute and to match against a large database due to its high robustness and distinctiveness. Compared to the state-of-the-art binary descriptors, primarily designed for speed, LLDB has similar efficiency for descriptor construction, while achieving a greater accuracy and faster matching speed when matching over a large database with 2.3M descriptors on mobile devices.
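The selection objective (disagree on mismatches, agree on matches) can be illustrated with a simple greedy scoring pass. This is a stand-in for the paper's modified AdaBoost, and the input format, per-pair 0/1 bit-disagreement matrices, is an assumption made for the sketch.

```python
import numpy as np

def select_bits(match_diffs, mismatch_diffs, n_bits):
    """Greedy stand-in for LLDB's learned bit selection.  Each input is a
    (pairs x candidate_bits) 0/1 array where 1 means the candidate bit
    disagrees across that patch pair.  Score each bit by its disagreement
    rate on mismatching pairs minus its rate on matching pairs, then keep
    the n_bits highest-scoring candidates."""
    score = mismatch_diffs.mean(axis=0) - match_diffs.mean(axis=0)
    return np.argsort(score)[::-1][:n_bits]
```

A real booster would also reweight pairs between rounds so later bits fix the mistakes of earlier ones; the greedy score above ignores that interaction.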


Workshop on Applications of Computer Vision | 2016

Accurate and efficient pulse measurement from facial videos on smartphones

Chong Huang; Xin Yang; Kwang-Ting Tim Cheng

Non-contact measurement of cardiac pulse signals has attracted strong interest due to its convenience and cost-effectiveness. However, extracting pulse signals on mobile handheld devices (e.g., smartphones) from face videos captured by mobile cameras usually suffers from low measurement accuracy, due to misalignment errors in face tracking and inevitable illumination changes in a mobile scenario, and from low efficiency, due to a handheld's limited computing power. We propose two techniques to address these limitations: 1) an accurate and efficient face tracking method based on an Active Shape Model (ASM) and the LDB (Local Difference Binary) feature descriptor; 2) an adaptive temporal filtering method which can detect, and in turn denoise, sharp intensity changes in the source trace. Experimental results demonstrate that the proposed solution achieves a speedup of 6.2X and is robust to noise in common mobile scenarios.
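The "detect sharp intensity changes, then denoise them" step can be sketched as outlier repair on the 1-D pulse trace. The MAD-based threshold and linear interpolation below are illustrative choices, not the paper's exact filter.

```python
import numpy as np

def adaptive_denoise(trace, k=3.0):
    """Illustrative stand-in for adaptive temporal filtering: flag samples
    whose first difference is an outlier (more than k median absolute
    deviations above the median step size) and replace them by linear
    interpolation over the surrounding clean samples."""
    trace = np.asarray(trace, dtype=float)
    d = np.abs(np.diff(trace, prepend=trace[0]))      # step sizes
    mad = np.median(np.abs(d - np.median(d))) + 1e-9  # robust spread
    bad = d > np.median(d) + k * mad                  # sharp changes
    good = ~bad
    idx = np.arange(len(trace))
    out = trace.copy()
    out[bad] = np.interp(idx[bad], idx[good], trace[good])
    return out
```

On a smooth physiological trace a sudden illumination step produces one or two flagged samples, which are bridged from their neighbors while the rest of the signal passes through untouched.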


Conference on Multimedia Modeling | 2016

OGB: A Distinctive and Efficient Feature for Mobile Augmented Reality

Xin Yang; Xinggang Wang; Kwang-Ting Tim Cheng

The distinctiveness and efficiency of a feature descriptor used for object recognition and tracking are fundamental to the user experience of a mobile augmented reality (MAR) system. However, existing descriptors are either too computationally expensive to achieve real-time performance on a mobile device, or not sufficiently distinctive to identify correct matches from a large database. As a result, current MAR systems are still limited in both functionality and capability, which greatly restricts their deployment in practice. In this paper, we propose a highly distinctive and efficient binary descriptor, called Oriented Gradients Binary (OGB). OGB captures the major edge/gradient structure, which is an important characteristic of local shapes and appearance. Specifically, OGB computes the distribution of major edge/gradient directions within an image patch. To achieve high efficiency, aggressive down-sampling is applied to the patch to significantly reduce the computational complexity while maintaining the major edge/gradient directions within the patch. Compared to the state-of-the-art binary descriptors, including ORB, BRISK and FREAK, which are primarily designed for speed, OGB has similar construction efficiency, while achieving superior performance for both object recognition and tracking tasks running on a mobile handheld device.
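The core idea, a histogram of gradient directions over an aggressively downsampled patch, can be sketched as follows. The bin count, stride-based downsampling, and magnitude weighting are illustrative assumptions, not the paper's exact construction (which also binarizes the result).

```python
import numpy as np

def ogb_histogram(patch, n_bins=8, down=4):
    """Sketch of OGB's core step: downsample the patch aggressively, then
    build a magnitude-weighted histogram of gradient directions."""
    small = patch[::down, ::down].astype(float)  # naive stride downsampling
    gy, gx = np.gradient(small)                  # axis 0 first, then axis 1
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)  # direction in [0, 2*pi)
    bins = (ang / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (hist.sum() + 1e-9)            # normalize to sum to 1
```

Because only the dominant directions need to survive, the 4× downsampling cuts the gradient computation by roughly 16× while a patch with a single strong edge still concentrates its mass in one bin.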


Medical Image Computing and Computer Assisted Intervention | 2018

A Deep Model with Shape-Preserving Loss for Gland Instance Segmentation

Zengqiang Yan; Xin Yang; Kwang-Ting Tim Cheng

Segmenting gland instances in histology images requires not only separating glands from a complex background but also identifying each gland individually via accurate boundary detection. This is a very challenging task due to substantial background noise, tiny gaps between adjacent glands, and the “coalescence” problem arising from adhesive gland instances. State-of-the-art methods adopt multi-channel/multi-task deep models to separately accomplish pixel-wise gland segmentation and boundary detection, yielding high model complexity and difficulties in training. In this paper, we present a unified deep model with a new shape-preserving loss which facilitates training for both pixel-wise gland segmentation and boundary detection simultaneously. The proposed shape-preserving loss significantly reduces model complexity and makes the training process more controllable. Compared with the current state-of-the-art methods, the proposed deep model with the shape-preserving loss achieves the best overall performance on the 2015 MICCAI Gland Challenge dataset. In addition, the flexibility of integrating the proposed shape-preserving loss into any learning-based medical image segmentation network offers great potential for further performance improvement in other applications.
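One simple way to make a single loss boundary-aware, in the spirit of (but not identical to) the paper's shape-preserving loss, is to up-weight boundary pixels in a pixel-wise cross-entropy. Everything below, the 4-neighbor boundary test and the weight value, is an illustrative assumption.

```python
import numpy as np

def boundary_weight_map(mask, w=5.0):
    """Illustrative stand-in: weight w on pixels lying on either side of a
    gland boundary (a pixel whose label differs from any 4-neighbor),
    weight 1 elsewhere."""
    m = mask.astype(bool)
    b = np.zeros_like(m)
    b[1:, :] |= m[1:, :] ^ m[:-1, :]    # differs from pixel above
    b[:-1, :] |= m[:-1, :] ^ m[1:, :]   # differs from pixel below
    b[:, 1:] |= m[:, 1:] ^ m[:, :-1]    # differs from pixel to the left
    b[:, :-1] |= m[:, :-1] ^ m[:, 1:]   # differs from pixel to the right
    return np.where(b, w, 1.0)

def weighted_bce(pred, target, weights, eps=1e-7):
    """Pixel-wise binary cross-entropy with a per-pixel weight map."""
    p = np.clip(pred, eps, 1 - eps)
    return -(weights * (target * np.log(p)
                        + (1 - target) * np.log(1 - p))).mean()
```

Training against such a weighted loss penalizes boundary mistakes more than interior ones, which pushes the network to keep adjacent glands separated without a second boundary-detection branch.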


Asian Conference on Computer Vision | 2014

Accurate Vessel Segmentation with Progressive Contrast Enhancement and Canny Refinement

Xin Yang; Kwang-Ting Tim Cheng; Aichi Chien

Vessel segmentation is a key step for various medical applications, such as diagnosis assistance, quantification of vascular pathology, and treatment planning. This paper describes an automatic vessel segmentation framework which can achieve highly accurate segmentation even in regions of low contrast and low signal-to-noise ratio (SNR) and at vessel boundaries disturbed by adjacent non-vessel pixels. There are two key contributions in our framework. The first is a progressive contrast enhancement method which adaptively improves the contrast of challenging pixels that were otherwise indistinguishable, and suppresses noise by weighting pixels according to their likelihood of being vessel pixels. The second contribution is a method called Canny refinement, which is based on the Canny edge detection algorithm, to effectively remove false positives around vessel boundaries. Experimental results on a public retinal dataset and our clinical cerebral data demonstrate that our approach outperforms state-of-the-art methods, including the vesselness-based method [1] and the optimally oriented flux (OOF) based method [2].
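The refinement idea, discard boundary pixels of the segmentation that do not sit on a real image edge, can be sketched as follows. A plain gradient-magnitude edge map stands in for a full Canny detector (no non-maximum suppression or hysteresis), and the threshold is an illustrative assumption.

```python
import numpy as np

def edge_refine(seg, image, edge_thresh=0.2):
    """Illustrative stand-in for Canny refinement: keep interior pixels of
    the segmentation as-is, but drop border pixels of the mask whose
    normalized image-gradient magnitude falls below edge_thresh (i.e.
    segmentation boundaries with no supporting image edge)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    mag = mag / (mag.max() + 1e-9)       # normalize to [0, 1]
    m = seg.astype(bool)
    interior = m.copy()                  # pixels with all 4 neighbors in m
    interior[1:, :] &= m[:-1, :]
    interior[:-1, :] &= m[1:, :]
    interior[:, 1:] &= m[:, :-1]
    interior[:, :-1] &= m[:, 1:]
    border = m & ~interior
    keep = m.copy()
    keep[border & (mag < edge_thresh)] = False
    return keep
```

A false-positive blob in a flat region loses its border (and would shrink away under repeated application), while a true vessel boundary coincides with a strong edge and survives.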


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

Local Difference Binary for Ultra-fast and Distinctive Feature Description

Xin Yang; Kwang-Ting Tim Cheng


Medical Image Analysis | 2016

Renal compartment segmentation in DCE-MRI images

Xin Yang; Hung Le Minh; Kwang-Ting Tim Cheng; Kyung Hyun Sung; Wenyu Liu


IEEE Transactions on Medical Imaging | 2018

A Skeletal Similarity Metric for Quality Evaluation of Retinal Vessel Segmentation

Zengqiang Yan; Xin Yang; Kwang-Ting Tim Cheng


International Conference on Robotics and Automation | 2018

ACT: An Autonomous Drone Cinematography System for Action Scenes

Chong Huang; Fei Gao; Jie Pan; Zhenyu Yang; Weihao Qiu; Peng Chen; Xin Yang; Shaojie Shen; Kwang-Ting Tim Cheng

Collaboration


Dive into Kwang-Ting Tim Cheng's collaborations.

Top Co-Authors


Xin Yang

Huazhong University of Science and Technology


Zengqiang Yan

Hong Kong University of Science and Technology


Chong Huang

University of California


Peng Chen

Zhejiang University of Technology


Fei Gao

Hong Kong University of Science and Technology


Jie Pan

Hong Kong University of Science and Technology


Shaojie Shen

Hong Kong University of Science and Technology


Hung Le Minh

Huazhong University of Science and Technology


Wenyu Liu

Huazhong University of Science and Technology


Xinggang Wang

Huazhong University of Science and Technology
