
Publication


Featured research published by Frederick Tung.


Pattern Recognition | 2010

Enabling scalable spectral clustering for image segmentation

Frederick Tung; Alexander Wong; David A. Clausi

Spectral clustering has become an increasingly adopted tool and an active area of research in the machine learning community over the last decade. A common challenge with image segmentation methods based on spectral clustering is scalability, since the computation can become intractable for large images. Down-sizing the image, however, causes a loss of finer detail and can lead to less accurate segmentation results. A combination of blockwise processing and stochastic ensemble consensus is used to address this challenge. Experimental results indicate that this approach preserves detail with higher accuracy than comparable spectral clustering image segmentation methods, without significant additional computational demands.
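The blockwise processing and stochastic ensemble consensus machinery is specific to the paper, but the baseline it scales up can be sketched. Below is a minimal spectral clustering routine (Gaussian affinities, symmetric normalized Laplacian, k-means on the spectral embedding); the dense affinity matrix marked in the comments is exactly the O(n²) scalability bottleneck the paper targets. All function and parameter names here are illustrative, not from the paper.

```python
import numpy as np

def spectral_clustering(points, k, sigma=1.0):
    """Plain spectral clustering: Gaussian affinities, normalized-Laplacian
    eigenvectors, then k-means on the spectral embedding."""
    n = len(points)
    # Dense pairwise affinity matrix -- this O(n^2) construction is the
    # scalability bottleneck that blockwise processing addresses.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    deg = W.sum(axis=1)
    dinv = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
    L = np.eye(n) - dinv[:, None] * W * dinv[None, :]
    # Embed each point via the eigenvectors of the k smallest eigenvalues
    # (np.linalg.eigh returns eigenvalues in ascending order).
    _, vecs = np.linalg.eigh(L)
    emb = vecs[:, :k]
    emb = emb / np.maximum(np.linalg.norm(emb, axis=1, keepdims=True), 1e-12)
    # Deterministic farthest-point initialization, then Lloyd's iterations.
    centers = [emb[0]]
    for _ in range(1, k):
        dist = np.min([((emb - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(emb[dist.argmax()])
    centers = np.array(centers)
    for _ in range(50):
        labels = ((emb[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = emb[labels == j].mean(axis=0)
    return labels
```

On well-separated data this recovers the natural grouping; on a full-resolution image, every pixel is a point, which is why the dense affinity step dominates the cost.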


Image and Vision Computing | 2011

Goal-based trajectory analysis for unusual behaviour detection in intelligent surveillance

Frederick Tung; John S. Zelek; David A. Clausi

In a typical surveillance installation, a human operator has to constantly monitor a large array of video feeds for suspicious behaviour. As the number of cameras increases, information overload makes manual surveillance increasingly difficult, compounding other factors such as human fatigue and boredom. The objective of an intelligent vision-based surveillance system is to automate the monitoring and event detection components of surveillance, alerting the operator only when unusual behaviour or other events of interest are detected. While most traditional methods for trajectory-based unusual behaviour detection rely on low-level trajectory features such as flow vectors or control points, this paper builds upon a recently introduced approach that makes use of higher-level features of intentionality. Individuals in the scene are modelled as intentional agents, and unusual behaviour is detected by evaluating the explicability of the agent's trajectory with respect to known spatial goals. The proposed method extends the original goal-based approach in three ways: first, the spatial scene structure is learned in a training phase; second, a region transition model is learned to describe normal movement patterns between spatial regions; and third, classification of trajectories in progress is performed in a probabilistic framework using particle filtering. Experimental validation on three published third-party datasets demonstrates the validity of the proposed approach.
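The particle-filtering classification of an in-progress trajectory can be illustrated with a deliberately simplified model: each particle hypothesizes a spatial goal, is re-weighted by how well the observed displacement heads toward that goal, and the set is resampled. This is an assumption-laden sketch, not the paper's model; the angular likelihood and all names are invented for illustration.

```python
import numpy as np

def update_goal_particles(goals, particles, weights, pos, step, kappa=4.0):
    """One filtering update over goal hypotheses. Each particle carries a
    goal index; its weight is scaled by exp(kappa * cos(theta)), where theta
    is the angle between the observed displacement `step` and the direction
    from `pos` to the hypothesized goal, and the set is then resampled."""
    rng = np.random.default_rng(0)          # fixed seed for repeatability
    w = weights.astype(float).copy()
    for i, g in enumerate(particles):
        to_goal = goals[g] - pos
        cos = float(to_goal @ step) / (
            np.linalg.norm(to_goal) * np.linalg.norm(step) + 1e-12)
        w[i] *= np.exp(kappa * cos)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    # After resampling, weights reset to uniform; the posterior over goals
    # is read off as the fraction of particles holding each goal index.
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

A trajectory whose steps are inexplicable with respect to every known goal would leave all hypotheses weakly weighted, which is the intuition behind flagging it as unusual.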


Computer Vision and Image Understanding | 2016

Scene parsing by nonparametric label transfer of content-adaptive windows

Frederick Tung; James J. Little

Highlights: CollageParsing is a scene parsing algorithm that matches content-adaptive windows. Unlike superpixels, content-adaptive windows are designed to preserve objects. A powerful MRF unary is constructed by performing label transfer using the windows. Gains of 15-19% average per-class accuracy are obtained on a standard benchmark.

Scene parsing is the task of labeling every pixel in an image with its semantic category. We present CollageParsing, a nonparametric scene parsing algorithm that performs label transfer by matching content-adaptive windows. Content-adaptive windows provide a higher level of perceptual organization than superpixels, and unlike superpixels are designed to preserve entire objects instead of fragmenting them. Performing label transfer using content-adaptive windows enables the construction of a more effective Markov random field unary potential than previous approaches. On a standard benchmark consisting of outdoor scenes from the LabelMe database, CollageParsing obtains state-of-the-art performance with 15-19% higher average per-class accuracy than recent nonparametric scene parsing algorithms.


IEEE Transactions on Image Processing | 2013

A Decoupled Approach to Illumination-Robust Optical Flow Estimation

Abhishek Kumar; Frederick Tung; Alexander Wong; David A. Clausi

Despite continuous improvements in optical flow over the last three decades, the ability of optical flow algorithms to handle illumination variation remains an unsolved challenge. To improve the interpretation of apparent object motion in video containing illumination variation, an illumination-robust optical flow method is designed. This method decouples brightness into reflectance and illumination components using a stochastic technique; reflectance is given higher weight to ensure robustness against illumination, which is suppressed. Illumination experiments using the Middlebury and University of Oulu databases demonstrate the decoupled method's improvement over the state of the art. In addition, a novel technique is implemented to visualize optical flow output, which is especially useful for comparing different optical flow methods in the absence of ground truth.


Canadian Conference on Computer and Robot Vision | 2009

Efficient Target Recovery Using STAGE for Mean-shift Tracking

Frederick Tung; John S. Zelek; David A. Clausi

Robust visual tracking is a challenging problem, especially when a target undergoes complete occlusion or leaves and later re-enters the camera view. The mean-shift tracker is an efficient appearance-based tracking algorithm that has become very popular in recent years. Many researchers have developed extensions to the algorithm that improve the appearance model used in target localization. We approach the problem from a slightly different angle and seek to improve the robustness of the mean-shift tracker by integrating an efficient failure recovery mechanism. The proposed method uses a novel application of the STAGE algorithm to efficiently recover a target in the event of tracking failure. The STAGE algorithm boosts the performance of a local search algorithm by iteratively learning an evaluation function to predict good states for initiating searches. STAGE can be viewed as a random-restart algorithm that chooses promising restart states based on the shape of the state space, as estimated using the search trajectories from previous iterations. In the proposed method, an adapted version of STAGE is applied to the mean-shift target localization algorithm (Bhattacharyya coefficient maximization using the mean-shift procedure) to efficiently recover the lost target. Experiments indicate that the proposed method is viable as a technique for recovering from failure caused by complete occlusion or departure from the camera view.
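The target localization step named above, Bhattacharyya coefficient maximization via the mean-shift procedure, can be sketched in its standard form. This is a generic rendition of mean-shift tracker mechanics, not the paper's STAGE recovery mechanism; the function names are illustrative.

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """rho(p, q) = sum_u sqrt(p_u * q_u) for normalized histograms;
    1.0 for identical distributions, 0.0 for disjoint support."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sqrt(p * q).sum())

def mean_shift_step(positions, bins, p_candidate, q_target):
    """One localization step: each pixel is weighted by sqrt(q_u / p_u)
    for its histogram bin u, and the window centre moves to the weighted
    mean of the pixel positions -- the update that climbs rho."""
    w = np.sqrt(q_target[bins] / np.maximum(p_candidate[bins], 1e-12))
    return (positions * w[:, None]).sum(axis=0) / w.sum()
```

Iterating `mean_shift_step` from the previous frame's window location is the local search whose failure (e.g. after full occlusion) the proposed STAGE-based mechanism is designed to recover from.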


British Machine Vision Conference | 2016

Exploiting Random RGB and Sparse Features for Camera Pose Estimation

Lili Meng; Jianhui Chen; Frederick Tung; James J. Little; Clarence W. de Silva

We address the problem of estimating camera pose relative to a known scene, given a single RGB image. We extend recent advances in scene coordinate regression forests for camera relocalization in RGB-D images to use RGB features, enabling camera relocalization from a single RGB image. Furthermore, we integrate random RGB features and sparse feature matching in an efficient and accurate way, broadening the method for fast sports camera calibration in highly dynamic scenes. We evaluate our method on both static small-scale and dynamic large-scale datasets with challenging camera poses. The proposed method is compared with several strong baselines. Experimental results demonstrate the efficacy of our approach, showing superior or on-par performance with the state of the art.


International Conference on Robotics and Automation | 2017

MF3D: Model-free 3D semantic scene parsing

Frederick Tung; James J. Little

We present a novel model-free method for online 3D semantic scene parsing from video sequences. MF3D (Model-Free 3D) is different from conventional methods for 3D scene parsing in that voxel labelling is approached via search-based label transfer instead of discriminative classification. This non-parametric approach makes MF3D easy to scale with an online growth in the database, as no model re-training is required with the addition of new examples or categories. Experimental results on the KITTI benchmark demonstrate that our model-free approach enables accurate online 3D scene parsing while retaining extensibility to new categories. In addition, we show that unsupervised binary encoding (hashing) techniques can be easily incorporated into our framework for scalability to larger databases.


International Conference on Robotics and Automation | 2017

The Raincouver Scene Parsing Benchmark for Self-Driving in Adverse Weather and at Night

Frederick Tung; Jianhui Chen; Lili Meng; James J. Little

Self-driving vehicles have the potential to transform the way we travel. Their development is at a pivotal point, as a growing number of industrial and academic research organizations are bringing these technologies into controlled but real-world settings. An essential capability of a self-driving vehicle is environment understanding: Where are the pedestrians, the other vehicles, and the drivable space? In computer and robot vision, the task of identifying semantic categories at a per-pixel level is known as scene parsing or semantic segmentation. While much progress has been made in scene parsing in recent years, current datasets for training and benchmarking scene parsing algorithms focus on nominal driving conditions: fair weather and mostly daytime lighting. To complement the standard benchmarks, we introduce the Raincouver scene parsing benchmark, which to our knowledge is the first scene parsing benchmark to focus on challenging rainy driving conditions, during the day, at dusk, and at night. Our dataset comprises half an hour of driving video captured on the roads of Vancouver, Canada, and 326 frames with hand-annotated pixelwise semantic labels.


Workshop on Applications of Computer Vision | 2015

Bank of Quantization Models: A Data-Specific Approach to Learning Binary Codes for Large-Scale Retrieval Applications

Frederick Tung; Julieta Martinez; Holger H. Hoos; James J. Little

We explore a novel paradigm in learning binary codes for large-scale image retrieval applications. Instead of learning a single globally optimal quantization model as in previous approaches, we encode the database points in a data-specific manner using a bank of quantization models. Each individual database point selects the quantization model that minimizes its individual quantization error. We apply the idea of a bank of quantization models to data-independent and data-driven hashing methods for learning binary codes, obtaining state-of-the-art performance on three benchmark datasets.
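The encoding rule described here, each database point choosing the quantizer that minimizes its own error, can be sketched for plain vector quantizers. The paper applies the idea to hashing methods; this hypothetical minimal version uses k-means-style codebooks purely for illustration.

```python
import numpy as np

def encode_with_bank(x, codebooks):
    """Encode a point against a bank of vector quantizers: every codebook
    proposes its nearest codeword, and the (model, codeword) pair with the
    lowest quantization error wins. Returns (model_id, codeword_id, error)."""
    best = None
    for m, C in enumerate(codebooks):        # C has shape (codewords, dim)
        d = ((C - x) ** 2).sum(axis=1)       # squared error to each codeword
        j = int(d.argmin())
        if best is None or d[j] < best[2]:
            best = (m, j, float(d[j]))
    return best
```

The stored code is then the pair (model id, codeword id), so points in different regions of the space can benefit from differently tuned quantization models rather than a single global one.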


International Conference on Multimedia and Expo | 2013

Polynomial self-similarity for object classification

Frederick Tung; Alexander Wong

Objects in an image may be semantically similar not because they share common photometric properties, but because they share common recurring patterns of internal self-similarities. In this paper, a polynomial self-similarity approach for object classification is proposed. Extending the global self-similarity framework, polynomial self-similarity enables greater flexibility in matching details with similar structure but intensity differences, and details under different ambient illumination. Experiments show that the proposed approach provides classification accuracy that is competitive with standard global self-similarity, even under challenging non-uniform illumination conditions.

Collaboration


Dive into Frederick Tung's collaborations.

Top Co-Authors

James J. Little - University of British Columbia
Lili Meng - University of British Columbia
Clarence W. de Silva - University of British Columbia
Greg Mori - Simon Fraser University
Jianhui Chen - University of British Columbia
Bo Chang - University of British Columbia