Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jingchen Liu is active.

Publication


Featured research published by Jingchen Liu.


Computer Vision and Pattern Recognition | 2013

Tracking Sports Players with Context-Conditioned Motion Models

Jingchen Liu; Peter W. Carr; Robert T. Collins; Yanxi Liu

We employ hierarchical data association to track players in team sports. Player movements are often complex and highly correlated with both nearby and distant players. A single model would require many degrees of freedom to represent the full motion diversity and could be difficult to use in practice. Instead, we introduce a set of Game Context Features extracted from noisy detections to describe the current state of the match, such as how the players are spatially distributed. Our assumption is that players react to the current situation in only a finite number of ways. As a result, we are able to select an appropriate simplified affinity model for each player and time instant using a random decision forest based on current track and game context features. Our context-conditioned motion models implicitly incorporate complex inter-object correlations while remaining tractable. We demonstrate significant performance improvements over existing multi-target tracking algorithms on basketball and field hockey sequences several minutes in duration and containing 10 and 20 players respectively.
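
As a rough illustration of the model-selection idea in this abstract, the sketch below uses a random forest to map per-player track and game-context features to one of a few simplified motion models. The specific features, the three toy motion models, and the random training data are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical sketch: choosing a simplified motion model per player and frame
# with a random decision forest, as the abstract describes at a high level.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: track + game-context features for one player at one time instant
# (e.g. distance to team centroid, local player density, recent speed).
# These particular features and the random data are illustrative assumptions.
X_train = np.random.rand(500, 6)
# Labels: index of the simplified affinity/motion model that best explained the
# player's next position in training data (0 = constant velocity, 1 = regroup,
# 2 = chase the ball). Also illustrative.
y_train = np.random.randint(0, 3, size=500)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

# Example simplified motion models (placeholders).
models = {
    0: lambda s: s["pos"] + s["vel"],                                # constant velocity
    1: lambda s: s["pos"] + 0.5 * (s["team_centroid"] - s["pos"]),   # regroup with team
    2: lambda s: s["pos"] + 0.5 * (s["ball"] - s["pos"]),            # chase the ball
}

def predict_position(track_state, context_features, models):
    """Pick a motion model for this player/instant and apply it."""
    model_id = forest.predict(context_features.reshape(1, -1))[0]
    return models[model_id](track_state)

state = {"pos": np.array([10.0, 5.0]), "vel": np.array([0.3, 0.0]),
         "team_centroid": np.array([12.0, 6.0]), "ball": np.array([15.0, 4.0])}
print(predict_position(state, np.random.rand(6), models))
```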


Computer Vision and Pattern Recognition | 2013

Symmetry Detection from Real World Images Competition 2013: Summary and Results

Jingchen Liu; George M. Slota; Gang Zheng; Zhaohui Wu; Minwoo Park; Seungkyu Lee; Ingmar Rauschert; Yanxi Liu

Symmetry is a pervasive phenomenon presenting itself in all forms and scales in natural and manmade environments. Its detection plays an essential role at all levels of human as well as machine perception. The recent resurging interest in computational symmetry for computer vision and computer graphics applications has motivated us to conduct a US NSF funded symmetry detection algorithm competition as a workshop affiliated with the Computer Vision and Pattern Recognition (CVPR) Conference, 2013. This competition sets a more complete benchmark for computer vision symmetry detection algorithms. In this report we explain the evaluation metric and the automatic execution of the evaluation workflow. We also present and analyze the algorithms submitted, and show their results on three test sets of real world images depicting reflection, rotation and translation symmetries respectively. This competition establishes a performance baseline for future work on symmetry detection.


Computer Vision and Pattern Recognition | 2013

GRASP Recurring Patterns from a Single View

Jingchen Liu; Yanxi Liu

We propose a novel unsupervised method for discovering recurring patterns from a single view. A key contribution of our approach is the formulation and validation of a joint assignment optimization problem where multiple visual words and object instances of a potential recurring pattern are considered simultaneously. The optimization is achieved by a greedy randomized adaptive search procedure (GRASP) with moves specifically designed for fast convergence. We have systematically quantified the performance of our approach under stressed input conditions (missing features, geometric distortions). We demonstrate that our proposed algorithm outperforms state-of-the-art methods for recurring pattern discovery on a diverse set of 400+ real-world and synthesized test images.
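
For readers unfamiliar with GRASP, the sketch below shows the generic greedy-randomized-construction plus local-search loop on a toy subset-selection objective. It is only a minimal skeleton under that assumption; the paper's joint assignment of visual words and object instances uses problem-specific moves and scoring not reproduced here.

```python
# Minimal generic GRASP skeleton: repeated greedy randomized construction
# followed by a simple swap-based local search, on a toy objective.
import random

def grasp(candidates, score, k, iters=50, alpha=0.3, seed=0):
    rng = random.Random(seed)
    best, best_val = None, float("-inf")
    for _ in range(iters):
        # Greedy randomized construction: pick from a restricted candidate
        # list (RCL) of the alpha-fraction best remaining elements.
        solution, remaining = [], list(candidates)
        while len(solution) < k and remaining:
            remaining.sort(key=lambda c: score(solution + [c]), reverse=True)
            rcl = remaining[: max(1, int(alpha * len(remaining)))]
            pick = rng.choice(rcl)
            solution.append(pick)
            remaining.remove(pick)
        # Local search: try swapping one chosen element for an unchosen one.
        improved = True
        while improved:
            improved = False
            for i in range(len(solution)):
                for c in candidates:
                    if c in solution:
                        continue
                    trial = solution[:i] + [c] + solution[i + 1:]
                    if score(trial) > score(solution):
                        solution, improved = trial, True
                        break
                if improved:
                    break
        val = score(solution)
        if val > best_val:
            best, best_val = solution, val
    return best, best_val

# Toy usage: pick the k numbers with the largest sum (trivial, for illustration only).
cands = list(range(20))
print(grasp(cands, score=lambda sol: sum(sol), k=5))
```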


Workshop on Applications of Computer Vision | 2013

Robust autocalibration for a surveillance camera network

Jingchen Liu; Robert T. Collins; Yanxi Liu

We propose a novel approach for multi-camera autocalibration by observing multiview surveillance video of pedestrians walking through the scene. Unlike existing methods, we do not require tracking or explicit correspondences of the same person across time/views. Instead, we take noisy foreground blobs as the only input and rely on a joint optimization framework with robust statistics to achieve accurate calibration under challenging scenarios. First, each individual camera is roughly calibrated into its local World Coordinate System (lWCS) based on analysis of the relative 3D pedestrian height distribution. Then, all lWCSs are iteratively registered with respect to a shared global World Coordinate System (gWCS) by incorporating robust matching with a partial Direct Linear Transform (pDLT). As demonstrated by extensive evaluation, our algorithm achieves satisfactory results in various camera settings, up to moderate crowd densities and a large proportion of foreground outliers.
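
The registration step relies on a partial Direct Linear Transform; as a point of reference, the sketch below shows only the textbook DLT for estimating a 3x4 projection matrix from noiseless 3D-2D correspondences. The partial DLT and the robust matching described in the abstract are variants not reproduced here, and the toy camera and points are invented for the check.

```python
# Textbook DLT sketch: recover a 3x4 projection matrix (up to scale) from
# 3D-2D point correspondences via the SVD null vector of the design matrix.
import numpy as np

def dlt_projection(X_world, x_img):
    """Estimate P (3x4) with x ~ P X from >= 6 point correspondences."""
    A = []
    for (X, Y, Z), (u, v) in zip(X_world, x_img):
        Xh = [X, Y, Z, 1.0]
        A.append([0.0] * 4 + [-w for w in Xh] + [v * w for w in Xh])
        A.append(Xh + [0.0] * 4 + [-u * w for w in Xh])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)   # null vector = smallest right singular vector

# Toy check: recover a known (invented) camera up to scale from clean data.
rng = np.random.default_rng(0)
P_true = np.array([[800.0, 0.0, 320.0, 10.0],
                   [0.0, 800.0, 240.0, 20.0],
                   [0.0, 0.0, 1.0, 5.0]])
Xw = rng.uniform(-1.0, 1.0, size=(12, 3))
xh = np.c_[Xw, np.ones(12)] @ P_true.T
xi = xh[:, :2] / xh[:, 2:3]
P_est = dlt_projection(Xw, xi)
P_est /= np.linalg.norm(P_est)
P_ref = P_true / np.linalg.norm(P_true)
print(np.allclose(P_est, P_ref, atol=1e-6) or np.allclose(P_est, -P_ref, atol=1e-6))
```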


Computer Vision and Pattern Recognition | 2014

Local Regularity-Driven City-Scale Facade Detection from Aerial Images

Jingchen Liu; Yanxi Liu

We propose a novel regularity-driven framework for facade detection from aerial images of urban scenes. The Gini index is used in our work to form an edge-based regularity metric relating regularity and distribution sparsity. Facade regions are chosen so that these local regularities are maximized. We apply a greedy adaptive region expansion procedure for facade region detection and growing, followed by integer quadratic programming for removing overlapping facades to optimize facade coverage. Our algorithm can handle images that have wide viewing angles and contain more than 200 facades per image. Experimental results on images from three different cities (NYC, Rome, San Francisco) demonstrate superior performance on facade detection in both accuracy and speed over state-of-the-art methods. We also show an application of our facade detection to effective cross-view facade matching.
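
A minimal sketch of the kind of sparsity measure the abstract mentions: the Gini index of an edge-orientation histogram is near zero for a flat distribution and approaches one when a few bins dominate. The 36-bin histograms below are illustrative assumptions, not the paper's actual features.

```python
# Illustrative Gini-index computation on an edge-orientation histogram;
# higher values indicate a sparser (more "regular") distribution.
import numpy as np

def gini(values):
    """Gini coefficient of a non-negative 1-D array (0 = uniform, -> 1 = sparse)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    if v.sum() == 0:
        return 0.0
    cum = np.cumsum(v)
    # Standard formula on sorted values: G = (n + 1 - 2 * sum(cum) / cum[-1]) / n
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

# A histogram concentrated in a few orientation bins (e.g. a facade with strong
# horizontal/vertical edges) scores much higher than a flat one.
flat = np.ones(36)
peaky = np.zeros(36)
peaky[[0, 18]] = 1.0
print(gini(flat), gini(peaky))   # ~0.0 vs ~0.94
```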


Computer Vision and Pattern Recognition | 2010

Multi-target tracking of time-varying spatial patterns

Jingchen Liu; Yanxi Liu

Time-varying spatial patterns are common, but few computational tools exist for discovering and tracking multiple, sometimes overlapping, spatial structures of targets. We propose a multi-target tracking framework that takes advantage of spatial patterns inside the targets even though the number, the form and the regularity of such patterns vary with time. RANSAC-based model fitting algorithms are developed to automatically recognize (or dismiss) (il)legitimate patterns. Patterns are represented using a mixture of Markov Random Fields (MRF) with constraints (local and global) and preferences encoded into pairwise potential functions. To handle pattern variations continuously, we introduce a posterior probability for each spatial pattern modeled as a Bernoulli distribution. Tracking is achieved by inferring the optimal state configurations of the targets using belief propagation on a mixture of MRFs. We have evaluated our formulation on real video data with multiple targets containing time-varying lattice patterns and/or reflection symmetry patterns. Experimental results of our proposed algorithm show superior tracking performance over existing methods.
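
The abstract mentions RANSAC-based model fitting for recognizing or dismissing candidate patterns; below is a minimal RANSAC sketch in that spirit, fitting a 2-D line to target positions and accepting the pattern only when enough targets are inliers. The line model, thresholds, and toy data are assumptions for illustration, not the paper's lattice or symmetry models.

```python
# Minimal RANSAC sketch: fit a simple geometric model (a 2-D line) to target
# positions and accept the pattern only if it has sufficient inlier support.
import numpy as np

def ransac_line(points, iters=200, inlier_tol=0.1, min_support=0.6, seed=0):
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, q = pts[i], pts[j]
        d = q - p
        norm = np.hypot(*d)
        if norm < 1e-9:
            continue
        n = np.array([-d[1], d[0]]) / norm     # unit normal of the line p-q
        dist = np.abs((pts - p) @ n)           # point-to-line distances
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    accepted = best_inliers.mean() >= min_support   # dismiss weakly supported fits
    return accepted, best_inliers

# Toy usage: 8 targets roughly on a line plus 2 stragglers.
rng = np.random.default_rng(1)
pts = [(float(t), 0.5 * t + rng.normal(0, 0.01)) for t in range(8)]
pts += [(1.0, 3.0), (2.0, -1.0)]
print(ransac_line(pts))
```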


ACM Multimedia | 2015

Dancing with Turks

I-Kao Chiang; Ian Spiro; Seungkyu Lee; Alyssa Lees; Jingchen Liu; Chris Bregler; Yanxi Liu

Dance is a dynamic art form that reflects a wide range of cultural diversity and individuality. With the advancement of motion-capture technology combined with crowd-sourcing and machine learning algorithms, we explore the complex relationship between perceived dance quality/dancers' gender and dance movements/music, respectively. As a feasibility study, we construct a computational framework for an analysis-synthesis-feedback loop using a novel multimedia dance-music texture representation. Furthermore, we integrate crowd-sourcing, music and motion-capture data, and machine learning-based methods for dance segmentation, analysis and synthesis of new dancers. A quantitative validation of this framework on a motion-capture dataset of 172 dancers, evaluated by more than 400 independent online raters, demonstrates significant correlation between human perception and the algorithmically intended dance quality or gender of synthesized dancers. The technology illustrated in this work has high potential to advance the multimedia entertainment industry via dancing with Turks.


European Conference on Computer Vision | 2012

Local expert forest of score fusion for video event classification

Jingchen Liu; Scott McCloskey; Yanxi Liu


Asian Conference on Computer Vision | 2010

Curved reflection symmetry detection with self-validation

Jingchen Liu; Yanxi Liu


British Machine Vision Conference | 2011

Automatic Surveillance Camera Calibration without Pedestrian Tracking

Jingchen Liu; Robert T. Collins; Yanxi Liu

Collaboration


Dive into Jingchen Liu's collaborations.

Top Co-Authors


Yanxi Liu

Pennsylvania State University


Robert T. Collins

Pennsylvania State University


Byungki Byun

Georgia Institute of Technology


Chin-Hui Lee

Georgia Institute of Technology


Gang Zheng

Pennsylvania State University
