Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Chunyuan Liao is active.

Publication


Featured research published by Chunyuan Liao.


Human Factors in Computing Systems | 2010

Pacer: fine-grained interactive paper via camera-touch hybrid gestures on a cell phone

Chunyuan Liao; Qiong Liu; Bee Liew; Lynn Wilcox

PACER is a gesture-based interactive paper system that supports fine-grained manipulation of paper document content through the touch screen of a camera phone. Using the phone's camera, PACER links a paper document to its digital version based on visual features. It adopts camera-based phone motion detection for embodied gestures (e.g. marquees, underlines, and lassos), with which users can flexibly select and interact with document details (e.g. individual words, symbols, and pixels). Touch input is incorporated to facilitate target selection at fine granularity and to address some limitations of embodied interaction, such as hand jitter and a low input sampling rate. This hybrid interaction is coupled with other techniques, such as semi-real-time document tracking and loose physical-digital document registration, to offer a gesture-based command system. We demonstrate the use of PACER in various scenarios including work-related reading, maps, and music score playing. A preliminary user study of the design produced encouraging user feedback and suggested future research toward better understanding of embodied vs. touch interaction and one- vs. two-handed interaction.


International Conference on Multimedia Retrieval | 2011

Large-scale EMM identification based on geometry-constrained visual word correspondence voting

Xin Yang; Qiong Liu; Chunyuan Liao; Kwang-Ting Cheng; Andreas Girgensohn

We present a large-scale Embedded Media Marker (EMM) identification system which allows users to retrieve relevant dynamic media associated with a static paper document via camera phones. The user supplies a query image by capturing an EMM-signified patch of a paper document through a camera phone. The system recognizes the query and in turn retrieves and plays the corresponding media on the phone. Accurate image matching is crucial for a positive user experience in this application. To address the challenges posed by large datasets and variation in camera-phone-captured query images, we introduce a novel image matching scheme based on geometrically consistent correspondences. A hierarchical scheme, combined with two constraining methods, is designed to detect geometrically constrained correspondences between images. A spatial neighborhood search approach is further proposed to address challenging cases of query images with a large translational shift. Experimental results on a 200k+ dataset show that our solution achieves high accuracy with low memory and time complexity, and outperforms the baseline bag-of-words approach.
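The geometric consistency idea in the abstract above can be illustrated with a small sketch. This is a hypothetical simplification, not the paper's implementation: the function name, input format, and binning parameters are all assumptions. Each putative visual-word match votes for a relative scale/rotation bin derived from its local feature geometry, and only matches in the dominant bin are kept as geometrically consistent.

```python
# Sketch: geometry-consistent correspondence filtering via Hough-style voting.
# Each match votes for a (relative log-scale, relative rotation) bin; matches
# in the dominant bin are treated as geometrically consistent.
import math
from collections import Counter

def filter_matches(matches, scale_bins=8, angle_bins=12):
    """matches: list of (scale_q, angle_q, scale_d, angle_d) tuples giving the
    local feature scale and orientation in the query and database images.
    Returns indices of matches that agree with the dominant transform bin."""
    votes = Counter()
    keys = []
    for (sq, aq, sd, ad) in matches:
        ds = math.log2(sd / sq)             # relative log-scale of the match
        da = (ad - aq) % (2 * math.pi)      # relative rotation of the match
        key = (round(ds * scale_bins), int(da / (2 * math.pi) * angle_bins))
        keys.append(key)
        votes[key] += 1
    best, _ = votes.most_common(1)[0]
    return [i for i, k in enumerate(keys) if k == best]
```

In this toy version, three matches sharing a consistent scale ratio and rotation outvote a lone outlier; a real system would follow the voting stage with a full geometric verification of the surviving correspondences.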


ACM Multimedia | 2011

A tool for authoring unambiguous links from printed content to digital media

Andreas Girgensohn; Frank M. Shipman; Lynn Wilcox; Qiong Liu; Chunyuan Liao; Yuichi Oneda

Embedded Media Markers (EMMs) are nearly transparent icons printed on paper documents that link to associated digital media. By using the document content for retrieval, EMMs are less visually intrusive than barcodes and other glyphs while still providing an indication of the presence of links. An initial implementation demonstrated good overall performance but exposed difficulties in guaranteeing the creation of unambiguous EMMs. We developed an EMM authoring tool that supports the interactive authoring of EMMs via visualizations that show the user which areas on a page may cause recognition errors, and via automatic feedback that moves the authored EMM away from those areas. The authoring tool and the techniques it relies on have been applied to corpora with different visual characteristics to explore the generality of our approach.


Virtual Reality Continuum and Its Applications in Industry | 2011

Minimum correspondence sets for improving large-scale augmented paper

Xin Yang; Chunyuan Liao; Qiong Liu; Kwang-Ting Cheng

Augmented Paper (AP) is an important area of Augmented Reality (AR). Many AP systems rely on visual features for paper document identification. Although promising, these systems can hardly support large document sets (e.g. one million documents) because of the high memory and time cost of handling high-dimensional features. On the other hand, general large-scale image identification techniques are not well customized to AP, costing unnecessarily more resources to achieve the identification accuracy required by AP. To address this mismatch between AP and image identification techniques, we propose a novel large-scale image identification technique well geared to AP. At its core is a geometric verification scheme based on Minimum visual-word Correspondence Sets (MICSs). A MICS is a set of visual word (i.e. quantized visual feature) correspondences containing the minimum number of correspondences sufficient for deriving a transformation hypothesis between a captured document image and an indexed image. Our method selects appropriate MICSs to vote in a Hough space of transformation parameters, and uses a robust dense region detection algorithm to locate the possible transformation models in that space. The models are then used to verify all the visual word correspondences and precisely identify the matching indexed image. By taking advantage of geometric constraints unique to AP, our method significantly reduces time and memory cost while achieving high accuracy. As shown in an evaluation with two AP systems, FACT and EMM, over a dataset with 1M+ images, our method achieves 100% identification accuracy and 0.67% registration error for FACT; for EMM, our method outperforms the state-of-the-art image identification approach with a 4% improvement in detection rate and almost perfect precision, while saving 40% of the memory and 70% of the time cost.
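The minimal-correspondence-set voting described above can be sketched as follows. This is a hypothetical illustration under assumed inputs and bin sizes, not the paper's actual algorithm: here a pair of point correspondences serves as a minimal set that fully determines a 2D similarity transform (scale, rotation, translation), and each pair votes in a coarsely binned transform space; the densest bin gives the dominant transform hypothesis.

```python
# Sketch: minimal-set voting in a Hough space of similarity-transform
# parameters. Two point correspondences determine a 2D similarity transform,
# so every pair casts one vote; the densest bin is the winning hypothesis.
import cmath
from collections import Counter
from itertools import combinations

def vote_similarity(corr, scale_step=0.25, angle_step=0.5, t_step=20.0):
    """corr: list of ((xq, yq), (xd, yd)) point correspondences between a
    query image and an indexed image. Returns the most-voted transform bin
    as (scale_bin, angle_bin, tx_bin, ty_bin)."""
    votes = Counter()
    for (q1, d1), (q2, d2) in combinations(corr, 2):
        zq = complex(*q2) - complex(*q1)
        zd = complex(*d2) - complex(*d1)
        if zq == 0:
            continue
        m = zd / zq                           # complex ratio: scale + rotation
        s, a = abs(m), cmath.phase(m)
        t = complex(*d1) - m * complex(*q1)   # translation component
        votes[(round(s / scale_step), round(a / angle_step),
               round(t.real / t_step), round(t.imag / t_step))] += 1
    return votes.most_common(1)[0][0]
```

A full pipeline, as the abstract describes, would then verify every visual-word correspondence against the dominant transform model rather than stopping at the vote.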


ACM Multimedia | 2012

MixPad: augmenting interactive paper with mice & keyboards for cross-media and fine-grained interaction with documents

Xin Yang; Chunyuan Liao; Qiong Liu

Existing interactive paper systems suffer from disparate input devices for paper and computers. Finger- or pen-only input on paper causes frequent device switching (e.g. pen vs. mouse) during cross-media interactions, and may suffer from occlusion and poor precision. We propose MixPad, a novel interactive paper system that allows users to employ mice and keyboards to digitally manipulate fine-grained document content on paper, such as copying an arbitrary image region to a computer or clicking on a word for a web search. With the combined input channels, MixPad enables richer digital functions on paper and facilitates bimanual operations across different media. A preliminary user study shows positive feedback on this interaction technique.


Archive | 2004

Integrated system for providing shared interactive environment, computer data signal, program, system, method for exchanging information in shared interactive environment, and method for annotating live video image

Patrick Chiu; Jonathan T. Foote; Donald G. Kimber; Chunyuan Liao; Qiong Liu; Lynn D. Wilcox


Archive | 2012

Image projection device, image projection control device, and program

Jochen Huber; Chunyuan Liao; Qiong Liu


Archive | 2011

Display apparatus and computer program for folding a document page object

Francine R. Chen; Patrick Chiu; Chunyuan Liao


Archive | 2005

System and method for authoring media presentation, interface device and integrated system

Patrick Chiu; Donald G. Kimber; Surapong Lertsithichai; Chunyuan Liao; Qiong Liu; Hangjin Zhang


Archive | 2012

System, method, and program for generating gesture-based interactive hotspots in a real-world environment

Eleanor Rieffel; Donald G. Kimber; Chunyuan Liao; Qiong Liu

Collaboration


Dive into Chunyuan Liao's collaboration.

Top Co-Authors

Qiong Liu (FX Palo Alto Laboratory)
Xin Yang (Huazhong University of Science and Technology)
Kwang-Ting Cheng (Hong Kong University of Science and Technology)
Bee Liew (FX Palo Alto Laboratory)