I-Chao Shen
Academia Sinica
Publication
Featured research published by I-Chao Shen.
international conference on computer graphics and interactive techniques | 2016
Li Yi; Vladimir G. Kim; Duygu Ceylan; I-Chao Shen; Mengyan Yan; Hao Su; Cewu Lu; Qixing Huang; Alla Sheffer; Leonidas J. Guibas
Large repositories of 3D shapes provide valuable input for data-driven analysis and modeling tools. They are especially powerful once annotated with semantic information such as salient regions and functional parts. We propose a novel active learning method capable of enriching massive geometric datasets with accurate semantic region annotations. Given a shape collection and a user-specified region label, our goal is to correctly demarcate the corresponding regions with minimal manual work. Our active framework achieves this goal by cycling between manually annotating the regions, automatically propagating these annotations across the rest of the shapes, manually verifying both human and automatic annotations, and learning from the verification results to improve the automatic propagation algorithm. We use a unified utility function that explicitly models the time cost of human input across all steps of our method. This allows us to jointly optimize for the set of models to annotate and for the set of models to verify, based on the predicted impact of these actions on human efficiency. We demonstrate that incorporating verification of all produced labelings within this unified objective improves both the accuracy and the efficiency of the active learning procedure. We automatically propagate human labels across a dynamic shape network using a conditional random field (CRF) framework, taking advantage of global shape-to-shape similarities, local feature similarities, and point-to-point correspondences. By combining these diverse cues we achieve higher accuracy than existing alternatives. We validate our framework on existing benchmarks, demonstrating it to be significantly more efficient at using human input than previous techniques.
We further validate its efficiency and robustness by annotating a massive shape dataset, labeling over 93,000 shape parts, across multiple model classes, and providing a labeled part collection more than one order of magnitude larger than existing ones.
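The annotate–propagate–verify cycle described above can be sketched as a simple active-learning loop. This is a minimal illustration, not the authors' system: the helper names (`propagate_labels`, `active_annotation_loop`), the nearest-neighbor propagation standing in for the CRF, and the assumed time costs (1 unit per manual annotation, 0.2 per verification) are all hypothetical.

```python
def propagate_labels(labeled, unlabeled, similarity):
    """Stand-in for the paper's CRF propagation: assign each unlabeled
    shape the label of its most similar already-labeled shape."""
    return {shape: labeled[max(labeled, key=lambda s: similarity(s, shape))]
            for shape in unlabeled}

def active_annotation_loop(shapes, similarity, oracle, budget):
    """Cycle between manual annotation, automatic propagation, and manual
    verification until the human-time budget is spent.  Annotation is
    assumed to cost 1 time unit, verification 0.2 units."""
    labeled = {}                     # shape -> verified label
    remaining = set(shapes)
    while remaining and budget >= 1:
        shape = remaining.pop()      # 1. manually annotate one shape
        labeled[shape] = oracle(shape)
        budget -= 1
        if not remaining:
            break
        guesses = propagate_labels(labeled, remaining, similarity)  # 2. propagate
        for s, guess in guesses.items():                            # 3. verify
            if budget < 0.2:
                break
            budget -= 0.2
            if guess == oracle(s):   # keep only verified-correct predictions
                labeled[s] = guess
                remaining.discard(s)
    return labeled
```

Because verification only accepts correct predictions, every returned label is trustworthy; the budget term is what the paper's unified utility function optimizes far more carefully than this sketch does.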
international conference on computer graphics and interactive techniques | 2012
Sheng-Jie Luo; I-Chao Shen; Bing-Yu Chen; Wen-Huang Cheng; Yung-Yu Chuang
This paper presents a novel technique for seamless stereoscopic image cloning, which performs both shape adjustment and color blending such that the stereoscopic composite is seamless in both perceived depth and color appearance. The core of the proposed method is an iterative disparity adaptation process that alternates between two steps: disparity estimation, which re-estimates the disparities in the gradient domain so that the disparities are continuous across the boundary of the cloned region; and perspective-aware warping, which locally re-adjusts the shape and size of the cloned region according to the estimated disparities. This process not only guarantees depth continuity across the boundary but also models local perspective projection in accordance with the disparities, leading to more natural stereoscopic composites. The proposed method allows for easy cloning of objects with intricate silhouettes and vague boundaries because it does not require precise segmentation of the objects. Several challenging cases are demonstrated to show that our method generates more compelling results than methods with only global shape adjustment.
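The gradient-domain disparity step can be illustrated with a one-dimensional toy version: inside the cloned region, keep the second differences (gradient structure) of the source disparities while pinning the boundary to the target's values, and solve by Gauss-Seidel iteration. This is a sketch under those assumptions, not the paper's 2D solver; `blend_disparity_1d` is a hypothetical name.

```python
def blend_disparity_1d(source, target, start, end, iters=2000):
    """1D gradient-domain blend (discrete Poisson equation): inside
    [start, end) reproduce the second differences of `source` while the
    values just outside the region stay pinned to `target`."""
    result = list(target)
    # desired second differences (discrete Laplacian) from the source signal
    lap = [source[i + 1] - 2 * source[i] + source[i - 1]
           for i in range(1, len(source) - 1)]
    for _ in range(iters):
        for i in range(start, end):
            if 0 < i < len(result) - 1:
                # Gauss-Seidel update solving r[i-1] - 2 r[i] + r[i+1] = lap
                result[i] = (result[i - 1] + result[i + 1] - lap[i - 1]) / 2
    return result
```

With a quadratic source (constant Laplacian) and zero boundary values, the blended region converges to the parabola that matches both constraints, which is the 1D analogue of the disparities becoming continuous across the cloned region's boundary.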
IEEE Transactions on Visualization and Computer Graphics | 2015
Sheng-Jie Luo; Ying-Tse Sun; I-Chao Shen; Bing-Yu Chen; Yung-Yu Chuang
This paper presents a patch-based synthesis framework for stereoscopic image editing. The core of the proposed method builds upon a patch-based optimization framework with two key contributions: First, we introduce a depth-dependent patch-pair similarity measure for distinguishing and better utilizing image contents with different depth structures. Second, a joint patch-pair search is proposed for properly handling the correlation between the two views. The proposed method successfully overcomes two main challenges of editing stereoscopic 3D media: (1) maintaining the depth interpretation, and (2) providing controllability of the scene depth. The method offers patch-based solutions to a wide variety of stereoscopic image editing problems, including depth-guided texture synthesis, stereoscopic NPR, paint by depth, content adaptation, and 2D-to-3D conversion. Several challenging cases are demonstrated to show the effectiveness of the proposed method. The results of user studies also show that the proposed method produces stereoscopic images with good stereoscopic and visual quality.
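The two contributions can be caricatured in a few lines: a patch distance that adds a depth term to the usual color term, and a search that scores candidate patch pairs over both views at once so the two views stay correlated. The function names, patch representation (flat `(colors, depths)` tuples), and the weight `lam` are illustrative assumptions, not the paper's formulation.

```python
def patch_distance(colors_a, colors_b, depths_a, depths_b, lam=0.5):
    """Depth-dependent patch distance (illustrative): sum-of-squared color
    differences plus a weighted penalty for mismatched depth structure."""
    color = sum((a - b) ** 2 for a, b in zip(colors_a, colors_b))
    depth = sum((a - b) ** 2 for a, b in zip(depths_a, depths_b))
    return color + lam * depth

def joint_patch_search(target_left, target_right, candidates, lam=0.5):
    """Joint patch-pair search: pick the candidate pair minimizing the
    combined distance over BOTH views.  Each patch is a (colors, depths)
    tuple; each candidate is a (left_patch, right_patch) pair."""
    def pair_cost(cand):
        (cl, dl), (cr, dr) = cand
        return (patch_distance(target_left[0], cl, target_left[1], dl, lam)
                + patch_distance(target_right[0], cr, target_right[1], dr, lam))
    return min(candidates, key=pair_cost)
```

Searching for the pair jointly, rather than per view, is what prevents the left and right results from drifting apart and breaking the depth interpretation.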
IEEE Transactions on Multimedia | 2015
I-Chao Shen; Wen-Huang Cheng
As large online repositories of image and video data have emerged and continued to grow, the visual variation within them has also increased dramatically. For example, the visual scene of a photograph can be changed into different colors by image editing tools, or depicted by multiple representations such as a painting or a hand-drawn sketch. These large visual variations tend to cause ambiguities for existing computer vision algorithms when recognizing the visual analogies between such images, and often limit the potential of related applications. In this paper, we therefore propose a new approach for detecting reliable visual features in images, with a particular focus on improving the repeatability of local features across images containing the same semantic content (e.g., a landmark) but in different visual styles (e.g., a photo and a painting). We propose a novel method for establishing visual correspondences between images based on Gestalt theory, the psychological study of how human vision organizes visual perception. Experiments demonstrate that our approach outperforms state-of-the-art local features in various computer vision tasks, such as cross-domain image matching and retrieval.
Computer Graphics Forum | 2017
Wei-Tse Lee; Hsin-I Chen; Ming-Shiuan Chen; I-Chao Shen; Bing-Yu Chen
In virtual reality (VR) applications, content is usually generated by creating a 360° video panorama of a real-world scene. Although many capture devices are being released, producing high-resolution panoramas and displaying a virtual world in real time remains challenging due to the computationally demanding nature of the task. In this paper, we propose a real-time 360° video foveated stitching framework that renders the scene at different levels of detail, aiming to create a high-resolution panoramic video in real time that can be streamed directly to the client. Our foveated stitching algorithm takes videos from multiple cameras as input and, combined with measurements of human visual attention (i.e., the acuity map and the saliency map), greatly reduces the number of pixels to be processed. We further parallelize the algorithm on the GPU to achieve a responsive interface, and validate our results via a user study. Our system accelerates graphics computation by a factor of 6 on a Google Cardboard display.
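The pixel-budget idea can be sketched as mapping per-region acuity (falling off from the gaze point) and saliency scores to a stitching level of detail. The blend weights and quantization below are illustrative assumptions, not the paper's calibrated model; `foveated_lod` is a hypothetical name.

```python
def foveated_lod(acuity, saliency, levels=3):
    """Map per-region acuity and saliency scores (both in [0, 1]) to a
    stitching level of detail: 0 = full resolution, larger = coarser.
    The 0.7/0.3 blend of the two attention maps is an assumed weighting."""
    lods = []
    for a, s in zip(acuity, saliency):
        importance = 0.7 * a + 0.3 * s            # assumed blend of the maps
        lods.append(min(levels - 1, int((1 - importance) * levels)))
    return lods
```

Regions near the gaze point (high acuity) or that attract attention (high saliency) are stitched at full resolution, while peripheral, unremarkable regions are stitched coarsely, which is where the pixel savings come from.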
pacific conference on computer graphics and applications | 2016
Chun-Kai Huang; Yi-Ling Chen; I-Chao Shen; Bing-Yu Chen
In this paper, we introduce an interactive method suitable for retargeting both 3D objects and scenes. Initially, the input object or scene is decomposed into a collection of constituent components enclosed by corresponding control bounding volumes, which capture the intra-structures of the object or the semantic grouping of objects in the 3D scene. The overall retargeting is accomplished through a constrained optimization that manipulates the control bounding volumes. Without inferring the intricate dependencies between the components, we define a minimal set of constraints that maintain the spatial arrangement and connectivity between the components to regularize the valid retargeting results. The default retargeting behavior can then be easily altered by additional semantic constraints imposed by users. This strategy makes the proposed method highly flexible, able to process a wide variety of 3D objects and scenes under a unified framework. In addition, the proposed method achieves more general structure-preserving pattern synthesis at both the object and scene levels. We demonstrate the effectiveness of our method by applying it to several complicated 3D objects and scenes.
Computer Graphics Forum | 2015
Hsin-I Chen; Tse-Ju Lin; Xiao-Feng Jian; I-Chao Shen; Bing-Yu Chen
A person's handwriting appears differently within a typical range of variations, and the shapes of handwritten characters also show complex interactions with their nearby neighbors. This makes automatic synthesis of handwritten characters and paragraphs very challenging. In this paper, we propose a method for synthesizing handwritten text according to a writer's handwriting style. The synthesis algorithm is composed of two phases. First, we create multidimensional morphable models for different characters based on one writer's data. Then, we compute a cursive probability to decide whether each pair of neighboring characters is conjoined or not. By jointly modeling the handwriting style and the conjoined property through a novel trajectory optimization, final handwritten words can be synthesized from a set of collected samples. Furthermore, paragraph layouts are also automatically generated and adjusted according to the writer's style obtained from the same dataset. We demonstrate that our method can successfully synthesize an entire paragraph that mimics a writer's handwriting using his or her collected handwriting samples.
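A cursive-probability decision of this kind can be caricatured as a logistic model over features of a neighboring character pair. The two features used here (inter-character gap and stroke-angle alignment) and the fixed weights are illustrative assumptions only; the paper learns the model from the writer's collected samples.

```python
import math

def cursive_probability(gap, angle_alignment, w_gap=-2.0, w_angle=1.0, bias=0.5):
    """Toy cursive probability: a logistic model over two pair features.
    A large gap between characters lowers the probability (negative weight);
    well-aligned stroke ends raise it.  Weights are illustrative only."""
    z = w_gap * gap + w_angle * angle_alignment + bias
    return 1.0 / (1.0 + math.exp(-z))

def conjoined(gap, angle_alignment, threshold=0.5):
    """Decide whether a neighboring character pair should be drawn joined."""
    return cursive_probability(gap, angle_alignment) >= threshold
```

Thresholding the probability yields the binary conjoined/separate decision that the trajectory optimization then has to realize smoothly.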
international conference on computer graphics and interactive techniques | 2013
Chien-Wen Jung; I-Chao Shen; Sheng-Jie Luo; Chun-Kai Huang; Bing-Yu Chen; Wen-Huang Cheng
Before the widespread availability of modern capture devices, painting played an important role in recording and depicting the real world. These painted artworks not only preserve an immediate depiction of a scene but also carry good aesthetic sense. However, most of the depictions in a painting are impossible to reproduce with modern devices. In this extended abstract, we present a method that "photolizes" an input painting: it generates an image that resembles the scene of the painting but has a photo-realistic appearance. A user provides a number of photos that are partly similar to the painting, and specifies a number of corresponding edge strokes in the painting and in one of the photos indicating corresponding object edges. Our method automatically deforms these corresponding edges and composites the photos together.
Computer Graphics Forum | 2013
Sheng-Jie Luo; Chin-Yu Lin; I-Chao Shen; Bing-Yu Chen
Creating variations of an image object is an important task, which usually requires manipulating the skeletal structure of the object. However, most existing methods (such as image deformation) only allow for stretching the skeletal structure of an object; modifying the skeletal topology remains a challenge. This paper presents a technique for synthesizing image objects with different skeletal structures while respecting an input image object. To apply the technique, a user first annotates the skeletal structure of the input object by specifying a number of strokes in the input image, and draws corresponding strokes in an output domain to generate new skeletal structures. Then, a number of example texture pieces are sampled along the strokes in the input image and pasted along the strokes in the output domain, following their orientations. The result is obtained by optimizing the texture sampling and seam computation. The proposed method is successfully used to synthesize challenging skeletal structures, such as skeletal branches, and a wide range of image objects with various skeletal structures, demonstrating its effectiveness.
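Placing texture pieces along a drawn stroke starts with sampling positions at uniform arc length along it. This minimal sketch handles only that sampling step for a polyline stroke (the paper's orientation handling and seam optimization are omitted); `sample_along_stroke` is a hypothetical name.

```python
import math

def sample_along_stroke(stroke, spacing):
    """Sample points at uniform arc-length spacing along a polyline stroke,
    as a stand-in for choosing where texture pieces get pasted."""
    samples = [stroke[0]]
    dist_left = spacing                  # distance until the next sample
    for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        t = 0.0                          # arc length consumed on this segment
        while seg - t >= dist_left:
            t += dist_left
            samples.append((x0 + (x1 - x0) * t / seg,
                            y0 + (y1 - y0) * t / seg))
            dist_left = spacing
        dist_left -= seg - t
    return samples
```

Carrying `dist_left` across segments keeps the spacing uniform even where the polyline bends, which matters when the same spacing must hold on both the input and output strokes.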
international conference on computer graphics and interactive techniques | 2018
Ming-Shiuan Chen; I-Chao Shen; Chun-Kai Huang; Bing-Yu Chen
In recent years, personalized fabrication has attracted much attention due to the widespread adoption of consumer-level 3D printers. However, consumer 3D printers still suffer from shortcomings such as long production times and limited output sizes, which are undesirable for large-scale rapid prototyping. We propose a hybrid 3D fabrication method that combines 3D printing and Zometool structures for time- and cost-effective fabrication of large objects. The key to our approach is to use a compact, sturdy, and reusable internal structure (Zometool) as infill, replacing the time- and material-consuming 3D-printed interior. Unlike the laser-cut shapes used in [Song et al. 2016], we are able to reuse the inner structure. As a result, we can significantly reduce cost and time by printing only thin 3D external shells.
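The savings from printing only a thin shell can be ballparked with the thin-shell approximation: shell volume ≈ surface area × thickness. This back-of-the-envelope helper is an illustration of why the hybrid approach saves material, not a calculation from the paper; `shell_material_fraction` is a hypothetical name.

```python
def shell_material_fraction(volume, surface_area, shell_thickness):
    """Thin-shell estimate of the printed material the hybrid approach still
    needs: shell volume ~= surface area * thickness; the interior is filled
    by the reusable Zometool structure instead of printed material.
    Returns the fraction of a solid print's material the shell consumes."""
    shell_volume = min(volume, surface_area * shell_thickness)
    return shell_volume / volume
```

For a 10 cm cube (volume 1000 cm³, surface area 600 cm²) with a 0.5 cm shell, only about 30% of the solid print's material is needed; print time, which roughly tracks deposited volume, shrinks accordingly.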