
Publication


Featured research published by Chih-Yuan Yao.


Computer Animation and Virtual Worlds | 2006

Generating genus‐n‐to‐m mesh morphing using spherical parameterization

Tong-Yee Lee; Chih-Yuan Yao; Hung-Kuo Chu; Ming-Jen Tai; Cheng-Chieh Chen

Surface parameterization is a fundamental tool in computer graphics and benefits many applications such as texture mapping, morphing, and re-meshing. Many spherical parameterization schemes with very nice properties have been proposed and widely used in the past. However, spherical parameterization is well known to be limited to genus-0 models. In this paper, we first propose a novel framework that extends spherical parameterization to handle a genus-n surface. In this framework, we represent a surface S of arbitrary genus by a positive mesh O and several negative meshes Ni, where each negative mesh represents a hole. The positive mesh O is obtained by removing all holes from the original surface S. Both the positive and negative meshes are then genus-0 and can each be spherically parameterized. To recover S, we apply a Boolean difference operation that subtracts the negative meshes Ni from the positive mesh O. Next, we apply this framework to genus-n-to-m mesh morphing without the restriction n = m. Finally, we show many interesting non-genus-0 mesh morphing sequences generated with this framework.
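As a rough illustration of the genus-0 building block the abstract relies on, the sketch below projects the vertices of a genus-0 mesh onto the unit sphere by normalizing their offsets from the centroid. The function name and this trivial projection are invented for illustration; the paper's actual parameterization scheme must additionally guarantee a fold-over-free embedding.

```python
import numpy as np

def naive_spherical_projection(vertices):
    """Project genus-0 mesh vertices onto the unit sphere by normalizing
    their offsets from the centroid. This is only the trivial starting
    point; real spherical parameterization schemes iteratively relax the
    embedding to avoid fold-overs."""
    v = np.asarray(vertices, dtype=float)
    centered = v - v.mean(axis=0)          # move the centroid to the origin
    norms = np.linalg.norm(centered, axis=1, keepdims=True)
    return centered / norms                # every vertex now lies on S^2

# the eight corners of a cube map to eight unit-length directions
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)])
sphere = naive_spherical_projection(cube)
```

Once both the positive and the negative meshes have such spherical embeddings, correspondence between them can be established on the sphere.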


Computer Animation and Virtual Worlds | 2005

Progressive mesh metamorphosis

Chao Hung Lin; Tong-Yee Lee; Hung-Kuo Chu; Chih-Yuan Yao

This paper describes a new integrated scheme for metamorphosis between two closed manifold genus-0 polyhedral models. Spherical parameterizations of the source and target models are created first. To control the morphing, any number of feature vertex pairs can be specified, and a fold-over-free warping method is used to align the two spherical embeddings. Our method does not create a merged meta-mesh or perform re-meshing to construct a common connectivity for the morphs. Instead, a scheme for progressive connectivity transformation of the two spherical parameterizations is employed to generate the intermediate meshes. A novel semi-overlay with a geomorph scheme is proposed to reduce the popping effects caused by the connectivity transformation. We demonstrate several examples of aesthetically pleasing morphing sequences using the proposed scheme.


Multimedia Tools and Applications | 2017

Improved local histogram equalization with gradient-based weighting process for edge preservation

Yu-Ren Lai; Ping-Chuan Tsai; Chih-Yuan Yao; Shanq-Jang Ruan

This paper presents a novel local histogram equalization that combines the transformation functions of non-overlapped sub-images based on gradient information, for edge preservation and better visualization. To ameliorate the over- and under-enhancement produced by conventional local histogram equalization, a bilateral Bezier-curve-based histogram modification strategy is first employed to moderate the significant and insufficient changes of the cumulative distribution in each sub-image. Yet this step does not consider gradient information, and the cumulative distributions of some enhanced sub-images remain too steep or too flat because of over- and under-enhancement, respectively. Therefore, the key insight of the proposed method is that the transformation functions of the partitioned sub-images are weighted and combined based on the proportion of gradients, so as to preserve image texture. In addition, the input image is separated into non-overlapped sub-images to reduce time complexity. On eight representative test images and by mean opinion score, the experimental results demonstrate that the proposed method is quite competitive with four state-of-the-art histogram equalization methods in the literature. Furthermore, the subjective evaluation shows that the proposed method can also be applied to practical applications and achieves good visual quality.
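To make the weighting idea concrete, here is a hypothetical sketch, not the paper's method: `gradient_weighted_le` and its weighting rule are invented for illustration. It equalizes each non-overlapped sub-image and blends it with a globally equalized image, using the tile's share of the total gradient energy as the blend weight; the paper's Bezier-based histogram modification is not reproduced.

```python
import numpy as np

def equalize(channel):
    """Plain histogram equalization of an 8-bit image or tile."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum() / channel.size
    return (cdf * 255).astype(np.uint8)[channel]   # lookup-table remap

def gradient_weighted_le(img, tiles=2):
    """Illustrative sketch: blend each tile's local equalization with the
    global equalization, weighted by the tile's share of gradient energy."""
    out = equalize(img).astype(float)              # global HE as the base
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)                        # gradient magnitude
    h, w = img.shape
    th, tw = h // tiles, w // tiles
    total = grad.sum() + 1e-9
    for i in range(tiles):
        for j in range(tiles):
            sl = np.s_[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            # tile's gradient share, normalized so a uniform image gives 1
            wgt = min(grad[sl].sum() / total * tiles * tiles, 1.0)
            local = equalize(img[sl]).astype(float)
            out[sl] = wgt * local + (1 - wgt) * out[sl]
    return out.astype(np.uint8)
```

Texture-rich tiles (high gradient energy) lean toward their local transformation, while flat tiles fall back to the global one, which is the intuition the abstract describes.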


IEEE Transactions on Visualization and Computer Graphics | 2017

Manga Vectorization and Manipulation with Procedural Simple Screentone

Chih-Yuan Yao; Shih-Hsuan Hung; Guo-Wei Li; I-Yu Chen; Reza Adhitya; Yu-Chi Lai

Manga are a popular artistic form around the world, and artists use simple line drawing and screentone to create all kinds of interesting productions. Vectorization helps digitally reproduce these elements for proper content and intention delivery on electronic devices. Therefore, this study aims at transforming scanned Manga into a vector representation for interactive manipulation and real-time rendering at arbitrary resolution. Our system first decomposes the patch into rough Manga elements, including possible borders and shading regions, using adaptive binarization and a screentone detector. We classify detected screentone into simple and complex patterns: our system extracts simple screentone properties for refining screentone borders, estimating lighting, compensating for missing strokes inside screentone regions, and later rendering resolution-independently with our procedural shaders. The system treats the rest as complex screentone areas and vectorizes them with our proposed line tracer, which locates the boundaries of all shading regions and polishes all shading borders with a curve-based Gaussian refiner. A user can lay down simple scribbles to cluster Manga elements intuitively into semantic components, and our system vectorizes these components into shading meshes with embedded Bezier curves as a unified foundation for consistent manipulation, including pattern manipulation, deformation, and lighting addition. Our system renders the shading regions with our procedural shaders and draws borders with the curve-based shader in real time and independently of resolution. For Manga manipulation, the proposed vector representation can not only be magnified without artifacts but also deformed easily to generate interesting results.
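The adaptive binarization that starts the pipeline can be sketched as below. This is a generic mean-based local threshold, assumed here as a plausible stand-in; the paper's actual binarizer, screentone detector, and line tracer are far richer, and the window size and offset are illustrative.

```python
import numpy as np

def adaptive_binarize(img, win=15, offset=10):
    """Mean-based adaptive binarization: a pixel counts as ink if it is
    darker than its local neighbourhood mean by more than `offset`.
    The local mean is computed with a summed-area (integral image) table."""
    img = img.astype(float)
    pad = win // 2
    p = np.pad(img, pad, mode='edge')
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))        # zero row/col for clean indexing
    h, w = img.shape
    # window sum at each pixel via four summed-area lookups
    s = (c[win:win + h, win:win + w] - c[:h, win:win + w]
         - c[win:win + h, :w] + c[:h, :w])
    local_mean = s / (win * win)
    return img < local_mean - offset        # True where ink
```

Because the threshold follows the local mean, uniform screentone regions and line strokes separate even under uneven scan illumination, which a single global threshold cannot handle.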


Neurocomputing | 2018

Low order adaptive region growing for lung segmentation on plain chest radiographs

Peter Chondro; Chih-Yuan Yao; Shanq-Jang Ruan; Li-Chien Chien

This study proposes a computer-aided region segmentation method for plain chest radiographs. It incorporates a contrast enhancement that increases the opacity of the lung regions. The region of interest (ROI) is localized preliminarily by a fast block-based binarization and morphological operations. Region boundaries are then refined using statistical region growing with an adaptive graph-cut technique that increases accuracy in regions of ambiguous gradient. Assessed on a representative dataset, the proposed method achieves an average segmentation accuracy of 96.3% with low complexity at 256p resolution.
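The statistical region growing step can be sketched as follows. This is a minimal generic version, assumed for illustration only: a 4-neighbour joins the region when its intensity is close to the region's running mean. The paper's low-order adaptation and graph-cut boundary refinement are not reproduced, and the parameters `k` and `tol` are invented.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, k=2.0, tol=5.0):
    """Minimal statistical region growing: a 4-neighbour joins the region
    if its intensity is within max(k * std, tol) of the region's running
    mean, updated incrementally as pixels are added."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, total_sq, n = float(img[seed]), float(img[seed]) ** 2, 1
    q = deque([seed])
    while q:
        y, x = q.popleft()
        mean = total / n
        var = max(total_sq / n - mean * mean, 0.0)
        thresh = max(k * var ** 0.5, tol)
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(img[ny, nx]) - mean) <= thresh:
                    mask[ny, nx] = True
                    total += float(img[ny, nx])
                    total_sq += float(img[ny, nx]) ** 2
                    n += 1
                    q.append((ny, nx))
    return mask
```

Keeping running sums instead of re-scanning the region keeps each acceptance test O(1), which matters at radiograph resolutions.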




IEEE Transactions on Visualization and Computer Graphics | 2017

Data-Driven NPR Illustrations of Natural Flows in Chinese Painting

Yu-Chi Lai; Bo-An Chen; Kuo-Wei Chen; Wei-Lin Si; Chih-Yuan Yao; Eugene Zhang

Introducing motion into existing static paintings is a field that is gaining momentum. This effort keeps artworks current and translates them to different forms for diverse audiences. Chinese ink paintings and Japanese Sumi-e are well recognized in Western cultures, yet not easily practiced due to the years of training required. We are motivated to develop an interactive system for artists, non-artists, Asians, and non-Asians to enjoy the unique style of Chinese paintings. In this paper, our focus is on replacing static water flow scenes with animations. We include flow patterns, surface ripples, and water wakes, which are challenging not only artistically but also algorithmically. We develop a data-driven system that procedurally computes a flow field based on stroke properties extracted from the painting, and animates water flows artistically and stylishly. Technically, our system first extracts water-flow-portraying strokes using their locations, oscillation frequencies, brush patterns, and ink densities. We construct an initial flow pattern by analyzing stroke structures, ink dispersion densities, and placement densities. We cluster the extracted strokes into stroke pattern groups to further convey the spirit of the original painting. Then, the system automatically computes a flow field according to the initial flow patterns, water boundaries, and flow obstacles. Finally, our system dynamically generates and animates the extracted stroke pattern groups with the constructed field for controllable smoothness and temporal coherence. Users can interactively place the extracted stroke patterns onto other paintings through our adapted Poisson-based composition for water flow animation. In conclusion, our system can visually transform a static Chinese painting into an interactive walk-through with seamless and vivid stroke-based flow animations in its original dynamic spirit, without flickering artifacts.


international conference on computer graphics and interactive techniques | 2013

Refocusing images captured from a stereoscopic camera

Chia-Lun Ku; Yu-Shuen Wang; Chia-Sheng Chang; Hung-Kuo Chu; Chih-Yuan Yao

Traditional photography projects a 3D scene onto a 2D image without recording the depth of each local region, which prevents users from changing the focus plane of a photograph once it has been taken. To tackle this problem, Ng et al. [2005] presented light-field cameras that record all focus planes of a scene and synthesized the refocused image using ray tracing. Nevertheless, the captured photographs are of low resolution because the image sensor is divided into sub-cells. Levin et al. [2007] embedded a coded aperture in the camera lens and recovered depth information from the blur patterns in a single image. However, the coded aperture blocks around 50% of the incoming light, so their system requires longer exposure times when taking pictures. Liang et al. [2008] also embedded a coded aperture in the camera lens but captured the scene with multiple exposures; it produces high-quality depth maps yet is not suitable for hand-held devices. Recently, the Microsoft Kinect has been used to estimate depth directly using infrared light, which works only in indoor environments.
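Given a per-pixel depth map (however it is obtained), synthetic refocusing reduces to blurring each pixel in proportion to its distance from the chosen focus plane. The toy sketch below assumes that setup; the function names, the iterated box blur, and the linear defocus model are illustrative, not the authors' stereo-based method, which would also need a proper thin-lens blur kernel.

```python
import numpy as np

def refocus(img, depth, focus_depth, strength=2.0):
    """Toy synthetic refocusing: blend each pixel between the sharp image
    and a blurred copy, with the blend weight growing with the pixel's
    distance from the focus plane."""
    def box_blur(a):
        # 5-point box blur with edge replication at the borders
        p = np.pad(a, 1, mode='edge')
        return (p[:-2, 1:-1] + p[2:, 1:-1]
                + p[1:-1, :-2] + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0
    blurred = img.astype(float)
    for _ in range(3):                     # iterate to widen the kernel
        blurred = box_blur(blurred)
    # defocus amount: 0 on the focus plane, saturating at full blur
    alpha = np.clip(strength * np.abs(depth - focus_depth), 0.0, 1.0)
    return (1 - alpha) * img + alpha * blurred
```

Pixels on the focus plane pass through unchanged, while pixels far from it receive the fully blurred value, mimicking a shallow depth of field.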


international conference on computer graphics and interactive techniques | 2013

Adaptive manga re-layout on mobile device

Chia-Jung Tsai; Chih-Yuan Yao; Pei-Ying Chiang; Yu-Chi Lai; Ming-Te Chi; Hung-Kuo Chu; Yu-Shiang Wong; Yu-Shuen Wang

In the present day, smartphones and tablets are popular electronic devices for business, entertainment, and study due to their convenience, portability, and intuitive user interfaces. However, these advantages also induce one of their limitations: the limited available screen size. It is not comfortable to read articles, manga, magazines, and so on, on such a small screen. Figure 1 shows an overview of our system. Before re-laying out a manga, the system must first extract all panels in the manga through our designed corner matching algorithm, sort the extracted panels into a queue, i.e., a play list, based on the generally accepted manga reading rules, and transform the panels in the list for display under arbitrary access conditions.
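The panel-sorting step can be sketched as below. This is a simplified stand-in for the paper's reading rules, assuming the conventional Japanese order of top-to-bottom by row and right-to-left within a row; the function name and `row_tol` grouping heuristic are invented for illustration.

```python
def reading_order(panels, row_tol=20):
    """Sort panel bounding boxes (x, y, w, h) into conventional manga
    reading order: top-to-bottom by row, right-to-left within a row.
    Panels whose top edges differ by less than row_tol pixels are
    treated as belonging to the same row."""
    panels = sorted(panels, key=lambda p: p[1])      # rough top-to-bottom
    rows, current = [], [panels[0]]
    for p in panels[1:]:
        if abs(p[1] - current[0][1]) < row_tol:
            current.append(p)                        # same row of panels
        else:
            rows.append(current)
            current = [p]
    rows.append(current)
    ordered = []
    for row in rows:
        ordered.extend(sorted(row, key=lambda p: -p[0]))  # right-to-left
    return ordered
```

The resulting queue is what a re-layout engine would step through one panel at a time on a small screen.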


asia-pacific signal and information processing association annual summit and conference | 2013

A facial skin changing system

Yu-Ren Lai; Chih-Yuan Yao; Hao-Siang Hu; Ming-Te Chi; Yu-Chi Lai

This paper presents a novel system to remove facial scars and pores from a portrait. Facial skin complexion and color are important factors of attractiveness, and most people consider that a good portrait should be scar free and have smooth facial colors. Currently, most commercial digital cameras and smartphones provide facial-skin-beautification functions, but most of them only use simple image processing to smooth the captured image, removing small-scale unwanted facial scars and pores; they cannot remove large and obvious scars or pores from the face, as shown in Figure 1. Therefore, the main contribution of this system is to allow the user to replace the facial skin of a portrait with the smooth, scar-less skin of another portrait chosen from a database of portraits with scar-free skin and beautiful complexion collected from the web.

Collaboration


Dive into Chih-Yuan Yao's collaboration.

Top Co-Authors

Yu-Chi Lai, National Taiwan University of Science and Technology
Hung-Kuo Chu, National Tsing Hua University
Tong-Yee Lee, National Cheng Kung University
Ming-Te Chi, National Chengchi University
Pei-Ying Chiang, National Taipei University of Technology
Shih-Hsuan Hung, National Taiwan University of Science and Technology
Chao Hung Lin, National Cheng Kung University
Cheng-Chi Li, National Taiwan University of Science and Technology
Dobromir Todorov, National Taiwan University of Science and Technology
Hong-Nian Guo, National Taiwan University of Science and Technology