Publication


Featured research published by Yu-Chi Lai.


Symposium on Computer Animation | 2005

Group motion graphs

Yu-Chi Lai; Stephen Chenney; Shaohua Fan

We introduce Group Motion Graphs, a data-driven animation technique for groups of discrete agents, such as flocks, herds, or small crowds. Group Motion Graphs are conceptually similar to motion graphs constructed from motion-capture data, but with some important differences: we assume simulated motion; transition nodes are found by clustering group configurations from the input simulations; and clips to join transitions are explicitly constructed via constrained simulation. Graphs built this way offer known bounds on the trajectories they generate, making it easier to search for particular output motions. The resulting animations show realistic motion at significantly reduced computational cost compared to simulation, with improved control.
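The graph-walk synthesis the abstract describes can be sketched in miniature. Everything below is hypothetical (the node names, clip labels, and `synthesize` helper are invented for illustration); in the paper, nodes are clustered group configurations and edges are clips built by constrained simulation:

```python
import random

# Hypothetical motion graph: nodes stand for clustered group configurations,
# edges carry pre-computed transition clips (labeled strings here).
GRAPH = {
    "spread": [("flock", "clip_spread_to_flock"), ("spread", "clip_spread_loop")],
    "flock":  [("line", "clip_flock_to_line"), ("spread", "clip_flock_to_spread")],
    "line":   [("flock", "clip_line_to_flock")],
}

def synthesize(start, n_clips, rng):
    """Walk the graph from `start`, concatenating transition clips."""
    node, clips = start, []
    for _ in range(n_clips):
        node, clip = rng.choice(GRAPH[node])
        clips.append(clip)
    return clips

print(synthesize("spread", 4, random.Random(0)))
```

Because every edge replays a bounded, pre-constructed clip, any walk through the graph stays within known trajectory bounds, which is what makes searching for particular output motions tractable.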


International Journal of Communication Systems | 2011

3D video communications: Challenges and opportunities

Guan-Ming Su; Yu-Chi Lai; Andres Kwasinski; Haohong Wang

This paper surveys major techniques in the 3D communications area, covering the whole pipeline of the 3D video communication framework, including the 3D content creation, data representation, compression, delivery, decompression, post-processing, and 3D scene rendering stages. Both the current state of the art, stereo 3D, and the future trend, free-viewpoint 3D, are described in detail. The paper also highlights a few features of the emerging 4G wireless systems that are critical for 3D communications system design. Finally, promising but challenging topics, such as 3D over 4G networks, distributed 3D video coding, 3D multi-user communication, and scalable and universal 3D access, are discussed as directions for further investigation.


Eurographics Symposium on Rendering Techniques | 2005

Metropolis photon sampling with optional user guidance

Shaohua Fan; Stephen Chenney; Yu-Chi Lai

We present Metropolis Photon Sampling (MPS), a visual importance-driven algorithm for populating photon maps. Photon Mapping and other particle tracing algorithms fail if the photons are poorly distributed. Our approach samples light transport paths that join a light to the eye, which accounts for the viewer in the sampling process and provides information to improve photon storage. Paths are sampled with a Metropolis-Hastings algorithm that exploits coherence among important light paths. We also present a technique for including user selected paths in the sampling process without introducing bias. This allows a user to provide hints about important paths or reduce variance in specific parts of the image. We demonstrate MPS with a range of scenes and show quantitative improvements in error over standard Photon Mapping and Metropolis Light Transport.
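The Metropolis-Hastings core that MPS builds on can be illustrated with a toy one-dimensional stand-in for path space; the real algorithm mutates full light transport paths, so the `target` function and coordinates here are invented for illustration only:

```python
import math
import random

def target(x):
    # Toy stand-in for the image contribution of a light path at
    # one-dimensional "path coordinate" x (known only up to a constant).
    return math.exp(-0.5 * (x - 2.0) ** 2)

def metropolis_samples(n, rng, step=0.5):
    """Metropolis-Hastings chain with a symmetric Gaussian proposal."""
    x, out = 0.0, []
    for _ in range(n):
        cand = x + rng.gauss(0.0, step)
        # Symmetric proposal: accept with probability min(1, t(cand)/t(x)).
        if rng.random() < min(1.0, target(cand) / target(x)):
            x = cand
        out.append(x)
    return out

samples = metropolis_samples(20000, random.Random(1))
mean = sum(samples) / len(samples)
print(round(mean, 2))  # concentrates near the target mode at x = 2
```

The acceptance test depends only on a ratio of target values, so the chain can sample path space even when contributions are known only up to a normalization constant; successive states differ by small mutations, which is how the method exploits coherence among important light paths.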


Eurographics Symposium on Rendering Techniques | 2007

Photorealistic image rendering with Population Monte Carlo energy redistribution

Yu-Chi Lai; Shaohua Fan; Stephen Chenney; Charles Dyer

This work presents a novel global illumination algorithm that concentrates computation on important light transport paths and automatically adjusts the area over which each path's energy is distributed. We adapt the statistical framework of Population Monte Carlo to global illumination to improve rendering efficiency. Information collected in previous iterations guides subsequent iterations by adapting the kernel function to approximate the target distribution without introducing bias into the final result. Based on this framework, our algorithm automatically adapts the amount of energy redistribution at different pixels and the area over which energy is redistributed. Our results show that efficiency can be improved by exploiting the correlated information among light transport paths.


Computer Graphics Forum | 2006

Optimizing Control Variate Estimators for Rendering

Shaohua Fan; Stephen Chenney; Bo Hu; Kam-Wah Tsui; Yu-Chi Lai

We present the Optimizing Control Variate (OCV) estimator, a new estimator for Monte Carlo rendering. Based upon a deterministic sampling framework, OCV allows multiple importance sampling functions to be combined in one algorithm. Its optimizing nature addresses a major problem with control variate estimators for rendering: users supply a generic correlated function which is optimized for each estimate, rather than a single highly tuned one that must work well everywhere. We demonstrate OCV with both direct lighting and irradiance‐caching examples, showing improvements in image error of over 35% in some cases, for little extra computation time.
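The control variate idea the paper builds on can be shown in miniature. This sketch uses a fixed correlated function g(x) = 1 + x with a known integral and a fixed unit weight; OCV goes further by optimizing the weighting of the user-supplied correlated function for each estimate:

```python
import math
import random

def cv_estimate(n, rng):
    """Estimate the integral of f(x) = e^x on [0, 1] with a control variate:
    g(x) = 1 + x is correlated with f, and its integral (1.5) is known."""
    g_integral = 1.5
    total = 0.0
    for _ in range(n):
        x = rng.random()
        total += math.exp(x) - (1.0 + x)  # residual varies far less than f
    return g_integral + total / n

est = cv_estimate(5000, random.Random(2))
print(round(est, 3))  # close to the exact value e - 1 ≈ 1.718
```

Because only the low-variance residual f - g is sampled while g's integral is added exactly, the estimator stays unbiased but its variance drops in proportion to how well g tracks f, which is why a generic correlated function with per-estimate optimization is attractive for rendering.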


Eurographics | 2015

Evaluating 2D Flow Visualization Using Eye Tracking

Hsin Yang Ho; I-Cheng Yeh; Yu-Chi Lai; Wen-Chieh Lin; Fu-Yin Cherng

Flow visualization is recognized as an essential tool in many scientific research fields, and various visualization approaches have been proposed. Several studies have evaluated their effectiveness, but these studies rarely examine performance from the perspective of visual perception. In this paper, we explore how users' visual perception is influenced by different 2D flow visualization methods. An eye tracker is used to analyze users' visual behavior as they perform free viewing, advection prediction, flow feature detection, and flow feature identification tasks on flow field images generated by different visualization methods. We evaluate the illustration capability of five representative visualization algorithms. Our results show that eye-tracking-based evaluation provides more insight for quantitatively analyzing the effectiveness of these visualization methods.


The Visual Computer | 2014

Geometry-shader-based real-time voxelization and applications

Hsu-Huai Chang; Yu-Chi Lai; Chin-Yuan Yao; Kai-Lung Hua; Yuzhen Niu; Feng Liu

This work proposes a new voxelization algorithm based on newly available GPU functionalities and designs several real-time applications that render complex lighting effects from the voxelization result. The voxelization algorithm can efficiently transform a highly complex scene in a surface-boundary representation into a set of voxels in one GPU pass using the geometry shader. Newly available 3D textures are used to directly record the surficial and volumetric properties of objects, such as opaqueness, refraction, and transmittance. First, the use of 3D textures removes the strenuous effort otherwise required to modify the encoding and decoding scheme when adjusting the voxel resolution. Second, the surficial and volumetric properties recorded in 3D textures can be used to interactively compute and render more realistic lighting effects, including the shadows of objects with complex occlusion and the refraction and transmittance of transparent objects. The shadow is rendered with an absorption coefficient, computed from the number of surfaces drawn into each voxel during voxelization, that models the amount of light passing through partially occluded complex objects. The surface normal, transmittance coefficient, and refraction index recorded in each voxel can be used to simulate the refraction and transmittance of transparent objects with our multiple-surfaced refraction algorithm. Finally, the results demonstrate that our algorithm can transform a dynamic scene into a set of voxels and render complex lighting effects in real time without any pre-processing.


IEEE Transactions on Visualization and Computer Graphics | 2017

Manga Vectorization and Manipulation with Procedural Simple Screentone

Chih-Yuan Yao; Shih-Hsuan Hung; Guo-Wei Li; I-Yu Chen; Reza Adhitya; Yu-Chi Lai

Manga is a popular artistic form around the world, and artists use simple line drawing and screentone to create all kinds of interesting productions. Vectorization helps digitally reproduce these elements for proper content and intention delivery on electronic devices. Therefore, this study aims at transforming scanned Manga into a vector representation for interactive manipulation and real-time rendering at arbitrary resolution. Our system first decomposes the input into rough Manga elements, including possible borders and shading regions, using adaptive binarization and a screentone detector. We classify detected screentone into simple and complex patterns: our system extracts simple screentone properties for refining screentone borders, estimating lighting, compensating for missing strokes inside screentone regions, and later resolution-independent rendering with our procedural shaders. Our system treats the rest as complex screentone areas and vectorizes them with our proposed line tracer, which locates the boundaries of all shading regions and polishes all shading borders with the curve-based Gaussian refiner. A user can lay down simple scribbles to cluster Manga elements intuitively into semantic components, and our system vectorizes these components into shading meshes with embedded Bézier curves as a unified foundation for consistent manipulation, including pattern manipulation, deformation, and lighting addition. Our system renders the shading regions in real time and resolution-independently with our procedural shaders, and draws borders with the curve-based shader. For Manga manipulation, the proposed vector representation can not only be magnified without artifacts but also deformed easily to generate interesting results.


IEEE Transactions on Systems, Man, and Cybernetics | 2018

Background Extraction Using Random Walk Image Fusion

Kai-Lung Hua; Hong-Cyuan Wang; Chih-Hsiang Yeh; Wen-Huang Cheng; Yu-Chi Lai

Extracting a clean background is important for computer vision and augmented reality. Background extraction generally assumes that a clean background shot exists somewhere in the input sequence, but real situations, such as highway traffic videos, may violate this assumption. Therefore, our probabilistic model-based method formulates the fusion of candidate background patches from the input sequence as a random walk problem and seeks a globally optimal solution based on their temporal and spatial relationships. Furthermore, we design two quality measures that consider spatial and temporal coherence and contrast distinctness among pixels as the basis for background selection. A static background should have high temporal coherence among frames, and thus we improve our fusion precision with a temporal contrast filter and an optical-flow-based motionless patch extractor. Experiments demonstrate that our algorithm can successfully extract artifact-free background images at low computational cost compared to state-of-the-art algorithms.
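The patch-fusion step can be loosely illustrated on a one-dimensional strip of patch positions. This is a dynamic-programming stand-in with invented quality scores, not the paper's random-walk formulation or its quality measures; it only shows the trade-off between per-patch quality and spatial coherence:

```python
def fuse(quality, switch_cost):
    """Pick one candidate frame per patch position along a 1D strip,
    maximizing summed quality minus a penalty whenever two neighbouring
    positions take their patch from different frames (exact DP on a chain).
    quality[pos][frame] -> score; returns the chosen frame per position."""
    n_pos, n_frames = len(quality), len(quality[0])
    best = [list(quality[0])]   # best[p][f]: best score ending at (p, f)
    back = []                   # back[p-1][f]: best predecessor frame
    for p in range(1, n_pos):
        row, ptr = [], []
        for f in range(n_frames):
            cands = [best[-1][g] - (switch_cost if g != f else 0.0)
                     for g in range(n_frames)]
            j = max(range(n_frames), key=lambda g: cands[g])
            row.append(quality[p][f] + cands[j])
            ptr.append(j)
        best.append(row)
        back.append(ptr)
    f = max(range(n_frames), key=lambda g: best[-1][g])
    path = [f]
    for ptr in reversed(back):  # backtrack the optimal assignment
        f = ptr[f]
        path.append(f)
    return path[::-1]

# Three positions, two candidate frames; frame 1 is cleaner everywhere
# except position 0, where (say) a moving car corrupts it.
quality = [[0.9, 0.1], [0.4, 0.8], [0.4, 0.8]]
print(fuse(quality, switch_cost=0.5))
```

The chosen assignment keeps frame 0 only where frame 1 is corrupted, mirroring how a coherence term lets clean patches from different frames be stitched into one artifact-free background.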


The Visual Computer | 2015

Robust and efficient adaptive direct lighting estimation

Yu-Chi Lai; Hsuan-Ting Chou; Kuo-Wei Chen; Shaohua Fan

Hemispherical integrals are important for the estimation of direct lighting, which has a major impact on the results of global illumination. This work proposes the population Monte Carlo hemispherical integral (PMC-HI) sampler to improve the efficiency of direct lighting estimation. The sampler is unbiased and derived from the population Monte Carlo framework, which works on a population of samples and learns a better sampling function over iterations. Information found in one iteration guides subsequent iterations by distributing more samples to important sampling techniques, focusing effort on the sampling sub-domains that contribute most to the hemispherical integrals. In addition, a cone sampling strategy is proposed to enhance the success rate when complex occlusions exist. The images rendered with PMC-HI are compared against those rendered with multiple importance sampling (Veach and Guibas In: SIGGRAPH '95, pp 419–428, 1995), adaptive light sample distributions (Donikian et al. IEEE Trans Vis Comput Graph 12(3):353–364, 2006), and multidimensional hemispherical adaptive sampling (Hachisuka et al. ACM Trans Graph 27(3):33:1–33:10, 2008). Our PMC-HI sampler can improve rendering efficiency.
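The idea of redistributing a sample budget among techniques across iterations can be sketched with two toy sampling techniques on a 1D integrand. The allocation rule below (re-weight by observed per-technique spread, a Neyman-style heuristic) and both techniques are invented for illustration; this is not the PMC-HI sampler itself:

```python
import math
import random

def f(x):
    return x * x                      # toy integrand on [0, 1]; true value 1/3

# Two stand-in sampling techniques returning (sample, pdf at sample).
def sample_uniform(rng):
    x = rng.random()
    return x, 1.0

def sample_linear(rng):
    x = math.sqrt(rng.random())       # draws from pdf p(x) = 2x
    return x, 2.0 * x

def adaptive_estimate(iters, n, rng):
    """Each iteration re-allocates the budget toward the technique whose
    importance weights showed higher spread, i.e. where more samples help."""
    techniques = [sample_uniform, sample_linear]
    alloc = [0.5, 0.5]
    total, count = 0.0, 0
    for _ in range(iters):
        spreads = []
        for t, tech in enumerate(techniques):
            n_t = max(2, round(alloc[t] * n))
            ws = [f(x) / pdf for x, pdf in (tech(rng) for _ in range(n_t))]
            total += sum(ws)
            count += n_t
            mean = sum(ws) / n_t
            spreads.append(
                math.sqrt(sum((w - mean) ** 2 for w in ws) / n_t) + 1e-12)
        alloc = [s / sum(spreads) for s in spreads]   # next iteration's budget
    return total / count

est = adaptive_estimate(10, 400, random.Random(3))
print(round(est, 3))  # close to the exact value 1/3
```

Each technique's weights f/p are individually unbiased, so shifting the budget between techniques changes only the variance, which is what lets this kind of adaptation proceed "without introducing bias" as the PMC framework requires.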

Collaboration


Dive into Yu-Chi Lai's collaborations.

Top Co-Authors

Chih-Yuan Yao, National Taiwan University of Science and Technology
Andres Kwasinski, Rochester Institute of Technology
Kai-Lung Hua, National Taiwan University of Science and Technology
Feng Liu, Portland State University
Shaohua Fan, University of Wisconsin-Madison
Yuzhen Niu, Portland State University
Stephen Chenney, University of Wisconsin-Madison
Kuo-Wei Chen, National Taiwan University of Science and Technology