Chun Fa Chang
National Taiwan Normal University
Publication
Featured research published by Chun Fa Chang.
international conference on computer graphics and interactive techniques | 1999
Chun Fa Chang; Gary Bishop; Anselmo Lastra
Using multiple reference images in 3D image warping has been a challenging problem. Recently, the Layered Depth Image (LDI) was proposed by Shade et al. to merge multiple reference images under a single center of projection while maintaining the simplicity of warping a single reference image. However, it does not consider the issue of sampling rate. We present the LDI tree, which combines a hierarchical space-partitioning scheme with the concept of the LDI. It preserves the sampling rates of the reference images by adaptively selecting an LDI in the LDI tree for each pixel. When rendering from the LDI tree, we only have to traverse the tree to the levels that are comparable to the sampling rate of the output image. We also present a progressive refinement feature and a "gap filling" algorithm implemented by pre-filtering the LDI tree. We show that the amount of memory required has the same order of growth as the 2D reference images, which also bounds the rendering time to be less than rendering directly from all reference images. CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation - Viewing Algorithms; I.3.6 [Computer Graphics]: Methodology and Techniques - Graphics data structures and data types; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism.
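The level-selection idea described in the abstract above can be sketched as follows: traverse the LDI tree only as deep as the level whose sampling rate is comparable to the output image's. This is an illustrative sketch only, assuming a quadtree-style tree in which each level of subdivision doubles the sampling rate; the function name and parameters are hypothetical, not from the paper.

```python
import math

def ldi_tree_level(ref_sampling_rate, output_sampling_rate, max_depth):
    """Choose how deep to traverse the LDI tree: stop at the level whose
    sampling rate roughly matches the output image's (hypothetical sketch;
    each level down is assumed to double the sampling rate)."""
    if output_sampling_rate >= ref_sampling_rate:
        return max_depth  # output needs full detail: traverse to the leaves
    # number of rate halvings needed to reach the coarser output rate
    levels_up = math.ceil(math.log2(ref_sampling_rate / output_sampling_rate))
    return max(0, max_depth - levels_up)
```

For example, with 512-sample reference images and a 128-sample output, two levels of the tree can be skipped, which is what bounds rendering cost by the output resolution rather than the number of reference images.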
pacific rim conference on multimedia | 2002
Chun Fa Chang; Shyh Haur Ger
Compared to a personal computer, mobile devices typically have weaker processing power, less memory capacity, and lower display resolution. While the former two factors are clearly disadvantages for 3D graphics applications running on mobile devices, the display factor can be turned into an advantage instead. However, the traditional 3D graphics pipeline cannot take advantage of the smaller display because its run time depends mostly on the number of polygons to be rendered. In contrast, the run time of image-based rendering methods depends mainly on the display resolution, which makes them well suited for mobile devices. Furthermore, we may use the network connection to build a client-server framework, which allows us to integrate with non-image-based rendering programs. We present our system framework and experimental results on PocketPC-based devices.
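The complexity contrast drawn above (a polygon-bound geometry pipeline versus a resolution-bound image-warping pipeline) can be made concrete with a toy cost model; the function and its constants are purely illustrative, not measurements from the paper.

```python
def render_cost(num_polygons, width, height, method):
    """Toy per-frame cost model in abstract 'work units', contrasting the
    two pipelines. Illustrative only; real pipelines differ by constants."""
    if method == "geometry":
        return num_polygons        # traditional pipeline: scales with the scene
    if method == "image_based":
        return width * height      # warping: scales with output resolution
    raise ValueError(method)
```

On a small 240x320 handheld display, the image-based cost stays fixed at 76,800 units no matter how complex the scene is, whereas the geometry cost grows without bound.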
international conference on computer graphics and interactive techniques | 1999
Norman P. Jouppi; Chun Fa Chang
In this paper we present an algorithm for low-cost hardware antialiasing and transparency. The technique keeps a central Z value along with 8-bit floating-point Z gradients in the X and Y dimensions for each fragment within a pixel (hence the name Z³). It uses a small, fixed amount of storage per pixel. If more fragments are generated for a pixel than the space available, it merges only as many fragments as necessary to fit in the available per-pixel memory. The merging occurs on the fragments having the closest Z values, which combines different fragments from the same surface, resulting in both storage and processing efficiency. When operating on opaque surfaces, Z³ can provide superior image quality to sparse supersampling methods that use eight samples per pixel, while using storage for only three fragments. Z³ also makes the use of large numbers of samples (e.g., 16) feasible in inexpensive hardware, enabling higher-quality images. It is simple to implement because it uses a small fixed number of fragments per pixel. Z³ can also provide order-independent transparency even when many transparent surfaces are present. Moreover, unlike the original A-buffer algorithm, it correctly antialiases interpenetrating transparent surfaces because it has three-dimensional Z information within each pixel.
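The closest-Z merge rule described above can be sketched in a few lines. This is a simplified model, assuming fragments are just (depth, coverage) pairs; the real hardware scheme also tracks the 8-bit Z gradients and subpixel coverage masks, which this sketch omits.

```python
def insert_fragment(fragments, new_frag, capacity=3):
    """Insert a (z, coverage) fragment into a pixel's fixed-size fragment
    list; on overflow, merge the two fragments with the closest Z values
    (simplified sketch of the closest-Z merge rule)."""
    fragments = sorted(fragments + [new_frag])  # keep sorted by depth
    while len(fragments) > capacity:
        # find the adjacent pair with the smallest Z difference
        i = min(range(len(fragments) - 1),
                key=lambda k: fragments[k + 1][0] - fragments[k][0])
        (z0, c0), (z1, c1) = fragments[i], fragments[i + 1]
        # coverage-weighted merge of the two closest fragments
        c = c0 + c1
        z = (z0 * c0 + z1 * c1) / c if c else (z0 + z1) / 2
        fragments[i:i + 2] = [(z, c)]
    return fragments
```

Because nearby Z values usually belong to the same surface, the merge tends to combine fragments of one surface rather than blur distinct surfaces together.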
international conference on computer graphics and interactive techniques | 2011
Hsiang-Ting Chen; Li Yi Wei; Chun Fa Chang
Revision control is a vital component of digital project management and has been widely deployed for text files. Binary files, on the other hand, have received relatively less attention. This can be inconvenient for graphics applications that use a significant amount of binary data, such as images, videos, meshes, and animations. Existing strategies such as storing whole files for individual revisions or simple binary deltas can consume significant storage and obscure vital semantic information. We present a nonlinear revision control system for images, designed with common digital editing and sketching workflows in mind. We use a DAG (directed acyclic graph) as the core structure, with DAG nodes representing editing operations and DAG edges the corresponding spatial, temporal, and semantic relationships. We visualize our DAG in RevG (revision graph), which provides not only a meaningful display of the revision history but also an intuitive interface for common revision control operations such as review, replay, diff, addition, branching, merging, and conflict resolution. Beyond revision control, our system also facilitates artistic creation processes in common image editing and digital painting workflows. We have built a prototype system upon GIMP, an open source image editor, and demonstrate its effectiveness through a formative user study and comparisons with alternative revision control systems.
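The node/edge structure described above can be sketched minimally: each node is an editing operation, edges point from prerequisite edits to dependents, and "replay" is a topological walk that reapplies operations. A toy sketch under those assumptions (the class and function names are hypothetical; the paper's system also labels edges with spatial, temporal, and semantic relationships):

```python
class EditNode:
    """One editing operation in the revision DAG (hypothetical sketch)."""
    def __init__(self, op, parents=()):
        self.op = op                 # callable: image -> image
        self.parents = list(parents) # edges from prerequisite edits

def replay(node, image):
    """Replay a revision: apply all ancestor edits in dependency order,
    then the node's own operation (depth-first topological order)."""
    seen, order = set(), []
    def visit(n):
        if id(n) in seen:
            return
        seen.add(id(n))
        for p in n.parents:
            visit(p)
        order.append(n)
    visit(node)
    for n in order:
        image = n.op(image)
    return image
```

Branching falls out naturally: two nodes sharing a parent are divergent edits, and a node with two parents is a merge.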
The Visual Computer | 2005
Ke Sen Huang; Chun Fa Chang; Yu Yao Hsu; Shi Nine Yang
We present a novel constraint-based keyframe extraction technique, Key Probe. Based on animator-specified constraints, the method converts a skeleton-based motion or animated mesh to a keyframe-based representation. In contrast to previous curve simplification or clustering methods, we cast keyframe extraction as a constrained matrix factorization problem and solve it with least-squares optimization. The extracted keyframes have two uses: they can be used for browsing, or they can be blended to reconstruct all other frames of an animation. Our approach is general and suitable for both rigid-body and soft-body animations. Experiments on various types of animation examples show that the proposed method produces remarkable results in terms of quality and compression ratio. Empirical tests also show that our algorithm consistently offers better efficiency than principal component analysis (PCA) and independent component analysis (ICA).
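The blending side of the factorization above reduces, per frame, to a least-squares fit: find the weights on the chosen keyframes that best reconstruct the frame. A minimal sketch for the two-keyframe case, solving the 2x2 normal equations directly (the function is illustrative and omits the paper's animator-specified constraints and the keyframe selection step):

```python
def blend_weights(frame, key_a, key_b):
    """Least-squares blend weights (wa, wb) minimizing
    ||frame - wa*key_a - wb*key_b||, via the 2x2 normal equations.
    Frames and keyframes are flat lists of pose coordinates."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    aa, ab, bb = dot(key_a, key_a), dot(key_a, key_b), dot(key_b, key_b)
    fa, fb = dot(frame, key_a), dot(frame, key_b)
    det = aa * bb - ab * ab        # assumes keyframes are not collinear
    wa = (fa * bb - fb * ab) / det
    wb = (aa * fb - ab * fa) / det
    return wa, wb
```

With more keyframes the same idea becomes a general linear least-squares solve, and choosing *which* frames to keep is the combinatorial part that Key Probe optimizes.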
interactive 3d graphics and games | 2005
Wan Chun Ma; Sung Hsiang Chao; Yu Ting Tseng; Yung-Yu Chuang; Chun Fa Chang; Bing-Yu Chen; Ming Ouhyoung
This paper presents a new technique for rendering bidirectional texture functions (BTFs) at different levels of detail (LODs). Our method first decomposes each BTF image into multiple subbands with a Laplacian pyramid. Each vector of Laplacian coefficients of a texel at the same level is regarded as a Laplacian bidirectional reflectance distribution function (BRDF). These vectors are then further compressed by applying principal components analysis (PCA). At the rendering stage, the LOD parameter for each pixel is calculated according to the distance from the viewpoint to the surface. Our rendering algorithm uses this parameter to determine how many levels of the BTF Laplacian pyramid are required for rendering. Under the same sampling resolution, a BTF gradually transitions to a BRDF as the camera moves away from the surface. Our method precomputes this transition and uses it for multiresolution BTF rendering. Our Laplacian pyramid representation allows real-time anti-aliased rendering of BTFs using graphics hardware. In addition to providing visually satisfactory multiresolution rendering for BTFs, our method achieves a compression rate comparable to available single-resolution BTF compression techniques.
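The core decomposition above is a standard Laplacian pyramid: repeatedly downsample, and store at each level the detail lost by going coarser, so the signal can be rebuilt exactly from the subbands. A 1-D sketch with a box filter (the paper applies this per BTF image and then compresses the per-texel coefficient vectors with PCA; the filter choice here is illustrative):

```python
def downsample(signal):
    """Halve resolution by averaging adjacent pairs (simple box filter)."""
    return [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]

def upsample(signal, n):
    """Nearest-neighbour expansion back to length n."""
    return [signal[min(i // 2, len(signal) - 1)] for i in range(n)]

def laplacian_pyramid(signal, levels):
    """Decompose a 1-D signal into Laplacian subbands plus a coarse residual."""
    bands = []
    for _ in range(levels):
        coarse = downsample(signal)
        # subband = detail lost when going one level coarser
        bands.append([s - u for s, u in zip(signal, upsample(coarse, len(signal)))])
        signal = coarse
    bands.append(signal)  # coarsest level
    return bands

def reconstruct(bands):
    """Invert the pyramid: upsample the coarse level and add back each subband."""
    signal = bands[-1]
    for band in reversed(bands[:-1]):
        signal = [u + b for u, b in zip(upsample(signal, len(band)), band)]
    return signal
```

For LOD rendering, a distant surface simply stops the reconstruction early and uses only the coarse levels, which is what lets the representation fade from a full BTF toward a single BRDF.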
advances in multimedia | 2004
Su Ian Eugene Lei; Chun Fa Chang
In this paper, we present an empirical approach for rendering realistic watercolor effects in real time. While watercolor is a versatile medium, several characteristics of its effects have been categorized in the past. We describe an approach to recreate these effects using the Kubelka-Munk compositing model and the Sobel filter. Using modern per-pixel shading hardware, we present a method to render these effects at interactive frame rates.
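The Kubelka-Munk compositing mentioned above layers pigments by their reflectance and transmittance rather than simple alpha blending, which is what produces watercolor's characteristic glazing. A minimal sketch of the standard layering equations (per wavelength band; the paper's actual implementation runs in a per-pixel shader):

```python
def km_composite(r1, t1, r2, t2):
    """Kubelka-Munk layering: composite layer 1 (reflectance r1,
    transmittance t1) on top of layer 2 (r2, t2). The denominator
    accounts for light bouncing between the two layers."""
    denom = 1.0 - r1 * r2
    r = r1 + (t1 * t1 * r2) / denom
    t = (t1 * t2) / denom
    return r, t
```

A fully transparent top layer (t1 = 1, r1 = 0) leaves the lower layer unchanged, while an opaque one (t1 = 0) hides it entirely; intermediate values give the translucent build-up of real washes.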
international conference on multimedia and expo | 2006
Yung Feng Chiu; Chun Fa Chang
We present efficient algorithms for real-time rendering of the ocean using the newest features of programmable graphics processing units (GPUs). Our work differs from previous work in three aspects: adaptive GPU-based ocean surface tessellation, sophisticated optical effects for shallow water, and spray dynamics for oscillating waves. Our tessellation scheme not only offers easier level-of-detail (LOD) control but also avoids loading vertex attributes from the CPU to the GPU at each frame. The object-space wave sampling approach allows us to produce sophisticated optical effects for shallow water and to implement a state-preserving particle system for simulating spray motion interactively.
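The LOD control mentioned above boils down to picking a tessellation density per ocean patch from its distance to the camera. A toy sketch of such a policy; the function name, constants, and linear falloff are illustrative assumptions, not the paper's actual scheme:

```python
def tessellation_level(distance, base_level=6, falloff=50.0, min_level=1):
    """Pick a tessellation LOD for an ocean patch from its camera distance:
    drop one subdivision level per `falloff` units of distance
    (illustrative constants; clamped so far patches stay renderable)."""
    return max(min_level, base_level - int(distance // falloff))
```

Keeping this decision on the GPU side is what avoids re-uploading vertex attributes from the CPU every frame.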
virtual reality software and technology | 2004
Wan Chun Ma; Sung Hsiang Chao; Bing-Yu Chen; Chun Fa Chang; Ming Ouhyoung; Tomoyuki Nishita
In this paper, we propose an appearance representation for general complex materials that can be applied in a real-time rendering framework. By combining a single parametric shading function (such as the Phong model) with the proposed spatially varying residual function (SRF), this representation can recover the appearance of complex materials with little loss of visual fidelity. The difference between the real data and the parametric shading is directly fitted by a specific function for easy reconstruction. It is simple, flexible, and easy to implement on programmable graphics hardware. Experiments show that the mean square error (MSE) between the reconstructed appearance and real photographs is less than 5%.
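The decomposition above is: measured appearance = parametric model + residual, so reconstruction just adds the stored residual back onto the cheap analytic term. A minimal sketch under that assumption (the Phong form and sample layout here are illustrative; the paper fits the residual with a specific compact function rather than storing it per sample):

```python
def phong(n_dot_l, n_dot_h, kd, ks, shininess):
    """Simple Phong-style shading term: the single parametric model."""
    return kd * max(n_dot_l, 0.0) + ks * max(n_dot_h, 0.0) ** shininess

def fit_residual(samples, kd, ks, shininess):
    """Residual-function sketch: for each (n_dot_l, n_dot_h, measured)
    sample, store the difference between the measurement and the
    parametric model. Reconstruction is phong(...) + residual, exact
    at the sample points by construction."""
    return [measured - phong(ndl, ndh, kd, ks, shininess)
            for (ndl, ndh, measured) in samples]
```

Because the residual only has to encode what Phong misses, it tends to be smoother and more compressible than the raw measured data, which is the point of the representation.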
Proceedings of the 2010 Workshop on Embedded Systems Education | 2010
Meng Ting Wang; Po Chun Huang; Jenq Kuen Lee; Shang Hong Lai; Roger Jyh-Shing Jang; Chun Fa Chang; Chih Wei Liu; Tei-Wei Kuo; Steve Liao
Technologies for handheld devices with open platforms have made rapid progress recently, which gives rise to the necessity of bringing embedded system education and training material up to date. The Android system plays a leading role among open platforms for embedded systems and has made an impact on the daily use of mobile devices. In this paper, we present our experience of incorporating Android-based lab modules into embedded system courses. Our lab modules include system software labs and embedded application labs. The Android embedded application lab modules cover computer vision, audio signal processing and speech recognition, and 3D graphics; the system software lab modules cover topics including embedded compilers, HW/SW co-design, and power optimization. We also illustrate how these laboratory modules can be integrated into an embedded system curriculum. Feedback from students shows that the laboratory modules are interesting and give them essential training in adopting Android components for embedded software development.