Chao Hung Lin
National Cheng Kung University
Publications
Featured research published by Chao Hung Lin.
IEEE Transactions on Visualization and Computer Graphics | 2009
Min Wen Chao; Chao Hung Lin; Cheng Wei Yu; Tong-Yee Lee
In this paper, we present a very high-capacity and low-distortion 3D steganography scheme. Our steganography approach is based on a novel multi-layered embedding scheme that hides secret messages in the vertices of 3D polygon models. Experimental results show that the cover model distortion is very small as the number of hiding layers ranges from 7 to 13. To the best of our knowledge, this novel approach provides much higher hiding capacity than other state-of-the-art approaches while satisfying the basic low-distortion and security requirements for steganography on 3D models.
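For illustration, the following is a minimal Python sketch of the general idea of hiding bits in vertex coordinates, assuming a simplified single-layer, parity-of-quantization embedding rather than the authors' multi-layered scheme; the function names and the step parameter are hypothetical.

import numpy as np

def embed_bits(vertices, bits, step=1e-4):
    # Hide one bit per vertex by snapping the x-coordinate to a multiple of
    # `step` whose parity encodes the bit (simplified quantization embedding,
    # not the paper's multi-layered scheme).
    stego = vertices.astype(float).copy()
    for i, bit in enumerate(bits):
        q = int(round(stego[i, 0] / step))
        if q % 2 != bit:
            q += 1          # flip parity with a sub-`step` coordinate change
        stego[i, 0] = q * step
    return stego

def extract_bits(stego, n_bits, step=1e-4):
    # Recover bits from the parity of the quantized x-coordinates.
    q = np.rint(stego[:n_bits, 0] / step).astype(int)
    return (q % 2).tolist()

# Usage: verts = np.random.rand(100, 3); msg = [1, 0, 1, 1]
# assert extract_bits(embed_bits(verts, msg), len(msg)) == msg

Because each vertex moves by less than one quantization step, the cover-model distortion stays small; the multi-layered scheme pushes this further by packing several bits per vertex.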
IEEE Transactions on Geoscience and Remote Sensing | 2013
Chao Hung Lin; Po Hung Tsai; Kang Hua Lai; Jyun Yuan Chen
A cloud removal approach based on information cloning is introduced. The approach removes cloud-contaminated portions of a satellite image and then reconstructs the missing information by exploiting the temporal correlation of multitemporal images. The basic idea is to clone information from cloud-free patches to their corresponding cloud-contaminated patches under the assumption that land covers change insignificantly over a short period of time. The patch-based information reconstruction is mathematically formulated as a Poisson equation and solved using a global optimization process. Thus, the proposed approach can potentially yield better results in terms of radiometric accuracy and consistency than related approaches. Experimental analyses on sequences of images acquired by the Landsat-7 Enhanced Thematic Mapper Plus sensor are conducted. The results show that the proposed approach can handle large clouds in a heterogeneous landscape, which is difficult for existing cloud removal approaches. In addition, quantitative and qualitative analyses on simulated data with different cloud contamination conditions are conducted using a quality index and visual inspection, respectively, to evaluate the performance of the proposed approach.
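A rough Python sketch of Poisson-based cloning on a single rectangular patch is given below, assuming `target` and `source` are co-registered single-band images and the patch does not touch the image border; the paper's arbitrary cloud masks and global optimization over all patches are not reproduced.

import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

def poisson_clone(target, source, top, left, h, w):
    # Fill target[top:top+h, left:left+w] by solving Laplacian(f) =
    # Laplacian(source) inside the patch, with the surrounding target
    # pixels as Dirichlet boundary conditions.
    n = h * w
    A = lil_matrix((n, n))
    b = np.zeros(n)
    idx = lambda i, j: i * w + j
    for i in range(h):
        for j in range(w):
            k = idx(i, j)
            A[k, k] = 4.0
            yi, xj = top + i, left + j
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                # guidance field: gradients of the cloud-free source image
                b[k] += float(source[yi, xj]) - float(source[yi + di, xj + dj])
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    A[k, idx(ni, nj)] = -1.0
                else:
                    # boundary condition: keep the surrounding target pixels
                    b[k] += float(target[yi + di, xj + dj])
    f = spsolve(csr_matrix(A), b)
    out = target.astype(float).copy()
    out[top:top + h, left:left + w] = f.reshape(h, w)
    return out

Solving in the gradient domain rather than copying pixel values directly is what keeps the cloned patch radiometrically consistent with its surroundings.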
IEEE Transactions on Multimedia | 2013
Shih-Syun Lin; I-Cheng Yeh; Chao Hung Lin; Tong-Yee Lee
Image retargeting is the process of adapting images to fit displays with various aspect ratios and sizes. Most studies on image retargeting focus on shape preservation, but they do not fully consider the preservation of structure lines, to which the human visual system is sensitive. In this paper, a patch-based retargeting scheme with an extended significance measurement is introduced to preserve the shapes of both visually salient objects and structure lines while minimizing visual distortions. In the proposed scheme, a similarity transformation constraint forces visually salient content to undergo as-rigid-as-possible deformation, while an optimization process smoothly propagates distortions. These processes enable our approach to yield pleasing content-aware warping and retargeting. Experimental results and a user study show that our results are better than those generated by state-of-the-art approaches.
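The following toy Python sketch illustrates only the underlying intuition of significance-driven retargeting: columns with low significance absorb most of the width reduction. It uses column-wise gradient magnitude as a stand-in significance measure and nearest-neighbor column remapping; the paper's patch-based, as-rigid-as-possible warping and its extended significance measurement are not reproduced.

import numpy as np

def retarget_width(gray, new_w):
    # Shrink a grayscale image to new_w columns, squeezing homogeneous
    # (low-significance) regions more than detailed ones.
    h, w = gray.shape
    sig = np.abs(np.gradient(gray.astype(float), axis=1)).mean(axis=0) + 1e-3
    cum = np.cumsum(sig)
    cum = cum / cum[-1] * (new_w - 1)     # map source columns to new positions
    out = np.zeros((h, new_w), dtype=gray.dtype)
    for j in range(new_w):
        src = np.searchsorted(cum, j)     # nearest source column for output j
        out[:, j] = gray[:, min(src, w - 1)]
    return out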
IEEE Transactions on Medical Imaging | 2002
Tong-Yee Lee; Chao Hung Lin
A feature-guided image interpolation scheme is presented. It is an effective, improved shape-based interpolation method for interpolating image slices in medical applications. The proposed method integrates feature line segments to guide the shape-based method toward better shape interpolation. An automatic method for finding these line segments is given. The proposed feature-guided shape-based method can manage translation, rotation, and scaling when the slices have similar shapes, and it can also interpolate intermediate shapes when successive slices do not have similar shapes. The method is experimentally evaluated using artificial and real two-dimensional and three-dimensional data and generated satisfactory interpolated results in these experiments. We demonstrate the practicality, effectiveness, and reproducibility of the proposed method for interpolating medical images.
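For reference, a short Python sketch of the classic shape-based interpolation baseline that the feature guidance improves on: segmented slices are converted to signed distance fields, the fields are blended, and the result is thresholded. The feature line-segment detection and guidance of the proposed method are not shown.

import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    # Positive inside the binary shape, negative outside.
    mask = mask.astype(bool)
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

def interpolate_slices(mask_a, mask_b, t=0.5):
    # Intermediate binary slice at fraction t between two segmented slices.
    d = (1 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return d > 0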
International Conference of the IEEE Engineering in Medicine and Biology Society | 1999
Tong-Yee Lee; Ping-Hsien Lin; Chao Hung Lin; Yung-Nien Sun; Xi-Zhang Lin
We describe a low-cost three-dimensional (3-D) virtual colonoscopy system, a noninvasive technique for examining the entire colon that can assist physicians in detecting polyps inside the colon. Using helical CT data and the proposed techniques, we can three-dimensionally reconstruct and visualize the inner surface of the colon. We generate high-resolution video views of the colon's interior structures as if the viewer's eyes were inside the colon. Physicians can virtually navigate inside the colon in two modes: interactive and automatic navigation. For automatic navigation, the flythrough path is determined a priori using 3-D thinning and two-pass tracking schemes. The whole colon is spatially subdivided into several cells, and only potentially visible cells are taken into account during rendering. To further improve rendering efficiency, potentially visible cells are rendered at different levels of detail. Additionally, a chain of bounding volumes in each cell is used to avoid penetrating the colon surface during navigation. In comparison with previous work, the proposed system can efficiently accomplish the required preprocessing tasks and afford adequate rendering speeds on a low-cost PC system.
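A rough Python sketch of one piece of the pipeline, deriving flythrough-path candidates from a binary colon segmentation by 3-D thinning, is shown below; scikit-image's skeletonize_3d is used here only as a generic thinning stand-in, and the two-pass tracking, cell subdivision, and level-of-detail rendering are not reproduced.

import numpy as np
from skimage.morphology import skeletonize_3d

def centerline_voxels(colon_mask):
    # Thin the segmented colon volume to a one-voxel-wide skeleton and return
    # the skeleton voxel coordinates as candidates for the flythrough path.
    skeleton = skeletonize_3d(colon_mask.astype(np.uint8))
    return np.argwhere(skeleton > 0)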
IEEE Transactions on Visualization and Computer Graphics | 2011
I-Cheng Yeh; Chao Hung Lin; Olga Sorkine; Tong-Yee Lee
We introduce a template fitting method for 3D surface meshes. A given template mesh is deformed to closely approximate the input 3D geometry. The connectivity of the deformed template model is automatically adjusted to facilitate the geometric fitting and to ensure high quality of the mesh elements. The template fitting process utilizes a specially tailored Laplacian processing framework: in the first, coarse fitting stage we approximate the input geometry with a linearized biharmonic surface (a variant of LS-mesh), and then the fine geometric detail is fitted using iterative Laplacian editing with reliable correspondence constraints and a local surface flattening mechanism to avoid foldovers. The latter step is performed in the dual mesh domain, which is shown to encourage near-equilateral mesh elements and significantly reduces the occurrence of triangle foldovers, a well-known problem in mesh fitting. To experimentally evaluate our approach, we compare our method with relevant state-of-the-art techniques and confirm significant improvements in the results. In addition, we demonstrate the usefulness of our approach for consistent surface parameterization (also known as cross-parameterization).
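A minimal Python sketch of the Laplacian least-squares flavor of such fitting is given below: vertex positions are solved so that their uniform Laplacian vanishes while softly matching anchor correspondences. This is only an illustration of the coarse stage's spirit; the biharmonic (LS-mesh) formulation, cotangent weights, dual-domain detail fitting, and foldover handling are omitted.

import numpy as np
from scipy.sparse import lil_matrix, csr_matrix, vstack
from scipy.sparse.linalg import lsqr

def laplacian_fit(n_verts, edges, anchors, anchor_pos, weight=10.0):
    # Solve for positions with zero uniform Laplacian (smooth membrane)
    # subject to soft positional constraints at the anchor vertices.
    L = lil_matrix((n_verts, n_verts))
    deg = np.zeros(n_verts)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
        L[i, j] = L[j, i] = -1.0
    for i in range(n_verts):
        L[i, i] = deg[i]
    C = lil_matrix((len(anchors), n_verts))
    for row, v in enumerate(anchors):
        C[row, v] = weight
    A = vstack([csr_matrix(L), csr_matrix(C)])
    coords = []
    for dim in range(3):                      # solve x, y, z independently
        b = np.concatenate([np.zeros(n_verts), weight * anchor_pos[:, dim]])
        coords.append(lsqr(A, b)[0])
    return np.column_stack(coords)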
IEEE Transactions on Geoscience and Remote Sensing | 2014
Chao Hung Lin; Kang Hua Lai; Zhi Bin Chen; Jyun Yuan Chen
Cloud cover, which is generally present in optical remote sensing images, limits the usage of acquired images and increases the difficulty of data analysis. Thus, information reconstruction of cloud-contaminated images generally plays an important role in image analysis. This paper proposes a novel method to reconstruct cloud-contaminated information in multitemporal remote sensing images. Based on the concept of utilizing the temporal correlation of multitemporal images, we propose a patch-based information reconstruction algorithm that spatiotemporally segments a sequence of images into clusters containing several spatially connected components called patches and then clones information from cloud-free, high-similarity patches to their corresponding cloud-contaminated patches. In addition, a seam that passes through homogeneous regions is used in information reconstruction to reduce radiometric inconsistency, and information cloning is solved using an optimization process with the determined seam. These processes enable the proposed method to reconstruct missing information well. Qualitative analyses of image sequences acquired by a Landsat-7 Enhanced Thematic Mapper Plus (ETM+) sensor and a quantitative analysis of simulated data with various cloud contamination conditions are conducted to evaluate the proposed method. The experimental results demonstrate the superiority of the proposed method over related methods in terms of radiometric accuracy and consistency, particularly for large clouds in a heterogeneous landscape.
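As a small illustration of one step of this idea, the sketch below selects, among cloud-free temporal candidates, the source patch most similar to the target over pixels that are valid in both images; the RMS similarity criterion is an assumption, and the spatiotemporal segmentation, seam determination, and cloning optimization are not shown.

import numpy as np

def pick_source_patch(target_patch, target_cloud_mask, candidates, candidate_masks):
    # Return the index of the candidate patch (from another acquisition date)
    # with the lowest RMS difference to the target over mutually valid pixels.
    best, best_err = None, np.inf
    for k, (cand, cmask) in enumerate(zip(candidates, candidate_masks)):
        valid = (~target_cloud_mask) & (~cmask)
        if not valid.any():
            continue
        diff = target_patch[valid].astype(float) - cand[valid].astype(float)
        err = np.sqrt(np.mean(diff ** 2))
        if err < best_err:
            best, best_err = k, err
    return best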
IEEE Transactions on Circuits and Systems for Video Technology | 2008
Tong-Yee Lee; Chao Hung Lin; Yu-Shuen Wang; Tai Guang Chen
Three-dimensional animating meshes have been widely used in the computer graphics and video game industries. Reducing animating mesh complexity is a common way of overcoming rendering limitations or network bandwidth constraints. We therefore present a compact representation for animating meshes based on novel key-frame extraction and animating mesh simplification approaches. In contrast to general simplification and key-frame extraction approaches, which are driven by geometric metrics, the proposed methods are based on a deformation analysis of the animating mesh to preserve both geometric features and motion characteristics. These two approaches produce a very compact animation representation in the spatial and temporal domains and can therefore benefit many applications, such as progressive animation transmission and animation segmentation and transfer.
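As a hedged illustration of deformation-driven key-frame extraction, the greedy sketch below keeps a frame whenever linear interpolation between its neighboring candidates reconstructs it poorly; the tolerance and the interpolation-error criterion are assumptions, not the authors' exact deformation analysis.

import numpy as np

def extract_keyframes(frames, tol=1e-2):
    # frames: list of (V, 3) vertex arrays of an animating mesh with fixed
    # connectivity. Returns the indices of the selected key-frames.
    keys = [0]
    for f in range(1, len(frames) - 1):
        a, b = keys[-1], f + 1
        t = (f - a) / (b - a)
        interp = (1 - t) * frames[a] + t * frames[b]
        err = np.mean(np.linalg.norm(interp - frames[f], axis=1))
        if err > tol:                      # poorly predicted: keep as key-frame
            keys.append(f)
    keys.append(len(frames) - 1)
    return keys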
IEEE Transactions on Circuits and Systems for Video Technology | 2014
Shih Syun Lin; Chao Hung Lin; Shu Huai Chang; Tong-Yee Lee
This paper addresses content-aware stereoscopic image retargeting. The key to this topic is consistently adapting a stereoscopic image to fit displays with various aspect ratios and sizes while preserving visually salient content. Most methods focus on preserving the disparities and shapes of visually salient objects through nonlinear image warping, in which distortions caused by warping are propagated to homogeneous and low-significance regions. However, disregarding the consistency of object deformation sometimes results in apparent distortions in both the disparities and shapes of objects. An object-coherence warping scheme is proposed to reduce this unwanted distortion. The basic idea is to utilize the information of matched objects rather than that of matched pixels in warping. Such information implies object correspondences in a stereoscopic image pair, which allows the generation of an object significance map and the consistent preservation of objects. This strategy enables our method to consistently preserve both the disparities and shapes of visually salient objects, leading to good content-aware retargeting. In the experiments, qualitative and quantitative analyses of various stereoscopic images show that our results are better than those generated by related methods in terms of consistency of object preservation.
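A small sketch of building such an object-level significance map is given below: a per-pixel saliency estimate is averaged over each matched object's region so that all pixels of an object share one significance value. The saliency source and the warping solver itself are not shown, and the label map is assumed to come from the object matching step.

import numpy as np

def object_significance(saliency, labels):
    # saliency: float map in [0, 1]; labels: integer object-label map with
    # 0 as background. Returns a per-pixel, object-coherent significance map.
    sig = np.zeros_like(saliency, dtype=float)
    for obj in np.unique(labels):
        if obj == 0:
            continue
        region = labels == obj
        sig[region] = saliency[region].mean()
    return sig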
IEEE Transactions on Visualization and Computer Graphics | 2015
Ming-Te Chi; Shih Syun Lin; Shiang Yi Chen; Chao Hung Lin; Tong-Yee Lee
A word cloud is a visual representation of a collection of text documents that uses various font sizes, colors, and spaces to arrange and depict significant words. The majority of previous studies on time-varying word clouds focus on layout optimization and temporal trend visualization. However, they do not fully consider the spatial shapes and temporal motions of word clouds, which are important factors for attracting people's attention and important cues for the human visual system when capturing information from time-varying text data. This paper presents a novel method that uses rigid body dynamics to arrange multi-temporal word-tags in a specific shape sequence under various constraints. Each word-tag is regarded as a rigid body in dynamics. With the aid of geometric, aesthetic, and temporal coherence constraints, the proposed method can generate a temporally morphable word cloud that not only arranges word-tags in their corresponding shapes but also smoothly transforms the shapes of word clouds over time, thus yielding a pleasing time-varying visualization. Using the proposed frame-by-frame and morphable word clouds, people can observe the overall story of time-varying text data from the shape transitions, as well as the details from the word clouds in individual frames. Experimental results on various data demonstrate the feasibility and flexibility of the proposed method in morphable word cloud generation. In addition, an application that uses the proposed word clouds in a simulated exhibition demonstrates the usefulness of the proposed method.
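The toy sketch below shows only the shape-constrained placement aspect: word boxes sized by weight are greedily dropped at random positions inside a binary shape mask, rejecting overlaps. The rigid-body dynamics, aesthetic constraints, and temporal-coherence constraints of the paper are not reproduced, and the box-sizing heuristics are arbitrary.

import numpy as np

def place_tags(shape_mask, words, weights, tries=2000, seed=0):
    # Returns {word: (row, col, height, width)} placements inside shape_mask.
    # Assumes the mask is much larger than the largest word box.
    rng = np.random.default_rng(seed)
    H, W = shape_mask.shape
    occupied = np.zeros((H, W), dtype=bool)
    placements = {}
    for i in np.argsort(weights)[::-1]:        # place big words first
        h = max(4, int(2 * weights[i]))        # crude box size from weight
        w = max(8, int(6 * weights[i]))
        for _ in range(tries):
            r = int(rng.integers(0, H - h))
            c = int(rng.integers(0, W - w))
            cell = np.s_[r:r + h, c:c + w]
            if shape_mask[cell].all() and not occupied[cell].any():
                occupied[cell] = True
                placements[words[i]] = (r, c, h, w)
                break
    return placements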