Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Ming-Te Chi is active.

Publication


Featured research published by Ming-Te Chi.


IEEE Transactions on Visualization and Computer Graphics | 2011

Focus+Context Metro Maps

Yu-Shuen Wang; Ming-Te Chi

We introduce a focus+context method to visualize a complicated metro map of a modern city on a small display area. Our work is motivated by the popularity of mobile devices. The best route to the destination, which can be obtained from the arrival times of trains, is highlighted. The stations on the route enjoy larger spaces, whereas the other stations are rendered smaller and closer together to fit the whole map onto a screen. To simplify navigation and route planning for visitors, we formulate various map characteristics, such as octilinear transportation lines and regular station distances, into energy terms. We then solve for the optimal layout in a least-squares sense. In addition, we label the names of the stations on a passenger's route according to human preferences, occlusions, and the consistency of label positions using the graph cuts method. Our system achieves real-time performance and can report instant information because of the carefully designed energy terms. We apply our method to lay out a number of metro maps and show the results and timing statistics to demonstrate the feasibility of our technique.
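The core idea the abstract describes, encoding layout preferences such as octilinear edge directions and uniform station spacing as quadratic energy terms and minimizing them, can be sketched minimally. This is not the paper's solver; the toy network, the target displacements, and the plain gradient-descent minimizer are all illustrative assumptions:

```python
import math

# Toy metro line of 4 stations. Each edge carries a made-up target
# displacement: an octilinear direction (multiple of 45 degrees)
# scaled to unit length, standing in for the paper's energy terms.
edges = [
    (0, 1, (1.0, 0.0)),                                      # east
    (1, 2, (math.cos(math.pi / 4), math.sin(math.pi / 4))),  # north-east
    (2, 3, (0.0, 1.0)),                                      # north
]
n = 4
pos = [[0.0, 0.0] for _ in range(n)]

# Energy E = sum over edges of |p_j - p_i - t_ij|^2, minimized here by
# gradient descent; station 0 is pinned to remove translational freedom.
for _ in range(500):
    grad = [[0.0, 0.0] for _ in range(n)]
    for i, j, (tx, ty) in edges:
        rx = pos[j][0] - pos[i][0] - tx
        ry = pos[j][1] - pos[i][1] - ty
        grad[j][0] += 2 * rx; grad[j][1] += 2 * ry
        grad[i][0] -= 2 * rx; grad[i][1] -= 2 * ry
    grad[0] = [0.0, 0.0]          # pin the first station at the origin
    for k in range(n):
        pos[k][0] -= 0.1 * grad[k][0]
        pos[k][1] -= 0.1 * grad[k][1]
```

On a tree-shaped toy network the energy reaches zero; the paper instead solves the full least-squares system, with additional terms for labeling and occlusion.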


international conference on computer graphics and interactive techniques | 2008

Self-animating images: illusory motion using repeated asymmetric patterns

Ming-Te Chi; Tong-Yee Lee; Yingge Qu; Tien-Tsin Wong

Illusory motion in a still image is a fascinating research topic in the study of human motion perception. Physiologists and psychologists have attempted to understand this phenomenon by constructing simple, colored repeated asymmetric patterns (RAPs) and have found several useful rules for enhancing the strength of illusory motion. Based on these findings, we propose a computational method to generate self-animating images. First, we present an optimized RAP placement on streamlines to generate illusory motion for a given static vector field. Next, a general coloring scheme for RAPs is proposed to render the streamlines. Furthermore, to enhance the strength of the illusion and respect the shape of the region, a smooth vector field with opposite directional flow is automatically generated from an input image. Examples generated by our method are shown as evidence of the illusory effect and of the potential applications for entertainment and design purposes.
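The "RAP placement on streamlines" step above can be sketched at its simplest: trace a streamline through a vector field and lay a repeating four-phase pattern along it. The rotational field, the forward-Euler tracer, and the one-sample-per-phase spacing below are all simplifying assumptions, not the paper's optimized placement:

```python
import math

def field(x, y):
    """A made-up rotational vector field (not from the paper)."""
    return -y, x

# Trace one streamline with normalized forward-Euler steps.
x, y, h = 1.0, 0.0, 0.05
points = [(x, y)]
for _ in range(200):
    vx, vy = field(x, y)
    norm = math.hypot(vx, vy) or 1.0
    x += h * vx / norm
    y += h * vy / norm
    points.append((x, y))

# Repeat the four-phase asymmetric pattern along the streamline, using
# the classic luminance ordering black -> blue -> white -> yellow.
RAP = ["black", "blue", "white", "yellow"]
phase = [RAP[i % 4] for i in range(len(points))]
```

The paper additionally optimizes streamline seeding and pattern spacing; this sketch only shows the direction of travel the coloring cycle implies.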


IEEE Transactions on Visualization and Computer Graphics | 2006

Stylized and abstract painterly rendering system using a multiscale segmented sphere hierarchy

Ming-Te Chi; Tong-Yee Lee

This paper presents a novel system framework for interactive, three-dimensional, stylized, abstract painterly rendering. In this framework, the input models are first represented using 3D point sets and then this point-based representation is used to build a multiresolution bounding sphere hierarchy. From the leaf to root nodes, spheres of various sizes are rendered into multiple-size strokes on the canvas. The proposed sphere hierarchy is developed using multiscale region segmentation. This segmentation task assembles spheres with similar attribute regularities into a meaningful region hierarchy. These attributes include colors, positions, and curvatures. This hierarchy is very useful in the following respects: 1) it ensures the screen-space stroke density, 2) controls different input model abstractions, 3) maintains region structures such as the edges/boundaries at different scales, and 4) renders models interactively. By choosing suitable abstractions, brush stroke, and lighting parameters, we can interactively generate various painterly styles. We also propose a novel scheme that reduces the popping effect in animation sequences. Many different stylized images can be generated using the proposed framework.
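The multiresolution bounding-sphere hierarchy at the heart of the abstract can be illustrated with a minimal sketch. The paper's segmentation groups points by color, position, and curvature; as a simplifying assumption, the toy version below clusters by position only, splitting along the longest axis, and uses a non-minimal centroid-based bounding sphere:

```python
import math

def bounding_sphere(pts):
    # Simple (non-minimal) bounding sphere: centroid + max distance.
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    cz = sum(p[2] for p in pts) / len(pts)
    r = max(math.dist((cx, cy, cz), p) for p in pts)
    return (cx, cy, cz), r

def build(pts, leaf_size=2):
    """Recursively split the point set along its longest axis,
    attaching a bounding sphere to every node of the hierarchy."""
    center, r = bounding_sphere(pts)
    if len(pts) <= leaf_size:
        return {"sphere": (center, r), "children": []}
    spans = [max(p[a] for p in pts) - min(p[a] for p in pts) for a in range(3)]
    axis = spans.index(max(spans))
    pts = sorted(pts, key=lambda p: p[axis])
    mid = len(pts) // 2
    return {"sphere": (center, r),
            "children": [build(pts[:mid], leaf_size),
                         build(pts[mid:], leaf_size)]}
```

Rendering then walks this tree from root to leaves, drawing one stroke per sphere, with the traversal depth controlling the level of abstraction.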


IEEE Transactions on Visualization and Computer Graphics | 2008

Stylized Rendering Using Samples of a Painted Image

Chung-Ren Yan; Ming-Te Chi; Tong-Yee Lee; Wen-Chieh Lin

We introduce a novel technique to generate painterly art maps (PAMs) for 3D nonphotorealistic rendering. Our technique can automatically transfer brushstroke textures and color changes to 3D models from samples of a painted image. Therefore, the generation of stylized images or animation in the style of a given artwork can be achieved. This new approach works particularly well for a rich variety of brushstrokes ranging from simple 1D and 2D line-art strokes to very complicated ones with significant variations in stroke characteristics. During the rendering or animation process, the coherence of brushstroke textures and color changes over 3D surfaces can be well maintained. With PAM, we can also easily generate the illusion of flow animation over a 3D surface to convey the shape of a model.


IEEE Transactions on Visualization and Computer Graphics | 2015

Morphable Word Clouds for Time-Varying Text Data Visualization

Ming-Te Chi; Shih Syun Lin; Shiang Yi Chen; Chao Hung Lin; Tong-Yee Lee

A word cloud is a visual representation of a collection of text documents that uses various font sizes, colors, and spaces to arrange and depict significant words. The majority of previous studies on time-varying word clouds focus on layout optimization and temporal trend visualization. However, they do not fully consider the spatial shapes and temporal motions of word clouds, which are important factors for attracting people's attention and are also important cues that help human visual systems capture information from time-varying text data. This paper presents a novel method that uses rigid body dynamics to arrange multi-temporal word tags in a specific shape sequence under various constraints. Each word tag is regarded as a rigid body in the dynamics. With the aid of geometric, aesthetic, and temporal coherence constraints, the proposed method can generate a temporally morphable word cloud that not only arranges word tags in their corresponding shapes but also smoothly transforms the shapes of the word clouds over time, thus yielding a pleasing time-varying visualization. Using the proposed frame-by-frame and morphable word clouds, people can observe the overall story of time-varying text data from the shape transitions, and they can also observe the details from the word clouds in individual frames. Experimental results on various data sets demonstrate the feasibility and flexibility of the proposed method in morphable word cloud generation. In addition, an application that uses the proposed word clouds in a simulated exhibition demonstrates the usefulness of the proposed method.


IEEE Transactions on Visualization and Computer Graphics | 2012

Region-Based Line Field Design Using Harmonic Functions

Chih Yuan Yao; Ming-Te Chi; Tong-Yee Lee; Tao Ju

Field design has wide applications in graphics and visualization. One of the main challenges in field design has been how to provide users with both intuitive control over the directions in the field on one hand and robust management of its topology on the other hand. In this paper, we present a design paradigm for line fields that addresses this challenge. Rather than asking users to input all singularities as in most methods that offer topology control, we let the user provide a partitioning of the domain and specify simple flow patterns within the partitions. Represented by a selected set of harmonic functions, the elementary fields within the partitions are then combined to form continuous fields with rich appearances and well-determined topology. Our method allows a user to conveniently design the flow patterns while having precise and robust control over the topological structure. Based on the method, we developed an interactive tool for designing line fields from images, and demonstrated the utility of the fields in image stylization.
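The building block of the method above, a harmonic function inside a partition, can be illustrated on a grid. This is not the paper's representation (which combines a selected set of analytic harmonic functions); as a simplifying assumption, the sketch solves Laplace's equation numerically with invented boundary values and reads the field direction off the gradient:

```python
# A harmonic function on a 16x16 grid, computed by Jacobi iteration of
# Laplace's equation with fixed (made-up) boundary values: 0 on the
# left edge, 1 on the right edge.
N = 16
u = [[0.0] * N for _ in range(N)]
for i in range(N):
    u[i][0] = 0.0
    u[i][N - 1] = 1.0

for _ in range(500):
    nxt = [row[:] for row in u]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            nxt[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                + u[i][j - 1] + u[i][j + 1])
    u = nxt

def direction(i, j):
    """Line-field direction at an interior cell: the gradient of u."""
    gx = 0.5 * (u[i][j + 1] - u[i][j - 1])
    gy = 0.5 * (u[i + 1][j] - u[i - 1][j])
    return gx, gy
```

Because harmonic functions have no interior extrema, the resulting field inside each partition is smooth and free of unintended singularities, which is the topological guarantee the paper exploits.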


Computing in Science and Engineering | 2007

Stylized Rendering for Anatomic Visualization

Tong-Yee Lee; Chung Ren Yan; Ming-Te Chi

A nonphotorealistic rendering (NPR) technique can help medical practitioners visualize anatomic models and illustrate them with different stylizations. This technique provides potentially useful rendering alternatives to conventional volume or surface rendering in medical visualization. Improved performance allows interactive visualization of anatomic models, and additional stroke texture synthesis enriches medical object illustrations.


The Visual Computer | 2014

Optical illusion shape texturing using repeated asymmetric patterns

Ming-Te Chi; Chih Yuan Yao; Eugene Zhang; Tong-Yee Lee

Illusory motions refer to the phenomena in which static images composed of certain colors and patterns lead to the illusion of motion. This paper presents an approach for generating illusory motion on 3D surfaces that can be used for shape illustration as well as artistic visualization of line fields on surfaces. Our method extends previous work on generating illusory motion in the plane, which we adapt to 3D surfaces. In addition, we propose a novel volume texture of repeated asymmetric patterns (RAPs) to visualize bidirectional flows, thus enabling the visualization of line fields both in the plane and on surfaces. We demonstrate the effectiveness of our method with applications in shape illustration as well as line field visualization on surfaces. Arranging the distribution of RAPs is a difficult task in the design of optical illusion art, so we provide a semi-automatic algorithm that helps users design the flow direction. Finally, the technique applies to the design of street art, where users can easily set the perspective effect and flow motion for an illustration.


The Visual Computer | 2016

Image stylization using anisotropic reaction diffusion

Ming-Te Chi; Wei-Ching Liu; Shu-Hsuan Hsu

Image stylization refers to the process of converting input images into a specific representation that enhances image content using designed patterns. The critical steps in successful image stylization are the design and arrangement of patterns. However, only skilled artists master such tasks, because they are challenging for most users. In this paper, a novel image stylization system based on anisotropic reaction diffusion is proposed to facilitate pattern generation and stylized image design. The system begins with self-organized patterns generated by reaction diffusion. To extend the styles of reaction diffusion, the proposed method uses a set of anisotropic diffusion modifications to deform shapes and introduces a flow field to guide pattern arrangement. A pattern picker is proposed to facilitate pattern selection among these modifications. In the post-processing step, a new thresholding and color mapping method is introduced to refine the sizes, densities, and colors of the patterns. Experimental results and a user study show that several image stylizations, including paper-cut, stylized halftone, and motion illusion, can be generated with our method, demonstrating the feasibility and flexibility of the proposed system.
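The self-organized patterns the system starts from can be sketched with a standard reaction-diffusion model. As an assumption, the sketch uses the Gray-Scott model with common illustrative parameters on a small grid; the paper's anisotropic extension would replace the isotropic Laplacian below with an operator steered by a flow field:

```python
# Gray-Scott reaction diffusion on a small periodic grid.
N = 24
F, K, DU, DV, DT = 0.037, 0.06, 0.16, 0.08, 1.0
u = [[1.0] * N for _ in range(N)]
v = [[0.0] * N for _ in range(N)]
for i in range(10, 14):         # seed a small square of the activator
    for j in range(10, 14):
        u[i][j], v[i][j] = 0.5, 0.25

def lap(g, i, j):
    """5-point Laplacian with wrap-around boundaries."""
    return (g[(i - 1) % N][j] + g[(i + 1) % N][j]
            + g[i][(j - 1) % N] + g[i][(j + 1) % N] - 4.0 * g[i][j])

for _ in range(200):
    nu = [row[:] for row in u]
    nv = [row[:] for row in v]
    for i in range(N):
        for j in range(N):
            uvv = u[i][j] * v[i][j] ** 2
            nu[i][j] += DT * (DU * lap(u, i, j) - uvv + F * (1.0 - u[i][j]))
            nv[i][j] += DT * (DV * lap(v, i, j) + uvv - (F + K) * v[i][j])
    u, v = nu, nv
```

Thresholding the concentration field u then yields the binary spot and stripe patterns that the system refines into paper-cut and halftone styles.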


international conference on computer graphics and interactive techniques | 2015

Intuitive 3D cubic style modeling system

Chen-Chi Hu; Tze-Hsiang Wei; Yu-Sheng Chen; Yi-Chieh Wu; Ming-Te Chi

Modeling is a key application in 3D fabrication. Although numerous powerful 3D-modeling software packages exist, few people can freely build their desired models because of insufficient background knowledge of geometry and the difficulty of manipulating a complex modeling interface; the learning curve is steep for most people. For this study, we chose the cubic model, a model assembled from small cubes, to reduce the learning curve of modeling. We propose an intuitive modeling system designed for elementary school students. Users sketch a rough 2D contour, and the system then enables them to generate the thickness and shape of a 3D cubic model.
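The sketch-to-cubes step can be illustrated in its simplest form: rasterize the drawn contour to a binary 2D mask and extrude it to a chosen thickness, yielding the unit-cube positions of a cubic model. The mask and thickness below are made-up inputs, and the paper's system shapes the thickness rather than using a constant:

```python
# A hand-drawn contour rasterized to a binary mask ('#' = filled).
mask = [
    "..##..",
    ".####.",
    "######",
    ".####.",
]
thickness = 3

# Extrude every filled cell to the given thickness: one unit cube
# per (x, y, z) position.
cubes = [(x, y, z)
         for y, row in enumerate(mask)
         for x, cell in enumerate(row) if cell == "#"
         for z in range(thickness)]
```

Each resulting triple can be emitted directly as a voxel for preview or fabrication.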

Collaboration


Dive into Ming-Te Chi's collaborations.

Top Co-Authors

Tong-Yee Lee, National Cheng Kung University
Chih-Yuan Yao, National Taiwan University of Science and Technology
Eugene Zhang, Oregon State University
Chen-Chi Hu, National Chengchi University
Hung-Kuo Chu, National Tsing Hua University
Shu-Hsuan Hsu, National Chengchi University
Yi-Chieh Wu, National Chengchi University
Yu-Shuen Wang, National Chiao Tung University
Chao Hung Lin, National Cheng Kung University