Publication


Featured research published by Ming-Yuen Chan.


IEEE Transactions on Visualization and Computer Graphics | 2009

Perception-Based Transparency Optimization for Direct Volume Rendering

Ming-Yuen Chan; Yingcai Wu; Wai-Ho Mak; Wei Chen; Huamin Qu

The semi-transparent nature of direct volume rendered images is useful for depicting layered structures in a volume. However, obtaining a semi-transparent result with the layers clearly revealed is difficult and may involve tedious adjustment of opacity and other rendering parameters. Furthermore, the visual quality of the layers also depends on various perceptual factors. In this paper, we propose an auto-correction method for enhancing the perceived quality of the semi-transparent layers in direct volume rendered images. We introduce a suite of new measures based on psychological principles to evaluate the perceptual quality of transparent structures in the rendered images. By optimizing rendering parameters within an adaptive and intuitive user interaction process, the quality of the images is enhanced such that specific user requirements can be met. Experimental results on various datasets demonstrate the effectiveness and robustness of our method.
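
The idea of steering rendering parameters toward a perceptual optimum can be illustrated with a deliberately simplified sketch. The measure below is a toy stand-in, not the paper's psychophysically grounded measures: the opacity of a front layer is chosen so that both the front and back layers keep a perceivable contribution under front-to-back compositing.

```python
def layer_visibility(front_opacity, front_value=0.4, back_value=0.8):
    """Toy perceptual measure: the compositing contribution of each of two
    layers; the score is high only when both layers stay perceivable."""
    front = front_opacity * front_value
    back = (1.0 - front_opacity) * back_value
    return min(front, back)

def optimize_opacity(candidates):
    """Auto-correction in miniature: pick the candidate opacity that
    maximizes the toy measure instead of hand-tuning it."""
    return max(candidates, key=layer_visibility)

# Sweep opacities from 0.05 to 0.95 in steps of 0.05.
best = optimize_opacity([i / 20 for i in range(1, 20)])
```

An opacity near 0.65 balances the two layers here; the paper optimizes many more parameters against richer measures, but the evaluate-then-adjust loop is the same shape.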


IEEE Transactions on Visualization and Computer Graphics | 2009

Focus+Context Route Zooming and Information Overlay in 3D Urban Environments

Huamin Qu; Haomian Wang; Weiwei Cui; Yingcai Wu; Ming-Yuen Chan

In this paper we present a novel focus+context zooming technique, which allows users to zoom into a route and its associated landmarks in a 3D urban environment from a 45-degree bird's-eye view. Through the creative utilization of the empty space in an urban environment, our technique can informatively reveal the focus region and minimize distortions to the context buildings. We first create more empty space in the 2D map by broadening the road with an adapted seam carving algorithm. A grid-based zooming technique is then used to enlarge the landmarks to reclaim the created empty space and thus reduce distortions to the other parts. Finally, an occlusion-free route visualization scheme adaptively scales the buildings occluding the route to make the route always visible to users. Our method can be conveniently integrated into Google Earth and Virtual Earth to provide seamless route zooming and help users better explore a city and plan their tours. It can also be used in other applications such as information overlay in a virtual city.
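
The road-broadening step builds on seam carving. As a rough illustration (the classic dynamic program, not the paper's adapted variant), the sketch below finds the minimum-energy vertical seam of a small energy grid, i.e. the path along which pixels could be duplicated to create empty space with minimal distortion.

```python
def min_vertical_seam(energy):
    """Classic seam carving: return, per row, the column index of the
    vertical seam with minimum cumulative energy."""
    rows, cols = len(energy), len(energy[0])
    cost = [row[:] for row in energy]
    # Forward pass: each cell accumulates the cheapest reachable parent.
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(0, c - 1), min(cols - 1, c + 1)
            cost[r][c] += min(cost[r - 1][lo:hi + 1])
    # Backtrack from the cheapest bottom cell.
    seam = [min(range(cols), key=lambda c: cost[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(0, c - 1), min(cols - 1, c + 1)
        seam.append(min(range(lo, hi + 1), key=lambda c2: cost[r][c2]))
    return seam[::-1]

# The seam follows the low-energy diagonal of this toy grid.
seam = min_vertical_seam([[1, 5, 5], [5, 1, 5], [5, 5, 1]])
```

For road broadening the energy would be low along the route corridor, so duplicated seams widen the road rather than distort buildings.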


IEEE Transactions on Visualization and Computer Graphics | 2008

Relation-Aware Volume Exploration Pipeline

Ming-Yuen Chan; Huamin Qu; Ka-Kei Chung; Wai-Ho Mak; Yingcai Wu

Volume exploration is an important issue in scientific visualization. Research on volume exploration has been focused on revealing hidden structures in volumetric data. While the information of individual structures or features is useful in practice, spatial relations between structures are also important in many applications and can provide further insights into the data. In this paper, we systematically study the extraction, representation, exploration, and visualization of spatial relations in volumetric data and propose a novel relation-aware visualization pipeline for volume exploration. In our pipeline, various relations in the volume are first defined and measured using region connection calculus (RCC) and then represented using a graph interface called relation graph. With RCC and the relation graph, relation query and interactive exploration can be conducted in a comprehensive and intuitive way. The visualization process is further assisted with relation-revealing viewpoint selection and color and opacity enhancement. We also introduce a quality assessment scheme which evaluates the perception of spatial relations in the rendered images. Experiments on various datasets demonstrate the practical use of our system in exploratory visualization.
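
Region connection calculus is only named in the abstract; as a hedged sketch (a coarse subset of the RCC relations over voxel coordinate sets, not the paper's formulation), relations between two segmented structures can be classified from set intersection and containment:

```python
def rcc_relation(a, b):
    """Classify the spatial relation between two structures given as sets
    of voxel coordinates (a coarse subset of the RCC relations)."""
    if not (a & b):
        return "disconnected"
    if a == b:
        return "equal"
    if a < b:          # proper subset: a lies inside b
        return "inside"
    if b < a:
        return "contains"
    return "overlapping"

vessel = {(0, 0, 0), (0, 0, 1)}
organ = {(0, 0, 0), (0, 0, 1), (0, 0, 2)}
```

Edges of a relation graph could then be labeled with these relation strings, one edge per structure pair.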


IEEE Pacific Visualization Symposium | 2010

Quantitative effectiveness measures for direct volume rendered images

Yingcai Wu; Huamin Qu; Ka-Kei Chung; Ming-Yuen Chan; Hong Zhou

With the rapid development in graphics hardware and volume rendering techniques, many volumetric datasets can now be rendered in real time on a standard PC equipped with a commodity graphics board. However, the effectiveness of the results, especially direct volume rendered images, is difficult to validate and users may not be aware of ambiguous or even misleading information in the results. This limits the applications of volume visualization. In this paper, we introduce four quantitative effectiveness measures: distinguishability, contour clarity, edge consistency, and depth coherence measures, which target different effectiveness issues for direct volume rendered images. Based on the measures, we develop a visualization system with automatic effectiveness assessment, providing users with instant feedback on the effectiveness of the results. The case study and user evaluation have demonstrated the high potential of our system.
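
To illustrate what such a quantitative measure can look like (a toy stand-in using RMS contrast, not one of the paper's four measures), a score of zero flags a rendering whose grey levels show no distinguishable variation at all:

```python
def rms_contrast(pixels):
    """Toy effectiveness score: root-mean-square contrast of grey levels;
    zero means the rendering shows no distinguishable variation."""
    mean = sum(pixels) / len(pixels)
    return (sum((p - mean) ** 2 for p in pixels) / len(pixels)) ** 0.5

flat_render = [0.5] * 8            # washed-out result, nothing to see
contrasty_render = [0.1, 0.9] * 4  # strong alternating structure
```

A system giving instant feedback would evaluate scores like this after every parameter change and warn when they drop.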


Eurographics | 2007

Quality enhancement of direct volume rendered images

Ming-Yuen Chan; Yingcai Wu; Huamin Qu

In this paper, we propose a new method for enhancing the quality of direct volume rendered images. Unlike the typical image enhancement techniques which perform transformations in the image domain, we take the volume data into account and enhance the presentation of the volume in the rendered image by adjusting the rendering parameters. Our objective is not only to deliver a pleasing image with better color contrast or enhanced features, but also to generate a faithful image that presents the information contained in the volume. An image quality measurement is proposed to quantitatively evaluate image quality based on the information obtained from the image as well as the volumetric data. The parameter adjustment process is driven by the evaluation result using a genetic algorithm. More informative and comprehensible results are therefore delivered, compared with the typical image-based approaches.
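
The evaluation-driven adjustment loop can be sketched generically: below is a minimal genetic algorithm over a single rendering parameter with a made-up quality function (the paper's measurement combines image and volume information and is far richer).

```python
import random

def quality(opacity):
    """Stand-in for the image quality measurement: peaks at 0.6."""
    return -(opacity - 0.6) ** 2

def evolve(generations=60, pop_size=20, seed=0):
    """Minimal genetic algorithm: keep the fittest half of the population,
    refill it with Gaussian-mutated children, repeat."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=quality, reverse=True)
        parents = pop[: pop_size // 2]
        children = [min(1.0, max(0.0, p + rng.gauss(0, 0.05)))
                    for p in parents]
        pop = parents + children
    return max(pop, key=quality)

best = evolve()
```

Because the fittest individuals always survive, the best quality score never decreases between generations, which matches the "driven by the evaluation result" framing.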


International Symposium on Visual Computing | 2006

Viewpoint selection for angiographic volume

Ming-Yuen Chan; Huamin Qu; Yingcai Wu; Hong Zhou

In this paper, we present a novel viewpoint selection framework for angiographic volume data. We propose several view descriptors based on typical concerns of clinicians for the view evaluation. Compared with conventional approaches, our method can deliver a more representative global optimal view by sampling at a much higher rate in the view space. Instead of performing analysis on sample views individually, we construct a solution space to estimate the quality of the views. Descriptor values are propagated to the solution space where an efficient searching process can be performed. The best viewpoint can be found by analyzing the accumulated descriptor values in the solution space based on different visualization goals.
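
Dense sampling of the view space can be sketched as follows, using a generic Fibonacci-sphere sampler and a made-up alignment descriptor; the paper's clinically motivated descriptors and solution-space propagation are more involved.

```python
import math

def sphere_viewpoints(n):
    """Sample n candidate camera positions roughly uniformly on the unit
    sphere with the Fibonacci spiral."""
    golden = math.pi * (3.0 - math.sqrt(5.0))
    points = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n
        r = math.sqrt(1.0 - z * z)
        points.append((r * math.cos(golden * i), r * math.sin(golden * i), z))
    return points

def alignment_score(view, axis=(0.0, 0.0, 1.0)):
    """Made-up view descriptor: dot product of the view direction with a
    feature axis (e.g. a vessel direction), rewarding aligned views."""
    return sum(v * a for v, a in zip(view, axis))

best_view = max(sphere_viewpoints(200), key=alignment_score)
```

With 200 samples the best view is already close to the feature axis; a denser sampling, as advocated in the abstract, tightens the result further.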


Computer Graphics International | 2006

MIP-Guided vascular image visualization with multi-dimensional transfer function

Ming-Yuen Chan; Yingcai Wu; Huamin Qu; Albert Chi Shing Chung; Wilbur C.K. Wong

Direct volume rendering (DVR) is an effective way to visualize 3D vascular images for diagnosis of different vascular pathologies and planning of surgical treatments. Angiograms are typically noisy, fuzzy, and contain thin vessel structures. Therefore, some kinds of enhancements are usually needed before direct volume rendering can start. However, without visualizing the 3D structures in angiograms, users may find it difficult to select appropriate parameters and assess the effectiveness of the enhancement results. In addition, traditional enhancement techniques cannot easily separate the vessel voxels from other contextual structures with the same or very similar intensity. In this paper, we propose a framework to integrate enhancement and direct volume rendering into one visualization pipeline using a multi-dimensional transfer function tailored for visualizing the curvilinear and line structures in angiograms. Furthermore, we present a feature-preserving interpolation method to render very thin vessels which are usually missed by traditional approaches. To ease the difficulty of vessel selection, a MIP-guided method is suggested to assist the process.
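
The role of a second transfer-function dimension can be shown schematically. In this hypothetical lookup (thresholds and the 0.9 opacity are made up; a precomputed vesselness response, e.g. from a line filter, is assumed), context tissue with vessel-like intensity but low vesselness is suppressed:

```python
def vessel_opacity(intensity, vesselness, i_range=(0.3, 0.7), v_min=0.5):
    """Hypothetical 2D transfer-function lookup: assign high opacity only
    when both the voxel intensity and a vesselness response fall in the
    vessel range, separating vessels from equal-intensity context."""
    in_intensity = i_range[0] <= intensity <= i_range[1]
    return 0.9 if in_intensity and vesselness >= v_min else 0.0
```

A 1D transfer function over intensity alone could not make the second distinction below.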


Eurographics | 2007

Palette-style volume visualization

Yingcai Wu; Anbang Xu; Ming-Yuen Chan; Huamin Qu; Ping Guo

In this paper we propose a palette-style volume visualization interface which aims at providing users with an intuitive volume exploration tool. Our system is inspired by the widely used wheel-style color palette. The system initially creates a set of direct volume rendered images (DVRIs) manually or automatically and arranges them over a circle in 2D image space. Based on this initial set of DVRIs, called primary DVRIs because they imitate the primary colors in a color wheel, users can create more DVRIs on the wheel using Photoshop-style image editing operations such as fusing. With our system, non-expert users can easily navigate and explore volumetric data. In addition, users always know where they have been, where they are, and where they could go in a visualization process, and hence redundant exploration can be avoided.


Pacific-Rim Symposium on Image and Video Technology | 2006

Focus + context visualization with animation

Yingcai Wu; Huamin Qu; Hong Zhou; Ming-Yuen Chan

In this paper, we present some novel animation techniques to help users understand complex structures contained in volumetric data from the medical imaging and scientific simulation fields. Because of the occlusion of 3D objects, these complex structures and their 3D relationships usually cannot be revealed in a single image. With our animation techniques, the focus regions, the context regions, and their relationships can be better visualized at the different stages of an animation. We propose an image-centric method which employs layered-depth images (LDIs) to get correct depth cues, and a data-centric method which exploits a novel transfer function fusing technique to guarantee the smooth transition between frames. The experimental results on real volume data demonstrate the advantages of our techniques over traditional image blending and transfer function interpolation methods.
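
For context, the traditional transfer-function interpolation that the fusing technique is compared against is simply a per-sample linear blend between the two endpoint functions (a sketch with made-up sampled opacity curves, not the paper's method):

```python
def lerp_transfer_function(tf_a, tf_b, t):
    """Baseline blend of two sampled opacity transfer functions to produce
    an intermediate animation frame; t=0 gives tf_a, t=1 gives tf_b."""
    return [(1 - t) * a + t * b for a, b in zip(tf_a, tf_b)]

focus_tf = [0.0, 0.2, 0.9, 0.9]    # emphasizes the focus structure
context_tf = [0.4, 0.4, 0.1, 0.0]  # emphasizes the surrounding context
mid_frame_tf = lerp_transfer_function(focus_tf, context_tf, 0.5)
```

The abstract's point is that such naive interpolation can produce poor intermediate frames, which motivates fusing transfer functions instead.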


International Symposium on Visual Computing | 2008

An Efficient Quality-Based Camera Path Planning Method for Volume Exploration

Ming-Yuen Chan; Wai-Ho Mak; Huamin Qu

Volume visualization is an effective means for exploring information in volumetric data. To improve the visualization process, we propose a novel method for systematic revelation and illustration of the volume by presenting the data in a visual trail. Based on a novel propagation framework, we develop a feature- and quality-driven approach for viewpoint selection and camera path construction. The selected candidate viewpoints are covered by the path. To connect the viewpoints in a proper manner, potential fields are established using the projection maps, and camera paths are derived accordingly. This path planning method can deliver useful guidance for volume exploration. Experiments on volumetric datasets have been conducted for demonstration.
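
The potential-field idea behind path construction can be sketched in 2D with made-up attractive and repulsive terms and a greedy grid descent; the paper derives its fields from projection maps rather than these hand-picked functions.

```python
def potential(p, goal, obstacle):
    """Attractive distance-to-goal term plus a repulsive term that grows
    near a region the camera should avoid."""
    d_goal = ((p[0] - goal[0]) ** 2 + (p[1] - goal[1]) ** 2) ** 0.5
    d_obs = ((p[0] - obstacle[0]) ** 2 + (p[1] - obstacle[1]) ** 2) ** 0.5
    return d_goal + 1.0 / (d_obs + 0.1)

def descend(start, goal, obstacle, step=0.1, iters=200):
    """Greedily move to the lowest-potential point among staying put and
    four axis-aligned neighbours; the visited points trace a camera path."""
    p = start
    for _ in range(iters):
        moves = [(p[0] + dx, p[1] + dy) for dx, dy in
                 ((step, 0), (-step, 0), (0, step), (0, -step), (0, 0))]
        p = min(moves, key=lambda q: potential(q, goal, obstacle))
    return p

path_end = descend((0.0, 0.0), (2.0, 0.0), (1.0, 0.0))
```

Because staying put is always a candidate move, the potential along the path never increases, so the camera drifts toward the goal viewpoint while skirting the obstacle.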

Collaboration


Dive into Ming-Yuen Chan's collaborations.

Top Co-Authors

Huamin Qu, Hong Kong University of Science and Technology

Hong Zhou, Hong Kong University of Science and Technology

Wai-Ho Mak, Hong Kong University of Science and Technology

Ka-Kei Chung, Hong Kong University of Science and Technology

Albert Chi Shing Chung, Hong Kong University of Science and Technology

Wilbur C.K. Wong, Hong Kong University of Science and Technology

Anbang Xu, Hong Kong University of Science and Technology

Haomian Wang, Hong Kong University of Science and Technology

Ping Guo, Beijing Normal University