
Publication


Featured research published by Minh X. Nguyen.


IEEE Visualization | 2001

POP: a hybrid point and polygon rendering system for large data

Baoquan Chen; Minh X. Nguyen

We introduce a simple but effective extension to existing pure point rendering systems. Rather than using only points, we use both points and polygons to represent and render large mesh models. We start from triangles as leaf nodes and build up a hierarchical tree structure with intermediate nodes as points. During rendering, the system determines whether to use a point (of an intermediate-level node) or a triangle (of a leaf node) for display depending on the screen contribution of each node. While points are used to speed up the rendering of distant objects, triangles are used to ensure the quality of close objects. Our method can accelerate the rendering of large models, compromising little in image quality.
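
To make the selection criterion concrete, here is a minimal sketch of a POP-style traversal that picks a point or a triangle per node from its estimated screen contribution. The node layout, the fov_scale factor, and the one-pixel threshold are simplifying assumptions of this sketch, not the paper's exact data structure.

    # Hedged sketch: hierarchy traversal that emits points for distant nodes and
    # triangles at leaves, based on a rough screen-contribution estimate.
    from dataclasses import dataclass, field
    from typing import List, Optional
    import numpy as np

    @dataclass
    class Node:
        center: np.ndarray                        # representative point (intermediate node)
        radius: float                             # bounding-sphere radius
        triangle: Optional[np.ndarray] = None     # 3x3 vertex array at leaf nodes
        children: List["Node"] = field(default_factory=list)

    def projected_size(node, eye, fov_scale):
        """Rough screen contribution in pixels: radius over distance, scaled by the
        viewport factor (roughly viewport_height / (2 * tan(fov / 2)))."""
        dist = max(np.linalg.norm(node.center - eye), 1e-6)
        return fov_scale * node.radius / dist

    def collect_primitives(node, eye, fov_scale, pixel_threshold, points, triangles):
        """Emit a point when the node's footprint is small enough, otherwise descend."""
        if node.triangle is not None:             # leaf: always a triangle
            triangles.append(node.triangle)
            return
        if projected_size(node, eye, fov_scale) <= pixel_threshold:
            points.append(node.center)            # distant: cheap point splat
            return
        for child in node.children:               # close: refine further
            collect_primitives(child, eye, fov_scale, pixel_threshold, points, triangles)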


The Visual Computer | 2005

Geometry completion and detail generation by texture synthesis

Minh X. Nguyen; Xiaoru Yuan; Baoquan Chen

We present a novel method for patching holes in polygonal meshes and synthesizing surfaces with details based on existing geometry. The most novel feature of our proposed method is that we transform the 3D geometry synthesis problem into a 2D domain by parameterizing surfaces and solve this problem in that domain. We then derive local geometry gradient images that encode intrinsic local geometry properties, which are invariant to object translation and rotation. The 3D geometry of holes is then reconstructed from synthesized local gradient images. This method can be extended to execute other mesh editing operations such as geometry detail transfer or synthesis. The resulting major benefits of performing geometry synthesis in 2D are more flexible and robust control, better leveraging of the wealth of current 2D image completion methods, and greater efficiency.
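
The key step above is reconstructing 3D geometry from synthesized gradient images. As a toy 2D analogue of that step only, the sketch below recovers a single-channel height image from its x/y gradients with a Jacobi Poisson solve; the actual method operates on local geometry gradient images over a parameterized surface, so this is an assumption-laden stand-in rather than the paper's pipeline.

    # Toy illustration: integrate gradient images back into a height field.
    import numpy as np

    def integrate_gradients(gx, gy, boundary, iterations=2000):
        """Solve lap(h) = div(g) with fixed boundary values (Jacobi iterations)."""
        h = boundary.copy()
        div = np.zeros_like(h)
        div[1:-1, 1:-1] = (gx[1:-1, 1:-1] - gx[1:-1, :-2]) + (gy[1:-1, 1:-1] - gy[:-2, 1:-1])
        for _ in range(iterations):
            h[1:-1, 1:-1] = 0.25 * (h[1:-1, 2:] + h[1:-1, :-2] +
                                    h[2:, 1:-1] + h[:-2, 1:-1] - div[1:-1, 1:-1])
        return h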


IEEE Transactions on Visualization and Computer Graphics | 2006

HDR VolVis: high dynamic range volume visualization

Xiaoru Yuan; Minh X. Nguyen; Baoquan Chen; David H. Porter

In this paper, we present an interactive high dynamic range volume visualization framework (HDR VolVis) for visualizing volumetric data with both high spatial and intensity resolutions. Volumes with high dynamic range values require high precision computing during the rendering process to preserve data precision. Furthermore, it is desirable to render high resolution volumes with low opacity values to reveal detailed internal structures, which also requires high precision compositing. High precision rendering will result in a high precision intermediate image (also known as high dynamic range image). Simply rounding up pixel values to regular display scales will result in loss of computed details. Our method performs high precision compositing followed by dynamic tone mapping to preserve details on regular display devices. Rendering high precision volume data requires corresponding resolution in the transfer function. To assist the users in designing a high resolution transfer function on a limited resolution display device, we propose a novel transfer function specification interface with nonlinear magnification of the density range and logarithmic scaling of the color/opacity range. By leveraging modern commodity graphics hardware, multiresolution rendering techniques and out-of-core acceleration, our system can effectively produce an interactive visualization of large volume data, such as 2048^3.
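
The core idea of compositing in floating point and only then mapping down to the display range can be sketched as follows. The simple log curve below is a generic stand-in for the paper's dynamic tone mapping, and the per-ray density list plus the transfer-function callables are assumptions of this sketch.

    # Hedged sketch: high-precision front-to-back compositing, then tone mapping
    # the HDR intermediate image to 8-bit display values.
    import numpy as np

    def composite_ray(densities, transfer_color, transfer_alpha):
        """densities: samples along one ray; transfer_color returns an RGB array."""
        color = np.zeros(3, dtype=np.float64)
        alpha = 0.0
        for d in densities:
            c, a = transfer_color(d), transfer_alpha(d)
            color += (1.0 - alpha) * a * c
            alpha += (1.0 - alpha) * a
            if alpha > 0.999:                     # early ray termination
                break
        return color

    def tone_map(hdr_image, exposure=1.0):
        """Map an HDR float image to displayable 8-bit values with a log curve."""
        peak = max(float(hdr_image.max()), 1e-6)
        mapped = np.log1p(exposure * hdr_image) / np.log1p(exposure * peak)
        return (np.clip(mapped, 0.0, 1.0) * 255).astype(np.uint8)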


IEEE Visualization | 2005

High dynamic range volume visualization

Xiaoru Yuan; Minh X. Nguyen; Baoquan Chen; David H. Porter

High resolution volumes require high precision compositing to preserve detailed structures. This is even more desirable for volumes with high dynamic range values. After the high precision intermediate image has been computed, simply rounding up pixel values to regular display scales loses the computed details. In this paper, we present a novel high dynamic range volume visualization method for rendering volume data with both high spatial and intensity resolutions. Our method performs high precision volume rendering followed by dynamic tone mapping to preserve details on regular display devices. By leveraging available high dynamic range image display algorithms, this dynamic tone mapping can be automatically adjusted to enhance selected features for the final display. We also present a novel transfer function design interface with nonlinear magnification of the density range and logarithmic scaling of the color/opacity range to facilitate high dynamic range volume visualization. By leveraging modern commodity graphics hardware and out-of-core acceleration, our system can produce an effective visualization of huge volume data.
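
To illustrate the transfer-function side of this, here is a minimal sketch of a lookup whose density axis is nonlinearly magnified and whose opacities live on a logarithmic scale, in the spirit of the interface described above. The specific log remappings, table size, and value ranges are assumptions of this sketch, not the paper's exact formulation.

    # Hedged sketch: transfer-function lookup over a log-magnified density axis.
    import numpy as np

    def build_transfer_function(density_min, density_max, opacity_table, n_bins=4096):
        """Return a lookup mapping raw density to opacity through a log-scaled axis."""
        log_min, log_max = np.log(density_min), np.log(density_max)

        def lookup(density):
            d = np.clip(density, density_min, density_max)
            t = (np.log(d) - log_min) / (log_max - log_min)   # magnified density axis
            idx = np.minimum((t * (n_bins - 1)).astype(int), n_bins - 1)
            return opacity_table[idx]

        return lookup

    # Example: opacities stored on a log scale so very small values stay editable.
    opacity_table = np.logspace(-4, 0, 4096)                  # 1e-4 .. 1.0
    tf = build_transfer_function(1.0, 1e6, opacity_table)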


Eurographics | 2004

Interactive silhouette rendering for point-based models

Hui Xu; Minh X. Nguyen; Xiaoru Yuan; Baoquan Chen

We present a new method for rendering silhouettes of point-based models. Due to the lack of connectivity information, most existing polygon-based silhouette generation algorithms cannot be applied to point-based models. Our method not only bypasses this connectivity requirement, but also accommodates point-based models with sparse non-uniform sampling and inaccurate/no normal information. Like conventional point-based rendering, we render a model in two passes. The points are rendered as enlarged opaque disks in the first pass to obtain a visibility mask, while being rendered as regular size splats/disks in the second pass. In this way, edges are automatically depicted at depth discontinuities, usually at the silhouette boundaries. The silhouette color is the disk color used in the first pass rendering. The silhouette thickness can be controlled by changing the disk size difference between two passes. We demonstrate our method on different types of point-based models from various sources. The simplicity of our method allows it to be easily integrated with other rendering techniques to cater to many applications. Our method is capable of rendering large scenes of millions of points at interactive rates using modern graphics hardware.
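
A small software sketch of the two-pass idea: pass 1 draws enlarged opaque disks in the silhouette color to build the visibility (depth) mask, pass 2 draws regular-size disks in the surface color with a small depth tolerance, so the uncovered rim of the enlarged disks remains visible at depth discontinuities. The screen-space point format, colors, and the depth tolerance are assumptions of this sketch; the real system runs on graphics hardware.

    # Hedged software sketch of two-pass disk splatting with silhouettes.
    import numpy as np

    def splat(points, radius, color, depth_buf, color_buf, bias=0.0):
        """points: iterable of (x, y, depth) already in screen coordinates."""
        h, w, _ = color_buf.shape
        for x, y, z in points:
            x0, x1 = int(max(x - radius, 0)), int(min(x + radius + 1, w))
            y0, y1 = int(max(y - radius, 0)), int(min(y + radius + 1, h))
            for py in range(y0, y1):
                for px in range(x0, x1):
                    if (px - x) ** 2 + (py - y) ** 2 > radius ** 2:
                        continue
                    if z - bias < depth_buf[py, px]:          # depth test
                        depth_buf[py, px] = min(depth_buf[py, px], z)
                        color_buf[py, px] = color

    def render_with_silhouettes(points, w, h, splat_radius=2.0, thickness=2.0, depth_tol=0.01):
        # depth_tol should exceed the depth variation between neighboring splats on
        # the same surface but stay below the depth gap across silhouettes.
        depth = np.full((h, w), np.inf)
        image = np.ones((h, w, 3))                            # white background
        splat(points, splat_radius + thickness, (0.0, 0.0, 0.0), depth, image)           # pass 1
        splat(points, splat_radius, (0.7, 0.7, 0.7), depth, image, bias=depth_tol)       # pass 2
        return image

Increasing the thickness parameter widens the silhouettes, mirroring the disk-size difference described above.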


Eurographics Symposium on Rendering Techniques | 2005

Stippling and silhouettes rendering in geometry-image space

Xiaoru Yuan; Minh X. Nguyen; Nan Zhang; Baoquan Chen

We present a novel non-photorealistic rendering method that performs all operations in a geometry-image domain. We first apply global conformal parameterization to the input geometry model and generate corresponding geometry images. Strokes and silhouettes are then computed in the geometry-image domain. The geometry-image space provides combined benefits of the existing image space and object space approaches. It allows us to take advantage of the regularity of 2D images and yet still have full access to the object geometry information. A wide range of image processing tools can be leveraged to assist various operations involved in achieving non-photorealistic rendering with coherence.
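
Because a geometry image stores a 3D position (and normal) per texel on a regular grid, a generic silhouette test can be run with ordinary image operations. The sketch below locates sign changes of n·v on that grid; it is a simplified stand-in for the paper's geometry-image-space computation, and the input layout is an assumption.

    # Hedged sketch: silhouette detection on a geometry image via n.v sign changes.
    import numpy as np

    def silhouette_mask(positions, normals, eye):
        """positions, normals: H x W x 3 geometry/normal images; eye: camera position."""
        view = eye - positions
        view /= np.linalg.norm(view, axis=2, keepdims=True) + 1e-12
        ndotv = np.sum(normals * view, axis=2)
        sign = np.sign(ndotv)
        mask = np.zeros(ndotv.shape, dtype=bool)
        mask[:, :-1] |= sign[:, :-1] != sign[:, 1:]     # horizontal sign change
        mask[:-1, :] |= sign[:-1, :] != sign[1:, :]     # vertical sign change
        return mask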


Pacific Conference on Computer Graphics and Applications | 2003

INSPIRE: an interactive image assisted non-photorealistic rendering system

Minh X. Nguyen; Hui Xu; Xiaoru Yuan; Baoquan Chen

We present a GPU-supported interactive non-photorealistic rendering system, INSPIRE, which performs feature extraction in both image space, on intermediately rendered images, and object space, on models of various representations, e.g., point, polygon, or hybrid models, without needing connectivity information. INSPIRE achieves interactive NPR rendering in most styles of existing NPR systems, but offers more flexibility in model representation and compromises little on rendering speed.


SBM | 2005

Sketch-based Segmentation of Scanned Outdoor Environment Models

Xiaoru Yuan; Hui Xu; Minh X. Nguyen; Amit Shesh; Baoquan Chen

When modeling with scanned outdoor data, efficiently selecting a subset of points that collectively represent an object is an important and fundamental operation. Such segmentation problems have been extensively studied, and simple and efficient solutions exist in two dimensions. However, 3D segmentation, especially that of sparse point models obtained by scanning, remains a challenge because of inherent incompleteness and noise. We present a sketch-based interface that allows segmentation of general 3D point-based models. The user marks object and background regions by placing strokes using a stylus, and the tool segments out the marked object(s). To refine the results, the user simply moves the camera to a different location and repeats the process. Our method is based on graph cuts, a popular and well-tested paradigm for segmentation problems. We employ a two-pass process: we use the strokes to perform 2D image segmentation in the projection plane of the camera and use its results to segment the 3D scanned data. The advantages of our method are ease of use, speed, and robustness. Our method works for general 3D point models and not just range images. Important applications include selection of objects when dealing with large, unorganized point models for refinement, remodeling, meshing, etc.
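
The second pass of such a pipeline, lifting a 2D segmentation back onto the 3D points, can be sketched as below: every point is projected with the same camera and inherits the label of the pixel it falls on. The graph-cut step itself is not shown, and the 4x4 camera matrix and label-image convention are assumptions of this sketch.

    # Hedged sketch: propagate a 2D label image back to 3D points via the camera.
    import numpy as np

    def lift_labels_to_points(points, camera, label_image):
        """points: N x 3; camera: 4 x 4 projection*view matrix; label_image: H x W of {0, 1}."""
        h, w = label_image.shape
        homo = np.hstack([points, np.ones((len(points), 1))])
        clip = homo @ camera.T
        ndc = clip[:, :2] / clip[:, 3:4]                      # perspective divide
        px = ((ndc[:, 0] * 0.5 + 0.5) * (w - 1)).astype(int)
        py = ((1.0 - (ndc[:, 1] * 0.5 + 0.5)) * (h - 1)).astype(int)
        inside = (px >= 0) & (px < w) & (py >= 0) & (py < h)
        labels = np.zeros(len(points), dtype=np.uint8)
        labels[inside] = label_image[py[inside], px[inside]]
        return labels                                         # 1 = object, 0 = background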


Eurographics | 2003

Hybrid forward resampling and volume rendering

Xiaoru Yuan; Minh X. Nguyen; Hui Xu; Baoquan Chen

Transforming and rendering discrete objects, such as traditional images (with or without depth) and volumes, can be considered a resampling problem -- objects are reconstructed, transformed, filtered, and finally sampled on the screen grid. In resampling practice, discrete samples (pixels, voxels) can be considered either as infinitesimal sample points (simply called points) or samples of a certain size (splats). Resampling can also be done either forwards or backwards in either the source domain or the target domain. In this paper, we present a framework that features hybrid forward resampling for discrete rendering. Specifically, we apply this framework to enhance volumetric splatting. In this approach, minified voxels are taken simply as points filtered in screen space, while magnified voxels are taken as spherical splats. In addition, we develop two techniques for performing accurate and efficient perspective splatting. The first efficiently computes the 2D elliptical geometry of perspectively projected splats; the second achieves an accurate perspective reconstruction filter. The results of our experiments demonstrate both the effectiveness of antialiasing and the efficiency of rendering using this approach.
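
The minified-versus-magnified decision can be sketched per voxel from its projected footprint; the size-over-distance estimate below is the usual approximation and not the paper's exact derivation, and the one-pixel cutoff is an assumption.

    # Hedged sketch: classify a voxel as a filtered point (minified) or a splat (magnified).
    import numpy as np

    def classify_voxel(center, voxel_size, eye, pixels_per_unit_at_unit_distance):
        dist = max(np.linalg.norm(center - eye), 1e-6)
        footprint = pixels_per_unit_at_unit_distance * voxel_size / dist
        return ("point" if footprint < 1.0 else "splat"), footprint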


Eurographics | 2006

Perceptually guided rendering of textured point-based models

Lijun Qu; Xiaoru Yuan; Minh X. Nguyen; Gary W. Meyer; Baoquan Chen; Jered E. Windsheimer

In this paper, we present a textured point-based rendering scheme that takes into account the masking properties of the human visual system. In our system high quality textures are mapped to point-based models. Given a texture, an importance map is first computed using the visual masking tool included in the JPEG2000 standard. This importance map indicates the masking potential of the texture. During runtime, point-based models are simplified and rendered based on this computed importance. In our point simplification method, called Simplification by Random Numbers (SRN), each point in the model is pre-assigned a random value. During rendering, the pre-assigned value is compared with the preferred local point density (derived from importance) to determine whether this point will be rendered. Our method can achieve coherent simplification for point models.
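
The SRN test itself is tiny and can be sketched directly: each point keeps one pre-assigned uniform random value and is drawn only if that value falls below its desired local density. How the importance map translates into a per-point keep probability is an assumption of this sketch.

    # Hedged sketch: Simplification by Random Numbers (SRN) style point culling.
    import numpy as np

    rng = np.random.default_rng(0)

    def preassign(num_points):
        """Done once per model; the values never change, so simplification is coherent."""
        return rng.random(num_points)

    def visible_points(random_values, keep_probability):
        """keep_probability: per-point density in [0, 1], e.g. derived from visual masking."""
        return random_values < keep_probability

    # Usage: because the random values are fixed, a point's visibility only changes
    # when its target density changes, avoiding frame-to-frame popping.
    values = preassign(100000)
    mask = visible_points(values, keep_probability=0.35)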

Collaboration


Dive into Minh X. Nguyen's collaborations.

Top Co-Authors

Hui Xu
University of Minnesota

Amit Shesh
Illinois State University

Lijun Qu
University of Minnesota

Nan Zhang
University of Minnesota