Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Insoo Woo is active.

Publication


Featured research published by Insoo Woo.


IEEE Journal of Selected Topics in Signal Processing | 2010

The Use of Mobile Devices in Aiding Dietary Assessment and Evaluation

Fengqing Zhu; Marc Bosch; Insoo Woo; SungYe Kim; Carol J. Boushey; David S. Ebert; Edward J. Delp

There is a growing concern about chronic diseases and other health problems related to diet, including obesity and cancer. The need to accurately measure diet (what foods a person consumes) becomes imperative. Dietary intake provides valuable insights for mounting intervention programs for prevention of chronic diseases. Measuring accurate dietary intake is considered to be an open research problem in the nutrition and health fields. In this paper, we describe a novel mobile telephone food record that will provide an accurate account of daily food and nutrient intake. Our approach includes the use of image analysis tools for identification and quantification of food that is consumed at a meal. Images obtained before and after foods are eaten are used to estimate the amount and type of food consumed. The mobile device provides a unique vehicle for collecting dietary information and reduces the burden on respondents relative to more classical approaches for dietary assessment. We describe our approach to image analysis that includes the segmentation of food items, features used to identify foods, a method for automatic portion estimation, and our overall system architecture for collecting the food intake information.


IEEE Transactions on Visualization and Computer Graphics | 2009

Structuring Feature Space: A Non-Parametric Method for Volumetric Transfer Function Generation

Ross Maciejewski; Insoo Woo; Wei Chen; David S. Ebert

The use of multi-dimensional transfer functions for direct volume rendering has been shown to be an effective means of extracting materials and their boundaries for both scalar and multivariate data. The most common multi-dimensional transfer function consists of a two-dimensional (2D) histogram with axes representing a subset of the feature space (e.g., value vs. value gradient magnitude), with each entry in the 2D histogram being the number of voxels at a given feature space pair. Users then assign color and opacity to the voxel distributions within the given feature space through the use of interactive widgets (e.g., box, circular, triangular selection). Unfortunately, such tools lead users through a trial-and-error approach as they assess which data values within the feature space map to a given area of interest within the volumetric space. In this work, we propose the addition of non-parametric clustering within the transfer function feature space in order to extract patterns and guide transfer function generation. We apply non-parametric kernel density estimation to group voxels of similar features within the 2D histogram. These groups are then binned and colored based on their estimated density, and the user may interactively grow and shrink the binned regions to explore feature boundaries and extract regions of interest. We also extend this scheme to temporal volumetric data in which time steps of 2D histograms are composited into a histogram volume. A three-dimensional (3D) density estimation is then applied, and users can explore regions within the feature space across time without adjusting the transfer function at each time step. Our work enables users to effectively explore the structures found within a feature space of the volume and provides a context in which users can understand how these structures relate to their volumetric data. We provide tools for enhanced exploration and manipulation of the transfer function, and we show that the initial transfer function generation serves as a reasonable base for volumetric rendering, reducing the trial-and-error overhead typically found in transfer function design.
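The density-based grouping the abstract describes can be illustrated with a minimal sketch: estimate a kernel density over a toy 2D (value, gradient-magnitude) feature space, then quantize the density into groups that a user could grow or shrink. This is an illustrative reconstruction in NumPy, not the authors' implementation; the Gaussian kernel, bandwidth, grid resolution, and quantile-based binning are all assumptions.

```python
import numpy as np

def gaussian_kde_2d(samples, grid_x, grid_y, bandwidth=1.0):
    """Evaluate a simple 2D Gaussian kernel density estimate on a grid.
    samples: (N, 2) array of (value, gradient-magnitude) pairs."""
    gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
    density = np.zeros_like(gx, dtype=float)
    for vx, vy in samples:
        density += np.exp(-((gx - vx) ** 2 + (gy - vy) ** 2) / (2 * bandwidth ** 2))
    return density / (len(samples) * 2 * np.pi * bandwidth ** 2)

def bin_by_density(density, n_bins=4):
    """Quantize the density field into n_bins groups (0 = sparsest region)."""
    edges = np.quantile(density, np.linspace(0, 1, n_bins + 1))
    return np.clip(np.digitize(density, edges[1:-1]), 0, n_bins - 1)

# Two clusters in a toy (value, gradient-magnitude) feature space.
rng = np.random.default_rng(0)
samples = np.vstack([rng.normal([2, 2], 0.3, (200, 2)),
                     rng.normal([7, 7], 0.3, (200, 2))])
grid = np.linspace(0, 10, 50)
density = gaussian_kde_2d(samples, grid, grid, bandwidth=0.5)
bins = bin_by_density(density)
```

The dense cluster cores land in the highest bin, while near-empty feature-space regions fall into the lowest, which is the kind of grouping a user would then color and interactively refine.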


Electronic Imaging | 2011

Volume Estimation Using Food Specific Shape Templates in Mobile Image-Based Dietary Assessment

Junghoon Chae; Insoo Woo; SungYe Kim; Ross Maciejewski; Fengqing Zhu; Edward J. Delp; Carol J. Boushey; David S. Ebert

As obesity concerns mount, dietary assessment methods for prevention and intervention are being developed. These methods include recording, cataloging and analyzing daily dietary records to monitor energy and nutrient intakes. Given the ubiquity of mobile devices with built-in cameras, one possible means of improving dietary assessment is through photographing foods and inputting these images into a system that can determine the nutrient content of foods in the images. One of the critical issues in such an image-based dietary assessment tool is the accurate and consistent estimation of food portion sizes. The objective of our study is to automatically estimate food volumes through the use of food specific shape templates. In our system, users capture food images using a mobile phone camera. Based on information (i.e., food name and code) determined through food segmentation and classification of the food images, our system chooses a particular food template shape corresponding to each segmented food. Finally, our system reconstructs the three-dimensional properties of the food shape from a single image by extracting feature points in order to size the food shape template. By employing this template-based approach, our system automatically estimates food portion size, providing a consistent method for estimating food volume.
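The template-scaling step can be sketched very simply: once classification has named the food, a parametric shape stands in for it, and measurements from the image size that shape. The two templates, the food-to-template mapping, and the pixel-per-centimeter calibration below are hypothetical stand-ins for the paper's segmentation, classification, and feature-point pipeline.

```python
import math

# Illustrative shape templates: each maps linear dimensions (cm) to volume (cm^3).
TEMPLATES = {
    "sphere":   lambda d, h=None: math.pi * d ** 3 / 6,   # e.g., an apple
    "cylinder": lambda d, h: math.pi * (d / 2) ** 2 * h,  # e.g., a glass of milk
}

def estimate_volume(food_code, width_px, height_px, px_per_cm):
    """Scale a food-specific template by image measurements.
    The food_code -> template mapping is a stand-in for the
    classification step described in the paper."""
    shape = {"apple": "sphere", "milk": "cylinder"}[food_code]
    d_cm = width_px / px_per_cm
    h_cm = height_px / px_per_cm
    return TEMPLATES[shape](d_cm, h_cm)

# An 80x80 px food region at 10 px/cm -> an 8 cm diameter sphere.
vol = estimate_volume("apple", width_px=80, height_px=80, px_per_cm=10)
```

The consistency claim in the abstract follows from this design: the same template and the same measured dimensions always yield the same volume, unlike free-hand portion estimates.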


IEEE Transactions on Nanotechnology | 2009

Moving Toward Nano-TCAD Through Multimillion-Atom Quantum-Dot Simulations Matching Experimental Data

Muhammad Usman; Hoon Ryu; Insoo Woo; David S. Ebert; Gerhard Klimeck

Low-loss optical communication requires light sources at 1.5 μm wavelengths. Experiments showed, without much theoretical guidance, that InAs/GaAs quantum dots (QDs) may be tuned to such wavelengths by adjusting the In fraction in an InxGa1-xAs strain-reducing capping layer. In this paper, systematic multimillion-atom electronic structure calculations explain, qualitatively and quantitatively, for the first time, available experimental data. The nanoelectronic modeling NEMO 3-D simulations treat strain in a 15-million-atom system and electronic structure in a subset of ~9 million atoms using the experimentally given nominal geometries, and without any further parameter adjustments, the simulations match the nonlinear behavior of experimental data very closely. With the match to experimental data and the availability of internal model quantities, significant insight can be gained through mapping to reduced-order models and their relative importance. We can also demonstrate that starting from simple models has, in the past, led to the wrong conclusions. The critical new insight presented here is that the QD changes its shape. The quantitative simulation agreement with experiment, without any material or geometry parameter adjustment in a general atomistic tool, leads us to believe that the era of nanotechnology computer-aided design is approaching. NEMO 3-D will be released on nanoHUB.org, where the community can duplicate and expand on the results presented here through interactive simulations.


IEEE Transactions on Visualization and Computer Graphics | 2013

Abstracting Attribute Space for Transfer Function Exploration and Design

Ross Maciejewski; Yun Jang; Insoo Woo; H. Jänicke; Kelly P. Gaither; David S. Ebert

Currently, user-centered transfer function design begins with the user interacting with a one- or two-dimensional histogram of the volumetric attribute space. The attribute space is visualized as a function of the number of voxels, allowing the user to explore the data in terms of the attribute size/magnitude. However, such visualizations provide the user with no information on the relationship between various attribute spaces (e.g., density, temperature, pressure, x, y, z) within the multivariate data. In this work, we propose a modification to the attribute space visualization in which the user is no longer presented with the magnitude of the attribute; instead, the user is presented with an information metric detailing the relationship between attributes of the multivariate volumetric data. In this way, the user can guide their exploration based on the relationship between the attribute magnitude and user selected attribute information as opposed to being constrained by only visualizing the magnitude of the attribute. We refer to this modification to the traditional histogram widget as an abstract attribute space representation. Our system utilizes common one- and two-dimensional histogram widgets where the bins of the abstract attribute space now correspond to an attribute relationship in terms of the mean, standard deviation, entropy, or skewness. In this manner, we exploit the relationships and correlations present in the underlying data with respect to the dimension(s) under examination. These relationships are often key to insight and allow us to guide attribute discovery as opposed to automatic extraction schemes which try to calculate and extract distinct attributes a priori. In this way, our system aids in the knowledge discovery of the interaction of properties within volumetric data.
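The core of the abstract attribute space idea, as described, is to replace a bin's voxel count with a statistic of a second attribute over the voxels in that bin. A minimal sketch, assuming NumPy and synthetic correlated attributes; the bin counts, sub-histogram resolution, and metric formulas are illustrative choices, not the paper's parameters:

```python
import numpy as np

def abstract_histogram(primary, secondary, n_bins=8, metric="entropy"):
    """Bin voxels by the `primary` attribute; per bin, summarize the
    `secondary` attribute with an information metric instead of a count."""
    edges = np.linspace(primary.min(), primary.max(), n_bins + 1)
    idx = np.clip(np.digitize(primary, edges[1:-1]), 0, n_bins - 1)
    out = np.zeros(n_bins)
    for b in range(n_bins):
        vals = secondary[idx == b]
        if vals.size == 0:
            continue
        if metric == "mean":
            out[b] = vals.mean()
        elif metric == "std":
            out[b] = vals.std()
        elif metric == "entropy":
            p, _ = np.histogram(vals, bins=16)
            p = p[p > 0] / p.sum()
            out[b] = -(p * np.log2(p)).sum()
        elif metric == "skewness":
            s = vals.std()
            out[b] = 0.0 if s == 0 else ((vals - vals.mean()) ** 3).mean() / s ** 3
    return out

# Synthetic voxels: "temperature" tightly correlated with "density".
rng = np.random.default_rng(1)
density = rng.uniform(0, 1, 10000)
temperature = density + rng.normal(0, 0.05, 10000)
h = abstract_histogram(density, temperature, metric="std")
```

Because temperature tracks density closely here, each density bin shows only a small temperature spread; an uncorrelated attribute pair would produce large per-bin values, which is exactly the relationship signal the widget is meant to surface.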


Smart Graphics | 2010

Automated hedcut illustration using isophotes

SungYe Kim; Insoo Woo; Ross Maciejewski; David S. Ebert

In this work, we present an automated system for creating hedcut illustrations, portraits rendered using small image feature aligned dots (stipples). We utilize edge detection and shading cues from the input photograph to direct stipple placement within the image. Both image edges and isophotes are extracted as a means of describing the image feature and shading information. Edge features and isophotes are then assigned different priorities, with isophotes being assigned the highest priority to enhance the depth perception within the hedcut portrait. Priority assignment dictates the stipple alignment and spacing. Finally, stipple size is based on the number of points, the intensity, and the gradient magnitude of the input image.
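Isophotes are curves of constant image intensity. One simple way to locate them, sketched below with NumPy, is to quantize the intensity into a few levels and mark the pixels where the level changes between neighbors; the number of levels and the neighbor test are assumptions for illustration, not the paper's extraction method.

```python
import numpy as np

def extract_isophotes(image, n_levels=6):
    """Return a boolean mask marking isophote pixels: locations where the
    quantized intensity level changes between neighboring pixels."""
    lo, hi = image.min(), image.max()
    levels = np.clip(((image - lo) / (hi - lo + 1e-12) * n_levels).astype(int),
                     0, n_levels - 1)
    mask = np.zeros_like(levels, dtype=bool)
    mask[:-1, :] |= levels[:-1, :] != levels[1:, :]   # vertical neighbors
    mask[:, :-1] |= levels[:, :-1] != levels[:, 1:]   # horizontal neighbors
    return mask

# A radial intensity gradient yields concentric ring-shaped isophotes.
y, x = np.mgrid[0:64, 0:64]
img = np.hypot(x - 32, y - 32)
rings = extract_isophotes(img, n_levels=5)
```

In the described system these curves, together with detected edges, would then serve as the guide lines along which stipples are aligned.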


Applied Perception in Graphics and Visualization | 2010

Evaluating the effectiveness of visualization techniques for schematic diagrams in maintenance tasks

SungYe Kim; Insoo Woo; Ross Maciejewski; David S. Ebert; Timothy D. Ropp; Krystal M. Thomas

In order to perform daily maintenance and repair tasks in complex electrical and mechanical systems, technicians commonly utilize a large number of diagrams and documents detailing system properties in both electronic and print formats. In electronic document views, users typically are only provided with traditional pan and zoom features; however, recent advances in information visualization and illustrative rendering styles should allow users to analyze documents in a more timely and accurate fashion. In this paper, we evaluate the effectiveness of rendering techniques focusing on methods of document/diagram highlighting, distortion, and navigation while preserving contextual information between related diagrams. We utilize our previously developed interactive visualization system for technical diagrams (SDViz) in a series of quantitative studies and an in-field evaluation of the system in terms of usability and usefulness. In the quantitative studies, subjects perform small tasks that are similar to actual maintenance work while using tools provided by our system. First, the effects of highlighting within a diagram and between multiple diagrams are evaluated. Second, we analyze the value of preserving highlighting as well as spatial information when switching between related diagrams, and then we present the effectiveness of distortion within a diagram. Finally, we discuss a field study of the system and report the results of our findings.


IEEE VGTC Conference on Visualization | 2009

SDViz: a context-preserving interactive visualization system for technical diagrams

Insoo Woo; SungYe Kim; Ross Maciejewski; David S. Ebert; Timothy D. Ropp; Krystal M. Thomas

When performing daily maintenance and repair tasks, technicians require access to a variety of technical diagrams. As technicians trace components and diagrams from page‐to‐page, within and across manuals, the contextual information of the components they are analyzing can easily be lost. To overcome these issues, we have developed a Schematic Diagram Visualization System (SDViz) designed for maintaining and highlighting contextual information in technical documents, such as schematic and wiring diagrams. Our system incorporates various features to aid in the navigation and diagnosis of faults, as well as maintaining contextual information when tracing components/connections through multiple diagrams. System features include highlighting relationships between components and connectors, diagram annotation tools, the animation of flow through the system, a novel contextual blending method, and a variety of traditional focus+context visualization techniques. We have evaluated the usefulness of our system through a qualitative user study in which subjects utilized our system in diagnosing faults during a standard aircraft maintenance exercise.


IEEE Transactions on Visualization and Computer Graphics | 2012

Feature-Driven Data Exploration for Volumetric Rendering

Insoo Woo; Ross Maciejewski; Kelly P. Gaither; David S. Ebert

We have developed an intuitive method to semiautomatically explore volumetric data in a focus-region-guided or value-driven way using a user-defined ray through the 3D volume and contour lines in the region of interest. After selecting a point of interest from a 2D perspective, which defines a ray through the 3D volume, our method provides analytical tools to assist in narrowing the region of interest to a desired set of features. Feature layers are identified in a 1D scalar value profile with the ray and are used to define default rendering parameters, such as color and opacity mappings, and locate the center of the region of interest. Contour lines are generated based on the feature layer level sets within interactively selected slices of the focus region. Finally, we utilize feature-preserving filters and demonstrate the applicability of our scheme to noisy data.
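The two central steps of this approach, sampling a 1D scalar profile along a user-defined ray and splitting that profile into feature layers, can be sketched as follows. The nearest-neighbor sampling and the fixed jump threshold are simplifying assumptions for illustration; the paper's layer detection and default-parameter derivation are not reproduced here.

```python
import numpy as np

def sample_ray(volume, origin, direction, n_samples=64):
    """Sample scalar values along a ray through a 3D volume (nearest-neighbor)."""
    direction = np.asarray(direction, float)
    direction /= np.linalg.norm(direction)
    t = np.linspace(0, min(volume.shape) - 1, n_samples)
    pts = np.round(np.asarray(origin, float) + t[:, None] * direction).astype(int)
    pts = np.clip(pts, 0, np.array(volume.shape) - 1)
    return volume[pts[:, 0], pts[:, 1], pts[:, 2]]

def feature_layers(profile, jump=0.2):
    """Split the 1D profile wherever the scalar value jumps by more than
    `jump`, returning (start, end) sample-index pairs per homogeneous layer."""
    breaks = np.where(np.abs(np.diff(profile)) > jump)[0] + 1
    bounds = np.concatenate(([0], breaks, [len(profile)]))
    return list(zip(bounds[:-1], bounds[1:]))

# Toy volume: an inner cube (value 1.0) inside a shell (0.5) inside air (0.0).
vol = np.zeros((32, 32, 32))
vol[4:28, 4:28, 4:28] = 0.5
vol[12:20, 12:20, 12:20] = 1.0
profile = sample_ray(vol, origin=(16, 16, 0), direction=(0, 0, 1))
layers = feature_layers(profile)
```

The ray crosses air, shell, core, shell, and air again, so five layers emerge; in the described system each such layer would seed default color/opacity mappings and locate the focus region.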


IEEE Transactions on Visualization and Computer Graphics | 2018

Data Flow Analysis and Visualization for Spatiotemporal Statistical Data without Trajectory Information

Seokyeon Kim; Seongmin Jeong; Insoo Woo; Yun Jang; Ross Maciejewski; David S. Ebert

Geographic visualization research has focused on a variety of techniques to represent and explore spatiotemporal data. The goal of those techniques is to enable users to explore events and interactions over space and time in order to facilitate the discovery of patterns, anomalies and relationships within the data. However, it is difficult to extract and visualize data flow patterns over time for non-directional statistical data without trajectory information. In this work, we develop a novel flow analysis technique to extract, represent, and analyze flow maps of non-directional spatiotemporal data unaccompanied by trajectory information. We estimate a continuous distribution of these events over space and time, and extract flow fields for spatial and temporal changes utilizing a gravity model. Then, we visualize the spatiotemporal patterns in the data by employing flow visualization techniques. The user is presented with temporal trends of geo-referenced discrete events on a map. As such, overall spatiotemporal data flow patterns help users analyze geo-referenced temporal events, such as disease outbreaks, crime patterns, etc. To validate our model, we discard the trajectory information in an origin-destination dataset, apply our technique to the data, and compare the derived trajectories with the originals. Finally, we present spatiotemporal trend analysis for statistical datasets including Twitter data, maritime search and rescue events, and syndromic surveillance data.
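The pipeline's first two stages, estimating a continuous event density and deriving a flow field from its change over time, can be loosely sketched as below. This is not the paper's gravity model; the kernel density estimate, the use of the density difference between snapshots, and its gradient as a flow proxy are all simplifying assumptions made for illustration.

```python
import numpy as np

def density_grid(events, size=32, bandwidth=2.0):
    """Gaussian kernel density estimate of point events on a size x size grid."""
    gy, gx = np.mgrid[0:size, 0:size]
    d = np.zeros((size, size))
    for ex, ey in events:
        d += np.exp(-((gx - ex) ** 2 + (gy - ey) ** 2) / (2 * bandwidth ** 2))
    return d

def flow_field(density_t0, density_t1):
    """Approximate a flow field from two density snapshots: the gradient of
    the density change points from regions losing events toward regions
    gaining them (a crude stand-in for the paper's gravity model)."""
    change = density_t1 - density_t0
    vy, vx = np.gradient(change)
    return vx, vy

# Events drift to the right between two time steps (no trajectories given).
t0 = [(8, 16), (9, 15), (8, 17)]
t1 = [(20, 16), (21, 15), (20, 17)]
vx, vy = flow_field(density_grid(t0), density_grid(t1))
```

Between the old and new cluster locations the x-component of the field is positive, i.e. the inferred flow points in the direction the events actually moved, even though no individual trajectory was observed.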

Collaboration


Dive into the details of Insoo Woo's collaborations.

Top Co-Authors

G. P. Lansbergen

Delft University of Technology


J. Caro

Delft University of Technology


S. Rogge

University of New South Wales
