Daniel Acevedo
Brown University
Publications
Featured research published by Daniel Acevedo.
Central European Journal of Engineering | 2011
Thomas A. DeFanti; Daniel Acevedo; Richard A. Ainsworth; Maxine D. Brown; Steven Matthew Cutchin; Gregory Dawe; Kai Doerr; Andrew E. Johnson; Chris Knox; Robert Kooima; Falko Kuester; Jason Leigh; Lance Long; Peter Otto; Vid Petrovic; Kevin Ponto; Andrew Prudhomme; Ramesh R. Rao; Luc Renambot; Daniel J. Sandin; Jürgen P. Schulze; Larry Smarr; Madhu Srinivasan; Philip Weber; Gregory Wickham
The CAVE, a walk-in virtual reality environment typically consisting of four to six 3 m-by-3 m sides of a room made of rear-projected screens, was first conceived and built in 1991. In the nearly two decades since its conception, the supporting technology has improved so that current CAVEs are much brighter, have much higher resolution, and offer dramatically improved graphics performance. However, rear-projection-based CAVEs typically must be housed in a 10 m-by-10 m-by-10 m room (allowing space behind the screen walls for the projectors), which limits their deployment to large spaces. The CAVE of the future will be made of tessellated panel displays, eliminating the projection distance, but the implementation of such displays is challenging. Early multi-tile, panel-based, virtual-reality displays have been designed, prototyped, and built for the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia by researchers at the University of California, San Diego, and the University of Illinois at Chicago. New means of image generation and control are considered key contributions to the future viability of the CAVE as a virtual-reality device.
ieee visualization | 2001
Daniel Acevedo; Eileen Vote; David H. Laidlaw; Martha Sharp Joukowsky
We present the results of an evaluation of the ARCHAVE (ARCHAeological Virtual Environment) system, an immersive virtual reality (VR) environment for archaeological research. ARCHAVE is implemented in a CAVE. The evaluation studied researchers analyzing lamp and coin finds throughout the excavation trenches at the Petra Great Temple site in Jordan. Experienced archaeologists used our system to study excavation data, confirming existing hypotheses and postulating new theories they had not been able to discover without the system. ARCHAVE provided access to the excavation database, and researchers were able to examine the data in the context of a life-size representation of the present-day architectural ruins of the temple. They also had access to a miniature model for site-wide analysis. Because users quickly became comfortable with the interface, they concentrated their efforts on examining the data being retrieved and displayed. The immersive VR visualization of the recovered information gave them the opportunity to explore it in a new and dynamic way and, in several cases, enabled them to make discoveries that opened new lines of investigation about the excavation.
international conference on computer graphics and interactive techniques | 2003
Cullen D. Jackson; Daniel Acevedo; David H. Laidlaw; Fritz Drury; Eileen Vote; Daniel F. Keefe
Figure 1. The six visualization methods the designer critiqued. The large image shows a full screenshot; the other six are details from each method (clockwise from top-left: JIT, LIC, LIT, OSTR, GRID, GSTR). The circles represent an advection task used in a previous study [Laidlaw et al. 2001]: observers were asked to indicate where a particle in the flow, starting at the small concentric circle, would intersect the large concentric circle; the other circle marks the correct intersection point.
IEEE Transactions on Visualization and Computer Graphics | 2006
Daniel Acevedo; David H. Laidlaw
We present an evaluation of a parameterized set of 2D icon-based visualization methods where we quantified how perceptual interactions among visual elements affect effective data exploration. During the experiment, subjects quantified three different design factors for each method: the spatial resolution it could represent, the number of data values it could display at each point, and the degree to which it is visually linear. The class of visualization methods includes Poisson-disk distributed icons where icon size, icon spacing, and icon brightness can each be set to a constant or coupled to data values from a 2D scalar field. By coupling only one of those visual components to data, we measured filtering interference for all three design factors. Filtering interference characterizes how different levels of the constant visual elements affect the evaluation of the data-coupled element. Our novel experimental methodology allowed us to generalize this perceptual information, gathered using ad-hoc artificial datasets, into quantitative rules for visualizing real scientific datasets. This work also provides a framework for evaluating visualizations of multi-valued data that incorporate additional visual cues, such as icon orientation or color.
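To make the icon parameterization concrete, here is a minimal sketch in Python (an assumed reconstruction, not the authors' stimulus generator): it scatters icons with an approximate Poisson-disk distribution over a synthetic 2D scalar field and couples exactly one visual channel, icon size, to the data while holding spacing and brightness constant.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic 2D scalar field standing in for the ad-hoc artificial datasets.
def scalar_field(x, y):
    return 0.5 + 0.5 * np.sin(3 * np.pi * x) * np.cos(2 * np.pi * y)

# Naive dart-throwing Poisson-disk sampler: accept a candidate only if it is
# at least `radius` away from every previously accepted point.
def poisson_disk(radius, n_candidates=4000, seed=0):
    rng = np.random.default_rng(seed)
    points = []
    for x, y in rng.uniform(0.0, 1.0, size=(n_candidates, 2)):
        if all((x - px) ** 2 + (y - py) ** 2 >= radius ** 2 for px, py in points):
            points.append((x, y))
    return np.array(points)

pts = poisson_disk(radius=0.04)
values = scalar_field(pts[:, 0], pts[:, 1])   # data sampled at icon centers

# Couple ONE visual component (icon size) to the data; spacing is fixed by the
# sampling radius and brightness is held at a constant gray level.
sizes = 20 + 180 * values
plt.figure(figsize=(5, 5))
plt.scatter(pts[:, 0], pts[:, 1], s=sizes, c="0.4", edgecolors="none")
plt.gca().set_aspect("equal")
plt.axis("off")
plt.title("Icon size coupled to a 2D scalar field (sketch)")
plt.show()
```

Driving spacing (via the sampling radius) or brightness (via the gray level) with the sampled values instead of size gives the other single-channel conditions on which the filtering-interference measurements are based.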
medical image computing and computer assisted intervention | 2004
Daniel Acevedo; Song Zhang; David H. Laidlaw; Christopher W. Bull
We describe work toward creating color rapid prototyping (RP) plaster models as visualization tools to support scientific research in diffusion-tensor (DT) MRI analysis. We currently give surgeons and neurologists virtual-reality (VR) applications to visualize different aspects of their brain data, but having physical representations of those virtual models allows them to review the data with a very robust, natural, and fast haptic interface: their own hands. Our initial results are encouraging, and end users are excited about the possibilities of this technique. For example, using these models in conjunction with digital models on the computer screen or VR environment provides a static frame of reference that helps keep users oriented during their analysis tasks.
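As a rough illustration of how a virtual model might be handed off to a color RP printer, the snippet below is an assumption rather than the authors' actual pipeline (the file name and geometry are made up): it writes a tiny vertex-colored mesh to an ASCII PLY file, a format that color rapid-prototyping workflows commonly accept.

```python
# Minimal ASCII PLY writer for a vertex-colored mesh (hypothetical example;
# a single triangle stands in for a DT-MRI-derived surface).
vertices = [  # (x, y, z, r, g, b)
    (0.0, 0.0, 0.0, 255, 0, 0),
    (1.0, 0.0, 0.0, 0, 255, 0),
    (0.0, 1.0, 0.0, 0, 0, 255),
]
faces = [(0, 1, 2)]

with open("colored_mesh.ply", "w") as f:
    f.write("ply\nformat ascii 1.0\n")
    f.write(f"element vertex {len(vertices)}\n")
    f.write("property float x\nproperty float y\nproperty float z\n")
    f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
    f.write(f"element face {len(faces)}\n")
    f.write("property list uchar int vertex_indices\n")
    f.write("end_header\n")
    for x, y, z, r, g, b in vertices:
        f.write(f"{x} {y} {z} {r} {g} {b}\n")
    for face in faces:
        f.write(f"{len(face)} {' '.join(map(str, face))}\n")
```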
international conference on computer graphics and interactive techniques | 2003
Eileen Vote; Daniel Acevedo; Cullen D. Jackson; Jason S. Sobel; David H. Laidlaw
We present a design schema for generating data visualizations using examples from art. With this approach, a user (1) chooses a composition or detail from a favorite painting, (2) generates a template by extracting characteristics or brushstrokes suitable for representing data variables, (3) generates a pre-visualization of the data using a rendering framework [Sobel 2003], and (4) transfers the features from the painting using the Image Analogies Texture-By-Numbers algorithm [Hertzmann et al. 2001].
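Step 3 of this schema can be pictured as producing a label image whose regions will be matched to hand-labeled regions of the painting in the Texture-By-Numbers step. The sketch below is an assumed simplification of that pre-visualization step only (the rendering framework and the Image Analogies synthesis itself are not reproduced): it quantizes a hypothetical 2D scalar field into a few labels and saves the result as the "B" label map.

```python
import numpy as np
from PIL import Image

# Hypothetical data variable on a grid (stand-in for a real dataset).
ny, nx = 256, 256
y, x = np.mgrid[0:1:ny * 1j, 0:1:nx * 1j]
data = np.sin(4 * np.pi * x) * np.cos(2 * np.pi * y)

# Quantize the data into a few labels; each label color should correspond to a
# hand-labeled region of the painting (the "A" image in Texture-By-Numbers).
labels = np.digitize(data, bins=np.linspace(data.min(), data.max(), 4)[1:-1])
palette = np.array([[200, 60, 60], [60, 200, 60], [60, 60, 200]], dtype=np.uint8)
label_map = palette[labels]  # shape (ny, nx, 3)

# Save the "B" label image; Texture-By-Numbers would synthesize the painted
# result B' from it, given the painting pair (A = labels, A' = painting).
Image.fromarray(label_map).save("previsualization_labels.png")
```

The painting and its hand-painted label map then play the roles of A' and A, and the Texture-By-Numbers synthesis produces the final visualization B' from this B.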
IEEE Transactions on Visualization and Computer Graphics | 2008
Daniel F. Keefe; Daniel Acevedo; Jadrian Miles; Fritz Drury; Sharon M. Swartz; David H. Laidlaw
IEEE Transactions on Visualization and Computer Graphics | 2008
Daniel Acevedo; Cullen D. Jackson; Fritz Drury; David H. Laidlaw
symposium on 3d user interfaces | 2012
Jürgen P. Schulze; Daniel Acevedo; John Mangan; Andrew Prudhomme; Phi Nguyen; Philip Weber
ieee virtual reality conference | 2002
Robert C. Zeleznik; Joseph J. LaViola; Daniel Acevedo; Daniel F. Keefe