
Publication


Featured research published by Maxine D. Brown.


International Conference on Computer Graphics and Interactive Techniques | 1997

The ImmersaDesk and Infinity Wall projection-based virtual reality displays

Marek Czernuszenko; Dave Pape; Daniel J. Sandin; Thomas A. DeFanti; Gregory Dawe; Maxine D. Brown

Virtual reality (VR) can be defined as interactive computer graphics that provides a viewer-centered perspective, a large field of view, and stereo. Head-mounted displays (HMDs) and BOOMs™ achieve these features with small display screens that move with the viewer, close to the viewer's eyes. Projection-based displays [3, 7] supply these characteristics by placing large, fixed screens at a greater distance from the viewer. The Electronic Visualization Laboratory (EVL) of the University of Illinois at Chicago has specialized in projection-based VR systems. EVL's projection-based VR display, the CAVE™ [2], premiered at the SIGGRAPH 92 conference. In this article we present two new, CAVE-derived, projection-based VR displays developed at EVL: the ImmersaDesk™ and the Infinity Wall™, a VR version of the PowerWall [9]. We describe the different requirements that led to their designs and compare these systems to other VR devices.
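
The viewer-centered perspective that fixed-screen displays such as the ImmersaDesk and Infinity Wall rely on is typically produced with an off-axis (asymmetric) projection frustum computed from the tracked eye position relative to the physical screen. The sketch below is a minimal illustration of that standard construction, not code from the paper; the screen-corner coordinates, eye position, and near/far values are invented examples.

```python
import numpy as np

def off_axis_frustum(pa, pb, pc, pe, near, far):
    """Frustum extents (l, r, b, t) at the near plane for a tracked eye.

    pa, pb, pc: screen corners (lower-left, lower-right, upper-left) in world space.
    pe: tracked eye position. All inputs are 3-vectors (numpy arrays).
    """
    vr = (pb - pa) / np.linalg.norm(pb - pa)   # screen right axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)   # screen up axis
    vn = np.cross(vr, vu)
    vn /= np.linalg.norm(vn)                   # screen normal, pointing toward the eye

    va, vb, vc = pa - pe, pb - pe, pc - pe     # eye-to-corner vectors
    d = -np.dot(va, vn)                        # eye-to-screen-plane distance
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    return l, r, b, t                          # usable as glFrustum(l, r, b, t, near, far)

# Hypothetical 2 m-wide desk screen with the eye 0.6 m in front, slightly off-center.
pa = np.array([-1.0, 0.0, 0.0])   # lower-left corner
pb = np.array([ 1.0, 0.0, 0.0])   # lower-right corner
pc = np.array([-1.0, 1.2, 0.0])   # upper-left corner
pe = np.array([ 0.3, 0.9, 0.6])   # tracked eye position
print(off_axis_frustum(pa, pb, pc, pe, near=0.1, far=100.0))
```

As the eye moves, only the frustum extents change; the screen geometry stays fixed, which is exactly what distinguishes these displays from head-mounted ones.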


Proceedings of SPIE | 2013

CAVE2: a hybrid reality environment for immersive simulation and information analysis

Alessandro Febretti; Arthur Nishimoto; Terrance Thigpen; Jonas Talandis; Lance Long; Jd Pirtle; Tom Peterka; Alan Verlo; Maxine D. Brown; Dana Plepys; Daniel J. Sandin; Luc Renambot; Andrew E. Johnson; Jason Leigh

Hybrid Reality Environments represent a new kind of visualization space that blurs the line between virtual environments and high-resolution tiled display walls. This paper outlines the design and implementation of the CAVE2™ Hybrid Reality Environment. CAVE2 is the world's first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it enables users to simultaneously view both 2D and 3D information, providing more flexibility for mixed-media applications. CAVE2 is a cylindrical system 24 feet in diameter and 8 feet tall, and consists of 72 near-seamless, off-axis-optimized passive stereo LCD panels, creating an approximately 320-degree panoramic environment for displaying information at 37 megapixels (in stereoscopic 3D) or 74 megapixels (in 2D), with a horizontal visual acuity of 20/20. Custom LCD panels with shifted polarizers were built so that the images in the top and bottom rows of LCDs are optimized for vertical off-center viewing, allowing viewers to come closer to the displays while minimizing ghosting. CAVE2 is designed to support multiple operating modes. In the Fully Immersive mode, the entire room can be dedicated to one virtual simulation. In the 2D mode, the room can operate like a traditional tiled display wall, enabling users to work with large numbers of documents at the same time. In the Hybrid mode, a mixture of 2D and 3D applications can be supported simultaneously. The ability to treat immersive work spaces in this hybrid way has never been achieved before, and it leverages the special abilities of CAVE2 to enable researchers to seamlessly interact with large collections of 2D and 3D data. To realize this hybrid ability, we merged the Scalable Adaptive Graphics Environment (SAGE), a system for supporting 2D tiled displays, with Omegalib, a virtual reality middleware supporting OpenGL, OpenSceneGraph and VTK applications.
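
As a rough sanity check on the resolution figures quoted above, the arithmetic below assumes each of the 72 panels is a 1366 x 768 LCD (an assumption for illustration only; the abstract does not state the per-panel resolution) and comes out close to the quoted ~74-megapixel 2D and ~37-megapixel stereo numbers.

```python
# Back-of-the-envelope check of the CAVE2 pixel counts quoted in the abstract.
# The per-panel resolution is an assumption for illustration, not taken from the paper.
panels = 72
panel_w, panel_h = 1366, 768              # assumed per-panel resolution

pixels_2d = panels * panel_w * panel_h
pixels_stereo_per_eye = pixels_2d // 2    # row-interleaved passive stereo halves per-eye resolution

print(f"2D:     {pixels_2d / 1e6:.1f} megapixels")                    # ~75.5, close to the quoted ~74
print(f"Stereo: {pixels_stereo_per_eye / 1e6:.1f} megapixels per eye")  # ~37.8, close to the quoted ~37
```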


Central European Journal of Engineering | 2011

The future of the CAVE

Thomas A. DeFanti; Daniel Acevedo; Richard A. Ainsworth; Maxine D. Brown; Steven Matthew Cutchin; Gregory Dawe; Kai Doerr; Andrew E. Johnson; Chris Knox; Robert Kooima; Falko Kuester; Jason Leigh; Lance Long; Peter Otto; Vid Petrovic; Kevin Ponto; Andrew Prudhomme; Ramesh R. Rao; Luc Renambot; Daniel J. Sandin; Jürgen P. Schulze; Larry Smarr; Madhu Srinivasan; Philip Weber; Gregory Wickham

The CAVE, a walk-in virtual reality environment typically consisting of 4–6 3 m-by-3 m sides of a room made of rear-projected screens, was first conceived and built in 1991. In the nearly two decades since its conception, the supporting technology has improved so that current CAVEs are much brighter, at much higher resolution, and have dramatically improved graphics performance. However, rear-projection-based CAVEs typically must be housed in a 10 m-by-10 m-by-10 m room (allowing space behind the screen walls for the projectors), which limits their deployment to large spaces. The CAVE of the future will be made of tessellated panel displays, eliminating the projection distance, but the implementation of such displays is challenging. Early multi-tile, panel-based, virtual-reality displays have been designed, prototyped, and built for the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia by researchers at the University of California, San Diego, and the University of Illinois at Chicago. New means of image generation and control are considered key contributions to the future viability of the CAVE as a virtual-reality device.


Future Generation Computer Systems | 2009

Editorial: Special section: OptIPlanet - The OptIPuter global collaboratory

Larry Smarr; Maxine D. Brown; Cees de Laat

This special section summarizes the technological developments made by the OptIPuter research project as an OptIPlanet Collaboratory of virtual organizations across scientific and technology domains, enhancing and contributing to this evolving cyberinfrastructure to solve complex global problems. The OptIPuter project has developed an optical control plane, an infrastructure and distributed intelligence that controls the establishment and maintenance of connections in a network, along with algorithms for engineering an optimal path among endpoints. The Scalable Adaptive Graphics Environment (SAGE) visualization middleware, developed by OptIPuter partner the Electronic Visualization Laboratory, is an operating system for tiled-display environments that allows users to launch distributed visualization applications on remote computer clusters.
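
The "optimal path among endpoints" computation mentioned above is, at its core, a weighted shortest-path problem over available network links. The fragment below is a generic illustration of that idea (Dijkstra over link latencies on a made-up topology), not the OptIPuter control plane's actual algorithm.

```python
import heapq

def best_path(links, src, dst):
    """Dijkstra over a dict {node: {neighbor: latency_ms}}; returns (latency, path)."""
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, latency in links.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + latency, nxt, path + [nxt]))
    return float("inf"), []

# Made-up topology with one-way latencies in milliseconds.
links = {
    "Chicago":   {"Amsterdam": 95.0, "Seattle": 45.0},
    "Seattle":   {"Tokyo": 90.0},
    "Amsterdam": {"Tokyo": 250.0},
    "Tokyo":     {},
}
print(best_path(links, "Chicago", "Tokyo"))  # -> (135.0, ['Chicago', 'Seattle', 'Tokyo'])
```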


IEEE Computer Graphics and Applications | 2010

Ultrascale Collaborative Visualization Using a Display-Rich Global Cyberinfrastructure

Byungil Jeong; Jason Leigh; Andrew E. Johnson; Luc Renambot; Maxine D. Brown; Ratko Jagodic; Sungwon Nam; Hyejung Hur

The Scalable Adaptive Graphics Environment (SAGE) is high-performance graphics middleware for ultrascale collaborative visualization using a display-rich global cyberinfrastructure. Dozens of sites worldwide use this cyberinfrastructure middleware, which connects high-performance-computing resources over high-speed networks to distributed ultraresolution displays.
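
Middleware that drives ultraresolution tiled displays has to decide which portion of an application's framebuffer goes to which display node. The sketch below is an illustrative model of that mapping, not SAGE's actual code or API: it intersects a window rectangle with a grid of tiles to find the pixel region each tile should receive.

```python
from typing import Dict, Tuple

Rect = Tuple[int, int, int, int]  # x, y, width, height in wall pixels

def regions_per_tile(window: Rect, tile_w: int, tile_h: int,
                     cols: int, rows: int) -> Dict[Tuple[int, int], Rect]:
    """Map a window rectangle onto a cols x rows tiled wall.

    Returns, for each (col, row) tile the window overlaps, the sub-rectangle
    (in wall coordinates) that the node driving that tile must display.
    """
    wx, wy, ww, wh = window
    out = {}
    for col in range(cols):
        for row in range(rows):
            tx, ty = col * tile_w, row * tile_h
            # intersection of the window with this tile
            x0, y0 = max(wx, tx), max(wy, ty)
            x1, y1 = min(wx + ww, tx + tile_w), min(wy + wh, ty + tile_h)
            if x1 > x0 and y1 > y0:
                out[(col, row)] = (x0, y0, x1 - x0, y1 - y0)
    return out

# A 3000x1500 window placed across a hypothetical 4x2 wall of 1920x1080 tiles.
for tile, region in regions_per_tile((500, 300, 3000, 1500), 1920, 1080, 4, 2).items():
    print(tile, region)
```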


Communications of The ACM | 2008

Cyber-commons: merging real and virtual worlds

Jason Leigh; Maxine D. Brown

Cyber-mashups of very large data sets let users explore, analyze, and comprehend the science behind the information being streamed.


IEEE Computer Graphics and Applications | 1996

Virtual Reality Over High-Speed Networks

Thomas A. DeFanti; Maxine D. Brown; Rick Stevens

The Supercomputing 95/GII Testbed supported VR-to-VR and supercomputing-to-VR communications that enabled researchers to investigate complex problems over distance.


Future Generation Computer Systems | 2006

The first functional demonstration of optical virtual concatenation as a technique for achieving terabit networking

Akira Hirano; Luc Renambot; Byungil Jeong; Jason Leigh; Alan Verlo; Venkatram Vishwanath; Rajvikram Singh; Julieta C. Aguilera; Andrew E. Johnson; Thomas A. DeFanti; Lance Long; Nicholas Schwarz; Maxine D. Brown; Naohide Nagatsu; Yukio Tsukishima; Masahito Tomizawa; Yutaka Miyamoto; Masahiko Jinno; Yoshihiro Takigawa; Osamu Ishida

The optical virtual concatenation (OVC) function of the TERAbit-LAN was demonstrated for the first time at the iGrid 2005 workshop in San Diego, California. The TERAbit-LAN establishes a lambda group path (LGP) for an application, where the number of lambdas/L2 connections in an LGP can be specified by the application. Each LGP is logically treated as one end-to-end optical path, so during parallel transport the LGP channels have no relative latency deviation. However, optical path diversity (e.g. restoration) can cause relative latency deviations within an LGP and negatively affect quality of service. OVC hardware developed by NTT compensates for these relative latency deviations to achieve virtual bulk transport for the Electronic Visualization Laboratory's (EVL) Scalable Adaptive Graphics Environment application.
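
The core idea behind this latency compensation is deskewing: data striped across parallel lambda channels can only be reassembled once every channel's contribution has arrived, so faster channels are effectively buffered until the slowest one catches up. The toy model below illustrates that alignment step; it is not NTT's hardware design, and the delay values are invented.

```python
# Toy model of deskewing parallel channels in a lambda group path (LGP).
# Channel propagation delays (ms) are invented numbers for illustration.
channel_delay_ms = {"lambda_1": 5.0, "lambda_2": 5.8, "lambda_3": 5.2}

# A block striped over the channels is only complete when its slowest stripe arrives,
# so the effective delay of the group is the maximum channel delay...
group_delay = max(channel_delay_ms.values())

# ...and each faster channel must be buffered by its deviation from that maximum.
compensation = {ch: group_delay - d for ch, d in channel_delay_ms.items()}

print(f"effective LGP delay: {group_delay} ms")
for ch, extra in sorted(compensation.items()):
    print(f"{ch}: buffer {extra:.1f} ms to align with the slowest channel")
```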


Advances in Computers | 1991

Visualization in scientific computing

Thomas A. DeFanti; Maxine D. Brown

In computational science and engineering (CS&E), visualization enables researchers to analyze data and uncover new information. Computational scientists rely upon a host of high-volume data sources in order to conduct their research. However, they are deluged by the flood of data generated. Using an exclusively numerical format, the human brain cannot interpret gigabytes of data each day, so much information now goes to waste. It is impossible for users to ever quantitatively examine more than a tiny fraction of a solution; that is, it is impossible to investigate the qualitative, global nature of numeric solutions. Therefore, the ability to visualize complex computations and simulations is absolutely essential to ensure the integrity of analyses, to provoke insights, and to communicate those insights to others. The chapter focuses on visualization, a method of computing that gives visual form to complex data. The growing importance of CS&E, especially with supercomputer capabilities, is creating a commensurate need for more sophisticated visual representations of natural phenomena across time. This requires the development of new tool sets for image generation, visual communication, and analysis. The chapter discusses examples of scientific visualization facilitating CS&E research in the fields of planetary sciences, molecular modeling, mathematics, and medical imaging. The chapter also highlights the current limitations and bottlenecks in visualization technology, focusing on software, data-management, hardware, educational, and communication and publication limitations.


Future Generation Computer Systems | 2016

SAGE2: A collaboration portal for scalable resolution displays

Luc Renambot; Thomas Marrinan; Jillian Aurisano; Arthur Nishimoto; Victor A. Mateevitsi; Krishna Bharadwaj; Lance Long; Andrew E. Johnson; Maxine D. Brown; Jason Leigh

In this paper, we present SAGE2, a software framework that enables local and remote collaboration on Scalable Resolution Display Environments (SRDEs). An SRDE can be any configuration of displays, ranging from a single monitor to a wall of tiled flat-panel displays. SAGE2 creates a seamless ultra-high-resolution desktop across the SRDE. Users can wirelessly connect to the SRDE with their own devices in order to interact with the system. Many users can simultaneously use a drag-and-drop interface to transfer local documents and show them on the SRDE, use a mouse pointer and keyboard to interact with existing content on the SRDE, and share their screens so that they are viewable by all. SAGE2 can be used in many configurations and is able to support many communities working with various types of media and high-resolution content, from research meetings to creative sessions to education. SAGE2 is browser-based, utilizing a web server to host content, WebSockets for message passing, and HTML with JavaScript for rendering and interaction. Recent web developments, with the emergence of HTML5, have allowed browsers to use advanced rendering techniques without requiring plug-ins (canvas drawing, WebGL 3D rendering, native video playback, etc.). One major benefit of browser-based software is that there are no installation requirements for users and it is inherently cross-platform. A user simply needs a web browser on the device he or she wishes to use as an interaction tool for the SRDE. This considerably lowers the barrier to entry for engaging in meaningful collaboration sessions.
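
To illustrate the message-passing layer described above, the minimal sketch below uses the third-party Python websockets package (version 11 or later is assumed for the single-argument handler) to run a server that rebroadcasts every client message, such as a pointer move or an "open document" event, to all other connected browsers. The message format here is hypothetical and is not SAGE2's actual protocol.

```python
import asyncio
import json

import websockets  # third-party: pip install websockets (>= 11 assumed here)

CLIENTS = set()

async def handler(ws):
    """Register a browser client and rebroadcast everything it sends."""
    CLIENTS.add(ws)
    try:
        async for raw in ws:
            # Hypothetical message shape, e.g. {"type": "pointerMove", "x": 0.4, "y": 0.7};
            # an illustration only, not SAGE2's real protocol.
            event = json.loads(raw)
            payload = json.dumps({"from": id(ws), "event": event})
            await asyncio.gather(*(c.send(payload) for c in CLIENTS if c is not ws))
    finally:
        CLIENTS.discard(ws)

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()  # serve until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```

Because every participant is just a browser tab holding one WebSocket connection, the same loop serves display clients and interaction devices alike, which is what keeps the barrier to entry low.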

Collaboration


Dive into Maxine D. Brown's collaborations.

Top Co-Authors

Jason Leigh, University of Hawaii at Manoa
Daniel J. Sandin, University of Illinois at Chicago
Andrew E. Johnson, University of Illinois at Chicago
Luc Renambot, University of Illinois at Chicago
Lance Long, University of Illinois at Chicago
Gregory Dawe, University of California
Larry Smarr, University of California
Alan Verlo, University of Illinois at Chicago
Byungil Jeong, University of Illinois at Chicago