Arthur Nishimoto
University of Illinois at Chicago
Publication
Featured research published by Arthur Nishimoto.
Proceedings of SPIE | 2013
Alessandro Febretti; Arthur Nishimoto; Terrance Thigpen; Jonas Talandis; Lance Long; Jd Pirtle; Tom Peterka; Alan Verlo; Maxine D. Brown; Dana Plepys; Daniel J. Sandin; Luc Renambot; Andrew E. Johnson; Jason Leigh
Hybrid Reality Environments represent a new kind of visualization space that blurs the line between virtual environments and high-resolution tiled display walls. This paper outlines the design and implementation of the CAVE2™ Hybrid Reality Environment. CAVE2 is the world's first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it enables users to simultaneously view both 2D and 3D information, providing more flexibility for mixed-media applications. CAVE2 is a cylindrical system 24 feet in diameter and 8 feet tall, and consists of 72 near-seamless, off-axis-optimized passive stereo LCD panels, creating an approximately 320-degree panoramic environment for displaying information at 37 megapixels (in stereoscopic 3D) or 74 megapixels in 2D, at a horizontal visual acuity of 20/20. Custom LCD panels with shifted polarizers were built so that the images in the top and bottom rows of LCDs are optimized for vertical off-center viewing, allowing viewers to come closer to the displays while minimizing ghosting. CAVE2 is designed to support multiple operating modes. In the Fully Immersive mode, the entire room can be dedicated to one virtual simulation. In 2D mode, the room can operate like a traditional tiled display wall, enabling users to work with large numbers of documents at the same time. In the Hybrid mode, a mixture of both 2D and 3D applications can be supported simultaneously. The ability to treat immersive workspaces in this hybrid way has never been achieved before, and it leverages the special abilities of CAVE2 to enable researchers to seamlessly interact with large collections of 2D and 3D data. To realize this hybrid ability, we merged the Scalable Adaptive Graphics Environment (SAGE), a system for supporting 2D tiled displays, with Omegalib, a virtual reality middleware supporting OpenGL, OpenSceneGraph, and VTK applications.
IEEE Virtual Reality Conference | 2014
Alessandro Febretti; Arthur Nishimoto; Victor A. Mateevitsi; Luc Renambot; Andrew E. Johnson; Jason Leigh
In the domain of large-scale visualization instruments, hybrid reality environments (HREs) are a recent innovation that combines the best-in-class capabilities of immersive environments with those of ultra-high-resolution display walls. HREs create a seamless 2D/3D environment that supports both information-rich analysis and virtual reality simulation exploration at a resolution matching human visual acuity. Co-located research groups in HREs tend to work on a variety of tasks during a research session (sometimes in parallel), and these tasks require 2D data views, 3D views, linking between them, and the ability to bring in (or hide) data quickly as needed. In this paper we present Omegalib, a software framework that facilitates application development on HREs. Omegalib is designed to support dynamic reconfigurability of the display environment, so that areas of the display can be interactively allocated to 2D or 3D workspaces as needed. Compared to existing frameworks and toolkits, Omegalib makes it possible to have multiple immersive applications running on a cluster-controlled display system, have different input sources dynamically routed to applications, and have rendering results optionally redirected to a distributed compositing manager. Omegalib supports pluggable front-ends to simplify the integration of third-party libraries like OpenGL, OpenSceneGraph, and the Visualization Toolkit (VTK). We present examples of applications developed with Omegalib for the 74-megapixel, 72-tile CAVE2™ system, and show how a Hybrid Reality Environment proved effective in supporting the work of a co-located research group in the environmental sciences.
Future Generation Computer Systems | 2016
Luc Renambot; Thomas Marrinan; Jillian Aurisano; Arthur Nishimoto; Victor A. Mateevitsi; Krishna Bharadwaj; Lance Long; Andrew E. Johnson; Maxine D. Brown; Jason Leigh
In this paper, we present SAGE2, a software framework that enables local and remote collaboration on Scalable Resolution Display Environments (SRDE). An SRDE can be any configuration of displays, ranging from a single monitor to a wall of tiled flat-panel displays. SAGE2 creates a seamless ultra-high-resolution desktop across the SRDE. Users can wirelessly connect to the SRDE with their own devices in order to interact with the system. Many users can simultaneously utilize a drag-and-drop interface to transfer local documents and show them on the SRDE, use a mouse pointer and keyboard to interact with existing content on the SRDE, and share their screens so that they are viewable to all. SAGE2 can be used in many configurations and is able to support many communities working with various types of media and high-resolution content, from research meetings to creative sessions to education. SAGE2 is browser-based, utilizing a web server to host content, WebSockets for message passing, and HTML with JavaScript for rendering and interaction. Recent web developments, with the emergence of HTML5, have allowed browsers to use advanced rendering techniques without requiring plug-ins (canvas drawing, WebGL 3D rendering, native video playback, etc.). One major benefit of browser-based software is that there are no installation requirements for users and it is inherently cross-platform. A user simply needs a web browser on the device they wish to use as an interaction tool for the SRDE. This considerably lowers the barrier to entry for engaging in meaningful collaboration sessions.
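The browser-based message-passing design described in this abstract can be sketched in a few lines. The envelope shape, field names, and server URL below are illustrative assumptions for a WebSocket-driven display wall, not the actual SAGE2 wire protocol:

```javascript
// Sketch of how a browser client might package interaction events for a
// WebSocket-based display server. All message fields are hypothetical.

// Build a pointer-move message in a uniform JSON envelope.
function makePointerMessage(clientId, x, y) {
  return JSON.stringify({
    type: "pointerMove",
    client: clientId,
    data: { x: x, y: y } // normalized [0, 1] wall coordinates
  });
}

// Build a message announcing a drag-and-dropped local file.
function makeFileMessage(clientId, fileName, mimeType) {
  return JSON.stringify({
    type: "loadFile",
    client: clientId,
    data: { fileName: fileName, mimeType: mimeType }
  });
}

// In a real client, these strings would be sent over a WebSocket, e.g.:
//   const ws = new WebSocket("ws://display-server:9292"); // hypothetical URL
//   ws.onopen = () => ws.send(makePointerMessage("laptop-1", 0.5, 0.5));
```

Keeping every event in one JSON envelope (a `type` tag plus a `data` payload) is what lets a single server route documents, pointers, and screen shares from many devices at once.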
International Symposium on Visual Computing | 2011
Alessandro Febretti; Victor A. Mateevitsi; Dennis Chau; Arthur Nishimoto; Brad McGinnis; Jakub Misterka; Andrew E. Johnson; Jason Leigh
OmegaDesk is a device that allows for seamless interaction between 2D and 3D content. In order to develop this hybrid device, a new form of operating system is needed to manage and display heterogeneous content. In this paper we address the hardware and software requirements for such a system, as well as the challenges involved. A set of heterogeneous applications has been successfully developed on OmegaDesk. These applications allowed us to develop a set of guidelines to drive future investigations into 2D/3D hybridized viewing and interaction.
Leonardo | 2017
Daria Tsoupikova; Scott Rettberg; Roderick Coover; Arthur Nishimoto
Hearts and Minds: The Interrogations Project is an interactive virtual reality narrative performance made for the EVL’s CAVE2™ large-scale 320-degree panoramic virtual reality environment that visualizes stories of violence and the post-traumatic stress experienced by ordinary American soldiers who became torturers in the course of serving their country. During the American-led counterinsurgency and counter-terrorism campaigns in Iraq in the years after 11 September 2001, the torture and abuse of detainees was a commonplace tactic.
IEEE Scientific Visualization Conference (SciVis) | 2015
Peter Hanula; Kamil Piekutowski; Carlos Uribe; Kyle R. Almryde; Arthur Nishimoto; Julieta C. Aguilera; G. Elisabeta Marai
We present the design and implementation of an immersive visual mining and analysis tool for cosmological data. The tool consists of an immersive linked multiview display which allows domain experts to interact with visual representations of spatial and nonspatial cosmology data. Nonspatial data is represented as time-aligned merger trees and through a pixel-based heatmap. Spatial data is represented through GPU-accelerated point clouds and geometric primitives. The user can select a halo and visualize a 3D representation of the raw particles, as well as of the halos at a particular time stamp. We have demonstrated the tool to a senior staff member of the Adler Planetarium and report their feedback. The tool can assist researchers in the interactive navigation and mining of large-scale cosmological simulation data.
Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces | 2016
Thomas Marrinan; Arthur Nishimoto; Joseph A. Insley; Silvio Rizzi; Andrew E. Johnson; Michael E. Papka
Classic visual analysis relies on a single medium for displaying and interacting with data. Large-scale tiled display walls, virtual reality using head-mounted displays or CAVE systems, and collaborative touch screens have all been utilized for data exploration and analysis. We present our initial findings of combining numerous display environments and input modalities to create an interactive multi-modal display space that enables researchers to leverage various pieces of technology that will best suit specific sub-tasks. Our main contributions are 1) the deployment of an input server that interfaces with a wide array of interaction devices to create a single uniform stream of data usable by custom visual applications, and 2) three real-world use cases of leveraging multiple display environments in conjunction with one another to enhance scientific discovery and data dissemination.
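The input server described in this abstract maps many interaction devices into "a single uniform stream of data." A minimal sketch of that normalization step is below; the event schema and field names are illustrative assumptions, not the paper's actual format:

```javascript
// Sketch of an input-server normalization step: device-specific events
// (touch, mouse, motion-tracked wand) are mapped into one uniform event
// record that downstream visual applications can consume.
// The schema here is a hypothetical example.

function normalizeEvent(source, raw) {
  const base = { source: source, timestamp: raw.t };
  switch (source) {
    case "touch":
      // Touch surfaces report per-finger contacts.
      return { ...base, kind: "pointer", x: raw.tx, y: raw.ty, id: raw.finger };
    case "mouse":
      // A mouse is a single pointer; give it a fixed id.
      return { ...base, kind: "pointer", x: raw.mx, y: raw.my, id: 0 };
    case "wand":
      // A motion-tracked wand carries a 3D position; flag it as spatial input.
      return { ...base, kind: "spatial", x: raw.px, y: raw.py, z: raw.pz, id: raw.deviceId };
    default:
      throw new Error("unknown input source: " + source);
  }
}
```

Because every device resolves to the same record shape, an application written against this stream works unchanged whether it runs on a touch wall, a desktop, or a tracked immersive system.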
IEEE Virtual Reality Conference | 2015
Daria Tsoupikova; Roderick Coover; Scott Rettberg; Arthur Nishimoto
Hearts and Minds: The Interrogations Project is an interactive virtual reality narrative performance developed in Unity for the CAVE2™ large-scale 320-degree panoramic virtual reality environment at the Electronic Visualization Laboratory (EVL) at the University of Illinois Chicago (UIC). The work provides an experience of true stories of abusive violence, battlefield torture, and post-traumatic stress during the American-led counterinsurgency and counter-terrorism campaigns in Iraq in the years after September 11, 2001. The immersive virtual reality environment of the CAVE2™ is used both to communicate difficult truths about torture and to produce an affective impression of how these practices were experienced by individual American soldiers.
Conference on Computability in Europe | 2013
Khairi Reda; Dennis Chau; Yasser Mostafa; Nagarajan Sujatha; Jason Leigh; Arthur Nishimoto; Edward Kahler; Jason Demeter
The proliferation of multi-touch, tabletop display systems during the last few years has made them an attractive option for interactive, multiuser applications such as museum exhibits and video games. While there is a large body of research on the use of multi-touch and tabletop devices in general-purpose applications, far less research has investigated the use of these systems in video games and other entertainment applications. This paper provides a set of guidelines specific to multi-touch displays that can be used to augment existing video game development principles. Through examples, we illustrate how the unique capabilities of multi-touch displays can be leveraged to create unique forms of gameplay that offer highly engaging multiplayer game experiences. We describe three multiplayer games that were developed by students as part of an interdisciplinary course in video game design.
Collaborative Computing | 2014
Thomas Marrinan; Jillian Aurisano; Arthur Nishimoto; Krishna Bharadwaj; Victor A. Mateevitsi; Luc Renambot; Lance Long; Andrew E. Johnson; Jason Leigh