Todd Margolis
University of California, San Diego
Publications
Featured research published by Todd Margolis.
Future Generation Computer Systems | 2011
Todd Margolis; Sheldon Brown; Tracy Cornish; Hector Bracho; Michael Stanton; Tereza Cristina M. B. Carvalho; Fernando F. Redigolo; Fábio Carneiro de Castro; Kunitake Kaneko; Jane de Almeida; Cicero Inacio da Silva; Eunézio Antônio de Souza
The aesthetic potentials of 4K digital cinema are an impetus for creative practitioners of cinematic arts to undertake full investigations of the methods and expressive possibilities of this end-to-end digital medium. This includes new approaches to production, distribution and the forum of theatrical experience. A group of artists, film-makers and computer scientists have been developing a series of 4K digital cinema projects that have diffused and expanded the ways in which 4K cinema is experienced across Brazil, the US and Japan. The recent culmination of this was an event in July 2009 in which a 4K feature-length movie had its world premiere on three continents, streamed from Brazil to the US and Japan. This was accompanied by an HD video teleconference (VTC) between the three sites. To the knowledge of the participants, this was both the first uncompressed HD VTC between the northern and southern hemispheres and the first feature-length 4K film to be streamed across three continents. Through this work, the new creative affordances of 4K cinema were highlighted, along with the new capabilities of cinematic distribution, production and experience.
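To ground the scale of these claims, a back-of-the-envelope calculation of the uncompressed bitrates involved is useful. The sketch below assumes illustrative formats (DCI 4K at 24 fps with 10-bit 4:2:2 sampling, and 1080p60 HD at 8-bit 4:2:2); the exact formats used at the 2009 event are not stated in the abstract.

```python
# Back-of-the-envelope bitrates for uncompressed video streams.
# Frame size, frame rate, bit depth and chroma subsampling below are
# illustrative assumptions, not the documented formats of the event.

def uncompressed_mbps(width, height, fps, bits_per_pixel):
    """Raw video bitrate in megabits per second."""
    return width * height * fps * bits_per_pixel / 1e6

# DCI 4K, 24 fps, 10-bit 4:2:2 sampling (~20 bits per pixel)
print(f"4K/24p uncompressed:     {uncompressed_mbps(4096, 2160, 24, 20):,.0f} Mbps")

# 1080p60 HD teleconference, 8-bit 4:2:2 sampling (16 bits per pixel)
print(f"HD 1080p60 uncompressed: {uncompressed_mbps(1920, 1080, 60, 16):,.0f} Mbps")
```

Under these assumptions the uncompressed HD teleconference alone approaches 2 Gbps per direction and the 4K stream exceeds 4 Gbps, which is why such events depend on dedicated high-speed research networks.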
Proceedings of SPIE | 2013
Todd Margolis; Tracy Cornish
As media technologies become increasingly affordable, compact and inherently networked, new generations of telecollaborative platforms continue to arise which integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the “real world”. Meanwhile, Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverses this precept, enhancing dynamic media landscapes and immersive physical display environments to enable intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large-format interactive display surfaces, video teleconferencing and spatialized audio built on a high-speed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high-resolution video playback, 3D visualization, screencasting, and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interfaces, etc.), directional and spatialized audio, gigapixel image interactivity, 4K video streaming, 3D visualization and telematic production. This paper explains the design process that has been utilized to make Vroom an accessible and intuitive immersive environment for remote collaboration, specifically for digital cinema production.
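The abstract does not detail Vroom's software interfaces. As a minimal sketch of one recurring problem in any tiled display environment, the following computes each tile's viewport within a shared virtual canvas while compensating for monitor bezels; all names and dimensions here are hypothetical, not Vroom's actual API.

```python
# Minimal sketch of bezel-aware tile layout for a tiled display wall.
# All class names and dimensions are illustrative, not Vroom's API.

from dataclasses import dataclass

@dataclass
class WallConfig:
    cols: int       # tiles across
    rows: int       # tiles down
    tile_w: int     # visible pixels per tile, horizontal
    tile_h: int     # visible pixels per tile, vertical
    bezel_x: int    # horizontal gap between tiles, in pixel units
    bezel_y: int    # vertical gap between tiles, in pixel units

def tile_viewport(cfg: WallConfig, col: int, row: int):
    """Return (x, y, w, h) of one tile's window into the global canvas.

    Bezels are treated as 'hidden' pixels so imagery stays geometrically
    continuous across the physical gaps between monitors.
    """
    x = col * (cfg.tile_w + cfg.bezel_x)
    y = row * (cfg.tile_h + cfg.bezel_y)
    return (x, y, cfg.tile_w, cfg.tile_h)

cfg = WallConfig(cols=4, rows=2, tile_w=1920, tile_h=1080, bezel_x=40, bezel_y=40)
for r in range(cfg.rows):
    for c in range(cfg.cols):
        print((r, c), tile_viewport(cfg, c, r))
```

Treating bezels as hidden pixels keeps imagery geometrically continuous across the monitor gaps, at the cost of occluding a thin strip of the canvas behind each bezel.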
Proceedings of SPIE | 2012
Todd Margolis; Tracy Cornish; Rodney Berry; Thomas A. DeFanti
Our contemporary imaginings of technological engagement with digital environments have transitioned from flying through Virtual Reality to mobile interactions with the physical world through personal media devices. Experiences technologically mediated through social interactivity within physical environments are now being preferred over isolated environments such as CAVEs or HMDs. Examples of this trend can be seen in early tele-collaborative artworks which strove to use advanced networking to join multiple participants in shared virtual environments. Recent developments in mobile AR allow untethered access to such shared realities in places far removed from labs and home entertainment environments, and without the bulky and expensive body-worn technologies that accompany most VR. This paper addresses the emerging trend favoring socially immersive artworks via mobile Augmented Reality rather than sensorially immersive Virtual Reality installations. With particular focus on AR as a mobile, locative technology, we discuss how concepts of immersion and interactivity are evolving with this new medium. Immersion in the context of mobile AR can be redefined to describe socially interactive experiences. Having distinctly different sensory, spatial and situational properties, mobile AR offers a new form for remixing elements of traditional virtual reality with physically based social experiences. This type of immersion offers a wide array of potential for mobile AR art forms. We are beginning to see examples of how artists can use mobile AR to create socially immersive and interactive experiences.
IEEE Virtual Reality Conference | 2009
Ruth West; Joachim Gossmann; Todd Margolis; Jürgen P. Schulze; J. P. Lewis; Ben Hackbarth; Iman Mostafavi
ATLAS in silico is an interactive installation/virtual environment that provides an aesthetic encounter with metagenomics data (and contextual metadata) from the Global Ocean Survey (GOS). The installation creates a visceral experience of the abstraction of nature into vast data collections - a practice that connects expeditionary science of the 19th Century with 21st Century expeditions like the GOS. Participants encounter a dream-like, highly abstract, and data-driven virtual world that combines the aesthetics of fine-lined copper engraving and grid-like layouts of 19th Century scientific representation with 21st Century digital aesthetics including wireframes and particle systems. It is resident at the Calit2 Immersive Visualization Laboratory on the campus of UC San Diego, where it continues in active development. The installation utilizes a combination of infrared motion tracking, custom computer vision, multi-channel (10.1) spatialized interactive audio, 3D graphics, data sonification, audio design, networking, and the Varrier™ 60-tile, 100-million pixel barrier-strip auto-stereoscopic display. Here we describe the physical and audio display systems for the installation, along with a hybrid strategy for multi-channel spatialized interactive audio rendering in immersive virtual reality, developed in the context of this artwork, that combines amplitude-, delay- and physical modeling-based real-time spatialization approaches for enhanced expressivity in the virtual sound environment. The desire to represent a combination of qualitative and quantitative multidimensional, multi-scale data informs the artistic process and overall system design. We discuss the resulting aesthetic experience in relation to the overall system.
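The hybrid renderer itself is not reproduced in the abstract. As a toy sketch of two of its three ingredients, the following computes per-speaker amplitude and delay for a virtual source over a ten-speaker ring (loosely matching the 10.1 channel count); the physical-modeling component is omitted, and the geometry and constants are invented for illustration.

```python
# Toy amplitude + delay spatialization over a ring of speakers.
# Illustrative only: the installation's actual hybrid renderer also
# used physical modeling, which this sketch omits.

import math

SPEED_OF_SOUND = 343.0  # m/s

def speaker_ring(n, radius):
    """n speakers evenly spaced on a circle of the given radius (meters)."""
    return [(radius * math.cos(2 * math.pi * i / n),
             radius * math.sin(2 * math.pi * i / n)) for i in range(n)]

def render_params(source, speakers, rolloff=1.0):
    """Per-speaker (gain, delay_seconds) for a virtual source position.

    Gain falls off with distance; delay is the acoustic travel time,
    so nearer speakers are both louder and earlier.
    """
    params = []
    for sx, sy in speakers:
        d = max(math.hypot(source[0] - sx, source[1] - sy), 0.1)
        gain = 1.0 / d ** rolloff
        delay = d / SPEED_OF_SOUND
        params.append((gain, delay))
    # Normalize gains so total power stays constant as the source moves.
    norm = math.sqrt(sum(g * g for g, _ in params))
    return [(g / norm, t) for g, t in params]

for gain, delay in render_params((1.5, 0.5), speaker_ring(10, 3.0)):
    print(f"gain={gain:.3f}  delay={delay * 1000:.1f} ms")
```

Nearer speakers are rendered both louder and earlier, which is the basic cue combination that amplitude- and delay-based panning exploit.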
Archive | 2015
David Srour; John Mangan; Aliya Hoff; Todd Margolis; Jessica Block; Matthew L. Vincent; Thomas A. DeFanti; Thomas E. Levy; Falko Kuester
Scientific storytelling can be utilized to effectively convey research by synthesizing media-based data sets within a narrative frame. The “MediaCommons Framework” (MCF) was developed to efficiently integrate high-resolution media via cluster display systems for immersive, collaborative visualization. By incorporating temporal, spatial, and audio localization components into a wide range of high-resolution media types, the MediaCommons Framework provides an ideal platform for scientific storytelling as it offers a coherent view of contextualized data, imparting a more engaging and intelligible experience to the public. As a case study, we describe our experiences using the framework to develop a storytelling application for the 2013 EX3: Exodus, Cyber-archaeology, and the Future digital museum exhibit. The use of the MCF within EX3 demonstrates the significance of advanced visualization in archaeology and the urgency of advancing cyber-archaeological and other transdisciplinary research endeavors. This chapter’s focus is on the technology used during the Exodus exhibit to convey stories to an audience within a museum-like environment. It includes high-level technical details about the framework used, our experiences developing and using storytelling applications for tiled-display walls, and the outcome of the public events these applications were used in. The experiences mentioned in this chapter are from the points of view of the framework’s developers, exhibit content creators and managers, as well as archaeologists who have had the chance to use the applications to tell their stories to their targeted audience. This research embodies the ancient world-building methodology outlined in Seldess et al. (Chap. 11).
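The framework's schema is not given in this abstract. A hedged sketch of the kind of record its description implies, pairing a high-resolution asset with temporal and spatial localization on a tiled wall, might look as follows; every class and field name here is invented for illustration, not the MCF's actual schema.

```python
# Hypothetical sketch of an asset record combining the temporal,
# spatial and audio localization the chapter describes. All class
# and field names are invented, not the MediaCommons Framework's schema.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MediaAsset:
    uri: str                        # path or URL of the media file
    kind: str                       # "image" | "video" | "audio" | "model"
    start_s: float                  # entry point on the narrative timeline
    duration_s: float               # how long the asset stays active
    wall_rect: tuple                # (x, y, w, h) placement on the display wall
    audio_channel: Optional[int] = None  # speaker routing, if any
    caption: str = ""

@dataclass
class Story:
    title: str
    assets: List[MediaAsset] = field(default_factory=list)

    def active_at(self, t: float) -> List[MediaAsset]:
        """Assets that should be on screen at narrative time t (seconds)."""
        return [a for a in self.assets
                if a.start_s <= t < a.start_s + a.duration_s]

story = Story("EX3 excavation walkthrough")
story.assets.append(MediaAsset("site_ortho.tif", "image", 0.0, 30.0,
                               (0, 0, 7680, 2160), caption="Site overview"))
print([a.caption for a in story.active_at(12.0)])
```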
Optical Fiber Communication Conference | 2013
Naohisa Ohta; Todd Margolis; Tracy Cornish; Janak Bhimani; Ali Almahr; Daisuke Shirai; Devin O'Hara; Wayne McLemore
We developed a new collaborative scheme for creating high-quality documentary content with cloud-based content sharing and collaborative directing/editing capabilities in remote locations. The platform was examined iteratively through real documentary productions over high-speed networked environments.
The New Review of Hypermedia and Multimedia | 2012
Jean-François Lucas; Tracy Cornish; Todd Margolis
Mixed reality defines the sharing of a space-time between the real and the virtual world. The definition of this concept is further extended when virtual worlds such as Second Life® (SL) are included. Through cultural events such as concerts and operas, we will see that the main goal of these kinds of projects is not simply to offer a video and audio broadcast of these events in the digital dimension. The current challenge is to create interactions between the individuals who are in different shared spaces. By studying the unfolding of these events in their various phases—before, during, and after—we examine the culture of the event. We question how the culture of the event can be transposed into a mixed reality display, and how this kind of event can affect people on both sides of the “membrane” created by the technical configuration. Beyond the alignments and adjustments that we can see between the different individuals involved in these events, we examine more broadly the changes and mutations of the culture of the event in this specific configuration.
International Conference on Computer Graphics and Interactive Techniques | 2009
Ruth West; J. P. Lewis; Todd Margolis; Joachim Gossmann; Jürgen P. Schulze; Daniel Tenedorio; Rajvikram Singh
This aesthetically impelled work explores the use of dimensional glyphs generated by a custom meta-shape grammar algorithm to visually differentiate individual records from a massive metagenomics dataset comprising 17.4 million sequences, and to place them in a human context to reflect on the digitization of nature and culture. The Global Ocean Sampling Expedition, conducted by the J. Craig Venter Institute, studies the genetics of communities of marine microorganisms throughout the world's oceans, which sequester carbon from the atmosphere with potentially significant effects on global climate. The vast dataset contains DNA sequences and 17.4 million associated predicted amino-acid sequences called ORFs (Open Reading Frames), along with a series of metadata descriptors.
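The meta-shape grammar is not reproduced in this abstract. As a hedged sketch of the general idea, the following maps a few record attributes (invented here) to glyph parameters through simple deterministic rules, so that each of the millions of records yields a distinct but data-driven form; the installation's actual grammar is far richer.

```python
# Toy sketch of rule-based glyph generation from sequence metadata.
# Attribute names and mapping rules are invented for illustration;
# they are not the installation's actual meta-shape grammar.

import hashlib

def glyph_params(orf_id: str, length: int, gc_fraction: float, depth_m: float):
    """Map one record to deterministic glyph parameters.

    A hash of the record ID seeds fine variation, so every record is
    unique, while measured attributes drive the glyph's gross form.
    """
    seed = int(hashlib.sha256(orf_id.encode()).hexdigest(), 16)
    return {
        "arms": 3 + seed % 6,                  # 3-8 radial arms
        "radius": min(length / 100.0, 10.0),   # size tracks sequence length
        "twist": gc_fraction * 360.0,          # GC content sets rotation
        "hue": (depth_m % 100.0) / 100.0,      # sampling depth sets color
    }

print(glyph_params("GOS_ORF_0001", length=842, gc_fraction=0.47, depth_m=12.0))
```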
Proceedings of SPIE | 2014
Todd Margolis
We are witnessing an explosion of new human-computer interaction (HCI) devices for both laboratory research and home use. Given these new affordances in user interfaces (UI), how can gestures be used to improve interaction in large-scale immersive display environments? Through the investigation of full-body, head and hand tracking, this paper discusses various modalities of gesture recognition and compares their usability to other forms of interactivity. We explore a specific implementation of hand-gesture tracking within a large tiled display environment for use with common collaborative media interaction activities.
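As a minimal illustration of the hand-tracking modality discussed, the sketch below classifies a horizontal swipe from a short buffer of tracked hand positions; the thresholds and coordinate convention are assumptions, not the paper's implementation.

```python
# Minimal swipe detector over tracked hand positions, as one example
# of the hand-gesture modality discussed. Thresholds are illustrative.

def detect_swipe(samples, min_dx=0.4, max_dy=0.15):
    """Classify a buffer of (t, x, y) hand samples (normalized units).

    Returns "swipe_left", "swipe_right", or None. A swipe is a large
    horizontal displacement with little vertical drift.
    """
    if len(samples) < 2:
        return None
    (_, x0, y0), (_, x1, y1) = samples[0], samples[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dy) > max_dy or abs(dx) < min_dx:
        return None
    return "swipe_right" if dx > 0 else "swipe_left"

# Hand moving right across the tracked volume over ~0.5 s:
trace = [(i * 0.05, 0.1 + i * 0.06, 0.50 + 0.01 * (i % 2)) for i in range(10)]
print(detect_swipe(trace))  # -> "swipe_right"
```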
Proceedings of SPIE | 2011
Todd Margolis; Thomas A. DeFanti; Gregory Dawe; Andrew Prudhomme; Jürgen P. Schulze; Steve Cutchin