Michael E. Goss
Hewlett-Packard
Publication
Featured research published by Michael E. Goss.
symposium on volume visualization | 1998
Craig M. Wittenbrink; Thomas Malzbender; Michael E. Goss
Volume rendering creates images from sampled volumetric data. The compute-intensive nature of volume rendering has driven research in algorithm optimization. An important speed optimization is the use of preclassification and preshading. The authors demonstrate a visible artifact that results when preclassified or preshaded colors and opacity values are interpolated separately. They present an improved technique, opacity-weighted color interpolation; evaluate its RMS error improvement and its hardware and algorithm efficiency; and demonstrate the resulting improvements. They show analytically that opacity-weighted color interpolation exactly reproduces material-based interpolation results for certain volume classifiers, while retaining the efficiencies of preclassification. The proposed technique may also have broad impact on opacity-texture-mapped polygon rendering.
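A minimal single-channel sketch of the contrast the abstract draws: interpolating color and opacity independently lets a fully transparent sample's stored color bleed into the result, while opacity-weighted interpolation premultiplies color by opacity before interpolating. The function names and scalar setup are illustrative, not the paper's notation.

```python
def interp_separate(c0, a0, c1, a1, t):
    """Naive scheme: interpolate preclassified color and opacity
    independently.  A transparent sample's stored color still bleeds
    into the result, producing the artifact the paper demonstrates."""
    c = (1.0 - t) * c0 + t * c1
    a = (1.0 - t) * a0 + t * a1
    return c, a

def interp_opacity_weighted(c0, a0, c1, a1, t):
    """Opacity-weighted color interpolation: interpolate the
    alpha-premultiplied color (a*c) and alpha, then unweight, so
    transparent samples contribute no color."""
    a = (1.0 - t) * a0 + t * a1
    wc = (1.0 - t) * (a0 * c0) + t * (a1 * c1)
    c = wc / a if a > 0.0 else 0.0
    return c, a

# Opaque white sample (c=1, a=1) blended halfway with a fully
# transparent sample whose stored color is 0.3: the naive result
# darkens to 0.65, while opacity-weighted interpolation keeps c = 1.0.
print(interp_separate(1.0, 1.0, 0.3, 0.0, 0.5))          # (0.65, 0.5)
print(interp_opacity_weighted(1.0, 1.0, 0.3, 0.0, 0.5))  # (1.0, 0.5)
```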
ACM Transactions on Multimedia Computing, Communications, and Applications | 2005
Harlyn Baker; Nina Bhatti; Donald Tanguay; Irwin Sobel; Dan Gelb; Michael E. Goss; W. Bruce Culbertson; Thomas Malzbender
Coliseum is a multiuser immersive remote teleconferencing system designed to provide collaborative workers the experience of face-to-face meetings from their desktops. Five cameras are attached to each PC display and directed at the participant. From these video streams, view synthesis methods produce arbitrary-perspective renderings of the participant and transmit them to others at interactive rates, currently about 15 frames per second. Combining these renderings in a shared synthetic environment gives the appearance of having all participants interacting in a common space. In this way, Coliseum enables users to share a virtual world, with acquired-image renderings of their appearance replacing the synthetic representations provided by more conventional avatar-populated virtual worlds. The system supports virtual mobility---participants may move around the shared space---and reciprocal gaze, and has been demonstrated in collaborative sessions of up to ten Coliseum workstations, including sessions spanning two continents.
Coliseum is a complex software system which pushes commodity computing resources to the limit. We set out to measure resource usage---network, CPU, memory, and disk---to uncover the bottlenecks and guide enhancement and control of system performance. Latency is a key component of Quality of Experience for video conferencing; we show how each aspect of the system---cameras, image processing, networking, and display---contributes to total latency. Performance measurement is as complex as the system to which it is applied. We describe several techniques for estimating performance, from direct, lightweight instrumentation to realistic end-to-end measures that mimic actual user experience, and show how these techniques can be used to improve system performance for Coliseum and other network applications. This article summarizes the Coliseum technology and reports on issues related to its performance---its measurement, enhancement, and control.
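A minimal sketch of the kind of lightweight per-stage instrumentation the abstract describes for attributing total latency to pipeline stages. The stage names here are illustrative placeholders, not Coliseum's actual pipeline, and the timer class is an assumption for the sake of the example.

```python
import time

class StageTimer:
    """Accumulate per-stage wall-clock latencies along a pipeline."""
    def __init__(self):
        self.laps = []                     # (stage_name, seconds) pairs
        self._last = time.perf_counter()

    def lap(self, stage):
        """Record the time elapsed since the previous lap as `stage`."""
        now = time.perf_counter()
        self.laps.append((stage, now - self._last))
        self._last = now

    def total(self):
        """Total latency across all recorded stages."""
        return sum(dt for _, dt in self.laps)

timer = StageTimer()
# ... capture a frame ...
timer.lap("camera")
# ... run view synthesis ...
timer.lap("render")
# ... transmit to peers ...
timer.lap("network")
for stage, dt in timer.laps:
    print(f"{stage}: {dt * 1000:.2f} ms")
```

End-to-end measures (e.g. filming a clock on one screen through the far end's display) complement such instrumentation by capturing latency the in-process timers cannot see.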
acm multimedia | 2003
Harlyn Baker; Nina Bhatti; Donald Tanguay; Irwin Sobel; Dan Gelb; Michael E. Goss; John MacCormick; Kei Yuasa; W. Bruce Culbertson; Thomas Malzbender
Coliseum is a multiuser immersive remote teleconferencing system designed to provide collaborative workers the experience of face-to-face meetings from their desktops. Five cameras are attached to each PC display and directed at the participant. From these video streams, view synthesis methods produce arbitrary-perspective renderings of the participant and transmit them to others at interactive rates, currently about 15 frames per second. Combining these renderings in a shared synthetic environment gives the appearance of having all participants interacting in a common space. In this way, Coliseum enables users to share a virtual world, with acquired-image renderings of their appearance replacing the synthetic representations provided by more conventional avatar-populated virtual worlds. The system supports virtual mobility--participants may move around the shared space--and reciprocal gaze, and has been demonstrated in collaborative sessions of up to ten Coliseum workstations, and sessions spanning two continents. This paper summarizes the technology, and reports on issues related to its performance.
international conference on computer graphics and interactive techniques | 1998
Michael E. Goss; Kei Yuasa
Three-dimensional scenes have become an important form of content deliverable through the Internet. Standard formats such as Virtual Reality Modeling Language (VRML) make it possible to dynamically download complex scenes from a server directly to a web browser. However, limited bandwidth between servers and clients presents an obstacle to the availability of more complex scenes, since geometry and texture maps for a reasonably complex scene may take many minutes to transfer over a typical telephone modem link. This paper addresses one part of the bandwidth bottleneck, texture transmission. Current display methods transmit an entire texture to the client before it can be used for rendering. We present an alternative method which subdivides each texture into tiles, and dynamically determines on the client which tiles are visible to the user. Texture tiles are requested by the client in an order determined by the number of screen pixels affected by the texture tile, so that texture tiles which affect the greatest number of screen pixels are transmitted first. The client can render images during texture loading using tiles which have already been loaded. The tile visibility calculations take full account of occlusion and multiple texture image resolution levels, and are dynamically recalculated each time a new frame is rendered. We show how a few additions to the standard graphics hardware pipeline can add this capability without radical architecture changes, and with only moderate hardware cost. The addition of this capability makes it practical to use large textures even over relatively slow network connections.
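The tile-ordering idea above can be sketched in a few lines: given each tile's current screen-pixel coverage (recomputed every rendered frame in the paper's scheme), request visible tiles in descending coverage order and skip occluded ones entirely. The function name and dict-based interface are illustrative assumptions, not the paper's API.

```python
def tile_request_order(pixel_counts):
    """Order texture tiles so those covering the most screen pixels
    load first.

    pixel_counts: dict mapping tile_id -> number of screen pixels the
    tile currently affects.  Tiles with zero coverage (occluded or
    off-screen) are not requested at all.
    """
    return [tile for _, tile in
            sorted((-count, tile)
                   for tile, count in pixel_counts.items()
                   if count > 0)]

# Tile "b" dominates the view, "a" is partly visible, "c" is occluded.
print(tile_request_order({"a": 10, "b": 50, "c": 0}))  # ['b', 'a']
```

Because the client re-runs this prioritization each frame, tiles that become visible as the viewpoint moves are promoted automatically, and rendering can proceed with whatever tiles have already arrived.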
Archive | 2002
Thomas Malzbender; W. Bruce Culbertson; Harlyn Baker; Michael E. Goss; Daniel G. Gelb; Irwin Sobel; Donald Tanguay
Archive | 2002
Harlyn Baker; Donald Tanguay; Irwin Sobel; Dan Gelb; Michael E. Goss; W. Bruce Culbertson; Thomas Malzbender
Archive | 2003
John G. Apostolopoulos; Nina Bhatti; W. Culbertson; Daniel G. Gelb; Michael E. Goss; Thomas Malzbender; Kei Yuasa
Archive | 1998
Kei Yuasa; Michael E. Goss
Archive | 1999
Michael E. Goss; Kei Yuasa
Archive | 2003
Michael E. Goss; Daniel G. Gelb; Thomas Malzbender