
Publication


Featured research published by Lawrence J. Rosenblum.


Proceedings IEEE and ACM International Symposium on Augmented Reality (ISAR 2000) | 2000

Information filtering for mobile augmented reality

Simon J. Julier; Marco Lanzagorta; Yohan Baillot; Lawrence J. Rosenblum; Steven Feiner; Tobias Höllerer; Sabrina Sestito

Augmented reality is a potentially powerful paradigm for annotating the (real) environment with computer-generated material. These benefits will be even greater when augmented reality systems become mobile and wearable. However, to minimize the problem of clutter and to maximize the effectiveness of the display, algorithms must be developed to select only the most important information for the user. In this paper, we describe a region-based information filtering algorithm. The algorithm takes account of the state of the user (location and intent) and the state of individual objects about which information can be presented. It can dynamically respond to changes in the environment and the user's state. We also describe how simple temporal, distance, and angle cues can be used to refine the transitions between different information sets.
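The paper's own algorithm is not reproduced in this abstract, but the core idea of region-based filtering driven by user state can be sketched as follows. All names, the region model, and the priority scheme below are illustrative assumptions, not the authors' design:

```python
# Minimal sketch of region-based information filtering for mobile AR:
# keep only objects in regions relevant to the user's location and intent,
# then truncate to the highest-priority items to limit display clutter.
from dataclasses import dataclass

@dataclass
class WorldObject:
    name: str
    region: str       # region of the environment the object belongs to
    priority: float   # importance of the object's information

def filter_objects(objects, user_region, intent_regions, max_items=5):
    """Select objects in the user's region or in regions tied to the
    user's current intent, highest priority first."""
    relevant = [o for o in objects
                if o.region == user_region or o.region in intent_regions]
    relevant.sort(key=lambda o: o.priority, reverse=True)
    return relevant[:max_items]

objects = [
    WorldObject("exit sign", "lobby", 0.9),
    WorldObject("poster", "lobby", 0.2),
    WorldObject("lab bench", "lab", 0.7),
    WorldObject("cafe menu", "cafe", 0.5),
]
shown = filter_objects(objects, user_region="lobby", intent_regions={"lab"})
print([o.name for o in shown])  # ['exit sign', 'lab bench', 'poster']
```

A real system would update `user_region` and `intent_regions` continuously and smooth transitions with the temporal, distance, and angle cues the abstract mentions.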


Computers & Graphics | 2001

User Interface Management Techniques for Collaborative Mobile Augmented Reality

Tobias Höllerer; Steven Feiner; Drexel Hallaway; Blaine Bell; Marco Lanzagorta; Dennis G. Brown; Simon J. Julier; Yohan Baillot; Lawrence J. Rosenblum

Mobile Augmented Reality Systems (MARS) have the potential to revolutionize the way in which information is provided to users. Virtual information can be directly integrated with the real world surrounding the mobile user, who can interact with it to display related information, to pose and resolve queries, and to collaborate with other users. However, we believe that the benefits of MARS will only be achieved if the user interface (UI) is actively managed so as to maximize the relevance and minimize the confusion of the virtual material relative to the real world. This article addresses some of the steps involved in this process, focusing on the design and layout of the mobile user’s overlaid virtual environment. The augmented view of the user’s surroundings presents an interface to context-dependent operations, many of which are related to the objects in view—the augmented world is the user interface. We present three user interface design techniques that are intended to make this interface as obvious and clear to the user as possible: information filtering, UI component design, and view management. Information filtering helps select the most relevant information to present to the user. UI component design determines the format in which this information should be conveyed, based on the available display resources and tracking accuracy. For example, the absence of high-accuracy position tracking would favor body- or screen-stabilized components over world-stabilized ones that would need to be exactly registered with the physical objects to which they refer. View management attempts to ensure that the virtual objects that are displayed visually are arranged appropriately with regard to their projections on the view plane. For example, the relationships among objects should be as unambiguous as possible, and physical or virtual objects should not obstruct the user’s view of more important physical or virtual objects in the scene.
We illustrate these interface design techniques using our prototype collaborative, cross-site MARS environment, which is composed of mobile and non-mobile augmented reality and virtual reality systems.


IEEE Computer Graphics and Applications | 1999

Multimodal interaction for 2D and 3D environments [virtual reality]

Philip R. Cohen; David McGee; Sharon Oviatt; Lizhong Wu; Josh Clow; Rob King; Simon J. Julier; Lawrence J. Rosenblum

The allure of immersive technologies is undeniable. Unfortunately, the user's ability to interact with these environments lags behind the impressive visuals. In particular, it is difficult to navigate in unknown visual landscapes, find entities, access information, and select entities using six-degrees-of-freedom (6-DOF) devices. We believe multimodal interaction, specifically speech and gesture, will make a major difference in the usability of such environments.


IEEE Transactions on Image Processing | 1998

Underwater imaging with a moving acoustic lens

Behzad Kamgar-Parsi; Lawrence J. Rosenblum; Edward O. Belcher

The acoustic lens is a high-resolution, forward-looking sonar for three-dimensional (3-D) underwater imaging. We discuss processing the lens data for recreating and visualizing the scene. Acoustical imaging, compared to optical imaging, is sparse and low resolution. To achieve higher resolution, we obtain a denser sample by mounting the lens on a moving platform and passing over the scene. This introduces the problem of data fusion from multiple overlapping views for scene formation, which we discuss. We also discuss the improvements in object reconstruction by combining data from several passes over an object. We present algorithms for pass registration and show that this process can be done with enough accuracy to improve the image and provide greater detail about the object. The results of in-water experiments show the degree to which size and shape can be obtained under (nearly) ideal conditions.
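The fusion step described above can be illustrated with a simple sketch (not the paper's algorithm): once the passes have been registered to a common grid, overlapping samples from several sonar passes can be averaged, which reduces noise and densifies the sparse acoustic image. The masks and arrays below are invented example data:

```python
# Sketch of fusing co-registered sonar passes by masked averaging.
import numpy as np

def fuse_passes(passes, masks):
    """Average registered passes cell-by-cell; each pass's validity mask
    keeps cells with no return from diluting the estimate."""
    total = np.zeros_like(passes[0], dtype=float)
    counts = np.zeros_like(passes[0], dtype=float)
    for data, mask in zip(passes, masks):
        total += np.where(mask, data, 0.0)
        counts += mask
    # Divide only where at least one pass contributed; leave the rest 0.
    return np.divide(total, counts, out=np.zeros_like(total),
                     where=counts > 0)

p1 = np.array([[1.0, 0.0], [2.0, 4.0]])
m1 = np.array([[True, False], [True, True]])
p2 = np.array([[3.0, 6.0], [0.0, 4.0]])
m2 = np.array([[True, True], [False, True]])
fused = fuse_passes([p1, p2], [m1, m2])
print(fused)  # [[2. 6.] [2. 4.]]
```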


IEEE Visualization | 1996

Virtual workbench - a non-immersive virtual environment for visualizing and interacting with 3D objects for scientific visualization

Upul Obeysekare; Chas Williams; Jim Durbin; Lawrence J. Rosenblum; Robert Rosenberg; Fernando F. Grinstein; Ravi Ramamurthi; Alexandra Landsberg; William Sandberg

The Virtual Workbench (VW) is a non-immersive virtual environment that allows users to view and interact with stereoscopic objects displayed on a workspace similar to a tabletop workspace used in day-to-day life. A VW is an ideal environment for collaborative work where several colleagues can gather around the table to study 3D virtual objects. The Virtual Reality laboratory at the Naval Research Laboratory has implemented the VW using a concept similar to (Froehlich et al., 1994). This paper investigates how the VW can be used as a non-immersive display device for understanding and interpreting complex objects encountered in the scientific visualization field. Different techniques for interacting with 3D visualization objects on the table and using VW as a display device for visualization are evaluated using several cases.


IEEE Computer Graphics and Applications | 1994

European activities in virtual reality

José L. Encarnação; Martin Göbel; Lawrence J. Rosenblum

We survey European activities in virtual reality, with an emphasis on selected efforts in architecture and sound, telepresence, scientific visualization, simulation, software design, and entertainment. This article surveys European activities and funding for VR with two caveats: First, nearly a year separates writing and publication. For most scientific fields, this publication delay for survey material would be minimal; for virtual reality, significant changes might have since occurred in some programs. We took advantage of the revision period to upgrade our information and the references as much as possible. Second, some long-standing, significant European efforts go unmentioned as outside the scope of our short survey or as duplicates of others included. Despite the limitations, this sampling of Europe's leading efforts collectively gives an accurate snapshot of current European activity.


IEEE Computer Graphics and Applications | 1994

Research issues in volume visualization

Arie E. Kaufman; Karl Heinz Höhne; Wolfgang Krüger; Lawrence J. Rosenblum; Peter Schröder

Volume visualization is a method of extracting meaningful information from volumetric data sets through the use of interactive graphics and imaging. It addresses the representation, manipulation, and rendering of volumetric data sets, providing mechanisms for peering into structures and understanding their complexity and dynamics. Typically, the data set is represented as a 3D regular grid of volume elements (voxels) and stored in a volume buffer (also called a cubic frame buffer), which is a large 3D array of voxels. However, data is often defined at scattered or irregular locations that require using alternative representations and rendering algorithms. There are eight major research issues in volume visualization: volume graphics, volume rendering, transform coding of volume data, scattered data, enriching volumes with knowledge, segmentation, real-time rendering and parallelism, and special purpose hardware.
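The volume-buffer representation described above can be made concrete with a small sketch (illustrative only, not tied to any particular system): the cubic frame buffer is simply a 3D array indexed by voxel coordinates, over which operations such as segmentation run directly:

```python
# A volume buffer: a 3D regular grid of voxels (here 64^3 scalar densities).
import numpy as np

volume = np.zeros((64, 64, 64), dtype=np.float32)

# Fill a solid sphere of density 1.0 centered in the grid, as sample data.
x, y, z = np.indices(volume.shape)
center, radius = 32, 16
inside = (x - center) ** 2 + (y - center) ** 2 + (z - center) ** 2 <= radius ** 2
volume[inside] = 1.0

# A basic operation over the buffer: segment voxels above a density threshold.
segmented = volume > 0.5
print(int(segmented.sum()))  # voxel count, close to (4/3)*pi*16^3 ≈ 17157
```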


IEEE Virtual Reality Conference | 1999

The software architecture of a real-time battlefield visualization virtual environment

Simon J. Julier; Rob King; Brad Colbert; Jim Durbin; Lawrence J. Rosenblum

This paper describes the software architecture of Dragon, a real-time situational awareness virtual environment for battlefield visualization. Dragon receives data from a number of different sources and creates a single, coherent, and consistent three-dimensional display. We describe the problem of battlefield visualization and the challenges it imposes. We discuss the Dragon architecture, the rationale for its design, and its performance in an actual application. The battlefield VR system is also suitable for similar civilian domains such as large-scale disaster relief and hostage rescue.
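One core task such an architecture must solve is merging entity reports arriving from multiple feeds into a single consistent world state. The sketch below is a hypothetical illustration of that pattern, not Dragon's actual design; all class and field names are invented:

```python
# Merge entity reports from multiple sources, keeping the newest per entity
# so stale feeds cannot roll the shared display backwards.
from dataclasses import dataclass

@dataclass
class Report:
    entity_id: str
    source: str
    timestamp: float
    position: tuple

class WorldState:
    def __init__(self):
        self._entities = {}

    def ingest(self, report: Report):
        """Accept a report only if it is newer than the one already held."""
        current = self._entities.get(report.entity_id)
        if current is None or report.timestamp > current.timestamp:
            self._entities[report.entity_id] = report

    def snapshot(self):
        return dict(self._entities)

state = WorldState()
state.ingest(Report("unit-1", "radar", 10.0, (0, 0)))
state.ingest(Report("unit-1", "gps", 12.0, (5, 3)))
state.ingest(Report("unit-1", "radar", 11.0, (2, 1)))  # stale, ignored
print(state.snapshot()["unit-1"].position)  # (5, 3)
```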


Handbook of Augmented Reality | 2011

Military Applications of Augmented Reality

Mark A. Livingston; Lawrence J. Rosenblum; Dennis G. Brown; Gregory S. Schmidt; Simon J. Julier; Yohan Baillot; J. Edward Swan; Zhuming Ai; Paul Maassel

This chapter reviews military benefits and requirements that have led to a series of research efforts in augmented reality (AR) and related systems for the military over the past few decades, beginning with the earliest specific application of AR. While by no means a complete list, we note some themes from the various projects and discuss ongoing research at the Naval Research Laboratory. Two of the most important thrusts within these applications are the user interface and human factors. We summarize our research and place it in the context of the field.


IEEE Computer Graphics and Applications | 2004

Multidimensional visual representations for underwater environmental uncertainty

Greg S. Schmidt; Sue-Ling Chen; Aaron N. Bryden; Mark A. Livingston; Lawrence J. Rosenblum; Bryan R. Osborn

We investigate how to represent multivariate information and multidimensional uncertainty by developing and applying candidate visual techniques. Although good techniques exist for visualizing many data types, less progress has been made on how to display uncertainty and multivariate information; this is especially true as the dimensionality rises. At this time, our primary focus is to develop the statistical characterizations for the environmental uncertainty (described only briefly in this article) and to develop a visual method for each characterization. The mariner community needs enhanced characterizations of environmental uncertainty now, but the accuracy of the characterizations is still not sufficient, and therefore formal user evaluations cannot take place at this point in development. We received feedback on the applicability of our techniques from domain experts. We used this in conjunction with previous results to compile a set of development guidelines.

Collaboration


Dive into Lawrence J. Rosenblum's collaborations.

Top Co-Authors

Simon J. Julier
University College London

Yohan Baillot
United States Naval Research Laboratory

Dennis G. Brown
United States Naval Research Laboratory

Mark A. Livingston
United States Naval Research Laboratory

Marco Lanzagorta
United States Naval Research Laboratory

Greg S. Schmidt
United States Naval Research Laboratory

J. E. Swan
United States Naval Research Laboratory

Jim Durbin
United States Naval Research Laboratory