Saied Moezzi
University of California, San Diego
Publications
Featured research published by Saied Moezzi.
IEEE MultiMedia | 1997
Saied Moezzi; Li-Cheng Tai; Philippe Gerard
Virtual reality systems use digital models to provide interactive viewing. We present a 3D digital video system that attempts to provide the same capabilities for actual performances such as dancing. Recreating the original dynamic scene in 3D, the system allows photorealistic interactive playback from arbitrary viewpoints using video streams of a given scene from multiple perspectives.
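To make the arbitrary-viewpoint playback idea concrete, the following is a minimal sketch, assuming the scene has already been recovered as a colored 3D point set: a virtual pinhole camera (intrinsics K, pose R, t) projects the points into a new image with a simple z-buffer. It illustrates the general technique only and is not the system described in the paper.

```python
# Minimal sketch (not the authors' code): render a novel view by projecting an
# already-recovered colored 3D point cloud into a virtual pinhole camera.
import numpy as np

def render_virtual_view(points, colors, K, R, t, width, height):
    """points: (N,3) world coords, colors: (N,3) RGB values,
    K: 3x3 intrinsics, R, t: world-to-camera rotation/translation."""
    cam = (R @ points.T + t.reshape(3, 1)).T            # world -> camera frame
    in_front = cam[:, 2] > 1e-6                         # keep points ahead of the camera
    cam, cols = cam[in_front], colors[in_front]
    proj = (K @ cam.T).T
    u = (proj[:, 0] / proj[:, 2]).astype(int)
    v = (proj[:, 1] / proj[:, 2]).astype(int)
    image = np.zeros((height, width, 3), dtype=np.uint8)
    zbuf = np.full((height, width), np.inf)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi, ci in zip(u[inside], v[inside], cam[inside, 2], cols[inside]):
        if zi < zbuf[vi, ui]:                           # simple z-buffer test
            zbuf[vi, ui] = zi
            image[vi, ui] = ci
    return image
```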
international conference on image processing | 1996
Michael H. Goldbaum; Saied Moezzi; Adam L. Taylor; Shankar Chatterjee; Jeffrey E. Boyd; Edward Hunter; Ramesh Jain
Medical imaging is shifting from film to electronic images. The STARE (structured analysis of the retina) system is a sophisticated image management system that will automatically diagnose images, compare images, measure key features in images, annotate image contents, and search for images similar in content. The authors concentrate on automated diagnosis. The images are annotated by segmentation of objects of interest, classification of the extracted objects, and reasoning about the image contents. The inference is accomplished with Bayesian networks that learn from image examples of each disease. This effort at image understanding in fundus images anticipates the future use of medical images. As these capabilities mature, the authors expect that ophthalmologists and physicians in other fields that rely on images will use a system like STARE to reduce repetitive work, to provide assistance to physicians in difficult diagnoses or with unfamiliar diseases, and to manage images in large image databases.
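The diagnosis step relies on Bayesian networks trained from image examples. Below is a minimal sketch of that kind of probabilistic inference, assuming conditional independence of findings; the diseases, findings, and probabilities are hypothetical placeholders, not the STARE networks.

```python
# Minimal sketch, not the STARE implementation: Bayes-rule inference over
# discrete image findings. Disease names, findings, and probabilities are
# hypothetical stand-ins for networks learned from labeled examples.
priors = {"diabetic_retinopathy": 0.3, "normal": 0.7}
likelihoods = {  # P(finding present | disease)
    "diabetic_retinopathy": {"hemorrhage": 0.8, "exudate": 0.7},
    "normal":               {"hemorrhage": 0.05, "exudate": 0.1},
}

def posterior(findings):
    """findings: dict finding -> bool (present/absent in the segmented image)."""
    scores = {}
    for disease, prior in priors.items():
        p = prior
        for finding, present in findings.items():
            q = likelihoods[disease][finding]
            p *= q if present else (1.0 - q)            # assume conditional independence
        scores[disease] = p
    total = sum(scores.values())
    return {d: s / total for d, s in scores.items()}    # normalize to posteriors

print(posterior({"hemorrhage": True, "exudate": False}))
```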
IEEE Computer Graphics and Applications | 1996
Saied Moezzi; Arun Katkere; Don Y. Kuramura; Ramesh Jain
A reality modeling and visualization system called Immersive Video uses multiple videos of an event from different perspectives to generate a 3D digital sequence of object movement. Given appropriate camera coverage, full 3D digital videos can be generated using today's technology. The potential for photorealistic videos from arbitrary perspectives exists, although the coverage and degree of realism in virtual views will ultimately be determined by the application. Many issues demand resolution before one will see practical and realistic 3D digital video, including more robust shape recovery methods, broadcast-quality virtual views, and real-time interactive rendering of long 3D sequences.
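As an illustration of the multi-camera shape recovery this line of work depends on, here is a minimal visual-hull sketch: a candidate voxel survives only if it projects onto the foreground silhouette of every calibrated camera. The camera models and silhouette masks are assumed given; this is a generic technique, not the Immersive Video implementation.

```python
# Minimal visual-hull sketch: carve away any voxel that falls outside the
# foreground silhouette in at least one calibrated camera view.
import numpy as np

def carve(voxels, cameras, silhouettes):
    """voxels: (N,3) candidate centers; cameras: list of (K, R, t);
    silhouettes: list of binary (H,W) masks, one per camera."""
    keep = np.ones(len(voxels), dtype=bool)
    for (K, R, t), mask in zip(cameras, silhouettes):
        cam = (R @ voxels.T + t.reshape(3, 1)).T        # world -> camera frame
        z = cam[:, 2]
        in_front = z > 1e-6
        proj = (K @ cam.T).T
        u = np.full(len(voxels), -1, dtype=int)
        v = np.full(len(voxels), -1, dtype=int)
        u[in_front] = (proj[in_front, 0] / z[in_front]).astype(int)
        v[in_front] = (proj[in_front, 1] / z[in_front]).astype(int)
        h, w = mask.shape
        inside = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]] > 0    # projects onto foreground
        keep &= hit                                     # must be foreground in every view
    return voxels[keep]
```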
Multimedia Systems | 1997
Arun Katkere; Saied Moezzi; Don Y. Kuramura; Patrick H. Kelly; Ramesh Jain
Video provides a comprehensive visual record of environment activity over time. Thus, video data is an attractive source of information for the creation of virtual worlds which require some real-world fidelity. This paper describes the use of multiple streams of video data for the creation of immersive virtual environments. We outline our multiple perspective interactive video (MPI-Video) architecture which provides the infrastructure for the processing and analysis of multiple streams of video data. Our MPI-Video system performs automated analysis of the raw video and constructs a model of the environment and object activity within this environment. This model provides a comprehensive representation of the world monitored by the cameras which, in turn, can be used in the construction of a virtual world. In addition, using the information produced and maintained by the MPI-Video system, our immersive video system generates virtual video sequences. These are sequences of the dynamic environment from an arbitrary viewpoint generated using the real camera data. Such sequences allow a user to navigate through the environment and provide a sense of immersion in the scene. We discuss results from our MPI-Video prototype, outline algorithms for the construction of virtual views and provide examples of a variety of such immersive video sequences.
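A minimal sketch of what an environment model in this spirit might look like as a data structure: a time-indexed record of object states fused from several camera streams, which a virtual-view renderer could query by time. The class and field names are illustrative, not taken from the paper.

```python
# Minimal sketch of an MPI-Video-style environment model: object states fused
# from multiple cameras, indexed by time. Names here are hypothetical.
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class ObjectState:
    object_id: str
    position: tuple          # (x, y, z) in world coordinates
    source_cameras: list     # cameras whose observations support this state

@dataclass
class EnvironmentModel:
    states: dict = field(default_factory=lambda: defaultdict(list))  # time -> [ObjectState]

    def add_observation(self, time, state):
        self.states[time].append(state)

    def snapshot(self, time):
        """World state at a given time, e.g. to drive a virtual-view renderer."""
        return self.states.get(time, [])

model = EnvironmentModel()
model.add_observation(0.0, ObjectState("dancer", (1.0, 0.5, 0.0), ["cam1", "cam3"]))
print(model.snapshot(0.0))
```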
international conference on image processing | 1996
Amarnath Gupta; Saied Moezzi; Adam L. Taylor; Shankar Chatterjee; Ramesh Jain; I. Goldbaum; S. Burgess
This paper describes steps towards an information system for the storage and content-based retrieval of ocular fundus images. Based on the Virage Incorporated framework for defining similarity metrics, the authors have developed a number of primitives for the representation of ocular fundus images. A prototype Query By Pictorial Example (QBPE) system yields similarity rankings in approximate agreement with those of a human expert.
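A minimal sketch of query-by-pictorial-example ranking of this kind: each image is reduced to a vector of primitive measurements, and results are ordered by weighted distance to the query's vector. The primitive names and weights are illustrative; they are not the Virage framework's actual metrics.

```python
# Minimal QBPE sketch: rank database images by weighted feature distance to a
# query image. Feature names and weights are hypothetical placeholders.
import math

def weighted_distance(a, b, weights):
    return math.sqrt(sum(w * (a[k] - b[k]) ** 2 for k, w in weights.items()))

def rank_by_example(query_features, database, weights):
    """database: dict image_id -> feature dict; smaller distance = more similar."""
    return sorted(database, key=lambda img: weighted_distance(query_features, database[img], weights))

weights = {"vessel_tortuosity": 1.0, "lesion_area": 2.0}
db = {
    "img_001": {"vessel_tortuosity": 0.2, "lesion_area": 0.05},
    "img_002": {"vessel_tortuosity": 0.7, "lesion_area": 0.40},
}
print(rank_by_example({"vessel_tortuosity": 0.65, "lesion_area": 0.35}, db, weights))
```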
international conference on multimedia computing and systems | 1996
Saied Moezzi; Arun Katkere; Don Y. Kuramura; Ramesh Jain
This paper describes a new visual medium called interactive 3D digital video. 3D digital video displays motion pictures of real-world events from the view of a virtual camera controlled by the viewer during playback. For 3D video to become of practical use, sophisticated data manipulation, management and processing capabilities are required. These tasks are daunting, given the amount and complexity of data involved. Furthermore, due to the hybrid nature of the 3D video data, no standardized representation and coding schemes are available. We explore these issues and present an overview of a functional system called immersive video. For real events such as basketball games, immersive video analyzes and composites recorded multiviewpoint videos to create a full 3D version of the event which is then encoded and stored for immersive playback. While replaying this 3D digital movie, viewers are able to explore the scene continuously from any perspective. We describe the steps we have taken toward using the Internet infrastructure and a client-server configuration to allow World Wide Web users to interactively view any of our experimental 3D videos including a one minute staged karate demonstration captured by six video cameras. Applications of this new medium include telepresence, interactive video and television, video-based virtual environments, and immersive feature films.
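A minimal sketch of the client-server viewpoint exchange described above, assuming a simple message that carries the viewer's chosen virtual camera for a given playback time; the field names and wire format are hypothetical, since the paper does not specify them.

```python
# Minimal sketch of a viewpoint-request message for client-server 3D video
# playback. The JSON fields are illustrative assumptions, not the paper's protocol.
import json

def encode_view_request(frame_time, eye, look_at, fov_degrees):
    """Client side: describe the virtual camera the viewer has chosen."""
    return json.dumps({"time": frame_time, "eye": eye, "look_at": look_at, "fov": fov_degrees})

def decode_view_request(message):
    """Server side: recover the requested viewpoint before rendering a frame."""
    req = json.loads(message)
    return req["time"], tuple(req["eye"]), tuple(req["look_at"]), req["fov"]

msg = encode_view_request(12.5, (0.0, 1.6, 3.0), (0.0, 1.0, 0.0), 60.0)
print(decode_view_request(msg))
```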
IEEE MultiMedia | 1995
Patrick H. Kelly; Saied Moezzi
It would be difficult to overestimate the importance of visual information in current computer systems. Visual computing, which embraces processing, interpreting, modeling, assimilating, storing, retrieving, and synthesizing visual information, now plays a crucial role in many fields. These include multimedia, virtual reality, robotics, scientific visualization, and communications systems. And the demand for further integration of visual information into these areas shows every sign of continuing unabated. Under the direction of Ramesh Jain, the Visual Computing Laboratory at the University of California, San Diego, was established as a center for innovative visual computing research to address the requirements of these applications in next-generation computer technologies. As such, the Visual Computing Lab hosts a group of researchers working in a variety of areas, notably multimedia databases, information assimilation, interactive video, and visual interaction through gesture recognition. This article presents a high-level overview of activities in the Visual Computing Laboratory and provides some details on prototype systems that we are currently developing.
Archive | 1995
Saied Moezzi; Arun Katkere; Ramesh Jain
acm multimedia | 1995
Patrick H. Kelly; Arun Katkere; Don Y. Kuramura; Saied Moezzi; Shankar Chatterjee
ieee virtual reality conference | 1996
Saied Moezzi; Arun Katkere; Don Y. Kuramura; Ramesh Jain