Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Radu S. Jasinschi is active.

Publication


Featured research published by Radu S. Jasinschi.


International Conference on Image Processing | 1995

Content-based video sequence representation

Radu S. Jasinschi; José M. F. Moura

The compact representation of video sequences is important for many applications, including very low bit-rate video compression and digital image libraries. We discuss here a novel approach, called generative video, by which video sequences are compactly represented in terms of their contents. This is achieved by reducing the video sequence to constructs. Constructs encode video sequence contents such as the shape and velocity of independently moving objects and the camera motion. Constructs are of two types: world images and generative operators. World images are augmented images that are generated incrementally. Generative operators access video sequence contents and reconstruct the sequence from the world images. The reduction of a video sequence to constructs proceeds in steps. First, the shape of independently moving regions in the image is tessellated into rectangles. Second, world images are generated using the tessellated shape representation. We illustrate this with an experiment on a real video sequence.
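As a rough illustration of the reconstruction step described above, the sketch below cuts frames out of a single world image using per-frame offsets that stand in for a camera-motion operator. The array sizes, offsets, and function names are illustrative assumptions, not the authors' implementation; real generative operators also handle figure shape and velocity.

import numpy as np

def reconstruct_frame(world_image, dx, dy, height, width):
    # Cut one frame out of the augmented world image; the (dx, dy)
    # offset stands in for a camera-motion generative operator.
    return world_image[dy:dy + height, dx:dx + width]

# Toy world image: a panorama-like augmented image.
world = np.arange(200 * 320, dtype=np.uint8).reshape(200, 320)

# Hypothetical camera motion for three frames, as pixel offsets.
camera_path = [(0, 0), (8, 2), (16, 4)]

frames = [reconstruct_frame(world, dx, dy, 180, 240) for dx, dy in camera_path]
print(len(frames), frames[0].shape)  # 3 (180, 240)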


IEEE Personal Communications | 1996

Retrieving quality video across heterogeneous networks. Video over wireless

José M. F. Moura; Radu S. Jasinschi; Hirohisa Shiojiri; Jyh-Cherng Lin

The article addresses the issues that arise in delivering video across Wireless Andrew, the Carnegie Mellon University (CMU) heterogeneous networking environment, which interconnects wideband and narrowband wireless and wired networks using off-the-shelf technologies. The authors design a video system in which video servers distributed across the network deliver, upon request, video clips to clients scattered around the campus. The main novelty of the article is generative video (GV), a content-based meta representation for video that is well suited to the heterogeneous, dynamic environment of Wireless Andrew. GV provides a framework for graceful degradation as end-to-end network throughput varies. GV reduces the video sequence to a small set of still images plus the side information needed to reconstruct the original sequence. The authors have demonstrated on three real-life video sequences that GV delivers compression ratios in the range of 1000-10000 with good perceptual quality. They develop a scalable implementation of the GV coder/decoder (codec). Scalable coding avoids duplicating the encoding of each video clip when servicing a wide range of throughputs, as in heterogeneous networks. They discuss the video delivery requirements, the video system architecture, the system implementation, and the trade-offs needed to ensure graceful degradation under real-life network operating conditions.
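A minimal sketch of the graceful-degradation idea: pick the finest coding layer the measured throughput can sustain. The layer table, rates, and names here are hypothetical; the paper's actual scalable codec operates on GV constructs, not on this toy table.

# Hypothetical layer table: (minimum throughput in kbit/s, layer name).
LAYERS = [(64, "coarse"), (256, "medium"), (1024, "fine")]

def pick_layer(throughput_kbps):
    # Start from the coarsest layer so quality degrades gracefully
    # instead of failing outright when throughput drops.
    chosen = LAYERS[0][1]
    for min_rate, layer in LAYERS:
        if throughput_kbps >= min_rate:
            chosen = layer
    return chosen

print(pick_layer(48))    # coarse
print(pick_layer(300))   # medium
print(pick_layer(2000))  # fine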


International Conference on Acoustics, Speech, and Signal Processing | 1995

Video compression via constructs

Radu S. Jasinschi; José M. F. Moura; Jia-Ching Cheng; Amir Asif

Current video compression standards such as MPEG-1 and MPEG-2 compress video sequences at NTSC quality with factors in the range of 10-100. To operate beyond this range, as targeted by MPEG-4, radically new techniques are needed. We discuss one such technique, called generative video (GV). Video compression is realized in GV in two steps. First, the video sequence is reduced to constructs: world images, which are augmented images containing the non-redundant information in the sequence, and window, figure, motion, and signal-processing operators, which represent video sequence properties. Second, the world images are spatially compressed. The video sequence is reconstructed by applying the various operators to the decompressed world images. We apply GV to a 10-second video sequence of a real 3-D scene and obtain compression ratios of about 2260 and 4520 in two experiments with different quantization codebooks. The reconstructed video sequence exhibits very good perceptual quality.
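A back-of-the-envelope check of what such ratios mean in bytes, assuming a 10 s, 30 frame/s NTSC-resolution sequence stored as 4:2:0 (1.5 bytes per pixel); the raw-format assumptions are ours, not the abstract's.

frames = 10 * 30                         # 10 s at 30 frames/s
raw_bytes = frames * 720 * 480 * 1.5     # 4:2:0 sampling, ~155.5 MB raw
for ratio in (2260, 4520):
    print("ratio %d: ~%.0f KiB compressed" % (ratio, raw_bytes / ratio / 1024))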


International Conference on Acoustics, Speech, and Signal Processing | 1996

Nonlinear editing by Generative Video

Radu S. Jasinschi; José M. F. Moura

Nonlinear video editors manipulate video sequences by content, irrespective of frame order. These computer-based tools contrast with analogue linear tape editing technologies, which are extremely taxing on videographers' time and resources. Current computerized editing methods represent video in terms of individual images, which makes manipulation formidable because of the large data volumes involved. We discuss a framework, Generative Video, that deals with this problem efficiently. Generative Video represents video sequences in terms of constructs, which are compact models: world images and generative operators. World images are augmented images that contain the non-redundant information in the video sequence and describe its content; each independently moving object has its own world image, and world images are stratified in layers according to occlusion information. The generative operators access video content information, such as the shape and motion of objects moving in the sequence. Nonlinear video editing is realized by applying generative operators to world images. This approach facilitates the access, storage, and manipulation of video content information. We describe the main properties of Generative Video and demonstrate nonlinear editing on a real video sequence.
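The sketch below mimics the layer-based editing idea: each independently moving object is one (image, mask) layer, and an "edit" drops or reorders layers rather than touching individual frames. The layer contents and the compositing rule are illustrative assumptions, not the paper's operators.

import numpy as np

def composite(layers):
    # Paint world-image layers back to front; each mask encodes occlusion.
    canvas = np.zeros_like(layers[0][0])
    for image, mask in layers:
        canvas = np.where(mask, image, canvas)
    return canvas

h, w = 120, 160
background = (np.full((h, w), 50, np.uint8), np.ones((h, w), bool))
figure_mask = np.zeros((h, w), bool)
figure_mask[40:80, 60:100] = True
figure = (np.full((h, w), 200, np.uint8), figure_mask)

with_figure = composite([background, figure])
edited = composite([background])   # "edit": remove the moving object
print(with_figure[60, 80], edited[60, 80])  # 200 50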


Intelligent Robots and Systems | 1991

The perception of visual motion coherence and transparency: a statistical model

Kazuhiko Sumi; Radu S. Jasinschi; Azriel Rosenfeld

Proposes a statistical model for the perception of motion transparency and coherence, given by a two-stage process for the extraction of the optical flow and the velocity histogram. In the first stage, a sequence of images is divided into regions, and in each region an optical flow or a normal component of the optical flow is computed. In the second stage, a local velocity histogram is computed in each region, and the local histograms are summarised into a global histogram. In experiments, the authors describe the perception of motion coherence and transparency for synthetic images, such as line patterns and filled regions. They then extend the model to real scenes and apply it to the analysis of the motion of pedestrians.
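A toy version of the two-stage idea, assuming per-region horizontal flow values are already available from stage one: build a velocity histogram and call the region transparent when it shows two well-separated modes of comparable strength. The bin counts, thresholds, and synthetic data are illustrative, not the paper's parameters.

import numpy as np

def classify_motion(vx, bins=16, vrange=(-4.0, 4.0), peak_frac=0.5):
    # Stage 2: histogram the per-region flow values and look for
    # one dominant mode (coherence) vs. two separated modes (transparency).
    hist, _ = np.histogram(vx, bins=bins, range=vrange)
    order = np.argsort(hist)[::-1]
    strongest, runner_up = order[0], order[1]
    two_modes = (hist[runner_up] >= peak_frac * hist[strongest]
                 and abs(int(runner_up) - int(strongest)) > 1)
    return "transparent" if two_modes else "coherent"

rng = np.random.default_rng(0)
coherent = rng.normal(2.25, 0.2, 500)                    # one motion
overlaid = np.concatenate([rng.normal(-1.75, 0.2, 250),  # two overlaid
                           rng.normal(2.25, 0.2, 250)])  # motions
print(classify_motion(coherent))   # coherent
print(classify_motion(overlaid))   # transparent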


Archive | 1995

Content based video compression system

José M. F. Moura; Radu S. Jasinschi


Storage and Retrieval for Image and Video Databases | 1996

Scalable Video Coding over Heterogeneous Networks

José M. F. Moura; Radu S. Jasinschi; Hirohisa Shiojiri; Jyh-Cherng Lin


Archive | 2004

Content-based Image Sequence Representation

Pedro M. Q. Aguiar; Radu S. Jasinschi; José M. F. Moura; Charnchai Pluempitiwiriyawej


Proceedings of SPIE | 1996

Scalable video coding over heterogeneous networks

José M. F. Moura; Radu S. Jasinschi; Hirohisa Shiojiri; Jyh-Cherng Lin


USENIX Security Symposium | 1998

Generative video: Very low bit rate video compression

Radu S. Jasinschi; José M. F. Moura

Collaboration


Dive into Radu S. Jasinschi's collaborations.

Top Co-Authors

José M. F. Moura (Carnegie Mellon University)
Jyh-Cherng Lin (Carnegie Mellon University)
Jia-Ching Cheng (Carnegie Mellon University)