Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Robert L. Schmidt is active.

Publication


Featured research published by Robert L. Schmidt.


IEEE Transactions on Circuits and Systems for Video Technology | 1991

Digital HDTV compression using parallel motion-compensated transform coders

Hsueh-Ming Hang; Riccardo Leonardi; Barry G. Haskell; Robert L. Schmidt; Hemant Bheda; Joseph H. Othmer

The authors suggest a parallel processing structure that uses coders conforming to the proposed international standard for visual telephony (the CCITT p×64 kbit/s standard) as processing elements to compress digital high-definition television (HDTV) pictures. The basic idea is to partition an HDTV picture, in space or in frequency, into smaller sub-pictures and then compress each sub-picture with a CCITT p×64 kbit/s coder. This appears to be a cost-effective approach to HDTV hardware. Since each sub-picture is processed by an independent coder, the coded sub-pictures may have unequal picture quality unless the coders are coordinated. To maintain a uniform-quality HDTV picture, two issues are studied: the sub-channel control strategy (how bits are allocated to each sub-picture), and the quantization and buffer control strategy for the individual sub-picture coders. Algorithms that resolve these problems, together with computer simulations, are presented.
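
A minimal sketch of the sub-channel control idea: split a frame into sub-pictures and divide a fixed bit budget among them. Allocating in proportion to pixel variance is an assumption made here for illustration; the paper's actual allocation algorithm is not reproduced.

```python
import numpy as np

def partition_spatial(frame, rows=2, cols=2):
    """Split a frame into rows*cols equally sized sub-pictures."""
    h, w = frame.shape
    return [frame[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols]
            for r in range(rows) for c in range(cols)]

def allocate_bits(subpics, total_bits):
    """Share a fixed bit budget in proportion to each sub-picture's
    activity (pixel variance here), so busier regions get more bits."""
    act = np.array([sp.var() for sp in subpics]) + 1e-6
    return (total_bits * act / act.sum()).astype(int)

frame = np.random.default_rng(0).integers(0, 256, (1024, 1920)).astype(float)
subpics = partition_spatial(frame)
print(allocate_bits(subpics, total_bits=4 * 64_000))  # toy budget: four 64 kbit/s sub-channels
```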


IEEE Transactions on Circuits and Systems | 1989

A memory control chip for formatting data into blocks suitable for video coding applications

Robert L. Schmidt

Most compression algorithms for motion television require large data storage, usually several television fields, and typically operate on blocks of data. A chip has been built to support both of these features. It generates, from a single clock source, all of the control and address signals required by standard off-the-shelf dynamic RAMs (DRAMs), including data packing and unpacking and automatic refresh when required. Counters are provided to address the data into and out of the memories in the form of blocks. The block sizes and field dimensions are programmable and are independent for read and write operations. Thus, one set of counters can be programmed for sequentially scanned data coming from a camera or going to a television monitor, and another set of counters can be programmed for the block size employed in the compression hardware. Blocks of data can be accessed either continuously or one at a time. When data are read from the memories, a single pel-width pulse marks the start of valid data. Signals marking both the end of a block and the end of a field are also provided to ease system interfacing.
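
A software sketch of the raster-to-block address reordering that such a controller performs in hardware. The 8×8 block size and 720×480 field are hypothetical values chosen for illustration; on the chip, both are programmable.

```python
def raster_to_block_order(width, height, bw=8, bh=8):
    """Yield raster-scan pixel addresses reordered into consecutive
    bw x bh blocks -- the access pattern a block-based coder needs."""
    for by in range(0, height, bh):          # walk blocks left to right,
        for bx in range(0, width, bw):       # top to bottom,
            for y in range(by, by + bh):     # then pixels inside a block
                for x in range(bx, bx + bw):
                    yield y * width + x      # linear memory address

# First 8x8 block of a 720x480 field: addresses 0..7, then 720..727, ...
addrs = raster_to_block_order(720, 480)
print([next(addrs) for _ in range(16)])
```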


Visual Communications and Image Processing | 1994

Performance evaluation of nonscalable MPEG-2 video coding

Robert L. Schmidt; Atul Puri; Barry G. Haskell

The second phase of the ISO Moving Picture Experts Group audio-visual coding standard (MPEG-2) is nearly complete, and the standard is expected to be used in a wide range of applications at a variety of bitrates. While the standard specifies the syntax of the compressed bitstream and the semantics of the decoding process, it allows considerable flexibility in the choice of encoding parameters and options, enabling appropriate tradeoffs between performance and complexity for a given application. First, we review the profile and level structure of MPEG-2, which is the key mechanism governing the use of MPEG-2 coding tools. Next, we briefly review the tools for nonscalable coding within the MPEG-2 standard. Finally, we investigate via simulations the tradeoffs in coding performance across choices of various parameters and options, so that an encoder design with good performance tradeoffs can be achieved within whatever encoder complexity can be afforded. Simulations are performed on standard TV and HDTV resolution video of various formats, at many bitrates, using the nonscalable (single-layer) video coding tools of the MPEG-2 standard.
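
Profiles and levels bound what a conformant stream may use. Below is a hedged sketch of checking a stream against Main Profile level limits; the numbers are the commonly tabulated H.262 constraints quoted from memory, so verify them against the standard before relying on them.

```python
# Assumed MPEG-2 Main Profile level limits: (max luma width, max luma
# height, max frame rate, max bitrate in Mbit/s).
MAIN_PROFILE_LEVELS = {
    "Low":       (352,  288,  30,  4),
    "Main":      (720,  576,  30, 15),
    "High-1440": (1440, 1152, 60, 60),
    "High":      (1920, 1152, 60, 80),
}

def smallest_level(width, height, fps, mbps):
    """Return the least restrictive level whose limits cover the stream."""
    for name, (w, h, f, b) in MAIN_PROFILE_LEVELS.items():
        if width <= w and height <= h and fps <= f and mbps <= b:
            return name
    raise ValueError("exceeds Main Profile @ High Level")

print(smallest_level(720, 480, 30, 6))     # standard TV -> 'Main'
print(smallest_level(1920, 1080, 30, 45))  # HDTV -> 'High'
```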


Visual Communications and Image Processing | 1996

SBASIC video coding and its 3D-DCT extension for MPEG-4 multimedia

Atul Puri; Robert L. Schmidt; Barry G. Haskell

Due to the need to interchange video data in a seamless and cost-effective manner, interoperability between applications, terminals, and services has become increasingly important. The ISO Moving Picture Experts Group (MPEG) developed the MPEG-1 and MPEG-2 audio-visual coding standards to meet these challenges; together these standards support a range of applications at bitrates from about 1 Mbit/s to 100 Mbit/s. In the meantime, however, a new breed of applications has arisen that demands higher compression, more interactivity, and increased error resilience. These applications are expected to be addressed by the next-phase standard, called MPEG-4, which is currently in progress. We discuss the various functionalities the MPEG-4 standard is expected to offer, along with the development plan and the framework used to evaluate video coding proposals in the recent first evaluation tests. Having clarified the requirements, functionalities, and development process of MPEG-4, we propose a generalized approach to video coding for MPEG-4 referred to as adaptive scalable interframe coding (ASIC). Using this generalized approach, we develop a video coding scheme suitable for MPEG-4 based multimedia applications in the bitrate range of 320 kbit/s to 1024 kbit/s. The proposed scheme, referred to as source and bandwidth adaptive scalable interframe coding (SBASIC), builds on the proven framework of motion-compensated DCT coding and scalability while also introducing several new concepts. SNR and MPEG-4 subjective evaluation results are presented to show the good performance achieved by SBASIC. Next, an extension of SBASIC by motion-compensated 3D-DCT coding is discussed. It is envisaged that this extension, when complete, will further improve the coding efficiency of SBASIC.
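
SBASIC's internals are not given in the abstract, so the following is only a generic two-layer scalable-coding skeleton (a coarse base layer plus an enhancement layer refining its error), the family of techniques the scheme builds on. The quantizer step sizes are arbitrary illustration values.

```python
import numpy as np

def quantize(x, step):
    """Uniform scalar quantization with the given step size."""
    return np.round(x / step) * step

def scalable_encode(frame, base_step=16.0, enh_step=4.0):
    """Generic two-layer scalability sketch (not SBASIC itself):
    a coarse base layer, then an enhancement layer coding its error."""
    base = quantize(frame, base_step)   # coarse base-layer reconstruction
    residual = frame - base             # what the base layer missed
    enh = quantize(residual, enh_step)  # finer enhancement layer
    return base, enh

frame = np.random.default_rng(1).normal(128, 40, (16, 16))
base, enh = scalable_encode(frame)
print("base-only MSE:", float(((frame - base) ** 2).mean()))
print("base+enh MSE: ", float(((frame - base - enh) ** 2).mean()))
```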


Visual Communications and Image Processing | 1990

Digital HDTV Compression at 44 Mbps Using Parallel Motion-compensated Transform Coders

Hsueh-Ming Hang; Riccardo Leonardi; Barry G. Haskell; Robert L. Schmidt; Hemant Bheda; Joseph H. Othmer

In the absence of the requisite compression, a digital HDTV channel may cost as much as 1 Gbit/s in transmission bandwidth. A parallel-processing architecture is described that uses the proposed international standard for visual telephony to compress digital HDTV images. The images are partitioned into smaller subpictures, and these undergo compression using a coder that, with current technology, is cost-effective only for small images. Algorithms are devised to preserve picture quality across the subpictures.
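
The compression ratio implied by these figures, taking the abstract's 1 Gbit/s as the raw rate and the title's 44 Mbps as the channel:

```python
raw_rate_bps = 1_000_000_000  # uncompressed HDTV, the abstract's upper estimate
channel_bps = 44_000_000      # the 44 Mbps channel in the title
print(f"required compression: {raw_rate_bps / channel_bps:.1f}:1")  # ~22.7:1
```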


Proceedings of the IEEE | 1985

An experimental time-compression system for satellite television transmission

Robert L. Schmidt; Barry G. Haskell; K.Y. Eng; S.M. O'Riordan

We describe an experimental system demonstrating time-compression multiplexing (TCM) of two NTSC color television signals in a 36-MHz satellite channel. The system employs digital processing to derive line or field differentials from each picture. The television signals are then converted into interleaved lines (or fields) of unimpaired baseband video and companded line (or field) differentials. These signal components are finally time-compressed and multiplexed into a combined signal for single-carrier FM transmission. With 4.5-m earth stations, the field-differential technique offers extremely good transmission quality, suitable for TV distribution to cable head-ends (weighted signal-to-noise ratio, WSNR ≈ 51.5 dB), while the line-differential method provides a slightly lower WSNR (≈ 49 dB). We recommend the field-differential approach because of its superior overall picture quality. For larger receive stations (7 m), higher picture quality (WSNR ≈ 56 dB) can be obtained. If 10-m earth stations are employed, the received video performance is practically indistinguishable from that of the one-television-per-transponder case, and we infer that three pictures can indeed be sent with graceful degradation, as previously suggested. By choosing the parameters properly, the present TCM system can be optimized for a wide variety of applications with higher channel capacity.
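
The core TCM idea, sketched in discrete time: each signal is sped up so that two fit into one line period. This toy uses plain 2:1 decimation as a stand-in for the analog resampling, and it skips the differential derivation and companding described above, so it shows only the time-compress-and-multiplex step.

```python
import numpy as np

def time_compress(line, factor):
    """Speed a sampled line up by `factor` (simple decimation here;
    the real system resamples before FM transmission)."""
    return line[::factor]

def tcm_multiplex(line_a, line_b):
    """Fit two lines into one line period via 2:1 time compression."""
    return np.concatenate([time_compress(line_a, 2), time_compress(line_b, 2)])

t = np.linspace(0, 1, 800, endpoint=False)
line_a = np.sin(2 * np.pi * 5 * t)  # stand-ins for two baseband video lines
line_b = np.cos(2 * np.pi * 3 * t)
combined = tcm_multiplex(line_a, line_b)
print(combined.shape)  # (800,) -- two lines packed into one line period
```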


Visual Communications and Image Processing | 1989

A Low-Bit-Rate Video Codec Using Block-List-Transform

Hsueh-Ming Hang; Barry G. Haskell; Robert L. Schmidt; James C. Candy

A simple but efficient video codec has been built for low-bit-rate videophone applications. The goal of this project is to construct a video codec with low cost and good performance. To reduce hardware complexity, a simple interframe predictive coding structure is selected. The input pictures are partitioned into 2-D blocks, and only the blocks with significant frame differences are coded and transmitted to the decoder. The block-list transform (BLT) coding algorithm, a generalized DPCM, is used to encode the frame differences between the previously reconstructed picture and the current picture. Preliminary results using this coding system at 64 kbit/s show reasonable coding performance on typical videophone sequences.
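
A minimal sketch of the block-selection step, in which only blocks with significant frame differences get coded. The mean-absolute-difference threshold is an assumed significance criterion; the BLT coding of the selected blocks is not shown.

```python
import numpy as np

def significant_blocks(cur, prev, bs=8, thresh=10.0):
    """Flag the bs x bs blocks whose mean absolute frame difference
    exceeds a threshold; only those blocks would be coded and sent."""
    h, w = cur.shape
    coords = []
    for y in range(0, h, bs):
        for x in range(0, w, bs):
            diff = np.abs(cur[y:y+bs, x:x+bs] - prev[y:y+bs, x:x+bs]).mean()
            if diff > thresh:
                coords.append((y, x))
    return coords

rng = np.random.default_rng(2)
prev = rng.integers(0, 256, (48, 64)).astype(float)
cur = prev.copy()
cur[8:16, 24:32] += 60                 # one moving region
print(significant_blocks(cur, prev))   # -> [(8, 24)]
```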


Visual Communications and Image Processing | 1988

A Circuit for Evaluating 64 kbit/s Encoding Procedures

James C. Candy; Robert L. Schmidt; Hsueh-Ming Hang; E. G. Bowen; R. C. Brainard; Barry G. Haskell

A prototype encoder-decoder incorporating special VLSI circuits has been set up to demonstrate various interframe encoding techniques in real time. A video processing chip converts the signal from R-G-B to Y-U-V format and then filters it both horizontally and vertically before subsampling it in various ways. A memory control chip reforms the raster into a sequence of 8×8 blocks at 15 fields per second. A predictor chip derives frame-to-frame difference signals for each block in every other field and interpolates the remaining fields from the predicted ones. Both prediction and interpolation error signals can be encoded with block encoding techniques in a digital signal processor, or by quantizing their DCTs, which are generated in a special orthogonal-transform chip. Equivalent inverse processing is available at the receiver. The system is controlled by general-purpose microcomputers, one at the transmitter and one at the receiver. The circuit provides a means of evaluating various block encoding techniques and serves as a base for finding suitable subcircuits that can be integrated.
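
A sketch of the first stage in that pipeline, the R-G-B to Y-U-V conversion. The ITU-R BT.601 matrix below is an assumption; the paper does not state which coefficients the chip implements.

```python
import numpy as np

# Assumed ITU-R BT.601 luma/chroma coefficients (rows: Y, U, V).
RGB_TO_YUV = np.array([
    [ 0.299,    0.587,    0.114  ],
    [-0.14713, -0.28886,  0.436  ],
    [ 0.615,   -0.51499, -0.10001],
])

def rgb_to_yuv(rgb):
    """Convert an (..., 3) R-G-B array to Y-U-V -- the first step the
    video processing chip performs before filtering and subsampling."""
    return rgb @ RGB_TO_YUV.T

print(rgb_to_yuv(np.array([255.0, 0.0, 0.0])))  # pure red
```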


Archive | 1983

Technique for the time compression multiplexing of three television signals

Kai Y. Eng; Barin Geoffrey Haskell; Robert L. Schmidt


Archive | 1983

Technique for the time-frequency multiplexing of three television signals

Barin Geoffrey Haskell; Robert L. Schmidt

Collaboration


Dive into Robert L. Schmidt's collaborations.

Top Co-Authors

Hsueh-Ming Hang

National Chiao Tung University
