S.G. Chang
University of California, Berkeley
Publications
Featured research published by S.G. Chang.
IEEE Transactions on Image Processing | 2000
S.G. Chang; Bin Yu; Martin Vetterli
The first part of this paper proposes an adaptive, data-driven threshold for image denoising via wavelet soft-thresholding. The threshold is derived in a Bayesian framework, and the prior used on the wavelet coefficients is the generalized Gaussian distribution (GGD) widely used in image processing applications. The proposed threshold is simple and closed-form, and it is adaptive to each subband because it depends on data-driven estimates of the parameters. Experimental results show that the proposed method, called BayesShrink, is typically within 5% of the MSE of the best soft-thresholding benchmark with the image assumed known. It also outperforms SureShrink (Donoho and Johnstone 1994, 1995; Donoho 1995) most of the time. The second part of the paper attempts to further validate claims that lossy compression can be used for denoising. The BayesShrink threshold can aid in the parameter selection of a coder designed with the intention of denoising, thus achieving simultaneous denoising and compression. Specifically, the zero-zone in the quantization step of compression is analogous to the threshold value in the thresholding function. The remaining coder design parameters are chosen based on a criterion derived from Rissanen's minimum description length (MDL) principle. Experiments show that this compression method does indeed remove noise significantly, especially for large noise power. However, it introduces quantization noise and should be used only if bit rate is a concern in addition to denoising.
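A minimal sketch of subband soft-thresholding with a data-driven threshold of the form sigma_n^2 / sigma_x, as described above; the function names, the MAD-based noise estimate, and the surrounding wavelet transform (e.g. PyWavelets) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def estimate_sigma(hh1):
    # Robust noise estimate from the finest diagonal subband (median absolute deviation).
    return np.median(np.abs(hh1)) / 0.6745

def bayes_shrink_subband(coeffs, sigma_noise):
    # Data-driven threshold t = sigma_n^2 / sigma_x for one detail subband.
    var_y = np.mean(coeffs ** 2)
    sigma_x = np.sqrt(max(var_y - sigma_noise ** 2, 1e-12))
    t = sigma_noise ** 2 / sigma_x
    # Soft thresholding: shrink coefficient magnitudes toward zero by t.
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```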
International Conference on Image Processing | 1998
S.G. Chang; Bin Yu; Martin Vetterli
The method of wavelet thresholding for removing noise, or denoising, has been researched extensively due to its effectiveness and simplicity. Much of the work has been concentrated on finding the best uniform threshold or best basis. However, not much has been done to make this method adaptive to spatially changing statistics, which are typical of a large class of images. This work proposes a spatially adaptive wavelet thresholding method based on context modeling, a common technique used in image compression to adapt the coder to the non-stationarity of images. We model each coefficient as a random variable with a generalized Gaussian prior with unknown parameters. Context modeling is used to estimate the parameters for each coefficient, which are then used to adapt the thresholding strategy. Experimental results show that spatially adaptive wavelet thresholding yields significantly superior image quality and lower MSE than optimal uniform thresholding.
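A rough sketch of per-coefficient adaptive thresholding, where a simple local-energy average stands in for the paper's context model; SciPy's uniform_filter and the threshold form sigma_n^2 / sigma_x are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatially_adaptive_threshold(subband, sigma_noise, window=5):
    # Local average of squared coefficients stands in for the context model;
    # it gives a per-coefficient estimate of the signal standard deviation.
    local_energy = uniform_filter(subband ** 2, size=window)
    sigma_x = np.sqrt(np.maximum(local_energy - sigma_noise ** 2, 1e-12))
    t = sigma_noise ** 2 / sigma_x  # one threshold per coefficient
    return np.sign(subband) * np.maximum(np.abs(subband) - t, 0.0)
```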
International Conference on Acoustics, Speech, and Signal Processing | 1993
S.G. Chang; David G. Messerschmitt
A novel decoding algorithm for MC-DCT (motion-compensation discrete-cosine-transform)-based video, which performs inverse MC before inverse DCT, is designed. This algorithm can be applied in compositing compressed video within the network, which may take multiple compressed video sources and combine them into a single compressed output stream. The proposed algorithm converts all MC-DCT compressed video into the DCT domain and performs compositing in the DCT domain. This DCT-domain approach can reduce the required computations with a speedup factor depending on the compression ratio and the nonzero motion vector percentage. However, dropping some least-significant DCT coefficients may be necessary for the worst case of high-motion video in real-time implementations. Some issues of networked video compositing are also discussed. Another direct application of the proposed decoding algorithm is converting MC-DCT compressed video to the DCT compressed format directly in the DCT domain.
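The DCT-domain approach relies on the fact that the orthonormal 2-D DCT distributes over matrix multiplication, so a pixel-domain manipulation Y = H X W can be carried out entirely on DCT blocks. A small NumPy/SciPy check of that identity follows; the shift matrices H and W are illustrative, not the paper's exact compositing operators.

```python
import numpy as np
from scipy.fft import dct

def dct2(block):
    # Orthonormal 2-D DCT: dct2(X) = C @ X @ C.T with C the DCT-II matrix.
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

N = 8
rng = np.random.default_rng(0)
X = rng.standard_normal((N, N))   # a pixel-domain block
H = np.eye(N, k=3)                # illustrative row-shift (windowing) matrix
W = np.eye(N, k=-2)               # illustrative column-shift matrix

# Pixel-domain manipulation Y = H X W has an exact DCT-domain counterpart,
# because the orthonormal DCT distributes over matrix multiplication.
assert np.allclose(dct2(H @ X @ W), dct2(H) @ dct2(X) @ dct2(W))
```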
International Conference on Acoustics, Speech, and Signal Processing | 1995
S.G. Chang; Zoran Cvetkovic; Martin Vetterli
The problem of image interpolation addressed here is magnifying a small image without loss of clarity. We propose a wavelet-based method which estimates the higher-resolution information needed to sharpen the image. This method extrapolates the wavelet transform of the higher resolution based on the evolution of the wavelet transform extrema across scales. By identifying three constraints that the higher-resolution information needs to obey, we enhance the reconstructed image through alternating projections onto the sets defined by these constraints.
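A generic sketch of the alternating-projections step (POCS), assuming the paper's three constraint sets are supplied as projection functions; the example constraints shown (consistency with a hypothetical observed low-resolution image, pixel range) are illustrative stand-ins.

```python
import numpy as np

def pocs(x0, projections, iters=20):
    # Alternating projections onto constraint sets: repeatedly project the
    # current estimate onto each set in turn.
    x = x0
    for _ in range(iters):
        for project in projections:
            x = project(x)
    return x

# Illustrative (hypothetical) constraints: consistency with the observed
# low-resolution image, and a valid pixel range.
def lowres_consistency(lowres, downsample, upsample):
    return lambda x: x + upsample(lowres - downsample(x))

clip_to_range = lambda x: np.clip(x, 0.0, 255.0)
```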
IEEE Transactions on Circuits and Systems for Video Technology | 1992
S.G. Chang; David G. Messerschmitt
Two classes of architectures, tree-based and PLA-based, have been discussed in the literature for the variable length code (VLC) decoder. Pipelined or parallel architectures in these two classes are proposed for high-speed implementation. The pipelined tree-based architectures have the advantages of fully pipelined design, short clock cycle, and partial programmability. They are suitable for concurrent decoding of multiple independent bit streams. The PLA-based architectures have greater flexibility and can take advantage of some high-level optimization techniques. The input/output rate can be fixed or variable to meet the application requirements. As an experiment, the authors have constructed a VLC based on a popular video compression system and compared the architectures. A layout of the major parts and a simulation of the critical path of the pipelined constant-input-rate PLA-based architecture, using a high-level synthesis approach, indicate that a decoding throughput of 200 Mb/s with a single chip is achievable in 2.0 µm CMOS technology.
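A software analogue of the tree-based decoding idea: walk a binary code tree one bit at a time and emit a symbol at each leaf. The code table below is illustrative, not a table from the paper, and the hardware described above pipelines this traversal rather than looping over it in software.

```python
def build_tree(code_table):
    # Build a binary decoding tree from {symbol: bitstring}.
    root = {}
    for symbol, bits in code_table.items():
        node = root
        for b in bits[:-1]:
            node = node.setdefault(b, {})
        node[bits[-1]] = symbol
    return root

def decode(bitstream, root):
    # Bit-serial decoding: follow one edge per input bit, emit a symbol at each leaf.
    out, node = [], root
    for b in bitstream:
        node = node[b]
        if not isinstance(node, dict):  # leaf reached
            out.append(node)
            node = root
    return out

table = {'A': '0', 'B': '10', 'C': '110', 'D': '111'}   # illustrative prefix code
print(decode('0101100111', build_tree(table)))           # -> ['A', 'B', 'C', 'A', 'D']
```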
IEEE Transactions on Image Processing | 2000
S.G. Chang; Bin Yu; Martin Vetterli
This correspondence addresses the recovery of an image from its multiple noisy copies. The standard method is to compute the weighted average of these copies. Since the wavelet thresholding technique has been shown to effectively denoise a single noisy copy, we consider in this paper combining the two operations of averaging and thresholding. Because thresholding is a nonlinear technique, averaging then thresholding or thresholding then averaging produce different estimators. By modeling the signal wavelet coefficients as Laplacian distributed and the noise as Gaussian, our investigation finds the optimal ordering to depend on the number of available copies and on the signal-to-noise ratio. We then propose thresholds that are nearly optimal under the assumed model for each ordering. With the optimal and near-optimal thresholds, the two methods yield similar performance, and both show considerable improvement over merely averaging.
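A minimal sketch of the two orderings on wavelet-coefficient arrays, assuming NumPy; the threshold t would be chosen per subband with the near-optimal rules derived in the correspondence, which are not reproduced here.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def average_then_threshold(copies, t):
    # Average the N noisy copies first (noise variance drops by N), then threshold once.
    return soft(np.mean(copies, axis=0), t)

def threshold_then_average(copies, t):
    # Threshold each noisy copy individually, then average the denoised results.
    return np.mean([soft(c, t) for c in copies], axis=0)
```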
International Conference on Image Processing | 1997
S.G. Chang; Bin Yu; Martin Vetterli
Some past work has proposed to use lossy compression to remove noise, based on the rationale that a reasonable compression method retains the dominant signal features more than the randomness of the noise. Building on this theme, we explain why compression (via coefficient quantization) is appropriate for filtering noise from signal by making the connection that quantization of transform coefficients approximates the operation of wavelet thresholding for denoising. That is, denoising is mainly due to the zero-zone, and the full precision of the thresholded coefficients is of secondary importance. The method of quantization is facilitated by a criterion similar to Rissanen's minimum description length principle. An important issue is the threshold value of the zero-zone (and of wavelet thresholding). For a natural image, it has been observed that its subband coefficients can be well modeled by a Laplacian distribution. With this assumption, we derive a threshold which is easy to compute and is intuitive. Experiments show that the proposed threshold performs close to optimal thresholding.
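An illustrative dead-zone quantizer shown next to plain soft thresholding, to make the analogy concrete: coefficients inside the zero-zone are discarded exactly as thresholding does, while the surviving coefficients keep only coarse precision. The reconstruction rule below is a generic assumption, not the coder designed in the paper.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def deadzone_quantize(x, zero_zone, step):
    # Coefficients inside the zero-zone are set to zero, as in thresholding;
    # the survivors keep only coarse precision (bin width `step`).
    mag = np.abs(x)
    reconstructed = np.round((mag - zero_zone) / step) * step + zero_zone
    return np.where(mag < zero_zone, 0.0, np.sign(x) * reconstructed)
```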
International Conference on Image Processing | 1997
S.G. Chang; Martin Vetterli
Wavelet thresholding with a uniform threshold has shown some success in denoising. For images, we propose that this can be improved by adjusting thresholds spatially, based on the rationale that detailed regions such as edges and textures tolerate some noise but not blurring, whereas smooth regions tolerate blurring but not noise. The proposed algorithm is based on multiscale edge detection and image segmentation, followed by thresholding the coefficients of different regions with region-adaptive thresholds.
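A small sketch of the region-adaptive step, assuming an edge/detail mask is already available from multiscale edge detection; the dilation and the two-level threshold map are illustrative simplifications of the segmentation-based scheme described above.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def region_adaptive_threshold(subband, edge_mask, t_smooth, t_edge):
    # Grow the edge/texture regions slightly, then use a smaller threshold there
    # (preserve detail) and a larger one in smooth regions (suppress noise).
    detail = binary_dilation(edge_mask, iterations=2)
    t = np.where(detail, t_edge, t_smooth)
    return np.sign(subband) * np.maximum(np.abs(subband) - t, 0.0)
```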
ACM Special Interest Group on Data Communication | 1992
S.G. Chang; David G. Messerschmitt
As video applications become popular, the production of video sources can be spread geographically and chronologically. Video sources produced at different locations or at different times can be composited into a single scene. Example applications include multi-point video conferencing, video editing/publishing, and advanced workstation displays [1]. In this paper, we study the pros and cons of compositing video objects in the network nodes rather than in the final receiver sites. Tradeoffs exist between communication cost and other performance factors such as video quality and computational complexity. In particular, we focus on video objects compressed with the motion compensation (MC) algorithm [3], since it causes extra overhead. We analyze the sources of the drawbacks and quantify them with simulations on real video sequences. We propose some methods to mitigate the drawbacks and retain the attractiveness of performing video compositing within the network.
International Conference on Communications | 1991
S.G. Chang; David G. Messerschmitt; Andres Albanese
Managing the bandwidth resource for an adaptable-bit-rate video service (in a MAN environment) provisioned with DQDB (distributed-queue dual-bus) access networks is the main objective of this work. Use is made of the bandwidth balancing (BWB) scheme to perform dynamic bandwidth allocation in addition to improving the fairness of the DQDB protocol. The steady-state average throughput measured at each station is directly proportional to its BWB modular value. Thus, the BWB modular value is the only control parameter required; no other overhead is necessary. The channel utilization can be nearly 100% when stations are overloaded. The transition of bandwidth reallocation makes adaptable-bit-rate video codecs more feasible. Simulation examples are presented, and the relevant analytical formulae are derived.
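A numeric sketch of the proportionality stated above, under the common assumption that each persistently overloaded station settles at a throughput equal to its BWB modulus times the unused bus capacity; the moduli and the unit capacity are illustrative, not taken from the paper's simulations.

```python
def bwb_steady_state_rates(moduli, capacity=1.0):
    # Assumed fixed point: r_i = M_i * (capacity - sum_j r_j), so each rate is
    # proportional to its modulus M_i and the idle share is capacity / (1 + sum M).
    total = sum(moduli)
    unused = capacity / (1.0 + total)
    return [m * unused for m in moduli]

# Three overloaded stations with moduli 2, 4, 8 share the bus as 2/15, 4/15, 8/15,
# leaving 1/15 idle; utilization approaches 100% as the moduli grow.
print(bwb_steady_state_rates([2, 4, 8]))
```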