
Publication


Featured research published by Wonyong Sung.


IEEE Signal Processing Letters | 1999

A statistical model-based voice activity detection

Jongseo Sohn; Nam Soo Kim; Wonyong Sung

In this letter, we develop a robust voice activity detector (VAD) for application to variable-rate speech coding. The VAD employs the decision-directed parameter estimation method for the likelihood ratio test. In addition, we propose an effective hang-over scheme that accounts for previous observations by modeling speech occurrences as a first-order Markov process. According to our simulation results, the proposed VAD performs significantly better than the G.729B VAD in low signal-to-noise ratio (SNR) and vehicular noise environments.
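
The frame-level decision described above can be sketched as an average of per-bin log-likelihood ratios under Gaussian signal models. This is a minimal illustration, not the paper's exact formulation: the function name, the threshold value, and the inputs `gamma` (a posteriori SNRs) and `xi` (a priori SNRs, e.g. from the decision-directed estimator) are assumptions.

```python
import math

def vad_decision(gamma, xi, threshold=0.5):
    """Frame-level VAD: mean log-likelihood ratio over frequency bins.

    gamma: list of a posteriori SNRs per bin.
    xi: list of a priori SNRs per bin (e.g., decision-directed estimates).
    Returns True when the frame is declared speech.
    """
    log_lr = 0.0
    for g, x in zip(gamma, xi):
        # Per-bin log-likelihood ratio under complex Gaussian models.
        log_lr += g * x / (1.0 + x) - math.log(1.0 + x)
    return log_lr / len(gamma) > threshold
```

A high-SNR frame (large `gamma` and `xi`) pushes the mean ratio above the threshold; a noise-only frame keeps it near or below zero.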


IEEE Transactions on Signal Processing | 1995

Simulation-based word-length optimization method for fixed-point digital signal processing systems

Wonyong Sung; Ki-Il Kum

Word-length optimization and scaling software that utilizes fixed-point simulation results with realistic input signal samples is developed for application to general signal processing systems, including nonlinear and time-varying ones. Word-length optimization is conducted to minimize the hardware implementation cost while satisfying a fixed-point performance measure. To minimize the computing time, signal grouping and efficient search methods are developed. The search algorithms first determine the minimum word-length bound for each group of signals and then find the cost-optimal solution using either exhaustive or heuristic methods.
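
The core idea, finding the minimum word-length by simulation rather than analysis, can be sketched as sweeping fractional word-lengths upward until the quantization error on sample data meets a tolerance. The names and the worst-case error criterion are illustrative; the paper's cost model, grouping, and search heuristics are omitted.

```python
def quantize(x, frac_bits):
    """Round x to a fixed-point grid with frac_bits fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def min_wordlength(signal, max_error, max_bits=16):
    """Smallest fractional word-length whose fixed-point simulation keeps
    the worst-case quantization error within max_error (None if none do)."""
    for bits in range(1, max_bits + 1):
        err = max(abs(x - quantize(x, bits)) for x in signal)
        if err <= max_error:
            return bits
    return None
```

In the paper this per-signal minimum bound seeds the subsequent exhaustive or heuristic cost-optimal search.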


IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2001

Combined word-length optimization and high-level synthesis of digital signal processing systems

Ki-Il Kum; Wonyong Sung

Conventional approaches to fixed-point implementation of digital signal processing algorithms perform scaling and word-length (WL) optimization at the algorithm level and high-level synthesis for functional-unit sharing at the architecture level. However, algorithm-level WL optimization has limitations: it can neither utilize functional-unit sharing information for signal grouping nor accurately estimate the hardware cost of each operation. In this study, we develop a combined WL optimization and high-level synthesis algorithm that not only minimizes the hardware implementation cost but also significantly reduces the optimization time. The software first finds the WL sensitivity, or minimum WL, of each signal through fixed-point simulations of a signal flow graph; it then performs WL-conscious high-level synthesis, in which signals with similar WL sensitivities are assigned to the same functional unit, and finally conducts the WL optimization by iteratively modifying the WLs of the synthesized hardware model. A list-scheduling-based and an integer linear-programming-based algorithm are developed for the WL-conscious high-level synthesis. The hardware cost function to be minimized is generated from a synthesized hardware model. Since fixed-point simulation is used to measure the performance, the method applies to general digital signal processing systems, including nonlinear and time-varying ones. A fourth-order infinite-impulse-response filter, a fifth-order elliptic filter, and a 12th-order adaptive least-mean-square filter are implemented using this software.
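
The WL-conscious binding step can be illustrated by clustering signals with similar minimum word-lengths onto the same functional units. This greedy equal-size split is a stand-in sketch only; the paper uses list-scheduling and integer linear-programming formulations, and all names here are assumptions.

```python
def group_by_sensitivity(min_wl, num_units):
    """Bind signals with similar minimum word-lengths to the same
    functional unit (illustrative sketch, not the paper's algorithm).

    min_wl: dict mapping signal name -> minimum word-length.
    Returns groups of signal names ordered by word-length.
    """
    ordered = sorted(min_wl, key=min_wl.get)  # ascending sensitivity
    size = -(-len(ordered) // num_units)      # ceiling division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]
```

Grouping like with like keeps a shared functional unit from being forced to the widest word-length of a mismatched signal.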


International Conference on Acoustics, Speech and Signal Processing | 1998

A voice activity detector employing soft decision based noise spectrum adaptation

Jongseo Sohn; Wonyong Sung

In this paper, a voice activity detector (VAD) for variable-rate speech coding is decomposed into two parts, a decision rule and a background noise statistic estimator, which are analyzed separately by applying a statistical model. A robust decision rule is derived from the generalized likelihood ratio test by assuming that the noise statistics are known a priori. To estimate the time-varying noise statistics while allowing for the occasional presence of the speech signal, a novel noise spectrum adaptation algorithm using the soft decision information of the proposed decision rule is developed. The algorithm is robust, especially for time-varying noise such as babble noise.
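
The soft-decision adaptation can be sketched as a recursive update in which the current frame contributes to the noise estimate only in proportion to the estimated speech-absence probability. The parameter names and the smoothing constant are illustrative assumptions, not the paper's values.

```python
def update_noise_power(noise_pow, frame_pow, p_speech, alpha=0.95):
    """Soft-decision noise spectrum adaptation for one frequency bin.

    noise_pow: current noise power estimate.
    frame_pow: observed power in the current frame.
    p_speech: probability that speech is present (soft decision).
    The frame only updates the estimate to the extent speech is absent.
    """
    target = p_speech * noise_pow + (1.0 - p_speech) * frame_pow
    return alpha * noise_pow + (1.0 - alpha) * target
```

With `p_speech` near 1 the estimate freezes, so loud speech frames do not corrupt the noise statistics; with `p_speech` near 0 it tracks the frame power.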


Signal Processing Systems | 2006

Low-Power High-Throughput BCH Error Correction VLSI Design for Multi-Level Cell NAND Flash Memories

Wei Liu; Junrye Rho; Wonyong Sung

As reliability is a critical issue for new-generation multi-level cell (MLC) flash memories, there is a growing demand for fast and compact error correction code (ECC) circuits with minimal impact on memory access time and chip area. This paper presents a high-throughput, low-power ECC scheme for MLC NAND flash memories. The BCH encoder and decoder architecture features byte-wise processing and a low-complexity key equation solver using a simplified Berlekamp-Massey algorithm. Resource sharing and power reduction techniques are also applied. Synthesized using 0.25-μm CMOS technology at a supply voltage of 2.5 V, the proposed BCH (4148,4096) encoder/decoder achieves byte-wise processing with an estimated cell area of 0.2 mm² and an average power of 3.18 mW at 50 MB/s throughput.


Asilomar Conference on Signals, Systems and Computers | 1994

Fixed-point simulation utility for C and C++ based digital signal processing programs

Seehyun Kim; Wonyong Sung

This utility software automatically converts a floating-point digital signal processing program written in C or C++ into a fixed-point program. The conversion is conducted by defining a new fixed-point data class and utilizing the operator overloading capability of the C++ language. A generalized fixed-point format, which consists of the word-length, integer word-length, sign, overflow mode, and quantization mode, is employed for specifying a fixed-point variable or constant. Thus, it is possible to simulate the finite word-length and scaling effects of digital signal processing programs very easily.
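
The operator-overloading idea translates directly to Python. This minimal sketch quantizes on construction and after every operation; the original C++ class additionally models sign, integer word-length, overflow, and quantization modes, which are omitted here.

```python
class Fixed:
    """Minimal fixed-point value: quantizes to frac_bits fractional bits
    on construction; arithmetic re-quantizes via operator overloading."""
    def __init__(self, value, frac_bits=8):
        self.frac_bits = frac_bits
        self.raw = round(value * (1 << frac_bits))  # rounding quantization
    def __float__(self):
        return self.raw / (1 << self.frac_bits)
    def __add__(self, other):
        return Fixed(float(self) + float(other), self.frac_bits)
    def __mul__(self, other):
        return Fixed(float(self) * float(other), self.frac_bits)
```

Swapping the floating-point type for such a class exposes finite word-length effects without rewriting the algorithm itself, which is the essence of the approach.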


Signal Processing Systems | 2014

Fixed-point feedforward deep neural network design using weights +1, 0, and −1

Kyuyeon Hwang; Wonyong Sung

Feedforward deep neural networks that employ multiple hidden layers show high performance in many applications, but they demand complex hardware for implementation. The hardware complexity can be lowered substantially by minimizing the word-lengths of weights and signals, but direct quantization for fixed-point network design does not yield good results. We optimize the fixed-point design by employing backpropagation-based retraining. The designed fixed-point networks with ternary weights (+1, 0, and -1) and 3-bit signals show only negligible performance loss compared to their floating-point counterparts. The backpropagation for retraining uses quantized weights and fixed-point signals to compute the output, but utilizes high-precision values for adapting the networks. Character recognition and phoneme recognition examples are presented.
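
The ternarization itself can be sketched as simple thresholding. The threshold `delta` is an assumed hyperparameter for illustration; the retraining loop, which recovers the accuracy lost to direct quantization, is not shown.

```python
def ternarize(weights, delta):
    """Map each weight to -1, 0, or +1: zero when |w| < delta,
    otherwise the sign of w."""
    return [0 if abs(w) < delta else (1 if w > 0 else -1) for w in weights]
```

With ternary weights, multiply-accumulate reduces to additions, subtractions, and skips, which is where the hardware savings come from.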


IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing | 2000

AUTOSCALER for C: an optimizing floating-point to integer C program converter for fixed-point digital signal processors

Ki-Il Kum; Jiyang Kang; Wonyong Sung

A translator that converts C-based floating-point digital signal processing programs into optimized integer C versions is developed for convenient programming and efficient use of fixed-point digital signal processors (DSPs). It not only converts data types and supports automatic scaling, but also performs shift optimization to enhance execution speed. Since the input and output of this translator are ANSI C compliant programs, it can be used with any fixed-point DSP that has an ANSI C compiler. The number of shift operations required for scaling in the converted integer programs is reduced by equalizing the integer word-lengths of related variables and constants. For an optimal reduction, a cost function that represents the overhead of scaling is formulated by considering the datapath of the target processor, program parsing, and profiling results. This cost function is then minimized using either integer linear programming or simulated annealing. The translated integer C codes are 5-400 times faster than the floating-point versions on the TMS320C50, TMS320C60, and Motorola 56000 DSPs.
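
The flavor of integer code such a translator emits can be illustrated with Q15 arithmetic, where each multiplication is followed by a right shift that restores the scale. The format choice and function names here are illustrative, not the tool's output.

```python
def to_q15(x):
    """Quantize a float in [-1, 1) to a Q15 integer (15 fractional bits)."""
    return round(x * (1 << 15))

def q15_mul(a, b):
    """Multiply two Q15 values; the >> 15 shift rescales the product
    back to Q15."""
    return (a * b) >> 15
```

Equalizing the integer word-lengths of related operands, as the abstract describes, is what lets the optimizer delete many of these rescaling shifts.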


IEEE Transactions on Very Large Scale Integration Systems | 2010

VLSI Implementation of BCH Error Correction for Multilevel Cell NAND Flash Memory

Hyojin Choi; Wei Liu; Wonyong Sung

Bit-error correction is crucial for realizing cost-effective and reliable NAND flash-memory-based storage systems. In this paper, low-power and high-throughput error-correction circuits are developed for multilevel cell (MLC) NAND flash memories. The circuits employ the Bose-Chaudhuri-Hocquenghem (BCH) code to correct multiple random bit errors, and the error-correcting codes are designed based on the bit-error characteristics of MLC NAND flash memories for solid-state drives. To trade off code rate, circuit complexity, and power consumption, three error-correcting architectures, named whole-page, sector-pipelined, and multistrip, are proposed. The VLSI design applies both algorithmic and architectural-level optimizations, including parallel algorithm transformation, resource sharing, and time multiplexing. Chip area, power consumption, and throughput results for the three architectures are presented.


International Conference on Acoustics, Speech, and Signal Processing | 2015

Fixed point optimization of deep convolutional neural networks for object recognition

Sajid Anwar; Kyuyeon Hwang; Wonyong Sung

Deep convolutional neural networks have shown promising results in image and speech recognition applications. The learning capability of a network improves with increasing depth and layer size, but this capability comes at the cost of increased computational complexity, so reduced hardware complexity and faster classification are highly desirable. This work proposes an optimization method for fixed-point deep convolutional neural networks. The parameters of a pre-trained high-precision network are first directly quantized using L2 error minimization. We quantize each layer one at a time, while the other layers remain at high precision, to determine the layer-wise sensitivity to word-length reduction. The network is then retrained with quantized weights. Two object recognition examples, MNIST and CIFAR-10, are presented. Our results indicate that quantization induces sparsity in the network, which reduces the effective number of network parameters and improves generalization. This work reduces the required memory storage to one tenth and achieves better classification results than the high-precision networks.
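
The direct L2-minimizing quantization step can be sketched as a grid search for the uniform step size of a symmetric quantizer. The candidate grid and level clipping are assumptions for illustration; the paper's per-layer procedure and retraining are omitted.

```python
def l2_quantize_step(weights, levels):
    """Return the uniform step size (from a coarse grid on (0, 1]) that
    minimizes the L2 error of quantizing weights to `levels` symmetric
    levels, i.e. multiples of the step in [-half, +half]."""
    half = (levels - 1) // 2
    def err(step):
        return sum((w - step * max(-half, min(half, round(w / step)))) ** 2
                   for w in weights)
    return min((i / 100 for i in range(1, 101)), key=err)
```

For ternary quantization (`levels=3`) the search reduces to picking the single scale that best covers the weight distribution.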

Collaboration


Dive into Wonyong Sung's collaborations.

Top Co-Authors

Kyuyeon Hwang, Seoul National University
Junho Cho, Seoul National University
Kisun You, Seoul National University
Dong-hwan Lee, Seoul National University
Hoseok Chang, Seoul National University
Jonghong Kim, Seoul National University
Jaewoo Ahn, Seoul National University
Jiyang Kang, Seoul National University
Ki-Il Kum, Seoul National University
Sungho Shin, Seoul National University