Publication


Featured research published by Seonyoung Lee.


IEEE Region 10 Conference | 2012

HOG feature extractor circuit for real-time human and vehicle detection

Seonyoung Lee; Haengseon Son; Jong Chan Choi; Kyoungwon Min

Smart vehicle technologies such as ADAS are attracting growing attention, and pedestrian and vehicle recognition based on machine vision is a key issue. In this paper, we propose a hardwired HOG feature extractor circuit for real-time human and vehicle detection and describe the hardware implementation results. Our HOG feature extractor supports weighted gradient values, 2D histogram interpolation and block normalization. We used simplified methods for the square-root and division operations in the hardware implementation. Our HOG feature extractor circuit was verified in an FPGA environment and can process 33 frames per second for 640×480 VGA images in real time.
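The abstract names the main HOG stages: weighted gradients, 2D histogram interpolation and block normalization. The Python sketch below shows those stages in software as a point of reference; the 8-pixel cell, 9 orientation bins and 2×2-cell blocks are common HOG defaults, not values taken from the paper, and the circuit's simplified square-root and division methods are not reproduced.

import numpy as np

def hog_cells(gray, cell=8, bins=9):
    # Per-cell orientation histograms with linear interpolation between
    # the two nearest bins (software reference, not the paper's circuit).
    gx = np.zeros(gray.shape, dtype=float)
    gy = np.zeros(gray.shape, dtype=float)
    gx[:, 1:-1] = gray[:, 2:].astype(float) - gray[:, :-2]
    gy[1:-1, :] = gray[2:, :].astype(float) - gray[:-2, :]
    mag = np.hypot(gx, gy)                      # the circuit approximates this sqrt
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation, 0..180 deg
    cy, cx = gray.shape[0] // cell, gray.shape[1] // cell
    hist = np.zeros((cy, cx, bins))
    bin_w = 180.0 / bins
    for y in range(cy * cell):
        for x in range(cx * cell):
            b = ang[y, x] / bin_w - 0.5
            b0 = int(np.floor(b)) % bins
            b1 = (b0 + 1) % bins
            f = b - np.floor(b)
            hist[y // cell, x // cell, b0] += mag[y, x] * (1 - f)
            hist[y // cell, x // cell, b1] += mag[y, x] * f
    return hist

def block_normalize(hist):
    # L2 normalization over overlapping 2x2-cell blocks.
    cy, cx, _ = hist.shape
    blocks = []
    for y in range(cy - 1):
        for x in range(cx - 1):
            v = hist[y:y + 2, x:x + 2, :].ravel()
            blocks.append(v / (np.linalg.norm(v) + 1e-6))
    return np.concatenate(blocks)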


International SoC Design Conference | 2009

Design of area-efficient unified transform circuit for multi-standard video decoder

Hoyoung Chang; Soojin Kim; Seonyoung Lee; Kyeongsoon Cho

This paper proposes a method to perform the inverse transform operations of three popular video compression standards, H.264, VC-1 and MPEG-4, using the concept of a delta coefficient matrix. We designed a unified inverse transform circuit based on the proposed method. Our circuit supports the 4-point and 8-point transforms of H.264, VC-1 and MPEG-4. The proposed unified circuit was verified on an SoC platform board, synthesized into a gate-level circuit using a 130nm standard cell library, and shown to be efficient in terms of area.
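The delta coefficient matrix idea can be illustrated in software: each standard's inverse-transform matrix is written as a shared base matrix plus a small per-standard correction, so a single datapath plus that correction covers all three standards. In the sketch below the matrices are placeholders chosen only to show the structure; they are not the actual H.264, VC-1 or MPEG-4 coefficients.

import numpy as np

# Placeholder 4x4 matrices chosen only to show the structure; the real
# H.264 / VC-1 / MPEG-4 inverse-transform coefficients are different.
T_BASE = np.array([[ 1,  1,  1,  1],
                   [ 2,  1, -1, -2],
                   [ 1, -1, -1,  1],
                   [ 1, -2,  2, -1]])

DELTA = {
    "h264":  np.zeros((4, 4), dtype=int),   # base matrix used as-is
    "vc1":   np.diag([1, 1, 1, 1]),         # hypothetical correction
    "mpeg4": np.diag([2, 0, 0, 2]),         # hypothetical correction
}

def inverse_transform_4x4(coeffs, standard):
    # 2-D separable inverse transform built from the shared base matrix
    # plus a small per-standard delta, so one datapath covers all standards.
    T = T_BASE + DELTA[standard]
    return T.T @ np.asarray(coeffs) @ T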


International Conference on Design and Technology of Integrated Systems in Nanoscale Era | 2007

Circuit implementation for transform and quantization operations of H.264/MPEG-4/VC-1 video decoder

Seonyoung Lee; Kyeongsoon Cho

The current trend of digital convergence leads to the need for a video decoder that can support multiple standards such as H.264, MPEG-4 and VC-1. We implemented a circuit to perform the transform and quantization operations for the three video compression standards. Instead of designing a circuit for each standard separately, we analyzed the transform and quantization operations in detail to find opportunities to share resources such as adders and multipliers, and devised a common architecture that can be applied to all of them. The resultant circuit is efficient in terms of size compared with separate implementations for each standard.
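As a reference point for the kind of adder/shifter datapath being shared, the standard H.264 4-point inverse core transform can be written as a butterfly of additions and arithmetic shifts, shown below in software; the cross-standard resource-sharing scheme itself is specific to the paper and is not reproduced here.

def h264_inverse_4pt(c):
    # H.264 4-point inverse core transform as an add/shift butterfly.
    # c is a length-4 sequence of integer coefficients.
    e0 = c[0] + c[2]
    e1 = c[0] - c[2]
    e2 = (c[1] >> 1) - c[3]
    e3 = c[1] + (c[3] >> 1)
    return [e0 + e3, e1 + e2, e1 - e2, e0 - e3]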


Asia Pacific Conference on Circuits and Systems | 2010

Implementation of lane detection system using optimized Hough transform circuit

Seonyoung Lee; Haengseon Son; Kyungwon Min

This paper describes a vision-based lane detection system with an optimized Hough transform circuit. The Hough transform is a popular method for finding line features in an image. It is very robust to noise and to changes in illumination level, but it requires long computation time and large data storage, and a straightforward implementation needs a large number of logic gates, which makes it difficult to apply in products that require real-time performance. In this paper, we propose an optimized Hough transform circuit architecture and a vision-based lane departure warning system. The proposed architecture minimizes both the logic size and the number of clock cycles. Our implemented Hough transform circuit shows better performance than other circuit architectures. We tested the Hough transform circuit and the lane departure warning system on a Xilinx FPGA board.
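For readers unfamiliar with the algorithm, the sketch below shows the classic (rho, theta) Hough accumulator that underlies the circuit; the paper's optimizations for logic size and cycle count are not reproduced here.

import numpy as np

def hough_lines(edge_points, height, width, theta_steps=180):
    # Classic (rho, theta) Hough accumulator for line detection.
    # edge_points is an iterable of (y, x) edge pixel coordinates.
    diag = int(np.ceil(np.hypot(height, width)))
    thetas = np.deg2rad(np.arange(theta_steps))
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    acc = np.zeros((2 * diag + 1, theta_steps), dtype=np.int32)
    for y, x in edge_points:
        rho = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rho, np.arange(theta_steps)] += 1
    return acc  # peaks in acc correspond to dominant lines (lane candidates)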


Asia Pacific Conference on Circuits and Systems | 2008

Design of high-performance transform and quantization circuit for unified video CODEC

Seonyoung Lee; Kyeongsoon Cho

This paper presents a new high-performance circuit architecture for the transform and quantization of a unified video CODEC. The proposed architecture can be applied to all of the transforms used in video compression standards such as JPEG, MPEG-1/2/4, H.264 and VC-1. It exploits the similarity of the 4-point and 8-point DCT using permutation matrices. Since our circuit accepts the transform coefficients from the user, it can be extended very easily to cover any DCT-based transform for future standards. The multipliers in the transform circuit are shared with the quantization circuit in order to minimize the circuit size, and the quantization operations are performed in spare clock cycles during the transform operations in order to minimize the number of clock cycles required. We described the proposed transform circuit at RTL and verified its operation on an FPGA board.
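The coefficient-programmable idea can be illustrated as follows: the datapath is a generic row/column matrix transform whose coefficient rows are supplied by the user, so any DCT-based standard is covered by loading its matrix. The example coefficients below are the well-known H.264 4×4 forward core transform; everything else in the sketch is illustrative.

import numpy as np

def make_transform(coeff_rows):
    # Build a row/column transform from user-supplied coefficient rows,
    # mirroring the idea of a coefficient-programmable datapath.
    T = np.asarray(coeff_rows, dtype=np.int64)
    def forward(block):
        return T @ np.asarray(block, dtype=np.int64) @ T.T
    return forward

# Example: the H.264 4x4 forward core transform coefficients.
h264_4pt = make_transform([[1,  1,  1,  1],
                           [2,  1, -1, -2],
                           [1, -1, -1,  1],
                           [1, -2,  2, -1]])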


Signal Processing Systems | 2007

Design of Transform and Quantization Circuit for Multi-Standard Integrated Video Decoder

Seonyoung Lee; Kyeongsoon Cho

This paper presents a new method to design a circuit that performs the inverse transform and inverse quantization operations of three popular video compression standards, WMV9, MPEG-4 and H.264. We introduced a delta coefficient matrix and implemented an integrated inverse transform circuit based on the proposed idea. We designed the integrated inverse quantization circuit using a shared multiplier. The entire circuit was verified on an SoC platform board, synthesized into a gate-level circuit using a 130nm standard cell library, and shown to be efficient in terms of circuit size.
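A rough software analogue of the shared-multiplier arrangement is sketched below: one scaling routine serves both the inverse quantization and any scaling needed around the inverse transform. This is only a sketch of the data flow; the actual sharing is a hardware-level detail and the per-standard scaling tables are not reproduced.

import numpy as np

def scale(values, factor):
    # One scaling routine standing in for the shared multiplier; it is
    # reused by both dequantization and transform scaling in this sketch.
    return np.asarray(values, dtype=np.int64) * factor

def decode_block(levels, qstep, inv_transform):
    # Dequantize the coefficient levels, then apply the supplied
    # inverse-transform function to reconstruct the residual block.
    coeffs = scale(levels, qstep)
    return inv_transform(coeffs)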


IEEE Region 10 Conference | 2012

Design of high-performance pedestrian and vehicle detection circuit using Haar-like features

Soojin Kim; Sangkyun Park; Seonyoung Lee; Seungsang Park; Kyeongsoon Cho

This paper describes the design of a high-performance pedestrian and vehicle detection circuit using Haar-like features for intelligent vehicle applications. The proposed circuit slides a window over every image frame to extract Haar-like features and detect pedestrians and vehicles. A total of 200 Haar-like features per sliding window are extracted by the Haar-like feature extraction circuit and provided to the AdaBoost classifier circuit. To increase the processing speed, the proposed circuit adopts a parallel architecture and can process two sliding windows at the same time. We described the proposed circuit in Verilog HDL and synthesized the gate-level circuit using a 130nm standard cell library. The synthesized circuit consists of 1,388,260 gates and its maximum operating frequency is 203MHz. Since the proposed circuit processes about 47.8 640×480 image frames per second, it can provide real-time pedestrian and vehicle detection for intelligent vehicle applications.
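Haar-like features are typically evaluated as box-sum differences over an integral image; the sketch below shows that evaluation in software. The two-rectangle feature geometry is an illustrative assumption, not one of the 200 features used in the paper.

import numpy as np

def integral_image(gray):
    # Summed-area table; Haar-like features become a few table lookups.
    return np.cumsum(np.cumsum(gray.astype(np.int64), axis=0), axis=1)

def box_sum(ii, y0, x0, y1, x1):
    # Sum of pixels in the inclusive rectangle (y0, x0) .. (y1, x1).
    s = ii[y1, x1]
    if y0 > 0:
        s -= ii[y0 - 1, x1]
    if x0 > 0:
        s -= ii[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        s += ii[y0 - 1, x0 - 1]
    return s

def haar_two_rect(ii, y, x, h, w):
    # A two-rectangle Haar-like feature: left half minus right half.
    half = w // 2
    left = box_sum(ii, y, x, y + h - 1, x + half - 1)
    right = box_sum(ii, y, x + half, y + h - 1, x + 2 * half - 1)
    return left - right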


Asia Pacific Conference on Circuits and Systems | 2006

Implementation of an AMBA-Compliant IP for H.264 Transform and Quantization

Seonyoung Lee; Kyeongsoon Cho

This paper presents an AMBA-based IP to perform the forward and inverse transform and quantization required in the H.264 video compression standard. The transform and quantization circuit was optimized for area and performance, and an AHB wrapper was added to the circuit for AMBA-based operation. The user of the IP can specify how long the bus may be occupied by the IP and where the video data are stored in the external memory. The AMBA-compliant operation of the proposed IP was verified on a platform board with a Xilinx FPGA and an ARM9 processor. We fabricated an MPW chip using 0.25µm standard cells to prove correct operation on silicon.


International SoC Design Conference | 2011

Design of AdaBoost classifier circuit using Haar-like features for automobile applications

Sangkyun Park; Seonyoung Lee; Soojin Kim; Kyeongsoon Cho

This paper describes the design of an AdaBoost classifier circuit using Haar-like features for automobile applications. To extract the features of an object, the proposed circuit uses the Haar-like feature extraction method and extracts a total of 200 Haar-like features per sliding window. The extracted features are provided to the AdaBoost classifier circuit to determine whether the object is the expected one or not. A 48×96 or 64×64 sliding window with a window stride of 10 is used for efficient pattern recognition. To increase the processing speed, the proposed circuit adopts a parallel architecture and can process two sliding windows at the same time. We described the proposed high-performance pattern recognition circuit in Verilog HDL and synthesized the gate-level circuit using a 130nm standard cell library. The synthesized circuit consists of 428,397 gates and its maximum operating frequency is 203MHz. The circuit uses 148 240×8-bit SRAMs, 107 192×10-bit SRAMs, and one 200×102-bit SRAM. It processes 38.4 640×480 image frames per second, assuming ten resolution levels per frame, i.e., nine successive downscalings of each 640×480 image frame.
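An AdaBoost strong classifier is a thresholded weighted sum of weak-classifier votes, which is what the classifier circuit evaluates per sliding window. The sketch below shows that evaluation in software; the stump parameters (feature index, split, polarity, weight) are hypothetical placeholders, not the trained values used in the paper.

def adaboost_classify(features, stumps, threshold=0.0):
    # Evaluate an AdaBoost strong classifier over one sliding window.
    # features: the window's feature vector (e.g. 200 Haar-like values).
    # stumps: list of (feature_index, split, polarity, alpha) weak classifiers.
    score = 0.0
    for idx, split, polarity, alpha in stumps:
        vote = 1 if polarity * features[idx] < polarity * split else -1
        score += alpha * vote
    return score >= threshold  # True means the target object was detected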


IEEE Region 10 Conference | 2010

Efficient pedestrian detection by Bin-interleaved Histogram of Oriented Gradients

Haengseon Son; Seonyoung Lee; Jongchan Choi; Kyungwon Min

This paper presents an efficient pedestrian detection method based on a Bin-interleaved Histogram of Oriented Gradients (Bi-HOG) for automotive applications. The state-of-the-art HOG feature [5] is adopted as the basic feature. We alternate even-bin cells and odd-bin cells within one block and then extract only the even-indexed feature elements for even-bin cells and only the odd-indexed feature elements for odd-bin cells, so the feature dimension of Bi-HOG is half that of HOG. We experimentally demonstrate that SVM classifiers trained with Bi-HOG achieve the same detection performance on the DaimlerChrysler data set as those trained with the original HOG in our two-stage pedestrian detection system, while considerably reducing the storage requirement and simplifying the computation.
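The bin-interleaving step can be sketched in software as follows: cells at even positions within a block keep only their even-indexed orientation bins and cells at odd positions keep only the odd-indexed ones, roughly halving the block descriptor length. The block layout and normalization below are illustrative assumptions, not the paper's exact configuration.

import numpy as np

def bi_hog_block(cell_hists):
    # cell_hists: (cells_y, cells_x, bins) per-cell histograms for one block.
    # Alternate cells keep only even- or odd-indexed bins, halving the
    # descriptor length relative to plain HOG.
    cy, cx, _ = cell_hists.shape
    parts = []
    for y in range(cy):
        for x in range(cx):
            start = (y * cx + x) % 2          # even cell -> even bins, odd -> odd
            parts.append(cell_hists[y, x, start::2])
    v = np.concatenate(parts)
    return v / (np.linalg.norm(v) + 1e-6)     # block normalization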

Collaboration


Dive into Seonyoung Lee's collaborations.

Top Co-Authors

Kyeongsoon Cho (Hankuk University of Foreign Studies)
Soojin Kim (Hankuk University of Foreign Studies)
Hoyoung Chang (Hankuk University of Foreign Studies)
Jaeoh Shim (Hankuk University of Foreign Studies)
Sangkyun Park (Hankuk University of Foreign Studies)
Dongyeob Chun (Hankuk University of Foreign Studies)
Hojin Kim (Hankuk University of Foreign Studies)
Jaeho Shin (Hankuk University of Foreign Studies)
Jihye Yoo (Hankuk University of Foreign Studies)
Jongtae Kim (Sungkyunkwan University)