Publication


Featured research published by Youchang Kim.


International Solid-State Circuits Conference | 2014

18.4 A 4.9mΩ-sensitivity mobile electrical impedance tomography IC for early breast-cancer detection system

Sunjoo Hong; Kwonjoon Lee; Unsoo Ha; Hyunki Kim; Yongsu Lee; Youchang Kim; Hoi-Jun Yoo

A mobile electrical impedance tomography (EIT) IC is proposed for early breast-cancer detection at home. To assemble the entire system into a simple brassiere shape, the EIT IC is integrated on a multi-layered fabric circuit board which includes 90 EIT electrodes and two reference electrodes for current stimulation and voltage sensing. The IC supports three operating modes: gain scanning, contact-impedance monitoring, and EIT modes, for a clear EIT image. A differential sinusoidal current stimulator (DSCS) is proposed for the injection of a low-distortion programmable current whose harmonics are below -59 dBc at a load impedance of 2 kΩ. To achieve high sensitivity, a 6-channel voltage-sensing amplifier can adaptively control its gain up to a maximum of 60 dB and has a low input-referred noise of 36 nV/√Hz. The 2.5 × 5 mm chip is fabricated in a 0.18 μm 1P6M CMOS process and consumes 53.4 mW on average. As a result, a sensitivity of 4.9 mΩ is achieved, which enables the detection of a 5 mm cancer mass within an agar test phantom.
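The -59 dBc harmonic-distortion figure above can be checked on sampled data with a simple FFT-based measurement. The sketch below is illustrative only: the sampling rate, stimulation frequency, and distortion level are made-up values, not parameters of the chip.

```python
import numpy as np

def harmonics_dbc(signal, fs, f0, n_harmonics=3):
    """Estimate the levels (in dBc, relative to the fundamental f0)
    of the 2nd..(n+1)th harmonics from a windowed FFT."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    level = lambda f: spec[np.argmin(np.abs(freqs - f))]
    fund = level(f0)
    return [20 * np.log10(level(k * f0) / fund) for k in range(2, 2 + n_harmonics)]

# Hypothetical 50 kHz stimulation current carrying a -60 dBc 2nd harmonic
fs, f0 = 1_000_000, 50_000
t = np.arange(4000) / fs
i_stim = np.sin(2 * np.pi * f0 * t) + 1e-3 * np.sin(2 * np.pi * 2 * f0 * t)
```

With an integer number of cycles in the window, the estimate recovers the injected -60 dBc 2nd harmonic almost exactly.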


International Solid-State Circuits Conference | 2013

A 646GOPS/W multi-classifier many-core processor with cortex-like architecture for super-resolution recognition

Jun-Young Park; Injoon Hong; Gyeonghoon Kim; Youchang Kim; Kyuho Jason Lee; Seong-Wook Park; Kyeongryeol Bong; Hoi-Jun Yoo

Object recognition processors have been reported for applications in autonomous vehicle navigation, smart surveillance, and unmanned air vehicles (UAVs) [1-3]. Most of these processors adopt a single classifier rather than multiple classifiers, even though multi-classifier systems (MCSs) offer more accurate recognition with higher robustness [4]. In addition, MCSs can incorporate the human vision system (HVS) recognition architecture to reduce computational requirements and enhance recognition accuracy. For example, HMAX models the exact hierarchical architecture of the HVS for improved recognition accuracy [5]. Compared with SIFT, known to have the best recognition accuracy based on local features extracted from the object [6], HMAX can recognize an object based on global features by template matching and a maximum-pooling operation, without feature segmentation. In this paper we present a multi-classifier many-core processor combining the HMAX and SIFT approaches on a single chip. Through the combined approach, the system can: 1) pay attention to the target object directly, with global context consideration, even with a complicated background or camouflaging obstacles; 2) utilize a super-resolution algorithm to recognize highly blurred or small objects; and 3) recognize more than 200 objects in real time by context-aware feature matching.
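HMAX's global-feature step mentioned above, template matching followed by maximum pooling, can be sketched in a few lines. The toy Python version below (normalized cross-correlation over all positions, then a global max; the sizes and data are hypothetical) shows why no feature segmentation is needed: the pooled score is independent of where the template occurs.

```python
import numpy as np

def hmax_c2(feature_map, template):
    """S2-like stage: normalized correlation of the template at every
    position; C2-like stage: global max pooling to one scalar."""
    th, tw = template.shape
    h, w = feature_map.shape
    tvec = template.ravel() - template.mean()
    tnorm = np.linalg.norm(tvec) + 1e-9
    best = -np.inf
    for y in range(h - th + 1):
        for x in range(w - tw + 1):
            patch = feature_map[y:y + th, x:x + tw].ravel()
            pvec = patch - patch.mean()
            score = float(tvec @ pvec) / (tnorm * (np.linalg.norm(pvec) + 1e-9))
            best = max(best, score)
    return best
```

Planting the template anywhere in a noise image drives the pooled score to ~1, regardless of position.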


International Solid-State Circuits Conference | 2014

10.4 A 1.22TOPS and 1.52mW/MHz augmented reality multi-core processor with neural network NoC for HMD applications

Gyeonghoon Kim; Youchang Kim; Kyuho Jason Lee; Seong-Wook Park; Injoon Hong; Kyeongryeol Bong; Dongjoo Shin; Sungpill Choi; Jinwook Oh; Hoi-Jun Yoo

Augmented reality (AR) is being investigated in advanced displays for the augmentation of images in a real-world environment. Wearable systems, such as head-mounted display (HMD) systems, have attempted to support real-time AR as a next-generation UI/UX [1-2], but have failed due to their limited computing power. In a prior work, a chip with limited AR functionality was reported that could perform AR with the help of markers placed in the environment (usually 1D or 2D bar codes) [3]. However, for a seamless visual experience, 3D objects should be rendered directly on the natural video image without any markers. Unlike marker-based AR, markerless AR requires natural feature extraction, general object recognition, 3D reconstruction, and camera-pose estimation to be performed in parallel. For instance, markerless AR on a VGA test video consumes ~1.3W at 0.2fps throughput on TI's OMAP4430, which exceeds the power limits of wearable devices. Consequently, a high-performance energy-efficient markerless AR processor is needed to realize a real-time AR system, especially for HMD applications.


International Solid-State Circuits Conference | 2015

18.3 A 0.5V 54μW ultra-low-power recognition processor with 93.5% accuracy geometric vocabulary tree and 47.5% database compression

Youchang Kim; Injoon Hong; Hoi-Jun Yoo

Microwatt object recognition is being considered for many applications, such as autonomous micro-air-vehicle (MAV) navigation, vision-based wake-up or user authentication for smartphones, and gesture-recognition-based natural UI for wearable devices in the Internet-of-Things (IoT) era. These applications require extremely low power consumption while maintaining high recognition accuracy, constraints that arise from the requirement for continuous heavy vision processing under limited battery capacity. Recently, a low-power feature-extraction accelerator operating at near-threshold voltage (NTV) was proposed; however, it did not support the object matching essential for object recognition [1]. Even state-of-the-art object-matching accelerators consume over 10mW, making them unsuitable for an MAV [2, 3]. Therefore, an ultra-low-power high-accuracy recognition processor is necessary, especially for MAVs and IoT devices.
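A vocabulary tree, as in the title above, quantizes a descriptor by descending a tree of centroids; the leaf reached is its visual word. The toy sketch below shows only the descent logic (a real tree trains distinct k-means centroids per node, and the chip's geometric extension is not modeled; all sizes here are made up).

```python
import numpy as np

class VocabularyTree:
    """Toy vocabulary tree: `depth` levels with `branch` centroids each.
    For illustration, one random centroid table is shared per level
    instead of per-node trained k-means clusters."""
    def __init__(self, dim, branch=4, depth=3, seed=0):
        rng = np.random.default_rng(seed)
        self.levels = [rng.standard_normal((branch, dim)) for _ in range(depth)]
        self.branch = branch

    def word(self, desc):
        # descend level by level to the nearest centroid; the path taken
        # encodes the leaf index (the visual word)
        w = 0
        for centroids in self.levels:
            child = int(np.argmin(np.linalg.norm(centroids - desc, axis=1)))
            w = w * self.branch + child
        return w
```

Quantization is deterministic, so identical descriptors always map to the same word out of branch**depth leaves.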


International Solid-State Circuits Conference | 2015

18.1 A 2.71nJ/pixel 3D-stacked gaze-activated object-recognition system for low-power mobile HMD applications

Injoon Hong; Kyeongryeol Bong; Dongjoo Shin; Seong-Wook Park; Kyuho Jason Lee; Youchang Kim; Hoi-Jun Yoo

Smart eyeglasses or head-mounted displays (HMDs) have been gaining traction as next-generation mainstream wearable devices. However, previous HMD systems [1] have had limited application, primarily due to their lacking a smart user interface (UI) and user experience (UX). Since HMD systems have a small, compact wearable platform, their UI requires new modalities rather than a computer mouse or a 2D touch panel. Recent speech-recognition-based UIs require voice input that reveals the user's intention not only to HMD users but also to others, which raises privacy concerns in a public space. In addition, prior works [2-3] attempted to support object recognition (OR) or augmented reality (AR) in smart eyeglasses, but consumed considerable power, >381mW, resulting in <6 hours of operation with a 2100mWh battery.


International Solid-State Circuits Conference | 2017

14.6 A 0.62mW ultra-low-power convolutional-neural-network face-recognition processor and a CIS integrated with always-on haar-like face detector

Kyeongryeol Bong; Sungpill Choi; Chang-Hyeon Kim; Sang Hoon Kang; Youchang Kim; Hoi-Jun Yoo

Recently, face recognition (FR) based on an always-on CIS has been investigated for the next-generation UI/UX of wearable devices. An FR system, shown in Fig. 14.6.1, was developed as a life-cycle analyzer or a personal black box, constantly recording the people we meet, along with time and place information. In addition, FR with always-on capability can be used for user authentication for secure access to a smartphone and other personal systems. Since wearable devices have a limited battery capacity due to their small form factor, extremely low power consumption is required while maintaining high recognition accuracy. Previously, a 23mW FR accelerator [1] was proposed, but its accuracy was low due to its hand-crafted feature-based algorithm. Deep learning using a convolutional neural network (CNN) is essential to achieve high accuracy and to enhance device intelligence. However, previous CNN processors (CNNP) [2-3] consume too much power, resulting in <10 hours of operation with a 190mAh coin battery.
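The always-on Haar-like face detector named in the title relies on rectangle sums computed in O(1) from an integral image, which is what makes an always-on front end cheap. A minimal sketch, assuming a simple two-rectangle feature (not the detector's actual feature set):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, y, x, h, w):
    """Sum of any h x w box in four lookups, independent of box size."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    """Two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return box_sum(ii, y, x, h, half) - box_sum(ii, y, x + half, h, half)
```

On an image whose left half is bright, the feature responds with the full left-minus-right contrast.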


International Solid-State Circuits Conference | 2016

14.3 A 0.55V 1.1mW artificial-intelligence processor with PVT compensation for micro robots

Youchang Kim; Dongjoo Shin; Jinsu Lee; Yongsu Lee; Hoi-Jun Yoo

Micro robots with artificial intelligence (AI) are being investigated for many applications, such as unmanned delivery services. The robots have enhanced controllers that realize AI functions, such as perception (information extraction) and cognition (decision making). Historically, controllers have been based on general-purpose CPUs, and only recently, a few perception SoCs have been reported. SoCs with cognition capability have not been reported thus far, even though cognition is a key AI function in micro robots for decision making, especially autonomous drones. Path planning and obstacle avoidance require more than 10,000 searches within 50ms for a fast response, but a software implementation running on a Cortex-M3 takes ~5s to make decisions. Micro robots require 10× lower power and 100× faster decision making than conventional robots because of their fast movement in the environment, small form factor, and limited battery capacity. Therefore, an ultra-low-power high-performance artificial-intelligence processor (AIP) is necessary for micro robots to make fast and smart maneuvers in dynamic environments filled with obstacles.
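The abstract does not name the search algorithm, but grid-based path planning with obstacle avoidance is commonly done with A*-style search; the hypothetical sketch below illustrates the kind of per-query work (node expansions over an occupancy grid) behind the quoted budget of >10,000 searches within 50ms.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (1 = obstacle), Manhattan
    heuristic. Returns the shortest path length in steps, or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    open_set = [(h(start), 0, start)]
    best = {start: 0}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:
            return g
        if g > best.get(cur, float('inf')):
            continue  # stale queue entry
        y, x = cur
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < rows and 0 <= nx < cols and not grid[ny][nx]:
                ng = g + 1
                if ng < best.get((ny, nx), float('inf')):
                    best[(ny, nx)] = ng
                    heapq.heappush(open_set, (ng + h((ny, nx)), ng, (ny, nx)))
    return None
```

The planner routes around a wall and reports unreachability when the goal is blocked.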


IEEE Journal of Solid-State Circuits | 2016

A 2.71 nJ/Pixel Gaze-Activated Object Recognition System for Low-Power Mobile Smart Glasses

Injoon Hong; Kyeongryeol Bong; Dongjoo Shin; Seong-Wook Park; Kyuho Jason Lee; Youchang Kim; Hoi-Jun Yoo

A low-power object recognition (OR) system with an intuitive gaze user interface (UI) is proposed for battery-powered smart glasses. For a low-power gaze UI, we propose a low-power single-chip gaze estimation sensor, called the gaze image sensor (GIS). In the GIS, a novel column-parallel pupil edge detection circuit (PEDC) with a new pupil edge detection algorithm, XY pupil detection (XY-PD), is proposed, which results in 2.9× power reduction with 16× larger resolution compared to previous work. Also, a logarithmic SIMD processor is proposed for robust pupil center estimation (<1 pixel error) with a low-power floating-point implementation. For OR, a low-power multicore OR processor (ORP) is implemented. In the ORP, a task-level pipeline with keypoint-level scoring is proposed to reduce the number of cores as well as the operating frequency of the keypoint-matching processor (KMP) for low power consumption. Also, a dual-mode convolutional neural network processor (CNNP) is designed for fast tile selection without external memory accesses. In addition, a pipelined descriptor generation processor (DGP) with LUT-based nonlinear operation is newly proposed for low-power OR. Lastly, dynamic voltage and frequency scaling (DVFS) is applied for dynamic power reduction in the ORP. Combining the GIS and ORP, both fabricated in a 65 nm CMOS logic process, only 75 mW average power consumption is achieved with real-time OR performance, which is 1.2× and 4.4× lower power than previously published works.
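Pupil-center estimation from detected edge points, as performed by the logarithmic SIMD processor above, can be illustrated with a least-squares circle fit. The sketch below uses the standard Kasa algebraic fit, which is an assumption for illustration; the chip's exact estimation method is not specified in the abstract.

```python
import numpy as np

def fit_circle(xs, ys):
    """Kasa algebraic circle fit: least-squares solve of
    x^2 + y^2 = 2*a*x + 2*b*y + c, giving center (a, b) and
    radius r = sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
    rhs = xs ** 2 + ys ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a ** 2 + b ** 2)
```

On clean synthetic edge points the fit recovers the center to well under a pixel, matching the sub-pixel accuracy goal quoted above.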


IEEE Journal of Solid-State Circuits | 2015

A 27 mW Reconfigurable Marker-Less Logarithmic Camera Pose Estimation Engine for Mobile Augmented Reality Processor

Injoon Hong; Gyeonghoon Kim; Youchang Kim; Donghyun Kim; Byeong-Gyu Nam; Hoi-Jun Yoo

A marker-less camera pose estimation engine (CPEE) with a reconfigurable logarithmic SIMD processor is proposed for the view-angle estimation used in mobile augmented reality (AR) applications. Compared to the previous marker-based approach, marker-less camera pose estimation can overlay virtual images directly on the natural world without the help of markers. However, it requires 150× larger computing cost and power-consuming floating-point operations, which severely restrict real-time operation and low-power implementation on mobile platforms, respectively. To close the gap in computational cost between the marker-based and marker-less methods, speculative execution (SE) and a reconfigurable data-arrangement layer (RDL) are proposed, reducing computing time by 17% and 27%, respectively. For a low-power implementation of the floating-point units, a logarithmic processing element (LPE) is designed to reduce overall power consumption by 18%. The proposed marker-less CPEE is fabricated in 65 nm logic CMOS technology and successfully realizes real-time marker-less camera pose estimation with only 27 mW power consumption.
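The logarithmic processing element above saves power because multiplication in a log-number system reduces to addition. A minimal numeric sketch of that identity (real LNS hardware uses fixed-point log encodings and table-based add/subtract, omitted here):

```python
import math

def lns_encode(x):
    """Encode a positive value in a log-number system: store log2(x)."""
    return math.log2(x)

def lns_mul(lx, ly):
    # multiplication in the log domain is just an addition
    return lx + ly

def lns_decode(l):
    """Return to the linear domain."""
    return 2.0 ** l
```

For example, multiplying 3 by 5 becomes adding their log2 encodings and decoding back to 15.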


International Symposium on Circuits and Systems | 2016

A 17.5 fJ/bit energy-efficient analog SRAM for mixed-signal processing

Jinsu Lee; Dongjoo Shin; Youchang Kim; Hoi-Jun Yoo

An energy-efficient analog SRAM (A-SRAM) is proposed to eliminate redundant analog-to-digital (A/D) and digital-to-analog (D/A) conversions in mixed-signal processing, such as biomedical and neural-network applications. The D/A conversion is integrated into the SRAM readout through charge sharing on the proposed split bit-line (BL), and the A/D conversion into the SRAM write through the successive-approximation method. A data structure is also newly proposed to allocate each bit of the input data to the binary-weighted bit-cell array. The proposed A-SRAM is implemented in 65 nm CMOS technology. As a result, it achieves 17.5 fJ/bit read energy efficiency and 21 Gbit/s read throughput, which are 54% lower and 1.3× higher, respectively, than a conventional SRAM. Also, the area is reduced by 31% compared to a conventional SRAM with an ADC and DAC.
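The two conversions folded into the A-SRAM above, successive-approximation A/D on write and binary-weighted D/A on read, can be sketched arithmetically. The Python below models only the ideal math, not the charge-sharing circuit; the resolutions and voltages are made-up examples.

```python
def sar_adc(vin, vref, bits):
    """Successive-approximation A/D conversion: settle one bit at a
    time, MSB first, against binary-weighted fractions of vref."""
    code, trial = 0, 0.0
    for b in range(bits - 1, -1, -1):
        step = vref * (1 << b) / (1 << bits)
        if trial + step <= vin:
            trial += step
            code |= 1 << b
    return code

def binary_weighted_dac(code, vref, bits):
    """Binary-weighted D/A (the charge-sharing readout in the ideal
    case): each set bit contributes its weighted fraction of vref."""
    return vref * code / (1 << bits)
```

A write followed by a read round-trips any input to within one LSB, which is the ideal-case behavior the split bit-line implements in charge.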
