Publication


Featured research published by Kyungnam Kim.


International Conference on Pattern Recognition | 2014

Performance Evaluation of Neuromorphic-Vision Object Recognition Algorithms

Rangachar Kasturi; Dmitry B. Goldgof; Rajmadhan Ekambaram; Gill Pratt; Eric Krotkov; Douglas Hackett; Yang Ran; Qinfen Zheng; Rajeev Sharma; Mark B. Anderson; Mark Peot; Mario Aguilar; Deepak Khosla; Yang Chen; Kyungnam Kim; Lior Elazary; Randolph Charles Voorhies; Daniel Parks; Laurent Itti

The U.S. Defense Advanced Research Projects Agency's (DARPA) Neovision2 program aims to develop artificial vision systems based on the design principles employed by mammalian vision systems. Three such algorithms are briefly described in this paper. These neuromorphic-vision systems' performance in detecting objects in video was measured using a set of annotated clips. This paper describes the results of these evaluations, including the data domains, metrics, methodologies, performance over a range of operating points, and a comparison with computer-vision-based baseline algorithms.
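
The evaluation sweeps a detection-score threshold to produce performance over a range of operating points. As a minimal sketch of that style of scoring, the Python below computes precision and recall at one threshold; the matching of detections to annotations and the program's actual metrics are not reproduced here, and the sample numbers are purely illustrative.

    # Hypothetical sketch: precision/recall at one operating point, given
    # detections already matched to ground truth as (score, is_true_positive).
    def precision_recall(scored_detections, num_ground_truth, threshold):
        kept = [tp for score, tp in scored_detections if score >= threshold]
        tp = sum(kept)
        precision = tp / len(kept) if kept else 1.0
        recall = tp / num_ground_truth if num_ground_truth else 0.0
        return precision, recall

    # Sweeping the threshold traces out the range of operating points.
    dets = [(0.9, True), (0.8, True), (0.7, False), (0.4, True), (0.2, False)]
    for t in (0.1, 0.5, 0.85):
        print(t, precision_recall(dets, num_ground_truth=4, threshold=t))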


Proceedings of SPIE | 2012

Detection of unknown targets from aerial camera and extraction of simple object fingerprints for the purpose of target reacquisition

T. Nathan Mundhenk; Kang-Yu Ni; Yang Chen; Kyungnam Kim; Yuri Owechko

An aerial multiple-camera tracking paradigm needs not only to spot and track unknown targets, but also to handle target reacquisition as well as target handoff to other cameras in the operating theater. Here we discuss such a system, which is designed to spot unknown targets, track them, segment the useful features, and then create a signature fingerprint for the object so that it can be reacquired or handed off to another camera. The tracking system spots unknown objects by subtracting background motion from observed motion, allowing it to find targets in motion even if the camera platform itself is moving. The area of motion is then matched to segmented regions returned by the EDISON mean shift segmentation tool. Whole segments which have common motion and which are contiguous to each other are grouped into a master object. Once master objects are formed, we have a tight bound on which to extract features for the purpose of forming a fingerprint. This is done using color and simple entropy features. These can be placed into a myriad of different fingerprints. To keep data transmission and storage size low for camera handoff of targets, we try several simple techniques: the histogram, the spatiogram, and the single Gaussian model. These are tested by simulating a very large number of target losses in six videos, over an interval of 1000 frames each, from the DARPA VIVID video set. Since the fingerprints are very simple, they are not expected to be valid for long periods of time. As such, we test the shelf life of fingerprints: how long a fingerprint remains valid when stored away between target appearances. Shelf life gives us a second metric of goodness and tells us whether a fingerprint method has better accuracy over longer periods. In videos that contain multiple vehicle occlusions and vehicles of highly similar appearance, we obtain a reacquisition rate for automobiles of over 80% using the simple single Gaussian model, compared with the null hypothesis of <20%. Additionally, the performance of fingerprints stays well above the null hypothesis for as long as 800 frames. Thus, a simple and highly compact single Gaussian model is useful for target reacquisition. Since the model is agnostic to viewpoint and object size, it is expected to perform as well on a test of target handoff. Since some of the performance degradation is due to problems with the initial target acquisition and tracking, the simple Gaussian model may perform even better with an improved initial acquisition technique. Also, since the model makes no assumption about the object to be tracked, it should be possible to use it to fingerprint a multitude of objects, not just cars. Further accuracy may be obtained by creating manifolds of objects from multiple samples.
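
As a rough illustration of the single Gaussian model above, the sketch below fingerprints a target segment by the mean and covariance of its pixel colors and compares two fingerprints with the Bhattacharyya distance. The RGB-only features, the regularization, and the match threshold are assumptions rather than the paper's settings.

    import numpy as np

    def gaussian_fingerprint(pixels):
        """pixels: (N, 3) array of RGB values from the segmented target."""
        mu = pixels.mean(axis=0)
        cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(3)  # regularized
        return mu, cov

    def bhattacharyya(fp_a, fp_b):
        """Distance between two Gaussian fingerprints (smaller = more similar)."""
        (mu1, c1), (mu2, c2) = fp_a, fp_b
        c = 0.5 * (c1 + c2)
        d = mu1 - mu2
        mean_term = 0.125 * d @ np.linalg.solve(c, d)
        cov_term = 0.5 * np.log(np.linalg.det(c) /
                                np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
        return mean_term + cov_term

    # Reacquisition: compare the stored fingerprint against a new candidate.
    stored = gaussian_fingerprint(np.random.rand(500, 3))
    candidate = gaussian_fingerprint(np.random.rand(400, 3))
    print("match" if bhattacharyya(stored, candidate) < 0.5 else "no match")

A compact (mean, covariance) pair is cheap to transmit for camera handoff, which is the motivation the abstract gives for preferring such simple fingerprints.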


Proceedings of SPIE | 2013

A neuromorphic system for object detection and classification

Deepak Khosla; Yang Chen; Kyungnam Kim; Shinko Y. Cheng; Alexander L. Honda; Lei Zhang

Unattended object detection, recognition, and tracking on unmanned reconnaissance platforms in battlefields and urban spaces are topics of emerging importance. In this paper, we present an unattended object recognition system that automatically detects objects of interest in videos and classifies them into various categories (e.g., person, car, truck, etc.). Our system is inspired by recent findings in visual neuroscience on the feed-forward object detection and recognition pipeline, and mirrors it via two main neuromorphic modules: (1) a front-end detection module that combines form- and motion-based visual attention to search for and detect “integrated” object percepts, as is hypothesized to occur in the human visual pathways; and (2) a back-end recognition module that processes only the detected object percepts through a neuromorphic object classification algorithm based on multi-scale convolutional neural networks, which can be efficiently implemented in COTS hardware. Our neuromorphic system was evaluated using a variety of urban-area video data collected from both stationary and moving platforms. The data are quite challenging, as they include targets at long range under variable conditions of illumination and occlusion with high clutter. Our system showed excellent detection and classification performance in these experiments. In addition, the proposed bio-inspired approach is well suited to hardware implementation due to its low complexity and its mapping to off-the-shelf conventional hardware.
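
The two-module structure lends itself to a simple skeleton: the attention front-end proposes regions, and only those crops reach the classifier. The sketch below shows that dataflow only; both function bodies are placeholders (assumptions), not the paper's algorithms.

    import numpy as np

    def attention_front_end(frame):
        """Stand-in: return candidate object regions (x, y, w, h)
        from form- and motion-based attention."""
        return [(10, 20, 64, 64)]

    def cnn_back_end(chip):
        """Stand-in: classify a cropped object percept into a category."""
        return "car"

    def recognize(frame):
        results = []
        for (x, y, w, h) in attention_front_end(frame):
            chip = frame[y:y + h, x:x + w]  # only detected percepts are classified
            results.append(((x, y, w, h), cnn_back_end(chip)))
        return results

    print(recognize(np.zeros((480, 640, 3))))

Because the expensive classifier runs only on detected percepts, compute scales with scene content rather than with frame size.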


Multimedia Analysis, Processing and Communications | 2011

Object Detection and Tracking for Intelligent Video Surveillance

Kyungnam Kim; Larry S. Davis

As CCTV/IP cameras and network infrastructure become cheaper and more affordable, today's video surveillance solutions are more effective than ever before, providing new surveillance technology that is applicable to a wide range of end-users in retail, schools, homes, office campuses, industrial/transportation systems, and government sectors. Vision-based object detection and tracking, especially for video surveillance applications, is studied from algorithms to performance evaluation. This chapter covers three topics: (1) background modeling and detection, (2) performance evaluation of sensitive target detection, and (3) multi-camera segmentation and tracking of people.
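
For topic (1), a minimal textbook baseline gives the flavor of background modeling and detection: maintain a per-pixel running-average background and threshold incoming frames against it. This scheme, including the learning rate and threshold below, is an illustrative assumption, not the specific model developed in the chapter.

    import numpy as np

    def update_background(bg, frame, alpha=0.02):
        """Exponential running average of the scene background."""
        return (1 - alpha) * bg + alpha * frame

    def detect_foreground(bg, frame, thresh=30.0):
        """Boolean mask: pixels whose max channel difference exceeds thresh."""
        return np.abs(frame - bg).max(axis=-1) > thresh

    bg = np.zeros((480, 640, 3))
    frame = np.random.randint(0, 256, (480, 640, 3)).astype(float)
    mask = detect_foreground(bg, frame)
    bg = update_background(bg, frame)
    print(mask.mean())  # fraction of pixels flagged as foreground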


international symposium on visual computing | 2011

A neuromorphic approach to object detection and recognition in airborne videos with stabilization

Yang Chen; Deepak Khosla; David J. Huber; Kyungnam Kim; Shinko Y. Cheng

Research has shown that the application of an attention algorithm to the front-end of an object recognition system can provide a boost in performance over extracting regions from an image in an unguided manner. However, when video imagery is taken from a moving platform, attention algorithms such as saliency can lose their potency. In this paper, we show that this loss is due to the motion channels in the saliency algorithm not being able to distinguish object motion from motion caused by platform movement in the videos, and that an object recognition system for such videos can be improved through the application of image stabilization and saliency. We apply this algorithm to airborne video samples from the DARPA VIVID dataset and demonstrate that the combination of stabilization and saliency significantly improves object recognition system performance for both stationary and moving objects.
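
The key preprocessing step is registering consecutive frames so that residual differences reflect object motion rather than platform motion. One common recipe, sketched below with OpenCV, estimates a homography from matched ORB features and warps the previous frame into the current one; this particular registration pipeline is an assumption, not necessarily the paper's implementation.

    import cv2
    import numpy as np

    def stabilize(prev_gray, curr_gray):
        """Warp prev_gray into curr_gray's frame via a feature-based homography."""
        orb = cv2.ORB_create(500)
        kp1, des1 = orb.detectAndCompute(prev_gray, None)
        kp2, des2 = orb.detectAndCompute(curr_gray, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)
        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return cv2.warpPerspective(prev_gray, H, curr_gray.shape[::-1])

    # After stabilization, frame differences approximate true object motion,
    # which is what the saliency algorithm's motion channels need:
    # motion = cv2.absdiff(curr_gray, stabilize(prev_gray, curr_gray))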


Computer Vision and Pattern Recognition | 2014

2D/3D Sensor Exploitation and Fusion for Enhanced Object Detection

Jiejun Xu; Kyungnam Kim; Zhiqi Zhang; Hai-Wen Chen; Yuri Owechko

This paper describes a method for object detection and recognition (e.g., vehicles, pedestrians) using a combination of 2D and 3D sensor data. Detection in the individual data modalities is carried out in parallel, and the results are then combined using a fusion scheme to deliver the final output. Specifically, we first apply deformable part-based object detection in the 2D image domain to obtain initial estimates of candidate object regions. Meanwhile, 3D blobs (i.e., clusters of 3D points) containing potential objects are extracted from the corresponding input point cloud in an unsupervised manner. A novel morphological feature set, Morph166, is proposed to characterize each of these 3D blobs, and only blobs matched to predefined object models are kept. Based on the individual detections from the aligned 2D and 3D data, we further develop a fusion scheme to boost object detection and recognition confidence. Experimental results with the proposed method show good performance.
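
A late-fusion step in this spirit can be sketched as follows: 2D detections are boosted by 3D blob matches that overlap them in the image plane. The IoU gate and the weighted-sum combination rule below are assumptions, not the paper's exact fusion scheme.

    def iou(a, b):
        """Intersection-over-union of boxes given as (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union else 0.0

    def fuse(dets_2d, dets_3d, w2d=0.6, w3d=0.4, min_iou=0.3):
        """Each detection is (box, confidence); 3D blobs are assumed already
        projected into image-plane boxes via the 2D/3D alignment."""
        fused = []
        for box2, c2 in dets_2d:
            best = max((c3 for box3, c3 in dets_3d if iou(box2, box3) >= min_iou),
                       default=0.0)
            fused.append((box2, w2d * c2 + w3d * best))
        return fused

    print(fuse([((0, 0, 10, 10), 0.7)], [((1, 1, 9, 9), 0.9)]))  # boosted to 0.78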


Frontiers in Computational Neuroscience | 2014

A neuromorphic system for video object recognition

Deepak Khosla; Yang Chen; Kyungnam Kim

Automated video object recognition is a topic of emerging importance in both defense and civilian applications. This work describes an accurate and low-power neuromorphic architecture and system for real-time automated video object recognition. Our system, Neuromorphic Visual Understanding of Scenes (NEOVUS), is inspired by computational neuroscience models of feed-forward object detection and classification pipelines for processing visual data. The NEOVUS architecture is inspired by the ventral (what) and dorsal (where) streams of the mammalian visual pathway, and integrates retinal processing, object detection based on form and motion modeling, and object classification based on convolutional neural networks. The object recognition performance and energy use of the NEOVUS were evaluated by the Defense Advanced Research Projects Agency (DARPA) under the Neovision2 program using three urban-area video datasets collected from a mix of stationary and moving platforms. These datasets are challenging and include a large number of objects of different types in cluttered scenes, with varying illumination and occlusion conditions. In a systematic evaluation of five different teams by DARPA on these datasets, the NEOVUS demonstrated the best performance, with high object recognition accuracy and the lowest energy consumption. Its energy use was three orders of magnitude lower than that of two independent state-of-the-art baseline computer vision systems. The dynamic power requirement for the complete system, mapped to commercial off-the-shelf (COTS) hardware that includes a 5.6-Megapixel color camera processed by object detection and classification algorithms at 30 frames per second, was measured at 21.7 Watts (W), for an effective energy consumption of 5.45 nanoJoules (nJ) per bit of incoming video. These unprecedented results show that the NEOVUS has the potential to revolutionize automated video object recognition toward enabling practical low-power and mobile video processing applications.
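
The reported energy figure is easy to sanity-check from the numbers in the abstract, assuming 24-bit color pixels (an assumption; the abstract gives resolution, frame rate, and power, but not the bit depth used in the conversion):

    pixels_per_s = 5.6e6 * 30          # 5.6 Megapixels at 30 frames per second
    bits_per_s = pixels_per_s * 24     # assume 24 bits per color pixel
    energy_per_bit_nj = 21.7 / bits_per_s * 1e9
    print(energy_per_bit_nj)           # ~5.4 nJ/bit, consistent with 5.45 nJ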


Proceedings of SPIE | 2013

Real-time low-power neuromorphic hardware for autonomous object recognition

Deepak Khosla; Yang Chen; David J. Huber; Darrel J. Van Buer; Kyungnam Kim; Shinko Y. Cheng

Unmanned surveillance platforms have a ubiquitous presence in surveillance and reconnaissance operations. As the resolution and fidelity of the video sensors on these platforms increase, so do the bandwidth required to provide the data to the analyst and the subsequent analyst workload to interpret it. This leads to an increasing need to perform video processing on board the sensor platform, transmitting only critical information to the analysts and thus reducing both the data bandwidth requirements and the analyst workload. In this paper, we present a system for object recognition in video that employs embedded hardware and CPUs and can be implemented onboard an autonomous platform to provide real-time information extraction. Called NEOVUS (NEurOmorphic Understanding of Scenes), our system draws inspiration from models of mammalian visual processing and is implemented in state-of-the-art COTS hardware to achieve low size, weight, and power while maintaining real-time processing at reasonable cost. We use visual attention methods based on motion and form for the detection of stationary and moving objects from a moving platform, and employ multi-scale convolutional neural networks for classification, which have been mapped to FPGA hardware. Evaluation of our system has shown that we can achieve real-time speeds of thirty frames per second with up to five-megapixel resolution videos. Our system shows three to four orders of magnitude of power reduction compared to state-of-the-art computer vision algorithms, while reducing the communications bandwidth required for evaluation.
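
For the real-time claim, the arithmetic from the stated numbers is simple: 30 frames per second leaves roughly 33 ms of processing per frame, during which up to five megapixels must flow through detection and classification.

    fps, megapixels = 30, 5.0
    frame_budget_ms = 1000.0 / fps           # ~33.3 ms available per frame
    pixel_rate = megapixels * 1e6 * fps      # 150 Mpixel/s sustained throughput
    print(frame_budget_ms, pixel_rate / 1e6)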


Computer Vision and Pattern Recognition | 2017

Zero Shot Learning via Multi-scale Manifold Regularization

Shay Deutsch; Soheil Kolouri; Kyungnam Kim; Yuri Owechko; Stefano Soatto

We address zero-shot learning using a new manifold alignment framework based on a localized multi-scale transform on graphs. Our inference approach includes a smoothness criterion for a function mapping nodes on a graph (visual representation) onto a linear space (semantic representation), which we optimize using multi-scale graph wavelets. The robustness of the ensuing scheme allows us to operate with automatically generated semantic annotations, resulting in an algorithm that is entirely free of manual supervision, and yet improves the state-of-the-art as measured on benchmark datasets.
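
In its simplest single-scale form, the smoothness criterion can be written as a graph-Laplacian penalty on a linear map from visual features to semantic embeddings. The ridge-style closed-form solution below is a simplification for illustration; the paper instead optimizes smoothness with localized multi-scale graph wavelets.

    import numpy as np

    def manifold_regularized_map(X, Y, A, lam=0.1, mu=0.1):
        """X: (n, d) visual features; Y: (n, k) semantic targets;
        A: (n, n) symmetric graph adjacency.
        Minimizes ||XW - Y||^2 + lam*||W||^2 + mu*tr((XW)^T L (XW))."""
        L = np.diag(A.sum(axis=1)) - A               # graph Laplacian
        d = X.shape[1]
        lhs = X.T @ X + lam * np.eye(d) + mu * X.T @ L @ X
        return np.linalg.solve(lhs, X.T @ Y)

    n, d, k = 6, 4, 3
    X, Y = np.random.randn(n, d), np.random.randn(n, k)
    A = (np.random.rand(n, n) > 0.5).astype(float)
    A = (A + A.T) / 2.0
    np.fill_diagonal(A, 0.0)
    W = manifold_regularized_map(X, Y, A)
    print(W.shape)  # (4, 3): maps visual space into the semantic space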


Computer Vision and Pattern Recognition | 2012

Manifold-based fingerprinting for target identification

Kang-Yu Ni; Terrell N. Mundhenk; Kyungnam Kim; Yuri Owechko

In this paper, we propose a fingerprint analysis algorithm that uses product manifolds to create robust signatures for individual targets in motion imagery. The purpose of target fingerprinting is to reidentify a target after it disappears and then reappears, due to occlusion or leaving the camera's view, and to track targets persistently under camera handoff situations. The proposed method is statistics-based and has the benefit of being compact and invariant to viewpoint, rotation, and scaling. Moreover, it is a general framework and does not assume a particular type of object to be identified. For improved robustness, we also propose a method to detect outliers in a statistical manifold formed from the training data of individual targets. Our experiments show that the proposed framework is more accurate in target reidentification than single-instance signatures and patch-based methods.
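
The outlier-detection idea can be illustrated in a flat vector space: given several training signatures of the same target, drop those far from the rest before forming the final fingerprint. The Mahalanobis test below is an assumption for illustration; the paper performs this on a statistical manifold rather than on flattened vectors.

    import numpy as np

    def prune_outliers(signatures, max_dist=3.0):
        """signatures: (n, d) array of flattened target signatures."""
        S = np.asarray(signatures, dtype=float)
        mu = S.mean(axis=0)
        cov = np.cov(S, rowvar=False) + 1e-6 * np.eye(S.shape[1])
        inv = np.linalg.inv(cov)
        dist = np.sqrt(np.einsum('nd,de,ne->n', S - mu, inv, S - mu))
        return S[dist <= max_dist]        # keep only mutually consistent signatures

    sigs = np.vstack([np.random.randn(20, 5), 10 + np.random.randn(1, 5)])
    print(len(prune_outliers(sigs)))      # the shifted signature is dropped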
