
Publication


Featured research published by Shinko Y. Cheng.


Proceedings of SPIE | 2013

A neuromorphic system for object detection and classification

Deepak Khosla; Yang Chen; Kyungnam Kim; Shinko Y. Cheng; Alexander L. Honda; Lei Zhang

Unattended object detection, recognition and tracking on unmanned reconnaissance platforms in battlefields and urban spaces are topics of emerging importance. In this paper, we present an unattended object recognition system that automatically detects objects of interest in videos and classifies them into various categories (e.g., person, car, truck). Our system is inspired by recent findings in visual neuroscience on the feed-forward object detection and recognition pipeline and mirrors it via two main neuromorphic modules: (1) a front-end detection module that combines form- and motion-based visual attention to search for and detect “integrated” object percepts, as is hypothesized to occur in the human visual pathways; and (2) a back-end recognition module that processes only the detected object percepts through a neuromorphic object classification algorithm based on multi-scale convolutional neural networks, which can be efficiently implemented in COTS hardware. Our neuromorphic system was evaluated on a variety of urban-area video data collected from both stationary and moving platforms. The data are quite challenging, as they include targets at long range under variable illumination and occlusion with high clutter. In our experiments, the system showed excellent detection and classification performance. In addition, the proposed bio-inspired approach is well suited to hardware implementation due to its low complexity and its mapping onto off-the-shelf conventional hardware.
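
To make the two-stage design concrete, here is a minimal Python sketch of an "attend, then classify" pipeline. It is only an illustration under stated assumptions: simple frame differencing stands in for the paper's form- and motion-based attention, a stub classify_percept function stands in for the multi-scale convolutional network back end, and CLASSES and all function names are hypothetical.

```python
# Minimal sketch of a two-stage "attend, then classify" pipeline. Frame
# differencing is a stand-in for the paper's form/motion-based attention,
# and classify_percept is a placeholder for the multi-scale CNN back end.
import cv2
import numpy as np

CLASSES = ["person", "car", "truck", "background"]  # illustrative categories

def attention_front_end(prev_gray, curr_gray, thresh=25, min_area=400):
    """Propose candidate object regions from motion saliency (frame differencing)."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

def classify_percept(chip):
    """Placeholder back end: a real system would run a multi-scale CNN here."""
    return "person", 0.5

def process_pair(prev_frame, curr_frame):
    """Run attention on a frame pair, then classify only the detected percepts."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    detections = []
    for (x, y, w, h) in attention_front_end(prev_gray, curr_gray):
        label, score = classify_percept(curr_frame[y:y + h, x:x + w])
        detections.append(((x, y, w, h), label, score))
    return detections
```

The structural point matches the abstract: only the regions proposed by the front end ever reach the more expensive classification stage.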


International Symposium on Visual Computing | 2011

A neuromorphic approach to object detection and recognition in airborne videos with stabilization

Yang Chen; Deepak Khosla; David J. Huber; Kyungnam Kim; Shinko Y. Cheng

Research has shown that applying an attention algorithm to the front end of an object recognition system can provide a boost in performance over extracting regions from an image in an unguided manner. However, when video imagery is taken from a moving platform, attention algorithms such as saliency can lose their potency. In this paper, we show that this loss occurs because the motion channels in the saliency algorithm cannot distinguish object motion from motion caused by platform movement, and that an object recognition system for such videos can be improved by applying image stabilization together with saliency. We apply this approach to airborne video samples from the DARPA VIVID dataset and demonstrate that the combination of stabilization and saliency significantly improves object recognition performance for both stationary and moving objects.
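
The stabilization idea can be sketched as follows, assuming a simple feature-based affine registration with OpenCV; the paper's actual stabilization and saliency algorithms are not reproduced here, and stabilize_to_previous and motion_saliency are illustrative names.

```python
# Hedged sketch: compensate platform motion before computing motion saliency,
# using tracked corner features and a RANSAC-fitted partial affine transform.
import cv2
import numpy as np

def stabilize_to_previous(prev_gray, curr_gray):
    """Warp curr_gray into prev_gray's frame of reference."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                       qualityLevel=0.01, minDistance=10)
    if pts_prev is None:
        return curr_gray
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)
    good_prev = pts_prev[status.flatten() == 1]
    good_curr = pts_curr[status.flatten() == 1]
    if len(good_prev) < 3:
        return curr_gray
    M, _ = cv2.estimateAffinePartial2D(good_curr, good_prev, method=cv2.RANSAC)
    if M is None:
        return curr_gray
    h, w = prev_gray.shape
    return cv2.warpAffine(curr_gray, M, (w, h))

def motion_saliency(prev_gray, curr_gray):
    """Residual motion after stabilization highlights independently moving objects."""
    stabilized = stabilize_to_previous(prev_gray, curr_gray)
    return cv2.absdiff(prev_gray, stabilized)
```

After registration, camera-induced motion largely cancels out, so the residual difference image reflects object motion, which is what the saliency motion channels need.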


Proceedings of SPIE | 2013

Real-time low-power neuromorphic hardware for autonomous object recognition

Deepak Khosla; Yang Chen; David J. Huber; Darrel J. Van Buer; Kyungnam Kim; Shinko Y. Cheng

Unmanned platforms have a ubiquitous presence in surveillance and reconnaissance operations. As the resolution and fidelity of their video sensors increase, so do the bandwidth required to deliver the data to the analyst and the subsequent analyst workload needed to interpret it. This creates a growing need to perform video processing on board the sensor platform, transmitting only critical information to analysts and thereby reducing both the data bandwidth requirements and the analyst workload. In this paper, we present a system for object recognition in video that employs embedded hardware and CPUs and can be implemented on board an autonomous platform to provide real-time information extraction. Called NEOVUS (NEurOmorphic Understanding of Scenes), our system draws inspiration from models of mammalian visual processing and is implemented in state-of-the-art COTS hardware to achieve low size, weight and power while maintaining real-time processing at reasonable cost. We use visual attention methods based on motion and form to detect stationary and moving objects from a moving platform, and multi-scale convolutional neural networks for classification, which have been mapped to FPGA hardware. Evaluation shows that our system achieves real-time speeds of thirty frames per second on videos of up to five-megapixel resolution. It also shows a three-to-four order-of-magnitude reduction in power compared to state-of-the-art computer vision algorithms while reducing the communications bandwidth required for evaluation.
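
The multi-scale classification idea can be sketched as an image-pyramid classifier: one small network applied at several scales with its scores averaged. The architecture, scales, and names (SmallConvNet, multiscale_logits) below are assumptions for illustration and are not the NEOVUS network or its FPGA mapping.

```python
# Hedged sketch of multi-scale convolutional classification: the same small
# network processes an image pyramid and the logits are averaged.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallConvNet(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)  # pool so any input size works
        return self.head(x)

def multiscale_logits(net, chip, scales=(1.0, 0.75, 0.5)):
    """Classify one detected image chip at several scales and average the logits."""
    outputs = []
    for s in scales:
        size = (max(8, int(chip.shape[-2] * s)), max(8, int(chip.shape[-1] * s)))
        scaled = F.interpolate(chip, size=size, mode="bilinear", align_corners=False)
        outputs.append(net(scaled))
    return torch.stack(outputs).mean(dim=0)

if __name__ == "__main__":
    net = SmallConvNet()
    chip = torch.rand(1, 3, 64, 64)            # one detected object percept
    print(multiscale_logits(net, chip).shape)  # torch.Size([1, 4])
```

Because the same fixed convolutional structure is reused at every scale, this style of classifier maps naturally onto fixed-function hardware such as FPGAs, which is the design motivation the abstract describes.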


International Symposium on Visual Computing | 2011

Optimal multiclass classifier threshold estimation with particle swarm optimization for visual object recognition

Shinko Y. Cheng; Yang Chen; Deepak Khosla; Kyungnam Kim

We present a novel method for maximizing multiclass classifier performance by tuning the thresholds of the constituent pairwise binary classifiers with Particle Swarm Optimization (PSO). This post-processing step improves classification performance in multiclass visual object detection by maximizing the area under the ROC curve or selected operating points on the ROC curve. We argue that the precision-recall and confusion-matrix measures commonly used to evaluate multiclass visual object detection algorithms are inadequate compared to the multiclass ROC when the recognition algorithm is applied to surveillance, where objects remain in view for multiple consecutive frames and background instances exist in far greater numbers than target instances. We demonstrate the method's efficacy on a visual object detection problem with a 4-class classifier; more generally, the PSO threshold-tuning method can be applied to any pairwise multiclass classifier using any computable performance metric.
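
A minimal sketch of the threshold-tuning step is given below, assuming a standard global-best PSO over a per-class threshold vector and a user-supplied scalar metric. This simplifies the paper's setup (pairwise binary classifier thresholds) to a per-class threshold vector on a score matrix; the hyperparameters, the toy mean-recall metric, and the function names are illustrative rather than the paper's implementation.

```python
# Hedged sketch: global-best particle swarm optimization over a vector of
# per-class decision thresholds, maximizing any computable scalar metric.
import numpy as np

def pso_tune_thresholds(metric_fn, num_classes, num_particles=30, iters=100,
                        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Maximize metric_fn(thresholds) over per-class thresholds in [0, 1]."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 1.0, size=(num_particles, num_classes))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([metric_fn(p) for p in pos])
    gbest = pbest[np.argmax(pbest_val)].copy()
    gbest_val = pbest_val.max()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        vals = np.array([metric_fn(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if vals.max() > gbest_val:
            gbest, gbest_val = pos[vals.argmax()].copy(), vals.max()
    return gbest, gbest_val

if __name__ == "__main__":
    # Toy example: random 4-class scores and labels; the metric is mean
    # per-class recall after thresholding each class's score column.
    rng = np.random.default_rng(1)
    scores = rng.random((200, 4))
    labels = rng.integers(0, 4, size=200)

    def mean_recall(thresholds):
        accept = (scores >= thresholds).any(axis=1)
        preds = np.where(accept, scores.argmax(axis=1), -1)  # -1 = rejected
        recalls = [(preds[labels == c] == c).mean() for c in range(4)]
        return float(np.mean(recalls))

    best_t, best_val = pso_tune_thresholds(mean_recall, num_classes=4)
    print("best thresholds:", np.round(best_t, 3), "metric:", round(best_val, 3))
```

Because PSO only needs metric evaluations, not gradients, the same loop can optimize ROC area, a chosen operating point, or any other computable performance measure, which is the flexibility the abstract emphasizes.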


Archive | 2011

Multi-Modal Sensor Fusion

Yuri Owechko; Shinko Y. Cheng; Swarup Medasani; Kyungnam Kim


Archive | 2014

Multi-object detection and recognition using exclusive non-maximum suppression (eNMS) and classification in cluttered scenes

Lei Zhang; Kyungnam Kim; Yang Chen; Deepak Khosla; Shinko Y. Cheng; Alexander L. Honda; Changsoo S. Jeong


Archive | 2014

Robust static and moving object detection system via attentional mechanisms

Alexander L. Honda; Deepak Khosla; Yang Chen; Kyungnam Kim; Shinko Y. Cheng; Lei Zhang; Changsoo S. Jeong


Archive | 2013

Rapid object detection by combining structural information from image segmentation with bio-inspired attentional mechanisms

Lei Zhang; Shinko Y. Cheng; Yang Chen; Alexander L. Honda; Kyungnam Kim; Deepak Khosla; Changsoo S. Jeong


Archive | 2013

System for object detection and recognition in videos using stabilization

Yang Chen; Kyungnam Kim; Deepak Khosla; Shinko Y. Cheng


Archive | 2014

Object recognition consistency improvement using a pseudo-tracklet approach

Yang Chen; Changsoo S. Jeong; Deepak Khosla; Kyungnam Kim; Shinko Y. Cheng; Lei Zhang; Alexander L. Honda
