Publications

Featured research published by Shih-Shinh Huang.


Intelligent Robots and Systems | 2005

Driver assistance system for lane detection and vehicle recognition with night vision

Chun-Che Wang; Shih-Shinh Huang; Li-Chen Fu

The objective of this research is to develop a vision-based driver assistance system to enhance driver safety at nighttime. The proposed system performs both lane detection and vehicle recognition. In lane detection, three features of lane markers, namely brightness, slenderness, and proximity, are applied to detect the positions of lane markers in the image. Vehicle recognition, on the other hand, is achieved by using an evident feature that is extracted through four steps: taillight standing-out, adaptive thresholding, centroid detection, and taillight pairing. In addition, an automatic method is provided to calculate the tilt and pan of the camera from the position of the vanishing point, which is detected in the image by applying Canny edge detection, the Hough transform, major straight-line extraction, and vanishing-point estimation. Experimental results on thousands of images demonstrate the effectiveness of the proposed approach at nighttime: the lane detection rate is nearly 99%, and the vehicle recognition rate is about 91%. Furthermore, the system processes images in near real time.
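The last step of the camera-calibration procedure described above, recovering tilt and pan from the vanishing point, can be sketched under a pinhole camera model. This is a minimal illustration, not the paper's implementation; the function name, focal lengths, and principal point are assumptions:

```python
import math

def tilt_pan_from_vanishing_point(u, v, fx, fy, cx, cy):
    """Estimate camera tilt and pan (radians) from the vanishing point
    (u, v) of the lane direction, given focal lengths (fx, fy) and the
    principal point (cx, cy) of a pinhole camera.

    A vanishing point above/below the principal point implies tilt;
    left/right of it implies pan."""
    tilt = math.atan2(cy - v, fy)   # positive tilt: camera pitched up
    pan = math.atan2(u - cx, fx)    # positive pan: camera yawed right
    return tilt, pan

# Vanishing point at the principal point -> camera aligned with the road
tilt, pan = tilt_pan_from_vanishing_point(320, 240, fx=800, fy=800, cx=320, cy=240)
```

With the vanishing point at the image center, both angles are zero; any horizontal offset maps directly to a pan angle via the focal length.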


IEEE Transactions on Vehicular Technology | 2009

A Portable Vision-Based Real-Time Lane Departure Warning System: Day and Night

Pei-Yung Hsiao; Chun-Wei Yeh; Shih-Shinh Huang; Li-Chen Fu

Lane departure warning systems (LDWS) are an important element in improving driving safety. In this paper, we propose an embedded advanced RISC machine (ARM)-based real-time LDWS. On the software side, an improved lane detection algorithm based on peak finding for feature extraction is used to detect lane boundaries. A spatiotemporal mechanism then uses the detected lane boundaries to generate appropriate warning signals. On the hardware side, a 1-D Gaussian smoother and a global edge detector are adopted to reduce noise effects in the images. By using the developed data transfer channel (DTC) in the reconfigurable field-programmable gate array (FPGA) module, the data transfer rate among the complementary metal-oxide-semiconductor (CMOS) imager module, liquid-crystal display (LCD) module, and central processing unit (CPU) bus is about 25 frames/s for an image size of 256 × 256. In addition, the proposed departure warning algorithm, based on spatial and temporal mechanisms, is successfully executed on the presented ARM-based platform. Experiments show a lane detection rate of 99.57% during the day and 98.88% at night in a highway environment. The proposed departure mechanisms reliably generate warning signals and avoid most false warnings.
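The 1-D Gaussian smoothing and peak-finding steps above can be sketched on a single image scan line. This is an illustrative reconstruction, not the paper's code; the kernel size, threshold, and function names are assumptions:

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(signal, sigma=1.0, radius=2):
    """1-D Gaussian smoothing with edge replication."""
    k = gaussian_kernel(sigma, radius)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - radius, 0), n - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def find_peaks(signal, threshold):
    """Local maxima above a threshold -- candidate lane-marker columns."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > threshold
            and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]]

# A scan line with two bright lane markings on a dark road surface
row = [10] * 20 + [200] * 3 + [10] * 30 + [200] * 3 + [10] * 20
peaks = find_peaks(smooth(row), threshold=100)
```

Each detected peak marks the column of a bright, narrow marking; collecting peaks over successive scan lines yields the lane-boundary features the algorithm fits.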


international conference on robotics and automation | 2004

On-board vision system for lane recognition and front-vehicle detection to enhance driver's awareness

Shih-Shinh Huang; Chung-Jen Chen; Pei-Yung Hsiao; Li-Chen Fu

The objective of this research is to develop a driving assistance system that can locate the lane boundaries and detect the presence of a front vehicle. By providing a warning mechanism, the system can protect drivers from danger. In lane recognition, a Gaussian filter, a peak-finding procedure, and a line-segment grouping procedure are used to detect lane markers successfully and efficiently. Vehicle detection, on the other hand, is achieved by using three features: underneath, vertical edge, and the symmetry property. The proposed system is shown to work well under various roadway conditions, with a vehicle detection rate higher than 97%. Moreover, the computational cost is low and the system responds in near real time. The results of this work can thus improve traffic safety for on-road driving.
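The symmetry cue used for vehicle verification can be sketched as a left-right reflection score over a candidate intensity patch. A minimal sketch, with an assumed scoring formula; the paper's exact measure may differ:

```python
def symmetry_score(patch):
    """Symmetry of a 2-D intensity patch about its vertical center axis.
    Returns a value in [0, 1]; 1 means perfectly left-right symmetric.
    Vehicle rears tend to score higher than background clutter."""
    rows, cols = len(patch), len(patch[0])
    diff, total = 0.0, 0.0
    for r in range(rows):
        for c in range(cols // 2):
            a, b = patch[r][c], patch[r][cols - 1 - c]
            diff += abs(a - b)
            total += abs(a) + abs(b)
    return 1.0 - diff / total if total else 1.0

# A perfectly symmetric "vehicle rear" patch vs. an asymmetric one
symmetric = [[10, 50, 200, 50, 10]] * 3
asymmetric = [[10, 50, 200, 90, 130]] * 3
```

In a full detector this score would be combined with the underneath-shadow and vertical-edge cues before a candidate is accepted.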


international conference on intelligent transportation systems | 2007

Vehicle Detection under Various Lighting Conditions by Incorporating Particle Filter

Yi-Ming Chan; Shih-Shinh Huang; Li-Chen Fu; Pei-Yung Hsiao

We propose an automatic system to detect preceding vehicles on the highway under various lighting and weather conditions based on computer vision techniques. To adapt to the different appearances of vehicles at daytime and nighttime, four cues are fused for preceding-vehicle detection: underneath, vertical edge, symmetry, and taillight. A particle filter operating on these four cues, through the processes of initial sampling, propagation, observation, cue fusion, and evaluation, accurately estimates the vehicle distribution. The proposed system can thus detect and track preceding vehicles robustly under different lighting conditions. Unlike a standard particle filter, which focuses on a single target distribution in a discrete state space, we detect multiple vehicles through a high-level tracking strategy based on a clustering technique called the basic sequential algorithmic scheme (BSAS). Finally, experimental results on several videos from different scenes demonstrate the effectiveness of the proposed system.
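The cue-fusion and resampling stages of such a particle filter can be sketched as follows. This is a generic illustration under assumed cue functions and a generic systematic-resampling scheme, not the paper's implementation:

```python
import random

def fuse_cues(particle, cue_likelihoods):
    """Fuse independent cue likelihoods (e.g. underneath, vertical edge,
    symmetry, taillight) into one particle weight by taking their product."""
    w = 1.0
    for cue in cue_likelihoods:
        w *= cue(particle)
    return w

def resample(particles, weights, rng):
    """Systematic resampling: particles with high fused weight survive."""
    n = len(particles)
    total = sum(weights)
    cum, acc = [], 0.0
    for w in weights:
        acc += w / total
        cum.append(acc)
    start = rng.random() / n
    out, i = [], 0
    for k in range(n):
        u = start + k / n
        while cum[i] < u:
            i += 1
        out.append(particles[i])
    return out

rng = random.Random(0)
particles = [(x, 50) for x in range(0, 100, 10)]    # (x, y) vehicle hypotheses
cues = [lambda p: 1.0 if p[0] == 40 else 0.01]      # one dominant toy cue
weights = [fuse_cues(p, cues) for p in particles]
survivors = resample(particles, weights, rng)
```

After resampling, the particle cloud concentrates on the hypothesis the cues agree on; clustering the surviving particles (as with BSAS in the paper) then separates multiple vehicles.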


international conference on green circuits and systems | 2010

Image-based vehicle tracking and classification on the highway

Jin-Cyuan Lai; Shih-Shinh Huang; Chien-Cheng Tseng

In recent years, the development of automatic traffic surveillance systems has received great attention in academic and industrial research. Based on computer vision techniques, the purpose of this work is to construct a traffic surveillance system for the highway that estimates traffic parameters such as vehicle counts and classes. The proposed system consists of three main steps: vehicle region extraction, vehicle tracking, and classification. Background subtraction is first used to extract foreground regions from the highway scene; geometric properties are applied to remove false regions, and a shadow removal algorithm yields more accurate segmentation results. After vehicle detection, a graph-based tracking method builds the correspondence between vehicles detected at different time instants. Finally, we introduce two measures, aspect ratio and compactness, to classify vehicles. In the experiments, three videos with different lighting conditions are used to demonstrate the effectiveness of the proposed system.
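The two classification measures above can be sketched for a detected blob. The formulas are standard shape descriptors; the class thresholds below are illustrative assumptions, not values from the paper:

```python
import math

def shape_measures(width, height, area, perimeter):
    """Aspect ratio and compactness of a detected vehicle blob.
    Compactness is 4*pi*area / perimeter**2, which is 1.0 for a circle
    and smaller for elongated or ragged shapes."""
    aspect_ratio = width / height
    compactness = 4.0 * math.pi * area / perimeter ** 2
    return aspect_ratio, compactness

def classify(width, height, area, perimeter):
    """Toy rule: long, low blobs are labeled 'bus/truck', otherwise 'car'.
    The 2.5 threshold is hypothetical."""
    aspect_ratio, _ = shape_measures(width, height, area, perimeter)
    return "bus/truck" if aspect_ratio > 2.5 else "car"

# A 40x20 car-like bounding box vs. a 90x25 bus-like one (rectangular blobs)
car_label = classify(40, 20, area=40 * 20, perimeter=2 * (40 + 20))
bus_label = classify(90, 25, area=90 * 25, perimeter=2 * (90 + 25))
```

In practice the thresholds would be tuned on labeled blobs from the three test videos.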


IEEE Transactions on Circuits and Systems for Video Technology | 2009

Region-Level Motion-Based Foreground Segmentation Under a Bayesian Network

Shih-Shinh Huang; Li-Chen Fu; Pei-Yung Hsiao

This paper presents a probabilistic approach for automatically segmenting foreground objects from a video sequence. To save computation time and remain robust to noise, a region detection algorithm incorporating edge information is first proposed to identify the regions of interest, whose spatial relationships are represented by a region adjacency graph. Next, we exploit the motion of the foreground objects and hence the temporal coherence of the detected regions. The foreground segmentation problem is then formulated as follows: given two consecutive image frames and the previously obtained segmentation result, we simultaneously estimate the motion vector field and the foreground segmentation mask in a mutually supporting manner by maximizing their conditional joint probability density function. To represent this density in a compact form, a Bayesian network is derived to model the interdependency of the two elements. Experimental results on several video sequences demonstrate the effectiveness of the proposed approach.
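The region adjacency graph mentioned above can be built directly from a per-pixel region label map. A minimal sketch under 4-connectivity; the representation (adjacency sets) is an assumption:

```python
from collections import defaultdict

def region_adjacency_graph(labels):
    """Build a region adjacency graph from a per-pixel region label map.
    Nodes are region ids; an edge joins two regions that share a
    4-connected boundary. Returned as an adjacency-set dictionary."""
    graph = defaultdict(set)
    rows, cols = len(labels), len(labels[0])
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):      # right and down neighbors
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols and labels[r][c] != labels[rr][cc]:
                    graph[labels[r][c]].add(labels[rr][cc])
                    graph[labels[rr][cc]].add(labels[r][c])
    return dict(graph)

# Three regions: 0 and 1 side by side, 2 spanning the bottom row
labels = [[0, 0, 1, 1],
          [0, 0, 1, 1],
          [2, 2, 2, 2]]
rag = region_adjacency_graph(labels)
```

Reasoning over such a graph rather than over raw pixels is what keeps the inference in the paper tractable.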


international conference on robotics and automation | 2005

A Region-Level Motion-Based Background Modeling and Subtraction Using MRFs

Shih-Shinh Huang; Li-Chen Fu; Pei-Yung Hsiao

This paper presents a new approach to automatic segmentation of foreground objects from an image sequence by integrating background subtraction and motion-based segmentation. First, a background model is built to represent both the color and the motion of the background scene, and an initial partition of each image is obtained from temporal and spatial information. Next, we formulate the classification problem as graph labeling over a region adjacency graph (RAG) within a Markov random field (MRF) framework. The Bhattacharyya distance, which estimates the similarity between the color and motion distributions of the background model and of the currently obtained regions, is used to model the likelihood energies. An object tracking strategy that finds the correspondence between regions at different time instants maintains the temporal coherence of the segmentation; for spatial coherence, the length of the common boundary between two regions is taken into account. Both forms of coherence are incorporated into the prior energy to maintain the continuity of the segmentation. Finally, a labeling is obtained by maximizing the a posteriori probability of the MRF. Under this formulation, the two frameworks are integrated in an elegant way that makes foreground detection more accurate. Experimental results on two image sequences, a hall-monitoring scene and our e-home demo site, demonstrate the effectiveness of the proposed approach.
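The Bhattacharyya distance used in the likelihood energies can be sketched for two normalized histograms. This is the standard definition; its exact role in the paper's energy function is only summarized by the abstract:

```python
import math

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two normalized histograms.
    0 for identical distributions; larger means more dissimilar, so it
    can serve directly as a likelihood energy term."""
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    bc = min(bc, 1.0)                    # guard against rounding above 1
    return -math.log(bc) if bc > 0 else float("inf")

# Toy 3-bin color histograms: a background region vs. a foreground region
background = [0.5, 0.3, 0.2]
same = [0.5, 0.3, 0.2]
foreground = [0.1, 0.1, 0.8]
d_same = bhattacharyya_distance(background, same)
d_diff = bhattacharyya_distance(background, foreground)
```

A region whose color and motion histograms sit far (in this distance) from the background model gets a high background-likelihood energy and is pushed toward the foreground label.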


international conference on intelligent transportation systems | 2011

Near-infrared based nighttime pedestrian detection by combining multiple features

Yu-Chun Lin; Yi-Ming Chan; Luo-Chieh Chuang; Li-Chen Fu; Shih-Shinh Huang; Pei-Yung Hsiao; Min-Fang Luo

Pedestrian detection is an important problem in computer vision, and it is even more valuable at nighttime. In this paper, we address the detection of pedestrians in video streams from a moving camera at night. Most nighttime human detection approaches use only a single feature extracted from the images, but features that are effective in daytime may suffer from textureless regions, high contrast, and low light at night. To deal with these issues, we first segment the foreground using the proposed Smart Region Detection approach to generate candidates. We then design a nighttime pedestrian detection system based on AdaBoost and support vector machine (SVM) classifiers with contour and histogram of oriented gradients (HOG) features to recognize pedestrians among those candidates. Combining different types of complementary features improves detection performance. Results show that our pedestrian detection system is promising in nighttime environments.
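The HOG feature mentioned above can be sketched for a single cell: central-difference gradients, then magnitude-weighted voting into unsigned orientation bins. A minimal sketch of the standard formulation, not the paper's exact parameterization:

```python
import math

def hog_cell(cell):
    """Unsigned 9-bin histogram of oriented gradients for one cell,
    using central differences and magnitude-weighted voting
    (20-degree bins over [0, 180))."""
    rows, cols = len(cell), len(cell[0])
    hist = [0.0] * 9
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx = cell[r][c + 1] - cell[r][c - 1]
            gy = cell[r + 1][c] - cell[r - 1][c]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0   # unsigned
            hist[int(ang // 20) % 9] += mag
    return hist

# A vertical edge: the gradient is horizontal, so all energy lands in bin 0
cell = [[0, 0, 100, 100]] * 4
hist = hog_cell(cell)
```

Concatenating such cell histograms over a detection window (with block normalization, omitted here) gives the descriptor fed to the SVM.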


international conference on robotics and automation | 2009

Active-learning assisted self-reconfigurable activity recognition in a dynamic environment

Yu-Chen Ho; Ching-Hu Lu; I-Han Chen; Shih-Shinh Huang; Ching-Yao Wang; Li-Chen Fu

It is desirable to know a resident's ongoing activities before a robot or a smart system can provide attentive services that meet real human needs. This work addresses the problem of learning and recognizing human daily activities in a dynamic environment. Most available approaches learn activity models offline and recognize activities of interest in real time. However, the activity models become outdated when human behaviors or device deployment change, and recollecting data to retrain them is tedious and error-prone. It is therefore important to adapt the learned activity models to such changes without much human supervision. In this work, we present a self-reconfigurable approach to activity recognition that reconfigures previously learned activity models and infers multiple activities in a dynamic environment, while minimizing the human effort spent relabeling training data through active-learning assistance.
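One common way to pursue "minimal human effort in relabeling" is uncertainty sampling: ask the user to label only the samples the current model is least sure about. The sketch below illustrates that generic idea with entropy-based selection; it is an assumption about the active-learning strategy, not the paper's specific method:

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a predicted activity distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def query_most_uncertain(unlabeled):
    """Active-learning query: pick the sample whose predicted activity
    distribution has the highest entropy and request its true label."""
    return max(unlabeled, key=lambda item: entropy(item[1]))

# (sample id, predicted activity distribution) pairs; ids are hypothetical
unlabeled = [
    ("sensor_event_1", [0.95, 0.03, 0.02]),   # model is confident
    ("sensor_event_2", [0.40, 0.35, 0.25]),   # model is unsure
    ("sensor_event_3", [0.80, 0.15, 0.05]),
]
query = query_most_uncertain(unlabeled)
```

Only the queried samples need a human label; the rest of the reconfiguration proceeds automatically.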


International Conference on Pattern Recognition | 2008

Monocular multi-human detection using Augmented Histograms of Oriented Gradients

Cheng-Hsiung Chuang; Shih-Shinh Huang; Li-Chen Fu; Pei-Yung Hsiao

We introduce an augmented histograms of oriented gradients (AHOG) feature for human detection from a non-static camera. We increase the discriminative power of the original histograms of oriented gradients (HOG) feature by adding human shape properties such as contour distances, symmetry, and gradient density. Based on the bilateral structure of the human shape, we impose the symmetry property on HOG features by computing the similarity between each feature and its symmetric pair and using it to weight the HOG features. The resulting descriptor captures human features much better than the original, especially when humans are walking across the view. We also augment the features with gradient density to mitigate the influence of repetitive backgrounds. In the experiments, our method demonstrates reliable performance across different views of the targets.
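The symmetry weighting described above can be sketched as follows: compare a cell's HOG histogram with that of its horizontally mirrored counterpart, and scale the cell by the similarity. The histogram-intersection similarity used here is an assumption; a full implementation would also remap orientation bins under mirroring:

```python
def symmetry_weight(hist, mirror_hist):
    """Similarity between a HOG cell histogram and the histogram of its
    horizontally mirrored counterpart (histogram intersection, in [0, 1]).
    Symmetric structures such as human torsos keep their weight, while
    asymmetric background clutter is suppressed."""
    inter = sum(min(a, b) for a, b in zip(hist, mirror_hist))
    norm = max(sum(hist), sum(mirror_hist))
    return inter / norm if norm else 1.0

def weight_cell(hist, mirror_hist):
    """Scale a cell's HOG histogram by its symmetry weight."""
    w = symmetry_weight(hist, mirror_hist)
    return [w * v for v in hist]

left = [4.0, 1.0, 0.0, 1.0]
right_sym = [4.0, 1.0, 0.0, 1.0]    # mirrored pair matches -> full weight
right_asym = [0.0, 0.0, 6.0, 0.0]   # mismatch -> weight near zero
```

Cells without a matching symmetric pair contribute little, which is exactly how the augmented feature down-weights repetitive or one-sided background structure.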

Collaboration

Top co-authors of Shih-Shinh Huang:

Pei-Yung Hsiao (National University of Kaohsiung)
Li-Chen Fu (National Taiwan University)
Ching-Hu Lu (National Taiwan University of Science and Technology)
Yi-Ming Chan (National Taiwan University)
Chan-Yu Huang (National Taiwan University)
Chien-Cheng Tseng (National Kaohsiung First University of Science and Technology)
Chun-Yuan Chen (National Kaohsiung First University of Science and Technology)
Shih-Han Ku (National Kaohsiung First University of Science and Technology)