Publication


Featured research published by Yi-Ming Chan.


IEEE Transactions on Intelligent Transportation Systems | 2012

Integrating Appearance and Edge Features for Sedan Vehicle Detection in the Blind-Spot Area

Bin-Feng Lin; Yi-Ming Chan; Li-Chen Fu; Pei-Yung Hsiao; Li-An Chuang; Shin-Shinh Huang; Min-Fang Lo

Changing lanes without information about the blind-spot area can be dangerous. We propose a vision-based vehicle detection system for a lane-changing assistance system to monitor potential sedan vehicles in the blind-spot area. To serve our purpose, we select adequate features, obtained directly from vehicle images, to detect possible vehicles in the blind-spot area. This is challenging due to the significant change in the view angle of a vehicle as its location varies throughout the blind-spot area. To cope with this problem, we propose a method that combines two kinds of part-based features related to the characteristics of the vehicle, and we build multiple models based on different viewpoints of a vehicle. The location information of each feature is incorporated to help construct the detector and to estimate the likely position of the vehicle. The experiments show that our system is reliable in detecting various sedan vehicles in the blind-spot area.
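
A minimal sketch of the multi-viewpoint scoring idea described above: each candidate window is scored under several viewpoint-specific part models (combining appearance and edge responses) and the best-scoring view decides the detection. All names (PartModel, the 0.5/0.5 fusion weights, the score layout) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class PartModel:
    """One viewpoint-specific model: learned weights over a fixed set of parts."""
    def __init__(self, name, weights):
        self.name = name              # e.g. "rear-left view" (illustrative)
        self.weights = np.asarray(weights)

    def score(self, appearance_scores, edge_scores):
        # Fuse appearance- and edge-based part responses; the position of each
        # part is implicit in the fixed ordering of the score arrays.
        combined = 0.5 * np.asarray(appearance_scores) + 0.5 * np.asarray(edge_scores)
        return float(np.dot(self.weights, combined))

def detect_vehicle(window_features, view_models, threshold=0.0):
    """Return (is_vehicle, best_view) for one candidate blind-spot window."""
    best_view, best_score = None, -np.inf
    for model in view_models:
        s = model.score(window_features["appearance"], window_features["edge"])
        if s > best_score:
            best_view, best_score = model.name, s
    return best_score > threshold, best_view
```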


International Conference on Intelligent Transportation Systems | 2007

Vehicle Detection under Various Lighting Conditions by Incorporating Particle Filter

Yi-Ming Chan; Shih-Shinh Huang; Li-Chen Fu; Pei-Yung Hsiao

We propose an automatic system to detect preceding vehicles on the highway under various lighting and weather conditions based on computer vision technologies. To adapt to the different characteristics of vehicle appearance in daytime and nighttime, four cues, namely underneath, vertical edge, symmetry, and taillight, are fused for preceding vehicle detection. A particle filter processes these cues through initial sampling, propagation, observation, cue fusion, and evaluation to accurately estimate the vehicle distribution. Thus, the proposed system can successfully detect and track preceding vehicles and remains robust to different lighting conditions. Unlike a standard particle filter, which focuses on a single target distribution in a discrete state space, we detect multiple vehicles with the particle filter through a high-level tracking strategy using a clustering technique called the basic sequential algorithmic scheme (BSAS). Finally, experimental results on several videos from different scenes are provided to demonstrate the effectiveness of the proposed system.
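
A minimal sketch of one particle-filter iteration with multi-cue fusion, under stated assumptions: the per-cue likelihood functions (underneath, vertical edge, symmetry, taillight) are placeholders, and the simple averaging fusion and random-walk motion model are illustrative choices, not the paper's exact design.

```python
import numpy as np

def particle_filter_step(particles, weights, cue_likelihoods, image, motion_std=3.0):
    """particles: (N, 2) array of candidate vehicle positions (u, v).
    cue_likelihoods: list of functions f(image, particle) -> likelihood in [0, 1]."""
    n = len(particles)
    # 1. Propagation: diffuse particles with a random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    # 2. Observation and cue fusion: average the four cue likelihoods per particle.
    for i in range(n):
        scores = [f(image, particles[i]) for f in cue_likelihoods]
        weights[i] = np.mean(scores)
    weights = weights / (weights.sum() + 1e-12)
    # 3. Resampling: draw particles proportionally to their fused weights.
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)
```

In the paper's setting, the resulting particle set would then be grouped with a BSAS-style clustering step so that each cluster can be tracked as a separate vehicle.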


international conference on intelligent transportation systems | 2011

Near-infrared based nighttime pedestrian detection by combining multiple features

Yu-Chun Lin; Yi-Ming Chan; Luo-Chieh Chuang; Li-Chen Fu; Shih-Shinh Huang; Pei-Yung Hsiao; Min-Fang Luo

Pedestrian detection is an important problem in computer vision, and it is even more valuable at nighttime. In this paper, we address the issue of detecting pedestrians in video streams from a moving camera at nighttime. Most nighttime human detection approaches use only a single feature extracted from images, and features that are effective in daytime environments may suffer from textureless regions, high contrast, and low light at night. To deal with these issues, we first segment the foreground with the proposed Smart Region Detection approach to generate candidates. We then design a nighttime pedestrian detection system based on AdaBoost and support vector machine (SVM) classifiers with contour and histogram of oriented gradients (HOG) features to effectively recognize pedestrians among those candidates. Combining different types of complementary features improves detection performance. Results show that our pedestrian detection system is promising in the nighttime environment.
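
One plausible reading of the classifier combination above is a two-stage arrangement: a cheap AdaBoost stage on contour features prunes candidates, then an SVM on HOG features verifies the survivors. The sketch below illustrates that reading with scikit-learn; the feature extractors and the cascade ordering are assumptions, not the paper's confirmed pipeline.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import LinearSVC

def train_two_stage(contour_feats, hog_feats, labels):
    """contour_feats, hog_feats: (N, D) arrays; labels: (N,) in {0, 1}."""
    stage1 = AdaBoostClassifier(n_estimators=100).fit(contour_feats, labels)
    stage2 = LinearSVC(C=1.0).fit(hog_feats, labels)
    return stage1, stage2

def classify_candidate(stage1, stage2, contour_vec, hog_vec):
    # A candidate is accepted only if both stages vote "pedestrian".
    if stage1.predict(contour_vec.reshape(1, -1))[0] != 1:
        return False
    return stage2.predict(hog_vec.reshape(1, -1))[0] == 1
```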


IEEE Transactions on Intelligent Transportation Systems | 2015

Near-Infrared-Based Nighttime Pedestrian Detection Using Grouped Part Models

Yi-Shu Lee; Yi-Ming Chan; Li-Chen Fu; Pei-Yung Hsiao

Pedestrian detection is an important issue in the field of intelligent transportation systems. Because pedestrians are not easily visible at nighttime, detecting them effectively is critically difficult for a driving-assistance vision system. When an infrared projector is used to enhance illumination contrast, objects in the nighttime environment reflect the projected infrared light. In some cases, however, the clothes on a pedestrian might absorb most of the infrared, causing the pedestrian to be partially invisible. To deal with this problem, a part-based nighttime pedestrian detection method is proposed for a moving vehicle equipped with a camera and a near-infrared lighting projector; it divides a pedestrian into parts. Due to the high computation load, selecting effective parts becomes imperative. By analyzing the spatial relationship between every pair of parts, the confidence of the detected parts can be enhanced even when some parts are occluded. At the last stage of the system, the pedestrian detection result is refined by a block-based segmentation method. The system is verified by experiments, and promising results are demonstrated.
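
A minimal sketch of the pairwise spatial-relationship idea: a part whose observed offset to a neighbouring part matches the offset statistics learned offline gets its confidence boosted, which helps when some parts are weakly visible. The Gaussian consistency term, the alpha mixing weight, and the data layout are illustrative assumptions.

```python
import numpy as np

def rescore_parts(part_scores, part_positions, pair_model, alpha=0.3):
    """part_scores: dict part_id -> detector score
    part_positions: dict part_id -> (x, y) detected position
    pair_model: dict (i, j) -> (mean_offset, std), learned offline from training data."""
    new_scores = dict(part_scores)
    for (i, j), (mean_off, std) in pair_model.items():
        if i in part_positions and j in part_positions:
            offset = np.subtract(part_positions[j], part_positions[i])
            # Gaussian consistency between the observed and the learned offset.
            consistency = np.exp(-np.sum((offset - mean_off) ** 2) / (2.0 * std ** 2))
            support = consistency * max(part_scores[i], part_scores[j])
            # A well-placed pair lends support to both of its parts.
            new_scores[i] += alpha * support
            new_scores[j] += alpha * support
    return new_scores
```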


International Conference on Intelligent Transportation Systems | 2012

Comparison of granules features for pedestrian detection

Yu-Fu Kao; Yi-Ming Chan; Li-Chen Fu; Pei-Yung Hsiao; Shin-Shinh Huang; Cheng-En Wu; Min-Fang Luo

Pedestrian detection is an important part of intelligent transportation systems. In the literature, the Histogram of Oriented Gradients (HOG) detector for pedestrian detection is known for its good performance, but false detections still appear in cases with flat areas or cluttered backgrounds. To deal with these problems, in this work we develop a new feature based on pairwise comparison computations, called Comparison of Granules (CoG). The idea of CoG is to encode the textural information of a local area, describing how pixel intensities are distributed within a region. Compared with HOG, the CoG feature is compact and efficient to compute. By incorporating this new feature, we propose a HOG-CoG detector which, in our validation experiments, achieves a 38% log-average miss rate in full-image evaluation and a 90% detection rate at 10⁻⁴ false positives per window on the INRIA Person Dataset. Another contribution of this work is a training scheme that can be applied to a huge database for training a detector; it reduces the number of hard samples during bootstrap training.
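
A minimal sketch of a Comparison-of-Granules style descriptor: average the image over small cells ("granules") and encode pairwise intensity comparisons between granules as a binary code. The granule size and the randomly sampled pairs are assumptions for illustration; in the paper the effective comparisons would be chosen by training.

```python
import numpy as np

def cog_feature(gray, granule=4, num_pairs=256, seed=0):
    """gray: 2-D uint8/float image patch. Returns a binary comparison code."""
    rng = np.random.default_rng(seed)          # fixed seed -> same pairs every call
    h, w = gray.shape
    # Mean intensity of each non-overlapping granule x granule cell.
    cells = gray[:h - h % granule, :w - w % granule].reshape(
        h // granule, granule, w // granule, granule).mean(axis=(1, 3)).ravel()
    # Sampled granule pairs (placeholder for a learned pair selection).
    idx_a = rng.integers(0, len(cells), num_pairs)
    idx_b = rng.integers(0, len(cells), num_pairs)
    # Binary code: is granule a brighter than granule b?
    return (cells[idx_a] > cells[idx_b]).astype(np.uint8)
```

Because the descriptor is a short binary vector built from simple averages and comparisons, it is much cheaper to compute and store than a full HOG block, which is the "compact and efficient" property claimed above.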


International Conference on Intelligent Transportation Systems | 2007

Driver Assistance System Using Integrated Information from Lane Geometry and Vehicle Direction

Chan-Yu Huang; Shih-Shinh Huang; Yi-Ming Chan; Yi-Hang Chiu; Li-Chen Fu; Pei-Yung Hsiao

This paper presents an approach to detecting multiple lanes and vehicles. Instead of assuming that lane detection and vehicle detection are independent processes, we integrate the two in a mutually supporting way to achieve more accurate results. In lane boundary detection, the lane boundary features are often affected by the edges and color of vehicles, while vehicle detection can be unreliable when non-vehicle objects have features similar to vehicles. We therefore use the distance between the central position of the lane boundary and the hypothesized vehicle position to filter out non-vehicle objects, and we use the similarity between the lane boundary direction and the hypothesized moving direction to select the optimal lane solution. By applying an iterative optimization algorithm, we obtain a sub-optimal solution for lane and vehicle detection, and the experimental results show that the error rate is successfully reduced from 32.6% to 2.7%.
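
A rough sketch of the two mutual-support checks described above: vehicle hypotheses far from the lane centre are discarded, and the lane hypothesis whose boundary direction best matches the vehicles' moving direction is kept. The thresholds, data layout, and dot-product similarity are assumptions for illustration only.

```python
import numpy as np

def filter_vehicle_hypotheses(vehicle_hyps, lane_center_x, max_lateral_dist=80):
    """Keep vehicle hypotheses whose centre lies close enough to the lane centre (pixels)."""
    return [v for v in vehicle_hyps
            if abs(v["center_x"] - lane_center_x) < max_lateral_dist]

def select_lane_hypothesis(lane_hyps, vehicle_direction):
    """Pick the lane whose boundary direction (unit 2-D vector) is most similar
    to the observed vehicle moving direction (unit 2-D vector)."""
    sims = [np.dot(l["direction"], vehicle_direction) for l in lane_hyps]
    return lane_hyps[int(np.argmax(sims))]
```

Alternating these two steps is one simple way to realize the iterative optimization mentioned in the abstract: better lanes prune vehicle hypotheses, and the surviving vehicles in turn vote for the lane.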


International Conference on Intelligent Transportation Systems | 2014

Integrating Appearance and Edge Features for On-Road Bicycle and Motorcycle Detection in the Nighttime

Han-Hsuan Chen; Chun-Cheng Lin; Wei-Yu Wu; Yi-Ming Chan; Li-Chen Fu; Pei-Yung Hsiao

It is critical to detect bicycles and motorcycles on the road, because collisions between automobiles and these light vehicles have become a major cause of on-road accidents, especially at nighttime. Therefore, this paper proposes a vision-based nighttime bicycle and motorcycle detection method that relies on a camera and near-infrared lighting mounted on a vehicle. Generally, foreground objects in front of the vehicle, rather than the far-away background, reflect the near-infrared lighting in nighttime environments. However, some components of bicycles and motorcycles absorb most of the infrared lighting, which makes them hard to recognize. To cope with this problem, the proposed detection method is part based and combines two kinds of features related to the characteristics of bicycles and motorcycles. In addition, the geometric relation among the parts and the object centroid is learned off-line. Due to the high computation load, the AdaBoost algorithm is used to select effective parts with better geometric information for detection. To validate the proposed method, several experiments are conducted, showing that the developed system is reliable in detecting bicycles and motorcycles at nighttime.
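
A minimal sketch of AdaBoost-style part selection, which the abstract uses to keep only the most discriminative parts: each round keeps the part whose weak decision has the lowest weighted error and re-weights the samples it misclassified. The binary part responses and the number of rounds are illustrative assumptions.

```python
import numpy as np

def select_parts(part_responses, labels, num_rounds=5):
    """part_responses: (num_samples, num_parts) binary predictions, one column per part.
    labels: (num_samples,) ground truth in {0, 1}.
    Returns a list of (part_index, alpha) pairs, strongest parts first."""
    n, p = part_responses.shape
    sample_w = np.full(n, 1.0 / n)
    chosen = []
    for _ in range(num_rounds):
        # Weighted error of every candidate part under the current sample weights.
        errors = np.array([np.sum(sample_w * (part_responses[:, j] != labels))
                           for j in range(p)])
        best = int(np.argmin(errors))
        eps = float(np.clip(errors[best], 1e-12, 1 - 1e-12))
        alpha = 0.5 * np.log((1.0 - eps) / eps)
        chosen.append((best, alpha))
        # Emphasize the samples this part got wrong before the next round.
        miss = (part_responses[:, best] != labels)
        sample_w *= np.exp(alpha * np.where(miss, 1.0, -1.0))
        sample_w /= sample_w.sum()
    return chosen
```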


International Conference on Intelligent Transportation Systems | 2010

Incorporating appearance and edge features for vehicle detection in the blind-spot area

Bin-Feng Lin; Yi-Ming Chan; Li-Chen Fu; Pei-Yung Hsiao; Li-An Chuang; Shin-Shinh Huang

Changing lanes without information about the adjacent lane in the blind-spot area is dangerous. We propose a vision-based lane-changing assistance system to monitor vehicles in the blind-spot area. So far, only a few results in the literature use features of the vehicle itself for this task, yet without such features it is hard to conclude with strong evidence that a vehicle is present in that area. We use image features obtained directly from vehicle images to detect possible vehicles in the area. To overcome the large appearance variation caused by the significant change in view angle while detecting vehicles in the blind-spot area, we propose a method that combines two kinds of part-based features. After building all the features from training images, we use the AdaBoost algorithm to choose the best features with better geometric information for detection. The experiments show that our system is reliable in detecting vehicles in the blind-spot area.


International Conference on Intelligent Transportation Systems | 2013

Combining multiple complementary features for pedestrian and motorbike detection

Cheng-En Wu; Yi-Ming Chan; Li-Chen Fu; Pei-Yung Hsiao; Shin-Shinh Huang; Han-Hsuan Chen; Pang-Ting Huang; Shao-Chung Hu

Pedestrian and motorbike detection are two important areas of on-road obstacle detection. Most state-of-the-art detectors are constructed with new features or new learning methods built on Histograms of Oriented Gradients (HOG) features. However, few studies analyze which features are complementary for this detection task. According to our study of pedestrians and motorbikes, there are three major properties: shape, texture, and self-similarity. We therefore design a Shape, Texture and Self-Similarity (STSS) feature for these properties. The features employed here are HOG, Local Oriented Pattern (LOP), Color Self-Similarity (CSS), and Texture Self-Similarity (TSS). The STSS detector, which combines shape, texture, and self-similarity features, achieves a 31% log-average miss rate and a 93% detection rate at 10⁻⁴ false positives per window on the INRIA Person Dataset. We also evaluate our detector on the Caltech Motorbike Dataset and the Caltech Pedestrian Dataset and find that it outperforms the HOG detector on these datasets. As a result, we show that these features complement each other and are useful for pedestrian and motorbike detection.
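
A minimal sketch of the feature-combination idea: concatenate the HOG, LOP, CSS, and TSS descriptors of each window into one vector and train a single linear SVM on it. The individual extractors are placeholders assumed to return 1-D numpy arrays; scikit-learn's LinearSVC and the C value stand in for whatever learner and tuning the paper actually used.

```python
import numpy as np
from sklearn.svm import LinearSVC

def stss_vector(window, extract_hog, extract_lop, extract_css, extract_tss):
    """Concatenate the four complementary descriptors of one candidate window."""
    return np.concatenate([extract_hog(window), extract_lop(window),
                           extract_css(window), extract_tss(window)])

def train_stss_detector(windows, labels, extractors):
    """extractors: tuple of the four feature functions, in HOG/LOP/CSS/TSS order."""
    X = np.stack([stss_vector(w, *extractors) for w in windows])
    return LinearSVC(C=0.01).fit(X, np.asarray(labels))
```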


International Conference on Intelligent Transportation Systems | 2009

Tracking and detection of lane and vehicle integrating lane and vehicle information using PDAF tracking model

Ssu-Ying Hung; Yi-Ming Chan; Bin-Feng Lin; Li-Chen Fu; Pei-Yung Hsiao; Shin-Shinh Huang

We propose a robust system for multi-vehicle and multi-lane detection that integrates lane and vehicle information. Most existing work detects lanes or vehicles separately; however, lane information and vehicle information can support each other to achieve more reliable results. We use a probabilistic data association filter (PDAF) to integrate the lane and vehicle information. In the probabilistic data association filter, the cumulative history of a target is kept in the data association probabilities, and target tracking improves the detection results through regions of interest. At the same time, a high-level traffic model combines the lane and vehicle information, so tracking and detection benefit each other through iterations. Experimental results show that our approach detects multiple vehicles and lanes reliably.
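
A simplified sketch of a probabilistic data association update for one tracked target, showing how several validated measurements are merged through association probabilities instead of picking a single nearest match. Gating, the clutter density, and the exact covariance update of a full PDAF are omitted; the p_detect handling below is a deliberate simplification.

```python
import numpy as np

def pda_update(x_pred, P_pred, H, R, measurements, p_detect=0.9):
    """x_pred: predicted state, P_pred: its covariance, H: measurement matrix,
    R: measurement noise covariance, measurements: non-empty list of 1-D arrays."""
    z_pred = H @ x_pred
    S = H @ P_pred @ H.T + R                       # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain
    # Association probabilities proportional to the Gaussian measurement likelihoods.
    likes = np.array([np.exp(-0.5 * (z - z_pred) @ np.linalg.inv(S) @ (z - z_pred))
                      for z in measurements])
    beta = p_detect * likes / (p_detect * likes.sum() + (1.0 - p_detect) + 1e-12)
    beta0 = 1.0 - beta.sum()                       # probability that no measurement matches
    # Combined innovation: every validated measurement contributes according to beta.
    nu = sum(b * (z - z_pred) for b, z in zip(beta, measurements))
    x_new = x_pred + K @ nu
    P_new = P_pred - (1.0 - beta0) * K @ S @ K.T   # simplified covariance update
    return x_new, P_new
```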

Collaboration


Dive into Yi-Ming Chan's collaborations.

Top Co-Authors

Li-Chen Fu (National Taiwan University)
Pei-Yung Hsiao (National University of Kaohsiung)
Shin-Shinh Huang (National Kaohsiung First University of Science and Technology)
Shih-Shinh Huang (National Kaohsiung First University of Science and Technology)
Bin-Feng Lin (National Taiwan University)
Cheng-En Wu (National Taiwan University)
Chan-Yu Huang (National Taiwan University)
Han-Hsuan Chen (National Taiwan University)
Li-An Chuang (National Taiwan University)
Pang-Ting Huang (National Taiwan University)