Lykele Hazelhoff
Eindhoven University of Technology
Publications
Featured research published by Lykele Hazelhoff.
Workshop on Applications of Computer Vision | 2012
Lykele Hazelhoff; Ivo Creusen
Accurate inventories of traffic signs are required for road maintenance and for increasing road safety. These inventories can be performed efficiently based on street-level panoramic images. However, this is a challenging problem, as the images are captured under a wide range of weather conditions; in addition, occlusions and sign deformations occur and many sign look-alike objects exist. Our approach is based on detecting the signs present in the panoramic images, both to derive a classification code and to combine multiple detections into an accurate position for each sign. It starts with detecting the signs present in each panoramic image. Then, all detections are classified to obtain the specific sign type, where false detections are also identified. Afterwards, detections from multiple images are combined to calculate the sign positions. The performance of this approach is extensively evaluated in a large geographical region, where over 85% of the 3,341 signs are automatically localized, with only 3.2% false detections. As nearly all missed signs are detected in at least a single image, only very limited manual interaction has to be supplied to safeguard the performance for highly accurate inventories.
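As a rough illustration of the multi-view positioning step described above, the sketch below intersects viewing rays from several GPS-positioned panoramas in a least-squares sense. The function name, the planar local coordinate frame and the ray parameterization are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def triangulate_sign(cam_positions, bearings):
    """Least-squares intersection of 2D viewing rays.

    cam_positions : (N, 2) camera positions in a local metric frame.
    bearings      : (N, 2) direction vectors from each camera towards the detected sign.
    Returns the point minimizing the summed squared distance to all rays.
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(np.asarray(cam_positions, float), np.asarray(bearings, float)):
        d = d / np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)   # projector onto the ray's normal space
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Two cameras 5 m apart, both observing a sign located near (4, 3).
print(triangulate_sign([(0, 0), (5, 0)], [(4, 3), (-1, 3)]))  # -> approximately [4. 3.]
```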
Proceedings of SPIE | 2012
Lykele Hazelhoff; Ivo Creusen; Dennis W. J. M. van de Wouw
Traffic sign inventories are important to governmental agencies, as they facilitate evaluation of traffic sign locations and are beneficial for road and sign maintenance. These inventories can be created (semi-)automatically based on street-level panoramic images. In these images, object detection is employed to detect the signs in each image, followed by a classification stage to retrieve the specific sign type. Classification of traffic signs is a complicated matter, since many sign types are very similar and differ only in minor details, a high number of different signs is involved, and multiple distortions occur, including variations in capturing conditions, occlusions, viewpoints and sign deformations. Therefore, we propose a method for robust classification of traffic signs, based on the Bag of Words approach for generic object classification. We extend the approach with a flexible, modular codebook that models the specific features of each sign type independently, in order to emphasize the inter-sign differences instead of the parts common to all sign types. Additionally, this allows us to model and label false detections. Furthermore, analysis of the classification output identifies the unreliable results. This classification system has been extensively tested on three different sign classes, covering 60 different sign types in total. These three data sets contain the sign detection results on street-level panoramic images, extracted from a country-wide database. The introduction of the modular codebook shows a significant improvement for all three sets, where the system is able to classify about 98% of the reliable results correctly.
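The modular-codebook idea can be pictured with a small sketch that fits one compact visual vocabulary per sign type and assigns a query to the class whose codebook explains its descriptors best. The scoring rule, codebook size and function names below are illustrative assumptions; the paper's actual formulation may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_modular_codebooks(descriptors_per_class, words_per_class=16):
    """Fit one small codebook (KMeans centroids) per sign type,
    instead of a single shared vocabulary for all types."""
    return {cls: KMeans(n_clusters=words_per_class, n_init=10, random_state=0)
                 .fit(np.vstack(desc)).cluster_centers_
            for cls, desc in descriptors_per_class.items()}

def classify(query_descriptors, codebooks):
    """Pick the class whose codebook fits the query descriptors best
    (smallest mean distance to the nearest class-specific visual word)."""
    q = np.asarray(query_descriptors, float)
    scores = {}
    for cls, words in codebooks.items():
        dist = np.linalg.norm(q[:, None, :] - words[None, :, :], axis=2)  # (n_desc, n_words)
        scores[cls] = dist.min(axis=1).mean()
    return min(scores, key=scores.get), scores
```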
Journal of Electronic Imaging | 2013
Ivo Creusen; Solmaz Javanbakhti; Marijn Loomans; Lykele Hazelhoff; Nadejda Roubtsova; Sveta Zinger
The use of contextual information can significantly aid scene understanding of surveillance video. Just detecting and tracking people does not provide sufficient information to detect situations that require operator attention. We propose a proof-of-concept system that uses several sources of contextual information to improve scene understanding in surveillance video. The focus is on two scenarios that represent common video surveillance situations: parking lot surveillance and crowd monitoring. In the first scenario, a pan-tilt-zoom (PTZ) camera tracking system is developed for parking lot surveillance. Context is provided by the traffic sign recognition system to localize regular and handicapped parking spot signs as well as license plates. The PTZ algorithm has the ability to selectively detect and track persons based on scene context. In the second scenario, a group analysis algorithm is introduced to detect groups of people. Contextual information is provided by traffic sign recognition and region labeling algorithms and exploited for behavior understanding. In both scenarios, decision engines are used to interpret and classify the output of the subsystems and, if necessary, raise operator alerts. We show that using context information enables the automated analysis of complicated scenarios that were previously not possible with conventional moving object classification techniques.
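A decision engine as mentioned above can be thought of as a small rule set over the outputs of the context subsystems. The track attributes, rule conditions and threshold below are purely hypothetical examples for illustration, not the rules used in the paper.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Track:
    track_id: int
    region_label: str           # from a region-labeling subsystem (illustrative label set)
    near_sign: Optional[str]    # from traffic-sign recognition, e.g. "handicapped" or None
    dwell_seconds: float

def decision_engine(tracks: List[Track], dwell_threshold: float = 60.0):
    """Toy rule set turning subsystem outputs into operator alerts."""
    alerts = []
    for t in tracks:
        if t.near_sign == "handicapped" and t.dwell_seconds > dwell_threshold:
            alerts.append(f"track {t.track_id}: prolonged activity at a handicapped parking spot")
        if t.region_label == "restricted":
            alerts.append(f"track {t.track_id}: person inside a restricted region")
    return alerts
```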
Proceedings of SPIE | 2011
Lykele Hazelhoff
This study aims at the robust automatic detection of buildings with a gable roof in varying rural areas from very-high-resolution aerial images. The originality of our approach resides in a custom-made design extracting key features close to modeling, such as roof ridges and gutters. In this way, we allow a large freedom in roof appearances. The proposed method is based on a combination of two hypotheses. First, it exploits the physical properties of gable roofs and detects straight line segments within non-vegetated and non-farmland areas as candidate roof ridges. Second, for each of these candidate roof ridges, the likely roof-gutter positions are estimated for both sides of the line segment, resulting in a set of possible roof configurations. These hypotheses are validated based on the analysis of size, shadow, color and edge information, where for each roof-ridge candidate the optimal configuration is selected. Roof configurations with unlikely properties are rejected and, afterwards, ridges with overlapping configurations are fused. Experiments conducted on a set of 200 images covering various rural regions, with a large variation in both building appearance and surroundings, show that the algorithm is able to detect 75% of the buildings with a precision of 69.4%. We consider this a reasonably good result, since the detection is fully unconstrained, numerous buildings were occluded by trees, and there is a significant appearance difference between the considered test images.
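The hypothesize-and-validate structure of the method could look roughly like the sketch below, where each candidate roof ridge is paired with possible two-sided gutter offsets and scored on size, shadow, color and edge cues. The cue weights, score threshold and function names are illustrative assumptions only.

```python
def score_roof_hypothesis(cues, weights=None):
    """Toy validation score combining per-hypothesis cue scores in [0, 1]
    (size, shadow, color, edge); the weights are illustrative, not the paper's."""
    weights = weights or {"size": 0.3, "shadow": 0.2, "color": 0.2, "edge": 0.3}
    return sum(w * cues[name] for name, w in weights.items())

def select_best_configuration(ridge, gutter_offsets, cue_fn, min_score=0.5):
    """For one candidate roof ridge, score every gutter configuration and keep
    the best one, or reject the ridge candidate entirely if all score too low."""
    scored = [(score_roof_hypothesis(cue_fn(ridge, off)), off) for off in gutter_offsets]
    best_score, best_off = max(scored, key=lambda item: item[0])
    return (ridge, best_off, best_score) if best_score >= min_score else None
```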
International Conference on Pattern Recognition | 2014
Lykele Hazelhoff; Ivo Creusen
This paper focuses on road sign classification for creating accurate and up-to-date inventories of traffic signs, which is important for road safety and maintenance. This is a challenging multi-class classification task, as a large number of different sign types exist which differ only in minor details. Moreover, changes in viewpoint, capturing conditions and partial occlusions result in large intra-class variations. Ideally, road sign classification systems should be robust against these variations while having an acceptable computational load. This paper presents a classification approach based on the popular Bag of Words (BOW) framework, which we optimize towards the best trade-off between performance and execution time. We analyze the performance aspects of PCA-based dimensionality reduction, soft and hard assignment for BOW codebook matching, and the codebook size, and we provide an efficient implementation scheme. Comparing these techniques allows for the selection of a fast but accurate BOW-based classification scheme. This BOW approach is then compared against structural classification, and we show that their combination outperforms both individual methods. This combination, exploiting both BOW and structural information, attains high classification scores (96.25% to 98%) on our challenging real-world datasets.
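The difference between hard and soft assignment for BOW codebook matching, one of the design choices analyzed above, can be sketched as follows; the Gaussian kernel and its width are common choices assumed here, not necessarily those used in the paper.

```python
import numpy as np

def encode_bow(descriptors, codebook, soft=False, sigma=0.1):
    """Encode local descriptors as a BOW histogram over the visual words in
    `codebook` (one word per row). Hard assignment counts only the nearest word;
    soft assignment spreads each descriptor over all words with Gaussian weights.
    The kernel width `sigma` is illustrative and depends on the descriptor scale."""
    desc = np.asarray(descriptors, float)
    words = np.asarray(codebook, float)
    dist = np.linalg.norm(desc[:, None, :] - words[None, :, :], axis=2)  # (n_desc, n_words)
    if soft:
        w = np.exp(-(dist ** 2) / (2.0 * sigma ** 2))
        w /= w.sum(axis=1, keepdims=True)
        hist = w.sum(axis=0)
    else:
        hist = np.bincount(dist.argmin(axis=1), minlength=len(words)).astype(float)
    return hist / hist.sum()
```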
International Conference on Image Processing | 2012
Ivo Creusen; Lykele Hazelhoff
This paper considers large-scale traffic sign detection on a dataset consisting of high-resolution street-level panoramic photographs. Traffic signs are automatically detected and classified with a set of state-of-the-art algorithms. We introduce a color transformation that extends a Histogram of Oriented Gradients (HOG) based detection algorithm to further improve the performance. This transformation uses a specific set of reference colors that aligns with traffic sign characteristics, and measures the distance of each pixel to these reference colors. This results in an improved consistency of the gradients at the outer edge of the traffic sign. In an experiment with 33,400 panoramic images, the number of misdetections decreased by 53.6% and 51.4% for red/blue circular signs, and by 19.6% and 28.4% for yellow speed bump signs, measured at a realistic detector operating point.
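The color transformation can be pictured as mapping each pixel to its distances from a small set of reference colors and computing HOG on the resulting channels. The reference colors below are rough guesses for illustration; the exact reference set used in the paper is not reproduced here.

```python
import numpy as np

# Illustrative reference colors (RGB) loosely matching traffic-sign colors;
# these values are assumptions, not the paper's reference set.
REFERENCE_COLORS = np.array([[200, 30, 30],    # sign red
                             [30, 60, 200],    # sign blue
                             [230, 200, 30]],  # sign yellow
                            dtype=float)

def color_distance_channels(image_rgb, refs=REFERENCE_COLORS):
    """Map an HxWx3 RGB image to one channel per reference color, where each
    channel holds the Euclidean distance of the pixel to that reference color.
    HOG can then be computed on these channels instead of on raw RGB or gray."""
    img = np.asarray(image_rgb, float)
    return np.linalg.norm(img[:, :, None, :] - refs[None, None, :, :], axis=3)  # (H, W, n_refs)
```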
International Conference on Image Processing | 2012
Lykele Hazelhoff; Ivo Creusen
Traffic sign inventories are created for road safety and maintenance based on street-level panoramic images. Due to the large capturing interval, large viewpoint deviations occur between the different captures of the same sign. These viewpoint variations complicate the classification procedure, which has to select the correct sign type out of a high number of nearly similar sign types, and typically result in misclassifications. This paper describes a novel approach for incorporating viewpoint information into the classification procedure, where the sign orientation is estimated based on dense matching. Afterwards, each sample is corrected to a frontal viewpoint and then classified. Finally, the sign type is obtained by weighted voting. Large-scale experiments including 2,224 traffic signs show that this approach reduces the misclassification rate by about 33% compared to the single-view case.
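The final weighted-voting step can be sketched as accumulating per-view classifier confidences per sign type and selecting the highest total. The confidence values and sign labels in the example are made up for illustration.

```python
from collections import defaultdict

def weighted_vote(per_view_results):
    """Combine per-view classification results into one sign type.
    `per_view_results` is a list of (predicted_type, confidence) tuples,
    one per viewpoint-corrected sample of the same physical sign."""
    scores = defaultdict(float)
    for sign_type, confidence in per_view_results:
        scores[sign_type] += confidence
    return max(scores, key=scores.get)

# e.g. three corrected views of the same physical sign (example labels only)
print(weighted_vote([("type-A", 0.9), ("type-B", 0.4), ("type-A", 0.7)]))  # -> "type-A"
```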
Electronic Imaging | 2015
Cheng Li; Ivo Creusen; Lykele Hazelhoff
Detection of road lane markings is attractive for practical applications such as advanced driver assistance systems and road maintenance. This paper proposes a system to detect and recognize road lane markings in panoramic images. The system can be divided into four stages. First, an inverse perspective mapping is applied to the original panoramic image to generate a top view of the road, in which the potential road markings are segmented based on their intensity difference with the surrounding pixels. Second, a feature vector of each potential road marking segment is extracted by calculating the Euclidean distance between the center and the boundary at regular angular steps. Third, the shape of each segment is classified using a Support Vector Machine (SVM). Finally, by modeling the lane markings, falsely detected segments from the earlier stages can be rejected based on their orientation and position relative to the lane markings. Our experiments show that the system is promising and is capable of recognizing 93%, 95% and 91% of striped line segments, blocks and arrows, respectively, as well as 94% of the lane markings.
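The second stage, the center-to-boundary distance feature, might be sketched as below; the angular binning, the max-per-bin choice and the normalization are assumptions for illustration rather than the paper's exact sampling scheme.

```python
import numpy as np

def radial_shape_descriptor(boundary_xy, n_angles=36):
    """Distance from the segment centroid to its boundary, sampled at regular
    angular steps; used here as a fixed-length shape feature for an SVM."""
    pts = np.asarray(boundary_xy, float)
    center = pts.mean(axis=0)
    rel = pts - center
    angles = np.arctan2(rel[:, 1], rel[:, 0])
    radii = np.linalg.norm(rel, axis=1)
    bins = ((angles + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    feature = np.zeros(n_angles)
    for b, r in zip(bins, radii):
        feature[b] = max(feature[b], r)        # farthest boundary point per angle bin
    return feature / (feature.max() + 1e-9)    # scale the feature for size invariance
```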
Proceedings of SPIE | 2012
Ivo Creusen; Lykele Hazelhoff
The availability of large-scale databases containing street-level panoramic images offers the possibility to perform semi-automatic surveying of real-world objects such as traffic signs. These inventories can be performed significantly more efficiently than with conventional methods. Governmental agencies are interested in these inventories for maintenance and safety reasons. This paper introduces a complete semi-automatic traffic sign inventory system, which consists of several components. First, a detection algorithm locates the 2D positions of the traffic signs in the panoramic images. Second, a classification algorithm is used to identify the traffic sign type. Third, the 3D position of each traffic sign is calculated using the GPS positions of the photographs. Finally, the results are listed in a table for quick inspection and are also visualized in a web browser.
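Computing a sign position from GPS-tagged photographs first requires a local metric frame around the recordings; a common equirectangular approximation is sketched below (the ray-intersection step itself resembles the earlier triangulation sketch). The constant and function name are illustrative, and the actual system may use a different geodetic conversion.

```python
import math

EARTH_RADIUS_M = 6_378_137.0  # WGS-84 equatorial radius

def latlon_to_local_xy(lat_deg, lon_deg, ref_lat_deg, ref_lon_deg):
    """Approximate conversion of GPS coordinates to local metric x/y around a
    reference point (equirectangular approximation, adequate over a few km).
    Multi-view positioning can then be performed in these local coordinates."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    ref_lat, ref_lon = math.radians(ref_lat_deg), math.radians(ref_lon_deg)
    x = (lon - ref_lon) * math.cos(ref_lat) * EARTH_RADIUS_M  # east, in meters
    y = (lat - ref_lat) * EARTH_RADIUS_M                      # north, in meters
    return x, y
```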
Workshop on Applications of Computer Vision | 2014
Lykele Hazelhoff; Ivo Creusen
Accurate and up-to-date inventories of lighting poles are of interest to energy companies, are beneficial for the transition to energy-efficient lighting and may contribute to more adequate street lighting, which potentially improves social security and reduces crime and vandalism during nighttime. This paper describes a system for automated surveying of lighting poles from street-level panoramic images. The system consists of two independent detectors, focusing on the detection of the pole itself and on the detection of a specific lighting fixture type. Both follow the same approach and start with detection of the feature of interest (pole or fixture) within the individual images, followed by a multi-view analysis to retrieve the real-world coordinates of the poles. Afterwards, the detection outputs of both algorithms are merged. Large-scale validations, covering about 135 km of road, show that over 91% of the lighting poles are found, while the precision remains above 50%. When applying this system in a semi-automated fashion, high-quality inventories can be created up to 5 times more efficiently compared to manually surveying all poles from the images.
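Merging the outputs of the pole detector and the fixture detector could be done with a simple nearest-neighbor association in world coordinates, as in the hypothetical sketch below; the merge radius and the averaging rule are assumptions, not the paper's actual fusion logic.

```python
import numpy as np

def merge_detections(pole_positions, fixture_positions, merge_radius_m=2.0):
    """Merge real-world positions from the pole detector and the fixture
    detector: positions closer than `merge_radius_m` are assumed to belong to
    the same physical lighting pole and are averaged."""
    merged, used = [], set()
    for p in (np.asarray(p, float) for p in pole_positions):
        match = None
        for j, f in enumerate(fixture_positions):
            if j not in used and np.linalg.norm(p - np.asarray(f, float)) <= merge_radius_m:
                match = j
                break
        if match is not None:
            used.add(match)
            merged.append((p + np.asarray(fixture_positions[match], float)) / 2)
        else:
            merged.append(p)
    # fixture detections with no nearby pole detection are kept as extra candidates
    merged += [np.asarray(f, float) for j, f in enumerate(fixture_positions) if j not in used]
    return merged
```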