Kristof Van Beeck
Katholieke Universiteit Leuven
Publications
Featured research published by Kristof Van Beeck.
Design Automation for Embedded Systems | 2012
Wim Meeus; Kristof Van Beeck; Toon Goedemé; Jan Meel; Dirk Stroobandt
High-level synthesis (HLS) is an increasingly popular approach in electronic design automation (EDA) that raises the abstraction level for designing digital circuits. With the growing complexity of embedded systems, these tools are particularly relevant in embedded systems design. In this paper, we present our evaluation of a broad selection of recent HLS tools in terms of capabilities, usability and quality of results. Even though HLS tools still lack some maturity, they are constantly improving, and industry is now starting to adopt them into its design flows.
computer vision and pattern recognition | 2013
Floris De Smedt; Kristof Van Beeck; Tinne Tuytelaars; Toon Goedemé
Object detection, and in particular pedestrian detection, is a challenging task due to the wide variety of appearances. The application domain is extremely broad, ranging from e.g. surveillance to automotive safety systems. However, many practical applications rely on stringent real-time processing speeds combined with high accuracy needs. These demands are contradictory, and usually a compromise must be made. In this paper we present a pedestrian detection framework which is extremely fast (500 detections per second) while still maintaining excellent accuracy. We achieve these results by combining our fast pedestrian detection algorithm (implemented as a hybrid CPU/GPU combination) with the exploitation of scene constraints (using a warping window approach and temporal information), which yields state-of-the-art detection accuracy. We present thorough evaluation results of our algorithm concerning both speed and accuracy on the challenging Caltech dataset. Furthermore, we present evaluation results on a very specific application showing the full potential of this warping window approach: detection of pedestrians in a truck's blind spot zone.
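The scene-constraint idea behind a warping-window approach can be illustrated with a minimal sketch: if the camera geometry fixes the expected pedestrian size per image row, the detector only needs one scale per row instead of a full multi-scale pyramid. The linear ground-plane model and all parameter values below are hypothetical placeholders, not the authors' actual calibration.

```python
def expected_height(y_row, a=0.55, b=-40.0):
    """Hypothetical linear ground-plane model: expected pedestrian
    height (in pixels) as a function of the image row of the feet."""
    return a * y_row + b

def candidate_windows(img_h, img_w, aspect=0.41, row_step=8):
    """Generate one detection scale per image row, exploiting the
    scene constraint instead of scanning a full scale pyramid.
    Returns (x, y_top, width, height) candidate windows."""
    windows = []
    for y in range(0, img_h, row_step):
        h = expected_height(y)
        if h < 32:  # too small / above the horizon: skip this row
            continue
        w = aspect * h
        for x in range(0, int(img_w - w), int(max(w // 4, 1))):
            windows.append((x, int(y - h), int(w), int(h)))
    return windows
```

Compared with an exhaustive sliding-window search over all scales, this prunes the candidate set dramatically, which is one way such a framework can reach hundreds of detections per second.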
international conference on computer vision theory and applications | 2016
Dries Hulens; Kristof Van Beeck; Toon Goedemé
In numerous applications it is important to collect information about the gaze orientation or head angle of a person. Examples are measuring the alertness of a car driver to see if they are still awake, or the attentiveness of people crossing a street to see if they noticed the cars driving by. In our own application we want to apply cinematographic rules (e.g. the rule of thirds, where a face should be positioned left or right in the frame depending on the gaze direction) to images taken from an Unmanned Aerial Vehicle (UAV). For this, an accurate estimation of the head angle is needed. These applications should run on embedded hardware so that they can be easily attached to e.g. a car or a UAV. This implies that the head-angle detection algorithm should run in real time on minimal hardware. We therefore developed an approach that runs in real time on embedded hardware while achieving excellent accuracy. We demonstrate this approach on both a publicly available face dataset and our own dataset recorded from a UAV.
asian conference on pattern recognition | 2015
Kristof Van Beeck; Toon Goedemé
In this paper we propose an efficient detection and tracking framework targeting vulnerable road users in the blind spot camera images of a truck. Existing non-vision-based safety solutions are not able to handle this problem completely. Therefore we aim to develop an active safety system based solely on the vision input of the blind spot camera. This is far from trivial: vulnerable road users form a diverse class and exhibit a wide variety of poses and appearances. Evidently we need to achieve excellent accuracy, and furthermore we need to cope with the large lens distortion and extreme viewpoints induced by the blind spot camera. In this work we present a multiclass detection methodology which enables the efficient detection of both pedestrians and bicyclists in these challenging images. To achieve this we propose the integration of a warping window approach with multiple object detectors, which we intelligently combine in a probabilistic manner. To validate our framework we recorded several simulated dangerous blind spot scenarios with a genuine blind spot camera mounted on a real truck. We show that our approach achieves excellent accuracy on these challenging datasets.
international conference on pattern recognition applications and methods | 2014
Kristof Van Beeck; Toon Goedemé
In this paper we present a multi-pedestrian detection and tracking framework targeting a specific application: detecting vulnerable road users in a truck's blind spot zone. Research indicates that existing non-vision-based safety solutions are not able to handle this problem completely. Therefore we aim to develop an active safety system which warns the truck driver if pedestrians are present in the truck's blind spot zone. Our system uses solely the vision input from the truck's blind spot camera to detect pedestrians. This is not a trivial task, since the application inherently requires real-time operation while at the same time attaining very high accuracy. Furthermore, we need to cope with the large lens distortion and the extreme viewpoints introduced by the blind spot camera. To achieve this, we propose a fast and efficient pedestrian detection and tracking framework based on our novel perspective warping window approach. To evaluate our algorithm we recorded several realistically simulated blind spot scenarios with a genuine blind spot camera mounted on a real truck. We show that our algorithm achieves excellent accuracy at real-time performance, using only a single-core CPU implementation.
international conference on pattern recognition | 2014
Floris De Smedt; Kristof Van Beeck; Tinne Tuytelaars; Toon Goedemé
In recent years, the accuracy of pedestrian detectors has significantly improved. Currently, state-of-the-art pedestrian detectors achieve high accuracy on challenging datasets. As opposed to refining a single detector, in this paper we propose a different approach to further increase detection accuracy: combining multiple pedestrian detectors. The most straightforward way to combine pedestrian detectors would be a naive AND or OR combination. Here, we present a novel generic combination framework in which we exploit specific information from each pedestrian detector to determine the optimal combination parameters. Our main motivation for this approach is that several pedestrian detection approaches are based on very different techniques (e.g. a different feature pool), and thus an efficient combination should yield higher accuracy. Indeed, such a combination is far more powerful, and our experiments indicate that specific (that is, cleverly chosen) combinations outperform existing state-of-the-art pedestrian detection results.
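Going beyond a naive AND/OR combination can be sketched as score-level fusion: match detections from two detectors by overlap and average their confidences with per-detector weights. The weights and IoU threshold below are illustrative placeholders, not the learned combination parameters of the paper's framework.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def combine(dets_a, dets_b, w_a=0.6, w_b=0.4, iou_thr=0.5):
    """Fuse two detectors' outputs, given as (box, score) pairs.
    Boxes matched across detectors get a weighted score average
    (mutual agreement boosts confidence); unmatched boxes keep a
    down-weighted score instead of being discarded (soft OR)."""
    fused, used_b = [], set()
    for box_a, s_a in dets_a:
        best_j, best_iou = -1, iou_thr
        for j, (box_b, _) in enumerate(dets_b):
            o = iou(box_a, box_b)
            if j not in used_b and o >= best_iou:
                best_j, best_iou = j, o
        if best_j >= 0:
            used_b.add(best_j)
            fused.append((box_a, w_a * s_a + w_b * dets_b[best_j][1]))
        else:
            fused.append((box_a, w_a * s_a))
    for j, (box_b, s_b) in enumerate(dets_b):
        if j not in used_b:
            fused.append((box_b, w_b * s_b))
    return fused
```

The point of the weighting is exactly the paper's motivation: detectors built on different feature pools make partly independent errors, so agreement between them is stronger evidence than either score alone.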
Archive | 2014
Kristof Van Beeck; Toon Goedemé; Tinne Tuytelaars
In this chapter we present a vision-based pedestrian tracking system targeting a specific application: avoiding accidents in the blind spot zone of trucks. Existing blind spot safety systems do not offer a complete solution to this problem. Therefore we propose an active alarm system which automatically detects vulnerable road users in blind spot camera images and warns the truck driver of their presence. The demanding time constraint, the need for high accuracy and the large distortion that a blind spot camera introduces make this a challenging task. To achieve this we propose a warping window multi-pedestrian tracking algorithm. Our algorithm achieves real-time performance while maintaining high accuracy. To evaluate our algorithm we recorded several pedestrian datasets with a real blind spot camera mounted on a real truck, consisting of realistically simulated dangerous blind spot situations. Furthermore, we recorded datasets including bicyclists and performed preliminary experiments on them.
international joint conference on computer vision imaging and computer graphics theory and applications | 2018
Steven Puttemans; Kristof Van Beeck; Toon Goedemé
Object detection using a boosted cascade of weak classifiers is a principle that has been used in a variety of applications, ranging from pedestrian detection to fruit counting in orchards, with a high average precision. In this work we show that both the boosted cascade approach suggested by Viola and Jones and the adapted approach based on integral or aggregate channels by Dollár yield promising results on coconut tree detection in aerial images. However, with the rise of robust deep learning architectures for both detection and classification, and the significant drop in hardware costs, we wonder whether it is feasible to apply deep learning to the task of fast and robust coconut tree detection and classification in aerial imagery. We examine both classification- and detection-based architectures for this task. By doing so we show that deep learning is indeed a feasible alternative for robust coconut tree detection with a high average precision in aerial imagery, while paying attention to known issues with the selected architectures.
international conference on information science and applications | 2018
Timothy Callemein; Kristof Van Beeck; Geert Brône; Toon Goedemé
Mobile eye-tracking systems have been available for about a decade and are becoming increasingly popular in different fields of application, including marketing, sociology, usability studies and linguistics. While the user-friendliness and ergonomics of the hardware are developing at a rapid pace, the software for the analysis of mobile eye-tracking data in some respects still lacks robustness and functionality. In this paper, we investigate which state-of-the-art computer vision algorithms may be used to automate the post-analysis of mobile eye-tracking data. For the case study in this paper, we focus on mobile eye-tracker recordings made during human-human face-to-face interactions. We compared two recent publicly available frameworks (YOLOv2 and OpenPose) to relate the gaze location generated by the eye-tracker to the head and hands visible in the scene camera data. We show that the use of this single-pipeline framework provides robust results which are both more accurate and faster than previous work in the field. Moreover, our approach does not rely on manual intervention during this process.
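The core post-analysis step described above, relating the eye-tracker's gaze point to detections in the scene camera frame, reduces to a point-in-box lookup once a detector has produced labeled bounding boxes. This is a minimal sketch of that step only; the detection format and tie-breaking rule are assumptions, not the paper's pipeline.

```python
def gaze_target(gaze_xy, detections):
    """Return the label of the first detection whose bounding box
    contains the gaze point, or None if the gaze falls on background.
    detections: list of (label, (x1, y1, x2, y2)) in scene-camera
    pixel coordinates, e.g. head/hand boxes from an object detector."""
    gx, gy = gaze_xy
    for label, (x1, y1, x2, y2) in detections:
        if x1 <= gx <= x2 and y1 <= gy <= y2:
            return label
    return None
```

Running this per frame over a recording yields the automated annotation (which body part was fixated, and when) that would otherwise require manual coding.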
international conference on image analysis and recognition | 2018
Maarten Vandersteegen; Kristof Van Beeck; Toon Goedemé
The need for fast and robust pedestrian detection in various applications is growing every day. The addition of a thermal camera can help solve this problem, resulting in higher detection accuracy during the day but especially at night and in bad weather conditions. Using convolutional neural networks, the leading technology in the field of object detection and classification, we propose a network architecture and training method for an accurate real-time multispectral pedestrian detector. We select a regression-based single-pass network architecture with weights pre-trained on the Pascal VOC 2007 dataset. The network is then transfer-learned without changing the architecture, taking as input three image channels composed from the information in the four available image channels (RGB + T). In our experiments we compare the results of different input-channel compositions and select a top-performing combination. Our results show that this simple approach easily outperforms the improved ACF+T+THOG detector, coming close to the accuracy of other state-of-the-art multispectral CNNs, with a log-average miss rate of 31.2% measured on the KAIST multispectral benchmark dataset. Our main contribution: it runs as fast as 80 FPS, an estimated 10× faster than the closest competitors.
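An input-channel composition of the kind compared in these experiments can be sketched as follows: keep the network's expected three-channel input but substitute one color channel with the thermal channel. Replacing blue with thermal is just one plausible composition chosen for illustration here, not necessarily the top-performing combination reported in the paper.

```python
import numpy as np

def compose_input(rgb, thermal):
    """Build a 3-channel network input from the four available
    channels (RGB + T) by replacing the blue channel with the
    (registered, same-resolution) thermal channel. One illustrative
    composition among the several such combinations one could test.
    rgb: HxWx3 array, thermal: HxW array, same dtype and size."""
    assert rgb.shape[:2] == thermal.shape[:2], "channels must be registered"
    out = rgb.copy()
    out[..., 2] = thermal  # B channel <- thermal
    return out
```

Because the composed input still has three channels, the pre-trained weights and the architecture stay untouched, which is what makes this transfer-learning setup so simple.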