Cornelis J. Koeleman
Eindhoven University of Technology
Publications
Featured research published by Cornelis J. Koeleman.
IEEE Transactions on Consumer Electronics | 2011
Marijn J. H. Loomans; Cornelis J. Koeleman
We study scalable image and video coding for the surveillance of rooms and personal environments based on inexpensive cameras and portable devices. The scalability is achieved through a multi-level 2D dyadic wavelet decomposition featuring an accurate low-cost integer wavelet implementation with lifting. As our primary contribution, we present a modification to the SPECK wavelet coefficient encoding algorithm that significantly improves the efficiency of an embedded system implementation. The modification consists of storing the significance of all quadtree nodes in a buffer, where each node comprises several coefficients. This buffer is then used to construct the code efficiently with minimal and direct memory access. Our approach allows efficient parallel implementation on multi-core computer systems and gives a substantial reduction of memory accesses and thus power consumption. We report experimental results showing an approximate gain factor of 1,000 in execution time compared to a straightforward SPECK implementation, when combined with code optimization on a common digital signal processor. This translates to 75 full-color 4CIF 4:2:0 encoding cycles per second, clearly demonstrating the real-time capabilities of the proposed modification.
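The significance-buffer idea lends itself to a short illustration. The sketch below (plain Python/NumPy, not the authors' DSP code) precomputes, for every 2x2 quadtree node, the highest bitplane at which the node becomes significant, so a later coding pass can test node significance with a single lookup instead of rescanning coefficients; the 2x2 node granularity and function names are illustrative assumptions.

```python
import numpy as np

def build_significance_buffer(coeffs, num_bitplanes):
    """For each 2x2 node of wavelet coefficients, store the highest bitplane
    at which the node is significant. One lookup then replaces a repeated
    scan of the node's coefficients per threshold."""
    mags = np.abs(coeffs)
    h, w = mags.shape
    # maximum magnitude per 2x2 node (assumes even image dimensions)
    node_max = mags.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
    # highest significant bitplane per node; -1 marks an all-zero node
    buf = np.full(node_max.shape, -1, dtype=np.int32)
    nz = node_max > 0
    buf[nz] = np.floor(np.log2(node_max[nz])).astype(np.int32)
    return np.minimum(buf, num_bitplanes - 1)

def node_is_significant(buf, node_row, node_col, bitplane):
    # single, direct memory access instead of re-testing individual coefficients
    return buf[node_row, node_col] >= bitplane
```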
Visual Communications and Image Processing | 2008
Marijn J. H. Loomans; Cornelis J. Koeleman
In this paper, we explore the complexity-performance trade-offs for camera surveillance applications. For this purpose, we propose a Scalable Video Codec (SVC) based on wavelet transformation, for which we have adopted a t+2D architecture. Complexity is adjusted by adapting the configuration of the lifting-based motion-compensated temporal filtering (MCTF). We discuss various configurations and have found an SVC with scalable complexity and performance, enabling embedded applications. The paper discusses the trade-off between coder complexity (e.g., the number of motion-compensation stages), compression efficiency, and end-to-end delay of the video coding chain. Our SVC has a lower complexity than H.264 SVC, while the quality at full resolution remains close to H.264 SVC (within 1 dB for surveillance-type video at 4CIF, 60 Hz) and is sufficient at lower resolutions for our video surveillance application.
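As a rough illustration of lifting-based MCTF, the following sketch performs one temporal decomposition level with 5/3 lifting weights; the identity motion compensation and the boundary handling are simplifying assumptions for illustration, not the paper's configuration.

```python
def mctf_level(frames, mc=lambda f: f):
    """One temporal decomposition level. Odd frames give the high-pass band,
    even frames the low-pass band (5/3 lifting weights 1/2 and 1/4).
    `frames` is a list of equally sized arrays with an even length."""
    highs, lows = [], []
    n = len(frames)
    # predict step: high-pass = odd frame minus average of (motion-compensated) neighbours
    for t in range(1, n, 2):
        left = mc(frames[t - 1])
        right = mc(frames[t + 1]) if t + 1 < n else left
        highs.append(frames[t] - (left + right) / 2.0)
    # update step: low-pass = even frame plus a quarter of the neighbouring high-pass frames
    for i, t in enumerate(range(0, n, 2)):
        prev_h = highs[i - 1] if i > 0 else highs[0]
        cur_h = highs[min(i, len(highs) - 1)]
        lows.append(frames[t] + (prev_h + cur_h) / 4.0)
    return lows, highs
```

Applying the same step recursively to the low-pass band gives the multi-level temporal decomposition of a t+2D architecture.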
Advanced Video and Signal Based Surveillance | 2009
Julien A. Vijverberg; Marijn J. H. Loomans; Cornelis J. Koeleman
This paper presents a background segmentation technique that is able to produce acceptable segmentation masks under fast global illumination changes. The histogram of the frame-based background difference is modeled with multiple kernels. The model that best represents the histogram is used to determine the shift in luminance due to global illumination or diaphragm changes, such that the background difference can be compensated. Experimental results reveal that using global illumination compensation instead of only the approximated-median method reduces the number of incorrectly classified pixels from 77% to 19% shortly after a fast change. The performance of the proposed technique is similar to state-of-the-art related work for global illumination changes, despite the fact that only luminance information is used. The algorithm is computationally simple and can operate at 30 frames per second at VGA resolution on a 3-GHz Pentium IV PC.
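A minimal sketch of the compensation idea, assuming the dominant mode of the difference histogram approximates the global luminance shift (the paper's multi-kernel model selection is more elaborate than this):

```python
import numpy as np

def segment_foreground(frame_y, background_y, threshold=25):
    """frame_y, background_y: uint8 luminance images of equal size.
    Returns a boolean foreground mask after removing a global luminance shift."""
    diff = frame_y.astype(np.int16) - background_y.astype(np.int16)
    # 1-wide bins over the full signed difference range
    hist, bin_edges = np.histogram(diff, bins=511, range=(-255, 256))
    # dominant mode of the difference histogram ~ global illumination shift
    shift = bin_edges[np.argmax(hist)] + 0.5
    compensated = np.abs(diff - shift)
    return compensated > threshold
```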
International Conference on Digital Signal Processing | 2009
Marijn J. H. Loomans; Cornelis J. Koeleman
In this paper, we discuss the design and real-time implementation of a multi-level two-dimensional Discrete Wavelet Transform (2D-DWT). The wavelet transform uses the well-known 5/3 filter coefficients and is implemented using the lifting framework. The transform allows complexity-scalable solutions with different latencies for scalable video coding. We have extensively utilized SIMD (Single Instruction Multiple Data) and DMA (Direct Memory Access) techniques, where the proposed process of background DMA transfers is so effective that the ALUs are almost never starved for input data. The resulting implementation performs a 4-level transform at CCIR-601 broadcast resolution in 3.65 Mcycles, including memory stalls, on a DM642 DSP. At a clock rate of 600 MHz, this translates to more than 160 transforms per second, satisfying the performance requirements of a real-time image/video encoding system for, e.g., surveillance applications.
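The lifting arithmetic itself is compact; the sketch below shows a plain-Python version of the 5/3 integer lifting steps and a separable multi-level 2D transform, with periodic boundary handling as a simplifying assumption and none of the SIMD/DMA scheduling discussed in the paper.

```python
import numpy as np

def dwt53_1d(x):
    """One-level 1D 5/3 integer lifting transform along the last axis.
    Assumes an even length and periodic extension; returns (low, high)."""
    x = x.astype(np.int32)
    even, odd = x[..., ::2], x[..., 1::2]
    # predict: subtract the average of the even neighbours from the odd samples
    high = odd - ((even + np.roll(even, -1, axis=-1)) >> 1)
    # update: add a quarter of the neighbouring high-pass samples to the even samples
    low = even + ((np.roll(high, 1, axis=-1) + high + 2) >> 2)
    return low, high

def dwt53_2d(img, levels=4):
    """Multi-level separable 2D transform; returns the final LL band and the
    (LH, HL, HH) detail bands per level."""
    ll, bands = img, []
    for _ in range(levels):
        lo, hi = dwt53_1d(ll)                     # transform rows
        ll_next, lh = (a.T for a in dwt53_1d(lo.T))   # transform columns of the low band
        hl, hh = (a.T for a in dwt53_1d(hi.T))        # transform columns of the high band
        bands.append((lh, hl, hh))
        ll = ll_next
    return ll, bands
```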
Proceedings of SPIE | 2010
Julien A. Vijverberg; Marijn J. H. Loomans; Cornelis J. Koeleman
This paper proposes two novel motion-vector-based techniques for target detection and target tracking in surveillance videos. The algorithms are designed to operate on a resource-constrained device, such as a surveillance camera, and to reuse the motion vectors generated by the video encoder. The first novel algorithm, for target detection, uses motion vectors to construct a consistent motion mask, which is combined with a simple background segmentation technique to obtain a segmentation mask. The second proposed algorithm aims at multi-target tracking and uses motion vectors to assign blocks to targets based on five features. The weights of these features are adapted based on the interaction between targets. These algorithms are combined into one complete analysis application. The performance of this application for target detection has been evaluated on the i-LIDS sterile zone dataset and achieves an F1-score of 0.40-0.69. The performance of the analysis algorithm for multi-target tracking has been evaluated using the CAVIAR dataset and achieves an MOTP of around 9.7 and an MOTA of 0.17-0.25. On a selection of targets in videos from other datasets, the achieved MOTP and MOTA are 8.8-10.5 and 0.32-0.49, respectively. The execution time on a PC-based platform is 36 ms, which includes the 20 ms for generating motion vectors that are also required by the video encoder.
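A minimal sketch of how a consistent motion mask could be accumulated from encoder motion vectors; the magnitude threshold and the persistence rule are illustrative assumptions, not the paper's exact criteria.

```python
import numpy as np

def update_motion_mask(mv_field, persistence, min_mag=1.0, min_frames=3):
    """mv_field: (rows, cols, 2) block motion vectors reused from the encoder.
    persistence: (rows, cols) int counter carried over between frames.
    Returns (mask, updated persistence)."""
    mag = np.linalg.norm(mv_field, axis=-1)
    moving = mag >= min_mag
    # count consecutive frames in which each block has been moving
    persistence = np.where(moving, persistence + 1, 0)
    # a block joins the motion mask once it has moved consistently for several frames
    mask = persistence >= min_frames
    return mask, persistence
```

In the described application, such a mask would then be combined with the simple background-segmentation result to obtain the final segmentation mask.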
International Conference on Multimedia and Expo | 2009
Marijn J. H. Loomans; Cornelis J. Koeleman
In this paper, we discuss the design and real-time implementation of a Scalable Video Codec (SVC) for surveillance applications. We present a complexity-scalable temporal wavelet transform and the implementation of a multi-level 2D 5/3 wavelet transform using the lifting framework. We have employed SIMD (Single Instruction Multiple Data) and DMA (Direct Memory Access) techniques, where the proposed process of background DMA transfers is so effective that the ALUs are always supplied with input data. We execute a 4-level transform at 4CIF (CCIR-601) broadcast resolution in 3.65 Mcycles, including memory stalls, on a TMS320DM642 DSP. At a clock rate of 600 MHz, this translates to more than 160 transforms per second. For our complete SVC, we achieve a frame rate of 12.5–15 fps, depending on scene activity.
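For reference, the quoted throughput follows directly from the cycle count and clock rate (a simple check, not part of the paper):

```python
cycles_per_transform = 3.65e6      # 4-level transform, including memory stalls
clock_hz = 600e6                   # TMS320DM642 clock rate
print(clock_hz / cycles_per_transform)   # ~164.4, i.e. more than 160 transforms per second
```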
International Conference on Image Processing | 2009
Marijn J. H. Loomans; Cornelis J. Koeleman
In this paper, we discuss the design of a highly parallel motion estimator for real-time Scalable Video Coding (SVC). In an SVC, motion is commonly estimated bidirectionally and over various temporal distances. Current motion estimators are optimized for frame-by-frame estimation and are designed without serious implementation constraints. To support efficient embedded applications, we propose a Highly Parallel Predictive Search (HPPS) motion estimator that preserves accurate estimation performance. The motion estimation algorithm is optimized for processing on parallel cores and utilizes a novel recursive search strategy. This strategy is based on hierarchically increasing the temporal distance in the estimation algorithm while using the state of the previous hierarchical layer as an input. Due to the absence of local recursions in the algorithm, the proposed motion estimator has a constant computational load, regardless of video activity or temporal distance. We have compared the proposed motion estimator to the well-known full search, ARPS3, 3DRS and EPZS motion estimators for the SVC case, and obtain a performance close to full search (within 0.2 dB), while outperforming the other algorithms in prediction.
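The sketch below illustrates the flavour of such a candidate-based predictive search: every block evaluates a small, fixed candidate set taken only from the previous hierarchical layer, so blocks can be processed independently and the cost per block is constant. The block size, SAD metric and exact candidate set are assumptions for illustration, not the published HPPS definition.

```python
import numpy as np

def sad(cur, ref, bx, by, mv, bs):
    """Sum of absolute differences for block (bx, by) displaced by mv = (dx, dy)."""
    h, w = ref.shape
    x = np.clip(bx * bs + mv[0], 0, w - bs)
    y = np.clip(by * bs + mv[1], 0, h - bs)
    cur_blk = cur[by * bs:(by + 1) * bs, bx * bs:(bx + 1) * bs].astype(np.int32)
    ref_blk = ref[y:y + bs, x:x + bs].astype(np.int32)
    return np.abs(cur_blk - ref_blk).sum()

def predictive_search_field(cur, ref, prev_layer_field, bs=16):
    """Estimate a block motion field using candidates from the previous layer only,
    so every block is independent of the others (no local recursion)."""
    rows, cols = cur.shape[0] // bs, cur.shape[1] // bs
    field = np.zeros((rows, cols, 2), dtype=np.int32)
    for by in range(rows):
        for bx in range(cols):
            candidates = [np.array([0, 0]), prev_layer_field[by, bx]]
            if bx > 0:
                candidates.append(prev_layer_field[by, bx - 1])   # left neighbour, previous layer
            if by > 0:
                candidates.append(prev_layer_field[by - 1, bx])   # top neighbour, previous layer
            costs = [sad(cur, ref, bx, by, mv, bs) for mv in candidates]
            field[by, bx] = candidates[int(np.argmin(costs))]
    return field
```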
International Conference on Image Processing | 2010
Marijn J. H. Loomans; Cornelis J. Koeleman
In this paper, we present a temporal candidate generation scheme that can be applied to motion estimators in Scalable Video Codecs (SVCs). For bidirectional motion estimation, a test is usually made for each block to determine which motion-compensation direction is preferred: forward, bidirectional or backward. Instead of simply using the last computed motion vector field (backward or forward), which introduces an asymmetry in the estimation, we involve both vector fields to generate a single candidate field for a more stable and improved prediction. This field is generated with the aid of mode-decision information from the codec. This single field of motion vector candidates serves two purposes: (1) it initializes the next recursion, and (2) it forms the foundation for the succeeding scale in the scalable coding. We have implemented this improved candidate system for both the HPPS and EPZS motion estimators in a scalable video codec, and have found that it reduces the errors caused by occlusion of moving objects or image boundaries. For EPZS, only a small improvement is observed compared to the simple candidate scheme. For HPPS, however, the improvements are more significant: at individual levels, motion-compensation performance improves by up to 0.84 dB, and when implemented in the SVC, HPPS slightly outperforms EPZS.
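A minimal sketch of the fusion step, assuming per-block mode codes and a simple averaging rule for bidirectional blocks (the scheme in the paper may weight or select differently):

```python
import numpy as np

FORWARD, BACKWARD, BIDIRECTIONAL = 0, 1, 2   # illustrative mode codes

def fuse_candidate_field(fwd_field, bwd_field, modes):
    """fwd_field, bwd_field: (rows, cols, 2) motion vector fields.
    modes: (rows, cols) per-block mode decisions from the codec.
    Returns a single candidate field combining both directions."""
    bwd_as_fwd = -bwd_field                       # flip so both fields point the same way
    avg = (fwd_field + bwd_as_fwd) / 2.0          # bidirectional blocks use the average
    candidates = np.where(modes[..., None] == FORWARD, fwd_field,
                  np.where(modes[..., None] == BACKWARD, bwd_as_fwd, avg))
    return candidates
```

The resulting field can then seed the next recursion of the estimator and the next scale of the scalable coder.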
Advanced Video and Signal Based Surveillance | 2013
Julien A. Vijverberg; Cornelis J. Koeleman
This paper considers tracking of objects for video-based intrusion detection systems. Current tracking algorithms can be used for surveillance, but in that use case they execute with too high a latency and are not suitable for real-time applications. In this paper, we propose novel techniques for tracklet-based tracking algorithms that improve the execution time by limiting the number of tracklets and the number of connection updates between tracklets. An additional improvement is that, whereas tracklet clustering has previously been applied to tracking with complete detections (i.e., a detection has a one-to-one correspondence to an object), our proposed algorithm can handle incomplete detections as well. We show that the algorithm yields only two avoidable false positives on the i-LIDS SZTE dataset. To show that the algorithm can be executed in real time, we have measured the worst-case execution time on a popular DSP, which is only 31 ms per frame. Furthermore, the tracking algorithm requires only 35 seconds to process the complete i-LIDS dataset on a PC.
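To make the tracklet bookkeeping concrete, the sketch below shows a hypothetical tracklet record and a connection-update step whose per-frame cost is bounded by capping the number of tracklets and pairwise updates; the caps and the toy linking score are assumptions, not the paper's method.

```python
from dataclasses import dataclass, field

@dataclass
class Tracklet:
    track_id: int
    boxes: list = field(default_factory=list)   # (frame, x, y, w, h) detections
    links: dict = field(default_factory=dict)   # other track_id -> link score

def link_score(a, b):
    """Toy spatial-proximity score between the last box of `a` and the first box of `b`."""
    if not a.boxes or not b.boxes:
        return 0.0
    _, ax, ay, _, _ = a.boxes[-1]
    _, bx, by, _, _ = b.boxes[0]
    dist = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
    return 1.0 / (1.0 + dist)

def update_links(tracklets, max_tracklets=64, max_updates_per_frame=32):
    """Keep only the most recent tracklets and refresh a bounded number of
    pairwise link scores per frame, so the worst-case cost stays constant."""
    tracklets = tracklets[-max_tracklets:]
    updates = 0
    for i, a in enumerate(tracklets):
        for b in tracklets[i + 1:]:
            if updates >= max_updates_per_frame:
                return tracklets
            a.links[b.track_id] = link_score(a, b)
            updates += 1
    return tracklets
```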
Computational Intelligence and Security | 2011
Julien A. Vijverberg; Cornelis J. Koeleman
This paper considers the problem of tracking a variable number of objects through a surveillance site monitored by multiple cameras with slightly overlapping fields of view. To this end, we propose to cluster tracklets generated by a commercially available single-camera video-analysis algorithm, which is based solely on the position of objects. A first contribution of this paper is the proposal of a novel, extended energy function representing the confidence that two tracklets correspond to the same object. In contrast to previous work, the proposed motion-consistency error enables the clustering of tracklets from arbitrary views and with arbitrary temporal overlap. A second contribution is the evaluation of the performance of several clustering algorithms. The results show that clustering techniques employing only the merging of tracklets yield a 10–15% higher F1 score than clustering techniques using various types of clustering moves, including split and swap moves.
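A sketch of what such an energy function could look like, combining a positional term with a motion-consistency term; the actual terms, calibration and weights in the paper differ, so this only illustrates the structure.

```python
import numpy as np

def tracklet_energy(track_a, track_b, w_pos=1.0, w_motion=1.0):
    """track_a, track_b: (N, 3) arrays of (t, x, y) ground-plane samples,
    with track_b starting after track_a ends. Lower energy means a higher
    confidence that the two tracklets belong to the same object."""
    a_end, b_start = track_a[-1], track_b[0]
    dt = max(b_start[0] - a_end[0], 1.0)
    # positional term: distance between the end of A and the start of B, per unit time
    e_pos = np.linalg.norm(a_end[1:] - b_start[1:]) / dt
    # motion-consistency term: how well A's average velocity extrapolates to B's start
    duration_a = max(track_a[-1, 0] - track_a[0, 0], 1.0)
    vel_a = (track_a[-1, 1:] - track_a[0, 1:]) / duration_a
    predicted = a_end[1:] + vel_a * dt
    e_motion = np.linalg.norm(predicted - b_start[1:]) / dt
    return w_pos * e_pos + w_motion * e_motion
```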