Vedran Jelaca
Ghent University
Publication
Featured research published by Vedran Jelaca.
Proceedings of SPIE | 2012
Sebastian Gruenwedel; Vedran Jelaca; Jorge Oswaldo Niño-Castañeda; Peter Van Hese; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips
Real-time tracking of people has many applications in computer vision, for instance in surveillance, domotics, elderly care and video conferencing, and typically requires multiple cameras. However, this problem is very challenging because of the need to deal with frequent occlusions and environmental changes. Another challenge is to develop solutions which scale well with the size of the camera network. Such solutions need to carefully restrict the overall communication in the network and often involve distributed processing. In this paper we present a distributed person tracker that addresses these issues. Real-time processing is achieved by distributing tasks between the cameras and a fusion node. The latter fuses only high-level data based on low-bandwidth input streams from the cameras: tracking is first performed on the image plane of each camera, and only metadata are sent to a local fusion node. We designed the proposed system for a low communication load and for robustness. We evaluate the performance of the tracker in meeting scenarios where persons are often occluded by other persons and/or furniture. We present experimental results which show that our tracking approach is accurate even in cases of severe occlusions in some of the views.
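The camera-to-fusion-node split described above can be illustrated with a minimal sketch: each camera sends only compact metadata (a bounding box per person), and the fusion node maps it to the ground plane and combines it. The homographies, the bounding-box format and the simple averaging fusion rule below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the metadata-only camera / fusion-node data flow (assumed details).
import numpy as np

def foot_point(bbox):
    """Bottom-center of an image-plane bounding box given as (x, y, w, h)."""
    x, y, w, h = bbox
    return np.array([x + w / 2.0, y + h])

def to_ground_plane(point_img, H):
    """Project an image point to ground-plane coordinates with homography H."""
    p = H @ np.array([point_img[0], point_img[1], 1.0])
    return p[:2] / p[2]

def fuse(detections, homographies):
    """Fuse per-camera metadata {cam_id: bbox} into one ground-plane estimate."""
    positions = [to_ground_plane(foot_point(bbox), homographies[cam])
                 for cam, bbox in detections.items()]
    return np.mean(positions, axis=0)   # illustrative fusion rule

# Example: two cameras report a bounding box for the same person.
H = {0: np.eye(3), 1: np.eye(3)}                         # placeholder calibration
meta = {0: (310, 120, 40, 160), 1: (305, 118, 42, 162)}
print(fuse(meta, H))
```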
international conference on distributed smart cameras | 2011
Sebastian Gruenwedel; Vedran Jelaca; Peter Van Hese; Richard P. Kleihorst; Wilfried Philips
An occupancy map provides an abstract top view of a scene and can be used for many applications such as domotics, surveillance, elderly care and video teleconferencing. Such maps can be accurately estimated from multiple camera views. However, using a network of regular high-resolution cameras makes the system expensive and quickly raises privacy concerns (e.g. in elderly homes). Furthermore, their power consumption makes battery operation difficult. A solution could be the use of a network of low-resolution visual sensors, but their limited resolution could degrade the accuracy of the maps. In this paper we used simulations to determine the minimum resolution needed for deriving accurate occupancy maps, which were then used to track people. Multi-view occupancy maps were computed from foreground silhouettes derived via an analysis of moving edges. Ground occupancies computed from each view were fused in a Dempster-Shafer framework. Tracking was done via a Bayes filter using the occupancy map at each time instant as the measurement. We found that for a room of 8.8 by 9.2 m, 4 cameras with a resolution as low as 64 by 48 pixels were sufficient to estimate accurate occupancy maps and track up to 4 people. These findings indicate that it is possible to use low-resolution visual sensors to build a cheap, power-efficient and privacy-friendly system for occupancy monitoring.
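The Dempster-Shafer fusion step mentioned above can be sketched for a single grid cell. The frame of discernment {occupied, free}, the mass assignments and the example values are assumptions for illustration; the paper's exact evidence model may differ.

```python
# Dempster's rule of combination for one occupancy-grid cell (illustrative sketch).
def combine_masses(m1, m2):
    """Combine two mass functions over the frame {occupied, free}; each mass is a
    dict with keys 'occ', 'free' and 'unknown' (the whole frame)."""
    K = m1['occ'] * m2['free'] + m1['free'] * m2['occ']   # conflicting evidence
    if K >= 1.0:
        raise ValueError("total conflict between sources")
    norm = 1.0 - K
    occ = (m1['occ'] * m2['occ'] + m1['occ'] * m2['unknown']
           + m1['unknown'] * m2['occ']) / norm
    free = (m1['free'] * m2['free'] + m1['free'] * m2['unknown']
            + m1['unknown'] * m2['free']) / norm
    unknown = m1['unknown'] * m2['unknown'] / norm
    return {'occ': occ, 'free': free, 'unknown': unknown}

def fuse_views(per_camera_masses):
    """Fuse ground-occupancy evidence from all cameras for one grid cell."""
    fused = per_camera_masses[0]
    for m in per_camera_masses[1:]:
        fused = combine_masses(fused, m)
    return fused

# Example: one confident and one uncertain camera observing the same cell.
print(fuse_views([{'occ': 0.7, 'free': 0.1, 'unknown': 0.2},
                  {'occ': 0.3, 'free': 0.1, 'unknown': 0.6}]))
```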
Image and Vision Computing | 2013
Vedran Jelaca; Aleksandra Pizurica; Jorge Oswaldo Niño-Castañeda; Andrés Frías-Velázquez; Wilfried Philips
Tracking vehicles using a network of cameras with non-overlapping views is a challenging problem of great importance in traffic surveillance. One of the main challenges is accurate vehicle matching across the cameras. Even if the cameras have similar views on the vehicles, vehicle matching remains a difficult task due to changes of their appearance between observations, and due to inaccurate detections and occlusions, which often occur in real scenarios. To be executed on smart cameras, the matching also has to be efficient in terms of the required data and computations. To address these challenges we present a low-complexity method for vehicle matching that is robust against appearance changes and inaccuracies in vehicle detection. We efficiently represent vehicle appearances using signature vectors composed of Radon transform-like projections of the vehicle images and compare them in a coarse-to-fine fashion using a simple combination of 1-D correlations. To deal with appearance changes we include multiple observations in each vehicle appearance model. These observations are automatically collected along the vehicle trajectory. The proposed signature vectors can be calculated in low-complexity smart cameras by a simple scan-line algorithm in the camera software itself, and transmitted to the other smart cameras or to the central server. Extensive experiments based on real traffic surveillance videos recorded in a tunnel validate our approach.
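A rough sketch of the projection-based signatures and their 1-D correlation comparison is given below. The normalization, the shift search range and the equal weighting of the two profiles are illustrative choices, not the exact coarse-to-fine scheme of the paper.

```python
# Projection-profile vehicle signatures and a simple 1-D correlation match (sketch).
import numpy as np

def signature(img):
    """Column and row projection profiles of a grayscale vehicle patch,
    zero-meaned and unit-normalized so the comparison ignores global intensity."""
    img = img.astype(np.float64)
    def norm(v):
        v = v - v.mean()
        return v / (np.linalg.norm(v) + 1e-9)
    return norm(img.sum(axis=0)), norm(img.sum(axis=1))

def match_score(sig_a, sig_b, max_shift=5):
    """Best correlation over small shifts, averaged over both profiles."""
    def best_corr(a, b):
        scores = []
        for s in range(-max_shift, max_shift + 1):
            bb = np.roll(b, s)
            n = min(len(a), len(bb))
            scores.append(float(np.dot(a[:n], bb[:n])))
        return max(scores)
    return 0.5 * (best_corr(sig_a[0], sig_b[0]) + best_corr(sig_a[1], sig_b[1]))

# Example: a patch matched against a slightly brightened copy of itself.
patch = np.random.default_rng(0).random((48, 96))
print(match_score(signature(patch), signature(1.2 * patch + 10)))
```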
ACM Transactions on Sensor Networks | 2014
Sebastian Gruenwedel; Vedran Jelaca; Jorge Oswaldo Niño-Castañeda; Peter Van Hese; Dimitri Van Cauwelaert; Dirk Van Haerenborgh; Peter Veelaert; Wilfried Philips
Real-time tracking of people has many applications in computer vision, especially in the domain of surveillance. Typically, a network of cameras is used to solve this task. However, real-time tracking remains challenging due to frequent occlusions and environmental changes. Besides, multicamera applications often require a trade-off between accuracy and communication load within a camera network. In this article, we present a real-time distributed multicamera tracking system for the analysis of people in a meeting room. One contribution of the article is that we provide a scalable solution using smart cameras. The system is scalable because it requires a very small communication bandwidth and only lightweight processing on a "fusion center" which produces the final tracking results. The fusion center can thus be cheap and can be duplicated to increase reliability. In the proposed decentralized system all low-level video processing is performed on the smart cameras. The smart cameras transmit a compact high-level description of the moving people to the fusion center, which fuses these data using a Bayesian approach. A second contribution of our system is that the camera-based processing takes into account feedback from the fusion center about the most recent locations and motion states of the tracked people. Based on this feedback and background subtraction results, the smart cameras generate a best hypothesis for each person. We evaluate the performance (in terms of precision and accuracy) of the tracker in indoor and meeting scenarios where individuals are often occluded by other people and/or furniture. Experimental results are presented for the tracking of up to 4 people in a meeting room of 9 m by 5 m using 6 cameras. In about two hours of data, our method has only 0.3 losses per minute and can typically measure the position with an accuracy of 21 cm. We compare our approach to state-of-the-art methods and show that our system performs at least as well as other methods. However, our system is capable of running in real time and therefore produces instantaneous results.
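The feedback loop from the fusion center to the smart cameras could look roughly like the sketch below: the camera projects the fed-back ground-plane prediction into its image plane and keeps the nearest detection as its best hypothesis. The homography, the gate size and the nearest-neighbour rule are assumptions for illustration, not the authors' exact hypothesis generation.

```python
# Using fusion-center feedback to pick a best hypothesis per camera (assumed details).
import numpy as np

def project_to_image(ground_xy, H_ground_to_image):
    """Map a fed-back ground-plane position into the camera's image plane."""
    p = H_ground_to_image @ np.array([ground_xy[0], ground_xy[1], 1.0])
    return p[:2] / p[2]

def best_hypothesis(detections, feedback_xy, H_ground_to_image, gate=80.0):
    """Pick the detection (foot point in pixels) closest to the fed-back
    prediction; return None if nothing falls inside the gate."""
    pred = project_to_image(feedback_xy, H_ground_to_image)
    dists = [np.linalg.norm(np.asarray(det) - pred) for det in detections]
    if not dists or min(dists) > gate:
        return None
    return detections[int(np.argmin(dists))]

# Example with a placeholder calibration and two candidate foot points.
print(best_hypothesis([(320, 400), (120, 410)], (318, 402), np.eye(3)))
```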
digital image computing: techniques and applications | 2011
Jorge Oswaldo Niño Castañeda; Vedran Jelaca; Andres Frias; Aleksandra Pizurica; Wilfried Philips; Reyes Rios Cabrera; Tinne Tuytelaars
We propose a real-time multi-camera tracking approach to follow vehicles in a tunnel surveillance environment with multiple non-overlapping cameras. In such a system, vehicles have to be tracked in each camera and passed correctly from one camera to another through the tunnel. This task becomes extremely difficult when intra-camera errors accumulate. The most typical issues in tunnel scenes are due to low image quality, poor illumination and lighting from the vehicles. Vehicle detection is performed using an AdaBoost detector, sped up by separate cascades for cars and trucks, which also improves the overall detection accuracy. A Kalman filter with two observations, given by the vehicle detector and an averaged optical flow vector, is used for single-camera tracking. Information from the collected tracks is used to feed the inter-camera matching algorithm, which measures the correlation of Radon transform-like projections between the vehicle images. Our main contribution is a novel method to reduce the false positive rate induced by the detection stage. We favour recall over precision in the detection stage, and identify false positive patterns which are then handled in a subsequent high-level decision-making step. Results are presented for the case of 3 cameras placed consecutively in an inter-city tunnel. We demonstrate the increased tracking performance of our method compared to existing Bayesian filtering techniques for vehicle tracking in tunnel surveillance.
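A minimal version of the single-camera Kalman filter with two observations (detector position and averaged optical flow position) is sketched below, using sequential measurement updates. The constant-velocity model and the noise levels are assumed for illustration.

```python
# Constant-velocity Kalman filter with two sequential position updates per frame (sketch).
import numpy as np

class TwoObservationKalman:
    """State [x, y, vx, vy]; updated per frame with a detector measurement
    and an optical-flow measurement, each with its own (assumed) noise level."""
    def __init__(self, x0, dt=1.0):
        self.x = np.array([x0[0], x0[1], 0.0, 0.0])
        self.P = np.eye(4) * 100.0
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.Q = np.eye(4) * 1.0
        self.H = np.hstack([np.eye(2), np.zeros((2, 2))])

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, r):
        R = np.eye(2) * r
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, dtype=float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P

    def step(self, z_detector=None, z_flow=None):
        self.predict()
        if z_detector is not None:
            self.update(z_detector, r=4.0)    # detector: lower noise (assumed)
        if z_flow is not None:
            self.update(z_flow, r=16.0)       # optical flow: higher noise (assumed)
        return self.x[:2]

# Example: one frame with both measurements available.
kf = TwoObservationKalman((100.0, 200.0))
print(kf.step(z_detector=(104, 203), z_flow=(106, 205)))
```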
Proceedings of SPIE | 2012
Vedran Jelaca; Jorge Oswaldo Niño Castañeda; Aleksandra Pizurica; Wilfried Philips
Vehicle tracking is of great importance for tunnel safety. To detect incidents or disturbances in traffic flow it is necessary to reliably track vehicles in real time. The tracking is a challenging task due to poor lighting conditions in tunnels and frequent light reflections from tunnel walls, the road and the vehicles themselves. In this paper we propose a multi-clue tracking approach combining foreground blobs, optical flow of Shi-Tomasi features and image projection profiles in a Kalman filter with a constant velocity model. The main novelty of our approach lies in using vertical and horizontal image projection profiles (so-called vehicle signatures) as additional measurements to overcome the problems of inconsistent foreground and optical flow clues in cases of severe lighting changes. These signatures consist of Radon transform-like projections along each image column and row. We compare the signatures from two successive video frames to align them and to correct the predicted vehicle position and size. We tested our approach on a real tunnel video sequence. The results show an improvement in the accuracy of the tracker and fewer target losses when the image projection clues are used. Furthermore, the calculation and comparison of image projections is computationally efficient, so the tracker maintains real-time performance (25 fps on a single 1.86 GHz processor).
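The alignment of projection profiles between successive frames can be sketched as a 1-D cross-correlation whose peak gives the displacement used to correct the predicted position. The zero-mean normalization and the synthetic example below are assumed details.

```python
# Estimating the shift between two projection profiles via cross-correlation (sketch).
import numpy as np

def estimate_shift(profile_prev, profile_cur):
    """Displacement between two column (or row) projection profiles,
    taken as the peak location of their 1-D cross-correlation."""
    a = profile_prev - profile_prev.mean()
    b = profile_cur - profile_cur.mean()
    corr = np.correlate(b, a, mode="full")
    return int(np.argmax(corr)) - (len(a) - 1)

# Example: a profile shifted by +3 pixels is recovered as a +3 displacement.
base = np.exp(-0.5 * ((np.arange(64) - 30) / 4.0) ** 2)
print(estimate_shift(base, np.roll(base, 3)))   # -> 3
```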
international conference on distributed smart cameras | 2011
Vedran Jelaca; Sebastian Gruenwedel; Jorge Oswaldo Niño-Castañeda; Peter Van Hese; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips
In this demo we present a people tracker for indoor environments. The tracker executes in a network of smart cameras with overlapping views. Special attention is given to real-time processing through the distribution of tasks between the cameras and the fusion server. Each camera processes its images and tracks people in the image plane. Instead of camera images, only metadata (a bounding box per person) are sent from each camera to the fusion server. The metadata are used on the server side to estimate the position of each person in real-world coordinates. Although the tracker is designed to suit any indoor environment, in this demo the tracker's performance is presented in a meeting scenario, where occlusions of people by other people and/or furniture are significant and occur frequently. Multiple cameras ensure views from multiple angles, which keeps tracking accurate even in cases of severe occlusions in some of the views.
advanced concepts for intelligent vision systems | 2010
Ivana Despotovic; Vedran Jelaca; Ewout Vansteenkiste; Wilfried Philips
Segmentation of noisy images is one of the most challenging problems in image analysis, and any improvement of segmentation methods can highly influence the performance of many image processing applications. In automated image segmentation, fuzzy c-means (FCM) clustering has been widely used because of its ability to model uncertainty within the data, its applicability to multi-modal data and its fairly robust behaviour. However, the standard FCM algorithm does not consider any information about the spatial image context and is highly sensitive to noise and other imaging artefacts. Considering the above-mentioned problems, we developed a new FCM-based approach for noise-robust fuzzy clustering, which we present in this paper. In this new iterative algorithm we incorporate both spatial and feature-space information into the similarity measure and the membership function. The spatial information depends on the relative location and features of the neighbouring pixels. The performance of the proposed algorithm is tested on synthetic images with different noise levels and on real images. Quantitative and qualitative segmentation results show that our method efficiently preserves the homogeneity of the regions and is more robust to noise than other FCM-based methods.
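A compact sketch of a spatially regularized FCM iteration is given below: standard FCM updates followed by a local averaging of the membership maps. This is a simplified illustrative variant, not the paper's exact similarity measure or membership function.

```python
# Simplified spatially regularized fuzzy c-means on a grayscale image (sketch).
import numpy as np
from scipy.ndimage import uniform_filter

def fcm_spatial(img, n_clusters=3, m=2.0, n_iter=50, win=3):
    """Standard FCM updates with an extra step that averages each membership
    map over a local window, which suppresses isolated noisy assignments."""
    x = img.astype(np.float64).ravel()
    rng = np.random.default_rng(0)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)
    for _ in range(n_iter):
        um = u ** m
        centers = um @ x / um.sum(axis=1)                 # cluster centroids
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9  # pixel-to-center distances
        w = d ** (-2.0 / (m - 1.0))
        u = w / w.sum(axis=0)                             # standard FCM memberships
        # spatial step: average each membership map over a local window
        u = np.stack([uniform_filter(ui.reshape(img.shape), size=win).ravel()
                      for ui in u])
        u /= u.sum(axis=0)
    return u.argmax(axis=0).reshape(img.shape), centers

# Example: segment a noisy two-region synthetic image into two clusters.
img = np.zeros((64, 64)); img[:, 32:] = 1.0
img += 0.3 * np.random.default_rng(1).standard_normal(img.shape)
labels, centers = fcm_spatial(img, n_clusters=2)
print(centers)
```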
Proceedings of SPIE | 2011
Andrés Frías-Velázquez; Jorge Oswaldo Niño-Castañeda; Vedran Jelaca; Aleksandra Pižurica; Wilfried Philips
A novel approach to automatically detect vehicles in road tunnels is presented in this paper. Non-uniform and poor illumination conditions prevail in road tunnels, making it difficult to achieve robust vehicle detection. In order to cope with the illumination issues, we propose a local higher-order statistic filter to make the vehicle detection invariant to illumination changes, whereas a morphology-based background subtraction is used to generate a convex hull segmentation of the vehicles. An evaluation comparing our approach with a benchmark object detector shows that our approach performs better in terms of false detection rate and overlap area detection.
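One plausible instance of a local higher-order statistic filter is a local kurtosis map, since standardized moments are invariant to local affine intensity changes (gain and offset). The choice of kurtosis, the window size and the moment approximation below are assumptions; the abstract does not specify the exact statistic used.

```python
# Local fourth standardized moment (kurtosis) as an illumination-robust filter (sketch).
import numpy as np
from scipy.ndimage import uniform_filter

def local_kurtosis(img, win=7, eps=1e-6):
    """Per-pixel kurtosis over a win x win neighbourhood. Central moments are
    approximated by locally averaging powers of (x - local mean)."""
    x = img.astype(np.float64)
    mu = uniform_filter(x, size=win)
    m2 = uniform_filter((x - mu) ** 2, size=win)
    m4 = uniform_filter((x - mu) ** 4, size=win)
    return m4 / (m2 ** 2 + eps)

# Example: the map is (approximately) unchanged by a gain/offset of the input.
frame = np.random.default_rng(2).random((120, 160))
print(np.allclose(local_kurtosis(frame), local_kurtosis(2.0 * frame + 50.0), atol=1e-3))
```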
Proceedings of SPIE | 2012
M. Macesic; Vedran Jelaca; Jorge Oswaldo Niño-Castañeda; N. Prodanovic; M. Panic; Aleksandra Pizurica; Vladimir S. Crnojevic; Wilfried Philips
With the rapid increase in the number of vehicles on the roads it is necessary to maintain close monitoring of traffic. For this purpose many surveillance cameras are placed along roads and at crossroads, creating a huge communication load between the cameras and the monitoring center. Therefore, the data need to be processed on site and transferred to the monitoring centers in the form of metadata or as a set of selected images. This requires detecting events of interest already on the camera side, which implies using smart cameras as visual sensors. In this paper we propose a method for tracking vehicles and analysing their trajectories to detect different traffic events. Kalman filtering is used for tracking, combining foreground and optical flow measurements. The obtained vehicle trajectories are used to detect different traffic events. Every new trajectory is compared with a collection of normal routes and clustered accordingly. If the observed trajectory differs from all normal routes by more than a predefined threshold, it is marked as abnormal and an alarm is raised. The system was developed and tested on a Texas Instruments OMAP platform. Testing was done at four different locations: two in the city and two on the open road.
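The trajectory-to-route comparison could be sketched as below, with a simple closest-point distance and a hard threshold. The distance measure, the threshold and the example values are illustrative; the paper's clustering and comparison may differ.

```python
# Comparing a new trajectory against a collection of normal routes (sketch).
import numpy as np

def route_distance(traj, route):
    """Mean distance from each trajectory point to its closest route point,
    a simple proxy for trajectory-to-route similarity."""
    d = np.linalg.norm(traj[:, None, :] - route[None, :, :], axis=2)
    return d.min(axis=1).mean()

def classify_trajectory(traj, normal_routes, threshold):
    """Assign the trajectory to the nearest normal route, or flag it as abnormal."""
    dists = [route_distance(traj, r) for r in normal_routes]
    best = int(np.argmin(dists))
    if dists[best] > threshold:
        return None, True            # abnormal: an alarm would be raised
    return best, False               # index of the matched normal route

# Example: a trajectory far from the only normal route is flagged as abnormal.
route_a = np.array([[0, 0], [5, 0], [10, 0]], dtype=float)
traj = np.array([[0, 6], [5, 7], [10, 6]], dtype=float)
print(classify_trajectory(traj, [route_a], threshold=2.0))   # -> (None, True)
```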