Sebastian Gruenwedel
Ghent University
Publications
Featured research published by Sebastian Gruenwedel.
Advanced Concepts for Intelligent Vision Systems | 2011
Sebastian Gruenwedel; Peter Van Hese; Wilfried Philips
Foreground segmentation is an essential task in many image processing applications and a commonly used approach to obtain foreground objects from the background. Many techniques exist, but the segmentation of foreground objects from the background remains challenging due to shadows and changes in illumination. In this paper, we present a powerful framework for the detection of moving objects in real-time video processing applications under varying lighting conditions. The novel approach is based on a combination of edge detection and recursive smoothing techniques. We use edge dependencies as statistical features of foreground and background regions and define the foreground as regions containing moving edges. The background is described by short- and long-term estimates. Experiments demonstrate the robustness of our method in the presence of lighting changes, compared with other widely used background subtraction techniques.
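The short- and long-term background estimates described in the abstract can be sketched as two recursive (exponentially smoothed) edge statistics; regions where they disagree are treated as moving edges. This is a minimal illustration only: the learning rates, threshold, and gradient-magnitude edge detector below are assumptions, not the paper's parameters.

```python
import numpy as np

def edge_map(frame):
    """Gradient-magnitude edge map (a simple stand-in for the paper's edge detector)."""
    gy, gx = np.gradient(frame.astype(float))
    return np.hypot(gx, gy)

class EdgeBackgroundModel:
    """Short- and long-term recursive estimates of background edge statistics."""
    def __init__(self, shape, alpha_short=0.5, alpha_long=0.05, thresh=10.0):
        self.short = np.zeros(shape)   # fast-adapting edge estimate
        self.long = np.zeros(shape)    # slow-adapting background estimate
        self.alpha_short = alpha_short
        self.alpha_long = alpha_long
        self.thresh = thresh

    def update(self, frame):
        e = edge_map(frame)
        # Recursive (exponential) smoothing of the edge statistics.
        self.short = (1 - self.alpha_short) * self.short + self.alpha_short * e
        self.long = (1 - self.alpha_long) * self.long + self.alpha_long * e
        # Foreground = regions whose short-term edges deviate from the
        # long-term background estimate, i.e. moving edges.
        return np.abs(self.short - self.long) > self.thresh
```

Because only edge statistics are compared, slow global illumination drift is absorbed by the long-term estimate rather than flagged as foreground.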
Proceedings of SPIE | 2012
Sebastian Gruenwedel; Vedran Jelaca; Jorge Oswaldo Niño-Castañeda; Peter Van Hese; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips
Real-time tracking of people has many applications in computer vision and typically requires multiple cameras; examples include surveillance, domotics, elderly care and video conferencing. However, this problem is very challenging because of the need to deal with frequent occlusions and environmental changes. Another challenge is to develop solutions which scale well with the size of the camera network. Such solutions need to carefully restrict overall communication in the network and often involve distributed processing. In this paper we present a distributed person tracker, addressing the aforementioned issues. Real-time processing is achieved by distributing tasks between the cameras and a fusion node. The latter fuses only high-level data based on low-bandwidth input streams from the cameras. This is achieved by performing tracking first on the image plane of each camera and then sending only metadata to a local fusion node. We designed the proposed system for a low communication load and for robustness. We evaluate the performance of the tracker in meeting scenarios where persons are often occluded by other persons and/or furniture. We present experimental results which show that our tracking approach is accurate even in cases of severe occlusions in some of the views.
International Conference on Distributed Smart Cameras | 2011
Sebastian Gruenwedel; Vedran Jelaca; Peter Van Hese; Richard P. Kleihorst; Wilfried Philips
An occupancy map provides an abstract top view of a scene and can be used for many applications such as domotics, surveillance, elderly care and video teleconferencing. Such maps can be accurately estimated from multiple camera views. However, using a network of regular high-resolution cameras makes the system expensive and quickly raises privacy concerns (e.g. in elderly homes). Furthermore, their power consumption makes battery operation difficult. A solution could be the use of a network of low-resolution visual sensors, but their limited resolution could degrade the accuracy of the maps. In this paper we used simulations to determine the minimum resolution needed to derive accurate occupancy maps, which were then used to track people. Multi-view occupancy maps were computed from foreground silhouettes derived via an analysis of moving edges. Ground occupancies computed from each view were fused in a Dempster-Shafer framework. Tracking was done via a Bayes filter using the occupancy map per time instance as measurement. We found that for a room of 8.8 by 9.2 m, 4 cameras with a resolution as low as 64 by 48 pixels were sufficient to estimate accurate occupancy maps and track up to 4 people. These findings indicate that it is possible to use low-resolution visual sensors to build a cheap, power-efficient and privacy-friendly system for occupancy monitoring.
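The Dempster-Shafer fusion of per-view ground occupancies mentioned in the abstract can be sketched for the two-hypothesis frame {occupied, free}, where each camera assigns mass to "occupied", "free", and "unknown" for a ground-plane cell. The mass values below are illustrative, not derived from real silhouettes.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for the frame {occupied, free}.

    Each mass function is a tuple (m_occ, m_free, m_unknown) summing to 1.
    """
    o1, f1, u1 = m1
    o2, f2, u2 = m2
    conflict = o1 * f2 + f1 * o2          # mass assigned to the empty set
    k = 1.0 - conflict                    # normalisation constant
    occ = (o1 * o2 + o1 * u2 + u1 * o2) / k
    free = (f1 * f2 + f1 * u2 + u1 * f2) / k
    unknown = (u1 * u2) / k
    return occ, free, unknown

# Fuse occupancy evidence for one ground-plane cell from three cameras.
views = [(0.6, 0.1, 0.3), (0.5, 0.2, 0.3), (0.7, 0.1, 0.2)]
fused = views[0]
for m in views[1:]:
    fused = dempster_combine(fused, m)
```

Agreement between views reinforces the "occupied" mass while residual uncertainty ("unknown") shrinks, which is what makes the fused map usable as a measurement for the Bayes filter.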
ACM Transactions on Sensor Networks | 2014
Sebastian Gruenwedel; Vedran Jelaca; Jorge Oswaldo Niño-Castañeda; Peter Van Hese; Dimitri Van Cauwelaert; Dirk Van Haerenborgh; Peter Veelaert; Wilfried Philips
Real-time tracking of people has many applications in computer vision, especially in the domain of surveillance. Typically, a network of cameras is used to solve this task. However, real-time tracking remains challenging due to frequent occlusions and environmental changes. Moreover, multicamera applications often require a trade-off between accuracy and communication load within a camera network. In this article, we present a real-time distributed multicamera tracking system for the analysis of people in a meeting room. One contribution of the article is that we provide a scalable solution using smart cameras. The system is scalable because it requires a very small communication bandwidth and only light-weight processing on a "fusion center" which produces final tracking results. The fusion center can thus be cheap and can be duplicated to increase reliability. In the proposed decentralized system, all low-level video processing is performed on smart cameras. The smart cameras transmit a compact high-level description of moving people to the fusion center, which fuses this data using a Bayesian approach. A second contribution of our system is that the camera-based processing takes into account feedback from the fusion center about the most recent locations and motion states of tracked people. Based on this feedback and background subtraction results, the smart cameras generate a best hypothesis for each person. We evaluate the performance (in terms of precision and accuracy) of the tracker in indoor and meeting scenarios where individuals are often occluded by other people and/or furniture. Experimental results are presented based on the tracking of up to 4 people in a meeting room of 9 m by 5 m using 6 cameras. In about two hours of data, our method has only 0.3 losses per minute and can typically measure the position with an accuracy of 21 cm. We compare our approach to state-of-the-art methods and show that our system performs at least as well as other methods. However, our system runs in real time and therefore produces instantaneous results.
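The fuse-and-feed-back loop described above can be sketched as a fusion center that combines per-camera hypotheses (position plus confidence), blends them with a simple motion prediction, and returns the result to the cameras as a prior. This is a minimal illustration; the blending weights and constant-velocity prediction are assumptions, not the article's Bayesian model.

```python
class FusionCenter:
    """Minimal sketch: each smart camera sends a compact hypothesis
    (x, y, confidence); the fusion center combines them with a
    constant-velocity prediction and feeds the estimate back."""
    def __init__(self):
        self.pos = None            # (x, y) estimate on the ground plane
        self.vel = (0.0, 0.0)

    def predict(self):
        if self.pos is None:
            return None
        return (self.pos[0] + self.vel[0], self.pos[1] + self.vel[1])

    def update(self, hypotheses):
        """hypotheses: list of (x, y, confidence) tuples from the cameras."""
        w = sum(c for _, _, c in hypotheses)
        # Confidence-weighted fusion of the low-bandwidth camera metadata.
        meas = (sum(x * c for x, _, c in hypotheses) / w,
                sum(y * c for _, y, c in hypotheses) / w)
        pred = self.predict()
        if pred is None:
            self.pos = meas
        else:
            # Blend prediction and fused measurement; update velocity.
            new = (0.3 * pred[0] + 0.7 * meas[0],
                   0.3 * pred[1] + 0.7 * meas[1])
            self.vel = (new[0] - self.pos[0], new[1] - self.pos[1])
            self.pos = new
        return self.pos  # fed back to the cameras as prior information
```

Because only a handful of numbers per person travel over the network, the communication load stays nearly constant as cameras are added, which is the scalability argument made in the article.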
Proceedings of SPIE | 2014
Junzhi Guan; Peter Van Hese; Jorge Oswaldo Niño-Castañeda; Nyan Bo Bo; Sebastian Gruenwedel; Dirk Van Haerenborgh; Dimitri Van Cauwelaert; Peter Veelaert; Wilfried Philips
In this paper, we propose a people tracking system composed of multiple calibrated smart cameras and one fusion server which fuses the information from all cameras. Each smart camera estimates the ground plane positions of people based on the current frame and feedback from the server from the previous time step. Correlation-coefficient-based template matching, which is invariant to illumination changes, is proposed to estimate the position of people in each smart camera. Only the estimated position and the corresponding correlation coefficient are sent to the server. This minimal amount of information exchange makes the system highly scalable with the number of cameras. The paper focuses on creating and updating a good template for the tracked person using feedback from the server. Additionally, a static background image of the empty room is used to improve the results of template matching. We evaluated the performance of the tracker in scenarios where persons are often occluded by other persons or furniture, and illumination changes occur frequently, e.g. due to switching the light on or off. For two sequences (one minute each, one with a table in the room and one without) with frequent illumination changes, the proposed tracker never loses track of the persons. We compare the performance of our tracking system to a state-of-the-art tracking system. Our approach outperforms it in terms of tracking accuracy and people loss.
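The correlation-coefficient matching that gives the tracker its illumination invariance can be sketched as zero-mean normalised correlation: subtracting the means (and normalising by the variances) cancels additive and multiplicative brightness changes. The exhaustive search below is an illustrative simplification; the paper restricts the search using the server's feedback.

```python
import numpy as np

def corr_coeff(template, patch):
    """Pearson correlation coefficient between a template and an image
    patch; the zero-mean normalisation makes the score invariant to
    additive and multiplicative illumination changes."""
    t = template - template.mean()
    p = patch - patch.mean()
    denom = np.sqrt((t * t).sum() * (p * p).sum())
    return float((t * p).sum() / denom) if denom > 0 else 0.0

def match(template, image):
    """Find the patch position with the highest correlation coefficient."""
    th, tw = template.shape
    H, W = image.shape
    best, best_pos = -1.0, (0, 0)
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            s = corr_coeff(template, image[y:y + th, x:x + tw])
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos, best
```

Since the score at the true position is unchanged when the whole image is rescaled and offset in brightness, switching a light on or off does not move the best match.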
Proceedings of SPIE | 2014
Nyan Bo Bo; Peter Van Hese; Junzhi Guan; Sebastian Gruenwedel; Jorge Oswaldo Niño-Castañeda; Dimitri Van Cauwelaert; Dirk Van Haerenborgh; Peter Veelaert; Wilfried Philips
Many computer vision based applications require reliable tracking of multiple people under unpredictable lighting conditions. Many existing trackers do not handle illumination changes well, especially sudden changes in illumination. This paper presents a system to track multiple people reliably even under rapid illumination changes using a network of calibrated smart cameras with overlapping views. Each smart camera extracts foreground features by detecting texture changes between the current image and a static background image. The foreground features belonging to each person are tracked locally on each camera, but these local estimates are sent to a fusion center which combines them to generate more accurate estimates. The final estimates are fed back to all smart cameras, which use them as prior information for tracking in the next frame. The texture-based approach makes our method very robust to illumination changes. We tested the performance of our system on six video sequences, some containing sudden illumination changes and up to four walking persons. The results show that our tracker can track multiple people accurately, with an average tracking error as low as 8 cm, even when the illumination varies rapidly. A performance comparison with a state-of-the-art tracking system shows that our method outperforms it.
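The texture-change foreground extraction described above can be sketched as a block-wise comparison of local structure between the current frame and the static background image: correlating zero-mean blocks tests whether the *texture* changed rather than the raw intensity, so a global lighting change leaves the score high. The block size and threshold are illustrative assumptions, not the paper's values.

```python
import numpy as np

def texture_change(frame, background, block=8, thresh=0.7):
    """Flag blocks whose local texture no longer correlates with the
    static background image (illumination-robust foreground test)."""
    H, W = frame.shape
    mask = np.zeros((H // block, W // block), dtype=bool)
    for by in range(H // block):
        for bx in range(W // block):
            f = frame[by*block:(by+1)*block, bx*block:(bx+1)*block]
            b = background[by*block:(by+1)*block, bx*block:(bx+1)*block]
            f = f - f.mean()
            b = b - b.mean()
            denom = np.sqrt((f * f).sum() * (b * b).sum())
            # Flat blocks carry no texture evidence; treat them as unchanged.
            corr = (f * b).sum() / denom if denom > 0 else 1.0
            mask[by, bx] = corr < thresh   # low correlation = texture changed
    return mask
```

A frame that is simply brighter or darker than the background produces correlations near 1 everywhere, while a person occluding the background texture drives the correlation in those blocks toward zero.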
International Conference on Distributed Smart Cameras | 2011
Aljosha Demeulemeester; Charles-Frederik Hollemeersch; Peter Lambert; Rik Van de Walle; Vedran Jelaca; Sebastian Gruenwedel; Jorge Niño; Dimitri Van Cauwelaert; Peter Veelaert; Peter Van Hese; Wilfried Philips
This demo paper introduces a flexible 3D visualization framework that can visualize an abstract representation of real-world events such as human movement and human interaction in an immersive way by rendering animated avatars. In the presented demo, events are detected and sent to the visualization by a multi-camera room occupancy monitoring system that uses video analysis to track people in a room. Extracting high-level information about a scene and visualizing the relevant events in a 3D virtual environment can enable future immersive communication systems.
Electronics Letters | 2013
Sebastian Gruenwedel; Nemanja Petrovic; Ljubomir Jovanov; Jorge Oswaldo Niño-Castañeda; Aleksandra Pizurica; Wilfried Philips
2012 Sixth International Conference on Distributed Smart Cameras (ICDSC) | 2013
Xingzhe Xie; Sebastian Gruenwedel; Vedran Jelaca; Jorge Oswaldo Niño Castañeda; Dirk Van Haerenborgh; Dimitri Van Cauwelaert; Peter Van Hese; Peter Veelaert; Wilfried Philips; Hamid K. Aghajan
2012 Sixth International Conference on Distributed Smart Cameras (ICDSC) | 2013
Sebastian Gruenwedel; Xingzhe Xie; Wilfried Philips; Chih-Wei Chen; Hamid K. Aghajan