
Publication


Featured research published by Linda Tessens.


International Conference on Acoustics, Speech, and Signal Processing | 2007

Extending the Depth of Field in Microscopy Through Curvelet-Based Frequency-Adaptive Image Fusion

Linda Tessens; Alessandro Ledda; Aleksandra Pizurica; Wilfried Philips

Limited depth of field is an important problem in microscopy imaging. 3D objects are often thicker than the depth of field of the microscope, which makes it optically impossible to capture a single sharp image of them. Instead, several images, each with a different area of the object in focus, have to be fused together. In this work, we propose a curvelet-based image fusion method that is frequency-adaptive. Because of the high directional sensitivity of the curvelet transform (and, consequently, its extreme sparseness), the average performance gain of the new method over state-of-the-art methods is high.
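Transform-domain fusion methods of this kind combine the decompositions of the differently focused images coefficient by coefficient. A minimal sketch of the generic max-absolute-value selection rule (an illustrative simplification; the paper's actual method is curvelet-based and frequency-adaptive):

```python
def fuse_coefficients(coeffs_a, coeffs_b):
    """Max-abs fusion rule: in a sparse transform, the in-focus source
    produces the larger coefficient, so keep the larger of the two."""
    return [a if abs(a) >= abs(b) else b for a, b in zip(coeffs_a, coeffs_b)]

# Toy coefficient lists from two differently focused slices.
fused = fuse_coefficients([3.0, -1.0, 0.5], [-2.0, 4.0, 0.1])
# fused == [3.0, 4.0, 0.5]
```

The fused coefficients are then inverse-transformed to obtain the all-in-focus image.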


International Conference on Distributed Smart Cameras | 2008

Principal view determination for camera selection in distributed smart camera networks

Linda Tessens; Marleen Morbée; Huang Lee; Wilfried Philips; Hamid K. Aghajan

Within a camera network, the contribution of a camera to the observation of a scene depends on its viewpoint and on the scene configuration. This is a dynamic property, as the scene content is subject to change over time and the camera configuration might not be fixed, e.g. in a mobile network. In this work, we address the problem of effectively determining the principal viewpoint within a network, i.e. the view that contributes most to the desired observation of a scene. This selection is based on information from each camera's observations of persons in the scene, and only low-data-rate information has to be sent over wireless channels, since the image frames are first processed locally by each sensor node before transmission. The principal view, complemented with one or more helper views, constitutes a significantly more efficient scene representation than the totality of the available views. This is of great value for reducing the amount of image data that needs to be stored or transmitted over the network.


ACM Transactions on Sensor Networks | 2014

Camera selection for tracking in distributed smart camera networks

Linda Tessens; Marleen Morbée; Hamid K. Aghajan; Wilfried Philips

Tracking persons with multiple cameras with overlapping fields of view instead of with one camera leads to more robust decisions. However, operating multiple cameras instead of one requires more processing power and communication bandwidth, which are limited resources in practical networks. When the fields of view of different cameras overlap, not all cameras are equally needed for localizing a tracking target. When only a selected set of cameras does processing and transmits data to track the target, a substantial saving of resources is achieved. The recent introduction of smart cameras with on-board image processing and communication hardware makes such a distributed implementation of tracking feasible. We present a novel framework for selecting cameras to track people in a distributed smart camera network, based on generalized information theory. By quantifying the contribution of one or more cameras to the tracking task, the limited network resources can be allocated appropriately, so that the best possible tracking performance is achieved. With the proposed method, we dynamically assign a subset of all available cameras to each target and track it under difficult conditions of occlusion and limited fields of view with the same accuracy as when using all cameras.
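The flavor of an information-theoretic selection criterion can be sketched as follows: model the target location as a discrete distribution over grid cells and pick the camera whose observation leaves the least uncertainty. This is a hypothetical simplification for illustration, not the paper's actual formulation:

```python
import math

def entropy(p):
    # Shannon entropy (bits) of a discrete distribution.
    return -sum(x * math.log2(x) for x in p if x > 0)

def posterior(prior, likelihood):
    # Bayes update of the target-location distribution.
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def most_informative_camera(prior, likelihoods):
    # Choose the camera whose observation yields the lowest posterior
    # entropy, i.e. contributes most to localizing the target.
    return min(range(len(likelihoods)),
               key=lambda i: entropy(posterior(prior, likelihoods[i])))

prior = [0.25, 0.25, 0.25, 0.25]
likelihoods = [
    [0.9, 0.1, 0.1, 0.1],   # camera 0: clearly localizes the target
    [0.5, 0.5, 0.5, 0.5],   # camera 1: uninformative view
]
# most_informative_camera(prior, likelihoods) == 0
```

Ranking cameras by such a contribution measure is what allows the network to spend its limited resources only on the most useful views.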


Advanced Concepts for Intelligent Vision Systems | 2008

Sub-optimal Camera Selection in Practical Vision Networks through Shape Approximation

Huang Lee; Linda Tessens; Marleen Morbée; Hamid K. Aghajan; Wilfried Philips

Within a camera network, the contribution of a camera to the observations of a scene depends on its viewpoint and on the scene configuration. This is a dynamic property, as the scene content is subject to change over time. An automatic selection of a subset of cameras that significantly contributes to the desired observation of a scene can be of great value for reducing the amount of transmitted and stored image data. We propose a greedy algorithm for camera selection in practical vision networks where the selection decision has to be taken in real time. The selection criterion is based on information from each camera sensor's observations of persons in the scene, and only low-data-rate information has to be sent over wireless channels, since the image frames are first processed locally by each sensor node before transmission. Experimental results show that the performance of the proposed greedy algorithm is close to that of the optimal selection algorithm. In addition, we propose communication protocols for such camera networks, and through experiments we show that the proposed protocols improve latency and observation frequency without degrading performance.
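A greedy selection of this general shape can be sketched with a made-up coverage utility standing in for the paper's shape-approximation criterion (names and utility are hypothetical):

```python
def greedy_select(coverage, k):
    """coverage: camera id -> set of observed person ids (a hypothetical
    stand-in for the real utility). Repeatedly add the camera that covers
    the most persons not yet covered by the already selected cameras."""
    selected, covered = [], set()
    remaining = dict(coverage)
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda cam: len(remaining[cam] - covered))
        selected.append(best)
        covered |= remaining.pop(best)
    return selected

cams = {"c1": {1, 2}, "c2": {2, 3, 4}, "c3": {4}}
# greedy_select(cams, 2) == ["c2", "c1"]
```

Each step costs only one pass over the remaining cameras, which is why greedy selection is attractive when the decision has to be taken in real time.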


Multimedia Signal Processing | 2008

Optimal camera selection in vision networks for shape approximation

Marleen Morbée; Linda Tessens; Huang Lee; Wilfried Philips; Hamid K. Aghajan

Within a camera network, the contribution of a camera to the observation of a scene depends on its viewpoint and on the scene configuration. This is a dynamic property, as the scene content is subject to change over time. An automatic selection of a subset of cameras that significantly contributes to the desired observation of a scene can be of great value for reducing the amount of transmitted or stored image data. In this work, we propose low-data-rate schemes to select from a vision network a subset of cameras that provides a good frontal observation of the persons in the scene and allows for the best approximation of their 3D shape. We also investigate to what degree low data rates trade off the quality of the reconstructed 3D shapes.


International Conference on Distributed Smart Cameras | 2009

Efficient approximate foreground detection for low-resource devices

Linda Tessens; Marleen Morbée; Wilfried Philips; Richard P. Kleihorst; Hamid K. Aghajan

A broad range of very powerful foreground detection methods exists, because foreground detection is an essential step in many computer vision algorithms. However, because of memory and computational constraints, simple static background subtraction is very often the technique used in practice on a platform with limited resources such as a smart camera. In this paper, we propose to apply more powerful techniques to a reduced scan-line version of the captured image to construct an approximation of the actual foreground without overburdening the smart camera. We show that the performance of static background subtraction quickly drops outside of a controlled laboratory environment, and that this is not the case for the proposed method, because of its ability to update its background model. Furthermore, we provide a comparison with foreground detection on a subsampled version of the captured image. We show that with the proposed foreground approximation, higher true positive rates can be achieved.
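The difference between static background subtraction and a model that updates itself can be sketched on a single scan line, here with a simple exponential running average (an illustrative choice; the paper does not prescribe this exact update rule):

```python
def detect_and_update(background, scanline, alpha=0.05, thresh=25):
    # Flag pixels that deviate strongly from the background model, then
    # blend the new observation into the model so that gradual lighting
    # changes are absorbed instead of being reported as foreground.
    foreground = [abs(p - b) > thresh for p, b in zip(scanline, background)]
    updated = [(1 - alpha) * b + alpha * p for b, p in zip(background, scanline)]
    return foreground, updated

fg, bg = detect_and_update([100, 100, 100], [102, 180, 99])
# fg == [False, True, False]; the background model drifts toward the frame.
```

With `alpha = 0` this degenerates to static background subtraction, which is exactly the variant whose performance drops outside a controlled environment.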


Digital Television Conference | 2007

A Distributed Coding-Based Extension of a Mono-View to a Multi-View Video System

Marleen Morbée; Linda Tessens; Josep Prades-Nebot; Aleksandra Pizurica; Wilfried Philips

Multi-view video systems provide 3D information about the captured scene. This 3D information can be useful for many emerging applications, e.g. 3D TV or virtual reality. However, many current video systems consist of only one camera and consequently do not capture the 3D content of a scene. In this paper, we therefore present an efficient, flexible and low-complexity method for extending an existing mono video system to a 3D system. The main idea is to develop a coding framework that starts from a single camera and can be flexibly extended with low-complexity cameras to capture 3D video data. These cameras do not perform any motion or disparity estimation, but good coding efficiency is still achieved by relying on distributed video (DV) coding principles, i.e. joint decoding of the independently encoded frames of the multi-view cameras. If we compare our coding results with the results for low-complexity DV coding of a single video, higher efficiency is achieved, since not only the motion between the frames of the video but also the disparity between the different views of the camera array is exploited at the decoder.


International Conference on Image Processing | 2010

Image fusion using blur estimation

Seyfollah Soleimani; Filip Rooms; Wilfried Philips; Linda Tessens

In this paper, a new wavelet-based image fusion method is proposed. In this method, the blur levels of the edge points are estimated for every slice in the stack of images. Then, among corresponding edge points in different slices, the sharpest one is carried over to the final image and the others are discarded. The intensity of each non-edge pixel is taken from the slice of its nearest edge point. Results are promising and outperform the other tested methods in most cases.
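The core selection step, keeping the sharpest slice at each edge location, can be sketched in one dimension with a squared-Laplacian focus measure (a common choice; the paper's blur estimator may differ):

```python
def focus_measure(slice_, i):
    # Squared discrete Laplacian: large at sharp edges, small at blurred ones.
    return (slice_[i - 1] - 2 * slice_[i] + slice_[i + 1]) ** 2

def fuse_stack(stack):
    # For each interior position, keep the value from the slice that is
    # sharpest there; borders are copied from the first slice.
    n = len(stack[0])
    fused = list(stack[0])
    for i in range(1, n - 1):
        sharpest = max(stack, key=lambda s: focus_measure(s, i))
        fused[i] = sharpest[i]
    return fused

sharp = [0, 10, 0, 0]    # a crisp edge
blurry = [0, 4, 3, 0]    # the same edge, defocused
# fuse_stack([sharp, blurry]) == [0, 10, 0, 0]
```

The paper's method applies the idea only at detected edge points and fills non-edge pixels from the slice of the nearest edge, rather than maximizing a focus measure everywhere.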


International Conference on Distributed Smart Cameras | 2007

A Distributed Coding-Based Content-Aware Multi-View Video System

Marleen Morbée; Linda Tessens; Hiep Luong; Josep Prades-Nebot; Aleksandra Pizurica; Wilfried Philips

Compared to traditional mono-view systems, stereo or in general multi-view systems provide interesting additional information about a captured scene, which can significantly facilitate content extraction. This property makes them very useful for many emerging applications, such as 3D TV and video surveillance. However, the use of such systems has been limited so far by the processing time and bandwidth requirements of multi-view data. These major drawbacks can only be relieved by the development of dedicated algorithms. In this paper, we present an efficient, flexible and content-aware coding method for a multi-view video system. The framework consists of a central processor and camera, complemented by a flexible number of smart Wyner-Ziv cameras. The latter provide a content-aware representation of their viewpoint, thus greatly reducing the amount of data to be sent to the central processor. By employing distributed video (DV) coding, i.e. joint decoding of the independently encoded frames of the different cameras, we achieve good coding efficiency without inter-camera communication.


Storage and Retrieval for Image and Video Databases | 2006

Spatially adaptive image denoising based on joint image statistics in the curvelet domain

Linda Tessens; A. Pižurica; Alin Alecu; Adrian Munteanu; Wilfried Philips

In this paper, we perform a statistical analysis of curvelet coefficients, making a distinction between two classes of coefficients: those representing useful image content and those dominated by noise. By investigating the marginal statistics, we develop a mixture prior for curvelet coefficients. Through analysis of the joint intra-band statistics, we find that white Gaussian noise is transformed by the curvelet transform into noise that is correlated in one direction and decorrelated in the perpendicular direction. This enables us to develop an appropriate local spatial activity indicator for curvelets. Finally, based on our findings, we develop a novel denoising method, inspired by a recent wavelet-domain method, ProbShrink. For textured images, the new method outperforms its wavelet-based counterpart and existing curvelet-based methods. For piecewise smooth images, performance is similar to that of existing methods.
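Shrinkage denoisers of this family ultimately suppress noise-dominated transform coefficients while keeping signal-dominated ones. A minimal sketch of plain soft-thresholding (far simpler than ProbShrink's probability-based shrinkage, but it shows the principle):

```python
def soft_threshold(coeffs, t):
    # Coefficients with magnitude at most t are treated as noise and zeroed;
    # larger ones are kept, shrunk toward zero by t.
    return [0.0 if abs(c) <= t else (c - t if c > 0 else c + t)
            for c in coeffs]

denoised = soft_threshold([5.0, -0.5, -3.0, 1.0], 1.0)
# denoised == [4.0, 0.0, -2.0, 0.0]
```

ProbShrink instead weights each coefficient by the estimated probability that it contains a signal of interest, using a local spatial activity indicator; the hard part addressed in the paper is adapting that indicator to the directional noise correlation in the curvelet domain.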

Collaboration


Dive into Linda Tessens's collaborations.

Top Co-Authors

Adrian Munteanu

Vrije Universiteit Brussel


Alin Alecu

VU University Amsterdam
