
Publications


Featured research published by Chris Hermans.


canadian conference on computer and robot vision | 2007

Screen-Camera Calibration using a Spherical Mirror

Yannick Francken; Chris Hermans; Philippe Bekaert

Developments in the consumer market have indicated that the average user of a personal computer is likely to also own a webcam. With the emergence of this new user group will come a new set of applications, which will require a user-friendly way to calibrate the position of the camera with respect to the location of the screen. This paper presents a fully automatic method to calibrate a screen-camera setup, using a single moving spherical mirror. Unlike other methods, our algorithm needs no user intervention other than moving around a spherical mirror. In addition, if the user provides the algorithm with the exact radius of the sphere in millimeters, the scale of the computed solution is uniquely defined.


computer vision and pattern recognition | 2009

Depth from sliding projections

Chris Hermans; Yannick Francken; Tom Cuypers; Philippe Bekaert

In this paper we present a novel method for 3D structure acquisition, based on structured light. Unlike classical structured light methods, in which a static projector illuminates a scene with time-varying illumination patterns, our technique makes use of a moving projector emitting a static striped illumination pattern. This projector is translated at a constant velocity, in the direction of the projector's horizontal axis. Illuminating the object in this manner allows us to perform a per-pixel analysis, in which we decompose the recorded illumination sequence into a corresponding set of frequency components. The dominant frequency in this set can be directly converted into a corresponding depth value. This per-pixel analysis allows us to preserve sharp edges in the depth image. Unlike classical structured light methods, the quality of our results is not limited by projector or camera resolution, but is solely dependent on the temporal sampling density of the captured image sequence. Additional benefits include a significant robustness against common problems encountered with structured light methods, such as occlusions, specular reflections, subsurface scattering, interreflections, and to a certain extent projector defocus.
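The per-pixel frequency analysis described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic test signal and the `freq_to_depth` scale factor are assumptions for demonstration, while the actual frequency-to-depth conversion depends on the setup geometry.

```python
import numpy as np

def dominant_frequency_depth(intensity_sequence, fps, freq_to_depth):
    """Per-pixel analysis: FFT the recorded illumination sequence at one
    pixel and convert the dominant (non-DC) frequency into a depth value.
    `freq_to_depth` is an illustrative linear scale factor."""
    spectrum = np.abs(np.fft.rfft(intensity_sequence))
    spectrum[0] = 0.0                      # ignore the DC component
    freqs = np.fft.rfftfreq(len(intensity_sequence), d=1.0 / fps)
    dominant = freqs[np.argmax(spectrum)]  # dominant frequency in Hz
    return freq_to_depth * dominant

# Synthetic example: a pixel whose stripe signal oscillates at 5 Hz.
fps = 60
t = np.arange(120) / fps
signal = 0.5 + 0.4 * np.cos(2 * np.pi * 5.0 * t)
depth = dominant_frequency_depth(signal, fps, freq_to_depth=2.0)
print(depth)  # 5 Hz * 2.0 = 10.0
```

Because each pixel is analyzed independently, depth discontinuities between neighboring pixels are preserved rather than smoothed, which is the edge-preservation property the abstract highlights.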


Colloids and Surfaces B: Biointerfaces | 2014

Melittin disruption of raft and non-raft-forming biomimetic membranes: A study by quartz crystal microbalance with dissipation monitoring

Patricia Losada-Pérez; Mehran Khorshid; Chris Hermans; T. Robijns; Marloes Peeters; K. L. Jiménez-Monroy; L. T. N. Truong; Patrick Wagner

In this work we examine the role of lateral phase separation in cholesterol-containing biomimetic membranes on the disrupting action of melittin using a label-free surface-sensitive technique, quartz crystal microbalance with dissipation monitoring (QCM-D). Melittin disruption mechanisms depend strongly on the geometry of the lipid layer; however, despite the interplay between layer geometry/thickness and melittin activity, results indicate that the presence of lipid heterogeneity and lateral phase separation greatly influences the disrupting efficiency of melittin. In homogeneous non-raft-forming membranes with high cholesterol content, melittin's spontaneous activity is strongly delayed compared to heterogeneous raft-forming systems with the same amount of cholesterol. These results confirm the importance of lateral phase separation as a determining factor in peptide activity. The information provided can be used for the design of more efficient antimicrobial peptides, and it demonstrates the possibility of using a label-free approach to study tailored membranes and their interactions with other types of peptides, such as amyloid peptides.


asian conference on computer vision | 2010

Image and video decolorization by fusion

Codruta Orniana Ancuti; Cosmin Ancuti; Chris Hermans; Philippe Bekaert

In this paper we present a novel decolorization strategy, based on image fusion principles. We show that by defining proper inputs and weight maps, our fusion-based strategy can yield accurate decolorized images, in which the original discriminability and appearance of the color images are well preserved. Aside from the independent R, G, B channels, we also employ an additional input channel that conserves color contrast, based on the Helmholtz-Kohlrausch effect. We use three different weight maps in order to control saliency, exposure and saturation. In order to prevent potential artifacts that could be introduced by applying the weight maps in a per-pixel fashion, our algorithm is designed as a multi-scale approach. The potential of the new operator has been tested on a large dataset of both natural and synthetic images. We demonstrate the effectiveness of our technique, based on an extensive evaluation against the state-of-the-art grayscale methods, and its ability to decolorize videos in a consistent manner.
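The fusion idea can be sketched in a single-scale form: blend the input channels with normalized per-pixel weight maps. This is a simplified stand-in, not the paper's method; the paper uses saliency, exposure and saturation weights plus an extra Helmholtz-Kohlrausch channel and a multi-scale blend, whereas the sketch below uses only one exposure-style weight per channel.

```python
import numpy as np

def decolorize_fusion(rgb):
    """Single-scale sketch of fusion-based decolorization: combine the
    R, G, B inputs with normalized per-pixel weight maps. The Gaussian
    'well-exposedness' weight below is an illustrative assumption."""
    inputs = [rgb[..., c] for c in range(3)]
    # Exposure-style weight: favor mid-range (well-exposed) pixel values.
    weights = [np.exp(-((ch - 0.5) ** 2) / (2 * 0.25 ** 2)) for ch in inputs]
    total = np.sum(weights, axis=0) + 1e-12   # normalize weights per pixel
    gray = sum(w * ch for w, ch in zip(weights, inputs)) / total
    return np.clip(gray, 0.0, 1.0)

img = np.random.default_rng(0).random((4, 4, 3))
gray = decolorize_fusion(img)
print(gray.shape)  # (4, 4)
```

Applying such weights naively per pixel can create halo artifacts at strong edges, which is precisely why the paper moves to a multi-scale (pyramid) blend.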


Computer Graphics Forum | 2008

Augmented Panoramic Video

Chris Hermans; Cedric Vanaken; Tom Mertens; F. Van Reeth; Philippe Bekaert

Many video sequences consist of a locally dynamic background containing moving foreground subjects. In this paper we propose a novel way of re-displaying these sequences, by giving the user control over a virtual camera frame. Based on video mosaicing, we first compute a static high quality background panorama. After segmenting and removing the foreground subjects from the original video, the remaining elements are merged into a dynamic background panorama, which seamlessly extends the original video footage. We then re-display this augmented video by warping and cropping the panorama. The virtual camera can have an enlarged field-of-view and a controlled camera motion. Our technique is able to process videos with complex camera motions, reconstructing high quality panoramas without parallax artefacts, visible seams or blurring, while retaining repetitive dynamic elements.


canadian conference on computer and robot vision | 2008

Fast Normal Map Acquisition Using an LCD Screen Emitting Gradient Patterns

Yannick Francken; Chris Hermans; Tom Cuypers; Philippe Bekaert

We propose an efficient technique for normal map acquisition, using a cheap and easy-to-build setup. Our setup consists solely of off-the-shelf components, such as an LCD screen, a digital camera and a linear polarizer filter. The LCD screen is employed as a linearly polarized light source emitting gradient patterns, whereas the digital camera is used to capture the incident illumination reflected off the scanned object's surface. Also, by exploiting the fact that light emitted by an LCD screen has the property of being linearly polarized, we use the filter to suppress any specular highlights. Based on the observed Lambertian reflection of only four different light patterns, we are able to obtain a detailed normal map of the scanned surface. Overall, our technique produces convincing results, even on weakly specular materials.
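The four-pattern idea can be illustrated with a toy reconstruction: take images under a horizontal gradient, its inverse, a vertical gradient and its inverse, and estimate a normal from ratio-of-difference responses. This is a simplified stand-in for intuition only, not the paper's actual derivation, and the formulas below are illustrative assumptions.

```python
import numpy as np

def normals_from_gradients(i_x, i_x_inv, i_y, i_y_inv):
    """Illustrative sketch: estimate per-pixel normals from four Lambertian
    images captured under an X gradient, its inverse, a Y gradient and its
    inverse. The ratio of differences cancels albedo; the exact mapping to
    normals in the paper depends on the screen/camera geometry."""
    nx = (i_x - i_x_inv) / (i_x + i_x_inv + 1e-12)
    ny = (i_y - i_y_inv) / (i_y + i_y_inv + 1e-12)
    nz = np.sqrt(np.clip(1.0 - nx**2 - ny**2, 0.0, 1.0))
    n = np.stack([nx, ny, nz], axis=-1)
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + 1e-12)

# Toy example: equal response to every pattern -> normal facing the screen.
flat = np.ones((2, 2))
n = normals_from_gradients(flat, flat, flat, flat)
print(n[0, 0])  # [0. 0. 1.]
```

Dividing the difference by the sum makes the estimate independent of per-pixel albedo, which is why so few patterns suffice for a Lambertian surface.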


canadian conference on computer and robot vision | 2007

Extrinsic Recalibration in Camera Networks

Chris Hermans; Maarten Dumont; Philippe Bekaert

This work addresses the practical problem of keeping a camera network calibrated during a recording session. When dealing with real-time applications, a robust calibration of the camera network needs to be assured, without the burden of a full system recalibration at every (un)intended camera displacement. In this paper we present an efficient algorithm to detect when the extrinsic parameters of a camera are no longer valid, and reintegrate the displaced camera into the previously calibrated camera network. When the intrinsic parameters of the cameras are known, the algorithm can also be used to build ad-hoc distributed camera networks, starting from three calibrated cameras. Recalibration is done using pairs of essential matrices, based on image point correspondences. Unlike other approaches, we do not explicitly compute any 3D structure for our calibration purposes.
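The building block the abstract relies on, estimating an essential matrix from image point correspondences, can be sketched with the standard linear (8-point) method. This is a generic sketch of that textbook step, not the paper's full recalibration pipeline; the synthetic camera motion below is an illustrative assumption.

```python
import numpy as np

def essential_from_correspondences(x1, x2):
    """Linear (8-point) estimate of the essential matrix E from N >= 8
    normalized image correspondences satisfying x2^T E x1 = 0, followed
    by projection onto the essential manifold (two equal singular
    values, one zero)."""
    n = x1.shape[0]
    a = np.empty((n, 9))
    for i in range(n):
        u1, v1 = x1[i]
        u2, v2 = x2[i]
        a[i] = [u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
    _, _, vt = np.linalg.svd(a)
    e = vt[-1].reshape(3, 3)          # null-space vector, row-major E
    u, s, vt = np.linalg.svd(e)
    sigma = (s[0] + s[1]) / 2.0       # enforce essential-matrix structure
    return u @ np.diag([sigma, sigma, 0.0]) @ vt

# Synthetic check: points seen by an identity camera and a moved camera.
rng = np.random.default_rng(1)
pts = rng.uniform([-1, -1, 4], [1, 1, 8], size=(12, 3))
angle = 0.1
r = np.array([[np.cos(angle), 0, np.sin(angle)],
              [0, 1, 0],
              [-np.sin(angle), 0, np.cos(angle)]])
t = np.array([1.0, 0.2, 0.1])
cam2 = pts @ r.T + t
x1 = pts[:, :2] / pts[:, 2:3]
x2 = cam2[:, :2] / cam2[:, 2:3]
e = essential_from_correspondences(x1, x2)
x1h = np.hstack([x1, np.ones((12, 1))])
x2h = np.hstack([x2, np.ones((12, 1))])
residual = np.abs(np.sum((x2h @ e) * x1h, axis=1)).max()
print(residual < 1e-6)  # True: the epipolar constraint holds
```

Because E encodes relative pose only up to scale, a displaced camera re-integrated this way inherits its scale from the still-calibrated part of the network, which matches the abstract's requirement of three calibrated cameras as a starting point.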


international conference on computer graphics and interactive techniques | 2010

Layer-based single image dehazing by per-pixel haze detection

Codruta Orniana Ancuti; Cosmin Ancuti; Chris Hermans; Philippe Bekaert

In outdoor environments, light reflected from object surfaces is commonly scattered due to aerosol impurities, or the presence of atmospheric phenomena such as fog and haze. Aside from scattering, absorption is another important factor that attenuates the reflected light of distant objects before it reaches the camera lens. As a result, images taken in bad weather conditions (or, similarly, underwater and aerial photographs) are characterized by poor contrast, lower saturation and additional noise.
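The degradation described above is conventionally modeled in single-image dehazing work by the atmospheric scattering equation I = J·t + A·(1 − t), with transmission t = exp(−β·d). The sketch below applies this standard forward model; the airlight and attenuation constants are illustrative assumptions, and this is background context rather than the paper's layer-based method.

```python
import numpy as np

def add_haze(radiance, depth, airlight=0.9, beta=0.8):
    """Standard haze formation model: I = J * t + A * (1 - t), where the
    transmission t = exp(-beta * depth) decays with scene depth. The
    constants here are illustrative, not values from the paper."""
    t = np.exp(-beta * depth)[..., None]      # per-pixel transmission
    return radiance * t + airlight * (1.0 - t)

scene = np.full((2, 2, 3), 0.2)               # dark scene radiance J
depth = np.array([[1.0, 2.0], [4.0, 8.0]])    # farther -> more haze
hazy = add_haze(scene, depth)
print(hazy[1, 1])  # distant pixel is pulled toward the airlight (~0.9)
```

Dehazing methods invert this model: estimating the airlight A and transmission t per pixel recovers the haze-free radiance J, which is what restores the lost contrast and saturation.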


international symposium on visual computing | 2009

Depth from Encoded Sliding Projections

Chris Hermans; Yannick Francken; Tom Cuypers; Philippe Bekaert

We present a novel method for 3D shape acquisition, based on mobile structured light. Unlike classical structured light methods, in which a static projector illuminates the scene with dynamic illumination patterns, mobile structured light employs a moving projector translated at a constant velocity in the direction of the projector's horizontal axis, emitting static or dynamic illumination. For our approach, a time-multiplexed mix of two signals is used: (1) a wave pattern, enabling the recovery of point-projector distances for each point observed by the camera, and (2) a 2D De Bruijn pattern, used to uniquely encode a sparse subset of projector pixels. Based on this information, retrieved on a per-camera-pixel basis, we are able to estimate a sparse reconstruction of the scene. As this sparse set of 2D-3D camera-scene correspondences is sufficient to recover the camera location and orientation within the scene, we are able to convert the dense set of point-projector distances into a dense set of camera depths, effectively providing us with a dense reconstruction of the observed scene. We have verified our technique using both synthetic and real-world data. Our experiments display the same level of robustness as previous mobile structured light methods, combined with the ability to accurately estimate dense scene structure and accurate camera/projector motion without the need for prior calibration.
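The uniqueness property that makes a De Bruijn pattern suitable for encoding projector pixels is that every fixed-length window over the sequence occurs exactly once. The 1D building block can be generated with the standard Lyndon-word construction below (the paper uses a 2D De Bruijn pattern; this sketch shows only the underlying 1D sequence):

```python
def de_bruijn(k, n):
    """Generate a De Bruijn sequence B(k, n): a cyclic sequence over a
    k-symbol alphabet in which every length-n word occurs exactly once,
    so a short local window uniquely identifies its position. Standard
    Lyndon-word (necklace) construction."""
    sequence, a = [], [0] * k * n
    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return sequence

seq = de_bruijn(2, 3)
print(seq)  # [0, 0, 0, 1, 0, 1, 1, 1] -- every 3-bit window is unique
```

Observing any n consecutive code symbols therefore pins down which projector stripes are being seen, which is what turns the per-pixel distance measurements into absolute 2D-3D correspondences.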


asian conference on computer vision | 2010

A fast semi-inverse approach to detect and remove the haze from a single image

Codruta Orniana Ancuti; Cosmin Ancuti; Chris Hermans; Philippe Bekaert

Collaboration


Dive into Chris Hermans's collaborations.

Top Co-Authors
