Jan Hanca
Vrije Universiteit Brussel
Publications
Featured research published by Jan Hanca.
International Conference on Digital Signal Processing | 2013
Shao-Ping Lu; Jan Hanca; Adrian Munteanu; Peter Schelkens
Depth-based view synthesis can produce novel realistic images of a scene by view warping and image inpainting. This paper presents a depth-based view synthesis approach performing pixel-level image inpainting. The proposed approach provides great flexibility in pixel manipulation and prevents random effects in texture propagation. By analyzing the process that generates image holes during view warping, we first classify such areas into simple holes and disocclusion areas. Based on depth-information constraints and different strategies for random propagation, an approximate nearest-neighbor-matching-based pixel-level inpainting method is introduced to complete holes from the two classes. Experimental results demonstrate that the proposed view synthesis method effectively produces smooth textures and reasonable structure propagation. The proposed depth-based pixel-level inpainting is well suited to multi-view video and other higher-dimensional view synthesis settings.
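The hole classification and nearest-neighbor matching described above suggest the following minimal sketch. It is not the authors' algorithm, only an illustration of depth-guided filling: each hole pixel takes the value of the nearest valid pixel, with ties broken toward the larger (background) depth, mimicking the intuition that disocclusions reveal background.

```python
HOLE = None  # marker for pixels left empty by view warping

def fill_holes(texture, depth):
    """Fill HOLE pixels in `texture` from the nearest valid pixel,
    breaking distance ties toward the deepest (background) candidate."""
    h, w = len(texture), len(texture[0])
    out = [row[:] for row in texture]
    valid = [(yy, xx) for yy in range(h) for xx in range(w)
             if texture[yy][xx] is not HOLE]
    for y in range(h):
        for x in range(w):
            if out[y][x] is not HOLE:
                continue
            # nearest valid pixel by Manhattan distance; prefer larger depth
            yy, xx = min(valid,
                         key=lambda p: (abs(p[0] - y) + abs(p[1] - x),
                                        -depth[p[0]][p[1]]))
            out[y][x] = texture[yy][xx]
    return out
```

A real inpainter would propagate patches rather than single pixels, but the depth-aware tie-breaking captures the core constraint.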
SPIE Newsroom | 2014
Francis Deboeverie; Jan Hanca; Richard P. Kleihorst; Adrian Munteanu; Wilfried Philips
A low-resolution visual sensor network enables monitoring of elderly people's health and safety at home, postponing institutionalized healthcare.
International Conference on Distributed Smart Cameras | 2013
Jan Hanca; Frederik Verbist; Nikos Deligiannis; Richard Kleihorst; Adrian Munteanu
This demonstrator explores the applicability of depth estimation based on stereo video with extremely low resolution, i.e., 30×30 pixels. To handle this resolution, a disparity estimation technique, composed of local correlation-based matching of two low-resolution stereo images followed by segmentation-driven post-processing, is proposed. The demonstrator includes a setup where a stereo visual sensor is connected to a laptop computer, running the proposed depth estimation method in real-time and displaying the resulting disparity maps. In addition, an interface is available to give the user control over the proposed algorithm's parameters. To illustrate its superior performance, the results of the proposed method can be readily compared to disparity maps generated using a typical global correlation-based depth estimation algorithm.
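A toy version of the local correlation-based matching step might look as follows: plain SAD block matching on tiny grayscale images, with the segmentation-driven post-processing omitted. Parameter names are illustrative, not the demonstrator's.

```python
def disparity_map(left, right, block=3, max_disp=4):
    """For each pixel in `left`, find the horizontal shift in `right`
    minimising the sum of absolute differences over a block x block window."""
    h, w = len(left), len(left[0])
    r = block // 2
    disp = [[0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            best, best_d = float("inf"), 0
            for d in range(min(max_disp, x - r) + 1):
                sad = sum(abs(left[yy][xx] - right[yy][xx - d])
                          for yy in range(y - r, y + r + 1)
                          for xx in range(x - r, x + r + 1))
                if sad < best:
                    best, best_d = sad, d
            disp[y][x] = best_d
    return disp
```

At 30×30 pixels even this brute-force search is cheap, which is what makes real-time operation on a laptop plausible.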
Journal of Electronic Imaging | 2016
Jan Hanca; Nikos Deligiannis; Adrian Munteanu
Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.
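As a flavor of side-information generation in DVC, the crudest possible variant simply averages the two neighboring key frames; the decoder uses this guess of the Wyner-Ziv frame and corrects it with the received parity bits. This is a stand-in for the motion-compensated interpolation studied in the paper, not the paper's method.

```python
def side_information(key_prev, key_next):
    """Pixel-wise average of the two neighbouring key frames: the
    simplest possible estimate of the in-between Wyner-Ziv frame."""
    return [[(a + b) // 2 for a, b in zip(rp, rn)]
            for rp, rn in zip(key_prev, key_next)]
```

The quality of this estimate directly determines how many correction bits are needed, which is why the paper treats side-information simplification as a rate-complexity trade-off.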
International Conference on Distributed Smart Cameras | 2013
Geert Braeckman; Weiwei Chen; Jan Hanca; Nikos Deligiannis; Frederik Verbist; Adrian Munteanu
This demonstrator illustrates the performance of our dedicated intra-frame video codec for low-resolution visual sensors. The encoder of the presented compression system runs on a low-power visual sensor, which captures stereo frames of 30×30 6-bit gray-scale pixels. A laptop computer is connected using a serial connection and runs the decoder in real-time, displaying the resulting visual quality, frame-rate and compression efficiency in terms of bits per pixel. Additionally, a software environment is available to the user for further evaluation of the compression performance of the codec. The user can manipulate the control parameters of the codec at will. The objective and subjective quality of the decoded video and their evolution depending on the complexity of the scene, as well as the chosen settings of the codec, are monitored in real-time.
Proceedings of SPIE | 2013
Jan Hanca; Adrian Munteanu; Peter Schelkens
Efficient depth-map coding is of paramount importance in next-generation 3D video applications such as 3DTV and free viewpoint video. In this paper we propose a novel intra depth-map coding system employing optimized segmentation procedures suitable for depth maps, followed by lossy or lossless contour coding techniques. In lossy mode, our method performs Hausdorff-distance-constrained coding, whereby the distance between the actual and decoded contours is upper-bounded by a user-defined threshold. The trade-off between contour location accuracy and coding performance is analyzed. Experimental results show that, on average, lossy coding outperforms lossless contour coding and should be considered in all segmentation-based depth-map coding systems. The comparison against JPEG 2000 shows that the proposed system is a viable alternative for light intra coding of depth maps.
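One way to realize a Hausdorff-distance-constrained contour approximation is Douglas-Peucker simplification with an explicit error bound: every dropped contour point then stays within the bound of the coded polyline. This is an illustrative stand-in, not necessarily the paper's coder.

```python
def point_seg_dist(p, a, b):
    """Euclidean distance from point p to segment ab."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def simplify(contour, bound):
    """Douglas-Peucker with an explicit error bound: every dropped point
    stays within `bound` of the simplified polyline, so the one-sided
    Hausdorff distance between original and coded contour is <= bound."""
    if len(contour) < 3:
        return contour[:]
    dists = [point_seg_dist(p, contour[0], contour[-1]) for p in contour[1:-1]]
    i, d = max(enumerate(dists, start=1), key=lambda t: t[1])
    if d <= bound:
        return [contour[0], contour[-1]]
    left = simplify(contour[:i + 1], bound)
    right = simplify(contour[i:], bound)
    return left[:-1] + right
```

Shrinking `bound` trades bitrate for contour location accuracy, which is exactly the trade-off the paper analyzes.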
International Conference on Distributed Smart Cameras | 2015
Jan Hanca; Nikos Deligiannis; Adrian Munteanu
This demonstrator illustrates the performance of our feedback-channel-free distributed video coding system for extremely low-resolution visual sensors. The demonstrator includes a setup where a low-power sensor capturing 30×30-pixel video data is connected to a laptop PC. The video sequence is encoded, decoded and displayed on the computer screen in real-time for side-by-side comparison between the original input and the reconstructed data. A software environment allows the user to adjust all the control parameters of the video codec and to evaluate the influence of changes on the visual quality. The objective performance of the coding system can be monitored in terms of bits per pixel, decoding delays, decoding speed and decoding failures.
International Conference on Distributed Smart Cameras | 2015
Geert Braeckman; Jan Hanca; Richard P. Kleihorst; Adrian Munteanu
The applicability of wireless sensor networks is usually limited by the lifetime of sensor nodes, which have a restricted energy supply. In the case of visual sensors, the amount of collected data increases significantly, resulting in high power consumption by the transmitter. Efficient video compression algorithms reduce the amount of data that has to be sent. On the other hand, video codecs are known to be computationally demanding. It is unclear whether spending more power on compression and less on transmission increases battery lifetime, especially if processing is performed on a general-purpose reprogrammable microcontroller. This paper presents the power profile of a very low-resolution wireless visual sensor node. The node executing a predictive video coding engine is compared against the same system transmitting raw data. Both setups are examined while capturing static and dynamic video content under strictly controlled environmental conditions. Experimental results show that video compression executed on the microcontroller prior to wireless transmission reduces the power consumption of the sensor mote.
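The compress-versus-transmit trade-off boils down to simple energy bookkeeping. The numbers below are invented for illustration, not the paper's measurements: compression raises CPU power but cuts the radio's duty cycle, and battery life improves whenever the radio savings outweigh the extra computation.

```python
def battery_hours(capacity_mwh, cpu_mw, tx_mw, tx_duty):
    """Lifetime = capacity / average power, where average power is
    always-on processing plus a duty-cycled radio."""
    avg_mw = cpu_mw + tx_mw * tx_duty
    return capacity_mwh / avg_mw

# raw transmission: nearly idle CPU, radio busy sending uncompressed frames
raw = battery_hours(2000, cpu_mw=5, tx_mw=60, tx_duty=0.50)
# compressed: CPU works harder, but the radio duty cycle shrinks 10x
comp = battery_hours(2000, cpu_mw=15, tx_mw=60, tx_duty=0.05)
```

With these hypothetical figures the compressed node lasts roughly twice as long, matching the paper's qualitative conclusion.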
International Conference on 3D Imaging | 2015
Duc Minh Nguyen; Jan Hanca; Shao-Ping Lu; Adrian Munteanu
Stereo matching has been one of the most active research topics in the computer vision domain for many years, resulting in a large number of techniques proposed in the literature. Nevertheless, improper combinations of available tools cannot fully exploit the advantages of each method and may even lower the performance of the stereo matching system. Moreover, state-of-the-art techniques are usually optimized to perform well on a certain input dataset. In this paper we propose a framework for combining existing tools into a stereo matching pipeline, along with three different architectures that assemble existing processing steps into stereo matching systems that are not only accurate but also efficient and robust under different operating conditions. Thorough experiments on three well-known datasets confirm the effectiveness of the proposed systems across all input data.
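The idea of a pipeline of interchangeable processing steps can be sketched as follows: toy SAD costs, a pluggable aggregation stage, and winner-take-all selection. The names and stages are illustrative assumptions, not the paper's exact architectures.

```python
def sad_cost(left, right, max_disp):
    """Per-pixel matching cost for each candidate disparity (255 = invalid)."""
    h, w = len(left), len(left[0])
    return [[[abs(left[y][x] - right[y][x - d]) if x - d >= 0 else 255
              for d in range(max_disp + 1)]
             for x in range(w)] for y in range(h)]

def box_aggregate(cost):
    """3-wide horizontal box filter over costs, standing in for any
    aggregation step the framework could plug in."""
    h, w, n = len(cost), len(cost[0]), len(cost[0][0])
    out = [[[0] * n for _ in range(w)] for _ in range(h)]
    for y in range(h):
        for x in range(w):
            xs = [xx for xx in (x - 1, x, x + 1) if 0 <= xx < w]
            for d in range(n):
                out[y][x][d] = sum(cost[y][xx][d] for xx in xs)
    return out

def winner_take_all(cost):
    """Pick the disparity with the minimum aggregated cost per pixel."""
    return [[min(range(len(c)), key=c.__getitem__) for c in row] for row in cost]

def stereo_pipeline(left, right, max_disp, steps):
    """Chain interchangeable stages: each takes and returns a cost volume."""
    cost = sad_cost(left, right, max_disp)
    for step in steps:
        cost = step(cost)
    return winner_take_all(cost)
```

Swapping the `steps` list is the point of such a framework: the same driver can host different cost, aggregation, and refinement combinations.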
International Conference on Digital Signal Processing | 2013
Jan Hanca; Adrian Munteanu; Peter Schelkens
Emerging video technologies, such as 3DTV, increase the demand for high-efficiency coding tools. To enable the transmission of multiview video over the available transmission channels, a new data format comprising texture and the corresponding depth information has been proposed. In this paper we present a novel segmentation-based intra codec for depth maps that exploits the correlation between depth and color data. Our system performs a low-complexity color segmentation of the texture image and encodes the depth values within the segments. The same segmentation is computed at the decoder, ensuring the closed-loop property of the system. Experimental results show that the proposed system is a viable alternative to 3D-HEVC in low-complexity applications.
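A toy closed-loop version of the scheme: both encoder and decoder run the same color segmentation, so only one depth value per segment needs to be transmitted. The flood-fill segmenter and all function names are illustrative assumptions, not the paper's algorithm.

```python
def segment_by_color(texture, threshold):
    """Toy flood-fill colour segmentation; identical code runs at the
    encoder and the decoder, so no segment map is transmitted."""
    h, w = len(texture), len(texture[0])
    labels = [[-1] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if labels[y][x] != -1:
                continue
            stack, seed = [(y, x)], texture[y][x]
            while stack:
                yy, xx = stack.pop()
                if not (0 <= yy < h and 0 <= xx < w):
                    continue
                if labels[yy][xx] != -1 or abs(texture[yy][xx] - seed) > threshold:
                    continue
                labels[yy][xx] = next_label
                stack += [(yy + 1, xx), (yy - 1, xx), (yy, xx + 1), (yy, xx - 1)]
            next_label += 1
    return labels, next_label

def encode_depth(depth, labels, n):
    """Encoder side: one representative (mean) depth value per segment."""
    sums, counts = [0] * n, [0] * n
    for y, row in enumerate(labels):
        for x, l in enumerate(row):
            sums[l] += depth[y][x]
            counts[l] += 1
    return [s // c for s, c in zip(sums, counts)]

def decode_depth(codes, labels):
    """Decoder side: rebuild the depth map from the shared segmentation."""
    return [[codes[l] for l in row] for row in labels]
```

Because the decoder re-derives `labels` from the already-decoded texture, the bitstream carries only `codes`, which is what keeps the intra depth codec lightweight.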