Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Ionut Schiopu is active.

Publication


Featured research published by Ionut Schiopu.


IEEE Transactions on Image Processing | 2013

Context Coding of Depth Map Images Under the Piecewise-Constant Image Model Representation

Ioan Tabus; Ionut Schiopu; Jaakko Astola

This paper introduces an efficient method for lossless compression of depth map images, using the representation of a depth image in terms of three entities: 1) the crack-edges; 2) the constant depth regions enclosed by them; and 3) the depth value over each region. The starting representation is identical with that used in a very efficient coder for palette images, the piecewise-constant image model coding, but the techniques used for coding the elements of the representation are more advanced and especially suitable for the type of redundancy present in depth images. Initially, the vertical and horizontal crack-edges separating the constant depth regions are transmitted by 2D context coding using optimally pruned context trees. Both the encoder and decoder can reconstruct the regions of constant depth from the transmitted crack-edge image. The depth value in a given region is encoded using the depth values of the neighboring regions already encoded, exploiting the natural smoothness of the depth variation, and the mutual exclusiveness of the values in neighboring regions. The encoding method is suitable for lossless compression of depth images, obtaining compression of about 10-65 times, and additionally can be used as the entropy coding stage for lossy depth compression.
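To make the crack-edge coding step more concrete, here is a minimal Python sketch of causal 2D context modeling for a binary crack-edge map. The three-neighbor context layout, the Krichevsky-Trofimov-style probability estimate, and the toy edge map are illustrative assumptions; the paper's optimally pruned context trees and the arithmetic coder that would consume these probabilities are not reproduced.

```python
import numpy as np

def crack_edge_contexts(edge_map):
    """Yield (context, bit, probability) triples for a binary crack-edge map,
    scanning in raster order and conditioning on already-coded neighbors."""
    H, W = edge_map.shape
    counts = {}                                  # context -> [count of 0s, count of 1s]
    for y in range(H):
        for x in range(W):
            # Causal neighborhood: west, north, north-west (0 outside the image).
            west  = edge_map[y, x - 1] if x > 0 else 0
            north = edge_map[y - 1, x] if y > 0 else 0
            nw    = edge_map[y - 1, x - 1] if x > 0 and y > 0 else 0
            ctx = (int(west), int(north), int(nw))
            bit = int(edge_map[y, x])
            # Adaptive (Krichevsky-Trofimov style) probability of the coded bit.
            c0, c1 = counts.get(ctx, [0, 0])
            p_bit = ((c1 if bit else c0) + 0.5) / (c0 + c1 + 1.0)
            counts[ctx] = [c0 + (bit == 0), c1 + (bit == 1)]
            yield ctx, bit, p_bit

# Ideal code length (in bits) the context model would spend on a toy edge map.
edges = (np.random.rand(16, 16) < 0.1).astype(np.uint8)
bits = sum(-np.log2(p) for _, _, p in crack_edge_contexts(edges))
print(f"ideal code length: {bits:.1f} bits for {edges.size} edge flags")
```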


IEEE Signal Processing Letters | 2013

Lossy Depth Image Compression using Greedy Rate-Distortion Slope Optimization

Ionut Schiopu; Ioan Tabus

We introduce a method to create lossy versions of an image, either by successively merging the constant regions of the original image, or by iteratively splitting the regions of a previously created lossy image using horizontal or vertical line segments. Merging and splitting decisions are taken greedily, according to the best slope towards the next point on the rate-distortion curve. For each created lossy image, the region contours and the optimal depth values can be entropy coded in three ways: with a new algorithm, or with two existing lossless coding algorithms. The obtained results compare favorably with existing lossy methods.
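As a rough illustration of the greedy slope criterion, the sketch below merges one-dimensional groups of depth samples; the fixed per-region rate model, the distortion measure (squared error to the region mean), and the sample data are assumptions, not the paper's coder.

```python
import numpy as np

RATE_PER_REGION = 16.0        # assumed rate model: bits needed to describe one region

def sse_to_mean(samples):
    samples = np.asarray(samples, dtype=float)
    return float(np.sum((samples - samples.mean()) ** 2))

def greedy_merge(regions, n_steps):
    """Merge adjacent 1-D regions; each step takes the merge with the best
    (smallest) distortion increase per bit saved, i.e. the best RD slope."""
    regions = [list(r) for r in regions]
    for _ in range(n_steps):
        if len(regions) < 2:
            break
        best_slope, best_i = None, None
        for i in range(len(regions) - 1):
            merged = regions[i] + regions[i + 1]
            d_increase = (sse_to_mean(merged)
                          - sse_to_mean(regions[i]) - sse_to_mean(regions[i + 1]))
            slope = d_increase / RATE_PER_REGION
            if best_slope is None or slope < best_slope:
                best_slope, best_i = slope, i
        regions[best_i:best_i + 2] = [regions[best_i] + regions[best_i + 1]]
    return regions

# The two nearly identical flat regions are merged first, then the two noisier ones.
print(greedy_merge([[10, 10], [11, 11], [50, 52], [90]], n_steps=2))
```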


international symposium on communications, control and signal processing | 2012

Depth image lossless compression using mixtures of local predictors inside variability constrained regions

Ionut Schiopu; Ioan Tabus

This paper studies the lossless compression of depth images, realized by first transmitting the contours of suitably chosen regions and subsequently performing predictive coding inside each region and transmitting the prediction residuals. For the large constant-depth regions, only the contour needs to be transmitted along with the depth value inside each region, while for the rest of the image we find suitable regions where the local variation of the depth level from one pixel to another is bounded from above. The nonlinear predictors used for each region combine the results of several linear predictors, each optimally fitting a subset of pixels belonging to the local neighborhood. Overall, the obtained results exceed the performance of standard image compression algorithms by a wide margin.
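The mixture-of-predictors idea can be sketched as follows; the causal neighborhood, the two predictor subsets, and the simple averaging of their outputs are assumptions made for illustration, not the paper's predictor design or its region selection.

```python
import numpy as np

def causal_neighbors(img, y, x):
    """West, north, and north-west neighbors; the edge is replicated outside."""
    w  = img[y, max(x - 1, 0)]
    n  = img[max(y - 1, 0), x]
    nw = img[max(y - 1, 0), max(x - 1, 0)]
    return np.array([w, n, nw], dtype=float)

def fit_linear_predictor(img, pixels, used):
    """Least-squares weights over the selected causal neighbors (boolean mask)."""
    A = np.array([causal_neighbors(img, y, x)[used] for y, x in pixels])
    b = np.array([img[y, x] for y, x in pixels], dtype=float)
    weights, *_ = np.linalg.lstsq(A, b, rcond=None)
    return weights

def predict(img, y, x, predictors):
    """Mixture: average the outputs of the individual linear predictors."""
    return float(np.mean([causal_neighbors(img, y, x)[used] @ w
                          for used, w in predictors]))

img = np.arange(64, dtype=float).reshape(8, 8)         # toy smooth "depth" region
pixels = [(y, x) for y in range(1, 8) for x in range(1, 8)]
masks = [np.array([True, True, False]),                 # predictor 1: W and N
         np.array([True, True, True])]                  # predictor 2: W, N and NW
predictors = [(used, fit_linear_predictor(img, pixels, used)) for used in masks]
residual = img[4, 4] - predict(img, 4, 4, predictors)
print(f"prediction residual at (4, 4): {residual:.3f}")
```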


international conference on image processing | 2014

Anchor points coding for depth map compression

Ionut Schiopu; Ioan Tabus

The paper deals with encoding the contours of given regions in an image. All contours are represented as a sequence of contour segments, each such segment being defined by an anchor (starting) point and a string of contour edges, equivalent to a string of chain-code symbols. We propose efficient ways of selecting anchor points and generating contour segments by analyzing contour crossing points and imposing rules that help minimize the number of anchor points and obtain chain-code contour sequences with skewed symbol distributions. When possible, some of the anchor points are encoded efficiently relative to the contour segments already available at the decoder. The remaining anchor points are represented as ones in a sparse binary matrix. Context tree coding is used for all entities to be encoded. The results for depth map compression are similar to (in the lossless case) or better than (in the lossy case) existing results.
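A minimal sketch of the anchor-point plus chain-code representation of a contour segment is given below; the 4-connected move alphabet and the example segment are assumptions, and the paper's anchor-point selection rules and context tree coding are not reproduced.

```python
MOVES = {"R": (0, 1), "D": (1, 0), "L": (0, -1), "U": (-1, 0)}   # 4-connected chain code

def decode_segment(anchor, chain):
    """Rebuild the contour points of one segment by replaying the chain-code
    moves from the anchor (starting) point."""
    y, x = anchor
    points = [(y, x)]
    for symbol in chain:
        dy, dx = MOVES[symbol]
        y, x = y + dy, x + dx
        points.append((y, x))
    return points

# One anchor point and a chain-code string; long runs of identical symbols give
# the skewed symbol distribution that makes the entropy coding stage efficient.
print(decode_segment(anchor=(3, 3), chain="RRRRDDLL"))
```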


international symposium on signals, circuits and systems | 2015

Lossy-to-lossless progressive coding of depth-maps

Ionut Schiopu; Ioan Tabus

A progressive coding method is proposed for depth-map images, where the bitstream is encoded so that one can generate many lossy versions of the original, encompassing a wide range, from very low resolution up to lossless reconstruction. The partitions into regions of the lossy versions are assumed to be nested, so that a higher resolution image is obtained by splitting some regions of a lower resolution image. The encoder transmits to the decoder information about which regions to split, the extra contour to be added for obtaining the shapes of the more refined regions, and the extra depth values needed inside each new region. The efficient encoding of the anchor points in the progressive scenario, relative to the contour points already encoded, and the depth information recovery, are the main contributions of this paper. The progressive bitstream produced by the proposed method scales well over the whole range of rates, from low rates to lossless, reaching a performance close to that of the non-progressive methods.
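The nested-partition idea behind the lossy-to-lossless progression can be illustrated with a toy example; the per-region mean reconstruction and the hand-made label maps are assumptions, not the paper's segmentation or bitstream layout.

```python
import numpy as np

def reconstruct(img, labels):
    """Replace every region of `labels` by the mean depth of the original image."""
    out = np.zeros_like(img)
    for r in np.unique(labels):
        mask = labels == r
        out[mask] = img[mask].mean()
    return out

img = np.array([[10, 10, 40, 40],
                [10, 10, 40, 40],
                [10, 10, 90, 90],
                [10, 10, 90, 90]], dtype=float)

level0 = np.zeros((4, 4), dtype=int)           # coarsest version: a single region
level1 = level0.copy(); level1[:, 2:] = 1      # refinement: split off the right half
level2 = level1.copy(); level2[2:, 2:] = 2     # further split, nested inside level1

for level, labels in enumerate([level0, level1, level2]):
    err = np.abs(reconstruct(img, labels) - img).max()
    print(f"level {level}: max abs error = {err:.1f}")
```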


3dtv-conference: the true vision - capture, transmission and display of 3d video | 2015

Parametrizations of planar models for region-merging based lossy depth-map compression

Ionut Schiopu; Ioan Tabus

We present efficient methods for parametrizing planar models, to be used for depth value reconstruction inside selected regions in a depth image. The optimal plane for each region is represented using its quantized heights at three pixel locations. The decoder uses the decoded quantized heights to approximately represent the optimal plane. The three pixel locations are selected so that the distortion due to the approximation of the plane over the region is minimized. The planar reconstructions are used in competition with the piecewise constant reconstruction at the regions obtained through a merging process, where the two regions to be merged are those ensuring the optimal slope in the rate-distortion curve. The lossy depth compression algorithm including the planar modeling obtains a significantly better rate-distortion performance than the algorithm that uses only constant regions, with improvements up to 8 dB.
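A sketch of the three-height plane parametrization is shown below; the fixed pixel locations, the uniform quantizer, and the toy region are assumptions, and the paper's distortion-minimizing selection of the three locations is not reproduced.

```python
import numpy as np

def fit_plane(ys, xs, z):
    """Least-squares plane z ~ a*y + b*x + c over the region's pixels."""
    A = np.column_stack([ys, xs, np.ones_like(ys)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def plane_from_three_heights(points):
    """Recover (a, b, c) from heights at three non-collinear pixels (y, x, h)."""
    A = np.array([[y, x, 1.0] for y, x, _ in points])
    h = np.array([h for _, _, h in points], dtype=float)
    return np.linalg.solve(A, h)

step = 4.0                                      # assumed quantization step for the heights
ys, xs = np.mgrid[0:8, 0:8]
ys, xs = ys.ravel().astype(float), xs.ravel().astype(float)
z = 2.0 * ys + 0.5 * xs + 30.0 + 0.2 * np.random.randn(ys.size)   # noisy planar depth

a, b, c = fit_plane(ys, xs, z)
corners = [(0.0, 0.0), (0.0, 7.0), (7.0, 0.0)]  # the three chosen pixel locations
encoded = [(y, x, step * np.round((a * y + b * x + c) / step)) for y, x in corners]
a_q, b_q, c_q = plane_from_three_heights(encoded)
recon = a_q * ys + b_q * xs + c_q               # decoder-side plane from quantized heights
print(f"max reconstruction error: {np.abs(recon - z).max():.2f}")
```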


international conference on telecommunications | 2016

Pothole detection and tracking in car video sequence

Ionut Schiopu; Jukka Saarinen; Lauri Kettunen; Ioan Tabus

In this paper, we propose a low-complexity method for the detection and tracking of potholes in video sequences taken by a camera placed inside a moving car. The region of interest for pothole detection is selected as the image area where the road is observed at the highest resolution. A threshold-based algorithm generates a set of candidate regions. For each region the following features are extracted: its size, the regularity of the intensity surface, the contrast with respect to a background model, and the region's contour length and shape. The candidate regions are labeled as putative potholes by a decision tree according to these features, eliminating the false positives due to shadows of wayside objects. The putative potholes that are successfully tracked over consecutive frames are finally declared potholes. Experimental results with real video sequences show good detection precision.
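A much-reduced sketch of the candidate-generation stage is given below, using scipy.ndimage for connected components; the thresholds, the feature set, and the decision rule are illustrative placeholders, not the tuned detector from the paper.

```python
import numpy as np
from scipy import ndimage

def pothole_candidates(gray_roi, dark_thresh=60, min_size=30, min_contrast=25):
    """Return bounding boxes of dark, sufficiently large, high-contrast blobs."""
    dark = gray_roi < dark_thresh                       # potholes appear darker than asphalt
    labels, num = ndimage.label(dark)                   # connected candidate regions
    background = gray_roi[~dark].mean() if (~dark).any() else 255.0
    boxes = []
    for idx, box in enumerate(ndimage.find_objects(labels), start=1):
        mask = labels[box] == idx
        size = int(mask.sum())
        contrast = background - gray_roi[box][mask].mean()
        if size >= min_size and contrast >= min_contrast:
            boxes.append(box)                           # a putative pothole, to be tracked
    return boxes

roi = np.full((40, 80), 120, dtype=np.uint8)            # toy road patch (region of interest)
roi[10:18, 20:35] = 40                                  # one dark, pothole-like blob
print(pothole_candidates(roi))
```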


international conference on 3d imaging | 2015

Lossy-to-lossless progressive coding of depth-map images using competing constant and planar models

Ionut Schiopu; Jukka Saarinen; Ioan Tabus

In this paper we propose an extension of our lossy-to-lossless progressive coding method by placing the planar model in competition with the piecewise-constant model during the region reconstruction stage of the algorithm. A sequence of lossy images is generated using a hierarchical segmentation of the initial image based on region merging. The progressive coding method compresses this sequence of images by encoding the elements that represent the differences between two consecutive images. The method splits some regions of the current image segmentation using an encoded set of contours, and defines a set of new regions, which are reconstructed using either the piecewise-constant model or the planar model. An efficient solution is proposed for encoding the model parameters in a progressive way. Results show an improvement of 3-4 dB compared to the baseline method based only on constant regions, and over a wide range of rates the results are close to those of the non-progressive methods.
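The per-region competition between the two models can be sketched as a Lagrangian rate-distortion comparison; the bit costs and the multiplier below are assumed constants, not the encoder's actual rate model.

```python
import numpy as np

RATE_CONSTANT, RATE_PLANAR, LAMBDA = 8.0, 24.0, 0.1     # assumed bit costs and multiplier

def choose_model(ys, xs, z):
    """Pick the model (constant vs. planar) with the lower Lagrangian RD cost."""
    d_const = float(np.sum((z - z.mean()) ** 2))
    A = np.column_stack([ys, xs, np.ones_like(ys)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    d_plane = float(np.sum((z - A @ coeffs) ** 2))
    j_const = d_const + LAMBDA * RATE_CONSTANT
    j_plane = d_plane + LAMBDA * RATE_PLANAR
    return "constant" if j_const <= j_plane else "planar"

ys, xs = np.mgrid[0:6, 0:6]
ys, xs = ys.ravel().astype(float), xs.ravel().astype(float)
print(choose_model(ys, xs, np.full(36, 50.0)))           # flat region  -> "constant"
print(choose_model(ys, xs, 3.0 * ys + xs + 10.0))        # sloped region -> "planar"
```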


european signal processing conference | 2012

Lossy and near-lossless compression of depth images using segmentation into constrained regions

Ionut Schiopu; Ioan Tabus


3dtv-conference: the true vision - capture, transmission and display of 3d video | 2017

Lossless compression of subaperture images using context modeling

Ionut Schiopu; Moncef Gabbouj; Atanas P. Gotchev; Miska Hannuksela

Collaboration


Dive into Ionut Schiopu's collaboration.

Top Co-Authors

Ioan Tabus, Tampere University of Technology
Adrian Munteanu, Vrije Universiteit Brussel
Moncef Gabbouj, Tampere University of Technology
Alexandros Iosifidis, Tampere University of Technology
Atanas P. Gotchev, Tampere University of Technology
Jaakko Astola, Tampere University of Technology
Lauri Kettunen, Tampere University of Technology