Publications


Featured research published by Mihail Georgiev.


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2011

A fast image segmentation algorithm using color and depth map

Emanuele Mirante; Mihail Georgiev; Atanas P. Gotchev

In this paper, a real-time image segmentation algorithm is presented. It utilizes both color and depth information retrieved from a multi-sensor capture system, which combines a stereo camera pair with a time-of-flight range sensor. The algorithm targets low complexity and a fast implementation, which can be achieved through parallelization. Applications such as immersive videoconferencing and lecturer segmentation for augmented-reality lecture presentation can benefit from the designed algorithm.
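The paper itself includes no source code; the following is only a minimal sketch of the depth-thresholding idea, assuming a depth map already registered to the color image. The color-guided refinement described in the abstract is omitted, and the function name and range limits are hypothetical.

```python
import numpy as np
import cv2  # used only for fast morphological cleanup

def segment_by_depth(depth_mm, near_mm=400, far_mm=1500):
    """Rough foreground mask from a depth map: keep pixels within a plausible depth range.
    near_mm/far_mm are illustrative values, e.g. for a lecturer standing close to the camera."""
    mask = ((depth_mm > near_mm) & (depth_mm < far_mm)).astype(np.uint8) * 255
    # Morphological opening removes speckle noise; closing fills small holes in the mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask > 0
```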


International Conference on Multimedia and Expo | 2013

Joint de-noising and fusion of 2D video and depth map sequences sensed by low-powered ToF range sensor

Mihail Georgiev; Atanas P. Gotchev; Miska Hannuksela

We propose a joint de-noising and data fusion approach where the fused modalities come from a conventional high-resolution photo or video camera and a low-resolution range sensor of time-of-flight (ToF) type, operating in restricted conditions of low emitting power and a low number of sensor elements. Our approach includes identifying the various noise sources and suggesting suitable remedies at particular stages of data sensing and fusion. More specifically, fixed-pattern noise and system noise are treated at a preliminary denoising stage working on range data only. In contrast to other 2D video/depth fusion approaches, which suggest working in planar coordinates, our approach includes an additional denoising refinement in the 3D world coordinate system (i.e., point-cloud space). Furthermore, the high-resolution grid resampling is performed as an iterative non-uniform to uniform resampling based on the Richardson method. This improves the performance compared to approaches based on low-to-high grid upsampling and subsequent refinement. We report experimental results where the achieved quality of the fused data is the same as if the ToF sensor were operating in its normal (low-noise) sensing mode.
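As a rough illustration of the resampling step mentioned above (not the authors' implementation), the 1D sketch below applies the Richardson iteration x_{k+1} = x_k + w * A^T (b - A x_k), where A samples a uniform-grid signal at scattered positions via linear interpolation. All names and the step size are hypothetical, and the sample positions are assumed to lie inside the grid.

```python
import numpy as np

def richardson_resample_1d(x_nonuniform, y_samples, grid, iters=50, omega=0.2):
    """Toy 1D Richardson iteration: recover values on a uniform `grid` from samples
    `y_samples` taken at non-uniform positions `x_nonuniform` (linear interpolation model).
    `omega` must be small enough for convergence; 0.2 is just a placeholder."""
    def forward(u):
        # Sample the uniform-grid signal u at the scattered positions (operator A).
        return np.interp(x_nonuniform, grid, u)

    def adjoint(r):
        # Splat residuals back onto the uniform grid (transpose of linear interpolation).
        out = np.zeros_like(grid, dtype=float)
        idx = np.clip(np.searchsorted(grid, x_nonuniform) - 1, 0, len(grid) - 2)
        w = (x_nonuniform - grid[idx]) / (grid[idx + 1] - grid[idx])
        np.add.at(out, idx, (1 - w) * r)
        np.add.at(out, idx + 1, w * r)
        return out

    u = np.zeros_like(grid, dtype=float)
    for _ in range(iters):
        u = u + omega * adjoint(y_samples - forward(u))  # Richardson update
    return u
```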


International Conference on Acoustics, Speech, and Signal Processing | 2013

De-noising of distance maps sensed by time-of-flight devices in poor sensing environment

Mihail Georgiev; Atanas P. Gotchev; Miska Hannuksela

We propose a non-local de-noising approach aimed at filtering range data sensed by Photonic Mixer Device sensors. We specifically address the case of a poor sensing environment, when the reflected signal amplitude is low. In our approach, the phase-delay and amplitude components of the sensed signal are regarded as components of a complex-valued variable and processed together in a single step. This yields better filter adaptivity and similarity weighting. The complex-domain filtering provides additional feedback in the form of an improved noise-level confidence, which can be utilized in iterative de-noising schemes. Pre-filtering of the individual components is proposed to suppress structural artifacts. Our approach compares favorably with state-of-the-art approaches.
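A naive, unoptimized sketch of the complex-domain idea, assuming the sensed amplitude and phase-delay maps are available as 2D arrays: patch similarity is computed on the complex signal z = A * exp(j*phi), so both components are filtered jointly. Parameter values are placeholders, and this is not the authors' filter.

```python
import numpy as np

def complex_nlm(amplitude, phase, patch=3, search=7, h=0.1):
    """Naive (slow) non-local means on a complex-valued ToF signal z = A*exp(j*phase).
    `h` controls the similarity weighting and should depend on the noise level."""
    z = amplitude * np.exp(1j * phase)
    pr, sr = patch // 2, search // 2
    pad = pr + sr
    zp = np.pad(z, pad, mode='reflect')
    out = np.zeros_like(z)
    H, W = z.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = zp[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            wsum, acc = 0.0, 0.0 + 0.0j
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    cand = zp[ci + di - pr:ci + di + pr + 1,
                              cj + dj - pr:cj + dj + pr + 1]
                    d2 = np.mean(np.abs(ref - cand) ** 2)  # complex patch distance
                    w = np.exp(-d2 / (h * h))
                    wsum += w
                    acc += w * zp[ci + di, cj + dj]
            out[i, j] = acc / wsum
    return np.abs(out), np.angle(out)  # filtered amplitude and phase (range)
```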


International Conference on Multimedia and Expo | 2013

Real-time denoising of ToF measurements by spatio-temporal non-local mean filtering

Mihail Georgiev; Atanas P. Gotchev; Miska Hannuksela

This work addresses the problem of denoising range data obtained by a ToF continuous-signal-modulation camera working in low-power mode. The proposed approach is based on non-local means filtering applied over an extensive spatio-temporal block search in the complex-valued signal domain. The extensive search allows for shorter integration times of the range sensor and leads to an effective overcomplete structure suitable for denoising. The filter structure is optimized for real-time operation and achieves O(1) performance for arbitrary patch size by utilizing summed-area tables and look-up-table data fetching. The experimental results show practically the same quality as state-of-the-art approaches, at greatly improved speed.
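The summed-area-table trick mentioned above can be sketched as follows: for every candidate offset, the per-pixel patch SSD is obtained with a constant number of operations regardless of patch size. This is an illustrative fragment only; the weighting w = exp(-SSD/h^2), the temporal search, and the aggregation are omitted, and the wrap-around borders introduced by np.roll are ignored.

```python
import numpy as np

def patch_ssd_all_offsets(img, offsets, patch):
    """For each candidate offset (dy, dx), compute the per-pixel patch SSD in O(1) per pixel
    using a summed-area table of the shifted squared-difference image."""
    H, W = img.shape
    r = patch // 2
    k = 2 * r + 1
    ssd = {}
    for dy, dx in offsets:
        shifted = np.roll(img, (dy, dx), axis=(0, 1))      # borders wrap; ignored here
        diff2 = (img - shifted) ** 2
        # Summed-area table with an extra zero row/column for easy box sums.
        sat = np.zeros((H + 1, W + 1))
        sat[1:, 1:] = diff2.cumsum(0).cumsum(1)
        # Box sum over a k-by-k window centered at every valid pixel.
        s = (sat[k:, k:] - sat[:-k, k:] - sat[k:, :-k] + sat[:-k, :-k])
        ssd[(dy, dx)] = s  # shape (H-2r, W-2r): SSD for all valid patch centers
    return ssd
```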


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2014

CPU-efficient free view synthesis based on depth layering

Aleksandra Chuchvara; Mihail Georgiev; Atanas P. Gotchev

In this paper, a new approach for depth-image-based rendering (DIBR) based on depth layering is proposed. The approach effectively avoids the non-uniform to uniform resampling stage, which is otherwise inherent in classical DIBR. Instead, the new approach employs depth layering, which approximates the scene geometry by a multi-planar surface, given that the depth is defined within a closed range. Such an approximation facilitates a fast reverse coordinate mapping from the virtual to the reference view, where straightforward resampling on a uniform grid is performed. The proposed rendering approach ensures automatic z-ordering and disocclusion detection, while being very efficient even for CPU-based implementations. It is also applicable to reference and virtual views with different resolutions and as such can serve depth upsampling, view panning and zooming applications. The experimental results demonstrate its real-time capability, while the quality is comparable with that of other view synthesis approaches, at lower computational cost.
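A toy version of reverse-mapping DIBR with constant-disparity layers, assuming a purely horizontal camera shift and a disparity map aligned with the reference view; sign conventions and hole inpainting are simplified away, so this is not the paper's renderer.

```python
import numpy as np

def synthesize_view(ref_rgb, ref_disp, shift, n_layers=32):
    """Reverse-mapping view synthesis sketch: for each constant-disparity layer (near to far),
    every virtual pixel looks up the reference at x + shift * d_layer; a pixel is filled by the
    first (nearest) layer whose reference disparity actually belongs to that layer."""
    H, W, _ = ref_rgb.shape
    edges = np.linspace(ref_disp.min(), ref_disp.max(), n_layers + 1)
    layer_of = np.clip(np.digitize(ref_disp, edges) - 1, 0, n_layers - 1)
    out = np.zeros_like(ref_rgb)
    filled = np.zeros((H, W), dtype=bool)
    xs = np.arange(W)
    for l in range(n_layers - 1, -1, -1):                  # large disparity = near, processed first
        d = 0.5 * (edges[l] + edges[l + 1])                # representative disparity of the layer
        src_x = np.clip(np.round(xs + shift * d).astype(int), 0, W - 1)
        hit = (layer_of[:, src_x] == l) & ~filled          # reference pixel really lies on this layer
        out[hit] = ref_rgb[:, src_x][hit]
        filled |= hit
    return out, ~filled                                     # synthesized view and disocclusion mask
```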


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2008

OpenGL-Based Control of Semi-Active 3D Display

Atanas Boev; Kalle Raunio; Mihail Georgiev; Atanas P. Gotchev; Karen O. Egiazarian

We present a system for 3D visualisation which combines user tracking, used by displays with steerable optics, with the generation of multiple views, typical for displays with a fixed optical filter. Instead of eye tracking, typical for the user-tracking approach, we propose a less computationally demanding head tracking based on face detection. We investigate whether the precise delivery of different images to each eye of the observer can be handled by the fixed optics of a multiview 3D display, and whether continuous head parallax can be achieved.
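Head tracking based on face detection can be sketched with OpenCV's stock Haar-cascade detector; this is a generic illustration rather than the tracker used in the paper, and the mapping from the face center to the display's viewing zones is left out.

```python
import cv2

# Stock frontal-face Haar cascade shipped with opencv-python.
_FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def head_position(frame_bgr):
    """Return the (x, y) center of the largest detected face in a BGR frame, or None.
    A 3D display controller could map this coordinate to the appropriate viewing zone."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = _FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face = closest viewer
    return (x + w // 2, y + h // 2)
```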


IEEE Transactions on Instrumentation and Measurement | 2016

Fixed-Pattern Noise Modeling and Removal in Time-of-Flight Sensing

Mihail Georgiev; Robert Bregovic; Atanas P. Gotchev

In this paper, we discuss the modeling and removal of fixed-pattern noise (FPN) in photonic mixer devices employing the time-of-flight (ToF) principle for range measurements and scene depth estimation. We present a case that arises from low-sensing (LS) conditions caused by external factors related to scene reflectivity, internal factors related to the power and operation mode of the sensor, or both. In such a case, the FPN becomes especially dominant and invalidates previously adopted noise models, which have been used for the removal of other noise contaminations in ToF measurements. To tackle LS cases, we propose a noise model specifically addressing the presence of FPN and develop a corresponding FPN removal procedure. We demonstrate, through experiments with synthetic and real-world data, that proper modeling and removal of FPN is essential for the subsequent Gaussian denoising and yields accurate depth maps comparable to the ones obtainable in the normal operating mode.
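A very reduced illustration of the general FPN idea (the paper's actual model is more involved and specific to ToF sensing): estimate a per-pixel offset pattern from many frames of a static reference scene and subtract it before any Gaussian-noise-oriented denoiser runs. The names and the calibration procedure are hypothetical.

```python
import numpy as np

def estimate_fpn(calibration_frames):
    """Estimate a per-pixel fixed-pattern offset from a stack of frames (shape (T, H, W))
    of a static reference scene: the temporal mean averages out temporal noise, and removing
    its global mean keeps only the spatial per-pixel pattern."""
    stack = np.asarray(calibration_frames, dtype=float)
    temporal_mean = stack.mean(axis=0)
    return temporal_mean - temporal_mean.mean()

def remove_fpn(range_frame, fpn):
    """Subtract the fixed per-pixel pattern before conventional denoising is applied."""
    return range_frame - fpn
```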


Static Analysis Symposium | 2015

Fixed-pattern noise suppression in low-sensing environment of Time-of-Flight devices

Mihail Georgiev; Robert Bregovic; Atanas P. Gotchev

In this paper, we emphasize the importance of the fixed-pattern noise (FPN) that occurs in Time-of-Flight (ToF) devices operating in the so-called low-sensing environment. We propose a method for designing FIR filters that can be used to suppress the FPN. We illustrate, by means of two experiments, the importance of dealing with the FPN before conventional denoising algorithms are applied.
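As a loose illustration of FIR-based suppression (not the filter design from the paper), the sketch below treats column-wise stripes as the high-frequency part of the mean column profile and removes them with a zero-phase FIR low-pass; the tap count and cutoff are placeholders, and the image is assumed wide enough for filtfilt's default padding.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def suppress_column_fpn(depth, numtaps=15, cutoff=0.25):
    """Illustrative column-stripe suppression: estimate the stripe pattern as the
    high-frequency part of the per-column mean profile and subtract it from every row."""
    taps = firwin(numtaps, cutoff)             # linear-phase FIR low-pass (normalized cutoff)
    column_profile = depth.mean(axis=0)        # per-column average reveals the stripe pattern
    smooth_profile = filtfilt(taps, [1.0], column_profile)
    stripes = column_profile - smooth_profile  # high-frequency residual = stripe estimate
    return depth - stripes[np.newaxis, :]
```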


International Conference on Image Processing | 2013

A fast and accurate re-calibration technique for misaligned stereo cameras

Mihail Georgiev; Atanas P. Gotchev; Miska Hannuksela

In this paper, we propose a practical approach for the robust rectification of stereo camera setups without the use of a calibration pattern. Our solution simplifies the process to a non-general case of rectification in order to avoid explicit fundamental matrix estimation. The solution shows better or comparable robustness relative to recent solutions, at much lower computational cost and code complexity.
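A heavily reduced sketch of the underlying idea, assuming two roughly rectified 8-bit grayscale views: estimate only a residual vertical offset from ORB feature matches. The paper handles a richer misalignment model without a calibration pattern; everything below is illustrative.

```python
import cv2
import numpy as np

def vertical_misalignment(img_left, img_right, max_matches=200):
    """Estimate the residual vertical offset (in pixels) between two stereo views from
    ORB feature matches; a robust median suppresses the influence of wrong matches."""
    orb = cv2.ORB_create(1000)
    kL, dL = orb.detectAndCompute(img_left, None)
    kR, dR = orb.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(dL, dR), key=lambda m: m.distance)[:max_matches]
    dy = [kR[m.trainIdx].pt[1] - kL[m.queryIdx].pt[1] for m in matches]
    return float(np.median(dy))
```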


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2015

Speed-optimized free-viewpoint rendering based on depth layering

Aleksandra Chuchvara; Olli Suominen; Mihail Georgiev; Atanas P. Gotchev

In this paper, free-viewpoint rendering is addressed and a new fast approach for virtual view synthesis from a view-plus-depth 3D representation is proposed. Depth layering in the disparity domain is employed in order to optimally approximate the scene geometry by a set of constant-depth layers. This approximation facilitates the use of connectivity information for segment-based forward warping of the reference layer map, producing a complete virtual-view layer map containing no cracks or holes. The warped layer map is used to guide the inpainting of disocclusions in the synthesized texture map. For this purpose, a speed-optimized patch-based inpainting approach is proposed. In contrast to existing methods, the patch similarity function is based on local binary pattern descriptors. Such a binary representation allows for efficient processing and comparison of patches, as well as compact storage and reuse of previously calculated binary descriptors. The experimental results demonstrate the real-time capability of the proposed method even for a CPU-based implementation, while the quality is comparable with that of other view synthesis approaches.
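The LBP-based patch similarity can be sketched as follows, assuming 8-bit grayscale input: each pixel gets an 8-neighbour binary code, and two patches are compared by counting differing bits. This is a generic LBP/Hamming illustration, not the paper's exact descriptor layout.

```python
import numpy as np

def lbp_codes(gray):
    """8-neighbour local binary pattern code per pixel (border pixels are zeroed, since
    np.roll wraps around and would produce invalid codes there)."""
    g = gray.astype(np.int16)
    codes = np.zeros_like(g, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        neigh = np.roll(g, (dy, dx), axis=(0, 1))
        codes |= ((neigh >= g).astype(np.uint8) << bit)
    codes[0, :] = codes[-1, :] = 0
    codes[:, 0] = codes[:, -1] = 0
    return codes

def patch_distance(codes, p, q, r=4):
    """Hamming-style distance between two (2r+1)x(2r+1) patches of LBP codes centered at p and q."""
    (py, px), (qy, qx) = p, q
    a = codes[py - r:py + r + 1, px - r:px + r + 1]
    b = codes[qy - r:qy + r + 1, qx - r:qx + r + 1]
    # Popcount of XOR-ed codes = number of differing LBP bits across the patch.
    return int(np.unpackbits((a ^ b).reshape(-1, 1), axis=1).sum())
```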

Collaboration


Top co-authors of Mihail Georgiev and their affiliations:

- Atanas P. Gotchev (Tampere University of Technology)
- Atanas Boev (Tampere University of Technology)
- Karen O. Egiazarian (Tampere University of Technology)
- Aleksandra Chuchvara (Tampere University of Technology)
- Robert Bregovic (Tampere University of Technology)
- Atanas Gotchev (Tampere University of Technology)
- Evgeny Belyaev (Tampere University of Technology)
- Ilian Todorov (Tampere University of Technology)
- Kalle Raunio (Tampere University of Technology)