
Publication


Featured research published by Marcin Denkowski.


International Conference on Conceptual Structures | 2013

GPU Accelerated 3D Object Reconstruction

Marcin Denkowski

Creation of a virtual model based on photographs or video images is becoming more and more popular nowadays. Traditionally, such models were obtained using 3D modelling applications or with laser range scanners and other devices that project structured light onto the object. This is a very time-consuming process, and the achievable level of detail and realism is limited. We present a real-time modelling system for creating such virtual object models. The system uses voxels as the smallest editable units and is able to generate a 3D color convex-hull model from photographs or video images taken around the object. The virtual model can be represented both as a volumetric model and as a triangulated mesh that constitutes a closed isosurface of this volumetric model. Our approach utilizes the graphics processing unit (GPU) and CUDA to accelerate the whole reconstruction process, making it possible to perform the calculations in real time.
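As an illustration of the convex-hull idea only, the following is a minimal voxel-carving (visual-hull) sketch in Python, with CuPy standing in for a hand-written CUDA kernel. The silhouette masks, camera projection matrices and grid parameters are hypothetical and not taken from the paper.

```python
# Minimal voxel-carving sketch: keep a voxel only if it projects inside the
# object silhouette in every input view. CuPy mirrors the NumPy API on the GPU.
import cupy as cp

def carve(silhouettes, projections, grid_min, grid_max, resolution=128):
    # Regular voxel grid spanning the bounding box [grid_min, grid_max]^3.
    axes = [cp.linspace(grid_min[d], grid_max[d], resolution) for d in range(3)]
    X, Y, Z = cp.meshgrid(*axes, indexing="ij")
    pts = cp.stack([X.ravel(), Y.ravel(), Z.ravel(), cp.ones(X.size)], axis=0)  # 4 x N

    occupied = cp.ones(X.size, dtype=bool)
    for mask, P in zip(silhouettes, projections):        # P: 3x4 projection matrix
        mask, P = cp.asarray(mask, dtype=bool), cp.asarray(P)
        uvw = P @ pts                                    # project all voxels at once
        u = cp.rint(uvw[0] / uvw[2]).astype(cp.int32)
        v = cp.rint(uvw[1] / uvw[2]).astype(cp.int32)
        inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        hit = cp.zeros(X.size, dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        occupied &= hit                                  # carve away misses
    return occupied.reshape(X.shape)                     # volumetric model; mesh it afterwards
```

The resulting occupancy volume corresponds to the volumetric representation mentioned in the abstract; isosurface extraction and coloring would follow as separate steps.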


Annales UMCS, Informatica | 2010

Image Sequence Stabilization Through Model Based Registration

Marcin Denkowski; Michał Chlebiej

Acquiring an image series with a digital camera makes it possible to obtain high-resolution, high-quality animation, much better than that produced by a digital camcorder. However, several problems have to be dealt with when producing animation in this way. In particular, if the motion involves changes in observer position and spatial orientation, the resulting animation may look choppy and unsmooth. If hardware-based stabilization of the camera during the motion cannot be provided, image processing methods are necessary to obtain smooth animation. In this work we deal with image sequences acquired around an object without stabilization, and we propose a method that enables the creation of smooth animation using the registration paradigm.
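The paper uses model-based registration; as a rough stand-in for the idea of registration-driven stabilization, the sketch below aligns each frame to the first one with feature-based affine registration in OpenCV. The function names and parameters are illustrative assumptions, not the authors' pipeline.

```python
# Hedged sketch: frame-to-reference stabilization via ORB features + RANSAC affine fit.
import cv2
import numpy as np

def stabilize(frames):
    orb = cv2.ORB_create(2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    ref = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    kp_ref, des_ref = orb.detectAndCompute(ref, None)
    out = [frames[0]]
    h, w = ref.shape
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        kp, des = orb.detectAndCompute(gray, None)
        matches = sorted(matcher.match(des, des_ref), key=lambda m: m.distance)[:200]
        src = np.float32([kp[m.queryIdx].pt for m in matches])
        dst = np.float32([kp_ref[m.trainIdx].pt for m in matches])
        M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        if M is None:                      # registration failed; keep the frame as-is
            out.append(frame)
            continue
        out.append(cv2.warpAffine(frame, M, (w, h)))
    return out
```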


International Conference on Computer Vision | 2008

A New Image Fusion Method for Estimating 3D Surface Depth

Marcin Denkowski; M. Chlebiej; Paweł Mikołajczak

Creating virtual reality models from photographs is a very complex and time-consuming process that requires special equipment such as laser scanners, a large number of photographs and manual interaction. In this work we present a method for generating the surface geometry of a photographed scene. Our approach is based on the phenomenon of shallow depth of field in close-up photography. Representing such surface details is useful for increasing visual realism in a range of application areas, especially for biological structures or microorganisms. For testing purposes, a set of images of the same scene is taken with a typical digital camera with macro lenses at different depths of field. Our new image fusion method employs the discrete Fourier transform to designate sharp regions in this set of images, combines them into a fully focused image, and finally produces a height-field map. Further image processing algorithms approximate the three-dimensional surface using this height-field map and the fused image. Experimental results show that our method works for a wide range of cases and provides a good tool for acquiring surfaces from a few photographs.
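As a sketch of the general idea only (the block size, scoring rule and data layout are assumptions, not the published algorithm): rank focus per pixel from local high-frequency energy in a blockwise FFT, pick the sharpest frame per pixel to build the all-in-focus image, and reuse the index of the winning frame as a height-field map.

```python
import numpy as np

def focus_measure(gray, block=16):
    # Blockwise FFT: spectrum magnitude with the DC term removed acts as a
    # local sharpness score.
    h, w = gray.shape
    score = np.zeros((h, w), dtype=np.float64)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            spec = np.abs(np.fft.fftshift(np.fft.fft2(gray[y:y+block, x:x+block])))
            spec[block // 2, block // 2] = 0.0
            score[y:y+block, x:x+block] = spec.mean()
    return score

def fuse_stack(gray_stack, color_stack):
    # The index of the sharpest frame per pixel doubles as a crude height map,
    # because focus distance is ordered across the stack.
    scores = np.stack([focus_measure(g) for g in gray_stack])
    height_map = scores.argmax(axis=0)
    fused = np.zeros_like(color_stack[0])
    for i, img in enumerate(color_stack):
        fused[height_map == i] = img[height_map == i]
    return fused, height_map
```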


International Conference on Computational Science | 2008

Modeling of 3D Scene Based on Series of Photographs Taken with Different Depth-of-Field

Marcin Denkowski; M. Chlebiej; Paweł Mikołajczak

This paper presents a method for fusing multifocus images into an enhanced depth-of-field composite image and creating a 3D model of the photographed scene. A set of images of the same scene is taken with a typical digital camera with macro lenses at different depths of field. The method employs convolution and morphological filters to designate sharp regions in this set of images and combine them into an image in which all regions are properly focused. The presented method consists of several phases: image registration, height-map creation, image reconstruction and final 3D scene reconstruction. As a result, a 3D model of the photographed object is created.
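A rough sketch of the selection step described above, assuming SciPy as the toolkit (an assumption); the registration and 3D reconstruction phases are omitted.

```python
import numpy as np
from scipy import ndimage

def fuse_multifocus(stack):
    # stack: list of registered grayscale frames of identical shape.
    # Local sharpness: averaged absolute Laplacian response (a convolution filter).
    sharpness = [ndimage.uniform_filter(np.abs(ndimage.laplace(f.astype(np.float64))), size=9)
                 for f in stack]
    labels = np.argmax(np.stack(sharpness), axis=0)       # sharpest frame per pixel
    # Morphological opening/closing removes small, isolated decision regions.
    labels = ndimage.grey_closing(ndimage.grey_opening(labels, size=(5, 5)), size=(5, 5))
    rows, cols = np.indices(labels.shape)
    fused = np.stack(stack)[labels, rows, cols]            # all-in-focus composite
    return fused, labels
```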


Annales UMCS, Informatica | 2008

Photography image enhancement by image fusion

Marcin Denkowski; Paweł Mikołajczak

This paper presents an overview of two-dimensional image fusion methods, developed by the authors, that use convolution filters and the discrete wavelet transform. Image fusion is a process of combining two or more images of the same object into one extended composite image. It can extract features from the source images and provide more information than any single image, and it can readily be used for image restoration and enhancement. In this article the authors focus on multi-exposure images, high-dynamic-range improvement and depth-of-field enhancement.
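A minimal sketch of a wavelet-domain fusion rule of the kind mentioned above, assuming PyWavelets as the implementation (not necessarily the authors' toolkit): average the approximation band and keep the larger-magnitude detail coefficients.

```python
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db2", level=3):
    ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                      # approximation band: average
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        # Detail bands: keep the coefficient with the larger magnitude.
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)
```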


Archive | 2011

Estimating 3D Surface Depth Based on Depth-of-Field Image Fusion

Marcin Denkowski; Paweł Mikołajczak; Michał Chlebiej

Image fusion is a process of combining a set of images of the same scene into one composite image. The main objective of this technique is to obtain an image that is more suitable for visual perception. This composite image has reduced uncertainty and minimal redundancy while the essential information is maximized. In other words, image fusion integrates redundant and complementary information from multiple images into a composite image, but it also decreases dimensionality. There are many methods described in the literature that focus on image fusion. They vary with the intended application, but they can be broadly categorized by the algorithms used into pyramid techniques (Burt (1984); Toet (1989)), morphological methods (Ishita et al. (2006); Mukopadhyay & Chanda (2001); Matsopoulos et al. (1994)), discrete wavelet transform methods (Li et al. (1995); Chibani & Houacine (2003); Lewis et al. (2007)) and neural network fusion (Ajjimarangsee & Huntsberger (1988)). A different classification of image fusion distinguishes the pixel, feature and symbolic levels (Goshtasby (2007)). Pixel-level algorithms are low-level methods and work either in the spatial or in the transform domain. Algorithms of this kind act as local operations regardless of the transform used and can generate undesirable artifacts. These methods can be enhanced by using multiresolution analysis (Burt (1984)) or the complex wavelet transform (Lewis et al. (2007)). Feature-based methods use segmentation algorithms to divide images into relevant patterns and then combine them to create the output image using various properties (Piella (2003)). High-level methods combine image descriptions, typically in the form of relational graphs (Williams et al. (1999)).

In this work we use an image fusion algorithm to achieve the first of our aims, i.e. to obtain the deepest possible depth of field in macro photography using standard digital camera images. Macro photography is a type of close-up photography. In the classical definition it is described as photography in which the image on film or on the electronic sensor is at least as large as the subject. Therefore, on 35 mm film, the camera has to be able to focus on an area at least as small as 24 × 36 mm, equivalent to the image size on film (magnification 1:1). In other words, macro photography means photographing objects at extreme close-ups with magnification ratios from about 1:1 to about 10:1. There are some primary difficulties in macro photography; one of the most crucial is the problem of insufficient lighting. When using some …
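For reference, the reproduction ratio used above can be written as a simple relation (illustrative notation, not taken from the chapter):

```latex
m = \frac{s_{\text{image}}}{s_{\text{subject}}}, \qquad 1 \le m \le 10 \quad \text{(macro range)}
```

At m = 1 on 35 mm film, a subject measuring 24 × 36 mm fills the entire 24 × 36 mm frame.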


International Conference on Computer Vision | 2008

The Development and Validation of a Method for 4D Motion Reconstruction of a Left Ventricle

Michał Chlebiej; Marcin Denkowski; Krzysztof Nowiński

Echocardiographic technology has reached a stage where it can provide 4D visual data revealing details of real heart motion. The possibility of spatial reconstruction and quantitative description of such motion has become a very important task in today's cardiology. Unfortunately, because of its low quality, such image data does not allow precise measurements. To overcome this problem, the images need to be processed further and the moving structures have to be extracted. In this work we present a method for estimating heart motion from a 3D echocardiographic image sequence. We also introduce a novel method for quantitative and qualitative validation of the motion reconstruction.
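The abstract only outlines the approach; purely as an illustration of frame-to-frame motion estimation over a 3D + time sequence, here is a hedged sketch using SimpleITK rigid registration. SimpleITK, the metric and the optimizer settings are assumptions; the paper's own motion model and validation procedure are not reproduced.

```python
# Hedged sketch: estimate motion between consecutive 3D echo frames via rigid registration.
import SimpleITK as sitk

def estimate_motion(frames):                     # frames: list of 3D sitk.Image volumes
    transforms = []
    for fixed, moving in zip(frames[:-1], frames[1:]):
        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
        reg.SetOptimizerAsRegularStepGradientDescent(
            learningRate=1.0, minStep=1e-4, numberOfIterations=200)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetInitialTransform(sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY))
        transforms.append(reg.Execute(sitk.Cast(fixed, sitk.sitkFloat32),
                                      sitk.Cast(moving, sitk.sitkFloat32)))
    return transforms                            # one transform per consecutive frame pair
```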


Annales UMCS, Informatica | 2015

Development of the cross-platform framework for the medical image processing

Marcin Denkowski; Michał Chlebiej; Paweł Mikołajczak


Signal, Image and Video Processing | 2017

Sharpening filter for false color imaging of dual-energy X-ray scans

Krzysztof Dmitruk; Marcin Denkowski; Michał Mazur; Paweł Mikołajczak


IFAC-PapersOnLine | 2015

Method for filling and sharpening false colour layers of dual energy X-ray images

Krzysztof Dmitruk; Michał Mazur; Marcin Denkowski; Paweł Mikołajczak

Collaboration


Dive into Marcin Denkowski's collaboration.

Top Co-Authors


Paweł Mikołajczak

Maria Curie-Skłodowska University


M. Chlebiej

Maria Curie-Skłodowska University


Michał Chlebiej

Nicolaus Copernicus University in Toruń


Krzysztof Dmitruk

Maria Curie-Skłodowska University


Lukasz Sadkowski

Maria Curie-Skłodowska University
