Paweł Mikołajczak
Maria Curie-Skłodowska University
Publications
Featured research published by Paweł Mikołajczak.
International Conference on Computational Science and Its Applications | 2006
M. Chlebiej; Paweł Mikołajczak; Krzysztof Nowiński; Piotr Ścisło; Piotr Bała
One of the most challenging problems in modern cardiology is the correct quantification of left ventricular contractility and synchronicity. Correct, quantitative assessment of these parameters, which can change in the course of many severe heart diseases (e.g. coronary artery disease, myocardial infarction, heart failure), is a key factor for the right diagnosis and further therapy. To date, in daily clinical practice, most of this information is collected by transthoracic two-dimensional echocardiography. Assessment of these parameters is difficult and depends on observer experience. Although quantification methods for contractility assessment based on strain and strain-rate analysis are available, these methods are still grounded in 2D analysis. Real-time 3D echocardiography gives physicians the opportunity for truly quantitative analysis of left ventricular contractility and synchronicity. In this work we present a method for estimating heart motion from 4D (3D+time) echocardiographic images.
Information Technologies in Biomedicine | 2008
Karol Kuczyński; Paweł Mikołajczak
Fractal analysis is a reasonable choice in applications that deal with natural objects. Fractal dimension is an essential measure of fractal properties. The differential box-counting method was used for fractal dimension estimation of radiological brain images. This paper documents that this measure can be used for automatic classification of normal and pathological cases.
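The differential box-counting estimator mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the box sizes, the 256-level grey range and the intensity-box scaling are illustrative assumptions.

```python
import numpy as np

def differential_box_count(img, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a 2-D grayscale image with the
    differential box-counting method (illustrative parameters)."""
    img = np.asarray(img, dtype=float)
    M = min(img.shape)
    G = 256.0  # assumed grey-level range
    counts = []
    for s in box_sizes:
        h = s * G / M  # box height in the intensity direction
        n = 0
        for i in range(0, img.shape[0] - s + 1, s):
            for j in range(0, img.shape[1] - s + 1, s):
                block = img[i:i + s, j:j + s]
                # boxes of height h needed to cover this block's intensity range
                n += int(np.floor(block.max() / h) - np.floor(block.min() / h)) + 1
        counts.append(n)
    # slope of log N(s) versus log(1/s) estimates the fractal dimension
    sizes = np.array(box_sizes, dtype=float)
    return np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)[0]
```

For a perfectly flat image the count scales as N(s) = (M/s)², so the estimated dimension is 2, the dimension of a smooth surface; textured pathological regions would push the estimate above 2.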
Optical Methods, Sensors, Image Processing, and Visualization in Medicine | 2004
Rafał Stęgierski; Paweł Mikołajczak
A method of face reconstruction based on deformation of the triangle mesh of a model face. It is a fast method, using space partitioning according to the Manchester points.
Optical Methods, Sensors, Image Processing, and Visualization in Medicine | 2004
Karol Kuczyński; Paweł Mikołajczak
The growing popularity of non-invasive medical techniques makes accurate and reliable diagnostics based on image data more and more important. It is necessary to implement modern, highly automated image-processing techniques, and information theory and statistics provide a means to create such systems. In this paper we discuss the application of entropy and mutual information to segmentation and registration of real-world medical images.
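The mutual-information measure used for registration in work like this can be estimated from a joint intensity histogram. A minimal sketch, assuming a simple 32-bin histogram estimator rather than whatever binning the paper used:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) between two equally sized grayscale
    images, estimated from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = hist / hist.sum()          # joint distribution
    p_a = p_ab.sum(axis=1)            # marginal of image a
    p_b = p_ab.sum(axis=0)            # marginal of image b
    nz = p_ab > 0                     # avoid log(0)
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / np.outer(p_a, p_b)[nz])))
```

Mutual information peaks when the two images are correctly aligned, which is why registration algorithms maximize it over candidate transformations; an image always shares more information with itself than with an unrelated image.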
Fuzzy Systems and Knowledge Discovery | 2011
Karol Kuczyński; Maciej Siczek; Rafał Stęgierski; Paweł Mikołajczak
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a relatively new, promising technique for breast cancer diagnostics. A few series of images of the same body region are acquired in rapid succession before, during and after injection of a paramagnetic contrast agent. Propagation of the contrast agent modifies the MR signal over time; its analysis provides information on tissue properties, including tumour status, that is not available with regular MRI. If a patient unintentionally changes position during the examination, the consecutive image series are not properly aligned and their analysis is difficult, inaccurate or even impossible. The purpose of this work is to design a registration scheme that can be applied to solve the problem in a routine manner, in standard hospital conditions. The proposed registration framework, composed of a B-spline transformation, a mean-squares metric and the L-BFGS-B optimizer, is able to produce satisfactory results within a reasonable time.
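The metric-plus-optimizer structure of such a registration framework can be illustrated with a toy analogue: a mean-squares metric minimized with L-BFGS-B, but over a simple 2-D translation rather than the full B-spline free-form deformation used in the article. The function names and parameters here are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage, optimize

def mean_squares(fixed, moving):
    """Mean-squares intensity metric between two aligned images."""
    return np.mean((fixed - moving) ** 2)

def register_translation(fixed, moving):
    """Toy registration: find the 2-D translation that minimizes the
    mean-squares metric, using the L-BFGS-B optimizer (a stand-in for
    the B-spline transform of the real framework)."""
    def cost(t):
        warped = ndimage.shift(moving, t, order=1, mode='nearest')
        return mean_squares(fixed, warped)
    res = optimize.minimize(cost, x0=[0.0, 0.0], method='L-BFGS-B')
    return res.x
```

In the real scheme the parameter vector holds a grid of B-spline control-point displacements instead of two translation components, but the optimization loop has the same shape: warp, evaluate the metric, update the parameters.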
International Conference on Computer Vision | 2008
Marcin Denkowski; M. Chlebiej; Paweł Mikołajczak
Creation of virtual reality models from photographs is a very complex and time-consuming process that requires special equipment such as laser scanners, a large number of photographs and manual interaction. In this work we present a method for generating the surface geometry of a photographed scene. Our approach is based on the phenomenon of shallow depth of field in close-up photography. Representing such surface details is useful for increasing visual realism in a range of application areas, especially for biological structures or microorganisms. For testing purposes, a set of images of the same scene is taken with a typical digital camera with macro lenses at different depths of field. Our new image fusion method employs the discrete Fourier transform to designate sharp regions in this set of images, combines them into a fully focused image and finally produces a height-field map. Further image-processing algorithms approximate a three-dimensional surface using this height-field map and the fused image. Experimental results show that our method works for a wide range of cases and provides a good tool for acquiring surfaces from a few photographs.
International Conference on Computational Science | 2008
Marcin Denkowski; M. Chlebiej; Paweł Mikołajczak
This paper presents a method for fusing multifocus images into an enhanced depth-of-field composite image and creating a 3D model of the photographed scene. A set of images of the same scene is taken with a typical digital camera with macro lenses at different depths of field. The method employs convolution and morphological filters to designate sharp regions in this set of images and combine them into an image in which all regions are properly focused. The presented method consists of several phases: image registration, height-map creation, image reconstruction and final 3D scene reconstruction. As a result, a 3D model of the photographed object is created.
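The select-the-sharpest-pixel idea behind such focus stacking can be sketched as follows. The sharpness measure here (a smoothed absolute Laplacian) is an illustrative stand-in for the paper's convolution and morphological filters, and the index map doubles as a crude height map, since the in-focus image index encodes depth.

```python
import numpy as np
from scipy import ndimage

def fuse_multifocus(images, sigma=2.0):
    """Fuse a focus stack: at each pixel, keep the value from the image
    that is locally sharpest. Returns the fused image and the index map
    (which image was in focus at each pixel)."""
    stack = np.asarray(images, dtype=float)
    # local sharpness: absolute Laplacian response, smoothed to suppress noise
    sharpness = np.stack([
        ndimage.gaussian_filter(np.abs(ndimage.laplace(img)), sigma)
        for img in stack
    ])
    index = np.argmax(sharpness, axis=0)
    fused = np.take_along_axis(stack, index[None], axis=0)[0]
    return fused, index
```

In the papers above, the raw index map would then be regularized and used as the height field for 3D surface reconstruction.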
Annales UMCS, Informatica | 2008
Marcin Denkowski; Paweł Mikołajczak
This paper presents an overview of two-dimensional image fusion methods, developed by the authors, that use convolution filters and the discrete wavelet transform. Image fusion is a process of combining two or more images of the same object into one extended composite image. It can extract features from the source images and provide more information than any single image, and can easily be used for image restoration and enhancement. In this article the authors focus on multi-exposure images, high-dynamic-range improvement and depth-of-field enhancement.
Annales UMCS, Informatica | 2003
Karol Kuczyński; Paweł Mikołajczak
Due to the great value of the time constant of the integrator circuit, a hardware-defined PWM (Pulse Width Modulation) signal makes it possible to build a Digital-to-Analog Converter (DAC) with a relatively long response time. An attempt at creating a software-defined PWM signal leads to a DAC response time several hundred times longer than that of its hardware equivalent. Reorganization of the PWM signal allows for its software synthesis, in which the response time of the DAC is only several times longer than that of its classical equivalent.

In the paper we present a short profile of different compositions of multi-layered applications. We compare the speed and stability of the different sets of applications, and we describe a configuration which can be used to create a server of scientific information. The application consists of the MySQL database management system, an HTTP server (Apache) and the PHP language. All operate in the Linux environment.
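The PWM reorganization described in the first abstract above can be illustrated with a sketch: a classic frame bunches all the high samples at the start, while a reorganized frame spreads the same number of high samples evenly (a first-order sigma-delta / Bresenham-style distribution), so the analog low-pass filter sees shorter pulses and settles faster. This is an illustrative reconstruction of the idea, not the paper's exact algorithm.

```python
def classic_pwm(duty, period):
    """Classic PWM frame: all high samples bunched at the start."""
    return [1] * duty + [0] * (period - duty)

def distributed_pwm(duty, period):
    """Reorganized PWM frame: the same number of high samples spread
    evenly over the period via an error accumulator."""
    out, acc = [], 0
    for _ in range(period):
        acc += duty
        if acc >= period:
            acc -= period
            out.append(1)
        else:
            out.append(0)
    return out
```

Both frames have the same average value (duty/period), so the DAC output voltage is unchanged; what shrinks is the longest constant run, and with it the ripple the integrator must average out.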
Archive | 2011
Marcin Denkowski; Paweł Mikołajczak; Michał Chlebiej
Image fusion is a process of combining a set of images of the same scene into one composite image. The main objective of this technique is to obtain an image that is more suitable for visual perception. This composite image has reduced uncertainty and minimal redundancy while the essential information is maximized. In other words, image fusion integrates redundant and complementary information from multiple images into a composite image, but also decreases dimensionality. There are many methods discovered and discussed in the literature that focus on image fusion. They vary with the aim of the application, but by the algorithms used they can be mainly categorized into pyramid techniques (Burt (1984); Toet (1989)), morphological methods (Ishita et al. (2006); Mukopadhyay & Chanda (2001); Matsopoulos et al. (1994)), discrete wavelet transform (Li et al. (1995); Chibani & Houacine (2003); Lewis et al. (2007)) and neural network fusion (Ajjimarangsee & Huntsberger (1988)). A different classification of image fusion involves pixel, feature and symbolic levels (Goshtasby (2007)). Pixel-level algorithms are low-level methods and work either in the spatial or in the transform domain. These algorithms work as local operations regardless of the transform used and can generate undesirable artifacts. They can be enhanced by using multiresolution analysis (Burt (1984)) or the complex wavelet transform (Lewis et al. (2007)). Feature-based methods use segmentation algorithms to divide images into relevant patterns and then combine them to create the output image using various properties (Piella (2003)). High-level methods combine image descriptions, typically in the form of relational graphs (Williams et al. (1999)). In this work we use an image fusion algorithm to achieve the first of our aims, i.e. to obtain the deepest possible depth of field in macro photography using standard digital camera images. Macro photography is a type of close-up photography.
In the classical definition it is described as photography in which the image on film or electronic sensor is at least as large as the subject. Therefore, on 35 mm film, the camera has to have the ability to focus on an area at least as small as 24 × 36 mm, equivalent to the image size on film (magnification 1:1). In other words, macro photography means photographing objects at extreme close-ups with magnification ratios from about 1:1 to about 10:1. There are some primary difficulties in macro photography; one of the most crucial is the problem of insufficient lighting. When using some