Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Roman M. Palenichka is active.

Publication


Featured research published by Roman M. Palenichka.


IEEE Transactions on Geoscience and Remote Sensing | 2010

Automatic Extraction of Control Points for the Registration of Optical Satellite and LiDAR Images

Roman M. Palenichka; Marek B. Zaremba

A novel method for the automatic extraction of control points for the registration of optical images with Light Detection And Ranging (LiDAR) data is proposed. It is based on transformation-invariant detection of salient image disks (SIDs), which determine the location of control points as the centers of the corresponding image fragments. The SID is described by a feature vector which, in addition to the coordinates and diameter, includes intensity descriptors and region shape characteristics of the image fragment. SIDs are effectively extracted using multiscale isotropic matched filtering, a visual attention operator that indicates image locations with high intensity contrast, homogeneity, and local shape saliency. This paper discusses the extraction of control points from both natural landscapes and structured scenes with man-made objects. Registration experiments conducted on QuickBird imagery with corresponding LiDAR data validated the proposed approach.
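For illustration, the sketch below shows one way such an attention operator could be prototyped: each pixel is scored at several scales by the contrast between an inner window and its surround, penalized by interior inhomogeneity, and the strongest local maxima are kept as control-point candidates. The square windows, the scoring formula, and all parameter values are assumptions made for this sketch, not the SID operator of the paper.

```python
# Minimal sketch of a multiscale isotropic attention operator (assumed form,
# not the paper's SID detector). Input: a grayscale image as a 2-D float array.
import numpy as np
from scipy import ndimage


def attention_map(image, radii=(4, 8, 16)):
    """Best-over-scales score: interior/surround contrast minus interior spread."""
    image = image.astype(np.float64)
    best = np.full(image.shape, -np.inf)
    for r in radii:
        inner, outer = 2 * r + 1, 4 * r + 1          # inner and surround window widths
        mean_in = ndimage.uniform_filter(image, size=inner)
        mean_all = ndimage.uniform_filter(image, size=outer)
        # Mean of the surrounding ring, recovered from the two square-window means.
        area_in, area_all = inner ** 2, outer ** 2
        mean_ring = (mean_all * area_all - mean_in * area_in) / (area_all - area_in)
        # Interior inhomogeneity as the local standard deviation.
        var_in = ndimage.uniform_filter(image ** 2, size=inner) - mean_in ** 2
        score = np.abs(mean_in - mean_ring) - np.sqrt(np.maximum(var_in, 0.0))
        best = np.maximum(best, score)
    return best


def control_point_candidates(image, radii=(4, 8, 16), num_points=50):
    """Return (row, col) of the strongest local maxima of the attention map."""
    amap = attention_map(image, radii)
    peaks = amap == ndimage.maximum_filter(amap, size=9)
    rows, cols = np.nonzero(peaks)
    order = np.argsort(amap[rows, cols])[::-1][:num_points]
    return list(zip(rows[order], cols[order]))
```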


IEEE Transactions on Geoscience and Remote Sensing | 2007

Multiscale Isotropic Matched Filtering for Individual Tree Detection in LiDAR Images

Roman M. Palenichka; Marek B. Zaremba

This paper addresses the issue of automated tree detection in remote-sensing imagery, particularly in the case of light detection and ranging (LiDAR) height data. The proposed method consists of multiscale isotropic matched filtering using a nonlinear image operator optimized for object detection and recognition. The method provides a robust scale- and orientation-invariant localization of the objects of interest. The local maxima of the matched-filtering operator are located at the potential centers of the objects of interest, such as trees. The tree verification stage consists of feature extraction at the candidate tree locations and comparison with the feature reference values. Experimental examples of the application of this matched-filtering method to LiDAR images of dense forest stands and sparsely distributed trees in residential areas are provided.
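A minimal sketch of the local-maxima idea on a LiDAR canopy height model (CHM) is given below, assuming heights in metres; the per-scale Gaussian smoothing and the minimum-height check are illustrative stand-ins for the paper's matched filter and feature-based verification.

```python
# Sketch of individual tree detection on a canopy height model (CHM),
# with assumed crown radii (in pixels) and an assumed minimum tree height.
import numpy as np
from scipy import ndimage


def detect_trees(chm, crown_radii_px=(3, 5, 8), min_height=2.0):
    """Return (row, col, height) for local maxima of the multiscale response."""
    chm = chm.astype(np.float64)
    # Response at each scale: Gaussian smoothing roughly matched to crown size.
    response = np.max(
        [ndimage.gaussian_filter(chm, sigma=r / 2.0) for r in crown_radii_px], axis=0
    )
    # A pixel is a candidate tree top if it is the maximum of its neighbourhood.
    window = 2 * max(crown_radii_px) + 1
    local_max = response == ndimage.maximum_filter(response, size=window)
    rows, cols = np.nonzero(local_max & (chm >= min_height))
    return [(r, c, chm[r, c]) for r, c in zip(rows, cols)]
```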


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2013

Multi-Scale Segmentation of Forest Areas and Tree Detection in LiDAR Images by the Attentive Vision Method

Roman M. Palenichka; Frédérik Doyon; Ahmed Lakhssassi; Marek B. Zaremba

A scale-adaptive method for object detection and LiDAR image segmentation in forest areas using the attentive vision approach to remote sensing image analysis is proposed. It provides an effective solution to the general task of object segmentation, defined as the subdivision of the image plane into multiple object regions against the background region. The method performs a multi-scale analysis of LiDAR images with an attention operator applied over different scale ranges and at all pixel locations to detect feature points. Besides the initial height image, the operator also uses primitive feature maps (components) to reliably detect objects of interest such as individual trees or entire forest stands. As a result, feature points representing the optimal seed locations for region-growing segmentation are extracted, and scale-adaptive region growing is applied at the seed locations. At the second level, the final segmentation by scale-adaptive region growing provides delineation of individual tree crowns. The conducted experiments confirmed the reliability of the proposed method and showed its high potential in LiDAR image analysis for object detection and segmentation.
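The seed-based growth stage could be prototyped roughly as below, assuming seed pixels (e.g., detected tree tops) are already available; the fixed height-drop criterion is an illustrative substitute for the paper's scale-adaptive growth rule.

```python
# Sketch of seed-based region growing on a LiDAR height image. The seeds are
# assumed to come from an attention operator; max_drop and min_height are
# illustrative parameters, not values from the paper.
from collections import deque
import numpy as np


def grow_regions(height, seeds, max_drop=3.0, min_height=2.0):
    """Label map where each seed grows over 4-connected pixels that stay
    within max_drop metres below the seed height and above min_height."""
    labels = np.zeros(height.shape, dtype=np.int32)
    for label, (sr, sc) in enumerate(seeds, start=1):
        queue = deque([(sr, sc)])
        labels[sr, sc] = label
        seed_h = height[sr, sc]
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < height.shape[0] and 0 <= nc < height.shape[1]
                        and labels[nr, nc] == 0
                        and height[nr, nc] >= min_height
                        and seed_h - height[nr, nc] <= max_drop):
                    labels[nr, nc] = label
                    queue.append((nr, nc))
    return labels
```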


Knowledge Discovery and Data Mining | 2005

Effective image and video mining: an overview of model-based approaches

Rokia Missaoui; Roman M. Palenichka

This paper revisits image and video mining techniques from the viewpoint of the image modeling approaches that constitute their theoretical basis. The most important areas belonging to image or video mining are image knowledge extraction, content-based image retrieval, video retrieval, video sequence analysis, change detection, model learning, and object recognition. Traditionally, these areas have been developed independently and hence have not benefited from common approaches that could provide optimal and time-efficient solutions. Two different types of input data for knowledge extraction from an image collection or video sequences are considered: the original image or a symbolic (model) description of the image. Several basic models are described briefly and compared with each other in order to find effective solutions to image and video mining problems. They include feature-based models and object-related structural models for the representation of spatial and temporal entities (objects, scenes, or events).


International Conference on Pattern Recognition | 2002

Multi-scale model-based skeletonization of object shapes using self-organizing maps

Roman M. Palenichka; Marek B. Zaremba

A skeletonization algorithm suitable for sparse shapes is described. It is based on self-organizing maps (SOMs), a class of neural networks with unsupervised learning. A so-called structured SOM with local shape attributes, such as scale and connectivity of vertices, is used to determine the object shape in the form of piecewise linear skeletons. The location of each vertex of the piecewise linear generating lines on the image plane corresponds to the position of a particular SOM unit. This method makes it possible to extract the object skeletons and to reconstruct the planar shape of sparse objects based on the topological constraints of the generating lines and the estimation of scales.
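As a rough illustration, the sketch below fits a chain-topology (1-D) SOM to the pixels of a binary shape so that its units form a piecewise linear skeleton; the unit count, learning-rate and neighbourhood schedules are arbitrary choices, and the per-vertex scale and connectivity attributes of the paper's structured SOM are omitted.

```python
# Sketch of a 1-D (chain) self-organizing map pulled toward the pixels of a
# binary shape; the trained unit positions serve as skeleton vertices.
import numpy as np


def som_skeleton(mask, n_units=10, n_iter=2000, seed=0):
    """Return an (n_units, 2) array of skeleton vertices (row, col)."""
    rng = np.random.default_rng(seed)
    points = np.argwhere(mask).astype(np.float64)        # shape pixel coordinates
    units = points[rng.choice(len(points), n_units)]      # random initial vertices
    for t in range(n_iter):
        lr = 0.5 * (1 - t / n_iter)                        # decaying learning rate
        sigma = max(n_units / 2.0 * (1 - t / n_iter), 0.5)  # neighbourhood width
        p = points[rng.integers(len(points))]              # random training sample
        winner = np.argmin(np.sum((units - p) ** 2, axis=1))
        # Pull the winner and its chain neighbours toward the sample.
        dist = np.abs(np.arange(n_units) - winner)
        h = np.exp(-(dist ** 2) / (2 * sigma ** 2))
        units += lr * h[:, None] * (p - units)
    return units
```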


Pattern Recognition | 1996

A fast structure-adaptive evaluation of local features in images

Roman M. Palenichka; Peter Zinterhof

An image model for the structure-adaptive evaluation of a feature or a pixel value in images is introduced using the notions of a structuring element and structuring regions. Based on this model, a fast and adaptive procedure for edge-preserving smoothing and change detection in images has been developed. For the problem of noise filtering and edge detection, it removes the noise while leaving the edges unblurred. The computational complexity of the fast algorithm is reduced to O(L²) per pixel, compared with the O(L⁴) complexity of the direct implementation, where L × L is the window size for feature evaluation.
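The speed-up comes from reusing partial results across overlapping windows. As a loose analogy for that idea (not the paper's structure-adaptive procedure, whose per-pixel cost remains O(L²)), the sketch below computes every L × L window mean from a summed-area table, so each window costs a constant number of operations regardless of L.

```python
# Sketch of constant-time-per-window means via a summed-area table; an
# illustration of reusing partial sums, not the paper's algorithm.
import numpy as np


def window_means(image, L):
    """Mean over every centered L x L window (L odd), from one pass of cumulative sums."""
    pad = L // 2
    padded = np.pad(image.astype(np.float64), pad, mode="edge")
    # Integral image with a leading zero row/column: s[i, j] = sum of padded[:i, :j].
    s = np.zeros((padded.shape[0] + 1, padded.shape[1] + 1))
    s[1:, 1:] = padded.cumsum(axis=0).cumsum(axis=1)
    h, w = image.shape
    # Each window sum is four table lookups, independent of L.
    sums = s[L:L + h, L:L + w] - s[:h, L:L + w] - s[L:L + h, :w] + s[:h, :w]
    return sums / (L * L)
```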


Journal of Electronic Imaging | 2011

Spatiotemporal attention operator using isotropic contrast and regional homogeneity

Roman M. Palenichka; Ahmed Lakhssassi; Marek B. Zaremba

A multiscale operator for spatiotemporal isotropic attention is proposed to reliably extract attention points during image sequence analysis. Its consecutive local maxima indicate attention points as the centers of image fragments of variable size with high intensity contrast, region homogeneity, regional shape saliency, and the presence of temporal change. The scale-adaptive estimation of temporal change (motion) and its aggregation with the regional shape saliency contribute to the accurate determination of attention points in image sequences. Multilocation descriptors of an image sequence are extracted at the attention points in the form of a set of multidimensional descriptor vectors. A fast recursive implementation is also proposed to make the operator's computational complexity independent of the spatial scale size, which is the window size in the spatial averaging filter. Experiments on the accuracy of attention-point detection confirmed the operator's consistency and its high potential for multiscale feature extraction from image sequences.
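A very simplified spatiotemporal saliency map in the spirit of such an operator could be assembled as below, combining multiscale spatial contrast with aggregated frame-to-frame change; the box-filter scales and the weighting are assumptions, and the paper's recursive constant-time filtering is approximated here by library averaging filters.

```python
# Sketch of a spatiotemporal attention map from two consecutive frames;
# scales and the alpha weight are illustrative parameters.
import numpy as np
from scipy import ndimage


def spatiotemporal_attention(frame_prev, frame_curr, scales=(5, 11, 21), alpha=0.5):
    """Return a saliency map combining multiscale contrast and motion energy."""
    f = frame_curr.astype(np.float64)
    motion = np.abs(f - frame_prev.astype(np.float64))     # temporal change map
    best = np.zeros_like(f)
    for size in scales:
        local_mean = ndimage.uniform_filter(f, size=size)
        contrast = np.abs(f - local_mean)                   # spatial contrast at this scale
        change = ndimage.uniform_filter(motion, size=size)  # aggregated temporal change
        best = np.maximum(best, contrast + alpha * change)
    return best
```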


Journal of Electronic Imaging | 2006

Multiscale model-based feature extraction in structural texture images

Roman M. Palenichka; Marek B. Zaremba; Rokia Missaoui

We deal with the problem of time-efficient extraction of structural features in a large class of structural texture images. The proposed approach of multiscale morphological texture modeling describes explicitly and concisely both shape and intensity parameters in the structural texture model. The modeling is based on a morphological skeletal representation of structural texture cells as objects of interest and the genomic growth of a texture region starting from a seed cell. This representation offers the advantage of concise description of texture cells as compared to the existing edge-based or contour-based approaches. A computationally efficient estimation of the structural texture parameters for texture segmentation tasks is proposed. The model parameter estimation and subsequent feature extraction rely on cell localization and scale-based locally adaptive binarization of the localized cells using isotropic matched filtering. The multiscale isotropic matched filter (MIMF) provides a scale- and orientation-invariant detection of structural cells regarded as multiple objects of interest in texture regions. Results of experiments pertaining to the parameter estimation from synthetic and real texture images as well as the segmentation of texture regions based on structural features are also provided.
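The cell-binarization step could be sketched as below, assuming a cell centre and scale supplied by a detector such as the MIMF; the mean-based local threshold is an illustrative choice rather than the paper's estimator.

```python
# Sketch of scale-based, locally adaptive binarization around a detected
# texture-cell centre; the threshold rule is an assumption.
import numpy as np


def binarize_cell(image, center, scale):
    """Binarize a (2*scale+1)-wide window around `center` with its local mean."""
    r, c = center
    r0, r1 = max(r - scale, 0), min(r + scale + 1, image.shape[0])
    c0, c1 = max(c - scale, 0), min(c + scale + 1, image.shape[1])
    window = image[r0:r1, c0:c1].astype(np.float64)
    return window > window.mean()          # True for cell (foreground) pixels
```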


Lecture Notes in Computer Science | 2002

A Visual Attention Operator Based on Morphological Models of Images and Maximum Likelihood Decision

Roman M. Palenichka

The goal of the image analysis approach presented in this paper is two-fold. First, it is the development of a computational model for visual attention in humans and animals that is consistent with known psychophysical experiments and neurological findings on early vision mechanisms. Second, it is the model-based design of an attention operator in computer vision that is capable of quickly detecting, locating, and tracing objects of interest in images. The proposed attention operator, named the image relevance function, is a local image operator that has local maxima at the centers of likely objects of interest or their relevant parts. This approach has several advantages in detecting objects in images owing to the model-based design of the relevance function and the use of a maximum likelihood decision.
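The maximum likelihood decision at an attention point might be prototyped as below, assuming Gaussian intensity models for object and background whose parameters are estimated elsewhere; the model form and the implied equal priors are illustrative assumptions.

```python
# Sketch of a maximum likelihood accept/reject decision for an image fragment
# at an attention point, under assumed Gaussian intensity models.
import numpy as np


def is_object(fragment, mu_obj, sigma_obj, mu_bg, sigma_bg):
    """Accept the fragment if its log-likelihood under the object model
    exceeds that under the background model."""
    x = fragment.astype(np.float64).ravel()

    def log_likelihood(mu, sigma):
        return np.sum(-0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi)))

    return log_likelihood(mu_obj, sigma_obj) > log_likelihood(mu_bg, sigma_bg)
```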


Machine Learning and Data Mining in Pattern Recognition | 1999

Extraction of Local Structural Features in Images by Using a Multi-scale Relevance Function

Roman M. Palenichka; Maxim Volgin

Extraction of structural features in radiographic images is considered in the context of flaw detection, with applications to industrial and medical diagnostics. Known approaches, such as histogram-based binarization, yield poor detection results for such images, which contain small, low-contrast objects of interest on a noisy background. In the presented model-based method, the detection of objects of interest is treated as a consecutive, hierarchical extraction of the structural features (primitive patterns) that compose these objects as aggregations of primitive patterns. The concept of a relevance function is introduced in order to quickly locate and identify primitive patterns using the binarization of regions of attention. The proposed feature extraction method has been tested on radiographic images for defect detection in weld joints and for the extraction of blood vessels in angiography.

Collaboration


Dive into Roman M. Palenichka's collaboration.

Top Co-Authors

Marek B. Zaremba (Université du Québec en Outaouais)
Ahmed Lakhssassi (Université du Québec en Outaouais)
Rokia Missaoui (Université du Québec en Outaouais)
Frédérik Doyon (Université du Québec en Outaouais)
Michel Saydé (Université du Québec en Outaouais)
Maria Petrou (Imperial College London)
Dianne Richardson (Canada Centre for Remote Sensing)
Emmanuel Kengne (Université du Québec en Outaouais)