Publication


Featured research published by Marcin Kopaczka.


International Conference on Computer Vision Theory and Applications | 2016

Robust Facial Landmark Detection and Face Tracking in Thermal Infrared Images using Active Appearance Models

Marcin Kopaczka; Kemal Acar; Dorit Merhof

Long-wave infrared (LWIR) imaging is an imaging modality currently gaining increasing attention. Facial images acquired with LWIR sensors can be used for illumination-invariant person recognition and the contactless extraction of vital signs such as respiratory rate. In order to work properly, these applications require precise detection of faces and regions of interest such as the eyes or nose. Most current facial landmark detectors in the LWIR spectrum localize single salient facial regions by thresholding. These approaches are not robust against out-of-plane rotation and occlusion. To address this problem, we introduce an LWIR face tracking method based on an active appearance model (AAM). The model is trained on a manually annotated database of thermal face images. Additionally, we evaluate the effect of different methods for AAM generation and image preprocessing on fitting performance. The method is evaluated on a set of still images and a video sequence. Results show that AAMs are a robust method for the detection and tracking of facial landmarks in the LWIR spectrum.
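The shape component of an active appearance model is a PCA point-distribution model built from annotated landmark sets. A minimal numpy sketch of that component follows; the array shapes and component count are illustrative assumptions, not the paper's actual training setup:

```python
import numpy as np

def build_shape_model(shapes, n_components=2):
    """PCA point-distribution model: the shape half of an AAM.

    shapes: array (N, L, 2) -- N training shapes with L landmarks each.
    Returns the mean shape vector and the top principal components.
    """
    X = shapes.reshape(len(shapes), -1)          # flatten to (N, 2L)
    mean = X.mean(axis=0)
    # SVD of the centered data yields the principal modes of shape variation
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def reconstruct(mean, components, params):
    """Synthesize a shape from low-dimensional model parameters."""
    return (mean + params @ components).reshape(-1, 2)
```

Fitting an AAM then amounts to searching this low-dimensional parameter space (together with an analogous appearance model) so that the synthesized face best matches the image.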


Emerging Technologies and Factory Automation | 2016

Automated enhancement and detection of stripe defects in large circular weft knitted fabrics

Marcin Kopaczka; Hanry Ham; Kristina Simonis; Raphael Kolk; Dorit Merhof

Stripes are periodic defects that are difficult to detect during production, even for experienced human inspectors. We therefore introduce an image processing method for automatically detecting stripe defects in circularly knitted fabric. We show how a barely visible defect can be optically enhanced to improve manual assessment, and how descriptor-based image processing and machine learning can be used for automated stripe detection. Image enhancement is performed by applying Gabor and matched filters to histogram-equalized fabric images. Subsequently, we extract image information with different descriptors (LBP, GLCM, HOG) and feed these into random forest and SVM classifiers. The full pipeline is validated by training and testing it on three sets of fabric produced with different knitting machines and parameter settings. Results show that the proposed enhancement, combined with a statistics-based descriptor such as GLCM or HOG, allows both tested classifiers to be trained with good classification rates of up to 98.9%.
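The GLCM descriptor used in this pipeline can be computed directly in numpy. A minimal sketch follows; the quantization level and pixel offset are illustrative choices, not the paper's settings:

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix (GLCM) contrast/energy/homogeneity.

    img: 2-D grayscale image; (dy, dx): offset defining the co-occurring pair.
    """
    # quantize to a small number of gray levels
    q = (img.astype(np.float64) * (levels - 1) / max(img.max(), 1)).astype(int)
    h, w = q.shape
    a = q[: h - dy, : w - dx].ravel()            # reference pixels
    b = q[dy:, dx:].ravel()                      # neighbors at offset (dy, dx)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1)                   # count co-occurrences
    glcm /= glcm.sum()                           # normalize to probabilities
    i, j = np.indices(glcm.shape)
    return {
        "contrast": float(((i - j) ** 2 * glcm).sum()),
        "energy": float((glcm ** 2).sum()),
        "homogeneity": float((glcm / (1.0 + np.abs(i - j))).sum()),
    }
```

These scalars, typically collected over several offsets and angles, form the feature vector fed to the SVM or random forest classifier.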


International Conference on Pattern Recognition Applications and Methods | 2018

Fully Automatic Faulty Weft Thread Detection using a Camera System and Feature-based Pattern Recognition

Marcin Kopaczka; Marco Saggiomo; Moritz Güttler; Thomas Gries; Dorit Merhof

In this paper, we present a novel approach for the fully automated detection of faulty weft threads on air-jet weaving machines using computer vision. The proposed system consists of a camera array for image acquisition and a classification pipeline in which we use different image processing and machine learning methods to allow precise localization and reliable classification of defects. The camera system is introduced and its advantages over other approaches are discussed. Subsequently, the processing steps are motivated and described in detail, followed by an in-depth analysis of the impact of different system parameters to allow choosing optimal algorithm combinations for the problem of faulty weft yarn detection. To analyze the capabilities of our solution, system performance is thoroughly evaluated under realistic production settings, showing excellent detection rates.


Bildverarbeitung für die Medizin | 2018

Towards Analysis of Mental Stress Using Thermal Infrared Tomography

Marcin Kopaczka; Thomas Jantos; Dorit Merhof

A number of publications have focused on detecting and measuring mental stress using infrared tomography, as it is a noninvasive and convenient monitoring method. Several facial regions of interest in which stress may potentially be detectable, such as the forehead, nose and upper lip, have been identified in previous contributions. However, these publications are not comparable, since they all rely on different approaches regarding both experiment design (stressor, ground truth/reference measurements) and evaluation methodology, such as either average temperature monitoring or advanced image processing methods. We therefore focus on two aspects: designing an experiment that allows reliable induction of mental stress and measurement of temperature changes in all aforementioned regions, and introducing and evaluating a GLCM-based method for quantitative analysis of the recorded image data. We show that signals extracted from the upper lip region correspond well with high stress levels, while no correspondence can be shown for the other regions. The suggested GLCM-based method is shown to be more specific towards stress response than established measurements based on average region temperature.


International Conference on Image Analysis and Processing | 2017

Face Tracking and Respiratory Signal Analysis for the Detection of Sleep Apnea in Thermal Infrared Videos with Head Movement

Marcin Kopaczka; Özcan Özkan; Dorit Merhof

Infrared thermography as an imaging modality has gained increasing attention in recent years. Its main advantages in human monitoring are illumination invariance and its ability to monitor physiological parameters such as heart and respiratory rates. In our work, we present a novel approach for detecting respiration-related events, in our case apnea events, from thermal infrared recordings. In contrast to previously published methods, where the subjects were required not to move, our approach uses state-of-the-art thermal face tracking to allow monitoring of subjects showing head movement, which is an important aspect for real-world applications. We implement different methods for apnea detection and face tracking and test them on videos of different subjects against a ground truth acquired with an established breathing rate monitoring system. Results show that our proposed approach allows robust apnea detection with moving subjects. Our methods allow using existing or novel vital sign monitoring systems under conditions where the monitored persons are not required to keep their heads in a given position.
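The core of such an apnea detector can be sketched in two steps: extract a respiratory trace (for example, the mean nostril-region temperature per frame, which oscillates with each breath) and flag windows in which the breathing amplitude collapses. The window length and threshold below are illustrative assumptions, not the parameters used in the paper:

```python
import numpy as np

def detect_apnea(resp_signal, fs, win_s=10.0, thresh_ratio=0.25):
    """Flag fixed-length windows whose breathing amplitude drops far below normal.

    resp_signal: 1-D respiratory trace (e.g. mean ROI temperature per frame)
    fs: sampling rate in Hz; win_s: window length in seconds.
    Returns one boolean flag per window (True = suspected apnea).
    """
    win = int(win_s * fs)
    n_windows = len(resp_signal) // win
    # per-window standard deviation approximates the local breathing amplitude
    amps = np.array([resp_signal[i * win:(i + 1) * win].std()
                     for i in range(n_windows)])
    # apnea: amplitude falls below a fraction of the typical (median) amplitude
    return amps < thresh_ratio * np.median(amps)
```

In the tracked setting described above, the ROI positions feeding this trace would come from the face tracker rather than from a fixed image region.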


Advanced Concepts for Intelligent Vision Systems | 2017

Face Detection in Thermal Infrared Images: A Comparison of Algorithm- and Machine-Learning-Based Approaches

Marcin Kopaczka; Jan Nestler; Dorit Merhof

In recent years, thermal infrared imaging has gained increasing attention in person monitoring tasks due to its numerous advantages, such as illumination invariance and the ability to monitor vital parameters directly. Many of these applications require monitoring of the facial region, and several methods for face detection in thermal infrared images have been developed in this context. Nearly all of these approaches make use of specific properties of facial images in the thermal infrared domain, such as local temperature maxima in the eye area or the fact that human bodies usually radiate more heat than the background. On the other hand, a number of well-performing methods for face detection in the visual spectrum have been introduced in recent years. These approaches use state-of-the-art algorithms from machine learning and feature extraction to detect faces in photographs and videos. So far, only one of these algorithms has been successfully applied to thermal infrared images. In our work, we therefore analyze how a larger number of these algorithms can be adapted to thermal infrared images, and show that a wide range of recently introduced algorithms for face detection in the visual spectrum can be trained to work in the thermal spectrum when an appropriate training database is available. Our evaluation shows that these machine-learning-based approaches outperform thermal-specific solutions in terms of detection accuracy and false positive rate. In conclusion, well-performing methods introduced for face detection in the visual spectrum can also be used for face detection in thermal infrared images, making dedicated thermal-specific solutions unnecessary.
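The thermal-specific baseline the abstract refers to, which exploits the fact that faces are warmer than the background, reduces to thresholding plus connected-component analysis. A minimal sketch with illustrative thresholds (not values from the paper):

```python
import numpy as np
from scipy import ndimage

def thermal_face_candidates(frame, temp_thresh=30.0, min_area=50):
    """Threshold-based face candidates: warm connected components in a thermal frame.

    frame: 2-D array of temperatures; returns (row, col, height, width) boxes.
    """
    mask = frame > temp_thresh                    # pixels warmer than background
    labels, _ = ndimage.label(mask)               # group into connected components
    boxes = []
    for sl in ndimage.find_objects(labels):
        if sl is None:
            continue
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        if h * w >= min_area:                     # discard small warm speckles
            boxes.append((sl[0].start, sl[1].start, h, w))
    return boxes
```

The machine-learning detectors compared in the paper replace this hand-crafted rule with features learned from an annotated thermal face database.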


BMC Bioinformatics | 2017

An unsupervised learning approach for tracking mice in an enclosed area

Jakob Unger; Mike Mansour; Marcin Kopaczka; Nina Gronloh; Marc Spehr; Dorit Merhof

Background: In neuroscience research, mouse models are valuable tools to understand the genetic mechanisms that advance evidence-based discovery. In this context, large-scale studies emphasize the need for automated high-throughput systems providing a reproducible behavioral assessment of mutant mice with only a minimal level of manual intervention. A basic element of such systems is a robust tracking algorithm. However, common tracking algorithms are either limited by overly specific model assumptions or have to be trained in an elaborate preprocessing step, which drastically limits their applicability for behavioral analysis.

Results: We present an unsupervised learning procedure, built as a two-stage process, that tracks mice in an enclosed area using shape matching and deformable segmentation models. The system is validated by comparing the tracking results with previously manually labeled landmarks in three setups with different environment, contrast and lighting conditions. Furthermore, we demonstrate that the system is able to automatically detect non-social and social behavior of interacting mice. The system demonstrates a high level of tracking accuracy and clearly outperforms the MiceProfiler, a recently proposed tracking software, which serves as benchmark for our experiments.

Conclusions: The proposed method shows promising potential to automate behavioral screening of mice and other animals and could therefore substantially increase the experimental throughput in behavioral assessment.
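The simplest form of the tracking idea, separating the animal from a static background and following its centroid, can be sketched in numpy. The paper's actual method additionally uses shape matching and deformable segmentation models; this is only the naive baseline such systems start from:

```python
import numpy as np

def track_centroids(frames, thresh=0.5):
    """Track a single moving animal by background subtraction.

    frames: array (T, H, W). The background is estimated as the per-pixel
    median over time, which is valid as long as the animal keeps moving.
    Returns one (row, col) centroid per frame, or None if nothing is detected.
    """
    background = np.median(frames, axis=0)
    centroids = []
    for f in frames:
        mask = np.abs(f - background) > thresh    # foreground = animal pixels
        ys, xs = np.nonzero(mask)
        centroids.append((ys.mean(), xs.mean()) if len(ys) else None)
    return centroids
```

This baseline fails when the animal rests (it becomes part of the background) or when two animals touch, which is precisely where the paper's shape-based models are needed.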


International Conference on Information Visualization Theory and Applications | 2016

Flattening of the Lung Surface with Temporal Consistency for the Follow-Up Assessment of Pleural Mesothelioma

Peter Faltin; Thomas Kraus; Marcin Kopaczka; Dorit Merhof

Malignant pleural mesothelioma is an aggressive tumor of the membrane surrounding the lung. The standardized assessment workflow comprises an inspection of 3D CT images to detect pleural thickenings, which act as indicators for this tumor. Up to now, the visualization of relevant information from the pleura has only been superficially addressed: current approaches still use a slice-wise visualization, which does not allow a global assessment of the lung surface. In this publication, we present an approach that enables a planar 2D visualization of the pleura by flattening its surface. A distortion-free mapping to a planar representation is generally not possible; the present method determines a planar representation with low distortions directly from a voxel-based surface. For a meaningful follow-up assessment, a consistent representation of a lung imaged at different points in time is highly important. The main focus of this publication is therefore to guarantee a consistent representation of the pleura of the same patient extracted from images taken at two different points in time. This temporal consistency is achieved by our newly proposed coupling of both surfaces during the flattening process. Additionally, a new initialization method that utilizes a flattened lung prototype speeds up the flattening process.


International Conference on Image Processing | 2016

An automated method for realistic face simulation and facial landmark annotation and its application to active appearance models

Marcin Kopaczka; Carlo Hensel; Dorit Merhof

Algorithms for facial landmark detection in real-world images require manually annotated training databases. However, selecting or creating the images and annotating the data is extremely time-consuming, leaving researchers with two options: investing significant amounts of time in creating annotated images optimized for the given task, or forgoing hand-labeled databases and using one of the few publicly available annotated datasets, with potentially limited applicability to the problem at hand. To provide an alternative, we introduce a method for automatically generating realistic synthetic face images together with facial landmark annotations. The proposed approach extends the automation capabilities of a commercial face modeling tool and allows large-scale generation of faces that fulfill user-defined requirements. As an additional feature, full facial landmark annotations can be computed during the generation procedure, reducing the manual work required to generate a full training set to a few interactions in a graphical user interface. We describe the generation procedure in detail and demonstrate that the simulated images can be used for advanced computer vision tasks, namely training an active appearance model that detects facial landmarks in real-world photographs.


Proceedings of SPIE | 2014

A rib-specific multimodal registration algorithm for fused unfolded rib visualization using PET/CT

Jens N. Kaftan; Marcin Kopaczka; Andreas Wimmer; Günther Platsch; Jerome Declerck

Respiratory motion affects the alignment of PET and CT volumes from PET/CT examinations in a non-rigid manner. This becomes particularly apparent when reviewing fine anatomical structures such as the ribs while assessing bone metastases, which frequently occur in many advanced cancers. To make this routine diagnostic task more efficient, a fused unfolded rib visualization for 18F-NaF PET/CT is presented, which allows the whole rib cage to be reviewed in a single image. This advanced visualization is enabled by a novel rib-specific registration algorithm that rigidly optimizes the local alignment of each individual rib in both modalities based on a matched-filter response function. More specifically, rib centerlines are automatically extracted from CT and subsequently individually aligned to the corresponding bone-specific PET rib uptake pattern. The proposed method has been validated on 20 PET/CT scans acquired at different clinical sites. It has been demonstrated that the presented rib-specific registration method significantly improves the rib alignment without having to run complex deformable registration algorithms. At the same time, it guarantees that rib lesions are not further deformed, which might otherwise affect quantitative measurements such as SUVs. Considering clinically relevant distance thresholds, the centerline portion with good alignment compared to the ground truth improved from 60.6% to 86.7% after registration, while approximately 98% can still be considered acceptably aligned.
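The matched-filter response at the heart of such a registration can be illustrated in one dimension: cross-correlate an expected uptake template with the measured profile and take the offset with the strongest response. The signal and template below are synthetic illustrations, not PET data:

```python
import numpy as np

def best_offset(profile, template):
    """Offset at which the template best matches the profile (matched filter)."""
    # subtract means so the response measures pattern match, not absolute level
    resp = np.correlate(profile - profile.mean(),
                        template - template.mean(), mode="valid")
    return int(np.argmax(resp))
```

In the paper's setting, the analogous response is evaluated along each extracted rib centerline to rigidly align it with the PET uptake pattern.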

Collaboration


Dive into Marcin Kopaczka's collaborations.

Top Co-Authors

Hanry Ham

RWTH Aachen University
