Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Michal Mackiewicz is active.

Publication


Featured research published by Michal Mackiewicz.


IEEE Transactions on Medical Imaging | 2008

Wireless Capsule Endoscopy Color Video Segmentation

Michal Mackiewicz; Jeff Berens; Mark Fisher

This paper describes the use of color image analysis to automatically discriminate between oesophagus, stomach, small intestine, and colon tissue in wireless capsule endoscopy (WCE). WCE uses "pill-cam" technology to recover color video imagery from the entire gastrointestinal tract. Accurately reviewing and reporting this data is a vital part of the examination, but it is tedious and time consuming. Automatic image analysis tools play an important role in supporting the clinician and speeding up this process. Our approach first divides the WCE image into subimages and rejects all subimages in which tissue is not clearly visible. We then create a feature vector combining color, texture, and motion information of the entire image and valid subimages. Color features are derived from hue saturation histograms, compressed using a hybrid transform, incorporating the discrete cosine transform and principal component analysis. A second feature combining color and texture information is derived using local binary patterns. The video is segmented into meaningful parts using support vector or multivariate Gaussian classifiers built within the framework of a hidden Markov model. We present experimental results that demonstrate the effectiveness of this method.
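The colour feature described in the abstract (a hue-saturation histogram compressed by the DCT stage of the hybrid transform) can be sketched as below. This is a minimal illustration, not the authors' implementation: the function names, bin count, and number of retained coefficients are assumptions, and the PCA stage of the hybrid transform is omitted.

```python
import numpy as np

def hs_histogram(hue, sat, bins=16):
    """2D hue-saturation histogram of an image region (values in [0, 1])."""
    hist, _, _ = np.histogram2d(hue.ravel(), sat.ravel(),
                                bins=bins, range=[[0, 1], [0, 1]])
    return hist / hist.sum()  # normalise to a probability distribution

def dct2(block):
    """2D orthonormal DCT-II, built from the 1D DCT matrix."""
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    d = np.cos(np.pi * k * (2 * i + 1) / (2 * n)) * np.sqrt(2 / n)
    d[0, :] = np.sqrt(1 / n)
    return d @ block @ d.T

def compress(hist, keep=6):
    """Keep only the low-frequency DCT coefficients (top-left triangle),
    discarding the high-frequency detail of the histogram."""
    coeffs = dct2(hist)
    k, i = np.indices(coeffs.shape)
    return coeffs[(k + i) < keep]
```

With `keep=6` this reduces a 16x16 histogram to a 21-element vector, which the paper's pipeline would then decorrelate further with PCA before classification.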


Medical Imaging 2005: Image Processing | 2005

Stomach, intestine, and colon tissue discriminators for wireless capsule endoscopy images

Jeff Berens; Michal Mackiewicz; Duncan Bell

Wireless Capsule Endoscopy (WCE) is a new colour imaging technology that enables close examination of the interior of the entire small intestine. Typically, the WCE operates for ~8 hours and captures ~40,000 useful images. The images are viewed as a video sequence, which generally takes a doctor over an hour to analyse. In order to activate certain key features of the software provided with the capsule, it is necessary to locate and annotate the boundaries between certain gastrointestinal (GI) tract regions (stomach, intestine and colon) in the footage. In this paper we propose a method of automatically discriminating stomach, intestine and colon tissue in order to significantly reduce the video assessment time. We use hue saturation chromaticity histograms which are compressed using a hybrid transform, incorporating the Discrete Cosine Transform (DCT) and Principal Component Analysis (PCA). The performance of two classifiers is compared: k-nearest neighbour (kNN) and Support Vector Classifier (SVC). After training the classifier, we applied a narrowing step algorithm to converge to the points in the video where the capsule first passes through the pylorus (the valve between the stomach and the intestine) and later the ileocaecal valve (IV, the valve between the intestine and colon). We present experimental results that demonstrate the effectiveness of this method.
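The "narrowing step" idea, converging on a boundary frame without classifying all ~40,000 frames, can be sketched as a coarse scan followed by bisection. The paper does not give the algorithm in this form, so this is a simplified stand-in that assumes a single clean stomach-to-intestine transition and a hypothetical per-frame classifier `classify(i)`; real footage needs the classifier smoothing the paper describes.

```python
def locate_boundary(classify, n_frames, start_step=512):
    """Return the first frame index classified as 'after' the valve.

    classify(i) -> 0 for frames before the boundary (e.g. stomach),
                   1 for frames after it (e.g. intestine).
    """
    lo = 0
    # Coarse scan: jump through the video until a frame past the
    # boundary is found, remembering the last frame before it.
    i = 0
    while i < n_frames and classify(i) == 0:
        lo = i
        i += start_step
    hi = min(i, n_frames - 1)
    # Narrowing: bisect the remaining interval until it is one frame wide.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if classify(mid) == 0:
            lo = mid
        else:
            hi = mid
    return hi
```

Against an 8-hour video this needs on the order of tens of classifications rather than tens of thousands, which is the point of the narrowing step.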


International Conference on Acoustics, Speech, and Signal Processing | 2006

Colour and Texture Based Gastrointestinal Tissue Discrimination

Michal Mackiewicz; Jeff Berens; Mark Fisher; Duncan Bell

Wireless capsule endoscopy is a colour imaging technology that enables close examination of the interior of the entire small intestine. The wireless capsule endoscope (WCE) operates for ~8 hours and captures ~40,000 useful images. The images are viewed by a clinician as a video sequence, generally taking over an hour to analyse. In this paper we present a method of automatically discriminating stomach and intestine tissue which can significantly speed up one key part of the video analysis, namely the process of locating the pylorus, the valve between the stomach and the intestine. We divide the WCE image into 28 sub-regions and process only those regions where tissue is clearly visible. We create a feature vector using colour and texture information. The colour features are derived from hue saturation chromaticity histograms of the useful regions, compressed using a hybrid transform, incorporating the discrete cosine transform (DCT) and principal component analysis (PCA). The texture features are derived by singular value decomposition of the same tissue regions. After training the support vector classifier, we apply a discriminator algorithm, which scans the video with an increasing step and builds up a classification result sequence. By minimizing the number of misclassifications within this sequence, we predict the most probable position of the pylorus. We present experimental results that demonstrate the effectiveness of this method.
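The SVD-based texture feature mentioned above can be illustrated as follows. The abstract does not specify how the singular values are used, so this sketch makes an assumption: keep the top-k singular values of a grey-level patch, normalised so the descriptor does not depend on overall brightness.

```python
import numpy as np

def svd_texture(region, k=5):
    """Texture descriptor: top-k normalised singular values of a patch.

    Smooth tissue concentrates almost all energy in the first singular
    value; textured tissue spreads it across many. k=5 is illustrative.
    """
    s = np.linalg.svd(region.astype(float), compute_uv=False)
    s = s / (s.sum() + 1e-12)  # normalise away illumination scale
    return s[:k]
```

A flat patch therefore maps to a vector close to (1, 0, 0, 0, 0), while a strongly textured patch yields a much flatter spectrum.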


PLOS ONE | 2014

Chromatic Illumination Discrimination Ability Reveals that Human Colour Constancy Is Optimised for Blue Daylight Illuminations

Bradley Pearce; Stuart Crichton; Michal Mackiewicz; Graham D. Finlayson; Anya Hurlbert

The phenomenon of colour constancy in human visual perception keeps surface colours constant, despite changes in their reflected light due to changing illumination. Although colour constancy has evolved under a constrained subset of illuminations, it is unknown whether its underlying mechanisms, thought to involve multiple components from retina to cortex, are optimised for particular environmental variations. Here we demonstrate a new method for investigating colour constancy using illumination matching in real scenes which, unlike previous methods using surface matching and simulated scenes, allows testing of multiple, real illuminations. We use real scenes consisting of solid familiar or unfamiliar objects against uniform or variegated backgrounds and compare discrimination performance for typical illuminations from the daylight chromaticity locus (approximately blue-yellow) and atypical spectra from an orthogonal locus (approximately red-green, at correlated colour temperature 6700 K), all produced in real time by a 10-channel LED illuminator. We find that discrimination of illumination changes is poorer along the daylight locus than the atypical locus, and is poorest particularly for bluer illumination changes, demonstrating conversely that surface colour constancy is best for blue daylight illuminations. Illumination discrimination is also enhanced, and therefore colour constancy diminished, for uniform backgrounds, irrespective of the object type. These results are not explained by statistical properties of the scene signal changes at the retinal level. We conclude that high-level mechanisms of colour constancy are biased for the blue daylight illuminations and variegated backgrounds to which the human visual system has typically been exposed.


IEEE Transactions on Image Processing | 2015

Color Correction Using Root-Polynomial Regression

Graham D. Finlayson; Michal Mackiewicz; Anya Hurlbert

Cameras record three color responses (RGB) which are device dependent. Camera coordinates are mapped to a standard color space, such as XYZ (useful for color measurement), by a mapping function, e.g., a simple 3×3 linear transform (usually derived through regression). This mapping, which we will refer to as linear color correction (LCC), has been demonstrated to work well in a number of studies. However, it can map RGBs to XYZs with high error. The advantage of the LCC is that it is independent of camera exposure. An alternative and potentially more powerful method for color correction is polynomial color correction (PCC). Here, the R, G, and B values at a pixel are extended by polynomial terms. For a given calibration training set, PCC can significantly reduce the colorimetric error. However, the PCC fit depends on exposure, i.e., as exposure changes the vector of polynomial components is altered in a nonlinear way, which results in hue and saturation shifts. This paper proposes a new polynomial-type regression, loosely related to the idea of fractional polynomials, which we call root-PCC (RPCC). Our idea is to take each k-degree term in a polynomial expansion and replace it with its kth root. It is easy to show that terms defined in this way scale with exposure. RPCC is a simple (low complexity) extension of LCC. The experiments presented in this paper demonstrate that RPCC enhances color correction performance on real and synthetic data.
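The degree-2 case of the root-polynomial idea can be sketched as below: each degree-2 term (e.g. rg) is replaced by its square root, so every feature, like the RGBs themselves, scales linearly with exposure. The 6-term expansion and the least-squares fit are a minimal illustration of the idea, not the paper's full method.

```python
import numpy as np

def rp_expand(rgb):
    """Degree-2 root-polynomial expansion of an (N, 3) array of RGBs.
    Each degree-2 product is square-rooted, so all six features scale
    linearly when the RGBs are scaled by an exposure factor."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([r, g, b,
                     np.sqrt(r * g), np.sqrt(r * b), np.sqrt(g * b)],
                    axis=1)

def fit_rpcc(rgb, xyz):
    """Least-squares 6x3 matrix mapping expanded RGBs to target XYZs."""
    m, *_ = np.linalg.lstsq(rp_expand(rgb), xyz, rcond=None)
    return m

def apply_rpcc(rgb, m):
    return rp_expand(rgb) @ m
```

The key property follows directly: apply_rpcc(k * rgb, m) equals k * apply_rpcc(rgb, m), so the correction does not introduce hue or saturation shifts when exposure changes, unlike ordinary PCC.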


Journal of The Optical Society of America A-optics Image Science and Vision | 2015

Reference data set for camera spectral sensitivity estimation.

Maryam Mohammadzadeh Darrodi; Graham D. Finlayson; Teresa Goodman; Michal Mackiewicz

In this article, we describe a spectral sensitivity measurement procedure at the National Physical Laboratory, London, with the aim of obtaining ground truth spectral sensitivity functions for Nikon D5100 and Sigma SD1 Merrill cameras. The novelty of our data is that the potential measurement errors are estimated at each wavelength. We determine how well the measured spectral sensitivity functions represent the actual camera sensitivity functions (as a function of wavelength). The second contribution of this paper is to test the performance of various leading sensor estimation techniques implemented from the literature using measured and synthetic data and also evaluate them based on ground truth data for the two cameras. We conclude that the estimation techniques tested are not sufficiently accurate when compared with our measured ground truth data and that there remains significant scope to improve estimation algorithms for spectral estimation. To help in this endeavor, we will make all our data available online for the community.
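A representative member of the family of estimation techniques the paper evaluates is linear least squares with a smoothness prior. The sketch below is a simplified stand-in, not any specific method from the paper; the regularisation weight and the second-difference prior are illustrative choices.

```python
import numpy as np

def estimate_sensitivity(radiance, response, lam=1e-3):
    """Estimate one channel's spectral sensitivity s from camera data.

    Solves min_s ||R s - c||^2 + lam * ||D s||^2, where the rows of R
    (shape N x W) are measured scene/monochromator spectra, c (length N)
    holds the channel's responses to them, and D is a second-difference
    operator that penalises non-smooth sensitivity curves.
    """
    w = radiance.shape[1]                  # number of wavelength samples
    d = np.diff(np.eye(w), n=2, axis=0)    # second-difference smoothness prior
    a = radiance.T @ radiance + lam * d.T @ d
    b = radiance.T @ response
    return np.linalg.solve(a, b)
```

With many well-conditioned training spectra this recovers a synthetic sensitivity almost exactly; the paper's finding is that on real cameras, with measurement noise and limited spectra, such techniques fall well short of the measured ground truth.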


Archive | 2011

Capsule Endoscopy: State of the Technology and Computer Vision Tools after the First Decade

Michal Mackiewicz

Wireless Capsule Endoscopy (WCE) is a recent and exciting technology, which involves recording images of the entire Gastrointestinal (GI) tract, including parts of the human body never before seen outside operative surgery. The development of the capsule was first announced in Nature in 2000 by Iddan et al. (2000). Since then a number of different capsules have been launched by different vendors, which varied slightly in their purpose but retained the principle of wireless non-invasive investigation of the GI tract. It is particularly suited for computer-assisted diagnosis, as it records a large quantity of data (mostly, but not exclusively, images) from the human gut, which consequently requires a time-consuming visual assessment that can be carried out only by an experienced clinician. The duration of this assessment, which involves the scrutiny of a video comprising approximately 50,000 frames, varies between one and two hours. Thus, in terms of time requirement, the WCE is a very costly medical imaging procedure. This opens a door for computers to aid the analysis of the WCE footage, by reducing the time required to reach the diagnosis and thus the cost of the procedure, making it a more affordable technique. This view is supported by the leading endoscopists in the United Kingdom:


Machine Vision of Animals and their Behaviour Workshop 2015 | 2015

Convolutional Neural Networks for Counting Fish in Fisheries Surveillance Video

Geoffrey French; Mark Fisher; Michal Mackiewicz; Coby Needle

We present a computer vision tool that analyses video from a CCTV system installed on fishing trawlers to monitor discarded fish catch. The system aims to support expert observers who review the footage and verify numbers, species and sizes of discarded fish. The operational environment presents a significant challenge for these tasks. Fish are processed below deck under fluorescent lights, they are randomly oriented and there are multiple occlusions. The scene is unstructured and complicated by the presence of fishermen processing the catch. We describe an approach to segmenting the scene and counting fish that exploits the N^4-Fields algorithm. We performed extensive tests of the algorithm on a data set comprising 443 frames from 6 belts. Results indicate the relative count error (for individual fish) ranges from 2% to 16%. We believe this is the first system that is able to handle footage from operational trawlers.


Journal of The Optical Society of America A-optics Image Science and Vision | 2014

On calculating metamer sets for spectrally tunable LED illuminators

Graham D. Finlayson; Michal Mackiewicz; Anya Hurlbert; Bradley Pearce; Stuart Crichton


Plant Methods | 2017

Leaf-GP: an open and automated software application for measuring growth phenotypes for Arabidopsis and wheat

Ji Zhou; Christopher Applegate; Albor Dobon Alonso; Daniel Reynolds; Simon Orford; Michal Mackiewicz; Simon Griffiths; Steven Penfield; Nick Pullen

Collaboration


Dive into Michal Mackiewicz's collaboration.

Top Co-Authors

Mark Fisher (University of East Anglia)
Geoffrey French (University of East Anglia)
Jeff Berens (University of East Anglia)
Duncan Bell (Norfolk and Norwich University Hospital)
Teresa Goodman (National Physical Laboratory)