Network


Latest external collaborations at the country level. Click on the dots to dive into the details.

Hotspot


Dive into the research topics where Henryk Blasinski is active.

Publication


Featured research published by Henryk Blasinski.


International Conference on Image Processing | 2015

An iterative algorithm for spectral estimation with spatial smoothing

Henryk Blasinski; Joyce E. Farrell; Brian A. Wandell

Many multispectral imaging systems are computational in nature and require processing of raw data in order to obtain radiance spectra. In this paper, we derive a fast and scalable spectral estimation algorithm based on the Alternating Direction Method of Multipliers (ADMM). Using this approach we solve for the unknown surface spectral reflectance simultaneously for all pixels in the image. This global formulation allows us to incorporate spatial as well as spectral regularizers, such as total variation penalty or non-negativity. We show that the estimates derived with our solver are more accurate and more robust in the presence of noise.
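The paper's solver is built on ADMM; as a minimal illustrative sketch (not the paper's implementation), the non-negativity constraint it mentions can be handled by alternating a quadratic solve with a projection onto the non-negative orthant. The matrix `A`, vector `b`, and penalty `rho` below are synthetic placeholders.

```python
import numpy as np

# Hedged sketch: ADMM for non-negative least squares,
#   min ||A x - b||^2  subject to  x >= 0,
# the kind of constrained subproblem the spectral estimator solves.
def admm_nnls(A, b, rho=1.0, iters=300):
    n = A.shape[1]
    M = A.T @ A + rho * np.eye(n)   # pre-form the x-update system
    Atb = A.T @ b
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # quadratic step
        z = np.maximum(x + u, 0.0)                   # project onto x >= 0
        u = u + x - z                                # dual update
    return z

# Tiny demo with a known non-negative solution
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([0.5, 0.0, 1.2, 0.0, 0.3])
b = A @ x_true
x_hat = admm_nnls(A, b)
```

The same splitting extends to the spatial regularizers the abstract mentions (e.g. total variation) by adding further auxiliary variables.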


Proceedings of SPIE | 2014

The color of water: using underwater photography to estimate water quality

John C. Breneman; Henryk Blasinski; Joyce E. Farrell

We describe a model for underwater illumination that is based on how light is absorbed and scattered by water, phytoplankton and other organic and inorganic matter in the water. To test the model, we built a color rig using a commercial point-and-shoot camera in an underwater housing and a calibrated color target. We used the measured spectral reflectance of the calibration color target and the measured spectral sensitivity of the camera to estimate the spectral power of the illuminant at the surface of the water. We then used this information, along with spectral basis functions describing light absorbance by water, phytoplankton, non-algal particles (NAP) and colored dissolved organic matter (CDOM), to estimate the spectral power of the illuminant and the amount of scattered light at each depth. Our results lead to insights about color correction, as well as the limitations of consumer digital cameras for monitoring water quality.
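The illuminant-estimation step described above is linear once the target reflectances and camera sensitivities are known. The following is a minimal sketch of that linear model with synthetic stand-in spectra (not calibration data from the paper): each (channel, patch) pair contributes one equation in the unknown illuminant spectrum, which a ridge-regularized least squares recovers.

```python
import numpy as np

# Sensor model: m[c, p] = sum_l S[c, l] * R[p, l] * e[l]
# S: camera spectral sensitivities, R: target reflectances (both "measured"),
# e: the unknown illuminant spectral power distribution.
n_l = 31                                   # e.g. 400-700 nm in 10 nm steps
rng = np.random.default_rng(1)
S = rng.random((3, n_l))                   # RGB sensitivities (stand-in)
R = rng.random((24, n_l))                  # 24-patch target reflectances (stand-in)
e_true = np.linspace(0.5, 1.5, n_l)        # illustrative illuminant ramp

# Stack one linear equation per (channel, patch) pair
A = np.vstack([S[c] * R[p] for c in range(3) for p in range(24)])
m = A @ e_true                             # simulated raw sensor values

# Small ridge term keeps the estimate stable if A is ill-conditioned
lam = 1e-8
e_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_l), A.T @ m)
```

With 72 equations for 31 spectral samples the system is overdetermined, which is why a single calibrated color target suffices to estimate the illuminant.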


Computer Vision and Pattern Recognition | 2017

Designing Illuminant Spectral Power Distributions for Surface Classification

Henryk Blasinski; Joyce E. Farrell; Brian A. Wandell

There are many scientific, medical and industrial imaging applications where users have full control of the scene illumination and color reproduction is not the primary objective. For example, it is possible to co-design sensors and spectral illumination in order to classify and detect changes in biological tissues, organic and inorganic materials, and object surface properties. In this paper, we propose two different approaches to illuminant spectrum selection for surface classification. In the supervised framework we formulate a biconvex optimization problem where we alternate between optimizing support vector classifier weights and illuminant spectra. We also describe a sparse Principal Component Analysis (PCA) dimensionality reduction approach that can be used with unlabeled data. We efficiently solve the non-convex PCA problem using a convex relaxation and the Alternating Direction Method of Multipliers (ADMM). We compare the classification accuracy of a monochrome imaging sensor with optimized illuminants to the classification accuracy of conventional RGB cameras with natural broadband illumination.


International Symposium on Biomedical Imaging | 2016

Multispectral imaging of tissue ablation

Henryk Blasinski; Jeff Caves; Joyce E. Farrell; Brian A. Wandell; Paul J. Wang

We describe a computational spectral imaging system that can acquire data in both the visible and near-infrared bands. We first show that the system is capable of estimating the full spectral reflectance of porcine tissue. We then show that the output of the system can be combined with machine learning algorithms to classify heart tissue that has been ablated for different amounts of time. The results of our analysis can be used to guide the design of an imaging system that can help cardiac electrophysiologists identify the position and efficacy of ablations during surgery.


Electronic Imaging | 2016

A Three Parameter Underwater Image Formation Model

Henryk Blasinski; Joyce E. Farrell

We developed an underwater image formation model that describes how light is absorbed and scattered by seawater and its constituents. We use the model to predict digital camera images of a reference target with known spectral reflectance at different distances and depths. We describe an inverse estimation method to derive three model parameters: the phytoplankton absorption spectrum, the chlorophyll concentration, and the amount of colored dissolved organic matter (CDOM). The estimated parameters predict the spectral attenuation of light, which can be used to color balance the images. In addition, the parameter estimates can be used to monitor environmental changes, turning a consumer digital camera into a scientific measurement device.

Introduction

The digital camera has become an accessory that most people take with them everywhere, including underwater. Sadly, they are often disappointed with the quality of their underwater images. Backscattered light reduces image contrast, and wavelength-dependent light absorption by water introduces color changes [3, 20]. No doubt the quality of underwater photography will improve as the low-light sensitivity of imaging sensors increases and as new image processing methods are introduced. Several underwater image correction algorithms operating on RGB images have been proposed [9, 19], but only a few methods analyze the data in the spectral domain [1, 2, 4]. In most cases, the goal of these algorithms is to improve color rendering rather than to infer biologically relevant quantities [4, 14, 25]. In this paper we consider how to derive scientific data from underwater camera sensor images in order to characterize the ocean seawater environment. We also illustrate how this data can be used to process and improve the perceived quality of underwater images.

We developed an underwater image formation model to describe how light is absorbed and scattered by water and its constituents, and how light is captured by the imaging sensor in a digital camera. We use our model to simulate the appearance of images captured by digital cameras and to relate that appearance to physically interpretable quantities, such as the type and amount of phytoplankton and other organic and inorganic matter in seawater [16]. We also use the insights gained from these simulations to improve the way we process underwater images in order to produce more aesthetically pleasing photographs [4].

Our underwater image formation model is composed of three components. First, we use the underwater image formation model of Jaffe and McGlamery [13, 15] to describe light absorption and scattering in units of medium beam absorption and scattering coefficients. Second, we incorporate the results of oceanographic and biological research describing attenuation and scattering coefficients as functions of the concentrations of fundamental constituents of seawater: phytoplankton, colored dissolved organic matter (CDOM) and non-algal particles (NAP) [16]. Third, we use a full camera simulation package (ISET, [10]) to produce simulated images of underwater targets. We use the underwater image formation model to predict the sensor data captured by a digital camera at a fixed distance and depth from a reference target with known spectral reflectance. With the appropriate parameter settings, we can reproduce the appearance of sensor images captured by real cameras in similar underwater environments.

We wish to use the digital camera as a scientific instrument that can measure environmental factors, such as the type and concentration of phytoplankton and other material in the seawater. To accomplish this, we introduce an inverse estimation method that uses the camera sensor data to derive parameters that describe 1) the spectral absorption of light by phytoplankton, 2) the concentration of chlorophyll in phytoplankton, and 3) the amount of colored dissolved organic matter (CDOM). We use the inverse estimation method as a metric to evaluate how well any digital camera can be used to measure environmental parameters, and we consider how these measurements can also be used to improve the perceived quality of underwater images.

Image formation model

The measurement m produced by an imaging device is linearly related to the device's spectral sensitivity function p(λ) and the light radiance ρ(λ) reaching the photodetector [24]

m = ∫ p(λ)ρ(λ)dλ. (1)

A ray of light traveling between the source and the scene interacts with the medium in two ways. First, some of the light may be absorbed by the medium, and thus the overall intensity of the light is reduced. Second, the direction of propagation of a portion of the light ray may be changed, in a phenomenon called scattering. As a consequence of these interactions, the total radiance ρ(λ) along a particular ray of light reaching an imaging device can be decomposed into two additive components, direct ρd(λ) and backscattered ρb(λ) [3, 13, 15]

ρ(λ) = ρd(λ) + ρb(λ). (2)

The direct component contains all the light rays that, having been emitted by a source, interact with the scene. The backscattered component represents all the light rays whose direction of propagation was changed by the medium before they reached the target, which means they are captured by the imaging device without interacting with the scene (Fig. 1).

Figure 1: Direct and backscattered components in underwater imaging.

The McGlamery-Jaffe underwater image formation model [13, 15] describes how absorption and scattering affect the direct and backscattered radiance components. For uniform surfaces at a fixed distance from the camera, however, the radiance of the direct component depends on the light source spectral power distribution i(λ), the target surface spectral reflectance r(λ), and the attenuation of light introduced by the medium c(λ). The relationship is governed by the Beer-Lambert attenuation law [4]

ρd(λ) = r(λ)i(λ)e^(−dc(λ)), (3)

where d is the distance light travels through the medium.

Total attenuation coefficient

The total attenuation coefficient c(λ) describes how much light at wavelength λ is attenuated as it travels through the medium. Light attenuation depends on how much light the medium absorbs as well as how much light it scatters. The contributions of these two phenomena, denoted a(λ) and b(λ) for absorption and scattering respectively, define the total attenuation coefficient c(λ)

c(λ) = a(λ) + b(λ). (4)

Intuitively, the intensity of a particular ray of light traveling through a medium can be decreased either because photons are absorbed by the medium, or because some of the light starts to propagate in a different direction when it is reflected off small particles suspended in that medium. Along the ray, however, the net effect of these two distinct phenomena is the same: light intensity is reduced (Fig. 2).

Figure 2: The effects of absorption and scattering on the intensity of light traveling through a medium.

Absorption coefficient

In underwater environments the absorption coefficient is determined by the optical properties of pure seawater aw(λ) and the absorption properties of three seawater constituents: phytoplankton aΦ(λ), colored dissolved organic matter (CDOM) aCDOM(λ), and non-algal particles (NAP) aNAP(λ). The total absorption coefficient is given by the sum of the absorption properties of the constituents

a(λ) = aw(λ) + aΦ(λ) + aCDOM(λ) + aNAP(λ). (5)

Figure 3 shows the spectral absorption coefficients of the four constituents.

Figure 3: Spectral absorption coefficients of water and the three constituent classes, 400-700 nm (a(λ), a.u.).
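Equations (3)-(5) can be evaluated numerically. The sketch below uses made-up constituent spectra (not the calibrated curves from the paper) to show how the Beer-Lambert direct component falls off with path length, and how the red end of the spectrum is attenuated most by water.

```python
import numpy as np

# Illustrative constituent spectra on a 400-700 nm grid; shapes only
# loosely mimic the qualitative behavior described in the text.
wl = np.linspace(400, 700, 31)
a_w    = 0.005 + 0.5 * ((wl - 400) / 300) ** 4          # water absorbs red strongly
a_phi  = 0.05 * np.exp(-0.5 * ((wl - 440) / 30) ** 2)   # phytoplankton blue peak
a_cdom = 0.10 * np.exp(-0.014 * (wl - 400))             # CDOM exponential decay
a_nap  = 0.02 * np.exp(-0.011 * (wl - 400))             # non-algal particles
b      = 0.01 * (550 / wl)                              # scattering coefficient

# Eq. (4)-(5): total attenuation = total absorption + scattering
c = a_w + a_phi + a_cdom + a_nap + b

r = np.full_like(wl, 0.9)    # bright gray target reflectance
i = np.ones_like(wl)         # flat illuminant at the target

def direct_radiance(d):
    """Eq. (3): Beer-Lambert attenuated direct component at distance d."""
    return r * i * np.exp(-d * c)

rho_1m = direct_radiance(1.0)
rho_5m = direct_radiance(5.0)
```

Because c(λ) grows toward long wavelengths, the 5 m radiance is both dimmer overall and relatively blue-shifted, which is the color cast the inverse method is designed to undo.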


Electronic Imaging | 2015

Automatically designing an image processing pipeline for a five-band camera prototype using the local, linear, learned (L3) method

Qiyuan Tian; Henryk Blasinski; Steven Lansel; Haomiao Jiang; Munenori Fukunishi; Joyce E. Farrell; Brian A. Wandell

The development of an image processing pipeline for each new camera design can be time-consuming. To speed camera development, we developed a method named L3 (Local, Linear, Learned) that automatically creates an image processing pipeline for any design. In this paper, we describe how we used the L3 method to design and implement an image processing pipeline for a prototype camera with five color channels. The process includes calibrating and simulating the prototype, learning local linear transforms and accelerating the pipeline using graphics processing units (GPUs).
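The "learned" step of an L3-style pipeline can be sketched as fitting one linear transform per patch class by least squares; the class structure, patch size, and output space below are illustrative assumptions, not the prototype's actual configuration.

```python
import numpy as np

# Hedged sketch: for one class of local patches (e.g. grouped by CFA
# position and response level), learn a linear map from raw sensor
# patches X to target rendered values Y. Data here is synthetic.
rng = np.random.default_rng(2)
n_patches, patch_dim, out_dim = 500, 25, 3   # 5x5 patches -> XYZ-like output

X = rng.random((n_patches, patch_dim))       # raw patches of one class
T_true = rng.standard_normal((patch_dim, out_dim))
Y = X @ T_true                               # ideal rendered values

# Learn the class transform: T = argmin ||X T - Y||_F^2
T_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Rendering a new patch of this class is then a single matrix product,
# which is what makes the pipeline easy to accelerate on GPUs.
patch = rng.random(patch_dim)
rendered = patch @ T_hat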


International Conference on Image Processing | 2014

A model for estimating spectral properties of water from RGB images

Henryk Blasinski; John C. Breneman; Joyce E. Farrell

This paper presents an underwater image formation model incorporating scatter and the spectral absorption curves of pure water and three other particle classes: phytoplankton, non-algal particles and colored dissolved organic matter. We also describe an algorithm to derive these depth-dependent curves from a sequence of RGB images of a known target. Underwater images generated using the model closely resemble experimental images acquired in a variety of conditions. Similarly, the estimated absorption curves and relative particle concentrations agree with common-sense expectations.
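The core of the inverse problem is that Beer-Lambert attenuation makes log-intensity linear in distance. As a simplified per-channel sketch (the paper estimates full spectral curves, and these numbers are made up), two images of the same known target at two distances suffice to read off an attenuation coefficient.

```python
import numpy as np

# Beer-Lambert: I(d) = I0 * exp(-c d), so for two path lengths
#   c = ln(I(d1) / I(d2)) / (d2 - d1), channel by channel.
c_true = np.array([0.45, 0.12, 0.08])   # R, G, B attenuation; red highest in water
I0 = np.array([0.9, 0.85, 0.8])         # target radiance at zero path length

d1, d2 = 1.0, 3.0
I1 = I0 * np.exp(-c_true * d1)          # "image" of the target at d1
I2 = I0 * np.exp(-c_true * d2)          # "image" of the target at d2

c_hat = np.log(I1 / I2) / (d2 - d1)     # recovered attenuation coefficients
```

Note that I0 cancels in the ratio, which is why a sequence of images of the same target works even without absolute radiometric calibration.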


International Conference on Computational Photography | 2017

Computational multispectral flash

Henryk Blasinski; Joyce E. Farrell

Illumination plays an important role in the image capture process. Too little or too much energy in particular wavelengths can impact the scene appearance in a way that is difficult to correct with color constancy post-processing methods. We use an adjustable multispectral flash to modify the spectral illumination of a scene. The flash is composed of a small number of narrowband lights, and the imaging system takes a sequence of images of the scene under each of those lights. Pixel data is used to estimate the spectral power distribution of the ambient light, and to adjust the flash spectrum either to match or to complement the ambient illuminant. The optimized flash spectrum can be used in subsequent captures, or a synthetic image can be computationally rendered from the available data. Under extreme illumination conditions, images captured with the matching flash have no color cast, and the complementary flash produces more balanced colors. The proposed system also improves the quality of images captured in underwater environments.
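Choosing complementary flash weights can be posed as a non-negative least-squares fit of the LED spectra to whatever the ambient illuminant lacks. The sketch below solves it with projected gradient descent over synthetic placeholder spectra; the LED count, shapes, and target are illustrative assumptions, not the paper's hardware.

```python
import numpy as np

wl = np.linspace(400, 700, 31)

def led(center, width=20.0):
    """Gaussian stand-in for a narrowband LED spectrum."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

F = np.column_stack([led(c) for c in (450, 500, 550, 600, 650)])
ambient = 0.2 + 0.8 * (wl - 400) / 300     # ambient deficient in short wavelengths
target = np.ones_like(wl)                  # flat "white" combined illuminant

# Complementary flash: fit non-negative channel weights w so that
# ambient + F w approximates the target, i.e. min ||F w - deficit||, w >= 0.
deficit = np.maximum(target - ambient, 0.0)
G = F.T @ F
step = 1.0 / np.linalg.eigvalsh(G)[-1]     # 1/L step for gradient descent
w = np.zeros(F.shape[1])
for _ in range(500):
    w = np.maximum(w - step * (G @ w - F.T @ deficit), 0.0)  # projected gradient

combined = ambient + F @ w                 # spectrum seen by the camera
```

The matching-flash mode is the same fit with the ambient estimate itself as the target, so the flash dominates the illumination.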


Rundbrief der GI-Fachgruppe 5.10 Informationssystem-Architekturen | 2015

Simulation of Underwater Imaging Systems

Henryk Blasinski; Joyce E. Farrell

We use simulations to show that it is possible to classify coral reef pigments from underwater images captured by consumer RGB cameras. These simulations will help us design experiments in a controlled laboratory environment and to quantify the effect that water temperature has on the spectral signature of coral reef pigments.


Archive | 2015

Visualization of Heart Wall Tissue

Jeffrey M. Caves; Paul J. Wang; Joyce E. Farrell; Brian A. Wandell; Henryk Blasinski

Collaboration


Dive into Henryk Blasinski's collaborations.

Top Co-Authors
