Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jarno Mielikainen is active.

Publication


Featured research published by Jarno Mielikainen.


IEEE Geoscience and Remote Sensing Letters | 2008

Lossless Compression of Hyperspectral Images Using a Quantized Index to Lookup Tables

Jarno Mielikainen; Pekka Toivanen

We propose an enhancement to the lookup-table (LUT) algorithm for lossless compression of hyperspectral images. The original LUT method searches the previous band for a pixel equal in value to the pixel co-located with the one to be predicted; the pixel in the current band at the same position as the matched pixel is then used as the predictor, with LUTs used to speed up the search. The LUT method has also been extended into Locally Averaged Interband Scaling LUT (LAIS-LUT), which uses two LUTs per band and chooses as predictor whichever of the two LUT candidates is closer to the LAIS estimate. We propose uniformly quantizing the co-located pixels before using them to index the LUTs. Quantization reduces the size of the LUTs by an order of magnitude. The results show that the proposed method outperforms previous methods: a 3% increase in compression efficiency was observed over the current state-of-the-art method, LAIS-LUT.
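The LUT idea above can be sketched in a few lines. This is a minimal, illustrative single-LUT version, not the paper's LAIS-LUT with two tables per band; the quantization step, the fallback predictor, and all names are assumptions made for clarity.

```python
# Hedged sketch of a quantized-index LUT predictor (illustrative, not the
# exact published algorithm, which uses two LUTs per band via LAIS-LUT).

def quantize(value, step=16):
    """Uniformly quantize a pixel value, shrinking the LUT index range."""
    return value // step

def predict_band(prev_band, cur_band, step=16):
    """Predict each pixel of cur_band from the co-located prev_band pixel.

    The LUT maps a quantized previous-band value to the current-band value
    that most recently followed it; with no entry yet, the co-located pixel
    itself is the fallback predictor (an assumption of this sketch).
    """
    lut = {}
    predictions = []
    for i in range(len(cur_band)):
        key = quantize(prev_band[i], step)
        predictions.append(lut.get(key, prev_band[i]))
        # Update the LUT with the causal (already coded) pixel.
        lut[key] = cur_band[i]
    return predictions
```

With `step=16` the LUT needs 16x fewer entries than indexing by raw pixel value, which is the order-of-magnitude size reduction the abstract refers to.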


Pattern Recognition Letters | 2003

Edge detection in multispectral images using the self-organizing map

Pekka Toivanen; Jarkko Ansamaki; Jussi Parkkinen; Jarno Mielikainen

In this paper, two new methods for edge detection in multispectral images are presented. They are based on the self-organizing map (SOM) combined with a grayscale edge detector. With the two-dimensional SOM, an ordering of the pixel vectors is obtained by applying the Peano scan, whereas with the one-dimensional SOM this step can be omitted. It is shown that R-ordering-based methods may miss some parts of the edges, which the proposed methods can find. The proposed methods can also find edges in images consisting of metameric colors. Finally, it is shown that the proposed methods find edges properly in real multispectral airplane images. The size of the SOM determines the number of edges found. If the SOM is trained on a large color-vector database, the same SOM can be reused for numerous images.
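The one-dimensional-SOM variant can be sketched as follows: pixel vectors are mapped to scalar indices along an ordered 1-D SOM, and an ordinary grayscale edge detector is then run on the index image. The tiny hand-written codebook and the simple difference-based detector below are assumptions standing in for a trained SOM and a real edge operator.

```python
# Illustrative sketch: multispectral vectors -> 1-D SOM indices -> grayscale
# edge detection. The codebook stands in for a trained, ordered 1-D SOM.

def nearest_unit(vec, codebook):
    """Index of the SOM unit closest (squared Euclidean) to the pixel vector."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda k: dist2(vec, codebook[k]))

def som_index_image(image, codebook):
    """Replace each pixel vector by its 1-D SOM index (a pseudo-grayscale)."""
    return [[nearest_unit(px, codebook) for px in row] for row in image]

def horizontal_edges(gray, threshold=1):
    """Mark 1 where neighboring indices jump by more than `threshold`."""
    return [[1 if abs(row[x + 1] - row[x]) > threshold else 0
             for x in range(len(row) - 1)] for row in gray]

# An already-ordered 1-D "SOM": dark -> bright spectra (hypothetical values).
CODEBOOK = [(0, 0, 0), (60, 60, 60), (120, 120, 120), (200, 200, 200)]
```

Because the 1-D SOM is already ordered, no Peano scan is needed before the grayscale detector is applied, which is the shortcut the abstract describes.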


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2012

Improved GPU/CUDA Based Parallel Weather and Research Forecast (WRF) Single Moment 5-Class (WSM5) Cloud Microphysics

Jarno Mielikainen; Bormin Huang; Hung-Lung Allen Huang; Mitchell D. Goldberg

The Weather Research and Forecasting (WRF) model is an atmospheric simulation system designed for both operational and research use. WRF is currently in operational use at the National Oceanic and Atmospheric Administration (NOAA)'s National Weather Service, at the Air Force Weather Agency, and at meteorological services worldwide. Producing timely weather predictions with the latest advances in atmospheric science is a challenge even on the fastest supercomputers, and timely predictions are particularly valuable for severe weather events when lives and property are at risk. Microphysics is a crucial but computationally intensive part of WRF. The WRF Single-Moment 5-class (WSM5) microphysics scheme represents the fallout of various types of precipitation, condensation, and the thermodynamic effects of latent heat release. To expedite the computation, Graphics Processing Units (GPUs) are an attractive alternative to traditional CPU architectures. In this paper, we accelerate the WSM5 microphysics scheme on GPUs and obtain a considerable speedup, significantly reducing the processing time. Such high-performance, computationally efficient GPUs allow higher-resolution WRF forecasts, which in turn enable computing microphysical processes for increasingly small clouds and water droplets. To implement the WSM5 scheme on GPUs, the WRF code was rewritten in CUDA C, a high-level data-parallel programming language for NVIDIA GPUs. We observed a reduction in processing time from 16928 ms on a CPU to 43.5 ms on a GPU: a speedup of 389× without I/O using a single GPU, and 206× when I/O transfer times are taken into account. With four GPUs, the speedups increase to 1556× without I/O and 357× with I/O.
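The headline speedup follows directly from the quoted timings; a one-line check reproduces the single-GPU, no-I/O figure from the 16928 ms CPU and 43.5 ms GPU measurements.

```python
# Reproduce the quoted single-GPU speedup from the measured times.

def speedup(cpu_ms, gpu_ms):
    """Ratio of CPU to GPU processing time."""
    return cpu_ms / gpu_ms

print(round(speedup(16928, 43.5)))  # prints 389
```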


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2012

GPU Acceleration of the Updated Goddard Shortwave Radiation Scheme in the Weather Research and Forecasting (WRF) Model

Jarno Mielikainen; Bormin Huang; Hung-Lung Allen Huang; Mitchell D. Goldberg

The next-generation mesoscale numerical weather prediction system, the Weather Research and Forecasting (WRF) model, is designed for both forecasting and research use. WRF offers multiple physics options that can be combined in any way; one of them is radiation computation. The major source of energy for the Earth's climate is solar radiation, so it is imperative to accurately model the horizontal and vertical distribution of heating. The Goddard solar radiative transfer model includes absorption due to water vapor, O3, O2, CO2, clouds, and aerosols. The model computes the interactions among absorption and scattering by clouds, aerosols, molecules, and the surface. Finally, fluxes are integrated over the entire shortwave spectrum from 0.175 μm to 10 μm. In this paper, we develop an efficient graphics processing unit (GPU) based Goddard shortwave radiative scheme. The GPU-based Goddard shortwave scheme was compared to a CPU-based single-threaded counterpart on a computational domain of 422 × 297 horizontal grid points with 34 vertical levels. Both the original FORTRAN code on the CPU and the CUDA C code on the GPU use double-precision floating-point values. Processing time for the Goddard shortwave scheme on the CPU is 22106 ms, whereas on four GPUs it can be computed in 208.8 ms and 157.1 ms with and without I/O, respectively. Thus, the speedups are 116× with data I/O and 141× without I/O on two NVIDIA GTX 590s. Using single-precision arithmetic and less accurate arithmetic modes, the speedups increase to 536× without I/O and 259× with I/O.


Journal of Applied Remote Sensing | 2010

Constant coefficients linear prediction for lossless compression of ultraspectral sounder data using a graphics processing unit

Jarno Mielikainen; Risto Honkanen; Bormin Huang; Pekka Toivanen; Chulhee Lee

The amount of data generated by ultraspectral sounders is so large that considerable savings in storage and transmission bandwidth can be achieved through data compression, and because of this volume, compression time is of utmost importance. The increasing programmability of commodity Graphics Processing Units (GPUs) offers the potential for considerable speed increases in data-parallel applications. In our experiments, we implemented a spectral image compression method called Linear Prediction with Constant Coefficients (LP-CC) using NVIDIA's CUDA parallel computing architecture. LP-CC represents a current state-of-the-art technique in lossless compression of ultraspectral sounder data; the method showed an average compression ratio of 3.39 when applied to publicly available NASA AIRS data. We achieved a speed-up of 86× compared to a single-threaded CPU version. Thus, the commodity GPU was able to significantly decrease the computational time of a compression algorithm based on constant-coefficient linear prediction.
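Constant-coefficient linear prediction can be sketched compactly: each pixel in a band is predicted as a fixed linear combination of the co-located pixels in preceding bands, and only the (losslessly invertible) residuals are stored. The coefficients below are illustrative, not those of the published method.

```python
# Minimal sketch of linear prediction with constant coefficients (LP-CC).
# Coefficients are fixed across the image; only residuals would be encoded.

def lpcc_residuals(bands, coeffs):
    """Residuals for each band b >= len(coeffs), one list per predicted band.

    bands:  list of bands, each a flat list of co-located pixel values.
    coeffs: fixed coefficients applied to the preceding bands (most recent
            band first) -- an illustrative choice, not the published ones.
    """
    order = len(coeffs)
    residuals = []
    for b in range(order, len(bands)):
        band_res = []
        for i in range(len(bands[b])):
            pred = sum(c * bands[b - 1 - k][i] for k, c in enumerate(coeffs))
            band_res.append(bands[b][i] - round(pred))
        residuals.append(band_res)
    return residuals
```

Because every pixel's prediction depends only on already-decoded bands, all pixels of a band can be predicted independently, which is what makes the method a good fit for the data-parallel GPU implementation the abstract describes.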


Computers & Geosciences | 2013

Compute unified device architecture (CUDA)-based parallelization of WRF Kessler cloud microphysics scheme

Jarno Mielikainen; Bormin Huang; Jun Wang; H.-L. Allen Huang; Mitchell D. Goldberg

In recent years, graphics processing units (GPUs) have emerged as a low-cost, low-power, and very high performance alternative to conventional central processing units (CPUs). The latest GPUs offer speedups of two to three orders of magnitude over CPUs for various science and engineering applications. The Weather Research and Forecasting (WRF) model is the latest-generation numerical weather prediction model, designed to serve both operational forecasting and atmospheric research needs. It is useful for a broad spectrum of applications at domain scales ranging from meters to hundreds of kilometers. WRF computes an approximate solution to the differential equations that govern the motion of air in the atmosphere. The Kessler microphysics module in WRF is a simple warm-cloud scheme that includes water vapor, cloud water, and rain. The modeled microphysical processes are rain production, fall, and evaporation; accretion and auto-conversion of cloud water are also included, along with the production of cloud water from condensation. In this paper, we develop an efficient WRF Kessler microphysics scheme that runs on GPUs using the NVIDIA Compute Unified Device Architecture (CUDA). The GPU-based implementation of the Kessler scheme achieves a significant speedup of 70× over its CPU-based single-threaded counterpart. When a 4-GPU system is used, we achieve an overall speedup of 132× compared to the single-threaded CPU version.
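The property that makes microphysics schemes like Kessler's amenable to CUDA is that each vertical grid column is updated independently of its neighbors, so one GPU thread can own one (i, j) column. The toy update below (a crude rain-decay step, not the real physics) illustrates only that data-parallel structure; all quantities and rates are assumptions.

```python
# Toy illustration of the column-parallel structure exploited by the CUDA
# port: each column's update reads no data from any other column.

def update_column(column, evap=0.1):
    """Crude stand-in for a microphysics step: decay rain water in one column.

    `evap` is a hypothetical per-step evaporation fraction, not a real
    Kessler-scheme rate.
    """
    return [max(0.0, q - evap * q) for q in column]

def update_grid(grid):
    # An embarrassingly parallel map over columns; in the CUDA version each
    # column would be handled by its own GPU thread.
    return [update_column(col) for col in grid]
```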


IEEE Signal Processing Letters | 2015

Lossless Compression of Hyperspectral Imagery via Clustered Differential Pulse Code Modulation with Removal of Local Spectral Outliers

Jiaji Wu; Wanqiu Kong; Jarno Mielikainen; Bormin Huang

A high-order clustered differential pulse code modulation method with removal of local spectral outliers (C-DPCM-RLSO) is proposed for the lossless compression of hyperspectral images. By adaptively removing local spectral outliers, C-DPCM-RLSO improves the prediction accuracy of the high-order regression predictor and reduces the residuals between the predicted and original images. Experiments on a set of NASA Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) test images show that C-DPCM-RLSO achieves a comparable average compression gain with a much reduced execution time compared with previous lossless methods.
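The outlier-removal idea can be illustrated with a deliberately simplified one-coefficient regression: before fitting a per-cluster predictor from the previous band to the current one, samples whose band-to-band ratio deviates strongly from the cluster median are dropped so they cannot skew the fit. The real method uses high-order multiband regression per cluster; the median-ratio screen and threshold below are assumptions of this sketch.

```python
# Simplified sketch of outlier removal before per-cluster regression
# (the published C-DPCM-RLSO uses high-order multiband predictors).

def fit_coeff(prev, cur, max_dev=0.5):
    """Least-squares scalar c minimizing sum((cur - c*prev)^2), after
    dropping samples whose ratio deviates from the median by > max_dev."""
    ratios = [y / x for x, y in zip(prev, cur) if x != 0]
    med = sorted(ratios)[len(ratios) // 2]
    kept = [(x, y) for x, y in zip(prev, cur)
            if x != 0 and abs(y / x - med) <= max_dev]
    num = sum(x * y for x, y in kept)
    den = sum(x * x for x, y in kept)
    return num / den

def residuals(prev, cur, c):
    """Integer residuals actually stored by a lossless DPCM coder."""
    return [y - round(c * x) for x, y in zip(prev, cur)]
```

In the example below one outlier sample is excluded, so the fitted coefficient matches the clean samples exactly and their residuals vanish; only the outlier leaves a large residual.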


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2014

GPU-Accelerated Longwave Radiation Scheme of the Rapid Radiative Transfer Model for General Circulation Models (RRTMG)

Erik Price; Jarno Mielikainen; Melin Huang; Bormin Huang; Hung-Lung Allen Huang; Tsengdar Lee

Atmospheric radiative transfer models calculate the transfer of electromagnetic radiation through a planetary atmosphere. One such model is the rapid radiative transfer model (RRTM), which evaluates longwave and shortwave atmospheric radiative fluxes and heating rates. The RRTM for general circulation models (GCMs), RRTMG, is an accelerated version based on the single-column reference RRTM. Its longwave radiation scheme (RRTMG_LW) uses the correlated-k approach to calculate longwave fluxes and heating rates for application to GCMs. In this paper, the feasibility of using graphics processing units (GPUs) to accelerate RRTMG_LW in the Weather Research and Forecasting (WRF) model is examined. GPUs allow a substantial performance improvement in RRTMG_LW, offering a large number of parallel compute cores at low cost and power. Our GPU version of RRTMG_LW yields bit-exact outputs relative to the original Fortran code. Our results show that NVIDIA's K40 GPU achieves a speedup of x compared to one CPU core of an Intel Xeon E5-2603, whereas the speedup for one CPU socket (4 cores) of the Xeon E5-2603 with respect to one CPU core is only 3.2×.


scandinavian conference on image analysis | 2005

Fractal dimension analysis and statistical processing of paper surface images towards surface roughness measurement

Toni Kuparinen; Oleg Rodionov; Pekka Toivanen; Jarno Mielikainen; Vladimir Bochko; Ate Korkalainen; Juha Parviainen; Erik M. Vartiainen

In this paper, we present a method for optical paper surface roughness measurement that overcomes the disadvantages of traditional methods. Airflow-based roughness measurement methods and profilometers require expensive special equipment and laboratory conditions, and they are contact-based, slow, and unsuitable for on-line control purposes. We employed an optical microscope with a built-in CCD camera to take images of the paper surface; the obtained image is treated as a texture. We applied statistical brightness measures and fractal dimension analysis to the texture and found a strong correlation between roughness and fractal dimension. Our method is non-contact, fast, and suitable for on-line control measurements in the paper industry.
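A common way to estimate the fractal dimension of a texture is box counting: count the boxes occupied at several scales and fit the slope of log(count) against log(1/size). The sketch below uses point sets and illustrative scales; the paper's actual estimator and preprocessing may differ.

```python
# Minimal box-counting fractal-dimension sketch (illustrative scales).
import math

def box_count(points, size):
    """Number of size x size boxes containing at least one point."""
    return len({(x // size, y // size) for x, y in points})

def fractal_dimension(points, sizes=(1, 2, 4, 8)):
    """Least-squares slope of log(count) vs log(1/size)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(points, s)) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

As a sanity check, a straight line of pixels yields a dimension near 1, while a rougher, space-filling texture pushes the estimate toward 2, which is the direction of the roughness correlation the abstract reports.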


IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing | 2012

GPU Implementation of Stony Brook University 5-Class Cloud Microphysics Scheme in the WRF

Jarno Mielikainen; Bormin Huang; Hung-Lung Allen Huang; Mitchell D. Goldberg

The Weather Research and Forecasting (WRF) model is a next-generation mesoscale numerical weather prediction system designed to serve the needs of both operational forecasting and atmospheric research, across a broad spectrum of applications at scales ranging from meters to thousands of kilometers. Microphysics plays an important role in weather and climate prediction and includes explicitly resolved water vapor, cloud, and precipitation processes. Several bulk water microphysics schemes are available within WRF, with different numbers of simulated hydrometeor classes and different methods for estimating their sizes, fall speeds, distributions, and densities. The Stony Brook University scheme is a 5-class scheme with predicted riming intensity to account for mixed-phase processes. In this paper, we develop an efficient Graphics Processing Unit (GPU) based Stony Brook University scheme and compare it to a CPU-based single-threaded counterpart on a computational domain of 422 × 297 horizontal grid points with 34 vertical levels. The original Fortran code was first rewritten as standard C code, which was verified against the Fortran code; CUDA C extensions were then added for data-parallel execution on GPUs. On a single NVIDIA GTX 590, we achieved a speedup of 213× with data I/O and 896× without I/O. Using four GPUs, a speedup of 352× is achieved with I/O. We also discuss how data I/O would be less cumbersome if the complete WRF model were run on GPUs.

Collaboration


Dive into Jarno Mielikainen's collaborations.

Top Co-Authors

Bormin Huang (University of Wisconsin-Madison)
Allen Huang (University of Wisconsin-Madison)
Hung-Lung Allen Huang (University of Wisconsin-Madison)
Pekka Toivanen (University of Eastern Finland)
Mitchell D. Goldberg (National Oceanic and Atmospheric Administration)
Melin Huang (University of Wisconsin-Madison)
H.-L. Allen Huang (University of Wisconsin-Madison)
Erik Price (University of Wisconsin-Madison)