
Publication


Featured research published by Hossein Ziaei Nafchi.


IEEE Signal Processing Letters | 2015

FSITM: A Feature Similarity Index For Tone-Mapped Images

Hossein Ziaei Nafchi; Atena Shahkolaei; Reza Farrahi Moghaddam; Mohamed Cheriet

In this work, an objective index based on the local phase information of images, called the feature similarity index for tone-mapped images (FSITM), is proposed. To evaluate a tone-mapping operator (TMO), the proposed index compares the locally weighted mean phase angle map of an original high dynamic range (HDR) image with that of the associated tone-mapped image produced by the TMO. Experiments on two standard databases show that the proposed FSITM method outperforms the state-of-the-art index, the tone-mapped quality index (TMQI). In addition, higher performance is obtained by combining the FSITM and TMQI indices.
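
As a rough, hypothetical illustration of the phase-based comparison (not the authors' released implementation), the sketch below derives a local phase map from a single log-Gabor scale via the monogenic (Riesz) transform and scores the agreement between the binarized phase maps of the HDR and tone-mapped images; the single-scale simplification, filter parameters, and binarization rule are all assumptions.

```python
# Hypothetical single-scale sketch of a phase-based similarity score in the
# spirit of FSITM; the real index uses locally weighted mean phase angle
# maps over multiple scales. All parameter values here are assumptions.
import numpy as np

def local_phase(img, f0=0.1, sigma_on_f=0.55):
    """Monogenic local phase from one log-Gabor scale (simplified)."""
    rows, cols = img.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                                  # avoid log(0) at DC
    log_gabor = np.exp(-np.log(radius / f0) ** 2
                       / (2 * np.log(sigma_on_f) ** 2))
    log_gabor[0, 0] = 0.0                               # kill the DC response
    spectrum = np.fft.fft2(img) * log_gabor
    even = np.real(np.fft.ifft2(spectrum))              # even (symmetric) part
    odd1 = np.real(np.fft.ifft2(spectrum * 1j * fx / radius))  # Riesz, x
    odd2 = np.real(np.fft.ifft2(spectrum * 1j * fy / radius))  # Riesz, y
    return np.arctan2(np.hypot(odd1, odd2), even)       # phase in [0, pi]

def phase_similarity(hdr, tone_mapped):
    """Fraction of pixels whose binarized phase maps agree."""
    bin_hdr = local_phase(hdr) > np.pi / 2
    bin_ldr = local_phase(tone_mapped) > np.pi / 2
    return np.mean(bin_hdr == bin_ldr)
```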


International Conference on Document Analysis and Recognition | 2013

An Efficient Ground Truthing Tool for Binarization of Historical Manuscripts

Hossein Ziaei Nafchi; Seyed Morteza Ayatollahi; Reza Farrahi Moghaddam; Mohamed Cheriet

To facilitate benchmark contributions for binarization methods, a new fast ground-truthing approach, called PhaseGT, is proposed. This approach was used to build the first ground-truthed Persian Heritage Image Binarization Dataset (PHIBD 2012). PhaseGT is a semi-automatic approach to ground truthing images of any language, designed especially for historical document images. Its main goal is to accelerate the ground-truthing process and reduce the manual effort involved. It uses phase congruency features to preprocess the input image and provide a more accurate initial binarization to the human expert who performs the manual part; this preprocessing is in turn based on a priori knowledge provided by the human user. The PHIBD 2012 dataset contains 15 historical document images with their corresponding ground-truth binary images. The historical images in the dataset suffer from various types of degradation, and the dataset is divided into training and testing subsets for binarization methods that use learning approaches.
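
The workflow the abstract describes, an automatic initial binarization that a human expert then corrects, might be skeletonized as below; `phase_congruency` stands in for the paper's phase-based preprocessing and is a hypothetical callable, while the Otsu thresholding of its output is an assumption rather than PhaseGT's actual rule.

```python
# Skeleton of a PhaseGT-style semi-automatic ground-truthing step:
# threshold a phase-based feature map to seed the binarization that a
# human expert then corrects. The thresholding rule is an assumption.
import numpy as np

def otsu_threshold(values):
    """Classic Otsu threshold over a 1-D array of feature values."""
    hist, edges = np.histogram(values, bins=256)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, 256):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:i] * centers[:i]).sum() / w0
        mu1 = (hist[i:] * centers[i:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2    # between-class variance
        if between > best_var:
            best_var, best_t = between, centers[i]
    return best_t

def initial_binarization(image, phase_congruency):
    """Seed the ground truth from a (hypothetical) phase-congruency map."""
    pc = phase_congruency(image)                # hypothetical callable
    return pc > otsu_threshold(pc.ravel())      # expert corrects this result
```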


Pacific-Rim Symposium on Image and Video Technology | 2013

Global Haar-Like Features: A New Extension of Classic Haar Features for Efficient Face Detection in Noisy Images

Mahdi Rezaei; Hossein Ziaei Nafchi; Sandino Morales

This paper addresses the problem of detecting human faces in noisy images. We propose a method that includes a denoising preprocessing step and a new face detection approach based on a novel extension of Haar-like features. Preprocessing of the input images focuses on removing different types of noise while preserving the phase data. For the face detection process, we introduce the concept of global and dynamic global Haar-like features, which are complementary to the well-known classical Haar-like features. Matching dynamic global Haar-like features is faster than matching classical ones, and it does not increase the computational burden of the learning process. Experimental results on images from the MIT-CMU dataset are promising in terms of detection rate and false alarm rate in comparison with other competing algorithms.
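
To make the contrast between classical and global Haar-like features concrete, here is a hypothetical sketch using integral images: a classical two-rectangle feature compares adjacent regions inside the detection window, while the "global" variant below additionally references a whole-image statistic. This is a simplified reading of the abstract, not the paper's exact definition.

```python
# Hypothetical sketch contrasting a classic two-rectangle Haar feature with
# a "global" variant that also references the whole-image mean.
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column for easy indexing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum of pixels in a rectangle, in O(1) via the integral image."""
    return (ii[top + height, left + width] - ii[top, left + width]
            - ii[top + height, left] + ii[top, left])

def classic_haar(ii, top, left, height, width):
    """Two-rectangle feature: left half minus right half of the window."""
    half = width // 2
    return (rect_sum(ii, top, left, height, half)
            - rect_sum(ii, top, left + half, height, width - half))

def global_haar(ii, top, left, height, width):
    """Window mean minus whole-image mean (a 'global' reference term)."""
    win_mean = rect_sum(ii, top, left, height, width) / (height * width)
    img_mean = ii[-1, -1] / ((ii.shape[0] - 1) * (ii.shape[1] - 1))
    return win_mean - img_mean
```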


IEEE Access | 2016

Mean Deviation Similarity Index: Efficient and Reliable Full-Reference Image Quality Evaluator

Hossein Ziaei Nafchi; Atena Shahkolaei; Rachid Hedjam; Mohamed Cheriet

Applications of perceptual image quality assessment (IQA) in image and video processing, such as image acquisition, image compression, image restoration, and multimedia communication, have led to the development of many IQA metrics. In this paper, a reliable full-reference IQA model is proposed that utilizes gradient similarity (GS), chromaticity similarity (CS), and deviation pooling (DP). Considering the shortcomings of the commonly used GS in modeling the human visual system (HVS), a new GS is proposed through a fusion technique that is more likely to follow the HVS. We propose an efficient and effective formulation to calculate the joint similarity map of two chromatic channels for the purpose of measuring color changes. Compared with a commonly used formulation in the literature, the proposed CS map is shown to be more efficient and to provide comparable or better quality predictions. Motivated by a recent work that utilizes the standard DP, a general formulation of the DP is presented in this paper and used to compute a final score from the proposed GS and CS maps. This formulation of DP benefits from Minkowski pooling as well as a proposed power pooling. Experimental results on six datasets of natural images, a synthetic dataset, and a digitally retouched dataset show that the proposed index provides comparable or better quality predictions than the most recent competing state-of-the-art IQA metrics in the literature, while being reliable and of low complexity. The MATLAB source code of the proposed metric is available at https://dl.dropboxusercontent.com/u/74505502/MDSI.m.
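
A compact sketch of the three ingredients named in the abstract, a gradient-similarity map, a joint chromaticity-similarity map, and deviation pooling, is given below; the color-space coefficients, stabilizing constants, fusion weight, and pooling rule are assumptions, not the published formulation (lower scores indicate higher predicted quality here).

```python
# Sketch of the MDSI ingredients as described in the abstract: gradient
# similarity, joint chromaticity similarity, and deviation pooling.
# Constants and the fusion rule are assumptions, not the published values.
import numpy as np
from scipy.ndimage import prewitt

def gradient_magnitude(lum):
    """Prewitt gradient magnitude of a luminance image."""
    return np.hypot(prewitt(lum, axis=0), prewitt(lum, axis=1))

def mdsi_like(ref_rgb, dist_rgb, c1=140.0, c2=55.0):
    ref, dist = ref_rgb.astype(float), dist_rgb.astype(float)
    # Luminance and two opponent chroma channels (coefficients assumed)
    def lum(x):  return x @ np.array([0.299, 0.587, 0.114])
    def chr1(x): return x @ np.array([0.30, 0.04, -0.35])
    def chr2(x): return x @ np.array([0.34, -0.60, 0.17])
    g1, g2 = gradient_magnitude(lum(ref)), gradient_magnitude(lum(dist))
    gs = (2 * g1 * g2 + c1) / (g1 ** 2 + g2 ** 2 + c1)  # gradient similarity
    # Joint similarity of the two chroma channels (color-change term)
    cs = (2 * (chr1(ref) * chr1(dist) + chr2(ref) * chr2(dist)) + c2) / \
         (chr1(ref) ** 2 + chr1(dist) ** 2
          + chr2(ref) ** 2 + chr2(dist) ** 2 + c2)
    gcs = 0.6 * gs + 0.4 * cs                           # fusion weight assumed
    # Deviation pooling: spread of the similarity map, lower = better
    return np.mean(np.abs(gcs - np.mean(gcs)))
```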


IEEE Transactions on Image Processing | 2015

Influence of Color-to-Gray Conversion on the Performance of Document Image Binarization: Toward a Novel Optimization Problem

Rachid Hedjam; Hossein Ziaei Nafchi; Margaret Kalacska; Mohamed Cheriet

This paper presents a novel preprocessing method for color-to-gray document image conversion. In contrast to conventional methods designed for natural images, which aim to preserve the contrast between different classes in the converted gray image, the proposed conversion method reduces as much as possible the contrast (i.e., the intensity variance) within the text class. It is based on learning a linear filter from a predefined dataset of text and background pixels that 1) minimizes the output response when applied to background pixels and 2) maximizes the output response when applied to text pixels, while minimizing the intensity variance within the text class. The proposed method (called learning-based color-to-gray) is conceived as a preprocessing step for document image binarization. A dataset of 46 historical document images was created and used to evaluate the proposed method both subjectively and objectively. The method demonstrates its effectiveness and its strong impact on the performance of state-of-the-art binarization methods. Four other Web-based image datasets were created to evaluate the scalability of the proposed method.
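
One hypothetical least-squares reading of the learned filter: choose three channel weights that push background responses toward 0 and text responses toward 1 while penalizing the variance of text responses. The targets and regularizer below are assumptions, not the paper's objective.

```python
# Hypothetical ridge-style reading of the learned color-to-gray filter:
# background responses are driven toward 0, text responses toward 1, and
# the covariance penalty shrinks the variance within the text class.
import numpy as np

def learn_c2g_weights(text_px, bg_px, lam=0.5):
    """text_px, bg_px: (N, 3) RGB samples; returns a 3-vector of weights."""
    X = np.vstack([text_px, bg_px]).astype(float)
    y = np.concatenate([np.ones(len(text_px)), np.zeros(len(bg_px))])
    # Normal equations with a penalty on the spread of text responses:
    # (X^T X + lam * N * C_text) w = X^T y, where C_text is the covariance
    # of text pixels, so w^T C_text w is the text-response variance.
    c_text = np.cov(text_px.astype(float), rowvar=False)
    w = np.linalg.solve(X.T @ X + lam * len(text_px) * c_text, X.T @ y)
    return w

def to_gray(rgb_image, w):
    """Apply the learned linear filter channel-wise."""
    return rgb_image.astype(float) @ w
```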


arXiv: Computer Vision and Pattern Recognition | 2013

Persian Heritage Image Binarization Competition (PHIBC 2012)

Seyed Morteza Ayatollahi; Hossein Ziaei Nafchi

The first competition on the binarization of historical Persian documents and manuscripts (PHIBC 2012) was organized in conjunction with the first Iranian conference on pattern recognition and image analysis (PRIA 2013). The main objective of PHIBC 2012 is to evaluate the performance of binarization methodologies when applied to Persian heritage images. This paper reports on the methodology and performance of the three submitted algorithms, based on the evaluation measures used.


IEEE Transactions on Geoscience and Remote Sensing | 2016

Iterative Classifiers Combination Model for Change Detection in Remote Sensing Imagery

Rachid Hedjam; Margaret Kalacska; Max Mignotte; Hossein Ziaei Nafchi; Mohamed Cheriet

In this paper, we propose a new unsupervised change detection method designed to analyze multispectral remotely sensed image pairs. It is formulated as a segmentation problem that discriminates the changed class from the unchanged class in the difference image. The proposed method belongs to the category of committee machine learning models that utilize an ensemble of classifiers (i.e., the set of segmentation results obtained by several thresholding methods) with a dynamic structure type. More specifically, in order to obtain the final "change/no-change" output, the responses of several classifiers are combined by a mechanism that involves the input data (the difference image) under an iterative Bayesian-Markovian framework. The proposed method is evaluated and compared to previously published results using satellite imagery.
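
A toy sketch of the classifier-combination idea: each thresholding method votes on a change map, and the votes are iteratively re-weighted by agreement with the fused result. The paper's actual combination runs in an iterative Bayesian-Markovian framework; the simple re-weighting rule below is an assumption.

```python
# Toy sketch of combining several thresholding "classifiers" on a
# difference image via iteratively re-weighted voting; the paper's
# Bayesian-Markovian combination is simplified away here.
import numpy as np

def combine_change_maps(diff, thresholds, n_iter=10):
    """diff: difference image; thresholds: per-classifier scalars."""
    # Each classifier is one thresholding of the difference image.
    votes = np.stack([diff > t for t in thresholds]).astype(float)
    weights = np.ones(len(thresholds)) / len(thresholds)
    fused = votes.mean(axis=0) > 0.5
    for _ in range(n_iter):
        fused = (weights[:, None, None] * votes).sum(axis=0) > 0.5
        # Re-weight each classifier by its agreement with the fused map
        agreement = np.array([np.mean(v == fused) for v in votes])
        weights = agreement / agreement.sum()
    return fused            # final "change/no-change" map
```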


International Conference on Document Analysis and Recognition | 2015

ICDAR 2015 contest on MultiSpectral Text Extraction (MS-TEx 2015)

Rachid Hedjam; Hossein Ziaei Nafchi; Reza Farrahi Moghaddam; Margaret Kalacska; Mohamed Cheriet

The first competition on MultiSpectral Text Extraction (MS-TEx) from historical document images was organized in conjunction with the ICDAR 2015 conference. The goal of this contest is to evaluate the most recent advances in text extraction from historical document images captured by a multispectral imaging system. The MS-TEx 2015 dataset contains 10 handwritten and machine-printed historical document images, with eight spectral images for each. This paper reports on the methodology and performance of the five algorithms submitted by research groups from across the world. The objective evaluation and ranking were performed using well-known evaluation metrics for binarization and classification.


IEEE Signal Processing Letters | 2017

CorrC2G: Color to Gray Conversion by Correlation

Hossein Ziaei Nafchi; Atena Shahkolaei; Rachid Hedjam; Mohamed Cheriet

In this letter, a novel decolorization method is proposed to convert color images into grayscale. The proposed method, called CorrC2G, estimates the three global linear weighting parameters of the color-to-gray conversion by correlation. These parameters are estimated directly from the correlations between each channel of the RGB image and a contrast image. The proposed method works directly on the RGB channels; it uses neither edge information nor any optimization or training. Objective and subjective experimental results on three available benchmark datasets for color-to-gray conversion, namely Cadik, CSDD, and Color250, show that the proposed decolorization method is highly efficient and comparable to recent state-of-the-art decolorization methods. The MATLAB source code of the proposed method is available at: http://www.synchromedia.ca/system/files/CorrC2G.m.
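
A minimal sketch in the spirit of CorrC2G: correlate each RGB channel with a contrast image and use the normalized correlations as global mixing weights. The particular contrast image and the normalization below are assumptions, not the published method.

```python
# Minimal sketch of correlation-driven decolorization in the spirit of
# CorrC2G; the contrast image and weight normalization are assumptions.
import numpy as np

def corr(a, b):
    """Pearson correlation between two images, flattened."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def corr_c2g_like(rgb):
    rgb = rgb.astype(float)
    mean_img = rgb.mean(axis=2)
    # Contrast image: deviation of the mean channel (a crude stand-in)
    contrast = np.abs(mean_img - mean_img.mean())
    # One global weight per channel, straight from the correlations
    w = np.array([corr(rgb[..., c], contrast) for c in range(3)])
    w = np.clip(w, 0, None)
    w = w / w.sum() if w.sum() > 0 else np.full(3, 1 / 3)  # fallback: average
    return rgb @ w          # global linear color-to-gray conversion
```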


Procedia Computer Science | 2012

A Set of Criteria for Face Detection Preprocessing

Hossein Ziaei Nafchi; Seyed Morteza Ayatollahi

The goal of this paper is to provide a robust set of preprocessing steps that can be used with any face detection system. Usually, the purpose of preprocessing in a face detection system is to speed up the detection process and reduce false positives; a preprocessing step should reject an acceptable number of non-face windows. The first proposed criterion is based on a linear image transform (LIT), which skips scanning a number of non-face windows. The second criterion utilizes regional minima (RM) to reject non-face windows. The last one uses a modified adaptive thresholding (ADT) technique to convert the input image into a binary representation and performs an exclusion process on that representation. The proposed criteria were used in conjunction with a version of the Viola-Jones face detector. Experimental results show a significant advantage over the early-exclusion criterion and the variance classifier in terms of speed and rejection rate. The CMU-MIT and BioID datasets were used in the experiments.
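
The window-rejection idea can be sketched as a cheap filter run before the full detector, as below; the variance criterion and threshold are illustrative assumptions (a production version would compute window variance in O(1) from integral images rather than per window as here).

```python
# Sketch of cheap window rejection ahead of a full face detector:
# discard scanning windows whose intensity variance is too low to
# plausibly contain a face. Threshold and criterion are assumptions.
import numpy as np

def window_variance(img, top, left, size):
    """Intensity variance of one square scanning window."""
    win = img[top:top + size, left:left + size].astype(float)
    return win.var()

def candidate_windows(img, size=24, stride=8, min_var=100.0):
    """Yield (top, left) positions worth passing to the real detector."""
    for top in range(0, img.shape[0] - size + 1, stride):
        for left in range(0, img.shape[1] - size + 1, stride):
            if window_variance(img, top, left, size) >= min_var:
                yield top, left
```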

Collaboration


Dive into Hossein Ziaei Nafchi's collaborations.

Top Co-Authors

Mohamed Cheriet, École de technologie supérieure
Reza Farrahi Moghaddam, École de technologie supérieure
Atena Shahkolaei, École de technologie supérieure
Max Mignotte, Université de Montréal