Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Xiaojun Qi is active.

Publication


Featured research published by Xiaojun Qi.


Pattern Recognition | 2007

Incorporating multiple SVMs for automatic image annotation

Xiaojun Qi; Yutao Han

In this paper, a novel automatic image annotation system is proposed that integrates two sets of support vector machines (SVMs), namely multiple-instance-learning (MIL)-based and global-feature-based SVMs, for annotation. The MIL-based bag features are obtained by applying MIL to the image blocks, where the enhanced diversity density (DD) algorithm and a faster searching algorithm are applied to improve efficiency and accuracy. These bag features are then input to one set of SVMs to find the optimal hyperplanes for annotating training images. Similarly, global color and texture features, including a color histogram and a modified edge histogram, are fed into another set of SVMs for categorizing training images. Consequently, two sets of image features are constructed for each test image and are sent, respectively, to the two sets of SVMs, whose outputs are combined by an automatic weight estimation method to obtain the final annotation results. The proposed annotation approach demonstrates promising performance on an image database of 12,000 general-purpose COREL images, as compared with several current peer systems in the literature.
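
The fusion step described above can be illustrated with a minimal sketch, assuming scikit-learn and NumPy. Feature extraction (MIL bag features, color histogram, modified edge histogram) and the paper's automatic weight estimation are outside the sketch; the fixed weight w_mil below is a hypothetical stand-in.

```python
import numpy as np
from sklearn.svm import SVC

def train_annotation_svms(bag_feats, global_feats, labels):
    """Train one SVM set on MIL bag features and one on global features."""
    mil_svm = SVC(kernel="rbf", probability=True).fit(bag_feats, labels)
    glob_svm = SVC(kernel="rbf", probability=True).fit(global_feats, labels)
    return mil_svm, glob_svm

def annotate(mil_svm, glob_svm, bag_feat, glob_feat, w_mil=0.5):
    """Fuse the two SVM posterior estimates for one test image.

    w_mil is a fixed stand-in for the paper's automatic weight estimation."""
    p_mil = mil_svm.predict_proba(bag_feat.reshape(1, -1))[0]
    p_glob = glob_svm.predict_proba(glob_feat.reshape(1, -1))[0]
    fused = w_mil * p_mil + (1.0 - w_mil) * p_glob
    return mil_svm.classes_[np.argmax(fused)]
```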


Signal Processing | 2007

A robust content-based digital image watermarking scheme

Xiaojun Qi; Ji Qi

This paper presents a content-based digital image watermarking scheme that is robust against a variety of common image-processing attacks and geometric distortions. The image content is represented by important feature points obtained by our image-texture-based adaptive Harris corner detector. These feature points are geometrically significant and are therefore capable of determining the possible geometric attacks with the aid of the Delaunay-tessellation-based triangle matching method. The watermark is encoded by both error-correcting codes and the spread-spectrum technique to improve detection accuracy and ensure a large measure of security against unintentional or intentional attacks. An image-content-based adaptive embedding scheme is applied in the discrete Fourier transform (DFT) domain of each perceptually highly textured subimage to ensure better visual quality and greater robustness. The watermark detection decision is based on the number of matched bits between the recovered and embedded watermarks in the embedding subimages. Experimental results demonstrate the robustness of the proposed method against combinations of geometric distortions and common image-processing operations such as JPEG compression, filtering, enhancement, and quantization. The proposed system also yields better performance compared with some peer systems in the literature.
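
A minimal sketch of two ingredients mentioned above, assuming OpenCV (cv2) and NumPy: plain Harris corner detection for candidate feature points and additive spread-spectrum embedding in the DFT magnitude of one subimage. The paper's adaptive texture-based detector, Delaunay-tessellation matching, error-correcting codes, and detection stage are not reproduced; the chip length and frequency band below are arbitrary illustrative choices.

```python
import cv2
import numpy as np

def harris_points(gray, max_points=50):
    """Return pixel coordinates of the strongest Harris corner responses."""
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    idx = np.argsort(response.ravel())[::-1][:max_points]
    return np.column_stack(np.unravel_index(idx, response.shape))

def embed_spread_spectrum(subimage, bits, key=7, alpha=2.0):
    """Add a key-seeded spread-spectrum signal to mid-frequency DFT magnitudes."""
    rng = np.random.default_rng(key)
    chips = rng.choice([-1.0, 1.0], size=(len(bits), 64))   # 64 chips per bit (illustrative)
    signal = np.sum([(2 * b - 1) * c for b, c in zip(bits, chips)], axis=0)
    spectrum = np.fft.fft2(np.float64(subimage))
    mag, phase = np.abs(spectrum), np.angle(spectrum)
    flat = mag.ravel()
    band = np.arange(100, 100 + signal.size)  # arbitrary band; assumes a large enough subimage
    flat[band] += alpha * signal
    marked = np.fft.ifft2(flat.reshape(mag.shape) * np.exp(1j * phase))
    return np.real(marked)
```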


Signal Processing | 2007

A DCT-based Mod4 steganographic method

KokSheik Wong; Xiaojun Qi; Kiyoshi Tanaka

This paper presents a novel Mod4 steganographic method in the discrete cosine transform (DCT) domain. Mod4 is a blind steganographic method. A group of 2×2 spatially adjacent quantized DCT coefficients (GQC) is selected as the valid message carrier. A modulus-4 arithmetic operation is then applied to the valid GQC to embed a pair of bits. When modification is required for data embedding, a shortest-route modification scheme is applied to reduce distortion compared to the ordinary direct modification scheme. Mod4 is capable of embedding information into both uncompressed and JPEG-compressed images. To compare Mod4 with other existing methods, carrier capacity, stego-image quality, and results of blind steganalysis for 500 diverse images are reported. A visual comparison of three additional metrics is also presented to show the relative performance of Mod4 among existing methods.
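
The modulus-4 embedding idea can be sketched as follows, assuming NumPy. The choice of which coefficient to modify and the exact shortest-route rule are simplified illustrations, not the paper's specification.

```python
import numpy as np

def embed_pair(gqc, two_bits):
    """Embed two bits by making the group sum congruent (mod 4) to the message."""
    gqc = np.array(gqc, dtype=int).ravel()
    target = two_bits[0] * 2 + two_bits[1]       # message value in {0, 1, 2, 3}
    diff = (target - int(gqc.sum())) % 4
    step = diff if diff <= 2 else diff - 4       # shortest route: change of at most 2
    idx = np.argmax(np.abs(gqc))                 # modify the largest-magnitude coefficient (illustrative)
    gqc[idx] += step
    return gqc.reshape(2, 2)

def extract_pair(gqc):
    """Recover the two embedded bits from the group sum mod 4."""
    value = int(np.asarray(gqc).sum()) % 4
    return value // 2, value % 2
```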


Journal of Visual Communication and Image Representation | 2015

A singular-value-based semi-fragile watermarking scheme for image content authentication with tamper localization

Xiaojun Qi; Xing Xin

Highlights: utilize relationships among singular values to extract a content-dependent watermark; merge relationships among singular values to choose an adaptive quantizer for each block; apply adaptive quantization to embed a secure watermark in the wavelet domain; define a three-level authentication process to detect authenticity and prove tampering; use five measures to compensate for misclassification and capture distortions.

This paper presents a singular-value-based semi-fragile watermarking scheme for image content authentication. The proposed scheme generates a secure watermark by performing a logical operation on a content-dependent watermark generated from a singular-value-based sequence and a content-independent watermark generated from a private-key-based sequence. It then employs an adaptive quantization method to embed the secure watermark in the approximation subband of each 4×4 block to generate the watermarked image. The watermark extraction process extracts the watermark using the parity of the quantization results from the probe image. The authentication process starts by regenerating the secure watermark following the same process. It then constructs error maps to compute five authentication measures and performs a three-level process to authenticate image content and localize tampered areas. Extensive experimental results show that the proposed scheme outperforms five peer schemes and its two variant systems, and is capable of identifying intentional tampering and incidental modification, and of localizing tampered regions under mild to severe content-preserving modifications.
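
A minimal sketch of the embedding idea, assuming NumPy and PyWavelets (pywt): a content-dependent bit derived from a block's singular values is XORed with a private-key bit and embedded by quantizing an approximation coefficient so that its quantizer-index parity carries the bit. The singular-value rule and quantization step below are illustrative choices, not the paper's exact formulation.

```python
import numpy as np
import pywt

def secure_watermark_bit(block, key_bit):
    """XOR a content-dependent bit (from the singular values) with a key bit."""
    s = np.linalg.svd(block.astype(float), compute_uv=False)
    content_bit = int(s[0] / (s.sum() + 1e-9) > 0.5)   # illustrative content rule
    return content_bit ^ key_bit

def embed_block(block, key_bit, q=8.0):
    """Quantize one approximation coefficient so its index parity carries the bit."""
    bit = secure_watermark_bit(block, key_bit)
    ll, details = pywt.dwt2(block.astype(float), "haar")
    k = int(np.floor(ll[0, 0] / q))
    if k % 2 != bit:
        k += 1
    ll[0, 0] = k * q + q / 2.0
    return pywt.idwt2((ll, details), "haar"), bit

def extract_bit(block, q=8.0):
    """Read the bit back from the parity of the quantization index."""
    ll, _ = pywt.dwt2(block.astype(float), "haar")
    return int(np.floor(ll[0, 0] / q)) % 2
```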


Information Sciences | 2005

A progressive transmission capable diagnostically lossless compression scheme for 3D medical image sets

Xiaojun Qi; John M. Tyler

This paper presents a novel and efficient diagnostically lossless compression scheme for 3D medical image sets. The scheme provides the 3D medical image sets with a progressive transmission capability. An automated filter-and-threshold-based preprocessing technique is used to remove noise outside the diagnostic region. A wavelet-decomposition feature-vector-based approach is then applied to determine the reference image for the entire 3D medical image set. The selected reference image contains the most discernible anatomical structures within a relatively large diagnostic region. It is progressively encoded by a lossless embedded zerotree wavelet method so that the validity of an entire set can be determined early. This preprocessing is followed by an optimal predictor plus a first-level integer wavelet transform to decorrelate the 3D medical image set. Run-length and arithmetic coding are used to further remove coding redundancy. This diagnostically lossless compression method achieves an average compression of 2.1038, 2.4292, and 1.6826 bits per pixel for three types of 3D magnetic resonance image sets. The integrated progressive transmission capability degrades the compression performance by an average of 7.25%, 6.60%, and 4.49% for the above three types. Moreover, our compression, both without and with progressive transmission, achieves better compression than the state of the art.
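
A minimal sketch of one decorrelation step in the same spirit, assuming NumPy: an inter-slice predictor followed by a one-level integer (lifting) Haar transform of the residual. The preprocessing, reference-image selection, and entropy coding (run-length plus arithmetic coding) are omitted, and the simple previous-slice predictor is a hypothetical stand-in for the paper's optimal predictor.

```python
import numpy as np

def integer_haar_1d(x):
    """Lossless lifting Haar step on an even-length integer signal."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    detail = odd - even
    approx = even + np.floor_divide(detail, 2)   # invertible: even = approx - detail//2, odd = detail + even
    return approx, detail

def decorrelate_slice(prev_slice, cur_slice):
    """Predict the current slice from the previous one, then transform each
    row of the residual with the integer Haar step."""
    residual = cur_slice.astype(np.int64) - prev_slice.astype(np.int64)
    return [integer_haar_1d(row) for row in residual]
```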


Iet Computer Vision | 2014

Face recognition under varying illumination based on adaptive homomorphic eight local directional patterns

Mohammad Reza Faraji; Xiaojun Qi

This study proposes an illumination-invariant face-recognition method called adaptive homomorphic eight local directional pattern (AH-ELDP). AH-ELDP first uses adaptive homomorphic filtering to reduce the influence of illumination on an input face image. It then applies an interpolative enhancement function to stretch the filtered image. Finally, it produces eight directional edge images using Kirsch compass masks and uses all the directional information to create an illumination-insensitive representation. The authors' extensive experiments show that the AH-ELDP technique achieves the best face recognition accuracy of 99.45% for CMU-PIE face images, 96.67% for Yale B face images, and 84.42% for Extended Yale B face images using one image per subject for training, when compared to seven representative state-of-the-art techniques.
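
Two of the building blocks can be sketched as follows, assuming NumPy and SciPy: a homomorphic high-emphasis filter to suppress illumination and the eight Kirsch compass masks that produce the directional edge images. The adaptive filter parameters, interpolative stretching, and the final ELDP coding are not reproduced; the cutoff and boost values are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def kirsch_masks():
    """Eight 3x3 Kirsch compass masks, rotating the north mask in 45-degree steps."""
    border = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    base = [5, 5, 5, -3, -3, -3, -3, -3]
    masks = []
    for shift in range(8):
        m = np.zeros((3, 3))
        for (r, c), v in zip(border, np.roll(base, shift)):
            m[r, c] = v
        masks.append(m)
    return masks

def homomorphic_highpass(image, cutoff=0.1, boost=1.5):
    """Log-domain frequency high-emphasis filtering to reduce illumination."""
    log_img = np.log1p(image.astype(float))
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))
    rows, cols = image.shape
    u = np.arange(rows) - rows / 2.0
    v = np.arange(cols) - cols / 2.0
    d = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2) / max(rows, cols)
    h = (boost - 1.0) * (1.0 - np.exp(-(d ** 2) / (2 * cutoff ** 2))) + 1.0
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * h)))
    return np.expm1(filtered)

def directional_edge_images(image):
    """Convolve the illumination-normalized image with all eight Kirsch masks."""
    normalized = homomorphic_highpass(image)
    return np.stack([convolve(normalized, m) for m in kirsch_masks()])
```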


IEEE Signal Processing Letters | 2014

Face Recognition under Varying Illumination with Logarithmic Fractal Analysis

Mohammad Reza Faraji; Xiaojun Qi

Face recognition under illumination variations is a challenging research area. This paper presents a new method based on the logarithm function and fractal analysis (FA) to produce a logarithmic fractal dimension (LFD) image that is illumination invariant. The proposed FA feature-based method is a very effective edge-enhancement technique that extracts and enhances facial features such as the eyes, eyebrows, nose, and mouth. Our extensive experiments show that the proposed method achieves the best recognition accuracy using one image per subject for training when compared to six recently proposed state-of-the-art methods.
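
A minimal sketch of the underlying idea, assuming NumPy: take the logarithm of the image and estimate a local fractal dimension for each window by differential box counting. The paper's exact LFD formulation may differ; the window size, scales, and box-height rule here are illustrative.

```python
import numpy as np

def local_fractal_dimension(window, scales=(2, 4, 8)):
    """Differential box-counting estimate of the fractal dimension of one window."""
    counts, sizes = [], []
    for s in scales:
        n = 0
        for i in range(0, window.shape[0], s):
            for j in range(0, window.shape[1], s):
                cell = window[i:i + s, j:j + s]
                # boxes of height s needed to cover the cell's intensity range
                n += int(np.ceil((cell.max() - cell.min() + 1) / s))
        counts.append(n)
        sizes.append(s)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

def lfd_image(image, win=16):
    """Block-wise fractal dimension map of the log-transformed image."""
    log_img = np.log1p(image.astype(float))
    rows = range(0, image.shape[0] - win + 1, win)
    cols = range(0, image.shape[1] - win + 1, win)
    return np.array([[local_fractal_dimension(log_img[r:r + win, c:c + win])
                      for c in cols] for r in rows])
```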


Neurocomputing | 2016

Face recognition under varying illuminations using logarithmic fractal dimension-based complete eight local directional patterns

Mohammad Reza Faraji; Xiaojun Qi

Face recognition under varying illumination remains challenging. This paper proposes an effective method to produce illumination-invariant features for images with various levels of illumination. The proposed method seamlessly combines adaptive homomorphic filtering, a simplified logarithmic fractal dimension, and complete eight local directional patterns to produce illumination-invariant representations. Our extensive experiments show that the proposed method outperforms two of its variants and nine state-of-the-art methods, achieving overall face recognition accuracies of 99.47%, 94.55%, 99.53%, and 86.63% on the Yale B, Extended Yale B, CMU-PIE, and AR face databases, respectively, when using one image per subject for training. It also outperforms the compared methods on the Honda UCSD video database using five images per subject for training, considering all necessary steps including face detection, landmark localization, face normalization, and face matching. Our evaluations using receiver operating characteristic (ROC) curves also verify that the proposed method has the best verification and discrimination ability compared with other peer methods.
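
The pattern-coding stage can be sketched as follows, assuming NumPy and an input of eight directional edge responses (for example from Kirsch compass masks, as sketched earlier): each pixel receives an 8-bit code marking its k strongest directions. The "complete" variant in the paper also uses magnitude and sign components, which are omitted here.

```python
import numpy as np

def ldp_code(responses, k=3):
    """8-bit code per pixel marking the k strongest of eight directional responses.

    responses: array of shape (8, H, W), e.g. from eight Kirsch compass masks."""
    order = np.argsort(responses, axis=0)   # directions sorted by response, ascending
    top_k = order[-k:]                      # indices of the k largest responses per pixel
    code = np.zeros(responses.shape[1:], dtype=np.uint8)
    for direction in range(8):
        bit = np.any(top_k == direction, axis=0).astype(np.uint8)
        code |= bit << direction
    return code
```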


international conference on image analysis and recognition | 2007

Image retrieval using transaction-based and SVM-based learning in relevance feedback sessions

Xiaojun Qi; Ran Chang

This paper introduces a composite relevance feedback approach for image retrieval using transaction-based and SVM-based learning. A transaction repository is dynamically constructed by applying these two learning techniques to positive and negative session-term feedback. This repository semantically relates each database image to the query images used to date. The query's semantic feature vector can then be computed from the current feedback and the semantic values in the repository, and a correlation measure computes the semantic similarity between the query image and each database image. In addition, an SVM is applied to the session-term feedback to learn a hyperplane for measuring the visual similarity between the query image and each database image. These two similarity measures are normalized and combined to return the retrieved images. Our extensive experimental results show that the proposed approach offers an average retrieval precision as high as 88.59% after three iterations. Comprehensive comparisons with peer systems reveal that our system yields the highest retrieval accuracy after two iterations.
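
A minimal sketch of the score-fusion step, assuming NumPy and scikit-learn: a correlation-based semantic score against the transaction repository is combined with an SVM decision value trained on the session's positive and negative feedback. The variable names, min-max normalization, and fixed mixing weight alpha are illustrative choices.

```python
import numpy as np
from sklearn.svm import SVC

def minmax(x):
    """Scale scores to [0, 1] so the two similarities can be combined."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min() + 1e-9)

def combined_scores(semantic_repo, query_semantics, visual_feats,
                    pos_idx, neg_idx, alpha=0.5):
    """Fuse semantic (repository correlation) and visual (SVM) relevance scores."""
    sem = np.array([np.corrcoef(query_semantics, row)[0, 1]
                    for row in semantic_repo])
    X = np.vstack([visual_feats[pos_idx], visual_feats[neg_idx]])
    y = np.r_[np.ones(len(pos_idx)), np.zeros(len(neg_idx))]
    svm = SVC(kernel="rbf").fit(X, y)            # session-term feedback only
    vis = svm.decision_function(visual_feats)
    return alpha * minmax(sem) + (1.0 - alpha) * minmax(vis)
```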


international conference on acoustics, speech, and signal processing | 2006

A Short-Term and Long-Term Learning Approach for Content-Based Image Retrieval

Michael Wacht; Juan Shan; Xiaojun Qi

This paper proposes a short-term and long-term learning approach for content-based image retrieval. The proposed system integrates the user's positive and negative feedback from all iterations to construct a semantic space that remembers the user's intent in terms of high-level hidden semantic features. The short-term learning further refines the query by updating its associated weight vector using both positive and negative examples together with the long-term-learning-based semantic space. The similarity score is computed as the dot product between the query weight vector and the high-level features of each image stored in the semantic space. The proposed retrieval approach demonstrates promising retrieval performance on an image database of 6,000 general-purpose COREL images, as compared with conventional retrieval systems.
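
A minimal sketch of the scoring step, assuming NumPy: the query weight vector over hidden semantic features is moved toward positive examples and away from negative ones, then scored by a dot product against each image's row of the semantic space. The simple mean-difference update is a hypothetical stand-in for the paper's update rule.

```python
import numpy as np

def update_query_weights(weights, semantic_space, pos_idx, neg_idx, lr=0.3):
    """Move the query weight vector toward positive and away from negative examples."""
    pos_mean = semantic_space[pos_idx].mean(axis=0) if len(pos_idx) else 0.0
    neg_mean = semantic_space[neg_idx].mean(axis=0) if len(neg_idx) else 0.0
    return weights + lr * (pos_mean - neg_mean)

def similarity_scores(weights, semantic_space):
    """Dot product between the query weights and each image's semantic features."""
    return semantic_space @ weights
```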

Collaboration


Dive into Xiaojun Qi's collaboration.

Top Co-Authors

Ran Chang, Utah State University
Ji Qi, Utah State University
John M. Tyler, Louisiana State University
Yutao Han, Utah State University
Lifu Xiao, Utah State University