
Publication


Featured research published by Qianni Zhang.


Conference on Image and Video Retrieval | 2006

A multi-feature optimization approach to object-based image classification

Qianni Zhang; Ebroul Izquierdo

This paper proposes a novel approach for the construction and use of multi-feature spaces in image classification. The proposed technique combines low-level descriptors and defines suitable metrics. It aims at representing and measuring similarity between semantically meaningful objects within the defined multi-feature space. The approach finds the best linear combination of predefined visual descriptor metrics using a multi-objective optimization technique. The obtained metric is then used to fuse multiple non-linear descriptors and is applied in image classification.
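
The core fusion step, a distance defined as a weighted sum of per-descriptor distances, can be sketched as follows. This is a minimal illustration assuming Euclidean metrics per descriptor; the weights here are made up for the example rather than produced by the optimiser.

```python
import numpy as np

def combined_distance(x_feats, y_feats, weights):
    """Fuse per-descriptor distances into one metric as a weighted sum.

    x_feats, y_feats: lists of feature vectors, one per low-level descriptor
    weights: non-negative weights (one per descriptor), normally learned
             by the multi-objective optimiser
    """
    dists = [np.linalg.norm(a - b) for a, b in zip(x_feats, y_feats)]
    return float(np.dot(weights, dists))

# Two descriptors (e.g. a colour histogram and a texture vector).
x = [np.array([1.0, 0.0]), np.array([0.0, 0.0])]
y = [np.array([0.0, 0.0]), np.array([3.0, 4.0])]
print(combined_distance(x, y, weights=[0.5, 0.5]))  # 0.5*1 + 0.5*5 = 3.0
```

Learning the weights then amounts to searching this weight space for combinations that best separate positive and negative examples of the target concept.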


IEEE Journal of Biomedical and Health Informatics | 2013

Histology Image Retrieval in Optimized Multifeature Spaces

Qianni Zhang; Ebroul Izquierdo

Content-based histology image retrieval systems have shown great potential in supporting decision making in clinical activities, teaching, and biological research. In content-based image retrieval, feature combination plays a key role. It aims at enhancing the descriptive power of visual features corresponding to semantically meaningful queries. It is particularly valuable in histology image analysis where intelligent mechanisms are needed for interpreting varying tissue composition and architecture into histological concepts. This paper presents an approach to automatically combine heterogeneous visual features for histology image retrieval. The aim is to obtain the most representative fusion model for a particular keyword that is associated with multiple query images. The core of this approach is a multiobjective learning method, which aims to learn an optimal visual-semantic matching function by jointly considering the different preferences of the group of query images. The task is posed as an optimization problem, and a multiobjective optimization strategy is employed in order to handle potential contradictions in the query images associated with the same keyword. Experiments were performed on two different collections of histology images. The results show that it is possible to improve a system for content-based histology image retrieval by using an appropriately defined multifeature fusion model, which takes careful consideration of the structure and distribution of visual features.


EURASIP Journal on Advances in Signal Processing | 2007

Combining Low-Level Features for Semantic Extraction in Image Retrieval

Qianni Zhang; Ebroul Izquierdo

An object-oriented approach for semantic-based image retrieval is presented. The goal is to identify key patterns of specific objects in the training data and to use them as object signature. Two important aspects of semantic-based image retrieval are considered: retrieval of images containing a given semantic concept and fusion of different low-level features. The proposed approach splits the image into elementary image blocks to obtain block regions close in shape to the objects of interest. A multiobjective optimization technique is used to find a suitable multidescriptor space in which several low-level image primitives can be fused. The visual primitives are combined according to a concept-specific metric, which is learned from representative blocks or training data. The optimal linear combination of single descriptor metrics is estimated by applying the Pareto archived evolution strategy. An empirical assessment of the proposed technique was conducted to validate its performance with natural images.
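
The Pareto archived evolution strategy used above can be illustrated with a deliberately reduced (1+1)-ES sketch that keeps an archive of non-dominated solutions. The toy objectives, mutation scale, and iteration count below are assumptions for illustration, not the paper's actual setup.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def paes(evaluate, dim, iters=200, seed=0):
    """Tiny (1+1) Pareto archived evolution strategy sketch:
    mutate the current weight vector and archive non-dominated children."""
    rng = random.Random(seed)
    current = [rng.random() for _ in range(dim)]
    archive = [(current, evaluate(current))]
    for _ in range(iters):
        # Gaussian mutation, clipped so descriptor weights stay non-negative.
        child = [max(0.0, w + rng.gauss(0, 0.1)) for w in current]
        f = evaluate(child)
        if not any(dominates(fa, f) for _, fa in archive):
            # Drop archived solutions the child dominates, then keep it.
            archive = [(s, fa) for s, fa in archive if not dominates(f, fa)]
            archive.append((child, f))
            current = child
    return archive

# Toy bi-objective problem with an obvious trade-off between the objectives.
front = paes(lambda w: (w[0], 1.0 - w[0]), dim=1)
```

The returned archive approximates a Pareto front of weight vectors, from which a single fused metric can then be selected.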


ACM Multimedia | 2011

Enhanced visualisation of dance performance from automatically synchronised multimodal recordings

Marc Gowing; Philip Kell; Noel E. O'Connor; Cyril Concolato; Slim Essid; Jean Lefeuvre; Robin Tournemenne; Ebroul Izquierdo; Vlado Kitanovski; Xinyu Lin; Qianni Zhang

The Huawei/3DLife Grand Challenge Dataset provides multimodal recordings of Salsa dancing, consisting of audiovisual streams along with depth maps and inertial measurements. In this paper, we propose a system for augmented reality-based evaluations of Salsa dancer performances. An essential step for such a system is the automatic temporal synchronisation of the multiple modalities captured from different sensors, for which we propose efficient solutions. Furthermore, we contribute modules for the automatic analysis of dance performances and present an original software application, specifically designed for the evaluation scenario considered, which enables an enhanced dance visualisation experience, through the augmentation of the original media with the results of our automatic analyses.
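
One common building block for such automatic temporal synchronisation is estimating the lag between two streams from the peak of their cross-correlation. The sketch below assumes one-dimensional signals (e.g. audio envelopes) and illustrates the general idea, not the paper's specific method.

```python
import numpy as np

def estimate_offset(ref, other):
    """Estimate the lag (in samples) of `other` relative to `ref`
    by locating the peak of their full cross-correlation."""
    corr = np.correlate(other, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

# A reference stream and a copy of it delayed by 5 samples.
rng = np.random.default_rng(0)
ref = rng.standard_normal(256)
other = np.concatenate([np.zeros(5), ref])[:256]
print(estimate_offset(ref, other))  # 5
```

Once the per-pair offsets are known, all modality streams can be shifted onto a common timeline before analysis.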


Signal Processing: Image Communication | 2007

Adaptive salient block-based image retrieval in multi-feature space

Qianni Zhang; Ebroul Izquierdo

In this paper, a new method for object-based image retrieval is proposed. The technique is designed to adaptively and efficiently locate salient blocks in images. Salient blocks are used to represent semantically meaningful objects in images and to perform object-oriented annotation and retrieval. An algorithm is proposed to locate the most suitable blocks of arbitrary size representing the query concept or object of interest in images. To annotate single objects according to human perception, associations between several low-level patterns and semantic concepts are modelled by an optimised multi-descriptor space. The approach starts by dividing the image into blocks partitioned according to several different layouts. Then, a fitting block is selected according to a similarity metric acting on concept-specific multi-feature spaces. The similarity metric is defined as a linear combination of single feature space metrics for which the corresponding weights are learned from a group of representative salient blocks using multi-objective optimisation. Relevance feedback is seamlessly integrated in the retrieval process. In each iteration, the user selects images relevant to the query object, and the corresponding salient blocks in the selected images are used as training examples. The proposed technique was thoroughly assessed and selected results are reported in this paper to demonstrate its performance.


Pattern Recognition | 2016

LSI: Latent semantic inference for natural image segmentation

Le Dong; Ning Feng; Qianni Zhang

We propose a novel label inference approach for segmenting natural images into perceptually meaningful regions. Each pixel is assigned a serial label indicating its category using a Markov Random Field (MRF) model. To this end, we introduce a framework for latent semantic inference of serial labels, called LSI, by integrating local pixel, global region, and scale information of a natural image into an MRF-inspired model. The key difference from traditional MRF-based image segmentation methods is that we infer semantic segments in the label space instead of the pixel space. We first design a serial label formation algorithm named Color and Location Density Clustering (CLDC) to capture the local pixel information. Then we propose a label merging strategy to combine global cues of labels in the Cross-Region potential to grasp the contextual information within an image. In addition, to align with the structure of segmentation, a hierarchical label alignment mechanism is designed to formulate the Cross-Scale potential, utilizing the scale information to capture the hierarchy of the image at different scales for final segmentation optimization. We evaluate the performance of the proposed approach on the Berkeley Segmentation Dataset, where favorable results are achieved.
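
As a rough stand-in for the CLDC step (the paper's own algorithm is more elaborate), pixels can be clustered on joint colour-and-position features; the sketch below uses plain k-means with a simple deterministic initialisation.

```python
import numpy as np

def colour_location_labels(image, k=2, iters=10):
    """Assign each pixel a label by clustering joint (colour, position)
    features -- a plain k-means stand-in for a CLDC-style step."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Each pixel becomes a 5-D feature: (r, g, b, row/h, col/w).
    feats = np.concatenate(
        [image.reshape(-1, 3), ys.reshape(-1, 1) / h, xs.reshape(-1, 1) / w],
        axis=1,
    ).astype(float)
    # Deterministic init: opposite-corner pixels as the first centres.
    centres = feats[[0, len(feats) - 1]].copy()[:k]
    for _ in range(iters):
        d = np.linalg.norm(feats[:, None] - centres[None], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centres[c] = feats[labels == c].mean(axis=0)
    return labels.reshape(h, w)

# A 4x4 image with a dark left half and a bright right half.
img = np.zeros((4, 4, 3))
img[:, 2:] = 1.0
labels = colour_location_labels(img, k=2)
```

The position terms encourage spatially coherent regions, which is the property the serial labels rely on before the merging and alignment stages.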


IEEE Transactions on Multimedia | 2016

Holons Visual Representation for Image Retrieval

Le Dong; Yan Liang; Gaipeng Kong; Qianni Zhang; Xiaochun Cao; Ebroul Izquierdo

Along with the enlargement of image scale, convolutional local features, such as SIFT, are ineffective for representing or indexing, and more compact visual representations are required. Due to its intrinsic mechanism, the state-of-the-art vector of locally aggregated descriptors (VLAD) has several limitations. Based on this, we propose a new descriptor named holons visual representation (HVR). The proposed HVR is a derivative, self-contained combination of global and local information. It exploits both global characteristics and the statistical information of local descriptors in the image dataset. It also takes advantage of the local features of each image and computes their distribution with respect to the entire local descriptor space. Accordingly, the HVR is computed by a two-layer hierarchical scheme, which splits the local feature space and obtains raw partitions, as well as the corresponding refined partitions. Then, according to the distances from the centroids of partition spaces to local features and their spatial correlation, we assign the local features into their nearest raw partitions and refined partitions to obtain the global description of an image. Compared with VLAD, HVR holds critical structure information and enhances the discriminative power of individual representation with a small additional computational cost, while using the same memory overhead. Extensive experiments on several benchmark datasets demonstrate that the proposed HVR outperforms conventional approaches in terms of scalability as well as retrieval accuracy for images with similar intra local information.
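
The VLAD baseline that HVR builds on aggregates local descriptors by summing their residuals to the nearest codebook centroid. A minimal sketch, using a toy two-centroid codebook rather than a learned one:

```python
import numpy as np

def vlad_encode(descriptors, centroids):
    """Aggregate local descriptors into one vector by summing residuals
    to the nearest centroid (the VLAD baseline that HVR refines)."""
    k, d = centroids.shape
    enc = np.zeros((k, d))
    # Hard-assign each descriptor to its nearest centroid.
    assign = np.linalg.norm(
        descriptors[:, None] - centroids[None], axis=2
    ).argmin(axis=1)
    for i, c in enumerate(assign):
        enc[c] += descriptors[i] - centroids[c]
    enc = enc.ravel()
    norm = np.linalg.norm(enc)
    return enc / norm if norm > 0 else enc

cents = np.array([[0.0, 0.0], [10.0, 10.0]])
descs = np.array([[1.0, 0.0], [9.0, 10.0]])
code = vlad_encode(descs, cents)
```

HVR departs from this baseline by using a two-layer partitioning (raw plus refined) and by weighting assignments with spatial correlation, rather than the single flat codebook shown here.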


BioMed Research International | 2017

Three-Class Mammogram Classification Based on Descriptive CNN Features

M. Mohsin Jadoon; Qianni Zhang; Ihsan Ul Haq; Sharjeel Butt; Adeel Jadoon

In this paper, a novel classification technique for a large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we present two methods, namely, convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed into four subbands by means of the two-dimensional discrete wavelet transform (2D-DWT), while in the second method the discrete curvelet transform (DCT) is used. In both methods, dense scale-invariant feature transform (DSIFT) descriptors are extracted for all subbands. An input data matrix containing these subband features of all the mammogram patches is created and processed as input to a convolutional neural network (CNN). A softmax layer and a support vector machine (SVM) layer are used to train the CNN for classification. The proposed methods have been compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT achieved accuracy rates of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of our proposed model as compared to other well-known existing techniques.
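
The 2D-DWT decomposition into four subbands can be sketched with a single-level Haar transform in plain NumPy. The averaging normalisation below is an assumption made for readability; the paper's exact wavelet settings are not specified here.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform, returning the four
    subbands (LL, LH, HL, HH) that the CNN-DW pipeline feeds onward.
    Uses an averaging normalisation (divide by 4) for clarity."""
    # Sums and differences along rows (pairs of adjacent rows)...
    a = img[0::2, :] + img[1::2, :]
    d = img[0::2, :] - img[1::2, :]
    # ...then along columns, yielding the four subbands.
    ll = (a[:, 0::2] + a[:, 1::2]) / 4.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 4.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 4.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 4.0
    return ll, lh, hl, hh

# A constant patch has all of its energy in the LL band.
patch = np.ones((4, 4))
ll, lh, hl, hh = haar_dwt2(patch)
```

Each mammogram patch would yield four half-resolution subbands like these, from which the dense descriptors are then extracted.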


Fusion in Computer Vision | 2014

Multimodal Fusion in Surveillance Applications

Virginia Fernandez Arguedas; Qianni Zhang; Ebroul Izquierdo

The recent outbreak of vandalism, accidents and criminal activities has increased the general public's awareness about safety and security, demanding improved security measures. Smart surveillance video systems have become a ubiquitous platform that monitors private and public environments, ensuring citizens' well-being. Their universal deployment integrates diverse media and acquisition systems, generating an enormous amount of multimodal data daily. Nowadays, numerous surveillance applications exploit multiple types of data and features, benefitting from their uncorrelated contributions. Hence, the analysis, standardisation and fusion of complex content, especially visual content, have become a fundamental problem in enhancing surveillance systems by increasing their accuracy, robustness and reliability. In this chapter, an exhaustive survey of the existing multimodal fusion techniques and their applications in surveillance is provided. Addressing some of the challenges revealed by the state of the art, the chapter focuses on the development of a multimodal fusion technique for automatic surveillance object classification. The proposed fusion technique exploits the benefits of a Bayesian inference scheme to enhance surveillance systems' performance. The chapter ends with an evaluation of the proposed Bayesian-based multimodal object classifier against two state-of-the-art object classifiers to demonstrate the benefits of multimodal fusion in surveillance applications.
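
A naive-Bayes flavour of such a Bayesian inference scheme multiplies per-modality class likelihoods into a single posterior. The sketch below is a generic illustration; the class names and numbers are made up, and the chapter's actual model may differ.

```python
import numpy as np

def bayes_fuse(priors, likelihoods):
    """Fuse per-modality class likelihoods with a naive-Bayes rule:
    posterior = prior x product of modality likelihoods, normalised.
    Assumes the modalities are conditionally independent given the class."""
    post = np.array(priors, dtype=float)
    for lik in likelihoods:
        post *= lik
    return post / post.sum()

# Two classes (say, person vs. vehicle); appearance and motion modalities
# disagree mildly, and the fused posterior weighs both contributions.
priors = [0.5, 0.5]
appearance = [0.7, 0.3]
motion = [0.6, 0.4]
print(bayes_fuse(priors, [appearance, motion]))
```

The conditional-independence assumption is what lets evidence from uncorrelated modalities simply multiply, which matches the motivation for multimodal fusion given above.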


Workshop on Image Analysis for Multimedia Interactive Services | 2013

Blending real with virtual in 3DLife

Konstantinos C. Apostolakis; Dimitrios S. Alexiadis; Petros Daras; David S. Monaghan; Noel E. O'Connor; Benjamin Prestele; Peter Eisert; Gaël Richard; Qianni Zhang; Ebroul Izquierdo; Maher Ben Moussa; Nadia Magnenat-Thalmann

Part of 3DLife's major goal of bringing the 3D Media Internet to life concerns the development and widespread distribution of online tele-immersive (TI) virtual environments. As the techniques powering challenging tasks such as user reconstruction and activity tracking within a virtual environment are maturing, along with the consumer-grade availability of specialized hardware, this paper focuses on the simple practices used to make real-time tele-immersion within a networked virtual world a reality.

Collaboration


Qianni Zhang's collaborations.

Top Co-Authors

Ebroul Izquierdo, Queen Mary University of London
Krishna Chandramouli, Queen Mary University of London
Le Dong, University of Electronic Science and Technology of China
Tomas Piatrik, Queen Mary University of London
Xinyu Lin, Queen Mary University of London
Petros Daras, Information Technology Institute
Vlado Kitanovski, Queen Mary University of London
Evaggelos Spyrou, National Technical University of Athens
Yannis S. Avrithis, National Technical University of Athens