Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Zakria Hussain is active.

Publication


Featured research published by Zakria Hussain.


Neurocomputing | 2014

Manifold-preserving graph reduction for sparse semi-supervised learning

Shiliang Sun; Zakria Hussain; John Shawe-Taylor

Representing manifolds using fewer examples has the advantages of eliminating the influence of outliers and noisy points and simultaneously accelerating the evaluation of predictors learned from the manifolds. In this paper, we give the definition of manifold-preserving sparse graphs as a representation of sparsified manifolds and present a simple and efficient manifold-preserving graph reduction algorithm. To characterize the manifold-preserving properties, we derive a bound on the expected connectivity between a randomly picked point outside of a sparse graph and its closest vertex in the sparse graph. We also bound the approximation ratio of the proposed graph reduction algorithm. Moreover, we apply manifold-preserving sparse graphs to semi-supervised learning and propose sparse Laplacian support vector machines (SVMs). After characterizing the empirical Rademacher complexity of the function class induced by the sparse Laplacian SVMs, which is closely related to their generalization errors, we further report experimental results on multiple data sets which indicate their feasibility for classification.
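As a rough illustration of the kind of greedy reduction described above, the sketch below selects, from a kNN similarity graph, the vertices most strongly connected to the rest of the graph. The kNN construction, the Gaussian edge weights and the selection score are assumptions made for this example, not necessarily the authors' exact algorithm.

    # Hedged sketch: greedy graph reduction that keeps the vertices with the
    # strongest connections to the remaining graph. The kNN construction,
    # Gaussian weights and selection rule are illustrative assumptions, not
    # necessarily the exact algorithm of Sun, Hussain and Shawe-Taylor (2014).
    import numpy as np
    from sklearn.neighbors import kneighbors_graph

    def manifold_preserving_reduction(X, n_keep, k=10, gamma=1.0):
        """Return indices of a sparse vertex subset of a kNN similarity graph."""
        # Symmetric kNN graph with Gaussian edge weights.
        A = kneighbors_graph(X, n_neighbors=k, mode="distance").toarray()
        W = np.where(A > 0, np.exp(-gamma * A**2), 0.0)
        W = np.maximum(W, W.T)

        selected, remaining = [], set(range(len(X)))
        for _ in range(n_keep):
            # Pick the remaining vertex most strongly connected to the other
            # remaining vertices (a proxy for covering the manifold well).
            idx = list(remaining)
            scores = W[np.ix_(idx, idx)].sum(axis=1)
            best = idx[int(np.argmax(scores))]
            selected.append(best)
            remaining.remove(best)
        return np.array(selected)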


IEEE Radar Conference | 2010

Compressed Sampling for pulse Doppler radar

Graeme E. Smith; Tom Diethe; Zakria Hussain; John Shawe-Taylor; David R. Hardoon

This paper presents a study of how the Analogue to Digital Converter (ADC) sampling rate in a digital radar can be reduced, without reduction in waveform bandwidth, through the use of Compressed Sampling (CS). Real radar data is used to show that through use of chirp or Gabor dictionaries and Basis Pursuit (BP) the ADC sampling frequency can be reduced by a factor of 128, to under 1 mega sample per second, while the waveform bandwidth remains 40 MHz. The error on the reconstructed fast-time samples is small enough that accurate range-profiles and range-frequency surfaces can be produced.
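A toy sketch of the compressed-sampling idea: a signal sparse in a cosine dictionary is sampled well below its length and reconstructed with an l1 solver (scikit-learn's Lasso) as a stand-in for Basis Pursuit. The dictionary, sampling pattern and regularisation strength are illustrative assumptions; the paper works with chirp/Gabor dictionaries and real radar returns.

    # Hedged sketch: compressed sampling of a signal that is sparse in a
    # cosine dictionary, reconstructed with an l1 solver (Lasso) as a
    # stand-in for Basis Pursuit. This toy only illustrates the idea.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, m, k = 512, 64, 5                        # signal length, measurements, sparsity

    # Sparse coefficients -> time-domain signal.
    coeffs = np.zeros(n)
    coeffs[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
    dictionary = np.cos(2 * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
    signal = dictionary @ coeffs

    # "Slow" ADC: keep only m randomly chosen time samples.
    rows = rng.choice(n, size=m, replace=False)
    y, Phi = signal[rows], dictionary[rows, :]

    # l1 reconstruction of the sparse coefficients from the few samples.
    lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50_000).fit(Phi, y)
    reconstruction = dictionary @ lasso.coef_
    print("relative error:",
          np.linalg.norm(reconstruction - signal) / np.linalg.norm(signal))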


IEEE Transactions on Information Theory | 2011

Design and Generalization Analysis of Orthogonal Matching Pursuit Algorithms

Zakria Hussain; John Shawe-Taylor; David R. Hardoon; Charanpal Dhanjal

We derive generalization error (loss) bounds for orthogonal matching pursuit algorithms, starting with kernel matching pursuit and sparse kernel principal components analysis. We propose (to the best of our knowledge) the first loss bound for kernel matching pursuit using a novel application of sample compression and Vapnik-Chervonenkis bounds. For sparse kernel principal components analysis, we find that it can be bounded using a standard sample compression analysis, as the subspace it constructs is a compression scheme. We demonstrate empirically that this bound is tighter than previous state-of-the-art bounds for principal components analysis, which use global and local Rademacher complexities. From this analysis we propose a novel sparse variant of kernel canonical correlation analysis and bound its generalization performance using the results developed in this paper. We conclude with a general technique for designing matching pursuit algorithms for other learning domains.
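A minimal sketch of the orthogonal matching pursuit loop analysed in the paper. It is written for a generic design matrix; for kernel matching pursuit, X would be a kernel matrix evaluated on the training set. The function name and the stopping rule (a fixed number of non-zeros) are assumptions made for the example.

    # Hedged sketch: a plain orthogonal matching pursuit loop of the kind the
    # paper analyses; the bounds also cover kernel matching pursuit and sparse
    # kernel PCA, which are not reproduced here.
    import numpy as np

    def omp(X, y, n_nonzero):
        """Greedy OMP: select columns of X, refit least squares on the support."""
        residual, support = y.copy(), []
        coef = np.zeros(X.shape[1])
        for _ in range(n_nonzero):
            # Pick the column most correlated with the current residual.
            j = int(np.argmax(np.abs(X.T @ residual)))
            if j not in support:
                support.append(j)
            # Orthogonal projection: least squares on the selected columns.
            sol, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
            coef[:] = 0.0
            coef[support] = sol
            residual = y - X[:, support] @ sol
        return coef, support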


European Conference on Machine Learning | 2010

Exploration-exploitation of eye movement enriched multiple feature spaces for content-based image retrieval

Zakria Hussain; Alex Po Leung; Kitsuchart Pasupa; David R. Hardoon; Peter Auer; John Shawe-Taylor

In content-based image retrieval (CBIR) with relevance feedback we would like to retrieve relevant images based on their content features and the feedback given by users. In this paper we view CBIR as an Exploration-Exploitation problem and apply a kernel version of the LinRel algorithm to solve it. By using multiple feature extraction methods and utilising the feedback given by users, we adopt a strategy of multiple kernel learning to find a relevant feature space for the kernel LinRel algorithm. We call this algorithm LinRelMKL. Furthermore, when we have access to eye movement data of users viewing images we can enrich our (multiple) feature spaces by using a tensor kernel SVM. When learning in this enriched space we show that we can significantly improve the search results over the LinRel and LinRelMKL algorithms. Our results suggest that the use of exploration-exploitation with multiple feature spaces is an efficient way of constructing CBIR systems, and that when eye movement features are available, they should be used to help improve CBIR.
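A hedged sketch of an upper-confidence selection rule in the spirit of LinRel/LinUCB for relevance feedback: the system displays the unseen image with the highest predicted relevance plus an exploration bonus. The feature matrix F, the ridge parameter and the exploration constant c are assumptions; the paper's kernelised LinRel, the MKL weighting and the tensor-kernel eye-movement features are not reproduced here.

    # Hedged sketch: an upper-confidence exploration-exploitation rule for
    # relevance-feedback image retrieval. This is a LinUCB-style illustration,
    # not the paper's kernel LinRel or LinRelMKL algorithm.
    import numpy as np

    def select_next_image(F, shown, feedback, ridge=1.0, c=1.0):
        """F: (n_images, d) features; shown: indices already displayed;
        feedback: relevance scores for the shown images (e.g. 0/1 clicks)."""
        X, y = F[shown], np.asarray(feedback, dtype=float)
        A_inv = np.linalg.inv(X.T @ X + ridge * np.eye(F.shape[1]))
        theta = A_inv @ X.T @ y                        # ridge estimate of relevance
        scores = []
        for i in range(len(F)):
            if i in shown:
                scores.append(-np.inf)                 # do not repeat images
                continue
            x = F[i]
            mean = float(x @ theta)                    # exploitation term
            width = c * float(np.sqrt(x @ A_inv @ x))  # exploration term
            scores.append(mean + width)
        return int(np.argmax(scores))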


Bioinformatics and Bioengineering | 2013

Drug screening with Elastic-net multiple kernel learning

Kitsuchart Pasupa; Zakria Hussain; John Shawe-Taylor; Peter Willett

We apply Elastic-net Multiple Kernel Learning (MKL) to the MDL Drug Data Report (MDDR) database for the problem of drug screening. We show that combining a set of kernels constructed from fingerprint descriptors can significantly improve the accuracy of prediction compared with a Support Vector Machine trained on each kernel separately. To the best of our knowledge, this is the first application of MKL to the MDDR database for drug screening.
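A simplified sketch of the kernel-combination step: several precomputed fingerprint kernels are summed and an SVM is trained on the combination. Here the kernel weights are fixed and uniform purely for illustration; the paper learns them under an elastic-net (l1 plus l2) penalty, which this toy does not implement.

    # Hedged sketch: combining several precomputed fingerprint kernels and
    # training an SVM on the combination. The uniform weights are a placeholder
    # for the elastic-net-learned weights used in the paper.
    import numpy as np
    from sklearn.svm import SVC

    def combined_kernel(kernels, weights=None):
        """kernels: list of (n, n) Gram matrices built from different fingerprints."""
        if weights is None:
            weights = np.ones(len(kernels)) / len(kernels)
        return sum(w * K for w, K in zip(weights, kernels))

    def train_mkl_like(kernels, y):
        K = combined_kernel(kernels)
        return SVC(kernel="precomputed").fit(K, y)

    # At test time, pass the combined (n_test, n_train) cross-kernel:
    # clf.predict(combined_kernel(test_kernels))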


pp. 147-158 | 2010

Prediction with the SVM Using Test Point Margins

Süreyya Özögür-Akyüz; Zakria Hussain; John Shawe-Taylor

Support vector machines (SVMs) carry out binary classification by constructing a maximal margin hyperplane between the two classes of observed (training) examples and then classifying test points according to the half-spaces in which they reside (irrespective of the distances that may exist between the test examples and the hyperplane). Cross-validation involves finding the one SVM model, together with its optimal parameters, that minimizes the training error and generalizes well to future data. In contrast, in this chapter we collect all of the models found in the model selection phase and make predictions according to the model whose hyperplane achieves the maximum separation from a test point. This directly corresponds to the L∞ norm for choosing SVM models at the testing stage. Furthermore, we also investigate other more general techniques corresponding to different Lp norms and show how these methods allow us to avoid the complex and time-consuming paradigm of cross-validation. Experimental results demonstrate this advantage, showing significant decreases in computational time as well as competitive generalization error.
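A small sketch of the test-point-margin rule described above: every SVM fitted during model selection is kept, and each test point is labelled by the model whose decision function is largest in absolute value at that point. The hyperparameter grid is an assumption, and the question of how margins are normalised across differently parameterised models is glossed over in this toy.

    # Hedged sketch: keep every SVM fitted during model selection and, for each
    # test point, predict with the model whose hyperplane is farthest from that
    # point (the L-infinity rule described above). Labels are assumed to be +/-1.
    import numpy as np
    from sklearn.svm import SVC

    def fit_all_models(X, y, Cs=(0.1, 1.0, 10.0), gammas=(0.01, 0.1, 1.0)):
        return [SVC(C=C, gamma=g, kernel="rbf").fit(X, y)
                for C in Cs for g in gammas]

    def predict_by_test_margin(models, X_test):
        # decision_function gives a signed score per model and test point.
        scores = np.stack([m.decision_function(X_test) for m in models])  # (models, n)
        best = np.argmax(np.abs(scores), axis=0)      # model with the largest margin
        return np.sign(scores[best, np.arange(X_test.shape[0])])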


European Conference on Machine Learning | 2009

Kernel Polytope Faces Pursuit

Tom Diethe; Zakria Hussain

Polytope Faces Pursuit (PFP) is a greedy algorithm that approximates the sparse solutions recovered by ℓ1 regularised least-squares (Lasso) [4,10] in a similar vein to (Orthogonal) Matching Pursuit (OMP) [16]. The algorithm is based on the geometry of the polar polytope, where at each step a basis function is chosen by finding the maximal vertex using a path-following method. The algorithmic complexity is of a similar order to OMP whilst being able to solve problems known to be hard for (O)MP. Matching Pursuit was extended to build kernel-based solutions to machine learning problems, resulting in the sparse regression algorithm Kernel Matching Pursuit (KMP) [17]. We develop a new algorithm to build sparse kernel-based solutions using PFP, which we call Kernel Polytope Faces Pursuit (KPFP). We show the usefulness of this algorithm by providing a generalisation error bound [7] that takes into account a natural regression loss, together with experimental results on several benchmark datasets.
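Since PFP approximates the ℓ1 regularised least-squares (Lasso) solution, a quick way to see the kind of sparse kernel-based regressor KPFP targets is to solve that Lasso problem directly over kernel columns, as sketched below. This is the baseline objective, not an implementation of KPFP's polar-polytope vertex selection; the RBF kernel and the regularisation value are assumptions.

    # Hedged sketch: sparse kernel regression via the Lasso, i.e. the l1
    # problem that (kernel) Polytope Faces Pursuit greedily approximates.
    # This solves the baseline objective, not the KPFP path-following algorithm.
    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.metrics.pairwise import rbf_kernel

    def sparse_kernel_regression(X_train, y_train, X_test, alpha=0.01, gamma=0.5):
        K_train = rbf_kernel(X_train, X_train, gamma=gamma)    # kernel basis functions
        model = Lasso(alpha=alpha, max_iter=50_000).fit(K_train, y_train)
        K_test = rbf_kernel(X_test, X_train, gamma=gamma)
        support = np.flatnonzero(model.coef_)                  # sparse set of centres
        return K_test @ model.coef_ + model.intercept_, support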


Conformal Prediction for Reliable Machine Learning: Theory, Adaptations and Applications | 2014

Chapter 7 – Model Selection

David R. Hardoon; Zakria Hussain; John Shawe-Taylor

We investigate the issue of model selection and the use of the nonconformity (strangeness) measure in batch learning. Using the nonconformity measure we propose a new training algorithm that helps avoid the need for Cross-Validation or Leave-One-Out model selection strategies. We provide a new generalization error bound using the notion of nonconformity to upper bound the loss of each test example and show that our proposed approach is comparable to standard model selection methods, but with theoretical guarantees of success and faster convergence. We demonstrate our novel model selection technique using the Support Vector Machine.
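A simplified sketch of nonconformity-driven model selection: each candidate SVM is scored by an aggregate margin-based strangeness on the training set and the least strange model is kept. The particular score (-y·f(x)), the aggregation and the hyperparameter grid are assumptions; the chapter couples nonconformity with a generalization error bound rather than this raw training aggregate.

    # Hedged sketch: rank candidate SVMs by an aggregate margin-based
    # nonconformity (strangeness) score, as a simplified stand-in for the
    # chapter's bound-based selection rule. Labels are assumed to be +/-1.
    import numpy as np
    from sklearn.svm import SVC

    def nonconformity_model_selection(X, y, Cs=(0.1, 1.0, 10.0)):
        best_C, best_score = None, np.inf
        for C in Cs:
            clf = SVC(C=C, kernel="rbf").fit(X, y)
            strangeness = -y * clf.decision_function(X)   # large => nonconforming
            score = np.mean(np.maximum(strangeness, 0))   # hinge-like aggregate
            if score < best_score:
                best_C, best_score = C, score
        return best_C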


International Conference on Neural Information Processing | 2007

Using Generalization Error Bounds to Train the Set Covering Machine

Zakria Hussain; John Shawe-Taylor

In this paper we eliminate the need for parameter estimation associated with the set covering machine (SCM) by directly minimizing generalization error bounds. Firstly, we consider a sub-optimal greedy heuristic algorithm termed the bound set covering machine (BSCM). Next, we propose the branch and bound set covering machine (BBSCM) and prove that it finds a classifier producing the smallest generalization error bound. We further justify the BBSCM algorithm empirically with a heuristic relaxation, called BBSCM(τ), which guarantees a solution whose bound is within a factor τ of the optimal. Experiments comparing against the support vector machine (SVM) and SCM algorithms demonstrate that the approaches proposed can lead to some or all of the following: 1) faster running times, 2) sparser classifiers and 3) competitive generalization error, all while avoiding the need for parameter estimation.
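A hedged sketch of a set covering machine trained without a validation set: single-feature threshold rules are added greedily and the stopping point is chosen by a crude complexity-penalised training-error proxy standing in for the generalization bounds minimized in the paper. The candidate-rule construction, the usefulness penalty and the proxy criterion are all assumptions made for the illustration.

    # Hedged sketch: a greedy set covering machine over single-feature
    # threshold rules, with a placeholder complexity-penalised criterion
    # instead of the paper's sample-compression generalization bounds.
    import numpy as np

    def fit_scm(X, y, penalty=1.0, max_rules=10):
        """y in {+1, -1}. Returns a list of (feature, threshold, direction) rules;
        a point is predicted +1 only if it satisfies every rule (a conjunction)."""
        n, d = X.shape
        pos, neg = y == 1, y == -1
        # Candidate rules: x[:, j] <= t (direction +1) or x[:, j] > t (direction -1).
        candidates = [(j, t, s) for j in range(d)
                      for t in np.unique(X[:, j]) for s in (+1, -1)]

        def satisfies(rule, Xm):
            j, t, s = rule
            return (Xm[:, j] <= t) if s == +1 else (Xm[:, j] > t)

        uncovered = neg.copy()
        rules, best_rules, best_score = [], [], np.inf
        for _ in range(max_rules):
            # Usefulness = newly rejected negatives minus penalised rejected positives.
            def usefulness(rule):
                rejected = ~satisfies(rule, X)
                return (rejected & uncovered).sum() - penalty * (rejected & pos).sum()
            rule = max(candidates, key=usefulness)
            rules.append(rule)
            uncovered &= satisfies(rule, X)
            # Placeholder "bound": training error plus a term in the rule count.
            pred = np.where(np.all([satisfies(q, X) for q in rules], axis=0), 1, -1)
            score = np.mean(pred != y) + np.sqrt((len(rules) + 1) * np.log(n) / n)
            if score < best_score:
                best_score, best_rules = score, list(rules)
            if not uncovered.any():
                break
        return best_rules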


arXiv: Information Retrieval | 2010

Pinview: Implicit Feedback in Content-Based Image Retrieval

Peter Auer; Zakria Hussain; Samuel Kaski; Arto Klami; Jussi Kujala; Jorma Laaksonen; Alex Po Leung; Kitsuchart Pasupa; John Shawe-Taylor

Collaboration


Dive into Zakria Hussain's collaborations.

Top Co-Authors

Kitsuchart Pasupa
King Mongkut's Institute of Technology Ladkrabang

Tom Diethe
University College London

Ville Viitaniemi
Helsinki University of Technology