
Publication


Featured research published by Ibad Kureshi.


distributed simulation and real-time applications | 2015

Towards An Info-Symbiotic Decision Support System for Disaster Risk Management

Ibad Kureshi; Georgios K. Theodoropoulos; Eleni Mangina; Gregory M. P. O'Hare; John Roche

This paper outlines a framework for an info-symbiotic modelling system that uses cyber-physical sensors to assist in decision-making. Using a dynamic data-driven simulation approach, the system can help identify target areas and allocate resources in emergency situations. Taking different natural disasters as exemplars, we show how cyber-physical sensors can enhance ground-level intelligence and aid the creation of dynamic models that capture the state of human casualties. Through a virtual command & control centre communicating with sensors in the field, up-to-date information about ground realities can be incorporated in a dynamic feedback loop. Combined with other information (e.g. weather models), a complex and rich model can be created. The framework adaptively manages a heterogeneous collection of data resources and uses agent-based models to create what-if scenarios in order to determine the best course of action.
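
As a toy illustration of the feedback loop described above, the Python sketch below assimilates invented sensor readings into a simple state model, scores what-if allocation scenarios, and picks an action each cycle. All zone names, readings and the scoring rule are invented for illustration; the paper's framework is far richer.

    import random

    # Estimated casualty severity per zone; zone names are invented.
    state = {"zoneA": 0.2, "zoneB": 0.5}

    def sensor_readings():
        # Stand-in for cyber-physical sensors reporting from the field.
        return {z: min(1.0, s + random.uniform(-0.1, 0.2))
                for z, s in state.items()}

    def what_if(scenario_state, action):
        # Score an allocation scenario: expected severity removed.
        return scenario_state[action] * 0.5

    for cycle in range(3):
        state.update(sensor_readings())                      # assimilate field data
        best = max(state, key=lambda z: what_if(state, z))   # agent-style what-if
        state[best] *= 0.5                                   # apply chosen action
        print(f"cycle {cycle}: dispatch resources to {best}; state = {state}")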


international symposium on computing and networking | 2013

Combining AiG Agents with Unicore Grid for Improvement of User Support

Kamil Lysik; Katarzyna Wasielewska; Marcin Paprzycki; Maria Ganzha; John Brennan; Violeta Holmes; Ibad Kureshi

Grid computing has, in recent history, become an invaluable tool for scientific research. As grid middleware has matured, considerations have extended beyond core functionality towards greater usability. The aim of this paper is to consider how resources available to users across the Queensgate Grid (QGG) at the University of Huddersfield (UoH) could be accessed with the help of an ontology-driven interface. The interface is part of the Agent in Grid (AiG) project under development at the Systems Research Institute of the Polish Academy of Sciences (SRIPAS), and is to be customized and integrated with the UoH computing environment. The overarching goal is to help users of the grid infrastructure. The secondary goals are: (i) to improve the performance of the system, and (ii) to equalize the distribution of work among resources. Results presented in this paper include the new ontology being developed for the grid at the UoH, and a description of issues encountered while developing a scenario in which a user searches for an appropriate resource within the Unicore grid middleware and submits a job to be executed on that resource.
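
As a minimal illustration of the ontology-driven resource-selection scenario described above, the sketch below (using the rdflib Python library) describes two grid resources as RDF and answers a user requirement with a SPARQL query. The ontology terms and resource names are invented, not the actual AiG/QGG ontology.

    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/grid#")   # invented ontology namespace
    g = Graph()
    for name, cores in [("eridani", 128), ("sol", 16)]:
        node = EX[name]
        g.add((node, RDF.type, EX.ComputeResource))
        g.add((node, EX.cores, Literal(cores)))

    # User scenario: find resources offering at least 32 cores for a job.
    q = """
    PREFIX ex: <http://example.org/grid#>
    SELECT ?r WHERE { ?r a ex:ComputeResource ; ex:cores ?c . FILTER(?c >= 32) }
    """
    for row in g.query(q):
        print("candidate resource:", row.r)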


international conference on conceptual structures | 2015

Developing High Performance Computing Resources for Teaching Cluster and Grid Computing Courses

Violeta Holmes; Ibad Kureshi

High-Performance Computing (HPC) and the ability to process large amounts of data are of paramount importance for UK business and the economy, as outlined by Rt Hon David Willetts MP at the HPC and Big Data conference in February 2014. However, there is a shortage of skills and available training in HPC to prepare and expand the workforce for HPC and Big Data research and development. Currently, HPC skills are acquired mainly by students and staff taking part in HPC-related research projects, on MSc courses, and at dedicated training centres such as Edinburgh University's EPCC. Few UK universities teach HPC, Cluster and Grid Computing courses at the undergraduate level. To address the issue of skills shortages in HPC, it is essential to provide teaching and training as part of both postgraduate and undergraduate courses. The design and development of such courses is challenging, since the technologies and software in the fields of large-scale distributed systems such as Cluster, Cloud and Grid computing are undergoing continuous change. Students completing HPC courses should be proficient in these evolving technologies and equipped with practical and theoretical skills for future jobs in this fast-developing area. In this paper we present our experience in developing HPC, Cluster and Grid modules, including a review of existing HPC courses offered at UK universities. The topics covered in the modules are described, as well as the coursework project based on practical laboratory work. We conclude with an evaluation based on our experience over the last ten years in developing and delivering HPC modules on undergraduate courses, with suggestions for future work.


international conference on big data | 2014

Using Hadoop to Implement a Semantic Method of Assessing the Quality of Research Medical Datasets

Stephen Bonner; Grigoris Antoniou; Laura Moss; Ibad Kureshi; David Corsar; Ilias Tachmazidis

In this paper a system for storing and querying medical RDF data using Hadoop is developed. This approach enables us to create an inherently parallel framework that scales the workload across a cluster. Unlike existing solutions, our framework uses highly optimised joining strategies to enable the completion of eight separate SPARQL queries, comprising over eighty distinct joins, in only two Map/Reduce iterations. Results are presented comparing an optimised version of our solution against Jena TDB, demonstrating the superior performance of our system and its viability for assessing the quality of medical data.
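
As a minimal sketch of the general idea, the plain-Python code below emulates one Map/Reduce iteration of a reduce-side join over RDF triples. The paper's optimised multi-join strategy is not reproduced here, and the predicates and triples are invented for illustration.

    from collections import defaultdict

    # Invented medical RDF triples: (subject, predicate, object).
    triples = [
        ("patient1", "hasReading", "reading1"),
        ("reading1", "hasValue", "98.6"),
        ("reading1", "recordedBy", "device7"),
    ]

    def map_phase(triple):
        s, p, o = triple
        # Key every triple by the term we want to join on (here: the reading).
        if p == "hasReading":
            yield (o, ("left", s))           # join key = object
        elif p in ("hasValue", "recordedBy"):
            yield (s, ("right", (p, o)))     # join key = subject

    def reduce_phase(key, values):
        left = [v for tag, v in values if tag == "left"]
        right = [v for tag, v in values if tag == "right"]
        for patient in left:
            for p, o in right:
                yield (patient, p, o)        # joined result

    groups = defaultdict(list)               # stands in for the shuffle phase
    for t in triples:
        for k, v in map_phase(t):
            groups[k].append(v)

    for k, vs in groups.items():
        for row in reduce_phase(k, vs):
            print(row)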


distributed simulation and real-time applications | 2014

CDES: An Approach to HPC Workload Modelling

John Brennan; Ibad Kureshi; Violeta Holmes

Computational science and complex system administration rely on being able to model user interactions. When managing HPC, HTC and grid systems, user workloads, specifically their job submission behaviour, are an important metric when designing systems or scheduling algorithms. Most simulators are either inflexible or tied to proprietary scheduling systems. For system administrators, being able to model how a scheduling algorithm behaves, or how modifying system configurations can affect job completion rates, is critical. Within computer science research, many algorithms are presented with no real description or verification of behaviour. In this paper we present the Cluster Discrete Event Simulator (CDES) as a strong candidate for HPC workload simulation. Built around an open framework, CDES can take system definitions and multi-platform real usage logs, and can be interfaced with any scheduling algorithm through the use of an API. CDES has been tested against three years of usage logs from a production-level HPC system and verified to greater than 95% accuracy.
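
The sketch below shows, in plain Python, the kind of event loop at the heart of such a simulator: jobs from a trace of (submit time, runtime, cores) are replayed through a first-come-first-served scheduler against a fixed core count. CDES itself is richer (system definitions, real log formats, a scheduler API); the job trace here is invented, and jobs are assumed to fit within the total core count.

    import heapq

    # Invented toy trace: (submit_time, runtime_seconds, cores_required).
    jobs = [(0, 10, 4), (1, 5, 8), (2, 3, 4)]
    TOTAL_CORES = 8

    events = [(t, "submit", (run, cores)) for t, run, cores in jobs]
    heapq.heapify(events)
    queue, free = [], TOTAL_CORES

    while events or queue:
        if events:
            clock, kind, payload = heapq.heappop(events)
            if kind == "submit":
                queue.append(payload)
            else:                               # "finish": release cores
                free += payload
        # FCFS: start the head-of-queue job whenever enough cores are free.
        while queue and queue[0][1] <= free:
            run, cores = queue.pop(0)
            free -= cores
            heapq.heappush(events, (clock + run, "finish", cores))
            print(f"t={clock}: started job ({run}s on {cores} cores)")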


international conference on cluster computing | 2012

Hybrid Computer Cluster with High Flexibility

Shuo Liang; Violeta Holmes; Ibad Kureshi

In this paper we present a cluster middleware designed to implement a Linux-Windows hybrid HPC cluster, which not only retains the characteristics of both operating systems but also accepts and schedules jobs in both environments. Beowulf clusters have become an economical and practical choice for small- and medium-sized institutions to provide High Performance Computing (HPC) resources. These HPC resources are required for running simulations, image rendering and other calculations, and must support software requiring a specific operating system. To support such software, small-scale computer clusters would have to be divided into two or more clusters if each is to run a single operating system. x86 virtualisation technology would help run multiple operating systems on one computer, but only with recent hardware that many legacy Beowulf clusters do not have. To aid institutions who rely on legacy, non-virtualisation-supported facilities rather than high-end HPC resources, we have developed and deployed a bi-stable hybrid system built around Linux CentOS 5.5 with the improved OSCAR middleware, and Windows Server 2008 with Windows HPC 2008 R2. This hybrid cluster is utilised as part of the University of Huddersfield campus grid.


international conference on big data | 2016

GFP-X: A parallel approach to massive graph comparison using Spark

Stephen Bonner; John Brennan; Georgios K. Theodoropoulos; Ibad Kureshi; Andrew Stephen McGough

The problem of how to compare empirical graphs is an area of great interest within the field of network science. The ability to compare graphs accurately but efficiently has a significant impact in areas such as temporal graph evolution, anomaly detection and protein comparison. The comparison problem is compounded when working with massive graphs containing millions of vertices and edges. This paper introduces a parallel, feature-extraction-based approach for the efficient comparison of large unlabelled graph datasets using Apache Spark. The approach produces a ‘Graph Fingerprint’ which represents both vertex-level and global topological features of a graph. By using Spark we are able to efficiently compare graphs considered unmanageably large by other approaches. The runtime of the approach is shown to scale sub-linearly with the size and complexity of the graphs being fingerprinted. Importantly, the approach is shown not only to be comparable to existing approaches but, when comparing topology and size, to be more sensitive at detecting variation between graphs.
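
As a minimal PySpark sketch of feature-based fingerprinting in this spirit, the code below computes one per-vertex feature (degree) in parallel and aggregates it into a small fixed-length signature on which two graphs could be compared. GFP-X itself extracts a much richer set of vertex-level and global topological features; the edge list here is a toy example.

    from pyspark import SparkContext

    sc = SparkContext(appName="fingerprint-sketch")

    # Toy undirected edge list, invented for illustration.
    edges = sc.parallelize([(1, 2), (2, 3), (3, 1), (3, 4)])

    # Per-vertex feature: degree, computed in parallel.
    degrees = (edges.flatMap(lambda e: [(e[0], 1), (e[1], 1)])
                    .reduceByKey(lambda a, b: a + b))

    # 'Fingerprint': a fixed-length vector summarising the degree distribution.
    s = degrees.map(lambda kv: kv[1]).stats()
    fingerprint = [degrees.count(), s.mean(), s.stdev(), s.min(), s.max()]
    print(fingerprint)
    sc.stop()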


international conference on computational science | 2018

A Conceptual Framework for Social Movements Analytics for National Security

Pedro Cárdenas; Georgios Theodoropoulos; Boguslaw Obara; Ibad Kureshi

Social media tools have changed our world due to the way they convey information between individuals; this has led to many social movements either starting on social media or being organised and managed through this medium. At times, however, certain human-induced events can trigger Human Security Threats such as Personal Security, Health Security, Economic Security or Political Security. The aim of this paper is to propose a holistic Data Analysis Framework for examining Social Movements and detecting pernicious threats to National Security interests. The proposed framework focuses on three main stages of an event (Detonating Event, Warning Period and Crisis Interpretation) to provide timely additional insights, enabling policy makers, first responders and authorities to determine the best course of action. The paper also outlines the computational techniques that could be utilised to achieve in-depth analysis at each stage. The robustness and effectiveness of the framework are demonstrated by dissecting Warning Period scenarios from real-world events, where increases in Human Security concerns were key to identifying likely threats to National Security.


international conference on big data | 2017

Evaluating the quality of graph embeddings via topological feature reconstruction

Stephen Bonner; John Brennan; Ibad Kureshi; Georgios K. Theodoropoulos; Andrew Stephen McGough; Boguslaw Obara

In this paper we study three state-of-the-art, but competing, approaches for generating graph embeddings using unsupervised neural networks. Graph embeddings aim to discover the ‘best’ representation for a graph automatically and have been applied to graphs from numerous domains, including social networks. We evaluate their effectiveness at capturing a good representation of a graph's topological structure by using the embeddings to predict a series of topological features at the vertex level. We hypothesise that an ‘ideal’ high-quality graph embedding should capture key parts of the graph's topology; thus we should be able to use it to predict common measures of the topology, for example vertex centrality. This could also be used to better understand which topological structures are truly being captured by the embeddings. We first review these three graph embedding techniques and then evaluate how close they are to being ‘ideal’. We provide a framework, with extensive experimental evaluation on empirical and synthetic datasets, to assess the effectiveness of several approaches at creating graph embeddings which capture detailed topological structure.
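
A minimal sketch of this evaluation idea is shown below: given vertex embeddings, test whether a simple regression model can recover a topological feature such as degree centrality. The paper studies unsupervised neural embeddings; as a self-contained stand-in, this sketch uses a spectral (SVD) embedding of the adjacency matrix, with networkx and scikit-learn. A high R^2 would suggest the embedding captures that feature.

    import networkx as nx
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    G = nx.barabasi_albert_graph(500, 3, seed=42)   # synthetic test graph
    A = nx.to_numpy_array(G)

    # Stand-in embedding: top-16 singular vectors of the adjacency matrix.
    U, S, _ = np.linalg.svd(A)
    X = U[:, :16] * S[:16]

    # Target topological feature: degree centrality per vertex.
    dc = nx.degree_centrality(G)
    y = np.array([dc[v] for v in G.nodes()])

    scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2")
    print("mean R^2 for degree centrality:", scores.mean())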


Software Architecture for Big Data and the Cloud | 2017

Exploring the Evolution of Big Data Technologies

Stephen Bonner; Ibad Kureshi; John Brennan; Georgios K. Theodoropoulos

This chapter explores the rise of “big data” and the computational strategies, both hardware and software, that have evolved to deal with this paradigm. Starting with the concept of data-intensive computing, different facets of data processing, such as Map/Reduce, machine learning and streaming data, are explored. The evolution of frameworks such as Hadoop and Spark is outlined, and the modular offerings within these frameworks are compared through a detailed analysis of their different functionalities and features. The hardware considerations required to move from compute-intensive to data-intensive computing are outlined, along with the impact of cloud computing on big data. The chapter concludes with upcoming developments in big data and how this computing paradigm fits into the road to exascale.

Collaboration


Dive into Ibad Kureshi's collaboration.

Top Co-Authors

Violeta Holmes, University of Huddersfield
Shuo Liang, University of Huddersfield
David J. Cooke, University of Huddersfield
Yvonne James, University of Huddersfield
Carl Pulley, University of Huddersfield