Publication


Featured research published by Ashiq Anjum.


IEEE/ACM International Conference on Utility and Cloud Computing | 2013

Cloud Based Big Data Analytics for Smart Future Cities

Zaheer Abbas Khan; Ashiq Anjum; Saad Liaquat Kiani

ICT is becoming increasingly pervasive in urban environments and provides the necessary basis for the sustainability and resilience of smart future cities. ICT tools for a smart city often deal with separate application domains, e.g. land use, transport and energy, and rarely provide an integrated information perspective to deal with the sustainability and socioeconomic growth of the city. Smart cities can benefit from such information using Big, and often real-time, cross-thematic data collection, processing, integration and sharing through interoperable services deployed in a Cloud environment. However, such information utilisation requires appropriate software tools, services and technologies to collect, store, analyse and visualise large amounts of data from the city environment, citizens and the various departments and agencies at city scale. This paper presents a theoretical perspective on Big data processing and analysis for smart cities by proposing a Cloud-based analysis service that can be further developed to generate information intelligence and support decision making in the context of smart future cities.


Journal of Grid Computing | 2007

Data Intensive and Network Aware (DIANA) Grid Scheduling

Richard McClatchey; Ashiq Anjum; Heinz Stockinger; Arshad Ali; Ian Willers; M. Thomas

In Grids, scheduling decisions are often made on the basis of jobs being either data or computation intensive: in data-intensive situations jobs may be pushed to the data, and in computation-intensive situations data may be pulled to the jobs. This kind of scheduling, in which there is no consideration of network characteristics, can lead to performance degradation in a Grid environment and may result in large processing queues and job execution delays due to site overloads. In this paper we describe a Data Intensive and Network Aware (DIANA) meta-scheduling approach, which takes into account data, processing power and network characteristics when making scheduling decisions across multiple sites. Through a practical implementation on a Grid testbed, we demonstrate that queue and execution times of data-intensive jobs can be significantly improved when we introduce our proposed DIANA scheduler. The basic scheduling decisions are dictated by a weighting factor for each potential target location, calculated as a function of network characteristics, processing cycles, and data location and size. The job scheduler provides a global ranking of the computing resources and then selects an optimal one on the basis of this overall access and execution cost. The DIANA approach considers the Grid as a combination of active network elements and takes network characteristics as a first-class criterion in the scheduling decision matrix, along with computation and data. The scheduler can then make informed decisions by taking into account the changing state of the network, the locality and size of the data, and the pool of available processing cycles.
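To make the weighting concrete, here is a minimal sketch of such a cost-based site ranking; the metric names, weights and numbers are invented for illustration and are not the cost model of the actual DIANA scheduler.

```python
# Hypothetical illustration of a DIANA-style weighted site ranking.
# Metric names, weights and values are assumptions for this sketch,
# not the coefficients used by the actual DIANA scheduler.

def site_cost(network_cost, compute_cost, data_transfer_cost,
              w_net=0.4, w_cpu=0.3, w_data=0.3):
    """Combine per-site costs into a single weighted score (lower is better)."""
    return w_net * network_cost + w_cpu * compute_cost + w_data * data_transfer_cost

def rank_sites(sites):
    """Return candidate sites ordered by overall access and execution cost."""
    return sorted(sites, key=lambda s: site_cost(s["net"], s["cpu"], s["data"]))

sites = [
    {"name": "site-A", "net": 0.2, "cpu": 0.5, "data": 0.1},
    {"name": "site-B", "net": 0.6, "cpu": 0.2, "data": 0.3},
]
best = rank_sites(sites)[0]  # site with the lowest combined cost
print(best["name"])
```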


IEEE International Conference on Cloud Computing Technology and Science | 2015

Towards cloud based big data analytics for smart future cities

Zaheer Abbas Khan; Ashiq Anjum; Kamran Soomro; Muhammad Atif Tahir

A large amount of land-use, environment, socio-economic, energy and transport data is generated in cities. An integrated perspective on managing and analysing such big data can answer a number of science, policy, planning, governance and business questions and support decision making in enabling a smarter environment. This paper presents a theoretical and experimental perspective on smart-cities-focused big data management and analysis by proposing a cloud-based analytics service. A prototype has been designed and developed to demonstrate the effectiveness of the analytics service for big data analysis; it has been implemented using both Hadoop and Spark, and the results of the two are compared. The service analyses Bristol Open Data by identifying correlations between selected urban environment indicators. Data pertaining to quality of life, mainly crime and safety as well as economy and employment, was analysed from the data catalogue to measure the indicators' spread over the years and to assess positive and negative trends.
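As a rough sketch of the correlation step, assuming PySpark and a hypothetical CSV extract of the indicators; the file name and column names below are invented, not taken from the Bristol Open Data catalogue.

```python
# Minimal sketch of correlating two urban indicators with PySpark.
# "indicators.csv", "crime_rate" and "employment_rate" are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("indicator-correlation").getOrCreate()

# Hypothetical file with one row per area/year and one column per indicator.
df = spark.read.csv("indicators.csv", header=True, inferSchema=True)

# Pearson correlation between two selected urban environment indicators.
corr = df.stat.corr("crime_rate", "employment_rate")
print(f"correlation(crime_rate, employment_rate) = {corr:.3f}")
```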


International Conference on Utility and Cloud Computing | 2011

An Architecture for Integrated Intelligence in Urban Management Using Cloud Computing

Zaheer Abbas Khan; David Ludlow; Richard McClatchey; Ashiq Anjum

With the emergence of new methodologies and technologies, it has now become possible to manage large amounts of environmental sensing data and apply new integrated computing models to acquire information intelligence. This paper advocates the application of cloud capacity to support the information, communication and decision-making needs of a wide variety of stakeholders in the complex business of managing urban and regional development. The complexity lies in the interactions and impacts embodied in the concept of the urban ecosystem at various governance levels. This highlights the need for more effective integrated environmental management systems. This paper offers a user-orientated approach based on the requirements for effective management of the urban ecosystem and the potential contributions that can be supported by the cloud computing community. Furthermore, the commonality of the drivers of change at the urban level offers the opportunity for the cloud computing community to develop generic solutions that can serve the needs of hundreds of cities across Europe and indeed globally.


International Database Engineering and Applications Symposium | 2007

The Requirements for Ontologies in Medical Data Integration: A Case Study

Ashiq Anjum; Peter Bloodsworth; Andrew Branson; Tamas Hauer; Richard McClatchey; Kamran Munir; Dmitry Rogulin; Jetendr Shamdasani

Evidence-based medicine is critically dependent on three sources of information: a medical knowledge base, the patient's medical record and knowledge of available resources, including, where appropriate, clinical protocols. Patient data is often scattered across a variety of databases and may, in a distributed model, be held across several disparate repositories. Consequently, addressing the needs of an evidence-based medicine community presents issues of biomedical data integration, clinical interpretation and knowledge management. This paper outlines how the Health-e-Child project has approached the challenge of requirements specification for (bio-)medical data integration, from the level of cellular data, through disease, to that of patient and population. The approach is illuminated through the requirements elicitation and analysis of Juvenile Idiopathic Arthritis (JIA), one of three diseases being studied in the EC-funded Health-e-Child project.


Concurrency and Computation: Practice and Experience | 2015

Approaching the Internet of Things (IoT): a modelling, analysis and abstraction framework

Ahsan Ikram; Ashiq Anjum; Richard Hill; Nikolaos Antonopoulos; Lu Liu; Stelios Sotiriadis

The evolution of communication protocols, sensory hardware, mobile and pervasive devices, alongside social and cyber-physical networks, has made the Internet of Things (IoT) an interesting concept with inherent complexities as it is realised. Such complexities range from addressing mechanisms to information management, and from communication protocols to presentation and interaction within the IoT. Although existing Internet and communication models can be extended to provide the basis for realising the IoT, they may not be sufficiently capable of handling the new paradigms that the IoT introduces, such as social communities, smart spaces, privacy and personalisation of devices and information, modelling and reasoning. With interaction models in the IoT moving from the orthodox service consumption model towards an interactive conversational model, nature-inspired computational models appear to be candidate representations. Specifically, this research contends that the reactive and interactive nature of the IoT makes chemical reaction-inspired approaches particularly well suited to such requirements. This paper presents a chemical reaction-inspired computational model using the concepts of graphs and reflection, which attempts to address the complexities associated with the visualisation, modelling, interaction, analysis and abstraction of information in the IoT.
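For flavour only, the sketch below illustrates the general idea of chemical-reaction-inspired computation using classic Gamma-style multiset rewriting; it is not the graph-and-reflection model proposed in the paper.

```python
# A rough flavour of chemical-reaction-inspired computation (Gamma-style
# multiset rewriting), not the paper's graph-and-reflection model:
# a "reaction" consumes pairs of molecules until no rule applies.

def react(solution, condition, action):
    """Repeatedly apply a binary reaction rule until the solution is stable."""
    solution = list(solution)
    changed = True
    while changed:
        changed = False
        for i in range(len(solution)):
            for j in range(len(solution)):
                if i != j and condition(solution[i], solution[j]):
                    a, b = solution[i], solution[j]
                    solution = [x for k, x in enumerate(solution) if k not in (i, j)]
                    solution.extend(action(a, b))
                    changed = True
                    break
            if changed:
                break
    return solution

# Example: the maximum of a multiset emerges as smaller molecules dissolve.
print(react([3, 1, 4, 1, 5], lambda a, b: a >= b, lambda a, b: [a]))  # [5]
```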


IEEE Transactions on Nuclear Science | 2005

Distributed computing grid experiences in CMS

Julia Andreeva; Ashiq Anjum; Ta Barrass; D. Bonacorsi; J. Bunn; Paolo Capiluppi; Marco Corvo; N. Darmenov; N. De Filippis; F. Donno; G. Donvito; Giulio Eulisse; A. Fanfani; F. Fanzago; A. Filine; C. Grandi; Jose M Hernandez; V. Innocente; A. Jan; S. Lacaprara; I. Legrand; S. Metson; H. B. Newman; D. M. Newbold; A. Pierro; Lucia Silvestris; Conrad Steenberg; Heinz Stockinger; L. Taylor; M. Thomas

The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004, CMS undertook a large-scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of the startup rate. Its goals were: to run CMS event reconstruction at CERN for a sustained period at a 25 Hz input rate; to distribute the data to several regional centers; and to enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access to the data from anywhere in the world, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user-friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure and the current development of the CMS analysis system.


IEEE Transactions on Services Computing | 2018

An Inter-Cloud Meta-Scheduling (ICMS) Simulation Framework: Architecture and Evaluation

Stelios Sotiriadis; Nik Bessis; Ashiq Anjum; Rajkumar Buyya

Inter-cloud is an approach that facilitates scalable resource provisioning across multiple cloud infrastructures. In this paper, we focus on the performance optimization of Infrastructure as a Service (IaaS) using the meta-scheduling paradigm to achieve improved job scheduling across multiple clouds. We propose a novel inter-cloud job scheduling framework and implement policies to optimize the performance of participating clouds. The framework, named Inter-Cloud Meta-Scheduling (ICMS), is based on a novel message exchange mechanism that allows optimization of job scheduling metrics. The resulting system offers improved flexibility, robustness and decentralization. We implemented a toolkit named “Simulating the Inter-Cloud” (SimIC) to support the design and implementation of the different inter-cloud entities and policies in the ICMS framework. An experimental analysis of job executions in the inter-cloud is presented for a number of parameters such as job execution, makespan and turnaround times. The results highlight that the overall performance of individual clouds for the selected parameters and configuration is improved when these are brought together under the proposed ICMS framework.
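For reference, here is a minimal sketch of the two headline metrics under the common definitions assumed here (turnaround as finish minus submission; makespan as the span from earliest submission to latest finish); the job times are invented and unrelated to the SimIC experiments.

```python
# Minimal sketch of two common scheduling metrics; values are invented.
jobs = [
    # (submission time, finish time), in arbitrary time units
    (0.0, 4.0),
    (1.0, 3.5),
    (2.0, 7.0),
]

# Turnaround time per job: completion relative to submission.
turnaround_times = [finish - submit for submit, finish in jobs]

# Makespan: latest finish minus earliest submission across all jobs.
makespan = max(f for _, f in jobs) - min(s for s, _ in jobs)

print(turnaround_times)  # [4.0, 2.5, 5.0]
print(makespan)          # 7.0
```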


Journal of Systems and Software | 2013

Federated broker system for pervasive context provisioning

Saad Liaquat Kiani; Ashiq Anjum; Michael Knappmeyer; Nik Bessis; Nikolaos Antonopoulos

Software systems that provide context-awareness-related functions in pervasive computing environments are gaining momentum due to emerging applications, architectures and business models. In most context-aware systems, a central broker performs the functions of context acquisition, processing, reasoning and provisioning to facilitate context-consuming applications, but demonstrations of such prototypical systems are limited to small, focussed domains. In order to develop modern context-aware systems that can accommodate emerging pervasive/ubiquitous computing scenarios, are easily manageable, and are administratively and geographically scalable, it is desirable to have multiple brokers in the system, divided into administrative, network, geographic, contextual or load-based domains. Context providers and consumers may be configured to interact only with their nearest, most relevant or most convenient broker. This setup demands inter-broker federation so that providers and consumers attached to different brokers can interact seamlessly, but such a federation has not previously been proposed for context-aware systems. This article analyses the limiting factors in existing context-aware systems, postulates the design and functional requirements that modern context-aware systems need to accommodate, and presents a federated-broker-based architecture for the provisioning of contextual information over large geographical and network spans.
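A minimal sketch of the federation idea follows, with invented class and method names rather than the article's actual interfaces: a broker delivers events to its locally attached consumers and forwards them once to peer brokers so that consumers attached elsewhere also receive them.

```python
# Minimal sketch of inter-broker federation; class and method names are
# invented for illustration and do not reflect the article's interfaces.

class Broker:
    def __init__(self, name):
        self.name = name
        self.local_consumers = {}   # topic -> list of callbacks
        self.peers = []             # federated peer brokers

    def subscribe(self, topic, callback):
        self.local_consumers.setdefault(topic, []).append(callback)

    def publish(self, topic, event, _from_peer=False):
        # Deliver to consumers attached to this broker.
        for callback in self.local_consumers.get(topic, []):
            callback(event)
        # Federation: forward once to peers so remote consumers also see it.
        if not _from_peer:
            for peer in self.peers:
                peer.publish(topic, event, _from_peer=True)

home, away = Broker("home"), Broker("away")
home.peers.append(away)
away.subscribe("location", lambda e: print("away received:", e))
home.publish("location", {"user": "u1", "cell": "42"})
```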


IEEE Transactions on Nuclear Science | 2006

Bulk Scheduling With the DIANA Scheduler

Ashiq Anjum; Richard McClatchey; Arshad Ali; Ian Willers

Results from the research and development of a Data Intensive and Network Aware (DIANA) scheduling engine, to be used primarily for data-intensive sciences such as physics analysis, are described. In Grid analyses, tasks can involve thousands of computing, data handling and network resources. The central problem in the scheduling of these resources is the coordinated management of computation and data at multiple locations, and not just data replication or movement. However, this can prove to be a rather costly operation, and efficient scheduling can be a challenge if compute and data resources are mapped without considering network costs. We have implemented an adaptive algorithm within the so-called DIANA Scheduler which takes into account data location and size, network performance and computation capability in order to enable efficient global scheduling. DIANA is a performance-aware and economy-guided meta-scheduler. It iteratively allocates each job to the site that is most likely to produce the best performance, while also optimizing the global queue for any remaining jobs. It is therefore equally suitable whether a single job is being submitted or bulk scheduling is being performed. Results indicate that considerable performance improvements can be gained by adopting the DIANA scheduling approach.
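As an illustration of the bulk case, the following sketch greedily allocates each job to the currently cheapest site and inflates that site's queue cost so that later jobs spread out; the cost model, penalty and numbers are assumptions for the sketch, not DIANA's actual algorithm.

```python
# Hypothetical bulk-scheduling loop in the DIANA spirit: each job goes to
# the currently cheapest site, whose queue cost then rises so that later
# jobs spread out. Cost model and values are assumptions for this sketch.

def schedule_bulk(jobs, sites, queue_penalty=0.1):
    """Greedily place each job on the lowest-cost site, updating queue load."""
    placement = {}
    for job in jobs:
        best = min(sites, key=lambda s: s["base_cost"] + queue_penalty * s["queued"])
        best["queued"] += 1
        placement[job] = best["name"]
    return placement

sites = [
    {"name": "site-A", "base_cost": 0.3, "queued": 0},
    {"name": "site-B", "base_cost": 0.4, "queued": 0},
]
print(schedule_bulk([f"job-{i}" for i in range(5)], sites))
```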

Collaboration


Dive into Ashiq Anjum's collaborations.

Top Co-Authors

Richard McClatchey, University of the West of England
M. Thomas, California Institute of Technology
Conrad Steenberg, California Institute of Technology
Harvey B Newman, California Institute of Technology
J. Bunn, California Institute of Technology
Arshad Ali, National University of Sciences and Technology
Irfan Habib, University of the West of England