Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Kamran Munir is active.

Publication


Featured research published by Kamran Munir.


Advanced Engineering Informatics | 2016

Big Data in the construction industry

Muhammad Bilal; Lukumon O. Oyedele; Junaid Qadir; Kamran Munir; Saheed O. Ajayi; Olugbenga O. Akinade; Hakeem A. Owolabi; Hafiz A. Alaka; Maruf Pasha

Existing works for Big Data Analytics/Engineering in the construction industry are discussed. It is highlighted that the adoption of Big Data is still at a nascent stage. Opportunities to employ Big Data technologies in construction sub-domains are highlighted. Future works for Big Data technologies are presented. Pitfalls of Big Data technologies in the construction industry are also pointed out.

The ability to process large amounts of data and to extract useful insights from data has revolutionised society. This phenomenon, dubbed Big Data, has applications for a wide assortment of industries, including the construction industry. The construction industry already deals with large volumes of heterogeneous data, which is expected to increase exponentially as technologies such as sensor networks and the Internet of Things are commoditised. In this paper, we present a detailed survey of the literature investigating the application of Big Data techniques in the construction industry. We reviewed related works published in the databases of the American Society of Civil Engineers (ASCE), the Institute of Electrical and Electronics Engineers (IEEE), the Association for Computing Machinery (ACM), and the Elsevier ScienceDirect digital library. While the application of data analytics in the construction industry is not new, the adoption of Big Data technologies in this industry remains at a nascent stage and lags behind the broad uptake of these technologies in other fields. To the best of our knowledge, there is currently no comprehensive survey of Big Data techniques in the context of the construction industry. This paper fills that void and presents a wide-ranging interdisciplinary review of the literature of fields such as statistics, data mining and warehousing, machine learning, and Big Data Analytics in the context of the construction industry. We discuss the current state of adoption of Big Data in the construction industry and the future potential of such technologies across the multiple domain-specific sub-areas of the construction industry. We also propose open issues and directions for future work, along with potential pitfalls associated with Big Data adoption in the industry.


international database engineering and applications symposium | 2007

The Requirements for Ontologies in Medical Data Integration: A Case Study

Ashiq Anjum; Peter Bloodsworth; Andrew Branson; Tamas Hauer; Richard McClatchey; Kamran Munir; Dmitry Rogulin; Jetendr Shamdasani

Evidence-based medicine is critically dependent on three sources of information: a medical knowledge base, the patient's medical record and knowledge of available resources, including, where appropriate, clinical protocols. Patient data is often scattered in a variety of databases and may, in a distributed model, be held across several disparate repositories. Consequently, addressing the needs of an evidence-based medicine community presents issues of biomedical data integration, clinical interpretation and knowledge management. This paper outlines how the Health-e-Child project has approached the challenge of requirements specification for (bio-)medical data integration, from the level of cellular data, through disease, to that of patient and population. The approach is illuminated through the requirements elicitation and analysis of Juvenile Idiopathic Arthritis (JIA), one of three diseases being studied in the EC-funded Health-e-Child project.


Knowledge Based Systems | 2012

Ontology-driven relational query formulation using the semantic and assertional capabilities of OWL-DL

Kamran Munir; Mohammed Odeh; Richard McClatchey

This work investigates the extent to which domain knowledge, expressed in a domain ontology, can assist end-users in formulating relational queries that can be executed over a complex relational database. In this regard, an ontology-driven query formulation architectural framework, namely OntoQF, has been devised that implements a two-phased approach: the pre-processing and translation phases. In the pre-processing phase, a new database-to-ontology transformation approach has been synthesised in which the domain ontology is populated and enriched with problem domain concepts and semantic relationships specified using OWL-DL. Once the domain ontology has been formulated, end-users can write sophisticated ontology-based queries that are then translated, in the translation phase, into the corresponding relational query statements. In order to validate the correctness of translating single or multiple OWL-DL constructs into their corresponding relational ones, a set of test cases has been derived from the medical domain. Our results demonstrate that the OntoQF framework enriches the domain ontology and that its associated algorithms drive the process of relational query formulation without requiring transactional data to be replicated into the domain ontology or knowledge of the underlying database schema.
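
As a rough, hypothetical illustration of the kind of ontology-to-SQL translation such a framework performs (the concept names, table names, and mappings below are invented for illustration and are not taken from the paper), an ontology-level query over a concept with a property restriction can be rewritten into a relational query:

    # Illustrative sketch only: a toy mapping from ontology concepts/properties to a
    # relational schema, and a translator that turns an ontology-level query into SQL.
    # All names (Patient, hasDiagnosis, etc.) are hypothetical examples.

    CONCEPT_TO_TABLE = {
        "Patient": "patients",
        "Diagnosis": "diagnoses",
    }

    PROPERTY_TO_JOIN = {
        # ontology object property -> (join table, join condition, filter column)
        "hasDiagnosis": ("diagnoses", "patients.id = diagnoses.patient_id", "diagnoses.code"),
    }

    def ontology_query_to_sql(concept, property_filters):
        """Translate 'all instances of <concept> with <property> = <value>' into SQL."""
        base_table = CONCEPT_TO_TABLE[concept]
        joins, where = [], []
        for prop, value in property_filters.items():
            table, condition, column = PROPERTY_TO_JOIN[prop]
            joins.append(f"JOIN {table} ON {condition}")
            where.append(f"{column} = '{value}'")
        sql = f"SELECT {base_table}.* FROM {base_table}"
        if joins:
            sql += " " + " ".join(joins)
        if where:
            sql += " WHERE " + " AND ".join(where)
        return sql

    # Example: ontology-level query "Patient and hasDiagnosis value JIA"
    print(ontology_query_to_sql("Patient", {"hasDiagnosis": "JIA"}))

The point of the sketch is only that the end-user writes the query in ontology terms, while the mapping supplies the joins and filters over the underlying schema.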


International Journal of Medical Informatics | 2013

Providing traceability for neuroimaging analyses.

Richard McClatchey; Andrew Branson; Ashiq Anjum; Peter Bloodsworth; Irfan Habib; Kamran Munir; Jetendr Shamdasani; Kamran Soomro

INTRODUCTION With the increasingly digital nature of biomedical data and as the complexity of analyses in medical research increases, the need for accurate information capture, traceability and accessibility has become crucial to medical researchers in the pursuance of their research goals. Grid- or Cloud-based technologies, often based on so-called Service Oriented Architectures (SOA), are increasingly being seen as viable solutions for managing distributed data and algorithms in the bio-medical domain. For neuroscientific analyses, especially those centred on complex image analysis, traceability of processes and datasets is essential, but up to now this has not been captured in a manner that facilitates collaborative study. PURPOSE AND METHOD Few examples exist of deployed medical systems based on Grids that provide the traceability of research data needed to facilitate complex analyses, and none have been evaluated in practice. Over the past decade, we have been working with mammographers, paediatricians and neuroscientists in three generations of projects to provide the data management and provenance services now required for 21st century medical research. This paper outlines the findings of a requirements study and a resulting system architecture for the production of services to support neuroscientific studies of biomarkers for Alzheimer's disease. RESULTS The paper proposes a software infrastructure and services that provide the foundation for such support. It introduces the use of the CRISTAL software to provide provenance management as one of a number of services delivered on a SOA, deployed to manage neuroimaging projects that have been studying biomarkers for Alzheimer's disease. CONCLUSIONS In the neuGRID and N4U projects a Provenance Service has been delivered that captures and reconstructs the workflow information needed to facilitate researchers in conducting neuroimaging analyses. The software enables neuroscientists to track the evolution of workflows and datasets. It also tracks the outcomes of various analyses and provides provenance traceability throughout the lifecycle of their studies. As the Provenance Service has been designed to be generic, it can be applied across the medical domain as a reusable tool for supporting medical researchers, thus providing communities of researchers, for the first time, with the necessary tools to conduct widely distributed collaborative programmes of medical analysis.
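
The underlying idea of provenance capture can be sketched, under invented assumptions, as a log of workflow steps from which the lineage of any result can be reconstructed (this is not the CRISTAL API; the step names, file names and fields are hypothetical):

    # Minimal, hypothetical sketch of provenance capture for a workflow: each step records
    # its inputs, outputs and parameters so the lineage of any result can be reconstructed.

    from dataclasses import dataclass, field

    @dataclass
    class Step:
        name: str
        inputs: list
        outputs: list
        parameters: dict = field(default_factory=dict)

    class ProvenanceLog:
        def __init__(self):
            self.steps = []

        def record(self, step):
            self.steps.append(step)

        def lineage(self, artefact):
            """Return the chain of steps that led to the given artefact."""
            chain = []
            targets = {artefact}
            for step in reversed(self.steps):
                if targets & set(step.outputs):
                    chain.append(step)
                    targets |= set(step.inputs)
            return list(reversed(chain))

    log = ProvenanceLog()
    log.record(Step("skull_strip", ["scan_001.nii"], ["brain_001.nii"], {"tool": "BET"}))
    log.record(Step("segmentation", ["brain_001.nii"], ["gm_001.nii"], {"atlas": "AAL"}))

    # Reconstruct how gm_001.nii was produced.
    for step in log.lineage("gm_001.nii"):
        print(step.name, step.parameters)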


International Journal of Sustainable Building Technology and Urban Development | 2015

Analysis of critical features and evaluation of BIM software: towards a plug-in for construction waste minimization using big data

Muhammad Bilal; Lukumon O. Oyedele; Junaid Qadir; Kamran Munir; Olugbenga O. Akinade; Saheed O. Ajayi; Hafiz A. Alaka; Hakeem A. Owolabi

The overall aim of this study is to investigate the potential of Building Information Modelling (BIM) for construction waste minimization. We evaluated the leading BIM design software products and concluded that none of them currently supports construction waste minimization. This motivates the development of a plug-in for predicting and minimizing construction waste. After a rigorous literature review and four focus group interviews (FGIs), 12 imperative BIM factors were identified that should be considered for predicting and designing out construction waste. These factors were categorized into four layers, namely the BIM core features layer, the BIM auxiliary features layer, the waste management criteria layer, and the application layer. Further, a process to carry out BIM-enabled building waste analysis (BWA) is proposed. We have also investigated the usage of big data technologies in the context of waste minimization. We highlight that big data technologies are inherently suitable for BIM due to their support for storing and processing large datasets. In particular, graph-based representation, analysis, and visualization can be employed to advance the state of the art in BIM technology for construction waste minimization.
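
To illustrate the graph-based representation mentioned above, the following hypothetical sketch models building elements as graph nodes with material quantities and estimates waste over a connected assembly (the waste rates, element names and quantities are invented, not taken from the study):

    # Hypothetical sketch of a graph-based view of a BIM model for waste estimation:
    # nodes are building elements with material quantities, edges are adjacency/containment,
    # and waste is estimated with illustrative (made-up) waste rates per material.

    WASTE_RATE = {"concrete": 0.05, "timber": 0.10, "plasterboard": 0.22}  # fraction wasted

    elements = {
        "wall_01":  {"material": "concrete",     "quantity_kg": 1200},
        "stud_07":  {"material": "timber",       "quantity_kg": 80},
        "board_13": {"material": "plasterboard", "quantity_kg": 150},
    }

    # simple adjacency: which elements are attached to which
    edges = {"wall_01": ["stud_07"], "stud_07": ["board_13"], "board_13": []}

    def estimated_waste(element_ids):
        total = 0.0
        for eid in element_ids:
            elem = elements[eid]
            total += elem["quantity_kg"] * WASTE_RATE[elem["material"]]
        return total

    def connected(start):
        """Traverse the element graph from a starting element (e.g. a wall assembly)."""
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(edges[node])
        return seen

    print(f"Estimated waste for wall assembly: {estimated_waste(connected('wall_01')):.1f} kg")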


Future Generation Computer Systems | 2016

Cloud Market Maker

Barkha Javed; Peter Bloodsworth; Raihan ur Rasool; Kamran Munir; Omer Farooq Rana

Cloud providers commonly incur heavy upfront set-up costs which remain almost constant whether they serve a single customer or many. In order to generate a return on this investment, a suitable pricing strategy is required by providers. Established industries such as the airlines employ dynamic pricing to maximize their revenues. In order to increase their resource utilization rates, cloud providers could also use dynamic pricing for their services. At present, however, most providers use static schemes for pricing their resources. This work presents a new dynamic pricing mechanism for cloud providers. Furthermore, at present no platform exists that provides a dynamic, unified view of the different cloud offerings in real time. Due to a rapidly changing landscape and limited knowledge of the cloud marketplace, consumers can often end up choosing a cloud provider that is more expensive or does not give them what they really need. This is because some providers spend significantly on advertising their services online. In order to assist cloud customers in the selection of a suitable resource and cloud providers in implementing dynamic pricing, this work describes an automated dynamic pricing marketplace and a decision support system for cloud users. We present a multi-agent, multi-auction based system through which such services are delivered. An evaluation has been carried out to determine how effectively the Cloud Market Maker selects resources, dynamically adjusts prices for cloud users, and how suitable dynamic pricing is for the cloud environment. This paper presents Cloud Market Maker (CMM), a marketplace for cloud users. The system provides a dynamic pricing marketplace for providers to maximize their revenue. It provides users with decision support when choosing a cloud resource. It employs a multi-agent, multi-auction approach to create an automated marketplace for cloud users, which was not possible with other existing systems.
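
As a hedged sketch of the dynamic pricing and auction idea (not the CMM implementation; the provider names, prices, and pricing factor are invented), a single reverse-auction round might look like this:

    # Toy sketch of one reverse auction round: provider agents price a resource dynamically
    # based on their current utilisation, and the user is matched to the cheapest offer.

    class ProviderAgent:
        def __init__(self, name, base_price, utilisation):
            self.name = name
            self.base_price = base_price       # static list price per CPU-hour
            self.utilisation = utilisation     # 0.0 (idle) .. 1.0 (full)

        def bid(self, cpus):
            # Dynamic pricing: discount idle capacity, add a premium when nearly full.
            factor = 0.7 + 0.6 * self.utilisation
            return round(self.base_price * factor * cpus, 4)

    def run_auction(providers, cpus):
        bids = {p.name: p.bid(cpus) for p in providers if p.utilisation < 1.0}
        winner = min(bids, key=bids.get)
        return winner, bids[winner], bids

    providers = [
        ProviderAgent("provider_a", base_price=0.10, utilisation=0.2),
        ProviderAgent("provider_b", base_price=0.08, utilisation=0.9),
        ProviderAgent("provider_c", base_price=0.12, utilisation=0.5),
    ]

    winner, price, all_bids = run_auction(providers, cpus=4)
    print(all_bids)
    print(f"Request for 4 CPUs awarded to {winner} at {price} per hour")

In this toy round the lightly loaded provider undercuts the nominally cheaper but heavily utilised one, which is the behaviour dynamic pricing is meant to encourage.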


Journal of Systems and Information Technology | 2014

Provision of an integrated data analysis platform for computational neuroscience experiments

Kamran Munir; Saad Liaquat Kiani; Khawar Hasham; Richard McClatchey; Andrew Branson; Jetendr Shamdasani

Purpose – The purpose of this paper is to provide an integrated analysis base to facilitate computational neuroscience experiments, following a user-led approach to provide access to the integrated neuroscience data and to enable the analyses demanded by the biomedical research community. Design/methodology/approach – The design and development of the N4U analysis base and related information services addresses the existing research and practical challenges by offering an integrated medical data analysis environment with the necessary building blocks for neuroscientists to optimally exploit neuroscience workflows, large image data sets and algorithms to conduct analyses. Findings – The provision of an integrated e-science environment of computational neuroimaging can enhance the prospects, speed and utility of the data analysis process for neurodegenerative diseases. Originality/value – The N4U analysis base enables conducting biomedical data analyses by indexing and interlinking the neuroimaging and clin...
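
A minimal, hypothetical sketch of the indexing and interlinking idea, assuming an invented metadata schema rather than the actual N4U information services, could query an in-memory index of imaging datasets to assemble a cohort for analysis:

    # Hypothetical sketch: a small in-memory index over imaging datasets and clinical
    # attributes that an analysis environment could query to assemble a cohort.
    # Field names and values are invented for illustration.

    datasets = [
        {"id": "scan_101", "modality": "MRI", "subject": "s01", "diagnosis": "AD",  "age": 71},
        {"id": "scan_102", "modality": "MRI", "subject": "s02", "diagnosis": "MCI", "age": 66},
        {"id": "scan_103", "modality": "PET", "subject": "s01", "diagnosis": "AD",  "age": 71},
    ]

    def query(index, **criteria):
        """Return dataset records matching every given attribute."""
        return [d for d in index if all(d.get(k) == v for k, v in criteria.items())]

    # Assemble an analysis cohort: all MRI scans of subjects diagnosed with AD.
    cohort = query(datasets, modality="MRI", diagnosis="AD")
    print([d["id"] for d in cohort])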


international database engineering and applications symposium | 2008

Ontology assisted query reformulation using the semantic and assertion capabilities of OWL-DL ontologies

Kamran Munir; Mohammed Odeh; Richard McClatchey

End users of recent biomedical information systems are often unaware of the storage structure and access mechanisms of the underlying data sources and can require simplified mechanisms for writing domain-specific complex queries. This research aims to assist users and their applications in formulating queries without requiring complete knowledge of the information structure of the underlying data sources. To achieve this, query reformulation techniques and algorithms have been developed that can interpret ontology-based search criteria and associated domain knowledge in order to reformulate a relational query. These query reformulation algorithms exploit the semantic relationships and assertion capabilities of OWL-DL based domain ontologies. In this paper, this approach is applied to the integrated database schema of the EU-funded Health-e-Child (HeC) project with the aim of providing ontology-assisted query reformulation techniques to simplify the global access needed to millions of medical records across the UK and Europe.
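
One reformulation step can be sketched under invented assumptions (the ontology hierarchy and table name below are illustrative, not the HeC schema): a search concept is expanded with its subclasses from the domain ontology so the relational query also retrieves records stored under more specific terms.

    # Illustrative sketch of subclass expansion during query reformulation.
    # The hierarchy and table are hypothetical examples.

    SUBCLASSES = {
        "Arthritis": ["JuvenileIdiopathicArthritis", "RheumatoidArthritis"],
        "JuvenileIdiopathicArthritis": ["OligoarticularJIA", "PolyarticularJIA"],
    }

    def expand(concept):
        """Collect a concept and all of its (transitive) subclasses."""
        terms, stack = set(), [concept]
        while stack:
            term = stack.pop()
            if term not in terms:
                terms.add(term)
                stack.extend(SUBCLASSES.get(term, []))
        return terms

    def reformulate(concept):
        terms = sorted(expand(concept))
        placeholders = ", ".join(f"'{t}'" for t in terms)
        return f"SELECT * FROM patient_records WHERE diagnosis IN ({placeholders})"

    print(reformulate("Arthritis"))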


Neurocomputing | 2013

Intelligent grid enabled services for neuroimaging analysis

Richard McClatchey; Irfan Habib; Ashiq Anjum; Kamran Munir; Andrew Branson; Peter Bloodsworth; Saad Liaquat Kiani

This paper reports our work in the context of the neuGRID project on the development of intelligent services for a robust and efficient neuroimaging analysis environment. neuGRID is an EC-funded project driven by the needs of the Alzheimer's disease research community that aims to facilitate the collection and archiving of large amounts of imaging data coupled with a set of services and algorithms. By taking Alzheimer's disease as an exemplar, the neuGRID project has developed a set of intelligent services and a Grid infrastructure to enable the European neuroscience community to carry out the research required for the study of degenerative brain diseases. We have investigated the use of machine learning approaches, especially evolutionary multi-objective meta-heuristics, for optimising scientific analysis on distributed infrastructures. The salient features of the services and the functionality of a planning and execution architecture based on evolutionary multi-objective meta-heuristics to achieve analysis efficiency are presented. We also describe implementation details of the services that will form an intelligent analysis environment and present results on the optimisation that has been achieved as a result of this investigation.
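
The multi-objective aspect can be illustrated with a minimal, hypothetical sketch that keeps only the non-dominated (Pareto-optimal) candidate schedules when comparing estimated makespan against cost (the candidate plans and numbers are invented and are not results from the paper):

    # Simplified sketch of the multi-objective idea: candidate schedules for an analysis
    # are compared on two objectives (estimated makespan and cost), and only the
    # non-dominated (Pareto-optimal) ones are kept.

    candidates = {
        "all_on_fast_nodes":  {"makespan_h": 2.0, "cost": 40.0},
        "all_on_cheap_nodes": {"makespan_h": 6.0, "cost": 12.0},
        "balanced_split":     {"makespan_h": 3.5, "cost": 20.0},
        "poor_plan":          {"makespan_h": 6.5, "cost": 25.0},
    }

    def dominates(a, b):
        """True if plan a is no worse than b on both objectives and better on at least one."""
        return (a["makespan_h"] <= b["makespan_h"] and a["cost"] <= b["cost"]
                and (a["makespan_h"] < b["makespan_h"] or a["cost"] < b["cost"]))

    pareto_front = [
        name for name, obj in candidates.items()
        if not any(dominates(other, obj) for other_name, other in candidates.items()
                   if other_name != name)
    ]
    print(pareto_front)

An evolutionary meta-heuristic, roughly speaking, iterates this kind of comparison over generations of candidate plans rather than over a fixed list.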


International Journal of Sustainable Building Technology and Urban Development | 2016

Evaluation criteria for construction waste management tools: towards a holistic BIM framework

Olugbenga O. Akinade; Lukumon O. Oyedele; Kamran Munir; Muhammad Bilal; Saheed O. Ajayi; Hakeem A. Owolabi; Hafiz A. Alaka; Sururah A. Bello

This study identifies evaluation criteria with the goal of appraising the performance of existing construction waste management tools and employing the results in the development of a holistic building information modelling (BIM) framework for construction waste management. Based on the literature, this paper identifies 32 construction waste management tools in five categories: (a) waste management plan templates and guides, (b) waste data collection and audit tools, (c) waste quantification models, (d) waste prediction tools, and (e) geographic information system (GIS)-enabled waste tools. After reviewing these tools and conducting four focus-group interviews (FGIs), the findings revealed six categories of evaluation criteria: (a) waste prediction; (b) waste data; (c) commercial and procurement; (d) BIM; (e) design; and (f) technological. The performance of the tools is assessed using the evaluation criteria and the results reveal that the existing tools are not robust enough to tackle construction...
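
As a hedged illustration of appraising tools against such criteria categories (the weights and scores below are invented and do not reproduce the study's assessment), a simple weighted scoring could rank the tools:

    # Hypothetical sketch: each tool is scored 0-5 per criteria category and ranked
    # by a weighted sum. Weights and scores are illustrative only.

    criteria_weights = {
        "waste_prediction": 0.25, "waste_data": 0.20, "commercial_procurement": 0.15,
        "bim": 0.20, "design": 0.10, "technological": 0.10,
    }

    tool_scores = {
        "waste_plan_template": {"waste_prediction": 1, "waste_data": 3, "commercial_procurement": 2,
                                "bim": 0, "design": 1, "technological": 1},
        "gis_waste_tool":      {"waste_prediction": 2, "waste_data": 4, "commercial_procurement": 1,
                                "bim": 0, "design": 2, "technological": 3},
    }

    def weighted_score(scores):
        return sum(criteria_weights[c] * s for c, s in scores.items())

    for tool, scores in sorted(tool_scores.items(), key=lambda kv: -weighted_score(kv[1])):
        print(f"{tool}: {weighted_score(scores):.2f}")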

Collaboration


Dive into Kamran Munir's collaboration.

Top Co-Authors

Richard McClatchey, University of the West of England
Jetendr Shamdasani, University of the West of England
Andrew Branson, University of the West of England
Peter Bloodsworth, University of the West of England
Khawar Hasham, University of the West of England
Mohammed Odeh, University of the West of England
Hafiz A. Alaka, University of the West of England
Hakeem A. Owolabi, University of the West of England
Hanene Boussi Rahmouni, University of the West of England