
Publication


Featured research published by Kevin McAreavey.


International Journal of Semantic Computing | 2011

Measuring Inconsistency in a Network Intrusion Detection Rule Set Based on Snort

Kevin McAreavey; Weiru Liu; Paul C. Miller; Kedian Mu

In this preliminary study, we investigate how inconsistency in a network intrusion detection rule set can be measured. To achieve this, we first examine the structure of these rules which are based on Snort and incorporate regular expression (Regex) pattern matching. We then identify primitive elements in these rules in order to translate the rules into their (equivalent) logical forms and to establish connections between them. Additional rules from background knowledge are also introduced to make the correlations among rules more explicit. We measure the degree of inconsistency in formulae of such a rule set (using the Scoring function, Shapley inconsistency values and Blame measure for prioritized knowledge) and compare the informativeness of these measures. Finally, we propose a new measure of inconsistency for prioritized knowledge which incorporates the normalized number of atoms in a language involved in inconsistency to provide a deeper inspection of inconsistent formulae. We conclude that such measures are useful for the network intrusion domain assuming that introducing expert knowledge for correlation of rules is feasible.
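The Shapley inconsistency values mentioned in this abstract can be illustrated with a small sketch. Assuming the minimal inconsistent subsets (MISes) of a rule set are already known, the MI-based Shapley value of Hunter and Konieczny assigns each formula a 1/|M| share of the blame for every MIS M containing it. The rule names and conflicts below are invented for illustration, not taken from the paper.

```python
from fractions import Fraction

def shapley_inconsistency(mises):
    """MI-based Shapley inconsistency value: each formula accumulates
    1/|M| for every minimal inconsistent subset M it belongs to."""
    values = {}
    for mis in mises:
        share = Fraction(1, len(mis))
        for formula in mis:
            values[formula] = values.get(formula, Fraction(0)) + share
    return values

# Hypothetical example: rule r1 conflicts with r2, and separately with r3.
mises = [frozenset({"r1", "r2"}), frozenset({"r1", "r3"})]
values = shapley_inconsistency(mises)
# r1 participates in both conflicts, so it carries the most blame.
```

A useful sanity check on this measure is that the values sum to the number of MISes.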


International Journal of Approximate Reasoning | 2014

Computational approaches to finding and measuring inconsistency in arbitrary knowledge bases

Kevin McAreavey; Weiru Liu; Paul Ian Miller

There is extensive theoretical work on measures of inconsistency for arbitrary formulae in knowledge bases. Many of these are defined in terms of the set of minimal inconsistent subsets (MISes) of the base. However, few have been implemented or experimentally evaluated to support their viability, since computing all MISes is intractable in the worst case. Fortunately, recent work on a related problem of minimal unsatisfiable sets of clauses (MUSes) offers a viable solution in many cases. In this paper, we begin by drawing connections between MISes and MUSes through algorithms based on a MUS generalization approach and a new optimized MUS transformation approach to finding MISes. We implement these algorithms, along with a selection of existing measures for flat and stratified knowledge bases, in a tool called mimus. We then carry out an extensive experimental evaluation of mimus using randomly generated arbitrary knowledge bases. We conclude that these measures are viable for many large and complex random instances. Moreover, they represent a practical and intuitive tool for inconsistency handling.
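As a rough illustration of what a MIS is (not the optimized MUS-based algorithms the paper implements in mimus), the following naive sketch enumerates minimal inconsistent subsets of a tiny propositional knowledge base by brute force over truth assignments:

```python
from itertools import combinations, product

def satisfiable(formulas, atoms):
    """Check satisfiability by enumerating all truth assignments.
    Each formula is a (name, predicate-on-model) pair."""
    return any(
        all(f(dict(zip(atoms, bits))) for _, f in formulas)
        for bits in product([False, True], repeat=len(atoms))
    )

def minimal_inconsistent_subsets(kb, atoms):
    """Enumerate subsets by increasing size; keep the unsatisfiable ones
    that contain no previously found (hence smaller) MIS."""
    mises = []
    for size in range(1, len(kb) + 1):
        for subset in combinations(kb, size):
            if satisfiable(subset, atoms):
                continue
            if not any(set(m) < set(subset) for m in mises):
                mises.append(subset)
    return mises

# Toy knowledge base: {a, not a, a -> b, not b}
kb = [
    ("a",     lambda m: m["a"]),
    ("not a", lambda m: not m["a"]),
    ("a->b",  lambda m: (not m["a"]) or m["b"]),
    ("not b", lambda m: not m["b"]),
]
atoms = ["a", "b"]
mises = minimal_inconsistent_subsets(kb, atoms)
names = [sorted(name for name, _ in mis) for mis in mises]
```

This exponential enumeration only illustrates the definition; it is exactly the intractability the paper's MUS-based approach is designed to avoid.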


Starting AI Researchers' Symposium | 2012

Tools for Finding Inconsistencies in Real-world Logic-based Systems

Kevin McAreavey; Weiru Liu; Paul C. Miller; Chris Meenan

Currently there is extensive theoretical work on inconsistencies in logic-based systems. Recently, algorithms for identifying inconsistent clauses in a single conjunctive formula have demonstrated that practical application of this work is possible. However, these algorithms have not been extended for full knowledge base systems and have not been applied to real-world knowledge. To address these issues, we propose a new algorithm for finding the inconsistencies in a knowledge base using existing algorithms for finding inconsistent clauses in a formula. An implementation of this algorithm is then presented as an automated tool for finding inconsistencies in a knowledge base and measuring the inconsistency of formulae. Finally, we look at a case study of a network security rule set for exploit detection (QRadar) and suggest how these automated tools can be applied.


International Conference on Agents and Artificial Intelligence | 2016

Risk-aware Planning in BDI Agents

Ronan Killough; Kim Bauters; Kevin McAreavey; Weiru Liu; Jun Hong

The ability of an autonomous agent to select rational actions is vital in enabling it to achieve its goals. To do so effectively in a high-stakes setting, the agent must be capable of considering the risk and potential reward of both immediate and future actions. In this paper we provide a novel method for calculating risk alongside utility in online planning algorithms. We integrate such a risk-aware planner with a BDI agent, allowing us to build agents that can set their risk aversion levels dynamically based on their changing beliefs about the environment. To guide the design of a risk-aware agent we propose a number of principles which such an agent should adhere to and show how our proposed framework satisfies these principles. Finally, we evaluate our approach and demonstrate that a dynamically risk-averse agent is capable of achieving a higher success rate than an agent that ignores risk, while obtaining a higher utility than an agent with a static risk attitude.
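The idea of weighing risk alongside utility can be sketched as follows. This is an illustrative mean-minus-risk scoring rule with an adjustable risk-aversion level, not the paper's actual online planner; the action names and payoffs are invented.

```python
from math import sqrt

def expected_utility(outcomes):
    """Outcomes are (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def risk(outcomes):
    """Standard deviation of utility, used here as a simple risk proxy."""
    mean = expected_utility(outcomes)
    return sqrt(sum(p * (u - mean) ** 2 for p, u in outcomes))

def best_action(actions, lam):
    """Pick the action maximizing E[u] - lam * risk, where lam is the
    agent's (possibly belief-dependent) risk-aversion level."""
    return max(actions, key=lambda a: expected_utility(actions[a]) - lam * risk(actions[a]))

actions = {
    "safe":  [(1.0, 5.0)],               # guaranteed payoff of 5
    "risky": [(0.5, 12.0), (0.5, 0.0)],  # expected 6, but high variance
}
# A risk-neutral agent (lam=0) gambles; a risk-averse one (lam=0.5) plays safe.
```

Raising lam dynamically, as the agent's beliefs about the environment worsen, is what makes the choice flip from "risky" to "safe".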


4th International Workshop on Combinations of Intelligent Methods and Applications (CIMA14) | 2016

Probabilistic Planning in AgentSpeak using the POMDP framework

Kim Bauters; Kevin McAreavey; Jun Hong; Yingke Chen; Weiru Liu; Lluís Godo; Carles Sierra

AgentSpeak is a logic-based programming language, based on the Belief-Desire-Intention paradigm, suitable for building complex agent-based systems. To limit the computational complexity, agents in AgentSpeak rely on a plan library to reduce the planning problem to the much simpler problem of plan selection. However, such a plan library is often inadequate when an agent is situated in an uncertain environment. In this work, we propose the AgentSpeak+ framework, which extends AgentSpeak with a mechanism for probabilistic planning. The beliefs of an AgentSpeak+ agent are represented using epistemic states to allow an agent to reason about its uncertain observations and the uncertain effects of its actions. Each epistemic state consists of a POMDP, used to encode the agent's knowledge of the environment, and its associated probability distribution (or belief state). In addition, the POMDP is used to select the optimal actions for achieving a given goal, even when faced with uncertainty.
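The belief-state component of such an epistemic state follows the standard POMDP Bayes filter. The sketch below is a generic textbook update with an invented two-state example, not the AgentSpeak+ implementation: after taking an action and receiving an observation, the belief over states is re-weighted by the transition and observation models and renormalized.

```python
def belief_update(belief, action, obs, T, O):
    """b'(s') is proportional to O(obs | s', action) * sum_s T(s' | s, action) * b(s)."""
    new_belief = {}
    for s2 in belief:
        predicted = sum(T[(s, action)].get(s2, 0.0) * b for s, b in belief.items())
        new_belief[s2] = O[(s2, action)].get(obs, 0.0) * predicted
    total = sum(new_belief.values())
    return {s: v / total for s, v in new_belief.items()}  # normalize

# Two-state example: a door is open or closed; "listen" leaves the state
# unchanged, and the sensor reports the true state 80% of the time.
T = {("open", "listen"): {"open": 1.0},
     ("closed", "listen"): {"closed": 1.0}}
O = {("open", "listen"): {"hear-open": 0.8, "hear-closed": 0.2},
     ("closed", "listen"): {"hear-open": 0.2, "hear-closed": 0.8}}
b = belief_update({"open": 0.5, "closed": 0.5}, "listen", "hear-open", T, O)
```

Starting from a uniform belief, one "hear-open" observation shifts the belief to 0.8/0.2 in favour of the door being open.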


Knowledge and Information Systems | 2017

Context-dependent combination of sensor information in Dempster–Shafer theory for BDI

Sarah Calderwood; Kevin McAreavey; Weiru Liu; Jun Hong

There has been much interest in the belief–desire–intention (BDI) agent-based model for developing scalable intelligent systems, e.g. using the AgentSpeak framework. However, reasoning from sensor information in these large-scale systems remains a significant challenge. For example, agents may be faced with information from heterogeneous sources which is uncertain and incomplete, while the sources themselves may be unreliable or conflicting. In order to derive meaningful conclusions, it is important that such information be correctly modelled and combined. In this paper, we choose to model uncertain sensor information in Dempster–Shafer (DS) theory. Unfortunately, as in other uncertainty theories, simple combination strategies in DS theory are often too restrictive (losing valuable information) or too permissive (resulting in ignorance). For this reason, we investigate how a context-dependent strategy originally defined for possibility theory can be adapted to DS theory. In particular, we use the notion of largely partially maximal consistent subsets (LPMCSes) to characterise the context for when to use Dempster’s original rule of combination and for when to resort to an alternative. To guide this process, we identify existing measures of similarity and conflict for finding LPMCSes along with quality of information heuristics to ensure that LPMCSes are formed around high-quality information. We then propose an intelligent sensor model for integrating this information into the AgentSpeak framework which is responsible for applying evidence propagation to construct compatible information, for performing context-dependent combination and for deriving beliefs for revising an agent’s belief base. Finally, we present a power grid scenario inspired by a real-world case study to demonstrate our work.
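Dempster's original rule of combination, the baseline that the context-dependent strategy selectively falls back on, can be sketched as follows. The sensor mass functions and the fault-diagnosis frame are invented for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: multiply masses of all focal-element pairs, pool
    products on the pairwise intersections, and renormalize away the mass
    that lands on the empty set (the conflict)."""
    combined = {}
    conflict = 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule undefined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Frame of discernment {fault, ok}; focal elements are frozensets.
fault = frozenset({"fault"})
ok = frozenset({"ok"})
either = frozenset({"fault", "ok"})
sensor1 = {fault: 0.6, either: 0.4}   # partially committed to "fault"
sensor2 = {fault: 0.7, ok: 0.3}
m = dempster_combine(sensor1, sensor2)
```

The 0.18 of conflicting mass (sensor1's "fault" against sensor2's "ok") is renormalized away, which is exactly the behaviour the paper argues can be too permissive or too restrictive in highly conflicting settings.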


European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty | 2015

Game-theoretic Resource Allocation with Real-time Probabilistic Surveillance Information

Wenjun Ma; Weiru Liu; Kevin McAreavey

Game-theoretic security resource allocation problems have generated significant interest in the area of designing and developing security systems. These approaches traditionally utilize the Stackelberg game model for security resource scheduling in order to improve the protection of critical assets. The basic assumption in Stackelberg games is that a defender will act first, then an attacker will choose their best response after observing the defender’s strategy commitment (e.g., protecting a specific asset). Thus, it requires an attacker’s full or partial observation of a defender’s strategy. This assumption is unrealistic in real-time threat recognition and prevention. In this paper, we propose a new solution concept (i.e., a method to predict how a game will be played) for deriving the defender’s optimal strategy based on the principle of acceptable costs of minimax regret. Moreover, we demonstrate the advantages of this solution concept by analyzing its properties.
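The minimax-regret principle at the core of the proposed solution concept can be illustrated with a small sketch. The targets and utilities below are invented, and the paper's full model adds the acceptable-cost machinery on top of this basic computation.

```python
def minimax_regret_strategy(utilities):
    """Rows are defender strategies, columns are attacker choices, entries
    are defender utilities. Regret of a strategy in a scenario is its
    shortfall from the best achievable utility in that scenario; the
    defender picks the strategy whose worst-case regret is smallest."""
    strategies = list(utilities)
    n_scenarios = len(next(iter(utilities.values())))
    best = [max(utilities[s][j] for s in strategies) for j in range(n_scenarios)]
    max_regret = {
        s: max(best[j] - utilities[s][j] for j in range(n_scenarios))
        for s in strategies
    }
    return min(strategies, key=lambda s: max_regret[s]), max_regret

utilities = {
    "guard-A": [10, 0],   # defender payoff if the attacker hits A / hits B
    "guard-B": [2, 8],
    "split":   [6, 5],
}
choice, regrets = minimax_regret_strategy(utilities)
```

Neither pure guarding strategy is safe against the wrong guess (worst-case regret 8 each), so the hedged "split" allocation, with worst-case regret 4, is selected without assuming the attacker observes the defender's commitment.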


Database and Expert Systems Applications | 2011

Measuring Inconsistency in Network Intrusion Rules

Kevin McAreavey; Weiru Liu; Paul C. Miller

In this preliminary case study, we investigate how inconsistency in a network intrusion detection rule set can be measured. To achieve this, we first examine the structure of these rules which incorporate regular expression (Regex) pattern matching. We then identify primitive elements in these rules in order to translate the rules into their (equivalent) logical forms and to establish connections between them. Additional rules from background knowledge are also introduced to make the correlations among rules more explicit. Finally, we measure the degree of inconsistency in formulae of such a rule set (using the Scoring function, Shapley inconsistency values and Blame measure for prioritized knowledge) and compare the informativeness of these measures. We conclude that such measures are useful for the network intrusion domain assuming that incorporating domain knowledge for correlation of rules is feasible.


Scalable Uncertainty Management | 2018

A Formal Approach to Embedding First-Principles Planning in BDI Agent Systems

Mengwei Xu; Kim Bauters; Kevin McAreavey; Weiru Liu

The BDI architecture, where agents are modelled based on their beliefs, desires, and intentions, provides a practical approach to developing intelligent agent systems. However, these systems either do not include any capability for first-principles planning (FPP), or they integrate FPP in a rigid and ad-hoc manner that does not define the semantical behaviour. In this paper, we propose a novel operational semantics for incorporating FPP as an intrinsic planning capability to achieve goals in BDI agent systems. To achieve this, we introduce a declarative goal intention to keep track of declarative goals used by FPP and develop a detailed specification of the appropriate operational behaviour when FPP is pursued, succeeded or failed, suspended, or resumed in the BDI agent systems. Furthermore, we prove that BDI agent systems and FPP are theoretically compatible for principled integration in both offline and online planning manner. The practical feasibility of this integration is demonstrated, and we show that the resulting agent framework combines the strengths of both BDI agent systems and FPP, thus substantially improving the performance of BDI agent systems when facing unforeseen situations.
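The integration pattern can be sketched as plan-library selection with a first-principles fallback. Everything below (the goal and plan names, the context functions, the stub planner) is hypothetical; the paper's contribution is the formal operational semantics governing when first-principles planning is pursued, suspended, resumed, or dropped, not this simple control loop.

```python
def select_plan(goal, beliefs, plan_library, first_principles_planner):
    """Try the BDI plan library first; if no library plan is applicable
    under the current beliefs, fall back to planning from first principles."""
    for plan in plan_library.get(goal, []):
        if plan["context"](beliefs):      # applicable library plan found
            return plan["body"]
    return first_principles_planner(goal, beliefs)

# Hypothetical library: one plan for reaching a door, applicable only
# when the door is already visible.
library = {"reach(door)": [
    {"context": lambda b: b.get("door_visible", False), "body": ["walk_to(door)"]},
]}
planner = lambda goal, b: ["explore", "walk_to(door)"]  # stub FPP

plan = select_plan("reach(door)", {"door_visible": False}, library, planner)
```

When the door is not visible, no library plan applies and the stub planner supplies a plan from first principles; with the door visible, the cheaper library plan is used directly.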


Expert Systems with Applications | 2018

Acceptable costs of minimax regret equilibrium: A solution to security games with surveillance-driven probabilistic information

Wenjun Ma; Kevin McAreavey; Weiru Liu; Xudong Luo

We extend the application of security games from offline patrol scheduling to online surveillance-driven resource allocation. An important characteristic of this new domain is that attackers are unable to observe or reliably predict defenders’ strategies. To this end, in this paper we introduce a new solution concept, called acceptable costs of minimax regret equilibrium, which is independent of attackers’ knowledge of defenders. Specifically, we study how a player’s decision making can be influenced by the emotion of regret and their attitude towards loss, formalized by the principle of acceptable costs of minimax regret. We then analyse properties of our solution concept and propose a linear programming formulation. Finally, we prove that our solution concept is robust with respect to small changes in a player’s degree of loss tolerance by a theoretical evaluation and demonstrate its viability for online resource allocation through an experimental evaluation.

Collaboration


Dive into Kevin McAreavey's collaborations.

Top Co-Authors

- Weiru Liu (Queen's University Belfast)
- Jun Hong (Queen's University Belfast)
- Sarah Calderwood (Queen's University Belfast)
- Paul C. Miller (Queen's University Belfast)
- Wenjun Ma (Queen's University Belfast)
- Ronan Killough (Queen's University Belfast)
- Yingke Chen (Queen's University Belfast)
- Xudong Luo (Guangxi Normal University)