George M. Mohay
Queensland University of Technology
Publication
Featured research published by George M. Mohay.
annual computer security applications conference | 2002
Malcolm W. Corney; O. de Vel; Alison Anderson; George M. Mohay
This paper describes an investigation of authorship gender attribution mining from e-mail text documents. We used an extended set of predominantly topic content-free e-mail document features such as style markers, structural characteristics and gender-preferential language features together with a support vector machine learning algorithm. Experiments using a corpus of e-mail documents generated by a large number of authors of both genders gave promising results for author gender categorisation.
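The paper's pipeline rests on extracting topic-independent stylistic features before classification. As a minimal sketch, the following extracts a few style markers of the general kind described (average word length, punctuation rate, rate of assumed gender-preferential terms); the example term list and feature names are hypothetical, and the paper's actual feature set and SVM training step are not reproduced here.

```python
import re

# Assumed example terms; the paper's gender-preferential lexicon is not shown.
GENDER_PREFERENTIAL = {"sorry", "wonderful", "lovely", "so"}

def style_features(text: str) -> dict:
    """Extract a few topic-content-free style markers from an e-mail body."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    n = len(words) or 1
    return {
        "avg_word_len": sum(len(w) for w in words) / n,
        "exclaim_rate": text.count("!") / max(len(text), 1),
        "pref_term_rate": sum(w in GENDER_PREFERENTIAL for w in words) / n,
    }
```

Feature vectors of this form, one per e-mail, would then be fed to a support vector machine for gender categorisation.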
availability, reliability and security | 2008
Mehdi Kiani; Andrew J. Clark; George M. Mohay
The ubiquity of Web applications has led to an increased focus on the development of attacks targeting these applications. One particular type of attack that has recently become prominent is the SQL injection attack. SQL injection attacks can potentially result in unauthorized access to confidential information stored in a backend database. In this paper we describe an anomaly-based approach which utilizes the character distribution of certain sections of HTTP requests to detect previously unseen SQL injection attacks. Our approach requires no user interaction, and no modification of, or access to, either the backend database or the source code of the web application itself. Our practical results suggest that the model proposed in this paper is superior to existing models at detecting SQL injection attacks. We also evaluate the effectiveness of our model at detecting different types of SQL injection attacks.
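The core idea of character-distribution anomaly detection can be sketched as follows: learn a character-frequency profile from benign parameter values, then score new values by their distance from that profile. This is an illustrative simplification under assumed training data; the paper's exact request sections, statistic, and thresholding are not reproduced here.

```python
from collections import Counter

def char_distribution(s: str) -> dict:
    """Relative character frequencies of a request-parameter value."""
    total = len(s) or 1
    return {ch: n / total for ch, n in Counter(s).items()}

def distance(profile: dict, observed: dict) -> float:
    """Squared-difference distance between two character distributions."""
    chars = set(profile) | set(observed)
    return sum((profile.get(c, 0.0) - observed.get(c, 0.0)) ** 2 for c in chars)

# Train an averaged profile on benign parameter values (hypothetical examples).
benign = ["alice", "bob42", "carol_smith"]
profile: dict = {}
for value in benign:
    for ch, p in char_distribution(value).items():
        profile[ch] = profile.get(ch, 0.0) + p / len(benign)
```

An injection payload such as `' OR '1'='1` is dominated by quotes, spaces, and equals signs that rarely occur in benign values, so its distance from the learned profile is noticeably larger.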
Digital Investigation | 2006
Bradley Schatz; George M. Mohay; Andrew J. Clark
Establishing the time at which a particular event happened is a fundamental concern when relating cause and effect in any forensic investigation. Reliance on computer generated timestamps for correlating events is complicated by uncertainty as to clock skew and drift, environmental factors such as location and local time zone offsets, as well as human factors such as clock tampering. Establishing that a particular computer's temporal behaviour was consistent during its operation remains a challenge. The contributions of this paper are both a description of assumptions commonly made regarding the behaviour of clocks in computers, and empirical results demonstrating that real world behaviour diverges from the idealised or assumed behaviour. We present an approach for inferring the temporal behaviour of a particular computer over a range of time by correlating commonly available local machine timestamps with another source of timestamps. We show that a general characterisation of the passage of time may be inferred from an analysis of commonly available browser records.
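Correlating local timestamps against a second timestamp source amounts to fitting a relationship between the two clocks. As a minimal sketch (not the paper's method), a least-squares line through (local, reference) timestamp pairs yields a drift rate and a constant offset for the machine's clock:

```python
def estimate_clock_behaviour(pairs):
    """Least-squares fit of reference = drift * local + offset.

    pairs: list of (local_ts, reference_ts), both in seconds. A drift of
    exactly 1.0 means the local clock runs at true speed; the offset
    captures a constant skew (e.g. a mis-set clock or time zone error).
    """
    n = len(pairs)
    sx = sum(l for l, _ in pairs)
    sy = sum(r for _, r in pairs)
    sxx = sum(l * l for l, _ in pairs)
    sxy = sum(l * r for l, r in pairs)
    drift = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    offset = (sy - drift * sx) / n
    return drift, offset
```

In the forensic setting described, the reference timestamps could come from externally dated records such as cached web pages in browser history, letting an investigator check whether a machine's clock behaved consistently over the period of interest.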
acm symposium on applied computing | 2006
Jonathon Abbott; Jim Bell; Andrew J. Clark; Olivier Y. de Vel; George M. Mohay
The authors have previously developed the ECF (Event Correlation for Forensics) framework for scenario matching in the forensic investigation of activity manifested in digital transactional logs. ECF incorporated a suite of log parsers to reduce event records from heterogeneous logs to a canonical form for lodging in an SQL database. This paper presents work since then, the Auto-ECF system, which represents significant advances on ECF. The paper reports on the development and implementation of the new event abstraction and scenario specification methodology and on the development of the Auto-ECF system which builds on that to achieve the automated recognition of event scenarios. The paper also reports on the evaluation of Auto-ECF using three scenarios including one from the well known DARPA test data.
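Reducing heterogeneous logs to a canonical form is the first step the framework relies on. The sketch below shows the general idea with two hypothetical parsers mapping different log formats onto one shared record shape; the field names and formats are assumptions for illustration, not ECF's actual schema or parser suite.

```python
import re

def parse_syslog(line: str) -> dict:
    """Parse a syslog-style line into a canonical event record."""
    # e.g. "Jan 12 03:04:05 host sshd[42]: Accepted password for root"
    m = re.match(r"(\w{3} +\d+ [\d:]+) (\S+) (\S+?)\[\d+\]: (.*)", line)
    ts, host, proc, msg = m.groups()
    return {"time": ts, "host": host, "source": proc, "event": msg}

def parse_csv_audit(line: str) -> dict:
    """Parse a comma-separated audit record into the same canonical shape."""
    # e.g. "2004-01-12T03:04:06,host,login,failure"
    ts, host, action, outcome = line.split(",")
    return {"time": ts, "host": host, "source": "audit",
            "event": f"{action} {outcome}"}
```

Once every source yields records with the same fields, the events can be lodged in a single SQL table and queried uniformly, which is what makes cross-log scenario matching tractable.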
network and parallel computing | 2008
Ejaz Ahmed; Andrew J. Clark; George M. Mohay
The effects of network attacks may result in abrupt changes in network traffic parameters. The speedy identification of these changes is critical for smooth network operation. This paper illustrates a sequential analysis technique for detecting these unknown abrupt changes in asymmetric network traffic. A novel sliding window based adaptive cumulative sum (CUSUM) algorithm is used to detect the cause of such variations in network traffic. The significance of the proposed algorithm is two-fold: (1) automatic adjustment of the change detection threshold while minimising the false alarm rate, and (2) timely detection of an end to the anomalous traffic. The validity of the proposed technique is investigated by experimentation on simulated data and on 18 months of real network traces collected from a class C darknet. Comparative analysis of the proposed technique with a traditional CUSUM method demonstrates its superior performance with high detection accuracy and low false alarm rate.
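The flavour of a sliding-window CUSUM can be sketched as follows: the reference mean is re-estimated over a window of recent observations rather than fixed in advance, so the detector adapts to the prevailing traffic level. This is a simplification for illustration; the published algorithm also adapts the detection threshold itself and handles end-of-anomaly detection, which this sketch does not reproduce.

```python
from collections import deque

def adaptive_cusum(series, window=20, k=0.5, h=5.0):
    """One-sided CUSUM with a sliding-window reference mean.

    k is the allowance (slack) per sample, h the decision threshold.
    Returns the indices at which a change is flagged.
    """
    history = deque(maxlen=window)  # recent observations for the reference mean
    s, alarms = 0.0, []
    for i, x in enumerate(series):
        mu = sum(history) / len(history) if history else x
        s = max(0.0, s + (x - mu - k))  # accumulate positive deviations only
        if s > h:
            alarms.append(i)
            s = 0.0  # restart the statistic after each detection
        history.append(x)
    return alarms
```

On a series that jumps from a stable low rate to a much higher one, the statistic exceeds the threshold within a sample or two of the change point, while steady traffic never accumulates past the allowance.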
forensics in telecommunications information and multimedia | 2009
Sriram Raghavan; Andrew J. Clark; George M. Mohay
The analysis and value of digital evidence in an investigation has been the domain of discourse in the digital forensic community for several years. While many works have considered different approaches to model digital evidence, a comprehensive understanding of the process of merging different evidence items recovered during a forensic analysis is still a distant dream. With the advent of modern technologies, proactive measures are integral to keeping abreast of all forms of cyber crimes and attacks. This paper motivates the need to formalize the process of analyzing digital evidence from multiple sources simultaneously. In this paper, we present the forensic integration architecture (FIA) which provides a framework for abstracting the evidence source and storage format information from digital evidence and explores the concept of integrating evidence information from multiple sources. FIA identifies evidence information from multiple sources that enables an investigator to build theories to reconstruct the past. FIA is hierarchically composed of multiple layers and adopts a technology independent approach. FIA is also open and extensible, making it simple to adapt to technological changes. We present a case study using a hypothetical car theft case to demonstrate the concepts and illustrate the value they bring to the field.
availability, reliability and security | 2011
Sajal Bhatia; George M. Mohay; Alan Tickle; Ejaz Ahmed
Distributed Denial-of-Service (DDoS) attacks continue to be one of the most pernicious threats to the delivery of services over the Internet. Not only are DDoS attacks present in many guises, they are also continuously evolving as new vulnerabilities are exploited. Hence accurate detection of these attacks still remains a challenging problem and a necessity for ensuring high-end network security. An intrinsic challenge in addressing this problem is to effectively distinguish these Denial-of-Service attacks from similar looking Flash Events (FEs) created by legitimate clients. A considerable overlap between the general characteristics of FEs and DDoS attacks makes it difficult to precisely separate these two classes of Internet activity. In this paper we propose parameters which can be used to explicitly distinguish FEs from DDoS attacks and analyse two real-world publicly available datasets to validate our proposal. Our analysis shows that even though FEs appear very similar to DDoS attacks, there are several subtle dissimilarities which can be exploited to separate these two classes of events.
annual computer security applications conference | 1997
George M. Mohay; Jeremy Zellers
The verification of the authenticity of software by an executing host has become a vital security issue in recent years with the original postulation and subsequent evolution of computer viruses. The CASS (Computer Architecture for Secure Systems) project addresses this issue by incorporating integrity checking at the operating system level. This paper describes three prototype implementations of the architecture, two of these at the kernel level targeting UNIX SVR4.2 and the Mach 3.0 microkernel, with the third, for reasons of generality, involving the implementation of a specialised shell which is then portable across UNIX-style platforms in general. The paper focuses on a description of the former, viz. the kernel-based implementations, and examines the design and implementation issues which had to be addressed in achieving kernel-based integrity checking of executables for the two platforms.
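The essence of integrity checking of executables is comparing a binary's digest against a trusted registry before allowing execution. The sketch below illustrates that check in user-space Python under assumed names; CASS itself performs this inside the kernel (or a specialised shell), which this sketch does not attempt to model.

```python
import hashlib

def file_digest(path: str) -> str:
    """SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_before_exec(path: str, trusted: dict) -> bool:
    """Allow execution only if the binary's digest matches the registry."""
    return file_digest(path) == trusted.get(path)
```

A tampered or virus-infected executable changes the digest, so the check fails and the host can refuse to run it.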
international conference on internet monitoring and protection | 2009
Ejaz Ahmed; Andrew J. Clark; George M. Mohay
When monitoring unsolicited network traffic, automated detection and characterization of abrupt changes in the traffic's statistical properties is important. These abrupt changes can be due either to a single anomalous activity or to multiple anomalous activities taking place at the same time. The start of a new anomalous activity while another anomalous activity is in operation will result in a new change nested within the previous change. Although detection of abrupt changes to identify malicious activities has received considerable attention in the past, automated detection of nested changes has not been addressed. In this paper a dynamic sliding window cumulative sum (CUSUM) algorithm is proposed to automatically identify these nested changes. The novelty of the proposed technique lies in its ability to automatically detect nested changes, without which interesting activities may go undetected, and its effectiveness in identifying both the start and the end of the individual changes. Using an analysis of real network traces, we show that the identified nested changes were indeed due to distinct malicious behaviours taking place in parallel.
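Identifying both the start and the end of a change is the building block that nested detection is layered on. As an illustrative simplification (not the paper's dynamic-window algorithm), the sketch below finds the start of an upward mean shift with a CUSUM against the baseline, and the end with a downward CUSUM against the running mean of the in-change samples:

```python
def change_intervals(series, mu, k=0.5, h=5.0):
    """Report [start, end) index intervals of upward mean shifts from baseline mu."""
    s_up = s_dn = 0.0
    start, level_sum, level_n = None, 0.0, 0
    intervals = []
    for i, x in enumerate(series):
        if start is None:
            # Look for the start of a change: upward CUSUM against the baseline.
            s_up = max(0.0, s_up + (x - mu - k))
            if s_up > h:
                start, s_up, s_dn = i, 0.0, 0.0
                level_sum, level_n = x, 1
        else:
            # Look for the end: downward CUSUM against the elevated level.
            level = level_sum / level_n
            s_dn = max(0.0, s_dn + (level - x - k))
            if s_dn > h:
                intervals.append((start, i))
                start = None
            else:
                level_sum, level_n = level_sum + x, level_n + 1
    if start is not None:
        intervals.append((start, len(series)))
    return intervals
```

Detecting a further upward shift while an interval is still open is what yields a nested change; handling that case automatically is the contribution of the paper's algorithm.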
information security conference | 2010
Ejaz Ahmed; George M. Mohay; Alan Tickle; Sajal Bhatia
High-rate flooding attacks (aka Distributed Denial of Service or DDoS attacks) continue to constitute a pernicious threat within the Internet domain. In this work we demonstrate how using packet source IP addresses coupled with a change-point analysis of the rate of arrival of new IP addresses may be sufficient to detect the onset of a high-rate flooding attack. Importantly, minimizing the number of features to be examined, directly addresses the issue of scalability of the detection process to higher network speeds. Using a proof of concept implementation we have shown how pre-onset IP addresses can be efficiently represented using a bit vector and used to modify a “white list” filter in a firewall as part of the mitigation strategy.
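The bit-vector representation of previously seen source addresses can be sketched as follows. The class and function names here are hypothetical, the address space is shrunk to keep the example small, and the mapping from dotted-quad addresses to indices is omitted; the change-point analysis described in the paper would then run on the per-interval series of new-address counts.

```python
class IPSeenFilter:
    """Bit vector recording which source addresses have been observed.

    The full IPv4 space needs 2**32 bits (512 MiB); a 2**16-entry space is
    used here so the sketch stays small.
    """
    def __init__(self, bits=2 ** 16):
        self.vec = bytearray(bits // 8)

    def seen_before(self, ip_index: int) -> bool:
        """Record ip_index and report whether it had been seen already."""
        byte, bit = divmod(ip_index, 8)
        was = (self.vec[byte] >> bit) & 1
        self.vec[byte] |= 1 << bit
        return bool(was)

def new_ip_count(filt: IPSeenFilter, ip_indices) -> int:
    """Number of previously unseen sources in one observation interval."""
    return sum(0 if filt.seen_before(i) else 1 for i in ip_indices)
```

A sudden spike in this count signals many never-before-seen sources arriving at once, the signature of a high-rate flooding attack, while the set of pre-onset addresses in the filter can seed the "white list" used for mitigation.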