Publication


Featured research published by James M. Hogan.


Computers & Security | 1993

Refereed paper: A chosen plaintext attack on an adaptive arithmetic coding compression algorithm

Helen Bergen; James M. Hogan

The data security provided by an adaptive arithmetic coding compression algorithm is investigated. An analysis is presented of the dependence of the model upon the input text. It is shown that the number of possible states of the model may be greatly reduced by a series of suitably chosen input strings. These form the basis of a successful chosen plaintext attack in which the model is reduced, and a similar model at an interception point adjusted, until decryption is possible. The security is found to have been greatly enhanced by the (fortuitous) effect of some minor implementation details. Security may be improved by regular re-initialization and adjustment of one of the model parameters (the total cumulative frequency). The algorithm provides significant data security, but is vulnerable to a concerted attack.
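
The attack described above hinges on driving the coder's adaptive frequency model into a small, predictable set of states using chosen inputs. The sketch below is a minimal, hypothetical illustration of that idea, not the implementation analysed in the paper: an order-0 adaptive model whose counts are halved when the total cumulative frequency reaches a cap, so that a sufficiently long chosen plaintext largely overwrites the influence of earlier, unknown text.

```python
# Minimal order-0 adaptive frequency model, as used conceptually by adaptive
# arithmetic coders. Hypothetical illustration only: the symbol set, cap and
# halving rule are assumptions, not the implementation studied in the paper.

class AdaptiveModel:
    def __init__(self, symbols, max_total=256):
        self.counts = {s: 1 for s in symbols}   # start with uniform counts
        self.max_total = max_total              # cap on total cumulative frequency

    def update(self, symbol):
        self.counts[symbol] += 1
        if sum(self.counts.values()) >= self.max_total:
            # Halve all counts (keeping them >= 1) when the cap is reached;
            # this is what lets recent input dominate the model state.
            for s in self.counts:
                self.counts[s] = max(1, self.counts[s] // 2)

    def state(self):
        total = sum(self.counts.values())
        return {s: round(c / total, 3) for s, c in self.counts.items()}


# Two models with different unknown histories...
a = AdaptiveModel("abcd")
b = AdaptiveModel("abcd")
for s in "dcbadbcaddbc" * 10:      # model A has seen some prior traffic
    a.update(s)

# ...converge to nearly the same state after a long chosen plaintext,
# which is the property a chosen plaintext attack can exploit.
chosen = "a" * 2000
for s in chosen:
    a.update(s)
    b.update(s)
print(a.state())
print(b.state())
```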


International Conference on Data Mining | 2010

Parallel User Profiling Based on Folksonomy for Large Scaled Recommender Systems: An Implementation of Cascading MapReduce

Huizhi Liang; James M. Hogan; Yue Xu

The large volume of user-created information emerging in Web 2.0, such as tags, reviews, comments and blogs, can be used to profile users' interests and preferences and to make personalized recommendations. To address the scalability problems of current user profiling and recommender systems, this paper proposes a parallel user profiling approach and a scalable recommender system. Current cloud computing techniques, including Hadoop, MapReduce and Cascading, are employed to implement the proposed approaches. The experiments were conducted on Amazon EC2 Elastic MapReduce and S3 with a real-world, large-scale dataset from the Delicious website.
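
The paper implements its profiling as cascaded MapReduce jobs on Hadoop. As a rough, framework-free sketch of the idea (the actual Cascading pipeline and field names are not given in the abstract), the first stage below counts (user, tag) pairs and a second, cascaded stage assembles each user's tag-weight profile.

```python
# Framework-free sketch of two cascaded MapReduce stages for folksonomy-based
# user profiling. Function and field names are illustrative assumptions; the
# paper's actual Cascading/Hadoop pipeline is not described at this level.
from collections import defaultdict

def map_tag_events(events):
    """Stage 1 map: emit ((user, tag), 1) for every tagging event."""
    for user, tag, _item in events:
        yield (user, tag), 1

def reduce_counts(pairs):
    """Stage 1 reduce: sum counts per (user, tag) key."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return totals.items()

def map_user_profiles(counted):
    """Stage 2 map: re-key by user for the cascaded job."""
    for (user, tag), count in counted:
        yield user, (tag, count)

def reduce_user_profiles(pairs):
    """Stage 2 reduce: build a weighted tag vector per user."""
    profiles = defaultdict(dict)
    for user, (tag, count) in pairs:
        profiles[user][tag] = count
    return dict(profiles)

# Toy tagging events: (user, tag, bookmarked item)
events = [
    ("alice", "python", "url1"), ("alice", "python", "url2"),
    ("alice", "hadoop", "url3"), ("bob", "hadoop", "url3"),
]
stage1 = reduce_counts(map_tag_events(events))
stage2 = reduce_user_profiles(map_user_profiles(stage1))
print(stage2)   # {'alice': {'python': 2, 'hadoop': 1}, 'bob': {'hadoop': 1}}
```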


Computational Biology and Chemistry | 2008

Research Article: The cross-species prediction of bacterial promoters using a support vector machine

Michael W. Towsey; Peter Timms; James M. Hogan; Sarah A. Mathews

Due to the degeneracy of the observed binding sites, the in silico prediction of bacterial sigma(70)-like promoters remains a challenging problem. A large number of sigma(70)-like promoters have been biologically identified in only two species, Escherichia coli and Bacillus subtilis. In this paper we investigate the issues that arise when searching for promoters in other species using an ensemble of SVM classifiers trained on E. coli promoters. DNA sequences are represented using a tagged mismatch string kernel. The major benefit of our approach is that it does not require a prior definition of the typical -35 and -10 hexamers. This gives the SVM classifiers the freedom to discover other features relevant to the prediction of promoters. We use our approach to predict sigma(A) promoters in B. subtilis and sigma(66) promoters in Chlamydia trachomatis. We extended the analysis to identify specific regulatory features of gene sets in C. trachomatis having different expression profiles. We found a strong -35 hexamer and TGN/-10 associated with a set of early expressed genes. Our analysis highlights the advantage of using TSS-PREDICT as a starting point for predicting promoters in species where few are known.
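
The abstract describes representing promoter sequences with a tagged mismatch string kernel, in which k-mers carry positional information and inexact matches are tolerated. The sketch below is only an interpretation of that idea, not the kernel defined in the paper: it builds a position-tagged k-mer feature vector, crediting every k-mer within one mismatch of an observed k-mer, and takes an inner product of two such vectors as the kernel value.

```python
# Illustrative position-tagged mismatch feature map for DNA sequences.
# The window length, tag granularity and mismatch radius here are assumptions,
# not the precise kernel defined in the paper.
from collections import Counter

ALPHABET = "ACGT"

def mismatch_neighbours(kmer):
    """All k-mers within Hamming distance 1 of `kmer` (including itself)."""
    neighbours = {kmer}
    for i, base in enumerate(kmer):
        for b in ALPHABET:
            if b != base:
                neighbours.add(kmer[:i] + b + kmer[i + 1:])
    return neighbours

def tagged_mismatch_features(seq, k=4, block=10):
    """Feature vector keyed by (position block, k-mer), with 1-mismatch credit."""
    features = Counter()
    for pos in range(len(seq) - k + 1):
        tag = pos // block                      # coarse positional tag
        for neighbour in mismatch_neighbours(seq[pos:pos + k]):
            features[(tag, neighbour)] += 1
    return features

def kernel(x, y):
    """Inner product of two sparse feature vectors (a string-kernel value)."""
    return sum(v * y.get(f, 0) for f, v in x.items())

s1 = "TTGACAATTAATCATCGAACTAGTTAACTAGTACGCAAGT"   # toy promoter-like sequence
s2 = "TTGACATTTAATCATCGGACTAGTTAACTAGTACGCAAGT"
print(kernel(tagged_mismatch_features(s1), tagged_mismatch_features(s2)))
```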


International Journal of Neural Systems | 2006

The Prediction of Bacterial Transcription Start Sites Using SVMs

Michael W. Towsey; James J. Gordon; James M. Hogan

Identifying promoters is the key to understanding gene expression in bacteria. Promoters lie in tightly constrained positions relative to the transcription start site (TSS). In this paper, we address the problem of predicting transcription start sites in Escherichia coli. Knowing the TSS position, one can then predict the promoter position to within a few base pairs, and vice versa. The accepted method for promoter prediction is to use a pair of position weight matrices (PWMs), which define conserved motifs at the sigma-factor binding site. However, this method is known to result in a large number of false positive predictions, thereby limiting its usefulness to the experimental biologist. We adopt an alternative approach based on the Support Vector Machine (SVM) using a modified mismatch spectrum kernel. Our modifications involve tagging the motifs with their location, and selectively pruning the feature set. We quantify the performance of several SVM models and a PWM model using a performance metric of area under the detection-error tradeoff (DET) curve. SVM models are shown to outperform the PWM on a biologically realistic TSS prediction task. We also describe a more broadly applicable peak scoring technique which reduces the number of false positive predictions, greatly enhancing the utility of our results.
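
The PWM baseline referred to above scores every window of a sequence against a position weight matrix of log-odds values and reports high-scoring positions as putative sigma-factor binding sites. A minimal sketch of that standard scoring step follows; the matrix values are invented for illustration and are not the matrices used in the paper.

```python
# Minimal position weight matrix (PWM) scan, the baseline method the paper
# compares against. The matrix below is a toy -10-box-like motif with made-up
# frequencies, not a matrix from the paper.
import math

BACKGROUND = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}

# Toy base frequencies for a 6-position motif (each column sums to 1).
MOTIF = [
    {"A": 0.05, "C": 0.05, "G": 0.05, "T": 0.85},   # T
    {"A": 0.85, "C": 0.05, "G": 0.05, "T": 0.05},   # A
    {"A": 0.05, "C": 0.05, "G": 0.05, "T": 0.85},   # T
    {"A": 0.70, "C": 0.10, "G": 0.10, "T": 0.10},   # A
    {"A": 0.70, "C": 0.10, "G": 0.10, "T": 0.10},   # A
    {"A": 0.05, "C": 0.05, "G": 0.05, "T": 0.85},   # T
]

# Convert frequencies to log-odds scores against the background.
PWM = [{b: math.log2(col[b] / BACKGROUND[b]) for b in "ACGT"} for col in MOTIF]

def scan(seq):
    """Score every window of the sequence; higher scores = better motif match."""
    w = len(PWM)
    return [(i, sum(PWM[j][seq[i + j]] for j in range(w)))
            for i in range(len(seq) - w + 1)]

hits = scan("GGCTATAATGCC")
best = max(hits, key=lambda h: h[1])
print(best)   # position of the best-scoring window and its log-odds score
```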


Asia-Pacific Software Engineering Conference | 2002

The Java Metrics Reporter - an extensible tool for OO software analysis

Jaspar Cahill; James M. Hogan; Richard Thomas

It has been argued for many years that software engineering lacks the repeatability and well-defined monitoring characteristic of the traditional engineering disciplines. Over time, numerous authors have addressed this issue by proposing a range of software metrics, although it is generally agreed that no one measure is sufficient to capture software quality, and that a well-chosen suite of metrics must be employed. While substantial progress has been made, adoption of metrics has been limited in the software development community, and metrics have long suffered from a lack of comprehensibility. Further, critics have argued that many metrics have been introduced in isolation, with little regard for their relationship to existing measures, and without appropriate validation against a sufficient body of source code. This work introduces the Java Metrics Reporter, a new tool which addresses a number of these concerns in the domain of object-oriented languages. The tool provides integrated tutorial support and benchmarking of user code against professional code bases. Moreover, the architecture allows for adaptation to other languages, and extension to other metrics through a straightforward plug-in approach. The paper provides detailed consideration of the architecture and the metrics selected, together with the aspects of the tool which enhance its usability and assist in the interpretation of metrics. Finally, the paper outlines plans for the further development of the tool, together with its release to the professional and research communities.


Asia-Pacific Software Engineering Conference | 2002

The Real World Software Process

James M. Hogan; Glenn Smith; Richard Thomas

The industry-wide demand for rapid development in concert with greater process maturity has seen many software development firms adopt tightly structured iterative processes. While a number of commercial vendors offer suitable process infrastructure and tool support, the cost of licensing, configuration and staff training may be prohibitive for the small and medium-sized enterprises (SMEs) which dominate the Asia-Pacific software industry. This work addresses these problems through the introduction of the Real World Software Process (RWSP), a freely available, Web-based iterative scheme designed specifically for small teams and organisations. RWSP provides a detailed process description, high-quality document templates - including code review and inspection guidelines - and the integrated tutorial support necessary for successful use by inexperienced developers and teams. In particular, it is intended that the process be readily usable by software houses which at present do not follow a formal process, and that the free RWSP process infrastructure should serve as a vehicle for improving industry standards.


Concurrency and Computation: Practice and Experience | 2011

Biomashups: the new world of exploratory bioinformatics?

James M. Hogan; Jiro Sumitomo; Paul Roe; Felicity Newell

Bioinformatics is dominated by online databases and sophisticated web-accessible tools. As such, it is ideally placed to benefit from the rapid, purpose-specific combination of services achievable via web mashups. The recent introduction of a number of sophisticated frameworks has greatly simplified the mashup creation process, making them accessible to scientists with limited programming expertise. In this paper we investigate the feasibility of mashups as a new approach to bioinformatic experimentation, focusing on an exploratory niche between interactive web usage and robust workflows, and attempting to identify the range of computations for which mashups may be employed. While we treat each of the major frameworks, we illustrate the ideas with a series of examples developed under the Popfly framework.


Briefings in Bioinformatics | 2017

Alignment-free inference of hierarchical and reticulate phylogenomic relationships

Guillaume Bernard; Cheong Xin Chan; Yao-ban Chan; Xin-Yi Chua; Yingnan Cong; James M. Hogan; Stefan Maetschke; Mark A. Ragan

We are amidst an ongoing flood of sequence data arising from the application of high-throughput technologies, and a concomitant fundamental revision in our understanding of how genomes evolve individually and within the biosphere. Workflows for phylogenomic inference must accommodate data that are not only much larger than before, but often more error prone and perhaps misassembled, or not assembled in the first place. Moreover, genomes of microbes, viruses and plasmids evolve not only by tree-like descent with modification but also by incorporating stretches of exogenous DNA. Thus, next-generation phylogenomics must address computational scalability while rethinking the nature of orthogroups, the alignment of multiple sequences and the inference and comparison of trees. New phylogenomic workflows have begun to take shape based on so-called alignment-free (AF) approaches. Here, we review the conceptual foundations of AF phylogenetics for the hierarchical (vertical) and reticulate (lateral) components of genome evolution, focusing on methods based on k-mers. We reflect on what seems to be successful, and on where further development is needed.
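
As a concrete illustration of the k-mer-based AF approach reviewed here, the sketch below computes a simple Jaccard-style distance between the k-mer sets of two sequences. The review covers a range of more sophisticated AF statistics (e.g. D2-family measures), so this is only the simplest representative; such pairwise distances would then feed a tree-building step.

```python
# Simplest representative of a k-mer-based alignment-free (AF) comparison:
# a Jaccard distance between the k-mer sets of two sequences. The review
# discusses many richer AF statistics; this is an illustration only.

def kmers(seq, k=8):
    """Set of all overlapping k-mers in the sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard_distance(s1, s2, k=8):
    """1 - |intersection| / |union| of the two k-mer sets."""
    a, b = kmers(s1, k), kmers(s2, k)
    return 1.0 - len(a & b) / len(a | b)

genome_a = "ATGGCGTACGTTAGCCTAGGATCCGATCGTTAGGCTAACG"
genome_b = "ATGGCGTACGTTAGCCTAGGATCAGATCGTTAGGCTAACG"   # one substitution
genome_c = "TTTTACGCATGCAAGGTTTCCGAGTACCCGGTATTTACGA"   # unrelated sequence

print(jaccard_distance(genome_a, genome_b))   # small distance
print(jaccard_distance(genome_a, genome_c))   # close to 1.0
```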


Australian Software Engineering Conference | 2013

Predicting Fault-Prone Software Modules with Rank Sum Classification

Jaspar Cahill; James M. Hogan; Richard Thomas

The detection and correction of defects remains among the most time-consuming and expensive aspects of software development. Extensive automated testing and code inspections may mitigate their effect, but some code fragments are necessarily more likely to be faulty than others, and automated identification of fault-prone modules helps to focus testing and inspections, thus limiting wasted effort and potentially improving detection rates. However, software metrics data is often extremely noisy, with enormous imbalances in the size of the positive and negative classes. In this work, we present a new approach to predictive modelling of fault proneness in software modules, introducing a new feature representation to overcome some of these issues. This rank sum representation offers improved, or at worst comparable, performance relative to earlier approaches on standard data sets, and readily allows the user to choose an appropriate trade-off between precision and recall to optimise inspection effort for different testing environments. The method is evaluated using the NASA Metrics Data Program (MDP) data sets, and performance is compared with existing studies based on the Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers, and with our own comprehensive evaluation of these methods.
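
The abstract does not spell out the rank sum representation in detail, so the following is only a hypothetical reading of the idea: each software metric is rank-transformed across modules, the ranks are summed per module, and a threshold on the summed rank gives the precision/recall trade-off mentioned above. The paper's actual feature construction and classifier may differ.

```python
# Hypothetical sketch of a rank-sum style fault-proneness score. Metric names,
# the scoring rule and the threshold are illustrative assumptions; they are
# not taken from the paper.

def rank(values):
    """Rank of each value within its column (1 = smallest), ties broken by order."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

# Toy module metrics: (lines of code, cyclomatic complexity, past changes)
modules = {
    "parser.c": (1200, 35, 14),
    "ui.c":     (300,   8,  3),
    "net.c":    (950,  22, 10),
    "utils.c":  (150,   4,  1),
}

names = list(modules)
columns = list(zip(*modules.values()))          # one tuple per metric
ranked = [rank(col) for col in columns]         # rank each metric independently
scores = {n: sum(r[i] for r in ranked) for i, n in enumerate(names)}

threshold = 8   # tune to trade inspection effort (recall) against precision
flagged = [n for n, s in scores.items() if s >= threshold]
print(scores)
print("inspect first:", flagged)
```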


IEEE International Conference on eScience | 2008

BioMashups: The New World of Exploratory Bioinformatics?

Jiro Sumitomo; James M. Hogan; Felicity Newell; Paul Roe

Bioinformatics is dominated by online databases and sophisticated web-accessible tools. As such, it is ideally placed to benefit from the rapid, purpose-specific combination of services and tools achievable via web mashups. The recent introduction of a number of sophisticated frameworks has greatly simplified the mashup creation process, making them accessible to scientists with limited programming expertise. We investigate the feasibility of mashups as a new approach to bioinformatic experimentation, focusing on an exploratory niche between interactive web usage and robust workflows, and attempting to identify the range of computations for which mashups may be employed. While we treat each of the major frameworks, we illustrate the ideas with a series of examples developed under the Popfly framework.

Collaboration


Dive into James M. Hogan's collaborations.

Top Co-Authors

Lawrence Buckingham, Queensland University of Technology
Michael W. Towsey, Queensland University of Technology
Richard Thomas, Queensland University of Technology
Margot Brereton, Queensland University of Technology
Paul Roe, Queensland University of Technology
Shlomo Geva, Queensland University of Technology
D. Johnson, Queensland University of Technology
Markus Rittenbruch, Queensland University of Technology
Timothy Chappell, Queensland University of Technology
Wayne Kelly, Queensland University of Technology