Lawrence Buckingham
Queensland University of Technology
Publications
Featured research published by Lawrence Buckingham.
Journal of Information Technology Education | 2004
Christine S. Bruce; Lawrence Buckingham; John Hynd; Camille A. McMahon; Michael G. Roggenkamp; Ian D. Stoodley
Research Report: The research reported here investigates variation in first year university students' early experiences of learning to program, with a particular focus on revealing differences in how they go about learning to program. A phenomenographic research approach was used to reveal variation in relation to the act of learning to program. Semi-structured interviews were conducted with students who had either completed, or were recently completing, a university level introductory programming subject. The analysis process revealed five different ways in which students go about learning to program in introductory university level units. These are captured in categories of description which capture the critical dimensions of what students learn as well as how they go about learning. Students may go about learning to program by:
• Following – where learning to program is experienced as 'getting through' the unit.
• Coding – where learning to program is experienced as learning to code.
• Understanding and integrating – where learning to program is experienced as learning to write a program through understanding and integrating concepts.
• Problem solving – where learning to program is experienced as learning to do what it takes to solve a problem.
• Participating or enculturation – where learning to program is experienced as discovering what it means to become a programmer.
The mapping of the variation constitutes a framework within which one aspect of the teaching and learning of introductory programming, how students go about it, may be understood. Implications for teaching and learning in introductory university curricula associated with each category are discussed. Recommendations for further research are made.
international conference on conceptual structures | 2015
Xin-Yi Chua; Lawrence Buckingham; James M. Hogan; Pavel S. Novichkov
The advent of Next Generation Sequencing (NGS) technologies has seen explosive growth in genomic datasets, and dense coverage of related organisms, supporting study of subtle, strain-specific variations as a determinant of function. Such data collections present fresh and complex challenges for bioinformatics, those of comparing models of complex relationships across hundreds and even thousands of sequences. Transcriptional Regulatory Network (TRN) structures document the influence of regulatory proteins called Transcription Factors (TFs) on associated Target Genes (TGs). TRNs are routinely inferred from model systems or iterative search, and analysis at these scales requires simultaneous displays of multiple networks well beyond those of existing network visualisation tools [1]. In this paper we describe TRNDiff, an open source system supporting the comparative analysis and visualization of TRNs (and similarly structured data) from many genomes, allowing rapid identification of functional variations within species. The approach is demonstrated through a small scale multiple TRN analysis of the Fur iron-uptake system of Yersinia, suggesting a number of candidate virulence factors; and through a larger study exploiting integration with the RegPrecise database (http://regprecise.lbl.gov; [2]) - a collection of hundreds of manually curated and predicted transcription factor regulons drawn from across the entire spectrum of prokaryotic organisms.
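A minimal sketch of the kind of comparison such a tool supports, not the TRNDiff implementation itself: each regulon is modelled as a mapping from a transcription factor to its set of target genes, and two genomes are compared by set operations on those targets. The gene and strain names below are illustrative only.

```python
# Sketch: compare transcription-factor regulons across genomes.
# Each TRN is modelled as {TF name: set of target genes}; names are illustrative.

def regulon_differences(trn_a, trn_b, tf):
    """Return targets shared by both genomes and those unique to each, for one TF."""
    targets_a = trn_a.get(tf, set())
    targets_b = trn_b.get(tf, set())
    return {
        "shared": targets_a & targets_b,
        "only_a": targets_a - targets_b,
        "only_b": targets_b - targets_a,
    }

# Hypothetical Fur regulons for two strains.
trn_strain1 = {"Fur": {"fhuA", "fyuA", "irp2", "ybtP"}}
trn_strain2 = {"Fur": {"fhuA", "fyuA", "hmuR"}}

print(regulon_differences(trn_strain1, trn_strain2, "Fur"))
```

Functional variation within a species then shows up directly as the non-empty "only" sets, which is the kind of difference a comparative display highlights across many genomes at once.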
international conference on conceptual structures | 2014
Lawrence Buckingham; James M. Hogan
Molecular biology is a scientific discipline which has changed fundamentally in character over the past decade to rely on large scale datasets – public and locally generated – and their computational analysis and annotation. Undergraduate education of biologists must increasingly couple this domain context with a data-driven computational scientific method. Yet modern programming and scripting languages and rich computational environments such as R and MATLAB present significant barriers to those with limited exposure to computer science, and may require substantial tutorial assistance over an extended period if progress is to be made. In this paper we report our experience of undergraduate bioinformatics education using the familiar, ubiquitous spreadsheet environment of Microsoft Excel. We describe QUT.Bio.Excel, a configurable extension implemented as a custom ribbon which supports a rich set of data sources, external tools and interactive processing within the spreadsheet, and we present a range of problems that demonstrate its utility and success in addressing the needs of students over their studies.
pacific-asia conference on knowledge discovery and data mining | 2000
Shlomo Geva; Lawrence Buckingham
We describe a new oblique decision tree induction algorithm. The VQTree algorithm uses Learning Vector Quantization to form a non-parametric model of the training set, and from that obtains a set of hyperplanes which are used as oblique splits in the nodes of a decision tree. We use a set of public data sets to compare VQTree with two existing decision tree induction algorithms, C5.0 and OC1. Our experiments show that VQTree produces compact decision trees with higher accuracy than either C5.0 or OC1 on some datasets.
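The abstract does not give implementation detail, but the core idea of deriving oblique splits from vector quantization can be sketched: the decision boundary between two LVQ prototypes of different classes is the hyperplane equidistant from them, and such a hyperplane can serve as a candidate split at a tree node. The sketch below is illustrative, not the VQTree code.

```python
import numpy as np

def hyperplane_between(prototype_a, prototype_b):
    """Hyperplane w.x + b = 0 equidistant from two prototypes.
    Points closer to prototype_a satisfy w.x + b > 0."""
    w = prototype_a - prototype_b
    midpoint = (prototype_a + prototype_b) / 2.0
    b = -np.dot(w, midpoint)
    return w, b

def oblique_split(X, w, b):
    """Partition sample indices by the oblique test w.x + b > 0."""
    side = X @ w + b > 0
    return np.where(side)[0], np.where(~side)[0]

# Two hypothetical class prototypes learned by LVQ.
p_a = np.array([1.0, 2.0])
p_b = np.array([3.0, 0.0])
w, b = hyperplane_between(p_a, p_b)

X = np.array([[1.0, 1.5], [2.5, 0.5], [0.5, 2.5]])
left, right = oblique_split(X, w, b)
print(left, right)  # indices of samples on each side of the oblique split
```

Because each split is a full linear combination of the features rather than a single-attribute threshold, the resulting trees can be considerably more compact than axis-parallel trees on data whose class boundaries are oblique.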
international conference on e-science | 2017
Lawrence Buckingham; Timothy Chappell; James M. Hogan; Shlomo Geva
Sequence comparison is a fundamental task in computational biology, traditionally dominated by alignment-based methods such as the Smith-Waterman and Needleman-Wunsch algorithms, or by alignment-based heuristics such as BLAST, the ubiquitous Basic Local Alignment Search Tool. For more than a decade researchers have examined a range of alignment-free alternatives to these approaches, citing concerns over scalability in the era of Next Generation Sequencing, the emergence of petascale sequence archives, and a lack of robustness of alignment methods in the face of structural sequence rearrangements. While some of these approaches have proven successful for particular tasks, many continue to exhibit a marked decline in sensitivity as closely related sequence sets diverge. Avoiding the alignment step allows the methods to scale to the challenges of modern sequence collections, but only at the cost of noticeably inferior search. In this paper we re-examine the problem of similarity measures for alignment-free sequence comparison, and introduce a new method which we term Similarity Projection. Similarity Projection offers markedly enhanced sensitivity – comparable to alignment-based methods – while retaining the scalability characteristic of alignment-free approaches. As before, we rely on collections of k-mers: overlapping substrings of the molecular sequence of length k, collected without reference to position. Similarity, however, relies on variants of the Hausdorff set distance, allowing similarity to be scored more effectively to reflect those components which match, while lessening the impact of those which do not. Formally, the algorithm generates a large mutual similarity matrix between sequence pairs based on their component fragments; successive reduction steps yield a final score over the sequences. However, only a small fraction of these underlying comparisons need be performed, and by use of an approximate scheme based on vector quantization, we are able to achieve an order of magnitude improvement in execution time over the naive approach. We evaluate the approach on two large protein collections obtained from UniProtKB, showing that Similarity Projection achieves accuracy rivalling, and at times clearly exceeding, that of BLAST, while exhibiting markedly superior execution speed.
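A minimal, illustrative sketch of the general idea, not the Similarity Projection algorithm itself: decompose each sequence into k-mers, score every k-mer of one sequence against its best match in the other, and reduce those best-match scores to a single sequence-level similarity, in the spirit of a modified Hausdorff distance. The scoring function and parameters below are assumptions chosen for brevity.

```python
def kmers(seq, k):
    """Overlapping substrings of length k, collected without reference to position."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def kmer_similarity(a, b):
    """Fraction of matching characters between two equal-length k-mers."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def sequence_similarity(seq_a, seq_b, k=3):
    """Average best-match k-mer score, symmetrised over both sequences
    (a modified-Hausdorff-style reduction; illustrative only)."""
    ka, kb = kmers(seq_a, k), kmers(seq_b, k)

    def directed(src, dst):
        return sum(max(kmer_similarity(s, d) for d in dst) for s in src) / len(src)

    return (directed(ka, kb) + directed(kb, ka)) / 2.0

print(sequence_similarity("MKTAYIAKQR", "MKTAYLAKQR"))
```

Computed naively, this requires a full k-mer-by-k-mer similarity matrix per sequence pair; the paper's contribution includes avoiding most of those comparisons through a vector quantization based approximation.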
Science & Engineering Faculty | 2015
Michael E. Cholette; Lin Ma; Lawrence Buckingham; Lutfiye Allahmanli; Andrew Bannister; Gang Xie
Engineers and asset managers must often make decisions on how to best allocate limited resources amongst different interrelated activities, including repair, renewal, inspection, and procurement of new assets. The presence of project interdependencies and the lack of sufficient information on the true value of an activity often produce complex problems and leave the decision maker guessing about the quality and robustness of their decision. In this paper, a decision support framework for uncertain interrelated activities is presented. The framework employs a methodology for multi-criteria ranking in the presence of uncertainty, detailing the effect that uncertain valuations may have on the priority of a particular activity. It uses semi-quantitative risk measures that can be tailored to an organisation and enable transparent, simple-to-use uncertainty specification by the decision maker. The framework is then demonstrated on a real world project set from a major Australian utility provider.
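As an illustration of the underlying idea rather than the framework itself, a hedged sketch of ranking activities whose valuations are uncertain: each activity carries only lower and upper bounds on its score, and Monte Carlo sampling shows how often each activity would be ranked first. All activity names, bounds and the uniform sampling assumption are hypothetical.

```python
import random

# Hypothetical activities with (low, high) bounds on a semi-quantitative value score.
activities = {
    "renew_pump_station": (6.0, 9.0),
    "inspect_feeder_main": (5.5, 7.0),
    "procure_spare_transformer": (3.0, 8.0),
}

def rank_stability(activities, trials=10_000, seed=1):
    """Probability that each activity ranks first when its uncertain
    valuation is sampled uniformly within its stated bounds."""
    rng = random.Random(seed)
    wins = {name: 0 for name in activities}
    for _ in range(trials):
        sampled = {n: rng.uniform(lo, hi) for n, (lo, hi) in activities.items()}
        wins[max(sampled, key=sampled.get)] += 1
    return {n: w / trials for n, w in wins.items()}

print(rank_stability(activities))
```

Output of this kind makes explicit how sensitive a priority ordering is to the uncertainty in the valuations, which is the sort of robustness question the framework is designed to answer.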
Archive | 2015
Gang Xie; Lawrence Buckingham; Michael E. Cholette; Lin Ma
The expected number of failures is the essential element in cost analysis for a repairable system in engineering asset management. A renewal process is typically used for modelling a repairable system with perfect repairs, while a nonhomogeneous Poisson process can be used to model a repairable system with minimal repair. An asset system with imperfect repair will be restored to a state somewhere between as bad as old and as good as new. While imperfect repairs are more realistic, it is more challenging to calculate the expected number of failures. In this chapter, we propose an imperfect repairable system model assuming decreasing restoration levels conditional on the previous repair actions. Compared with a popular imperfect repairable system setting which assumes a constant discount restoration level after the first failure occurrence, our decreasing restoration levels model may better represent the actual repair-restoration patterns of many asset systems. The likelihood function of the newly proposed model is derived and the model parameters can be estimated from historical failure time data. We adopt a cumulative hazard function based Monte Carlo simulation approach to calculate the expected number of failures for the newly proposed repairable system model. This new simulation algorithm is demonstrated on both simulated and real data and compared to a popular existing model under a Weibull distribution setting. An advantage of our simulation algorithm is that a bootstrap confidence band on the estimated expected number of failures can easily be constructed. The modelling and simulation results in the chapter can be used in the development of an engineering reliability analysis and asset management decision making tool.
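A minimal sketch of the general cumulative-hazard Monte Carlo approach, assuming a Weibull baseline and a constant restoration factor (the kind of existing model the chapter compares against, not the decreasing-restoration-level model proposed here). All parameter values are illustrative.

```python
import math
import random

def expected_failures(beta, eta, horizon, rho, trials=20_000, seed=1):
    """Monte Carlo estimate of the expected number of failures by `horizon`
    for a Weibull(beta, eta) system whose repairs rescale the accumulated
    (virtual) age by a constant factor rho (0 = good as new, 1 = bad as old)."""
    rng = random.Random(seed)

    def cum_hazard(t):
        return (t / eta) ** beta

    def inv_hazard(h):
        return eta * h ** (1.0 / beta)

    total = 0
    for _ in range(trials):
        real_age, virtual_age, failures = 0.0, 0.0, 0
        while True:
            # Draw the next failure time conditional on survival to the virtual age.
            u = rng.random()
            wait = inv_hazard(cum_hazard(virtual_age) - math.log(u)) - virtual_age
            real_age += wait
            if real_age > horizon:
                break
            failures += 1
            virtual_age = rho * (virtual_age + wait)  # imperfect restoration
        total += failures
    return total / trials

print(expected_failures(beta=1.8, eta=100.0, horizon=365.0, rho=0.5))
```

Averaging failure counts over many simulated histories is also what makes a bootstrap-style confidence band on the expected number of failures straightforward to construct.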
international conference on computational science | 2008
Lawrence Buckingham; James M. Hogan; Paul Roe; Jiro Sumitomo; Michael W. Towsey
We present a novel, web-accessible scientific workflow system which makes large-scale comparative studies accessible without programming or excessive configuration requirements. GPFlow allows a workflow defined on single input values to be automatically lifted to operate over collections of input values and supports the formation and processing of collections of values without the need for explicit iteration constructs. We introduce a new model for collection processing based on key aggregation and slicing which guarantees processing integrity and facilitates automatic association of inputs, allowing scientific users to manage the combinatorial explosion of data values inherent in large scale comparative studies. The approach is demonstrated using a core task from comparative genomics, and builds upon our previous work in supporting combined interactive and batch operation, through a lightweight web-based user interface.
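The abstract describes lifting a workflow step defined on single input values so that it operates over keyed collections, with inputs associated automatically by key. A minimal illustration of that idea follows; the function names and the keyed-dictionary representation are assumptions for the sketch, not the GPFlow API.

```python
def lift(step):
    """Lift a function over single values to one over keyed collections.
    Inputs sharing a key are associated automatically; output keeps the keys."""
    def lifted(*keyed_inputs):
        keys = set(keyed_inputs[0])
        for coll in keyed_inputs[1:]:
            keys &= set(coll)  # only keys present in every input collection
        return {k: step(*(coll[k] for coll in keyed_inputs)) for k in sorted(keys)}
    return lifted

# A step defined on a single value: GC content of one sequence.
def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

# The same step applied over a keyed collection (keys are hypothetical gene IDs).
sequences = {"geneA": "ATGC", "geneB": "ATAT"}
print(lift(gc_content)(sequences))  # {'geneA': 0.5, 'geneB': 0.0}
```

Because association happens by key rather than by explicit iteration, the user never writes loop constructs, and the combinatorial bookkeeping of large comparative studies is handled by the lifting machinery itself.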
cluster computing and the grid | 2008
Lawrence Buckingham; James M. Hogan; Paul Roe; Jiro Sumitomo; Michael W. Towsey
This work describes recent extensions to the GPFlow scientific workflow system in development at MQUTeR (www.mquter.qut.edu.au), which facilitate interactive experimentation, automatic lifting of computations from single-case to collection-oriented computation and automatic correlation and synthesis of collections. A GPFlow workflow presents as an acyclic data flow graph, yet provides powerful iteration and collection formation capabilities.
Science & Engineering Faculty | 2014
Lawrence Buckingham; James M. Hogan; Shlomo Geva; Wayne Kelly