Bryan Beresford-Smith
University of Melbourne
Publication
Featured research published by Bryan Beresford-Smith.
Genome Research | 2011
Luciano Pirola; Aneta Balcerczyk; Richard W. Tothill; Izhak Haviv; Anthony Kaspi; Sebastian Lunke; Mark Ziemann; Tom C. Karagiannis; Stephen Tonna; Adam Kowalczyk; Bryan Beresford-Smith; Geoff Macintyre; Ma Kelong; Zhang Hongyu; Jingde Zhu; Assam El-Osta
Emerging evidence suggests that poor glycemic control mediates post-translational modifications to the H3 histone tail. We are only beginning to understand the dynamic role of the diverse epigenetic changes mediated by hyperglycemia at single loci; elevated glucose levels are also thought to drive genome-wide changes, but these remain poorly understood. In this article we describe genome-wide histone H3K9/K14 hyperacetylation and DNA methylation maps conferred by hyperglycemia in primary human vascular cells. Chromatin immunoprecipitation (ChIP) and CpG methylation assays, followed by massively parallel sequencing (ChIP-seq and CpG-seq), identified unique hyperacetylation and CpG methylation signatures with proximal and distal patterns of regionalization associated with gene expression. Ingenuity knowledge-based pathway and gene ontology analyses indicate that hyperglycemia significantly affects human vascular chromatin, with transcriptional up-regulation of genes involved in metabolic and cardiovascular disease. We have generated the first installment of a reference collection of hyperglycemia-induced chromatin modifications using robust and reproducible platforms that allow parallel sequencing-by-synthesis of immunopurified content. We find that hyperglycemia-mediated induction of genes and pathways associated with endothelial dysfunction occurs through modulation of acetylated H3K9/K14, inversely correlated with methyl-CpG content.
Bioinformatics | 2012
Thomas C. Conway; Jeremy Wazny; Andrew J. Bromage; Martin Tymms; Dhanya Sooraj; Elizabeth D. Williams; Bryan Beresford-Smith
Motivation: Shotgun sequence read data derived from xenograft material contains a mixture of reads arising from the host and reads arising from the graft. Classifying the read mixture to separate the two allows for more precise analysis to be performed. Results: We present a technique, with an associated tool Xenome, which performs fast, accurate and specific classification of xenograft-derived sequence read data. We have evaluated it on RNA-Seq data from human, mouse and human-in-mouse xenograft datasets. Availability: Xenome is available for non-commercial use from http://www.nicta.com.au/bioinformatics. Contact: [email protected]
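The separation step can be pictured as a k-mer membership test: a read is labelled according to whether its k-mers occur only in the graft reference, only in the host reference, in both, or in neither. The sketch below is a minimal illustration of that idea only; it is not Xenome's implementation, which uses a compact k-mer index and its own set of read categories, and all function names and parameters here are ours.

```python
# Minimal k-mer membership sketch of host/graft read classification
# (illustrative only; not Xenome's implementation). A read is labelled
# by comparing its k-mers with k-mer sets built from the graft (e.g.
# human) and host (e.g. mouse) reference genomes.

def kmers(seq, k=25):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def build_kmer_set(reference_seqs, k=25):
    index = set()
    for seq in reference_seqs:
        index |= kmers(seq, k)
    return index

def classify_read(read, graft_kmers, host_kmers, k=25):
    graft_only = host_only = 0
    for km in kmers(read, k):
        in_graft, in_host = km in graft_kmers, km in host_kmers
        if in_graft and not in_host:
            graft_only += 1
        elif in_host and not in_graft:
            host_only += 1
    if graft_only and not host_only:
        return "graft"
    if host_only and not graft_only:
        return "host"
    if graft_only and host_only:
        return "ambiguous"
    return "neither"  # no informative k-mers, or only k-mers shared by both genomes
```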
PLOS Genetics | 2011
Reo Maruyama; Sibgat Choudhury; Adam Kowalczyk; Marina Bessarabova; Bryan Beresford-Smith; Thomas C. Conway; Antony Kaspi; Zhenhua Wu; Tatiana Nikolskaya; Vanessa F. Merino; Pang Kuo Lo; X. Shirley Liu; Yuri Nikolsky; Saraswati Sukumar; Izhak Haviv; Kornelia Polyak
Differentiation is an epigenetic program that involves the gradual loss of pluripotency and acquisition of cell type–specific features. Understanding these processes requires genome-wide analysis of epigenetic and gene expression profiles, which have been challenging in primary tissue samples due to limited numbers of cells available. Here we describe the application of high-throughput sequencing technology for profiling histone and DNA methylation, as well as gene expression patterns of normal human mammary progenitor-enriched and luminal lineage-committed cells. We observed significant differences in histone H3 lysine 27 tri-methylation (H3K27me3) enrichment and DNA methylation of genes expressed in a cell type–specific manner, suggesting their regulation by epigenetic mechanisms and a dynamic interplay between the two processes that together define developmental potential. The technologies we developed and the epigenetically regulated genes we identified will accelerate the characterization of primary cell epigenomes and the dissection of human mammary epithelial lineage-commitment and luminal differentiation.
IEEE/ACM Transactions on Computational Biology and Bioinformatics | 2012
Shanika Kuruppu; Bryan Beresford-Smith; Thomas C. Conway; Justin Zobel
Genomic repositories increasingly include individual as well as reference sequences, which tend to share long identical and near-identical strings of nucleotides. However, the sequential processing used by most compression algorithms, and the volumes of data involved, mean that these long-range repetitions are not detected. An order-insensitive, disk-based dictionary construction method can detect this repeated content and use it to compress collections of sequences. We explore a dictionary construction method that improves repeat identification in large DNA data sets. Comrad, our adaptation of an existing disk-based method, identifies exact repeated content in collections of sequences, exploiting similarities both within and across the input sequences. Comrad compresses the data over multiple passes; although this is expensive, it allows Comrad to handle large data sets within reasonable time and space. Comrad allows random access to individual sequences and subsequences without decompressing the whole data set. Comrad has no competitor in terms of the size of data sets that it can compress (extending to many hundreds of gigabytes) and, even for smaller data sets, the results are competitive with alternatives; as an example, 39 S. cerevisiae genomes compressed to 0.25 bits per base.
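To make the dictionary idea concrete, the toy sketch below replaces occurrences of previously identified repeated substrings with references to a shared dictionary. Comrad's actual algorithm builds the dictionary on disk over multiple passes and uses a far more compact encoding, so treat this purely as an illustration; the data and names are made up.

```python
# Toy illustration of dictionary-based compression of repeated content
# (not Comrad's on-disk, multi-pass algorithm, and names are ours).
# Long exact repeats detected across a collection are stored once in a
# dictionary; each sequence then stores references to dictionary entries
# in place of the repeated substrings, plus literal characters elsewhere.

def compress_with_dictionary(sequences, dictionary):
    """Greedily replace dictionary phrases with ('REF', index, length) tokens."""
    compressed = []
    for seq in sequences:
        out, i = [], 0
        while i < len(seq):
            best = None
            for d_idx, phrase in enumerate(dictionary):
                if seq.startswith(phrase, i):
                    if best is None or len(phrase) > len(dictionary[best]):
                        best = d_idx
            if best is not None:
                out.append(("REF", best, len(dictionary[best])))
                i += len(dictionary[best])
            else:
                out.append(("LIT", seq[i]))
                i += 1
        compressed.append(out)
    return compressed

genomes = ["ACGTACGTTT", "GGACGTACGT"]
dictionary = ["ACGTACGT"]          # repeated content identified beforehand
print(compress_with_dictionary(genomes, dictionary))
```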
Cell Stem Cell | 2013
Sibgat Choudhury; Vanessa Almendro; Vanessa F. Merino; Zhenhua Wu; Reo Maruyama; Ying Su; Filipe C. Martins; Mary Jo Fackler; Marina Bessarabova; Adam Kowalczyk; Thomas C. Conway; Bryan Beresford-Smith; Geoff Macintyre; Yu Kang Cheng; Zoila Lopez-Bujanda; Antony Kaspi; Rong Hu; Judith Robens; Tatiana Nikolskaya; Vilde D. Haakensen; Stuart J. Schnitt; Pedram Argani; Gabrielle Ethington; Laura Panos; Michael P. Grant; Jason Clark; William Herlihy; S. Joyce Lin; Grace L. Chew; Erik W. Thompson
Early full-term pregnancy is one of the most effective natural protections against breast cancer. To investigate this effect, we have characterized the global gene expression and epigenetic profiles of multiple cell types from normal breast tissue of nulliparous and parous women and carriers of BRCA1 or BRCA2 mutations. We found significant differences in CD44(+) progenitor cells, where the levels of many stem cell-related genes and pathways, including the cell-cycle regulator p27, are lower in parous women without BRCA1/BRCA2 mutations. We also noted a significant reduction in the frequency of CD44(+)p27(+) cells in parous women and showed, using explant cultures, that parity-related signaling pathways play a role in regulating the number of p27(+) cells and their proliferation. Our results suggest that pathways controlling p27(+) mammary epithelial cells and the numbers of these cells relate to breast cancer risk and can be explored for cancer risk assessment and prevention.
Bioinformatics | 2012
Thomas C. Conway; Jeremy Wazny; Andrew J. Bromage; Justin Zobel; Bryan Beresford-Smith
MOTIVATION The de novo assembly of short read high-throughput sequencing data poses significant computational challenges. The volume of data is huge, the reads are tiny compared to the underlying sequence, and there are significant numbers of sequencing errors. Numerous software packages allow users to assemble short reads, but most are either limited to relatively small genomes (e.g., bacteria), require large computing infrastructure, or employ greedy algorithms and thus often do not yield high-quality results. RESULTS We have developed Gossamer, an implementation of the de Bruijn approach to assembly that requires close to the theoretical minimum of memory but still allows efficient processing. Our results show that it is space efficient and produces high-quality assemblies. AVAILABILITY Gossamer is available for non-commercial use from http://www.genomics.csse.unimelb.edu.au/product-gossamer.php.
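As a rough illustration of the de Bruijn approach (and emphatically not Gossamer's succinct, near-minimal-memory data structures), the sketch below builds a graph whose nodes are (k-1)-mers and whose edges come from the k-mers observed in the reads, then walks non-branching paths to produce simple contigs; all names and parameters are ours.

```python
# Minimal de Bruijn assembly sketch (illustrative only; Gossamer stores
# the graph in succinct structures rather than Python dictionaries).
from collections import defaultdict

def de_bruijn_graph(reads, k=5):
    """Map each (k-1)-mer to the set of (k-1)-mers that follow it in a read."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

def extend_unambiguously(graph, start):
    """Walk non-branching edges from a start node to form a simple contig."""
    contig, node, seen = start, start, {start}
    while len(graph.get(node, ())) == 1:
        (nxt,) = graph[node]
        if nxt in seen:            # guard against cycles
            break
        contig += nxt[-1]
        node = nxt
        seen.add(node)
    return contig

reads = ["ACGTACGGT", "CGTACGGTA"]
graph = de_bruijn_graph(reads, k=5)
print(extend_unambiguously(graph, "ACGT"))   # -> ACGTACGGTA
```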
The Journal of Risk Finance | 2007
Bryan Beresford-Smith; Colin J. Thompson
Purpose - The paper aims to provide a quantitative methodology for dealing with (true) Knightian uncertainty in the management of credit risk based on information-gap decision theory. Design/methodology/approach - Credit risk management assigns clients to credit risk categories with estimated probabilities of default for each category. Since probabilities of default are subject to uncertainty, the estimated expected loss given default on a loan-book can be subject to significant uncertainty. Information-gap decision theory is applied to construct optimal loan-book portfolios that are robust against uncertainty. Findings - By choosing optimal interest-rate ratios among the credit risk categories one can simultaneously satisfy regulatory requirements on expected losses and an institution's aspirations on expected profits. Research limitations/implications - In the analysis presented here only defaults over specific time frames have been considered. However, performance requirements expressed in terms of defaults and profits over multiple time frames that allow for transitions of clients between credit risk categories over time could also be incorporated into an information-gap analysis. Practical implications - An additional management analysis tool for applying information-gap modeling to credit risk has been provided. Originality/value - This paper provides a new methodology for analyzing credit risk based on information-gap decision theory.
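In schematic form, an information-gap robustness analysis of a loan-book can be written as follows; the notation is ours and sketches only the general info-gap setup, not necessarily the uncertainty model or performance requirement used in the paper. Let $q_i$ be the exposure allocated to credit risk category $i$, $\tilde{p}_i$ the estimated probability of default, $\lambda_i$ the loss given default, and $L_c$ the acceptable expected loss. The robustness $\hat{\alpha}$ is the largest uncertainty horizon $\alpha$ at which the worst-case expected loss still satisfies the requirement:

$$
\hat{\alpha}(q, L_c) = \max\Big\{ \alpha \ge 0 \;:\; \max_{p \in \mathcal{U}(\alpha, \tilde{p})} \sum_i q_i \, p_i \, \lambda_i \le L_c \Big\},
\qquad
\mathcal{U}(\alpha, \tilde{p}) = \big\{ p \;:\; |p_i - \tilde{p}_i| \le \alpha\, \tilde{p}_i,\ 0 \le p_i \le 1 \big\}.
$$

In the paper's setting, the interest-rate ratios among the categories are then chosen so that the loss requirement and the profit aspiration can both be met.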
The Journal of Risk Finance | 2009
Bryan Beresford-Smith; Colin J. Thompson
Purpose - The purpose of this paper is to provide a quantitative methodology based on information-gap decision theory for dealing with (true) Knightian uncertainty in the management of portfolios of assets with uncertain returns. Design/methodology/approach - Portfolio managers aim to maximize returns for given levels of risk. Since future returns on assets are uncertain, the expected return on a portfolio of assets can be subject to significant uncertainty. Information-gap decision theory is used to construct portfolios that are robust against uncertainty. Findings - Using the added dimensions of aspirational parameters and performance requirements in information-gap theory, the paper shows that one cannot simultaneously have two robust-optimal portfolios that outperform a specified return and a benchmark portfolio unless one of the portfolios has arbitrarily large long and short positions. Research limitations/implications - The paper considers only one uncertainty model and two performance requirements in an information-gap analysis over a particular time frame. Alternative uncertainty models could be introduced, and benchmarking against proxy portfolios or competitors is an example of an additional performance requirement that could be incorporated in an information-gap analysis. Practical implications - An additional methodology for applying information-gap modeling to portfolio management has been provided. Originality/value - This paper provides a novel approach for managing portfolios in the face of uncertainty in future asset returns.
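A minimal numerical sketch of how such a robustness calculation might look for a portfolio, under an assumed fractional-error uncertainty model (our own simplification, not necessarily the model used in the paper): the robustness with respect to a target return is the largest uncertainty horizon at which the worst-case portfolio return still meets the target.

```python
# Schematic info-gap robustness calculation for a portfolio (our own
# minimal model, not the paper's): each asset return may deviate from
# its estimate by a fraction alpha of that estimate, and robustness is
# the largest alpha at which the worst-case return still meets the target.
import numpy as np

def worst_case_return(weights, est_returns, alpha):
    """Worst case of w.r over |r_i - est_i| <= alpha * |est_i|."""
    w, r = np.asarray(weights), np.asarray(est_returns)
    # Each return moves against the position: down for longs, up for shorts.
    return float(w @ r - alpha * (np.abs(w) @ np.abs(r)))

def robustness(weights, est_returns, target, alpha_max=10.0, steps=10000):
    """Largest alpha whose worst-case return still meets the target."""
    best = 0.0
    for alpha in np.linspace(0.0, alpha_max, steps):
        if worst_case_return(weights, est_returns, alpha) >= target:
            best = alpha
        else:
            break
    return best

w = [0.6, 0.4]                  # portfolio weights (hypothetical)
r = [0.08, 0.05]                # estimated asset returns (hypothetical)
print(robustness(w, r, target=0.04))
```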
international conference on intelligent sensors, sensor networks and information | 2007
Wanzhi Qiu; Khusro Saleem; Minh Pham; Mark E. Halpern; Bryan Beresford-Smith; Anthony Overmars; Kithsiri B. Dassanayake; Gavin Thoms
Wireless sensor networks for irrigation applications normally consist of low-cost and low-power nodes deployed in a harsh environment. The radio links between the coordinating node and actuating nodes are more critical, and must be more robust to radio interference and node failure, than the links between the coordinating node and sensing nodes. We present an efficient method for creating robust links in irrigation sensor networks built on the ZigBee specification. In particular, multipaths between the coordinating node and actuating nodes are created and used to enhance the critical links. We show that, by exploiting the properties of ZigBee network addresses, these multipaths can be dynamically created and released in response to topology changes without any path-search activity. Simulation results confirm the robustness of the proposed multipath links.
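The address property in question is presumably ZigBee's distributed tree address assignment, in which each parent delegates arithmetically computable address blocks to its children via the standard 'Cskip' function, so a node's position and its descendants' address range can be derived from addresses alone, with no route discovery. The sketch below reproduces only that standard addressing arithmetic with illustrative parameters; it does not reproduce the paper's multipath construction.

```python
# Standard ZigBee distributed (tree) address assignment, as a sketch.
# cm = max children per parent, rm = max router children, lm = max depth.
# A node's address alone determines its position in the tree, which is
# the property that lets alternative links be derived without path search.

def cskip(depth, cm, rm, lm):
    """Size of the address block delegated to each router child at this depth."""
    if rm == 1:
        return 1 + cm * (lm - depth - 1)
    return (1 + cm - rm - cm * rm ** (lm - depth - 1)) // (1 - rm)

def router_child_address(parent_addr, depth, k, cm, rm, lm):
    """Address of the k-th router child (k = 1..rm) of a parent at the given depth."""
    return parent_addr + (k - 1) * cskip(depth, cm, rm, lm) + 1

# Example: a coordinator (address 0, depth 0) with cm = rm = 4, lm = 3
# delegates blocks of Cskip(0) = 21 addresses to router children 1, 22, 43, 64.
for k in range(1, 5):
    print(router_child_address(0, 0, k, cm=4, rm=4, lm=3))
```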
workshop on algorithms in bioinformatics | 2010
Arun Siddharth Konagurthu; Lloyd Allison; Thomas C. Conway; Bryan Beresford-Smith; Justin Zobel
New genome sequencing technologies are poised to enter the sequencing landscape with significantly higher throughput of read data produced at unprecedented speeds and lower costs per run. However, current in-memory methods to align a set of reads to one or more reference genomes are ill-equipped to handle the expected growth in read throughput from newer technologies. This paper reports the design of a new out-of-core read mapping algorithm, Syzygy, which can scale to large volumes of read and genome data. The algorithm is designed to run in a constant, user-stipulated amount of main memory, small enough to fit on standard desktops, irrespective of the sizes of the read and genome data. Syzygy achieves superior spatial locality of reference, which allows all large data structures used in the algorithm to be maintained on disk. We compare our prototype implementation with several popular read alignment programs. Our results demonstrate that Syzygy can scale to very large read volumes while using only a fraction of the memory of the alternatives, without sacrificing performance.
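A common way to obtain this kind of constant-memory, disk-friendly behaviour is to replace a large in-memory index with seed lists that are externally sorted and then joined in a single sequential merge pass. The sketch below shows that general pattern in memory only; it is our illustration of the idea, not Syzygy's algorithm, and in an out-of-core setting the sorted seed lists would live in files and be streamed.

```python
# Our illustration of a sort-and-merge seed join (not Syzygy's algorithm).
# Seeds from reads and genome are sorted by seed value and joined with a
# single sequential pass; on disk this becomes an external sort followed
# by a streaming merge, so memory use stays roughly constant.

def seeds(sequences, k=20):
    """Yield (seed, source_id, offset) triples for every k-length window."""
    for sid, seq in enumerate(sequences):
        for i in range(len(seq) - k + 1):
            yield seq[i:i + k], sid, i

def merge_join(read_seeds, genome_seeds):
    """Merge two seed lists sorted by seed value; emit (read id, genome offset) hits."""
    i = j = 0
    while i < len(read_seeds) and j < len(genome_seeds):
        if read_seeds[i][0] < genome_seeds[j][0]:
            i += 1
        elif read_seeds[i][0] > genome_seeds[j][0]:
            j += 1
        else:
            seed, g_start = read_seeds[i][0], j
            while j < len(genome_seeds) and genome_seeds[j][0] == seed:
                j += 1
            while i < len(read_seeds) and read_seeds[i][0] == seed:
                for g in genome_seeds[g_start:j]:
                    yield read_seeds[i][1], g[2]
                i += 1

reads = ["ACGTACGTAC"]
genome = ["TTACGTACGTACGG"]
read_seeds = sorted(seeds(reads, k=8))
genome_seeds = sorted(seeds(genome, k=8))
print(list(merge_join(read_seeds, genome_seeds)))   # candidate seed hits
```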