Publication


Featured research published by James Cuff.


Bioinformatics | 2004

The Jalview Java alignment editor

Michele Clamp; James Cuff; Stephen M. J. Searle; Geoffrey J. Barton

Multiple sequence alignment remains a crucial method for understanding the function of groups of related nucleic acid and protein sequences. However, it is known that automatic multiple sequence alignments can often be improved by manual editing. Therefore, tools are needed to view and edit multiple sequence alignments. Due to growth in the sequence databases, multiple sequence alignments can often be large and difficult to view efficiently. The Jalview Java alignment editor is presented here, which enables fast viewing and editing of large multiple sequence alignments.


Nucleic Acids Research | 2002

The Ensembl genome database project

Tim Hubbard; Darren Barker; Ewan Birney; Graham Cameron; Yuan Chen; L. Clark; Tony Cox; James Cuff; V. Curwen; Thomas A. Down; Richard Durbin; E. Eyras; James Gilbert; Martin Hammond; L. Huminiecki; Arek Kasprzyk; Heikki Lehväslaiho; Philip Lijnzaad; Craig Melsopp; Emmanuel Mongin; R. Pettett; M. Pocock; Simon Potter; A. Rust; Esther Schmidt; Stephen M. J. Searle; Guy Slater; J. Smith; W. Spooner; A. Stabenau

The Ensembl (http://www.ensembl.org/) database project provides a bioinformatics framework to organise biology around the sequences of large genomes. It is a comprehensive source of stable automatic annotation of the human genome sequence, with confirmed gene predictions that have been integrated with external data sources, and is available as either an interactive web site or as flat files. It is also an open source software engineering project to develop a portable system able to handle very large genomes and associated requirements from sequence analysis to data storage and visualisation. The Ensembl site is one of the leading sources of human genome sequence annotation and provided much of the analysis for publication by the international human genome project of the draft genome. The Ensembl system is being installed around the world in both companies and academic sites on machines ranging from supercomputers to laptops.


Bioinformatics | 1998

JPred: a consensus secondary structure prediction server.

James Cuff; Michele Clamp; Asim S. Siddiqui; M. Finlay; Geoffrey J. Barton

An interactive protein secondary structure prediction Internet server is presented. The server allows a single sequence or multiple alignment to be submitted, and returns predictions from six secondary structure prediction algorithms that exploit evolutionary information from multiple sequences. A consensus prediction is also returned which improves the average Q3 accuracy of prediction by 1% to 72.9%. The server simplifies the use of current prediction algorithms and allows conservation patterns important to structure and function to be identified.
Availability: http://barton.ebi.ac.uk/servers/jpred.html
Contact: [email protected]
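The consensus idea described above can be sketched as a per-residue majority vote over several 3-state (helix/strand/coil) predictions, scored with the Q3 metric (the percentage of residues assigned the correct state). This is a minimal illustration with made-up predictions, not the JPred implementation:

```python
from collections import Counter

def consensus(predictions):
    """Per-residue majority vote over several equal-length 3-state (H/E/C) predictions."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*predictions))

def q3(predicted, observed):
    """Q3: percentage of residues whose 3-state label matches the observed structure."""
    matches = sum(p == o for p, o in zip(predicted, observed))
    return 100.0 * matches / len(observed)

# Three hypothetical method outputs for a 5-residue fragment:
preds = ["HHHCC", "HHECC", "HEECC"]
voted = consensus(preds)
print(voted)                      # majority state at each position
print(q3(voted, "HHHCC"))        # accuracy against a hypothetical observed structure
```

The vote resolves per column, so a method that is wrong at one position can be outvoted by the others there, which is the intuition behind the reported ~1% consensus gain.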


Proteins | 2000

Application of multiple sequence alignment profiles to improve protein secondary structure prediction

James Cuff; Geoffrey J. Barton

The effect of training a neural network secondary structure prediction algorithm with different types of multiple sequence alignment profiles derived from the same sequences is shown to provide a range of accuracy from 70.5% to 76.4%. The best accuracy of 76.4% (standard deviation 8.4%), is 3.1% (Q3) and 4.4% (SOV2) better than the PHD algorithm run on the same set of 406 sequence non‐redundant proteins that were not used to train either method. Residues predicted by the new method with a confidence value of 5 or greater have an average Q3 accuracy of 84%, and cover 68% of the residues. Relative solvent accessibility based on a two state model, for 25, 5, and 0% accessibility, is predicted at 76.2, 79.8, and 86.6% accuracy respectively. The source of the improvements obtained from training with different representations of the same alignment data is described in detail. The new Jnet prediction method resulting from this study is available in the Jpred secondary structure prediction server, and as a stand‐alone computer program from: http://barton.ebi.ac.uk/. Proteins 2000;40:502–511.


Nature | 2011

A high-resolution map of human evolutionary constraint using 29 mammals

Kerstin Lindblad-Toh; Manuel Garber; Or Zuk; Michael F. Lin; Brian J. Parker; Stefan Washietl; Pouya Kheradpour; Jason Ernst; Gregory Jordan; Evan Mauceli; Lucas D. Ward; Craig B. Lowe; Alisha K. Holloway; Michele Clamp; Sante Gnerre; Jessica Alföldi; Kathryn Beal; Jean Chang; Hiram Clawson; James Cuff; Federica Di Palma; Stephen Fitzgerald; Paul Flicek; Mitchell Guttman; Melissa J. Hubisz; David B. Jaffe; Irwin Jungreis; W. James Kent; Dennis Kostka; Marcia Lara

The comparison of related genomes has emerged as a powerful lens for genome interpretation. Here we report the sequencing and comparative analysis of 29 eutherian genomes. We confirm that at least 5.5% of the human genome has undergone purifying selection, and locate constrained elements covering ∼4.2% of the genome. We use evolutionary signatures and comparisons with experimental data sets to suggest candidate functions for ∼60% of constrained bases. These elements reveal a small number of new coding exons, candidate stop codon readthrough events and over 10,000 regions of overlapping synonymous constraint within protein-coding exons. We find 220 candidate RNA structural families, and nearly a million elements overlapping potential promoter, enhancer and insulator regions. We report specific amino acid residues that have undergone positive selection, 280,000 non-coding elements exapted from mobile elements and more than 1,000 primate- and human-accelerated elements. Overlap with disease-associated variants indicates that our findings will be relevant for studies of human biology, health and disease.


Proteins | 1999

Evaluation and improvement of multiple sequence methods for protein secondary structure prediction

James Cuff; Geoffrey J. Barton

A new dataset of 396 protein domains is developed and used to evaluate the performance of the protein secondary structure prediction algorithms DSC, PHD, NNSSP, and PREDATOR. The maximum theoretical Q3 accuracy for combination of these methods is shown to be 78%. A simple consensus prediction on the 396 domains, with automatically generated multiple sequence alignments gives an average Q3 prediction accuracy of 72.9%. This is a 1% improvement over PHD, which was the best single method evaluated. Segment Overlap Accuracy (SOV) is 75.4% for the consensus method on the 396‐protein set. The secondary structure definition method DSSP defines 8 states, but these are reduced by most authors to 3 for prediction. Application of the different published 8‐ to 3‐state reduction methods shows variation of over 3% on apparent prediction accuracy. This suggests that care should be taken to compare methods by the same reduction method. Two new sequence datasets (CB513 and CB251) are derived which are suitable for cross‐validation of secondary structure prediction methods without artifacts due to internal homology. A fully automatic World Wide Web service that predicts protein secondary structure by a combination of methods is available via http://barton.ebi.ac.uk/. Proteins 1999;34:508–519.
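The sensitivity of apparent accuracy to the choice of 8-to-3 state reduction noted above can be seen in a small sketch. The two mappings below are common conventions assumed for illustration, not the exact published reductions compared in the paper:

```python
# Two assumed DSSP 8-to-3 reductions (for illustration only):
#   STRICT: only H -> helix, only E -> strand, all other states -> coil
#   BROAD:  G and I also count as helix, B as strand
STRICT = {"H": "H", "E": "E"}
BROAD = {"H": "H", "G": "H", "I": "H", "E": "E", "B": "E"}

def reduce_states(dssp, mapping):
    """Map an 8-state DSSP string to 3 states (H/E/C); unmapped states become coil."""
    return "".join(mapping.get(s, "C") for s in dssp)

observed = "HGGEBTS-"     # hypothetical 8-state DSSP assignment
prediction = "HHHEECCC"   # a fixed hypothetical 3-state prediction

for name, mapping in [("strict", STRICT), ("broad", BROAD)]:
    reference = reduce_states(observed, mapping)
    acc = 100.0 * sum(p == o for p, o in zip(prediction, reference)) / len(reference)
    print(name, reference, f"{acc:.1f}%")
```

The same fixed prediction scores very differently against the two reduced reference strings, which is why the paper cautions that methods should only be compared under the same reduction.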


Proceedings of the National Academy of Sciences of the United States of America | 2007

Distinguishing protein-coding and noncoding genes in the human genome

Michele Clamp; Ben Fry; Mike Kamal; Xiaohui Xie; James Cuff; Michael F. Lin; Manolis Kellis; Kerstin Lindblad-Toh; Eric S. Lander

Although the Human Genome Project was completed 4 years ago, the catalog of human protein-coding genes remains a matter of controversy. Current catalogs list a total of ≈24,500 putative protein-coding genes. It is broadly suspected that a large fraction of these entries are functionally meaningless ORFs present by chance in RNA transcripts, because they show no evidence of evolutionary conservation with mouse or dog. However, there is currently no scientific justification for excluding ORFs simply because they fail to show evolutionary conservation: the alternative hypothesis is that most of these ORFs are actually valid human genes that reflect gene innovation in the primate lineage or gene loss in the other lineages. Here, we reject this hypothesis by carefully analyzing the nonconserved ORFs—specifically, their properties in other primates. We show that the vast majority of these ORFs are random occurrences. The analysis yields, as a by-product, a major revision of the current human catalogs, cutting the number of protein-coding genes to ≈20,500. Specifically, it suggests that nonconserved ORFs should be added to the human gene catalog only if there is clear evidence of an encoded protein. It also provides a principled methodology for evaluating future proposed additions to the human gene catalog. Finally, the results indicate that there has been relatively little true innovation in mammalian protein-coding genes.


Nucleic Acids Research | 2003

Ensembl 2002: accommodating comparative genomics

Michele Clamp; D. Andrews; Darren Barker; Paul Bevan; Graham Cameron; Yuting Chen; Louise Clark; Tony Cox; James Cuff; Val Curwen; Thomas A. Down; Richard Durbin; Eduardo Eyras; James Gilbert; Martin Hammond; Tim Hubbard; Arek Kasprzyk; Damian Keefe; Heikki Lehväslaiho; Vishwanath R. Iyer; Craig Melsopp; Emmanuel Mongin; Roger Pettett; Simon Potter; Alistair G. Rust; Esther Schmidt; Steve Searle; Guy Slater; James Smith; William Spooner

The Ensembl (http://www.ensembl.org/) database project provides a bioinformatics framework to organise biology around the sequences of large genomes. It is a comprehensive source of stable automatic annotation of human, mouse and other genome sequences, available as either an interactive web site or as flat files. Ensembl also integrates manually annotated gene structures from external sources where available. As well as being one of the leading sources of genome annotation, Ensembl is an open source software engineering project to develop a portable system able to handle very large genomes and associated requirements. These range from sequence analysis to data storage and visualisation, and installations exist around the world in both companies and at academic sites. With both human and mouse genome sequences available and more vertebrate sequences to follow, many of the recent developments in Ensembl have focused on automatic comparative genome analysis and visualisation.


Bioinformatics | 2000

ProtEST: protein multiple sequence alignments from expressed sequence tags

James Cuff; Ewan Birney; Michele Clamp; Geoffrey J. Barton

Motivation: An automatic sequence searching method (ProtEST) is described which constructs multiple protein sequence alignments from protein sequences and translated expressed sequence tags (ESTs). ProtEST is more effective than a simple TBLASTN search of the query against the EST database, as the sequences are automatically clustered, assembled, made non-redundant, checked for sequence errors, translated into protein and then aligned and displayed.
Results: A ProtEST search found a non-redundant, translated, error- and length-corrected EST sequence for > 58% of sequences when single sequences from 1407 Pfam-A seed alignments were used as the probe. The average family size of the resulting alignments of translated EST sequences contained > 10 sequences. In a cross-validated test of protein secondary structure prediction, alignments from the new procedure led to an improvement of 3.4% average Q3 prediction accuracy over single sequences.
Availability: The ProtEST method is available as an Internet World Wide Web service at http://barton.ebi.ac.uk/servers/protest.html. The Wise2 package for protein and genomic comparisons and the ProtESTWise script can be found at http://www.sanger.ac.uk/Software/Wise2
Contact: [email protected]


Journal of the American Medical Informatics Association | 2016

The Medical Science DMZ

Sean Peisert; William K. Barnett; Eli Dart; James Cuff; Robert L. Grossman; Edward Balas; Ari E. Berman; Anurag Shankar; Brian Tierney

Objective: We describe use cases and an institutional reference architecture for maintaining high-capacity, data-intensive network flows (e.g., 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations.
Materials and Methods: High-end networking, packet filter firewalls, network intrusion detection systems.
Results: We describe a “Medical Science DMZ” concept as an option for secure, high-volume transport of large, sensitive data sets between research institutions over national research networks.
Discussion: The exponentially increasing amounts of “omics” data, the rapid increase of high-quality imaging, and other rapidly growing clinical data sets have resulted in the rise of biomedical research “big data.” The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large data sets. Maintaining data-intensive flows that comply with HIPAA and other regulations presents a new challenge for biomedical research. Recognizing this, we describe a strategy that marries performance and security by borrowing from and redefining the concept of a “Science DMZ”, a framework that is used in physical sciences and engineering research to manage high-capacity data flows.
Conclusion: By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computer and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements.

Collaboration


Dive into James Cuff's collaborations.

Top Co-Authors

Michele Clamp, Wellcome Trust Sanger Institute
Ewan Birney, European Bioinformatics Institute
Craig Melsopp, European Bioinformatics Institute
Damian Keefe, European Bioinformatics Institute
Emmanuel Mongin, European Bioinformatics Institute
Heikki Lehväslaiho, European Bioinformatics Institute
James Gilbert, Wellcome Trust Sanger Institute
Martin Hammond, European Bioinformatics Institute