
Publication


Featured research published by Brian E. Coggins.


Journal of Biomolecular NMR | 2003

PACES: Protein sequential assignment by computer-assisted exhaustive search

Brian E. Coggins; Pei Zhou

A crucial step in determining solution structures of proteins using nuclear magnetic resonance (NMR) spectroscopy is sequential assignment, the process of correlating backbone resonances to the corresponding residues in the primary sequence of a protein, today typically using data from triple-resonance NMR experiments. Although the development of automated approaches for sequential assignment has greatly facilitated this process, the performance of these programs is usually less satisfactory for large proteins, especially in cases of missing connectivity or severe chemical shift degeneracy. Here, we report the development of a novel computer-assisted method for sequential assignment, using an algorithm that conducts an exhaustive search over all spin systems, first to establish sequential connectivities and then to make assignments. By running the program iteratively with user intervention after each cycle, ambiguities in the assignments can be eliminated efficiently and backbone resonances can be assigned rapidly. The efficiency and robustness of this approach have been tested with 27 proteins ranging in size from 76 to 723 amino acids, and with data of varying quality, using experimental data for three proteins and published assignments modified with simulated noise for the other 24. The complexity of sequential assignment with respect to the size of the protein, the completeness of the NMR data sets, and the uncertainty in resonance positions has also been examined.
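The core idea of an exhaustive connectivity search can be illustrated with a toy sketch (this is not the PACES implementation; the spin-system data, the choice of Cα shifts, and the matching tolerance below are invented for illustration): each spin system carries an intra-residue Cα shift and the Cα shift of the preceding residue, two systems are sequentially linkable when those shifts agree within a tolerance, and every consistent chain is enumerated.

```python
TOL = 0.2  # ppm matching tolerance (illustrative value)

spin_systems = {                      # hypothetical data: name -> (CA, CA_prev)
    "SS1": (58.1, 54.3),
    "SS2": (61.7, 58.2),
    "SS3": (54.3, 49.9),
}

def linkable(a, b):
    """True if spin system b can follow a (b's CA_prev matches a's CA)."""
    return abs(spin_systems[b][1] - spin_systems[a][0]) <= TOL

def all_chains(start, used=None):
    """Exhaustively enumerate every sequential chain beginning at `start`."""
    used = used or {start}
    chains = [[start]]
    for nxt in spin_systems:
        if nxt not in used and linkable(start, nxt):
            for tail in all_chains(nxt, used | {nxt}):
                chains.append([start] + tail)
    return chains

# In this toy data SS3 -> SS1 -> SS2 is the only fully linked chain.
print(max((c for s in spin_systems for c in all_chains(s)), key=len))
```

In a realistic setting the chain fragments would then be mapped onto the protein sequence using residue-type information, and user intervention between cycles would prune ambiguous branches, as the abstract describes.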


Nature Structural & Molecular Biology | 2003

Structure of the LpxC deacetylase with a bound substrate-analog inhibitor

Brian E. Coggins; Xuechen Li; Amanda L. McClerren; Ole Hindsgaul; Christian R. H. Raetz; Pei Zhou

The zinc-dependent UDP-3-O-acyl-N-acetylglucosamine deacetylase (LpxC) catalyzes the first committed step in the biosynthesis of lipid A, the hydrophobic anchor of lipopolysaccharide (LPS) that constitutes the outermost monolayer of Gram-negative bacteria. As LpxC is crucial for the survival of Gram-negative organisms and has no sequence homology to known mammalian deacetylases or amidases, it is an excellent target for the design of new antibiotics. The solution structure of LpxC from Aquifex aeolicus in complex with a substrate-analog inhibitor, TU-514, reveals a novel α/β fold, a unique zinc-binding motif and a hydrophobic passage that captures the acyl chain of the inhibitor. On the basis of biochemical and structural studies, we propose a catalytic mechanism for LpxC, suggest a model for substrate binding and provide evidence that mobility and dynamics in structural motifs close to the active site have key roles in the capture of the substrate.


Journal of Biomolecular NMR | 2008

High resolution 4-D spectroscopy with sparse concentric shell sampling and FFT-CLEAN

Brian E. Coggins; Pei Zhou

Recent efforts to reduce the measurement time for multidimensional NMR experiments have fostered the development of a variety of new procedures for sampling and data processing. We recently described concentric ring sampling for 3-D NMR experiments, which is superior to radial sampling as input for processing by a multidimensional discrete Fourier transform. Here, we report the extension of this approach to 4-D spectroscopy as Randomized Concentric Shell Sampling (RCSS), where sampling points for the indirect dimensions are positioned on concentric shells, and where random rotations in the angular space are used to avoid coherent artifacts. With simulations, we show that RCSS produces a very low level of artifacts, even with a very limited number of sampling points. The RCSS sampling patterns can be adapted to fine rectangular grids to permit use of the Fast Fourier Transform in data processing, without an apparent increase in the artifact level. These artifacts can be further reduced to the noise level using the iterative CLEAN algorithm developed in radioastronomy. We demonstrate these methods on the high resolution 4-D HCCH-TOCSY spectrum of protein G’s B1 domain, using only 1.2% of the sampling that would be needed conventionally for this resolution. The use of a multidimensional FFT instead of the slow DFT for initial data processing and for subsequent CLEAN significantly reduces the calculation time, yielding an artifact level that is on par with the level of the true spectral noise.
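The sampling idea can be sketched as follows (a sketch only, with invented grid size, shell count, and point-density rule, not the published RCSS pattern generator): place roughly uniform points on concentric shells in the three indirect dimensions, apply an independent random rotation to each shell to avoid coherent artifacts, and snap the points to a rectangular grid so the data can be processed with an ordinary multidimensional FFT.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32          # grid points per indirect dimension (illustrative)
n_shells = 16   # number of concentric shells

def fibonacci_sphere(n):
    """Roughly uniform unit vectors on a sphere (golden-angle spiral)."""
    i = np.arange(n) + 0.5
    phi = np.arccos(1 - 2 * i / n)       # polar angle
    theta = np.pi * (1 + 5**0.5) * i     # azimuthal angle
    return np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=1)

def random_rotation(rng):
    """Random orthogonal 3x3 matrix via sign-fixed QR of a Gaussian matrix
    (may include a reflection, which is harmless for sampling)."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    return q * np.sign(np.diag(r))

samples = set()
for k in range(1, n_shells + 1):
    radius = k * (N / 2) / n_shells               # shell radii grow linearly
    n_pts = max(8, int(4 * np.pi * k**2 / 40))    # more points on outer shells
    pts = fibonacci_sphere(n_pts) @ random_rotation(rng).T * radius
    # snap each shell point to the nearest rectangular-grid node
    for p in np.rint(pts).astype(int):
        samples.add(tuple(np.clip(p, -N // 2, N // 2 - 1)))

print(f"{len(samples)} grid samples out of {N**3}")
```

The set of grid nodes produced this way is the sampling schedule; only those time-domain points would be measured, and the spectrum is obtained by FFT of the (zero-filled) grid followed by CLEAN-style artifact removal.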


Progress in Nuclear Magnetic Resonance Spectroscopy | 2010

Radial sampling for fast NMR: Concepts and practices over three decades.

Brian E. Coggins; Ronald A. Venters; Pei Zhou

For almost as long as three- and four-dimensional NMR experiments have been used, NMR spectroscopists have been devising ways to speed them up. Indeed, the publication that is often cited as the very first to demonstrate a 3-D NMR experiment commented, “it has been thought that…high-resolution 3-D NMR experiments are impracticable because of huge data matrices and long measurement times,” but went on to “suggest a technique for reduction of data matrices” using selective excitation pulses [1]. Although both spectrometers and computers have advanced considerably since this 1987 publication, somewhat changing the definitions of huge and long, the fundamental problem of measurement time continues to limit the experiments that can be carried out in practice using conventional multidimensional NMR methodology, and significant effort is still devoted to alleviating this restriction. The problem arises from the very nature of Fourier transform (FT) NMR, which involves the systematic sampling of a signal over time followed by the calculation of the spectrum using the FT [2, 3]. Traditionally, an n-dimensional (n-D) experiment is obtained through the sampling of the time domain on a complete n-D Cartesian grid; since the number of points in an n-D grid grows exponentially with the number of dimensions n, the measurement time needed to record the experiment becomes considerable even for small n. Yet NMR spectra are generally only sparsely populated with signals, suggesting that there is no statistical need for so many observations (other than signal accumulation for sensitivity in some cases), and that a suitable alternative approach to sampling and/or processing might significantly reduce the time requirement, while generating the same or more spectral information. 
Two trends in biomolecular NMR research have given particular impetus to these efforts: the drive for increased throughput in studies of small proteins—for example in structural genomics—which requires running today’s routine experiments more quickly, and the increasing attention given to large and challenging systems, which require more dimensions and higher resolution than conventional experiments can offer. A variety of methods have been introduced in these efforts to reduce the amount of data needed for multidimensional NMR, but a surprising number of them share in common that they sample the indirect dimensions of the time domain along radial spokes. The measurement of a radial spoke simply means collecting data samples along a line in the time domain that passes through the origin. If multiple radial spokes are sampled, the resulting dataset is equivalent to recording the NMR experiment in cylindrical coordinates or their higher-dimensional equivalent (cylindrical rather than spherical because the directly observed dimension is always sampled conventionally). The potential advantage of this arises from the fact that one can arrange the radial sampling points so as to obtain higher resolution information than with conventional Cartesian sampling, for the same or a smaller number of samples. Depending on the processing method used to extract spectral information from the data, this approach may lead to artifacts or ambiguities—but it also has the potential to provide complete spectral information in considerably less time than required for conventional NMR. The purpose of this article is to review the long history of radial sampling in NMR, from its initial introduction in the “accordion spectroscopy” experiments, through reduced-dimensionality and G-matrix Fourier Transform (GFT) spectroscopy, and on to the projection spectroscopy and projection-reconstruction techniques. 
Because all of these methods share a common mathematical foundation—despite their sometimes differing vocabularies—we first explain these underlying concepts. We then continue with a chronological survey of the different approaches, describing how they were developed, how they work and how they have been put to use. Particular attention is given to how these methods can be used to reduce the measurement time of the experiment, including the theoretical basis for the time savings and the practical tradeoffs that can result. It is important to note that while this review describes a number of techniques for reducing NMR measurement time, it does not attempt to describe the many methods that have been introduced recently for that purpose which do not use radial sampling. These include random sampling [4–11], concentric ring or shell sampling [12, 13] and other unconventional sampling approaches (e.g. spiral [14]); filter diagonalization analysis to extract high-resolution information from low-resolution conventionally sampled data [15–17]; the measurement of a spectrum in a single scan through the encoding of the spectroscopic frequency information spatially within the sample [18, 19]; Hadamard encoding to measure signal intensities at a small number of directly excited frequencies [20–22]; covariance spectroscopy, which enhances the resolution in the indirect dimensions through a statistical symmetrization with the directly observed dimension [23–26]; and the “minimal sampling” procedure, which involves calculating the possible correlations between the signals on the “first planes” of a multidimensional experiment and resolving any ambiguities by measuring a single additional sampling point [27, 28]. We touch on processing methods such as multidimensional decomposition [7, 29] and maximum entropy reconstruction [30] only to the limited extent that they have been applied to radial sampling experiments. 
Additionally, we do not discuss methods for reducing experiment time by optimizing the longitudinal relaxation rate to allow a much shorter interscan delay, which could be applicable to any type of sampling [31–34].
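The geometry of radial sampling described above is simple to state in code. In this minimal sketch (grid size, dwell time, and spoke angles are arbitrary illustrative values), a spoke at angle θ samples the two indirect evolution times jointly along a line through the origin, so a handful of spokes of n points each replaces an n × n Cartesian grid:

```python
import numpy as np

def radial_spoke(theta, n_points, dt):
    """Sampling points along a line through the time-domain origin:
    t1 = t*cos(theta), t2 = t*sin(theta)."""
    t = np.arange(n_points) * dt
    return np.column_stack([t * np.cos(theta), t * np.sin(theta)])

n, dt = 64, 0.25e-3                      # points per dimension / dwell time (s)
angles = np.deg2rad([0, 30, 60, 90])     # four spokes, including the two axes
spokes = np.vstack([radial_spoke(a, n, dt) for a in angles])
print(f"Cartesian: {n * n} samples; {len(angles)} spokes: {len(spokes)} samples")
```

Each spoke yields, after Fourier transformation, a projection of the spectrum at that angle, which is the common mathematical foundation of the reduced-dimensionality, GFT, and projection-reconstruction methods surveyed in the review.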


Journal of the American Chemical Society | 2012

Rapid Protein Global Fold Determination Using Ultrasparse Sampling, High-Dynamic Range Artifact Suppression, and Time-Shared NOESY

Brian E. Coggins; Jonathan W. Werner-Allen; Anthony K. Yan; Pei Zhou

In structural studies of large proteins by NMR, global fold determination plays an increasingly important role in providing a first look at a target's topology and in reducing assignment ambiguity in NOESY spectra of fully protonated samples. In this work, we demonstrate the use of ultrasparse sampling, a new data processing algorithm, and a 4-D time-shared NOESY experiment (1) to collect all NOEs in ²H/¹³C/¹⁵N-labeled protein samples with selectively protonated amide and ILV methyl groups at high resolution in only four days, and (2) to calculate global folds from these data using fully automated resonance assignment. The new algorithm, SCRUB, incorporates the CLEAN method for iterative artifact removal but applies an additional level of iteration, permitting real signals to be distinguished from noise and allowing nearly all artifacts generated by real signals to be eliminated. In simulations with 1.2% of the data required by Nyquist sampling, SCRUB achieves a dynamic range over 10,000:1 (250× better artifact suppression than CLEAN) and completely quantitative reproduction of signal intensities, volumes, and line shapes. Applied to 4-D time-shared NOESY data, SCRUB processing dramatically reduces aliasing noise from strong diagonal signals, enabling the identification of weak NOE crosspeaks with intensities 100× lower than those of diagonal signals. Nearly all of the expected peaks for interproton distances under 5 Å were observed. The practical benefit of this method is demonstrated with structure calculations for 23 kDa and 29 kDa test proteins using the automated assignment protocol of CYANA, in which unassigned 4-D time-shared NOESY peak lists produce accurate and well-converged global fold ensembles, whereas 3-D peak lists either fail to converge or produce significantly less accurate folds. The approach presented here succeeds with an order of magnitude less sampling than required by alternative methods for processing sparse 4-D data.
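The CLEAN idea that SCRUB builds on can be sketched in one dimension (a minimal generic CLEAN loop, not the SCRUB algorithm itself; the sampling mask, on-grid frequencies, loop gain, and stopping threshold are invented for illustration): sparse sampling makes each true peak drag a known point-spread function of aliasing artifacts, and the loop repeatedly locates the strongest residual peak, subtracts a scaled, shifted copy of the point-spread function, and records the removed component.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
mask = np.zeros(N)                                       # sampling schedule:
mask[rng.choice(N, size=N // 8, replace=False)] = 1.0    # keep 1 point in 8

t = np.arange(N)                                          # two on-grid signals
fid = 1.0 * np.exp(2j * np.pi * 28 * t / N) + 0.3 * np.exp(2j * np.pi * 95 * t / N)
dirty = np.fft.fft(fid * mask)            # artifact-laden "dirty" spectrum
psf = np.fft.fft(mask)                    # point-spread function of the sampling

components = np.zeros(N, dtype=complex)   # recovered peak list
residual = dirty.copy()
gain = 0.5                                # loop gain
for _ in range(400):
    k = int(np.argmax(np.abs(residual)))  # strongest remaining peak
    amp = gain * residual[k]
    components[k] += amp
    residual -= (amp / psf[0].real) * np.roll(psf, k)  # remove its artifact pattern
    if np.abs(residual).max() < 0.05 * np.abs(dirty).max():
        break
# the dominant recovered component sits at the true peak position (bin 28)
```

SCRUB's additional level of iteration, per the abstract, is what separates genuine signals from noise before artifact subtraction; this sketch shows only the inner subtract-the-strongest-peak loop common to all CLEAN variants.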


Journal of Magnetic Resonance | 2010

Fast Acquisition of High Resolution 4-D Amide-Amide NOESY with Diagonal Suppression, Sparse Sampling and FFT-CLEAN

Jon W. Werner-Allen; Brian E. Coggins; Pei Zhou

Amide-amide NOESY provides important distance constraints for calculating global folds of large proteins, especially integral membrane proteins with beta-barrel folds. Here, we describe a diagonal-suppressed 4-D NH-NH TROSY-NOESY-TROSY (ds-TNT) experiment for NMR studies of large proteins. The ds-TNT experiment employs a spin-state-selective transfer scheme that suppresses diagonal signals while providing TROSY optimization in all four dimensions. Active suppression of the strong diagonal peaks greatly reduces the dynamic range of observable signals, making this experiment particularly suitable for use with sparse sampling techniques. To demonstrate the utility of this method, we collected a high resolution 4-D ds-TNT spectrum of a 23 kDa protein using randomized concentric shell sampling (RCSS), and we used FFT-CLEAN processing for further reduction of aliasing artifacts, the first application of these techniques to a NOESY experiment. A comparison of peak parameters in the high resolution 4-D dataset with those from a conventionally sampled 3-D control spectrum shows an accurate reproduction of NOE crosspeaks in addition to a significant reduction in resonance overlap, which largely eliminates assignment ambiguity. Likewise, a comparison of 4-D peak intensities and volumes before and after application of the CLEAN procedure demonstrates that the reduction of aliasing artifacts by CLEAN does not systematically distort NMR signals.


Analytical Chemistry | 2016

3D TOCSY-HSQC NMR for Metabolic Flux Analysis Using Non-Uniform Sampling

Patrick N. Reardon; Carrie L. Marean-Reardon; Melanie A. Bukovec; Brian E. Coggins; Nancy G. Isern

¹³C-Metabolic Flux Analysis (¹³C-MFA) is rapidly being recognized as the authoritative method for determining fluxes through metabolic networks. Site-specific ¹³C enrichment information obtained using NMR spectroscopy is a valuable input for ¹³C-MFA experiments. Chemical shift overlaps in the 1-D or 2-D NMR experiments typically used for ¹³C-MFA frequently hinder assignment and quantitation of site-specific ¹³C enrichment. Here we propose the use of a 3-D TOCSY-HSQC experiment for ¹³C-MFA. We employ Non-Uniform Sampling (NUS) to reduce the acquisition time of the experiment to a few hours, making it practical for use in ¹³C-MFA experiments. Our data show that the NUS experiment is linear and quantitative. Identification of metabolites in complex mixtures, such as a biomass hydrolysate, is simplified by virtue of the ¹³C chemical shift obtained in the experiment. In addition, the experiment reports ¹³C-labeling information that reveals the position-specific labeling of subsets of isotopomers. The information provided by this technique will enable more accurate estimation of metabolic fluxes in large metabolic networks.


Nature Communications | 2016

Unbiased measurements of reconstruction fidelity of sparsely sampled magnetic resonance spectra.

Qinglin Wu; Brian E. Coggins; Pei Zhou

The application of sparse-sampling techniques to NMR data acquisition would benefit from reliable quality measurements for reconstructed spectra. We introduce a pair of noise-normalized measurements for differentiating inadequate modelling from overfitting. While the two measures can be used jointly for methods that do not enforce exact agreement between the back-calculated time domain and the original sparse data, the cross-validation measure is applicable to all reconstruction algorithms. We show that the fidelity of reconstruction is sensitive to changes in these measurements, and that model overfitting results in an elevated cross-validation measure and reduced spectral quality.


Archive | 2017

Chapter 6: Backprojection and Related Methods

Brian E. Coggins; Pei Zhou

The sampling of the NMR time domain along radial spokes allows one to obtain projections of an NMR spectrum at various angles. These projections encode information about the positions, intensities, and lineshapes of the peaks in the spectrum, and this information can be recovered using suitable reconstruction methods. Here, we describe the technique of radial data collection and outline an intuitive framework for approaching reconstruction. We then survey the theory of tomographic reconstruction, and use that theory to analyze reconstruction methods and the information available from radial sampling. Finally, we survey past applications of projection-reconstruction methodology in NMR, including its use with 3-D and 4-D NMR data such as NOESY.


Archive | 2017

Chapter 7: CLEAN

Brian E. Coggins; Pei Zhou

CLEAN is an algorithm for the suppression of artifacts from nonuniform sampling, originally developed in the field of radioastronomy in the 1960s and 1970s. Recognizing similarities between the problem of NMR NUS and astronomical data collection, several NMR groups have applied versions of CLEAN to NMR data. Here, we recount the historical background of CLEAN in astronomy, explain how the algorithm works as applied to NMR, examine the method's theoretical underpinnings, and present examples of its use with NMR experiments. CLEAN shows a number of similarities to methods being used in the compressed sensing community, and recent theoretical results suggest that CLEAN should succeed under the same or similar conditions as those in which convex ℓ1-norm minimization succeeds.

Collaboration

Top co-authors of Brian E. Coggins:

Pei Zhou (Nanjing University of Aeronautics and Astronautics)

Ling Jiang (Chinese Academy of Sciences)

Carrie L. Marean-Reardon (Environmental Molecular Sciences Laboratory)

Doug Kojetin (North Carolina State University)