Publication


Featured research published by Bharath Ramsundar.


ACS Central Science | 2017

Low Data Drug Discovery with One-Shot Learning

Han Altae-Tran; Bharath Ramsundar; Aneesh S. Pappu; Vijay S. Pande

Recent advances in machine learning have made significant contributions to drug discovery. Deep neural networks in particular have been demonstrated to provide significant boosts in predictive power when inferring the properties and activities of small-molecule compounds (Ma, J. et al. J. Chem. Inf. Model. 2015, 55, 263–274). However, the applicability of these techniques has been limited by the requirement for large amounts of training data. In this work, we demonstrate how one-shot learning can be used to significantly lower the amounts of data required to make meaningful predictions in drug discovery applications. We introduce a new architecture, the iterative refinement long short-term memory, that, when combined with graph convolutional neural networks, significantly improves learning of meaningful distance metrics over small molecules. We open source all models introduced in this work as part of DeepChem, an open-source framework for deep learning in drug discovery (Ramsundar, B. deepchem.io. https://github.com/deepchem/deepchem, 2016).
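The core idea of the paper — predict a query molecule's activity from a handful of labeled examples via a learned distance metric — can be sketched with a matching-network-style attention step. The embeddings and labels below are hypothetical stand-ins for graph-convolution outputs, not the paper's actual model:

```python
import numpy as np

def cosine_similarity(query, support):
    """Cosine similarity between one query vector and each support row."""
    return (support @ query) / (
        np.linalg.norm(support, axis=1) * np.linalg.norm(query) + 1e-12)

def one_shot_predict(query, support_embeds, support_labels):
    """Attention-weighted label prediction over a small support set:
    softmax over similarities, then a weighted vote of support labels."""
    sims = cosine_similarity(query, support_embeds)
    weights = np.exp(sims) / np.exp(sims).sum()
    return float(weights @ support_labels)

# Hypothetical 4-molecule support set with binary activity labels.
support = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = np.array([1.0, 1.0, 0.0, 0.0])
print(one_shot_predict(np.array([0.95, 0.05]), support, labels))  # > 0.5: active
```

In the paper this prediction step is iterated, with the LSTM refining both query and support embeddings; the sketch shows only a single matching pass.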


ACS Central Science | 2017

Retrosynthetic Reaction Prediction Using Neural Sequence-to-Sequence Models

Bowen Liu; Bharath Ramsundar; Prasad Kawthekar; Jade Shi; Joseph Gomes; Quang Luu Nguyen; Stephen Ho; Jack Sloane; Paul A. Wender; Vijay S. Pande

We describe a fully data-driven model that learns to perform a retrosynthetic reaction prediction task, which is treated as a sequence-to-sequence mapping problem. The end-to-end trained model has an encoder–decoder architecture consisting of two recurrent neural networks, an architecture that has previously shown great success in solving other sequence-to-sequence prediction tasks such as machine translation. The model is trained on 50,000 experimental reaction examples from the United States patent literature, which span 10 broad reaction types that are commonly used by medicinal chemists. We find that our model performs comparably with a rule-based expert system baseline model, and also overcomes certain limitations associated with rule-based expert systems and with any machine learning approach that contains a rule-based expert system component. Our model provides an important first step toward solving the challenging problem of computational retrosynthetic analysis.
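Treating retrosynthesis as sequence-to-sequence mapping requires turning molecules (as SMILES strings) into token sequences first. A minimal regex tokenizer of the kind commonly used in this literature — not the authors' exact preprocessing code — looks like:

```python
import re

# Multi-character tokens (bracket atoms, two-letter elements, stereo marks)
# must be matched before the single-character fallback.
SMILES_TOKEN = re.compile(r"\[[^\]]+\]|Br|Cl|@@|.")

def tokenize(smiles):
    """Split a SMILES string into tokens for a seq2seq model."""
    tokens = SMILES_TOKEN.findall(smiles)
    # Round-trip check: tokenization must lose no characters.
    assert "".join(tokens) == smiles
    return tokens

print(tokenize("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin, 20 tokens
```

The encoder consumes such token sequences for the product; the decoder emits token sequences for the predicted reactants.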


Journal of Chemical Information and Modeling | 2017

Is Multitask Deep Learning Practical for Pharma?

Bharath Ramsundar; Bowen Liu; Zhenqin Wu; Andreas Verras; Matthew Tudor; Robert P. Sheridan; Vijay S. Pande

Multitask deep learning has emerged as a powerful tool for computational drug discovery. However, despite a number of preliminary studies, multitask deep networks have yet to be widely deployed in the pharmaceutical and biotech industries. This lack of acceptance stems from both software difficulties and lack of understanding of the robustness of multitask deep networks. Our work aims to resolve both of these barriers to adoption. We introduce a high-quality open-source implementation of multitask deep networks as part of the DeepChem open-source platform. Our implementation enables simple Python scripts to construct, fit, and evaluate sophisticated deep models. We use our implementation to analyze the performance of multitask deep networks and related deep models on four collections of pharmaceutical data (three of which have not previously been analyzed in the literature). We split these data sets into train/valid/test using time and neighbor splits to test multitask deep learning performance under challenging conditions. Our results demonstrate that multitask deep networks are surprisingly robust and can offer strong improvement over random forests. Our analysis and open-source implementation in DeepChem provide an argument that multitask deep networks are ready for widespread use in commercial drug discovery.
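The time split mentioned in the abstract is simple to state: sort compounds by date and train only on the oldest fraction, mimicking prospective deployment where a model trained on past assays must predict future ones. A sketch of the idea on hypothetical records (field names and dates are invented for illustration):

```python
from datetime import date

def time_split(records, frac_train=0.8):
    """Chronological train/test split: oldest compounds go to training,
    newest to evaluation. `records` is a list of (compound_id, date)."""
    ordered = sorted(records, key=lambda r: r[1])
    cut = int(len(ordered) * frac_train)
    return ordered[:cut], ordered[cut:]

records = [("mol_a", date(2012, 1, 5)), ("mol_b", date(2014, 6, 1)),
           ("mol_c", date(2010, 3, 9)), ("mol_d", date(2015, 2, 2)),
           ("mol_e", date(2013, 7, 7))]
train, test = time_split(records, frac_train=0.6)
print([r[0] for r in train])  # oldest 60%: ['mol_c', 'mol_a', 'mol_e']
```

This is deliberately harder than a random split: test compounds come from later chemistry than anything the model has seen, which is why the paper uses it to stress-test robustness.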


Journal of Chemical Information and Modeling | 2016

Computational Modeling of β-Secretase 1 (BACE-1) Inhibitors Using Ligand Based Approaches.

Govindan Subramanian; Bharath Ramsundar; Vijay S. Pande; Rajiah Aldrin Denny

The binding affinities (IC50) reported for diverse structural and chemical classes of human β-secretase 1 (BACE-1) inhibitors in the literature were modeled using multiple in silico ligand-based modeling approaches and statistical techniques. The descriptor space encompasses simple binary molecular fingerprints, one- and two-dimensional constitutional, physicochemical, and topological descriptors, and sophisticated three-dimensional molecular fields that require appropriate structural alignments of varied chemical scaffolds in one universal chemical space. The affinities were modeled using qualitative classification or quantitative regression schemes involving linear, nonlinear, and deep neural network (DNN) machine-learning methods used in the scientific literature for quantitative structure–activity relationships (QSAR). In a departure from tradition, ∼20% of the chemically diverse data set (205 compounds) was used to train the model, with the remaining ∼80% of the structural and chemical analogs used as part of the external validation (1273 compounds) and prospective test (69 compounds) sets, respectively, to ascertain the model performance. The machine-learning methods investigated herein performed well in both the qualitative classification (∼70% accuracy) and quantitative IC50 predictions (RMSE ∼ 1 log). The success of the 2D-descriptor-based machine learning approach when compared against the 3D-field-based technique pursued for hBACE-1 inhibitors provides a strong impetus for systematically applying such methods during lead identification and optimization efforts for other protein families as well.
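The simple binary fingerprints in the descriptor space are typically compared with Tanimoto (Jaccard) similarity, the standard 2D ligand-based measure. A minimal sketch on hypothetical fingerprints represented as sets of on-bit indices (the bit values are invented for illustration):

```python
def tanimoto(bits_a, bits_b):
    """Tanimoto similarity between two binary fingerprints given as
    sets of on-bit indices: |A ∩ B| / |A ∪ B|."""
    a, b = set(bits_a), set(bits_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Two hypothetical BACE-1 analogs sharing most structural features.
analog_1 = {3, 17, 42, 99, 150}
analog_2 = {3, 17, 42, 99, 201}
print(tanimoto(analog_1, analog_2))  # 4 shared / 6 total ≈ 0.667
```

Such similarities underpin both nearest-neighbor baselines and the neighbor-aware data splits used when evaluating QSAR models on structural analogs.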


PLOS Computational Biology | 2018

Solving the RNA design problem with reinforcement learning

Peter Eastman; Jade Shi; Bharath Ramsundar; Vijay S. Pande

We use reinforcement learning to train an agent for computational RNA design: given a target secondary structure, design a sequence that folds to that structure in silico. Our agent uses a novel graph convolutional architecture allowing a single model to be applied to arbitrary target structures of any length. After training it on randomly generated targets, we test it on the Eterna100 benchmark and find it outperforms all previous algorithms. Analysis of its solutions shows it has successfully learned some advanced strategies identified by players of the game Eterna, allowing it to solve some very difficult structures. On the other hand, it has failed to learn other strategies, possibly because they were not required for the targets in the training set. This suggests the possibility that future improvements to the training protocol may yield further gains in performance.
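The design target in this setting is a secondary structure written in dot-bracket notation. A necessary (though not sufficient) condition for a candidate sequence is that every paired position holds a Watson–Crick or wobble pair; real evaluation folds the sequence in silico, but the structural bookkeeping can be sketched as:

```python
def paired_positions(dot_bracket):
    """Map a dot-bracket secondary structure to base-pair index tuples."""
    stack, pairs = [], []
    for i, ch in enumerate(dot_bracket):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            pairs.append((stack.pop(), i))
    return pairs

# Watson-Crick pairs plus G-U wobble pairs.
VALID = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def is_consistent(sequence, dot_bracket):
    """Check that every paired position holds a valid base pair.
    Only a sanity check, not the in-silico folding the paper uses."""
    return all((sequence[i], sequence[j]) in VALID
               for i, j in paired_positions(dot_bracket))

print(is_consistent("GGGAAACCC", "(((...)))"))  # True: G-C pairs close the hairpin
```

The RL agent's job is much harder than this check suggests: the sequence must not only pair validly but fold preferentially to the target structure rather than to any competing one.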


arXiv: Machine Learning | 2015

Massively Multitask Networks for Drug Discovery

Bharath Ramsundar; Steven Kearnes; Patrick F. Riley; Dale R. Webster; David E. Konerding; Vijay S. Pande


Chemical Science | 2018

MoleculeNet: a benchmark for molecular machine learning

Zhenqin Wu; Bharath Ramsundar; Evan N. Feinberg; Joseph Gomes; Caleb Geniesse; Aneesh S. Pappu; Karl Leswing; Vijay S. Pande


arXiv: Learning | 2017

Atomic Convolutional Networks for Predicting Protein-Ligand Binding Affinity.

Joseph Gomes; Bharath Ramsundar; Evan N. Feinberg; Vijay S. Pande


International Conference on Machine Learning | 2014

Understanding Protein Dynamics with L1-Regularized Reversible Hidden Markov Models

Robert T. McGibbon; Bharath Ramsundar; Mohammad M. Sultan; Gert Kiss; Vijay S. Pande


International Conference on Artificial Intelligence and Statistics | 2013

Dynamic Scaled Sampling for Deterministic Constraints

Lei Li; Bharath Ramsundar; Stuart J. Russell
