Justin Gilmer
Publication
Featured research published by Justin Gilmer.
Journal of Chemical Theory and Computation | 2017
Felix A. Faber; Luke Hutchison; Bing Huang; Justin Gilmer; Samuel S. Schoenholz; George E. Dahl; Oriol Vinyals; Steven Kearnes; Patrick F. Riley; O. Anatole von Lilienfeld
We investigate the impact of choosing regressors and molecular representations for the construction of fast machine learning (ML) models of 13 electronic ground-state properties of organic molecules. The performance of each regressor/representation/property combination is assessed using learning curves which report out-of-sample errors as a function of training set size with up to ∼118k distinct molecules. Molecular structures and properties at the hybrid density functional theory (DFT) level of theory come from the QM9 database [Ramakrishnan et al. Sci. Data 2014, 1, 140022] and include enthalpies and free energies of atomization, HOMO/LUMO energies and gap, dipole moment, polarizability, zero point vibrational energy, heat capacity, and the highest fundamental vibrational frequency. Various molecular representations have been studied (Coulomb matrix, bag of bonds, BAML, ECFP4, and molecular graphs (MG)), as well as newly developed distribution-based variants including histograms of distances (HD), angles (HDA/MARAD), and dihedrals (HDAD). Regressors include linear models (Bayesian ridge regression (BR) and linear regression with elastic net regularization (EN)), random forest (RF), kernel ridge regression (KRR), and two types of neural networks, graph convolutions (GC) and gated graph networks (GG). Out-of-sample errors depend strongly on the choice of representation, regressor, and molecular property. Electronic properties are typically best accounted for by MG and GC, while energetic properties are better described by HDAD and KRR. The specific combinations with the lowest out-of-sample errors in the ∼118k training set size limit are (free) energies and enthalpies of atomization (HDAD/KRR), HOMO/LUMO eigenvalue and gap (MG/GC), dipole moment (MG/GC), static polarizability (MG/GG), zero point vibrational energy (HDAD/KRR), heat capacity at room temperature (HDAD/KRR), and highest fundamental vibrational frequency (BAML/RF). We present numerical evidence that ML model predictions deviate from DFT (B3LYP) less than DFT (B3LYP) deviates from experiment for all properties. Furthermore, out-of-sample prediction errors with respect to the hybrid DFT reference are on par with, or close to, chemical accuracy. The results suggest that ML models could be more accurate than hybrid DFT if explicitly electron correlated quantum (or experimental) data were available.
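The abstract above evaluates each regressor/representation pair with learning curves of out-of-sample error versus training set size. As a rough illustration of that protocol only, the sketch below fits kernel ridge regression (one of the regressors named above) on placeholder feature vectors standing in for a molecular representation and reports test MAE at increasing training set sizes; the synthetic data, kernel choice, and hyperparameters are illustrative assumptions, not the settings used in the paper.

```python
# Minimal sketch of a learning curve for kernel ridge regression (KRR).
# Random vectors stand in for a real molecular representation (e.g. a Coulomb
# matrix or HDAD vector) and for a QM9 property label; kernel and
# regularization values are arbitrary placeholders.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n_molecules, n_features = 2000, 100                      # placeholder dataset size
X = rng.normal(size=(n_molecules, n_features))           # stand-in representations
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=n_molecules)  # stand-in property

X_train_full, X_test = X[:1500], X[1500:]
y_train_full, y_test = y[:1500], y[1500:]

for n_train in [100, 300, 1000, 1500]:                   # growing training set sizes
    model = KernelRidge(kernel="laplacian", alpha=1e-6, gamma=1e-2)
    model.fit(X_train_full[:n_train], y_train_full[:n_train])
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"n_train={n_train:5d}  out-of-sample MAE={mae:.3f}")
```

Plotting these MAE values against training set size gives the kind of learning curve used in the study to compare regressor/representation combinations.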
international conference on machine learning | 2017
Justin Gilmer; Samuel S. Schoenholz; Patrick F. Riley; Oriol Vinyals; George E. Dahl
arXiv: Computer Vision and Pattern Recognition | 2018
Tom B. Brown; Dandelion Mané; Aurko Roy; Martín Abadi; Justin Gilmer
international conference on learning representations | 2018
Justin Gilmer; Luke Metz; Fartash Faghri; Sam Schoenholz; Maithra Raghu; Martin Wattenberg
international conference on learning representations | 2017
Samuel S. Schoenholz; Justin Gilmer; Surya Ganguli; Jascha Sohl-Dickstein
neural information processing systems | 2017
Maithra Raghu; Justin Gilmer; Jason Yosinski; Jascha Sohl-Dickstein
arXiv: Chemical Physics | 2017
Felix A. Faber; Luke Hutchison; Bing Huang; Justin Gilmer; Samuel S. Schoenholz; George E. Dahl; Oriol Vinyals; Steven Kearnes; Patrick F. Riley; O. Anatole von Lilienfeld
Archive | 2017
Felix A. Faber; Luke Hutchison; Bing Huang; Justin Gilmer; Sam Schoenholz; George E. Dahl; Oriol Vinyals; Steven Kearnes; Patrick F. Riley; Anatole von Lilienfeld
arXiv: Learning | 2018
Peter Battaglia; Jessica B. Hamrick; Victor Bapst; Alvaro Sanchez-Gonzalez; Vinícius Flores Zambaldi; Mateusz Malinowski; Andrea Tacchetti; David Raposo; Adam Santoro; Ryan Faulkner; Caglar Gulcehre; Francis Song; Andrew J. Ballard; Justin Gilmer; George E. Dahl; Ashish Vaswani; Kelsey Allen; Charles Nash; Victoria Langston; Chris Dyer; Nicolas Heess; Daan Wierstra; Pushmeet Kohli; Matthew Botvinick; Oriol Vinyals; Yujia Li; Razvan Pascanu
Archive | 2018
Been Kim; Justin Gilmer; Martin Wattenberg; Fernanda Viégas