Publication


Featured research published by Greg Corrado.


Nature Neuroscience | 2010

Stimulus onset quenches neural variability: a widespread cortical phenomenon

Mark M. Churchland; Byron M. Yu; John P. Cunningham; Leo P. Sugrue; Marlene R. Cohen; Greg Corrado; William T. Newsome; Andy Clark; Paymon Hosseini; Benjamin B. Scott; David C. Bradley; Matthew A. Smith; Adam Kohn; J. Anthony Movshon; Katherine M. Armstrong; Tirin Moore; Steve W. C. Chang; Lawrence H. Snyder; Stephen G. Lisberger; Nicholas J. Priebe; Ian M. Finn; David Ferster; Stephen I. Ryu; Gopal Santhanam; Maneesh Sahani; Krishna V. Shenoy

Neural responses are typically characterized by computing the mean firing rate, but response variability can exist across trials. Many studies have examined the effect of a stimulus on the mean response, but few have examined the effect on response variability. We measured neural variability in 13 extracellularly recorded datasets and one intracellularly recorded dataset from seven areas spanning the four cortical lobes in monkeys and cats. In every case, stimulus onset caused a decline in neural variability. This occurred even when the stimulus produced little change in mean firing rate. The variability decline was observed in membrane potential recordings, in the spiking of individual neurons and in correlated spiking variability measured with implanted 96-electrode arrays. The variability decline was observed for all stimuli tested, regardless of whether the animal was awake, behaving or anaesthetized. This widespread variability decline suggests a rather general property of cortex, that its state is stabilized by an input.
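Trial-to-trial variability of spike counts is conventionally quantified with the Fano factor (across-trial variance divided by mean), the measure central to this paper. Below is a minimal sketch on synthetic data, with over-dispersed pre-stimulus counts and Poisson post-stimulus counts; the generative model and all numbers are invented for illustration, not taken from the paper's recordings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bins = 200, 10

# Synthetic spike counts (trials x time bins). Pre-stimulus counts are
# over-dispersed (the underlying rate varies across trials); post-stimulus
# counts are Poisson with a fixed rate, so their variability is lower.
pre_rates = rng.gamma(shape=4.0, scale=1.5, size=n_trials)        # mean rate ~6
pre = rng.poisson(pre_rates[:, None], size=(n_trials, n_bins))
post = rng.poisson(6.0, size=(n_trials, n_bins))

def fano(counts):
    """Fano factor per time bin: across-trial variance / mean of spike counts."""
    return counts.var(axis=0, ddof=1) / counts.mean(axis=0)

print("pre-stimulus Fano factor :", fano(pre).mean().round(2))
print("post-stimulus Fano factor:", fano(post).mean().round(2))
```

A Fano factor near 1 indicates Poisson-like spiking; the decline from the over-dispersed pre-stimulus value toward 1 mirrors the stimulus-driven drop in variability the paper reports.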


Nature Reviews Neuroscience | 2005

Choosing the greater of two goods: neural currencies for valuation and decision making

Leo P. Sugrue; Greg Corrado; William T. Newsome

To make adaptive decisions, animals must evaluate the costs and benefits of available options. The nascent field of neuroeconomics has set itself the ambitious goal of understanding the brain mechanisms that are responsible for these evaluative processes. A series of recent neurophysiological studies in monkeys has begun to address this challenge using novel methods to manipulate and measure an animal's internal valuation of competing alternatives. By emphasizing the behavioural mechanisms and neural signals that mediate decision making under conditions of uncertainty, these studies might lay the foundation for an emerging neurobiology of choice behaviour.


Knowledge Discovery and Data Mining | 2016

Smart Reply: Automated Response Suggestion for Email

Anjuli Kannan; Karol Kurach; Sujith Ravi; Tobias Kaufmann; Andrew Tomkins; Balint Miklos; Greg Corrado; László Lukács; Marina Ganea; Peter Young; Vivek Ramavajjala

In this paper we propose and investigate a novel end-to-end method for automatically generating short email responses, called Smart Reply. It generates semantically diverse suggestions that can be used as complete email responses with just one tap on mobile. The system is currently used in Inbox by Gmail and is responsible for assisting with 10% of all mobile responses. It is designed to work at very high throughput and process hundreds of millions of messages daily. The system exploits state-of-the-art, large-scale deep learning. We describe the architecture of the system as well as the challenges that we faced while building it, like response diversity and scalability. We also introduce a new method for semantic clustering of user-generated content that requires only a modest amount of explicitly labeled data.
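One ingredient the abstract highlights is semantic clustering of candidate responses, used to keep the final suggestions diverse. A toy sketch of that selection step follows, under the assumption that candidates already carry model scores and cluster labels; the texts, scores, and clusters are invented stand-ins, while the paper itself scores candidates with a sequence-to-sequence model over a learned response set.

```python
# Keep only the highest-scoring candidate per semantic cluster, then rank,
# so no two suggestions are near-paraphrases of each other.
candidates = [
    ("Sounds good!", 0.90), ("Sounds great!", 0.85), ("Sounds good to me.", 0.80),
    ("No, sorry.", 0.60), ("I can't make it.", 0.55),
    ("Let me check and get back to you.", 0.50),
]
cluster_of = [0, 0, 0, 1, 1, 2]   # precomputed semantic cluster per candidate

def diverse_suggestions(cands, clusters, k=3):
    """Return up to k suggestions, at most one per semantic cluster."""
    best = {}
    for (text, score), c in zip(cands, clusters):
        if c not in best or score > best[c][1]:
            best[c] = (text, score)
    ranked = sorted(best.values(), key=lambda ts: -ts[1])
    return [text for text, _ in ranked[:k]]

print(diverse_suggestions(candidates, cluster_of))
```

Without the cluster constraint, the three "Sounds good" variants would crowd out the negative and deferral replies; with it, each suggestion covers a distinct intent.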


arXiv: Computers and Society | 2018

Scalable and accurate deep learning with electronic health records

Alvin Rajkomar; Eyal Oren; Kai Chen; Andrew M. Dai; Nissan Hajaj; Michaela Hardt; Peter J. Liu; Xiaobing Liu; Jake Marcus; Mimi Sun; Patrik Sundberg; Hector Yee; Kun Zhang; Yi Zhang; Gerardo Flores; Gavin E. Duggan; Jamie Irvine; Quoc V. Le; Kurt Litsch; Alexander Mossin; Justin Tansuwan; De Wang; James Wexler; Jimbo Wilson; Dana Ludwig; Samuel L. Volchenboum; Katherine Chou; Michael Pearson; Srinivasan Madabushi; Nigam H. Shah

Predictive modeling with electronic health record (EHR) data is anticipated to drive personalized medicine and improve healthcare quality. Constructing predictive statistical models typically requires extraction of curated predictor variables from normalized EHR data, a labor-intensive process that discards the vast majority of information in each patient's record. We propose a representation of patients' entire raw EHR records based on the Fast Healthcare Interoperability Resources (FHIR) format. We demonstrate that deep learning methods using this representation are capable of accurately predicting multiple medical events from multiple centers without site-specific data harmonization. We validated our approach using de-identified EHR data from two US academic medical centers with 216,221 adult patients hospitalized for at least 24 h. In the sequential format we propose, this volume of EHR data unrolled into a total of 46,864,534,945 data points, including clinical notes. Deep learning models achieved high accuracy for tasks such as predicting in-hospital mortality (area under the receiver operator curve [AUROC] across sites 0.93–0.94), 30-day unplanned readmission (AUROC 0.75–0.76), prolonged length of stay (AUROC 0.85–0.86), and all of a patient's final discharge diagnoses (frequency-weighted AUROC 0.90). These models outperformed traditional, clinically used predictive models in all cases. We believe that this approach can be used to create accurate and scalable predictions for a variety of clinical scenarios. In a case study of a particular prediction, we demonstrate that neural networks can be used to identify relevant information from the patient's chart.

Artificial intelligence: Algorithm predicts clinical outcomes for hospital inpatients

Artificial intelligence outperforms traditional statistical models at predicting a range of clinical outcomes from a patient's entire raw electronic health record (EHR). A team led by Alvin Rajkomar and Eyal Oren from Google in Mountain View, California, USA, developed a data processing pipeline for transforming EHR files into a standardized format. They then applied deep learning models to data from 216,221 adult patients hospitalized for at least 24 h each at two academic medical centers, and showed that their algorithm could accurately predict risk of mortality, hospital readmission, prolonged hospital stay and discharge diagnosis. In all cases, the method proved more accurate than previously published models. The authors provide a case study to serve as a proof-of-concept of how such an algorithm could be used in routine clinical practice in the future.
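The "sequential format" described above can be pictured as flattening each patient's raw record into a single time-ordered event stream. A minimal, hypothetical sketch follows; the event fields and feature names are invented for illustration, whereas the paper builds its sequences from FHIR resources.

```python
from datetime import datetime

# One patient's raw events, arriving in no particular order: vitals, lab
# values, and clinical-note tokens all become entries in the same stream.
record = [
    {"time": "2017-03-01T08:00", "feature": "heart_rate", "value": 88},
    {"time": "2017-03-01T07:30", "feature": "note_token", "value": "dyspnea"},
    {"time": "2017-03-01T09:15", "feature": "sodium_mmol_l", "value": 141},
]

def unroll(record):
    """Sort raw events chronologically into one (time, feature, value) sequence."""
    events = sorted(record, key=lambda e: datetime.fromisoformat(e["time"]))
    return [(e["time"], e["feature"], e["value"]) for e in events]

for event in unroll(record):
    print(event)
```

Counting every such tuple across every patient is what yields the tens of billions of data points the abstract mentions.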


Neuroeconomics: Decision Making and the Brain | 2009

The Trouble with Choice: Studying Decision Variables in the Brain

Greg Corrado; Leo P. Sugrue; Julian R. Brown; William T. Newsome

This chapter focuses on the challenge of studying the neurobiology of decision-making. Establishing causal links between neural responses and perceptual or cognitive phenomena is a fundamental challenge faced by researchers not only in neuroeconomics, but in all of cognitive neuroscience. Historically, support for links between anatomy and function has come from patients or experimental animals with lesions restricted to the anatomic area of interest. Indeed, lesion studies first implicated ventromedial prefrontal cortex in value-based decision-making by demonstrating that damage to this region impaired performance on reward-cued reversal learning tasks and other tasks in which the best choice on each trial had to be inferred from the outcomes of earlier choices. Demonstrating neural correlates of a decision variable is, in principle, straightforward; it is substantially more challenging to prove that the correlated neural activity plays a causal role in the brain's decision-making process in the manner suggested by the proposed decision variable.


International Symposium on Multimedia | 2009

Recursive Sparse, Spatiotemporal Coding

Thomas Dean; Richard Washington; Greg Corrado

We present a new approach to learning sparse, spatiotemporal codes in which the number of basis vectors, their orientations, velocities and the size of their receptive fields change over the duration of unsupervised training. The algorithm starts with a relatively small, initial basis with minimal temporal extent. This initial basis is obtained through conventional sparse coding techniques and is expanded over time by recursively constructing a new basis consisting of basis vectors with larger temporal extent that proportionally conserve regions of previously trained weights. These proportionally conserved weights are combined with the result of adjusting newly added weights to represent a greater range of primitive motion features. The size of the current basis is determined probabilistically by sampling from existing basis vectors according to their activation on the training set. The resulting algorithm produces bases consisting of filters that are bandpass, spatially oriented and temporally diverse in terms of their transformations and velocities. The basic methodology borrows inspiration from the layer-by-layer learning of multiple-layer restricted Boltzmann machines developed by Geoff Hinton and his students. Indeed, we can learn multiple-layer sparse codes by training a stack of denoising autoencoders, but we have had greater success using L1 regularized regression in a variation on Olshausen and Field’s original SPARSENET. To accelerate learning and focus attention, we apply a space-time interest-point operator that selects for periodic motion. This attentional mechanism enables us to efficiently compute and compactly represent a broad range of interesting motion. We demonstrate the utility of our approach by using it to recognize human activity in video. Our algorithm meets or exceeds the performance of state-of-the-art activity-recognition methods.
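The L1-regularized inference step the abstract mentions (in the SPARSENET tradition) can be sketched with plain iterative soft-thresholding (ISTA). The dictionary and signal below are random synthetic stand-ins, not learned spatiotemporal filters, and the regularization weight is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(64, 128))            # dictionary: 64-dim signals, 128 atoms
B /= np.linalg.norm(B, axis=0)            # unit-norm basis vectors
a_true = np.zeros(128)
a_true[[3, 40, 99]] = [1.0, -2.0, 1.5]    # a 3-sparse code
x = B @ a_true                            # observed signal

def ista(x, B, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min_a 0.5*||x - B a||^2 + lam*||a||_1."""
    L = np.linalg.norm(B, 2) ** 2         # Lipschitz constant of the gradient
    a = np.zeros(B.shape[1])
    for _ in range(n_iter):
        grad = B.T @ (B @ a - x)          # gradient of the quadratic term
        a = a - grad / L                  # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # shrinkage
    return a

a_hat = ista(x, B)
print("nonzero coefficients:", int(np.sum(np.abs(a_hat) > 1e-2)))
```

The recovered code is sparse and reconstructs the signal closely; in the paper's setting this inference runs over spatiotemporal video patches rather than random vectors.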


bioRxiv | 2017

Learning Fast And Slow: Deviations From The Matching Law Can Reflect An Optimal Strategy Under Uncertainty

Kiyohito Iigaya; Yashar Ahmadian; Leo Sugrue; Greg Corrado; Yonatan Loewenstein; William T. Newsome; Stefano Fusi

Behavior that deviates from our normative expectations often appears irrational. A classic example concerns how choice should be distributed among multiple alternatives. The so-called matching law predicts that the fraction of choices made to any option should match the fraction of total rewards earned from that option. This choice strategy can maximize reward under a stationary reward schedule. Empirically, however, behavior often deviates from this ideal. While such deviations have often been interpreted as 'noisy', suboptimal decision-making, here we instead suggest that they reflect a strategy that is adaptive in nonstationary and uncertain environments. We analyzed the results of a dynamic foraging task: animals exhibited significant deviations from matching, and they collected more rewards when those deviations were larger. We show that this behavior can be understood if one considers that the animals had incomplete information about the environment's dynamics. In particular, using computational models, we show that in such nonstationary environments, learning on both fast and slow timescales is beneficial. Learning on fast timescales lets an animal react to sudden changes in the environment, though this inevitably introduces large fluctuations (variance) in value estimates. Concurrently, learning on slow timescales reduces the amplitude of these fluctuations at the price of introducing a bias that causes systematic deviations. We confirm this prediction in the data: animals indeed solved the bias-variance tradeoff by combining learning on both fast and slow timescales. Our work suggests that multi-timescale learning could be a biologically plausible mechanism for optimizing decisions under uncertainty.
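The fast/slow tradeoff described above is easy to see in a toy simulation: two delta-rule estimates of reward rate with different learning rates, tracking a reward probability that switches abruptly. All parameters are illustrative, not fit to the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.r_[np.full(500, 0.2), np.full(500, 0.8)]   # reward prob. with a sudden switch
rewards = rng.random(1000) < p                     # Bernoulli reward sequence

def ewma(rewards, alpha):
    """Delta-rule (exponentially weighted) estimate of the reward rate."""
    v, out = 0.5, []
    for r in rewards:
        v += alpha * (r - v)
        out.append(v)
    return np.array(out)

fast = ewma(rewards, 0.3)    # fast timescale: tracks the switch, but noisy
slow = ewma(rewards, 0.01)   # slow timescale: smooth, but lags (biased)
combined = 0.5 * (fast + slow)   # illustrative equal weighting of timescales

print("fast var (stable period)    :", fast[200:500].var().round(4))
print("slow var (stable period)    :", slow[200:500].var().round(4))
print("combined var (stable period):", combined[200:500].var().round(4))
```

During the stable period the fast estimate fluctuates far more (variance), while right after the switch the slow estimate is still far from the new reward rate (bias); combining the timescales trades off the two, which is the mechanism the paper proposes.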


arXiv: Computation and Language | 2013

Efficient Estimation of Word Representations in Vector Space

Tomas Mikolov; Kai Chen; Greg Corrado; Jeffrey Dean


Neural Information Processing Systems | 2012

Large Scale Distributed Deep Networks

Jeffrey Dean; Greg Corrado; Rajat Monga; Kai Chen; Matthieu Devin; Mark Mao; Marc'Aurelio Ranzato; Andrew W. Senior; Paul A. Tucker; Ke Yang; Quoc V. Le; Andrew Y. Ng


International Conference on Machine Learning | 2012

Building high-level features using large scale unsupervised learning

Marc'Aurelio Ranzato; Rajat Monga; Matthieu Devin; Kai Chen; Greg Corrado; Jeffrey Dean; Quoc V. Le; Andrew Y. Ng

Collaboration


Dive into Greg Corrado's collaborations.

Top Co-Authors

William T. Newsome (Howard Hughes Medical Institute)

Leo P. Sugrue (University of California)