Appreciating the variety of goals in computational neuroscience
Konrad P. Kording, Gunnar Blohm, Paul Schrater, Kendrick Kay
ORIGINAL ARTICLE | Commentary
Konrad P. Kording PhD | Gunnar Blohm PhD | Paul Schrater PhD | Kendrick Kay PhD

Departments of Bioengineering and Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
ViKinG Lab, Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
Departments of Psychology and Computer Science, University of Minnesota, Minneapolis, MN, USA
Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN, USA
Correspondence
Kendrick Kay PhD, Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN, USA. Email: [email protected]
Funding information
G.B. was supported by the Natural Sciences and Engineering Research Council (NSERC, Canada).
Within computational neuroscience, informal interactions with modelers often reveal wildly divergent goals. In this opinion piece, we explicitly address the diversity of goals that motivate and ultimately influence modeling efforts. We argue that a wide range of goals can be meaningfully taken to be of highest importance. A simple informal survey conducted on the Internet confirmed the diversity of goals in the community. However, different priorities or preferences of individual researchers can lead to divergent model evaluation criteria. We propose that many disagreements in evaluating the merit of computational research stem from differences in goals and not from the mechanics of constructing, describing, and validating models. We suggest that authors state explicitly their goals when proposing models so that others can judge the quality of the research with respect to its stated goals.
KEYWORDS: computational neuroscience, metascience, publication criteria

arXiv [q-bio.NC]

1 | INTRODUCTION

1.1 | Diversity of modeling goals
Models are essential for progress in neuroscience and exist in a variety of forms and flavors. Here we will follow the definition of 'computational model' in the Merriam-Webster Dictionary (Webster, 2016): "a system of postulates, data, and inferences presented as a mathematical description of an entity or state of affairs". Within neuroscience, such models come in many flavors. Models can summarize existing data. They can jointly describe brain and behavior data. They can express relations that can be tested in experiments and predict successful clinical treatments. Formulating a concrete model can help uncover hidden assumptions and help assess the suitability of hypothesized relationships. Formulating models can provide mathematical insights, and simulating them can lead to systems that solve real-world problems. Accordingly, there is a large community of neuroscientists who construct and use models.

As computational neuroscientists, we became interested in the goals of modeling when we noticed stark differences across models in different papers and fields of neuroscience. For example, when Kendrick studies nonlinearities in the human brain, he cares most about macroscopic measurements and model interpretability (Kay and Yeatman, 2017). When Paul studies representations decoded from the brain, he cares most about interpretability and representations (Carlson et al., 2003). When Gunnar writes a paper about linear-systems explanations of eye movements, he cares most about behavior, mathematical simplicity, and the real-world relevance of the task (Orban de Xivry et al., 2013). One might suspect that these differences in goals stem merely from differences in modeling methodology. However, when Konrad writes a paper using the same methodology as Gunnar (i.e. linear systems), he cares most about the model being the optimal solution to a computational problem (Kording et al., 2007).
Thus, there appears to be a diversity of modeling goals with real impact on the way we organize our research.

Despite this diversity, outsiders often perceive computational neuroscience as being homogeneous. What unites computational neuroscience is a commitment to an approach that combines mathematical reasoning with computer simulations. However, this approach is applied across a broad array of topics, and within each topic, researchers strive to achieve distinct goals. At the Society for Neuroscience meeting, computational approaches are often corralled, out of ease, into a single section despite differing goals. When experimentalists add computation to papers or grants, they often do so without choosing a goal first. Young scientists declare they want to do computation without first committing to a goal. And lastly, when neuroscientists (admittedly insiders) write books, they tend to merge computational approaches, despite vastly differing goals. This creates the false illusion of homogeneity of a field, whereas we believe that computational neuroscience is, rather, the accumulation of the computational branches of many different fields.

The diversity of goals within computational neuroscience is not without consequence: we propose that many disagreements in evaluating the merit of computational research stem from differences in goals and not from the mechanics of constructing, describing, and validating models. Goals affect the way science is reviewed. They form key criteria (Blohm et al., 2018; Schrater et al., 2019) that inform both reviewers' and editors' decisions. Goals are implicitly invoked when consuming and evaluating research, and therefore impact an article's likelihood of success. Across several disciplines, both meta-analyses and editorial comments (Bornmann et al., 2010; Byrne, 2000; Pierson, 2004; Thrower, 2012) provide evidence that editors' and reviewers' preferred goals are criteria to which authors must conform for success.
In our considerable experience as editors, we find that disagreement regarding what constitutes a worthwhile goal for modeling is one of the main drivers of paper rejections. We believe the problem is that editors' and reviewers' preferred goals are implicit. By making modeling goals explicit, authors, reviewers, and editors can start to find common ground for the merits of a paper.

Examples of these preferences are not hard to find. For example, "Research doesn't add value to the journal. Sometimes the findings of a research aren't appealing to the journals, especially if those findings do not really contribute to any advancement in their field. If this is the case, it's likely that the paper would be rejected." (Mukherjee, 2018), and "It's boring. ... The question behind the work is not of interest in the field. The work is not of interest to the readers of the specific journals." (Thrower, 2012). Editors reject theory papers if they do not directly explain empirical data. For example, at PLoS Computational Biology, the criterion "Significant biological insight and general interest to life scientists" often excludes theory papers that do not prioritize biological realism.

1.2 | A short list of modeling goals
To the extent that the goals we choose for modeling matter, an important open question is: what exactly are these goals? Examining a broad range of papers in computational neuroscience, we gleaned a variety of different modeling goals, typically revealed in the Introduction, Methods, and Discussion sections (Blohm et al., 2018). While it is impossible to produce an exhaustive list, we compile here a list of the most salient and common ones (Schrater et al., 2019).

• Useful (can be applied to other domains). Some models of the nervous system are also good at solving real-world problems. Models can be evaluated in terms of how good they are at solving such problems. For example, a model of the visual system might be able to solve challenging problems in computer vision (Fukushima, 1980; Serre and Riesenhuber, 2004). This assumes that the modeled system in the brain is solving a problem that also appears in technical systems.

• Normative (best possible given certain assumptions). Some models provide the optimal solutions to problems that exist in the real world (Chater and Oaksford, 2000, 1999; Knill and Richards, 1996; Todorov and Jordan, 2002). Models can be evaluated in terms of how well they represent an optimal solution to a meaningful problem. Normative models are thus often used in domains where behavior or neural properties are expected to be optimal or near optimal (Acuña and Schrater, 2010; Dayan and Abbott, 2001; Körding, 2007). For example, a model may ask how well people minimize energy when walking (Selinger et al., 2015). Thus, we might ask whether a model supplies the optimal solution to a computational problem faced by the brain and how similar behavior is to these predictions. A normative model can also ask whether the assumed principles underlying the optimality criterion are biologically accurate. This assumes that we can understand the goals of a system and that we gain insight if a system appears to optimize what it is expected to optimize (Barlow, 1961; Mayr, 2004).

• Clinically relevant (helps healthcare). Some models produce insights that are relevant for developing or evaluating clinical interventions. Models may be evaluated in terms of how well they generalize to medical problems. For example, simulating individual differences with respect to electrical stimulation enables us to place electrodes to maximize stimulation outcome (Bai et al., 2019). Given the potential to reduce human suffering, there is no doubt that clinical relevance is a meaningful goal. In order for modeling insights to transfer to medicine, a model must be sufficiently similar to the real system.

• Inspire experiments (untested assumptions, new hypotheses). Some models change the way we think about a problem and thereby raise interesting new hypotheses via abductive inference (Josephson and Josephson, 1996; Lombrozo, 2012). Models can be evaluated in terms of the richness of potential experiments they inspire. For example, a model may suggest that spike timing may affect plasticity and therefore lead to a broad set of tests (Dan and Poo, 2004; Gerstner et al., 1996). A formal model might also uncover hidden assumptions that a field makes when considering a proposed mechanism. To inspire experiments, a set of potential models must be small enough such that experimental tests are meaningful.

• Microscopic realism (looks like the brain). Some models describe the microscopic properties of the brain, such as synaptic, pharmacological, and cellular-level properties. Models can then be evaluated in terms of how well they quantitatively describe those properties. For example, models may predict changes in synapses over time (Zador et al., 1990). Commitment to microscopic realism assumes that microscopic properties can be sufficiently decoupled from macroscopic properties such that a reductionist understanding of neural properties is possible (Gillett, 2016).

• Macroscopic realism (looks like the brain at the population level). Some models describe properties of brain areas and networks. Models can then be evaluated in terms of how well they quantitatively describe those properties. For example, models may predict the population activity of brain areas as measured by EEG (Al-Nashash et al., 2004). Commitment to macroscopic realism assumes that macroscopic properties can be sufficiently decoupled from finer-scale, distributed properties (Bennett and Hacker, 2003).

• Behavioral realism (looks like real behavior). Some models can faithfully describe and explain behavioral phenomena. Models can then be evaluated in terms of how well they quantitatively account for behavior. For example, models can predict the way we move our arm as a function of the distance we need to travel (Harris and Wolpert, 1998). An approach based on behavioral realism supposes that behavior can be understood without a deeper understanding of the brain and that compact models of behavior are possible (Green et al., 2010; Krakauer et al., 2017; Tao et al., 2018).

• Representational (codes like the brain). Some models aim to use representations of information that are similar to representations in the brain. Models can then be evaluated in terms of how well they quantitatively describe representations. For example, models predict that neurons in motor cortex have cosine tuning (Olshausen and Field, 2004). Such modeling assumes that representations can be compactly understood and are the basis of the phenomena we want to understand (Churchland and Sejnowski, 1990).

• Compact (few short equations). Some models can be succinctly expressed in mathematical language and/or computer code (Burgess, 1998; Li and Vitányi, 2019). Models can then be evaluated in terms of how well they trade off complexity against the quality of description of the phenomena. For example, Fitts's law can compactly describe the balance between speed and precision during hand movements (Fitts and Radford, 1966). This approach assumes that the phenomenon of interest has a low-complexity description (Burgess, 1998).

• Analytically tractable (exact solutions exist). Some models are understandable through mathematical equations as opposed to numerical simulations. Models can then be evaluated in terms of how well they can be analytically solved. For example, models may allow the combination of cues with neurally realistic properties while being analytically solvable (Ma et al., 2006). For scientists with mathematical training, an analytic approach provides a more generalizable understanding compared to numerical models. An implicit assumption is that the system of interest is sufficiently similar to the analytically tractable model such that analyzing one provides insights into the other (Parker, 2012).

• Interpretable (relates directly to something the brain does). Some models are easily interpreted with respect to how they work (e.g. what outcomes they predict) and/or how the brain might implement the computations. Models can then be evaluated in terms of how well humans can interpret their meaning. For example, units in a simulated system may have receptive fields similar to those of real neurons (Blohm et al., 2009; Olshausen and Field, 2004). For many scientists, prioritizing the interpretability of a model makes the model more relatable to their way of thinking about the brain.

• Beauty (elegant). Some models may be symmetrical, balanced, or resonate well with the way we think. Models can then be evaluated in terms of how well they resonate intuitively with their target audience. For example, the same model can be presented in the languages of physics, math, and biology, and can be distinctly useful for these different communities (Chandrasekhar, 2013; Russell, 2019).
FIGURE 1: Survey demographics. Shown are counts of survey participants binned by career stage (left) and counts of papers rated by survey participants binned by journal (right).

2 | METHODS
To assess modeling goals in the computational neuroscience community, we constructed an online survey using Google Forms. Each of the authors then contacted colleagues via personal e-mails, mailing lists, and Twitter. We collected survey responses for approximately a month, with a survey deadline of August 31, 2018. We told participants that we would be releasing the responses from this survey as a public resource (with the exception of e-mail addresses, which would be kept private). People contacted were free to decline participation in the survey. Only adult scientists were allowed to participate. The research was approved by the UPenn IRB (Protocol number 830156).

The survey asked each participant to choose up to 3 papers they authored or co-authored and to rate each paper on the 12 modeling goals described above. Participants were instructed to submit papers representative of distinct types of their research. The full set of survey questions and survey results are available at https://osf.io/pqe f/ . We note that the survey results may be useful for answering a variety of additional questions not addressed in this paper. For example, one might be interested to compare one's prediction of the modeling goals held by a given researcher to the actual goals held by that researcher. Or, as another example, one might be interested to see where one's own goals fall relative to the group norms.

For the analyses performed in this paper, ratings were aggregated across papers (251 papers from 113 distinct authors; 22 female, 91 male). For Figure 2B, a small amount of Gaussian noise (mean 0, standard deviation 0.5) was added to the data prior to computing summary statistics in order to avoid discretization effects.

3 | RESULTS

3.1 | A simple survey provides evidence that computational researchers have diverse goals
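The aggregation and analysis steps described in Methods (jitter to avoid discretization, median/IQR/bootstrapped-CI summaries as in Figure 2B, and low-rank reconstruction as in Figure 3) can be sketched in a few lines. This is a minimal illustration only: the ratings below are synthetic stand-ins for the real survey data (available at the OSF link above), and plain SVD-based PCA is used here as a stand-in for the probabilistic PCA used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the survey data: 251 papers rated on 12 goals,
# integer ratings from 1 (completely irrelevant) to 5 (absolutely essential).
n_papers, n_goals = 251, 12
ratings = rng.integers(1, 6, size=(n_papers, n_goals)).astype(float)

# Jitter: Gaussian noise (mean 0, SD 0.5) added before computing summary
# statistics, to avoid discretization effects (as described in Methods).
jittered = ratings + rng.normal(0.0, 0.5, size=ratings.shape)

# Per-goal summary statistics as in Figure 2B: median, interquartile range,
# and a bootstrapped 68% confidence interval on the median.
median = np.median(jittered, axis=0)
iqr = np.percentile(jittered, 75, axis=0) - np.percentile(jittered, 25, axis=0)

boot = np.empty((1000, n_goals))
for b in range(1000):
    idx = rng.integers(0, n_papers, size=n_papers)   # resample papers with replacement
    boot[b] = np.median(jittered[idx], axis=0)
ci_lo, ci_hi = np.percentile(boot, [16, 84], axis=0)  # central 68% interval

# Low-rank structure: mean-center each goal, keep the top 3 components, and
# reconstruct (plain SVD-based PCA, as a stand-in for probabilistic PCA).
centered = jittered - jittered.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
recon = U[:, :3] * s[:3] @ Vt[:3]                     # rank-3 reconstruction

var_explained = (s[:3] ** 2).sum() / (s ** 2).sum()
corr = np.corrcoef(recon, rowvar=False)               # cf. Figure 3B
```

With the real ratings in place of the synthetic ones, the variance explained by the top three components corresponds to the 51% figure reported in the Results.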
To empirically assess modeling goals, we conducted an informal online survey in which we asked authors to rate their own modeling work with respect to the goals listed above. Participants rated up to 3 of their authored papers on each of the 12 goals, indicating the importance of each goal. We obtained results from 113 distinct authors who rated a total of 251 papers (Figure 1). On average, interpretability was rated as the most important modeling goal, whereas clinical relevance was rated as least important (Figure 2B, black bars). In addition, we found large variance of ratings across papers (Figure 2B, gray error bars), suggesting that there is, indeed, wide diversity of modeling goals in the neuroscience community. Not surprisingly, some goals are highly correlated (Figure 2C), such as compactness and tractability.

FIGURE 2: Modeling goals in the computational neuroscience community. We conducted an informal survey on the Internet to assess the modeling goals that different researchers held for specific papers that they authored. (A) Histogram of results. For each of 12 modeling goals (dimensions), we plot a histogram of the reported ratings. Values range from 1 (completely irrelevant) to 5 (absolutely essential). Dimensions are ordered according to three conceptual groups that seem intuitively reasonable. (B) Summary statistics. For each dimension, we plot the median (black bars), interquartile range (gray error bars), and bootstrapped 68% confidence interval on the median (red error bars). (C) Pairwise correlation (Pearson's r) of dimensions across all papers.

To better understand the underlying structure of the ratings, we subtracted the average rating of each modeling goal and identified a lower-dimensional space using probabilistic principal components analysis (Figure 3A). We reconstructed the data in this lower-dimensional space and recomputed the pairwise correlation structure (Figure 3B). Finally, we re-ordered the modeling goals, revealing three groups or clusters (Figure 3C). One simple interpretation of these clusters is that people independently sample mixed contributions from three clusters that somewhat overlap with the intuitive grouping of Scientific impact, Biological realism, and Style (as shown in Figure 2A). These differences might represent different subfields of neuroscience having different modeling goals and/or different types of models naturally fulfilling certain goals more readily than others. However, this lower-dimensional reconstruction accounted for only 51% of the total variance in goal ratings. The remaining 49% of the variance reflects diversity of individual preferences in goals. Thus, just like the contrast between Gunnar's and Konrad's linear-systems models, the variability in modeling goals between researchers appears to be high.

FIGURE 3: Groups of modeling goals. We performed probabilistic principal component analysis to characterize the latent covariance structure of the survey ratings. (A) Component loadings. The top three components (composing an orthonormal basis) are shown; the order of dimensions is the same as in Figure 2. (B) Data reconstruction. We reconstruct the original data using the three identified components and recompute the pairwise correlation as in Figure 2C. The correlation structure is similar to that of the original data, but stronger due to the dimensionality reduction. (C) Grouping of dimensions. We re-plot the results of panel B, re-ordering dimensions to highlight the block structure. Thick gray squares indicate three groups of dimensions that appear to be present in the data.

To further illustrate diversity, we visualize the location of each paper in the space spanned by the first three principal components (Figure 4). This shows again that, despite dimensionality reduction accounting for roughly half of the variance, there are no discernible clusters or groups in this subspace. In other words, there is a continuum of preferences with respect to modeling goals that spans the space. Highlighting the authors' own papers in Figure 4 allows several additional observations: (1) Most of the authors' papers' goals are fairly polarized, i.e. they reside at the edges of the space spanned. (2) Konrad's and Gunnar's papers lie on diametrically opposite sides of this space despite some overlap in modeling techniques used. The same is true for Paul's and Kendrick's papers. (3) Somewhat surprisingly, Paul's and Gunnar's goals (and Kendrick's and Konrad's goals) align fairly well despite apparently different technical approaches used. Note that this diversity in goals and approaches does not mean that we do not appreciate each other's research efforts; quite the opposite! We view this diversity as a strength (see Discussion).

3.2 | Limitations of the survey
Of course, this survey is not intended as a formal scientific instrument to identify modeling goals in the field, but the results do suggest that it would be worthwhile to invest in more systematic meta-scientific analyses. It would be valuable to identify modeling goals for a much larger sample of papers and to calibrate the survey and analysis methods. Our sample is small and not randomly drawn from the population. The survey questions themselves may not have been understood in exactly the same way by different participants, which may have increased the apparent diversity. But the survey is sufficient to make our point: diversity in modeling goals is real and high!

4 | DISCUSSION

4.1 | Goals matter to how modeling is done
The choice of goals matters in just about every imaginable way for modeling in neuroscience. Modeling goals affect the overall utility and interpretation of a model by influencing the evaluation metrics, the choice of model type, and the way we replace models with newer, better models. For example, if microscopic realism is required, this severely constrains the types of modeling techniques that can be used. Some research fields have implicit agreements on a set of desirable modeling criteria. For instance, historically, the eye-movements field has used linear-systems theory to model saccades: the field has been most concerned with behavioral realism, usefulness, inspiration of experiments, and interpretability, but has not placed much value on microscopic realism. If the field had been concerned with microscopic realism, the linear-systems approach would have likely been inappropriate and a different toolset, such as spiking neural-network models, would have been used instead. Thus, getting clarity on why we model may be just as important as understanding the mechanics of how to model.

FIGURE 4: Diversity of modeling goals within and across authors. Here we plot all recorded paper ratings in the space defined by the first three principal components (see Figure 3A). (A) Space defined by components 1 and 2. Each gray dot indicates a single paper, and colored markers indicate papers from the authors. Insets show raw ratings for example papers; in these plots, thin gray lines indicate the mean of each dimension across all papers. Some authors are consistent across papers they write (Schrater, Kay), whereas other authors show more diversity in their papers (Kording, Blohm). Furthermore, there is high diversity in modeling goals across the four authors of this paper. (B) Space defined by components 1 and 3. Same format as panel A.

4.2 | But what are the goals of computational neuroscience?
In shaping our list of goals of computational neuroscience, we drew from our own experience: this is thus an opinion piece in which we try to provide a meaningful perspective to the field. This perspective draws on 7+ years of teaching students the broad range of modeling techniques useful in movement science and of extensively discussing modeling objectives at the Summer School in Computational Sensory-Motor Neuroscience (CoSMo, ) and beyond. That being said, while goals in neuroscience are diverse as we have shown, we cannot claim that our list is exhaustive or that other neuroscientists would not structure it differently. Even if our views are wrong, we hope that our humble paper will jump-start a drive towards clarity in modeling goals in neuroscience.

4.3 | Authors should state goals; readers should evaluate based on those commitments
Why does it matter that different researchers have different modeling goals? We wish to raise awareness because diversity often leads to significant tension and misunderstandings between researchers. For example, a reviewer might have a certain set of goals associated with a particular modeling approach and might evaluate a given paper outside of the authors' intentions. This out-of-scope evaluation is one of the most frequent and frustrating reasons for miscommunication in computational neuroscience. Its consequence is often paper rejection. We argue that this behavior is detrimental for science.

What are some practical steps we can take? As action items, we suggest that (1) authors should explicitly spell out their goals, ideally in multiple instances across the Introduction, Methods, and Discussion sections (Blohm et al., 2018; Schrater et al., 2019), and (2) readers should deliberately evaluate a given paper within those constraints. For example, an author might include a section that begins, "In this paper, we sought to satisfy the criteria of X, Y, and Z. It is not our goal to develop a model that exhibits properties A, B, and C for the following reasons..." We believe that explicit characterization of goals leads to more constructive interactions and therefore promotes scientific progress, discovery, and societal impact.

This paper is itself a (rudimentary) modeling effort: we sought to characterize the intentions or goals held by computational neuroscientists in conducting their research and the relationship of these goals to one another. Thus, in a sense, this is behavioral research on how brains (scientists) function. With respect to the 12 modeling criteria, we sought to characterize the phenomenon at the behavioral level, with some interest in identifying the underlying representations (latent structure). Interpretability of our results was paramount, and we sought to gather observations that may inspire further meta-scientific efforts. On the other hand, our investigation is obviously not intended to understand the macroscopic or microscopic neural mechanisms underlying how scientists conduct their research. Likewise, several modeling criteria are clearly not applicable to our study (e.g. normative, clinical).

4.4 | Diversity of modeling goals is a strength
Modeling aims to generate insight into a phenomenon of interest. Since models in computational neuroscience all refer to brains, one could argue that they are guaranteed to produce synergistic answers and that the distinctions highlighted in this paper are not that important. We think this stance is debatable. Models positioned at different levels of biological realism (microscopic, macroscopic, behavioral, representational) are not guaranteed to inform each other, as distinct phenomena may emerge at different levels. Models following different styles (compact, tractable, interpretable, beauty) are optimizing different criteria, so a model that fulfills one criterion may be suboptimal under other criteria. Models aimed towards a specific type of scientific impact (useful, normative, clinical, inspire) often fail to deliver other types of impact. Thus, it is our contention that modeling goals are truly diverse and that models in computational neuroscience are not aimed towards a single coherent class of answers.

Should the community attempt to converge on a single set of standards? While this might seem appealing to some, the actual diversity of modeling goals makes it difficult to find a shared community preference function. From Arrow's Impossibility Theorem (Fishburn, 1970) to Harsanyi's Aggregation Theorem (Fleurbaey, 2009; Harsanyi, 1979; Weymark, 1993), arriving at a consistent group preference function is known to be hard, to require special conditions, and, even when possible, to require trading off individual preferences to allow a non-unique group preference as a point on a Pareto front. Editor and reviewer preferences for goals are unlikely to represent such a hypothetical aggregate preference, thus leading to idiosyncratic critiques (Garfunkel et al., 1990). Rather than encourage conformity to particular preferences, appreciating goal diversity allows the field to explore possibilities without getting stuck in local minima. For example, 15 years ago, some researchers were claiming that working on machine learning was career suicide; researchers who nonetheless persevered are now superstars in the field.

We advocate embracing diversity in modeling goals as a strength for the field. As in other aspects of life, humanity works best by respecting and not excluding diversity. In terms of scientific progress, diversity balances biases, provides alternative views, encourages discussion, invigorates problem-solving, and facilitates specialization of individual researchers, each of whom can make distinct meaningful contributions to the field. Perhaps one day the neuroscience community will come to consensus on a single framework for describing and understanding the brain. But until that day comes, embracing diversity and explicitly recognizing each other's modeling goals will be critical for achieving progress.

ACKNOWLEDGEMENTS
We would like to thank the reviewers and Editor J. Pillow for constructive feedback.
CONFLICT OF INTEREST
The authors declare no conflicts of interest.
REFERENCES
Acuña, D. E. and Schrater, P. (2010) Structure learning in human sequential decision-making. PLoS Computational Biology, e1001003.

Al-Nashash, H., Al-Assaf, Y., Paul, J. and Thakor, N. (2004) EEG signal modeling using adaptive Markov process amplitude. IEEE Transactions on Bio-medical Engineering, 744–751.

Bai, S., Martin, D., Guo, T., Dokos, S. and Loo, C. (2019) Computational comparison of conventional and novel electroconvulsive therapy electrode placements for the treatment of depression. European Psychiatry: The Journal of the Association of European Psychiatrists, 71–78.

Barlow, H. B. (1961) Possible principles underlying the transformation of sensory messages. Sensory Communication, 217–234.

Bennett, M. R. and Hacker, P. M. S. (2003) Philosophical Foundations of Neuroscience, vol. 79. Blackwell, Oxford.

Blohm, G., Keith, G. P. and Crawford, J. D. (2009) Decoding the cortical transformations for visually guided reaching in 3D space. Cerebral Cortex, 1372–1393.

Blohm, G., Kording, K. P. and Schrater, P. R. (2018) A how-to-model guide for Neuroscience.

Bornmann, L., Weymuth, C. and Daniel, H.-D. (2010) A content analysis of referees' comments: How do comments on manuscripts rejected by a high-impact journal and later published in either a low- or high-impact journal differ? Scientometrics, 493–506.

Burgess, J. (1998) Occam's razor and scientific method. In The Philosophy of Mathematics Today, 195–214. Clarendon Press, Oxford.

Byrne, D. W. (2000) Common reasons for rejecting manuscripts at medical journals: A survey of editors and peer reviewers. Science Editor.

Carlson, T. A., Schrater, P. and He, S. (2003) Patterns of activity in the categorical representations of objects. Journal of Cognitive Neuroscience, 704–717.

Chandrasekhar, S. (2013) Truth and Beauty: Aesthetics and Motivations in Science. University of Chicago Press.

Chater, N. and Oaksford, M. (1999) Ten years of the rational analysis of cognition. Trends in Cognitive Sciences, 57–65.

Chater, N. and Oaksford, M. (2000) The rational analysis of mind and behavior. Synthese, 93–131.

Churchland, P. S. and Sejnowski, T. J. (1990) Neural representation and neural computation. Philosophical Perspectives, 343–382.

Dan, Y. and Poo, M.-M. (2004) Spike timing-dependent plasticity of neural circuits. Neuron, 23–30.

Dayan, P. and Abbott, L. F. (2001) Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. MIT Press.

Fishburn, P. C. (1970) Arrow's impossibility theorem: Concise proof and infinite voters. Journal of Economic Theory, 103–106.

Fitts, P. M. and Radford, B. K. (1966) Information capacity of discrete motor responses under different cognitive sets. Journal of Experimental Psychology, 475–482.

Fleurbaey, M. (2009) Two variants of Harsanyi's aggregation theorem. Economics Letters, 300–302.

Fukushima, K. (1980) Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 193–202.

Garfunkel, J. M., Ulshen, M. H., Hamrick, H. J. and Lawson, E. E. (1990) Problems identified by secondary review of accepted manuscripts. JAMA, 1369–1371.

Gerstner, W., Kempter, R., van Hemmen, J. L. and Wagner, H. (1996) A neuronal learning rule for sub-millisecond temporal coding. Nature, 76–81.

Gillett, C. (2016) Reduction and Emergence in Science and Philosophy. Cambridge University Press.

Green, C. S., Benson, C., Kersten, D. and Schrater, P. (2010) Alterations in choice behavior by manipulations of world model. Proceedings of the National Academy of Sciences of the United States of America, 16401–16406.

Harris, C. M. and Wolpert, D. M. (1998) Signal-dependent noise determines motor planning. Nature, 780–784.

Harsanyi, J. C. (1979) Bayesian decision theory, rule utilitarianism, and Arrow's impossibility theorem. Theory and Decision, 289–317.

Josephson, J. R. and Josephson, S. G. (1996) Abductive Inference: Computation, Philosophy, Technology. Cambridge University Press.

Kay, K. N. and Yeatman, J. D. (2017) Bottom-up and top-down computations in word- and face-selective cortex. eLife.

Knill, D. C. and Richards, W. (1996) Perception as Bayesian Inference. Cambridge University Press.

Körding, K. (2007) Decision theory: What "should" the nervous system do?
Science , , 606–610.Kording, K. P., Tenenbaum, J. B. and Shadmehr, R. (2007) The dynamics of memory as a consequence of optimal adaptation to achanging body. Nature Neuroscience , , 779–786.Krakauer, J. W., Ghazanfar, A. A., Gomez-Marin, A., MacIver, M. A. and Poeppel, D. (2017) Neuroscience Needs Behavior: Cor-recting a Reductionist Bias. Neuron , , 480–490.Li, M. and Vitányi, P. (2019) An Introduction to Kolmogorov Complexity and Its Applications , vol. 3. Springer.Lombrozo, T. (2012) Explanation and abductive inference. In
Oxford Handbook of Thinking and Reasoning , 260–276. OxfordUniversity Press.Ma, W. J., Beck, J. M., Latham, P. E. and Pouget, A. (2006) Bayesian inference with probabilistic population codes.
Nature Neuro-science , , 1432–1438.Mayr, E. (2004) What Makes Biology Unique?: Considerations on the Autonomy of a Scientific Discipline . Cambridge UniversityPress.
ONRAD K ORDING ET AL . Mukherjee, D. (2018) 11 Reasons Why Research Papers Are Rejected. https://blog.typeset.io/11-reasons-why-research-papers-are-rejected-3e272b633186.Olshausen, B. A. and Field, D. J. (2004) Sparse coding of sensory inputs.
Current opinion in neurobiology , , 481–487.Orban de Xivry, J.-J., Coppe, S., Blohm, G. and Lefèvre, P. (2013) Kalman filtering naturally accounts for visually guided andpredictive smooth pursuit dynamics. The Journal of Neuroscience , , 17301–17313.Parker, W. S. (2012) Computer simulation and philosophy of science. Metascience , , 111–114.Pierson, D. J. (2004) The top 10 reasons why manuscripts are not accepted for publication. Respiratory Care , , 1246–1252.Russell, B. (2019) Mysticism and Logic and Other Essays . Good Press.Schrater, P., Kording, K. and Blohm, G. (2019) Modeling in Neuroscience as a Decision Process. .Selinger, J. C., O’Connor, S. M., Wong, J. D. and Donelan, J. M. (2015) Humans Can Continuously Optimize Energetic Cost duringWalking.
Current biology , , 2452–2456.Serre, T. and Riesenhuber, M. (2004) Realistic Modeling of Simple and Complex Cell Tuning in the HMAXModel, and Implica-tions for Invariant Object Recognition in Cortex. Tech. rep. , Massachusetts Institute of Technology.Tao, G., Khan, A. Z. and Blohm, G. (2018) Corrective response times in a coordinated eye-head-arm countermanding task.
Jour-nal of Neurophysiology , Nature Neuroscience , , 1226–1235.Webster (2016) The Merriam-Webster Dictionary: International Edition . Merriam-Webster.Weymark, J. A. (1993) Harsanyi’s social aggregation theorem and the weak Pareto principle.
Social choice and welfare , , 209–221.Zador, A., Koch, C. and Brown, T. H. (1990) Biophysical model of a Hebbian synapse. Proceedings of the National Academy ofSciences ,87