Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Arno Klein is active.

Publication


Featured research published by Arno Klein.


NeuroImage | 2014

Large-scale evaluation of ANTs and FreeSurfer cortical thickness measurements.

Nicholas J. Tustison; Philip A. Cook; Arno Klein; Gang Song; Sandhitsu R. Das; Jeffrey T. Duda; Benjamin M. Kandel; Niels M. van Strien; James R. Stone; James C. Gee; Brian B. Avants

Many studies of the human brain have explored the relationship between cortical thickness and cognition, phenotype, or disease. Due to the subjectivity and time requirements in manual measurement of cortical thickness, scientists have relied on robust software tools for automation which facilitate the testing and refinement of neuroscientific hypotheses. The most widely used tool for cortical thickness studies is the publicly available, surface-based FreeSurfer package. Critical to the adoption of such tools is a demonstration of their reproducibility, validity, and the documentation of specific implementations that are robust across large, diverse imaging datasets. To this end, we have developed the automated, volume-based Advanced Normalization Tools (ANTs) cortical thickness pipeline comprising well-vetted components such as SyGN (multivariate template construction), SyN (image registration), N4 (bias correction), Atropos (n-tissue segmentation), and DiReCT (cortical thickness estimation). In this work, we have conducted the largest evaluation of automated cortical thickness measures in publicly available data, comparing FreeSurfer and ANTs measures computed on 1205 images from four open data sets (IXI, MMRR, NKI, and OASIS), with parcellation based on the recently proposed Desikan-Killiany-Tourville (DKT) cortical labeling protocol. We found good scan-rescan repeatability with both FreeSurfer and ANTs measures. Given that such assessments of precision do not necessarily reflect accuracy or an ability to make statistical inferences, we further tested the neurobiological validity of these approaches by evaluating thickness-based prediction of age and gender. ANTs is shown to have a higher predictive performance than FreeSurfer for both of these measures. 
In promotion of open science, we make all of our scripts, data, and results publicly available, which complements the use of open image data sets and the open-source availability of the proposed ANTs cortical thickness pipeline.
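The scan-rescan repeatability reported above is commonly quantified with an intraclass correlation coefficient. A minimal sketch on synthetic data, assuming an ICC(2,1) formulation (the `icc_2_1` helper and all numbers here are illustrative, not taken from the paper's own scripts):

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects, absolute-agreement ICC(2,1).

    ratings: (n_subjects, k_sessions) array of a regional thickness
    measure, one column per scan session.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)

    # Partition total sum of squares into subject, session, and error terms.
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((ratings - grand) ** 2) - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Synthetic scan-rescan thickness values (mm) for one cortical region:
rng = np.random.default_rng(0)
true_thickness = rng.normal(2.5, 0.3, size=50)      # per-subject "truth"
noise = rng.normal(0.0, 0.05, size=(50, 2))         # per-session noise
scans = true_thickness[:, None] + noise             # 50 subjects x 2 scans
print(f"ICC(2,1) = {icc_2_1(scans):.3f}")           # near 1 -> repeatable
```

With session noise much smaller than between-subject variation, the ICC lands close to 1, which is the pattern "good scan-rescan repeatability" refers to.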


Alzheimer's & Dementia | 2016

Crowdsourced estimation of cognitive decline and resilience in Alzheimer's disease

Genevera I. Allen; Nicola Amoroso; Catalina V Anghel; Venkat K. Balagurusamy; Christopher Bare; Derek Beaton; Roberto Bellotti; David A. Bennett; Kevin L. Boehme; Paul C. Boutros; Laura Caberlotto; Cristian Caloian; Frederick Campbell; Elias Chaibub Neto; Yu Chuan Chang; Beibei Chen; Chien Yu Chen; Ting Ying Chien; Timothy W.I. Clark; Sudeshna Das; Christos Davatzikos; Jieyao Deng; Donna N. Dillenberger; Richard Dobson; Qilin Dong; Jimit Doshi; Denise Duma; Rosangela Errico; Guray Erus; Evan Everett

Identifying accurate biomarkers of cognitive decline is essential for advancing early diagnosis and prevention therapies in Alzheimer's disease. The Alzheimer's disease DREAM Challenge was designed as a computational crowdsourced project to benchmark the current state‐of‐the‐art in predicting cognitive outcomes in Alzheimer's disease based on high-dimensional, publicly available genetic and structural imaging data. This meta‐analysis failed to identify a meaningful predictor developed from either data modality, suggesting that alternate approaches should be considered for prediction of cognitive performance.
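The benchmarking setup such a challenge implies, scoring each team's predictions against held-out cognitive outcomes, can be sketched roughly. The metric, variable names, and data below are illustrative assumptions; the actual Challenge used its own scoring rules and data:

```python
import numpy as np

def score_submission(predicted, observed):
    """Rank a submission by the Pearson correlation between predicted
    and observed cognitive scores, a typical concordance metric for
    crowdsourced prediction benchmarks."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return np.corrcoef(predicted, observed)[0, 1]

# Synthetic example: observed change in a cognitive score for 200
# participants, plus two hypothetical team submissions.
rng = np.random.default_rng(1)
observed = rng.normal(-2.0, 1.0, size=200)
informative = observed + rng.normal(0.0, 1.0, size=200)   # weak signal
uninformative = rng.normal(-2.0, 1.0, size=200)           # pure noise

print(f"informative submission:   r = {score_submission(informative, observed):+.2f}")
print(f"uninformative submission: r = {score_submission(uninformative, observed):+.2f}")
```

A "failed to identify a meaningful predictor" outcome corresponds to the best submissions scoring near the noise-only baseline rather than near the informative one.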


Frontiers in Neuroscience | 2013

Instrumentation bias in the use and evaluation of scientific software: recommendations for reproducible practices in the computational sciences

Nicholas J. Tustison; Hans J. Johnson; Torsten Rohlfing; Arno Klein; Satrajit S. Ghosh; Luis Ibanez; Brian B. Avants

"By honest I don't mean that you only tell what's true. But you make clear the entire situation. You make clear all the information that is required for somebody else who is intelligent to make up their mind." (Richard Feynman)

The neuroscience community significantly benefits from the proliferation of imaging-related analysis software packages. Established packages such as SPM (Ashburner, 2012), the FMRIB Software Library (FSL) (Jenkinson et al., 2012), FreeSurfer (Fischl, 2012), Slicer (Fedorov et al., 2012), and the AFNI toolkit (Cox, 2012) aid neuroimaging researchers around the world in performing complex analyses as part of ongoing neuroscience research. In conjunction with distributing robust software tools, neuroimaging packages also continue to incorporate algorithmic innovation for improvement in analysis tools.

As fellow scientists who actively participate in neuroscience research through our contributions to the Insight Toolkit (e.g., Johnson et al., 2007; Ibanez et al., 2009; Tustison and Avants, 2012) and other packages such as MindBoggle, Nipype (Gorgolewski et al., 2011), and the Advanced Normalization Tools (ANTs) (Avants et al., 2010, 2011), we notice an increasing number of publications that intend a fair comparison of algorithms, which, in principle, is a good thing. Our concern is the lack of detail with which these comparisons are often presented and the corresponding possibility of instrumentation bias (Sackett, 1979), where “defects in the calibration or maintenance of measurement instruments may lead to systematic deviations from true values” (considering software as a type of instrument requiring proper “calibration” and “maintenance” for accurate measurements). Based on our experience (including our own mistakes), we propose a preliminary set of guidelines that seek to minimize such bias, with the understanding that the discussion will require a more comprehensive response from the larger neuroscience community.
Our intent is to raise awareness in both authors and reviewers to issues that arise when comparing quantitative algorithms. Although herein we focus largely on image registration, these recommendations are relevant for other application areas in biologically-focused computational image analysis, and for reproducible computational science in general. This commentary complements recent papers that highlight statistical bias (Kriegeskorte et al., 2009; Vul and Pashler, 2012), bias induced by registration metrics (Tustison et al., 2012), and registration strategy (Yushkevich et al., 2010) and guideline papers for software development (Prlic and Procter, 2012).
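One concrete habit these guidelines point toward, treating software as an instrument whose "calibration" must be documented, is to log the exact computational environment alongside every result. A minimal sketch, where the package names and version strings are placeholders rather than any project's real manifest:

```python
import hashlib
import json
import platform
import sys

def environment_record(package_versions):
    """Capture platform, interpreter, and exact package versions, plus
    a short digest that makes silent drift between runs detectable."""
    record = {
        "platform": platform.platform(),
        "python": sys.version.split()[0],
        "packages": dict(sorted(package_versions.items())),
    }
    blob = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(blob).hexdigest()[:12]
    return record

# Hypothetical versions of the packages an analysis depends on:
record = environment_record({"ANTs": "2.5.0", "nipype": "1.8.6"})
print(json.dumps(record, indent=2))
```

Publishing such a record with each reported comparison is one inexpensive way to rule out the "maintenance" component of instrumentation bias when others try to reproduce a benchmark.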


PLOS ONE | 2014

Mapping sleeping bees within their nest: spatial and temporal analysis of worker honey bee sleep.

Barrett A. Klein; Martin Stiegler; Arno Klein; Jürgen Tautz

Patterns of behavior within societies have long been visualized and interpreted using maps. Mapping the occurrence of sleep across individuals within a society could offer clues as to functional aspects of sleep. In spite of this, a detailed spatial analysis of sleep has never been conducted on an invertebrate society. We introduce the concept of mapping sleep across an insect society, and provide an empirical example, mapping sleep patterns within colonies of European honey bees (Apis mellifera L.). Honey bees face variables such as temperature and position of resources within their colony's nest that may impact their sleep. We mapped sleep behavior and temperature of worker bees and produced maps of their nests' comb contents as the colony grew and contents changed. By following marked bees, we discovered that individuals slept in many locations, but bees of different worker castes slept in different areas of the nest relative to position of the brood and surrounding temperature. Older worker bees generally slept outside cells, closer to the perimeter of the nest, in colder regions, and away from uncapped brood. Younger worker bees generally slept inside cells and closer to the center of the nest, and spent more time asleep than awake when surrounded by uncapped brood. The average surface temperature of sleeping foragers was lower than the surface temperature of their surroundings, offering a possible indicator of sleep for this caste. We propose mechanisms that could generate caste-dependent sleep patterns and discuss functional significance of these patterns.


Homology, Homotopy and Applications | 2014

Describing high-order statistical dependence using "concurrence topology," with application to functional MRI brain data

Steven P. Ellis; Arno Klein


41st Annual Meeting of the Society for Neuroscience | 2011

Brain shape analysis for predicting treatment remission in major depressive disorder

Forrest Sheng Bao; Satrajit S. Ghosh; Joachim Giard; Ramin V. Parsey; Arno Klein


Research Ideas and Outcomes | 2017

Interactive online brain shape visualization

Anisha Keshavan; Arno Klein; Benjamin Cipollini


Research Ideas and Outcomes | 2016

Data-Visual Relationships to Subject Performance and Eye Movements

Arno Klein


Research Ideas and Outcomes | 2016

Concurrence Topology: Finding High-Order Dependence in Neuropsychiatric Data

Arno Klein; Steven P. Ellis


Research Ideas and Outcomes | 2016

Brain Graph Interface

Arno Klein

Collaboration


Dive into Arno Klein's collaborations.

Top Co-Authors

Satrajit S. Ghosh

Massachusetts Institute of Technology


Joachim Giard

Université catholique de Louvain


Brian B. Avants

University of Pennsylvania


Barrett A. Klein

University of Texas at Austin
