Publication


Featured research published by Michael D. Linderman.


Nature Biotechnology | 2011

Extracting a Cellular Hierarchy from High-dimensional Cytometry Data with SPADE

Peng Qiu; Erin F. Simonds; Sean C. Bendall; Kenneth D. Gibbs; Robert V. Bruggner; Michael D. Linderman; Karen Sachs; Garry P. Nolan; Sylvia K. Plevritis

The ability to analyze multiple single-cell parameters is critical for understanding cellular heterogeneity. Despite recent advances in measurement technology, methods for analyzing high-dimensional single-cell data are often subjective, labor intensive and require prior knowledge of the biological system. To objectively uncover cellular heterogeneity from single-cell measurements, we present a versatile computational approach, spanning-tree progression analysis of density-normalized events (SPADE). We applied SPADE to flow cytometry data of mouse bone marrow and to mass cytometry data of human bone marrow. In both cases, SPADE organized cells in a hierarchy of related phenotypes that partially recapitulated well-described patterns of hematopoiesis. We demonstrate that SPADE is robust to measurement noise and to the choice of cellular markers. SPADE facilitates the analysis of cellular heterogeneity, the identification of cell types and comparison of functional markers in response to perturbations.
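SPADE's final step arranges cluster medians on a minimum spanning tree so that phenotypically similar clusters become neighbors. A minimal sketch of that step only, using Prim's algorithm (the cluster medians below are invented, and the published tool also performs density-dependent downsampling and clustering first):

```python
def mst_edges(points):
    """Prim's algorithm on squared Euclidean distance; returns (i, j) edges."""
    dist2 = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    in_tree = {0}
    edges = []
    while len(in_tree) < len(points):
        best = None
        for i in in_tree:
            for j in range(len(points)):
                if j not in in_tree:
                    d = dist2(points[i], points[j])
                    if best is None or d < best[0]:
                        best = (d, i, j)
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges

# Four invented cluster medians in a two-marker space.
medians = [(0.0, 0.0), (1.0, 0.1), (2.1, 0.0), (0.1, 3.0)]
tree = mst_edges(medians)  # 3 edges, each linking a cluster to a near phenotype
```

On these invented points the tree links the three clusters along the first marker axis and attaches the outlying fourth cluster to the root, which is the "hierarchy of related phenotypes" idea in miniature.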


Nature Reviews Genetics | 2010

Computational solutions to large-scale data management and analysis

Eric E. Schadt; Michael D. Linderman; Jon Sorenson; Lawrence Lee; Garry P. Nolan

Today we can generate hundreds of gigabases of DNA and RNA sequencing data in a week for less than US$5,000. The astonishing rate of data generation by these low-cost, high-throughput technologies in genomics is being matched by that of other technologies, such as real-time imaging and mass spectrometry-based flow cytometry. Success in the life sciences will depend on our ability to properly interpret the large-scale, high-dimensional data sets that are generated by these technologies, which in turn requires us to adopt advances in informatics. Here we discuss how we can master the different types of computational environments that exist — such as cloud and heterogeneous computing — to successfully tackle our big data problems.


Architectural Support for Programming Languages and Operating Systems | 2008

Merge: a programming model for heterogeneous multi-core systems

Michael D. Linderman; Jamison D. Collins; Hong Wang; Teresa H. Meng

In this paper we propose the Merge framework, a general purpose programming model for heterogeneous multi-core systems. The Merge framework replaces current ad hoc approaches to parallel programming on heterogeneous platforms with a rigorous, library-based methodology that can automatically distribute computation across heterogeneous cores to achieve increased energy and performance efficiency. The Merge framework provides (1) a predicate dispatch-based library system for managing and invoking function variants for multiple architectures; (2) a high-level, library-oriented parallel language based on map-reduce; and (3) a compiler and runtime which implement the map-reduce language pattern by dynamically selecting the best available function implementations for a given input and machine configuration. Using a generic sequencer architecture interface for heterogeneous accelerators, the Merge framework can integrate function variants for specialized accelerators, offering the potential for to-the-metal performance for a wide range of heterogeneous architectures, all transparent to the user. The Merge framework has been prototyped on a heterogeneous platform consisting of an Intel Core 2 Duo CPU and an 8-core 32-thread Intel Graphics and Media Accelerator X3000, and a homogeneous 32-way Unisys SMP system with Intel Xeon processors. We implemented a set of benchmarks using the Merge framework and enhanced the library with X3000 specific implementations, achieving speedups of 3.6x–8.5x using the X3000 and 5.2x–22x using the 32-way system relative to the straight C reference implementation on a single IA32 core.
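The predicate-dispatch idea behind Merge can be sketched as a registry of function variants, each guarded by a predicate over the machine configuration; the runtime calls the first variant whose predicate holds. This is an illustrative sketch, not the actual Merge API: the registry, the `variant` decorator, and the configuration keys are all invented.

```python
# Invented variant registry; not the actual Merge API.
variants = []

def variant(pred):
    """Register a function variant guarded by a machine-config predicate."""
    def register(fn):
        variants.append((pred, fn))
        return fn
    return register

@variant(lambda cfg: cfg.get("accelerator") == "x3000")
def dot_accel(xs, ys, cfg):
    # Stand-in for an accelerator-specific implementation.
    return ("x3000", sum(x * y for x, y in zip(xs, ys)))

@variant(lambda cfg: True)  # generic fallback, always applicable
def dot_generic(xs, ys, cfg):
    return ("generic", sum(x * y for x, y in zip(xs, ys)))

def dispatch(xs, ys, cfg):
    """Call the first registered variant whose predicate holds for cfg."""
    for pred, fn in variants:
        if pred(cfg):
            return fn(xs, ys, cfg)
```

Registering the specialized variant before the generic one makes the fallback transparent to callers: the same `dispatch` call runs on any machine configuration.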


IEEE Transactions on Biomedical Engineering | 2007

HermesB: A Continuous Neural Recording System for Freely Behaving Primates

Gopal Santhanam; Michael D. Linderman; Vikash Gilja; Afsheen Afshar; Stephen I. Ryu; Teresa H. Meng; Krishna V. Shenoy

Chronically implanted electrode arrays have enabled a broad range of advances in basic electrophysiology and neural prosthetics. Those successes motivate new experiments, particularly, the development of prototype implantable prosthetic processors for continuous use in freely behaving subjects, both monkeys and humans. However, traditional experimental techniques require the subject to be restrained, limiting both the types and duration of experiments. In this paper, we present a dual-channel, battery-powered neural recording system with an integrated three-axis accelerometer for use with chronically implanted electrode arrays in freely behaving primates. The recording system, called HermesB, is self-contained, autonomous, programmable, and capable of recording broadband neural (sampled at 30 kS/s) and acceleration data to a removable compact flash card for up to 48 h. We have collected long-duration data sets with HermesB from an adult macaque monkey which provide insight into time scales and free behaviors inaccessible under traditional experiments. Variations in action potential shape and root-mean-square (RMS) noise are observed across a range of time scales. The peak-to-peak voltage of action potentials varied by up to 30% over a 24-h period including step changes in waveform amplitude (up to 25%) coincident with high acceleration movements of the head. These initial results suggest that spike-sorting algorithms can no longer assume stable neural signals and will need to transition to adaptive signal processing methodologies to maximize performance. During physically active periods (defined by head-mounted accelerometer), significantly reduced 5-25-Hz local field potential (LFP) power and increased firing rate variability were observed. Using a threshold fit to LFP power, 93% of 403 5-min recording blocks were correctly classified as active or inactive, potentially providing an efficient tool for identifying different behavioral contexts in prosthetic applications. These results demonstrate the utility of the HermesB system and motivate using this type of system to advance neural prosthetics and electrophysiological experiments.
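The HermesB analysis classifies 5-min recording blocks as active or inactive by thresholding 5-25 Hz LFP power. A toy sketch of that idea, with synthetic signals, a naive DFT, and an invented threshold rather than the paper's fitted one:

```python
import math

def band_power(x, fs, lo, hi):
    """Fraction of total signal power between lo and hi Hz (naive DFT)."""
    n = len(x)
    total = band = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = re * re + im * im
        total += p
        if lo <= f <= hi:
            band += p
    return band / total if total else 0.0

fs = 100  # Hz; a toy rate, far below HermesB's 30 kS/s broadband sampling
inactive_lfp = [math.sin(2 * math.pi * 10 * i / fs) for i in range(100)]  # strong 5-25 Hz power
active_lfp = [math.sin(2 * math.pi * 40 * i / fs) for i in range(100)]    # power outside the band

def is_active(x, thresh=0.5):
    # Active periods show reduced 5-25 Hz LFP power; threshold value invented.
    return band_power(x, fs, 5, 25) < thresh
```

The paper fits its threshold to the data (93% of blocks classified correctly); here the 0.5 cutoff is arbitrary and the two synthetic signals are deliberately easy to separate.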


IEEE Signal Processing Magazine | 2008

Signal Processing Challenges for Neural Prostheses

Michael D. Linderman; Gopal Santhanam; Caleb Kemere; Vikash Gilja; Stephen O'Driscoll; Byron M. Yu; Afsheen Afshar; Stephen I. Ryu; Krishna V. Shenoy; Teresa H. Meng

Cortically controlled prostheses are able to translate neural activity from the cerebral cortex into control signals for guiding computer cursors or prosthetic limbs. While both noninvasive and invasive electrode techniques can be used to measure neural activity, the latter promises considerably higher levels of performance and therefore functionality to patients. The process of translating analog voltages recorded at the electrode tip into control signals for the prosthesis requires sophisticated signal acquisition and processing techniques. In this article we briefly review the current state-of-the-art in invasive, electrode-based neural prosthetic systems, with particular attention to the advanced signal processing algorithms that enable that performance. Improving prosthetic performance is only part of the challenge, however. A clinically viable prosthetic system will need to be more robust and autonomous and, unlike existing approaches that depend on multiple computers and specialized recording units, must be implemented in a compact, implantable prosthetic processor (IPP). In this article we summarize recent results which indicate that state-of-the-art prosthetic systems can be implemented in an IPP using current semiconductor technology, and the challenges that face signal processing engineers in improving prosthetic performance, autonomy and robustness within the restrictive constraints of the IPP.


Nature Reviews Genetics | 2011

Cloud and heterogeneous computing solutions exist today for the emerging big data problems in biology

Eric E. Schadt; Michael D. Linderman; Jon Sorenson; Lawrence Lee; Garry P. Nolan



Symposium on Code Generation and Optimization | 2010

Towards program optimization through automated analysis of numerical precision

Michael D. Linderman; Matthew Ho; David L. Dill; Teresa H. Meng; Garry P. Nolan

Reducing the arithmetic precision of a computation has real performance implications, including increased speed, decreased power consumption, and a smaller memory footprint. For some architectures, e.g., GPUs, there can be such a large performance difference that using reduced precision is effectively a requirement. The trade-off is that the accuracy of the computation will be compromised. In this paper we describe a proof assistant and associated static analysis techniques for efficiently bounding numerical and precision-related errors. The programmer/compiler can use these bounds to numerically verify and optimize an application for different input and machine configurations. We present several case study applications that demonstrate the effectiveness of these techniques and the performance benefits that can be achieved with rigorous precision analysis.
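A standard way to bound precision-related error, in the spirit of such analysis, is the forward-error bound for recursively summed dot products: the rounding error is at most gamma_n * sum(|x_i * y_i|), where gamma_n = n*u/(1 - n*u) and u is the unit roundoff of the working precision. The sketch below applies that textbook bound with invented inputs; it is not the paper's proof-assistant machinery.

```python
def dot_error_bound(xs, ys, u):
    """Worst-case |computed - exact| for a recursively summed dot product
    in arithmetic with unit roundoff u: gamma_n * sum(|x_i * y_i|)."""
    n = len(xs)
    gamma = n * u / (1 - n * u)
    return gamma * sum(abs(x * y) for x, y in zip(xs, ys))

# Invented inputs: each product x_i * y_i is about 0.1, so the exact sum is ~0.8.
xs = [0.1 * i for i in range(1, 9)]
ys = [1.0 / i for i in range(1, 9)]
exact = sum(x * y for x, y in zip(xs, ys))  # float64 reference

u32 = 2.0 ** -24  # single-precision (binary32) unit roundoff
bound32 = dot_error_bound(xs, ys, u32)  # worst-case float32 rounding error
```

A compiler armed with such a bound can decide whether the float32 variant of a kernel stays within an application's accuracy budget before selecting it for speed.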


European Journal of Human Genetics | 2016

Motivations, concerns and preferences of personal genome sequencing research participants: Baseline findings from the HealthSeq project

Saskia C. Sanderson; Michael D. Linderman; Sabrina A. Suckiel; George A. Diaz; Randi E. Zinberg; Kadija Ferryman; Melissa P. Wasserstein; Andrew Kasarskis; Eric E. Schadt

Whole exome/genome sequencing (WES/WGS) is increasingly offered to ostensibly healthy individuals. Understanding the motivations and concerns of research participants seeking out personal WGS and their preferences regarding return-of-results and data sharing will help optimize protocols for WES/WGS. Baseline interviews including both qualitative and quantitative components were conducted with research participants (n=35) in the HealthSeq project, a longitudinal cohort study of individuals receiving personal WGS results. Data sharing preferences were recorded during informed consent. In the qualitative interview component, the dominant motivations that emerged were obtaining personal disease risk information, satisfying curiosity, contributing to research, self-exploration and interest in ancestry, and the dominant concern was the potential psychological impact of the results. In the quantitative component, 57% endorsed concerns about privacy. Most wanted to receive all personal WGS results (94%) and their raw data (89%); a third (37%) consented to having their data shared to the Database of Genotypes and Phenotypes (dbGaP). Early adopters of personal WGS in the HealthSeq project express a variety of health- and non-health-related motivations. Almost all want all available findings, while also expressing concerns about the psychological impact and privacy of their results.


Hepatology | 2014

Costs of telaprevir-based triple therapy for hepatitis C: $189,000 per sustained virological response

Kian Bichoupan; Valérie Martel-Laferrière; David H. Sachs; Michel Ng; Emily Schonfeld; Alexis Pappas; James F. Crismale; Alicia Stivala; Viktoriya Khaitova; Donald Gardenier; Michael D. Linderman; Ponni V. Perumalswami; Thomas D. Schiano; Joseph A. Odin; Lawrence Liu; Alan J. Moskowitz; Douglas T. Dieterich; Andrea D. Branch

In registration trials, triple therapy with telaprevir (TVR), pegylated interferon (Peg‐IFN), and ribavirin (RBV) achieved sustained virological response (SVR) rates between 64% and 75%, but the clinical effectiveness and economic burdens of this treatment in real‐world practice remain to be determined. Records of 147 patients who initiated TVR‐based triple therapy at the Mount Sinai Medical Center (May‐December 2011) were reviewed. Direct medical costs for pretreatment, on‐treatment, and posttreatment care were calculated using data from Medicare reimbursement databases, RED Book, and the Healthcare Cost and Utilization Project database. Costs are presented in 2012 U.S. dollars. SVR (undetectable hepatitis C virus [HCV] RNA 24 weeks after the end of treatment) was determined on an intention‐to‐treat basis. Cost per SVR was calculated by dividing the median cost by the SVR rate. Median age of the 147 patients was 56 years (interquartile range [IQR] = 51‐61), 68% were male, 19% were black, 11% had human immunodeficiency virus/HCV coinfection, 36% had advanced fibrosis/cirrhosis (FIB‐4 scores ≥3.25), and 44% achieved an SVR. The total cost of care was
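The study's headline figure is simple arithmetic: cost per SVR is the median cost divided by the SVR rate. A one-line sketch using the reported 44% SVR rate and a median per-patient cost invented for illustration (the true figure is in the paper):

```python
def cost_per_svr(median_cost, svr_rate):
    """Average cost to produce one sustained virological response."""
    return median_cost / svr_rate

# 44% SVR rate is from the paper; the median cost here is invented
# for illustration (2012 US dollars).
example = cost_per_svr(83_160.0, 0.44)
```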


Nature Protocols | 2016

Visualization and cellular hierarchy inference of single-cell data using SPADE

Benedict Anchang; Tom D P Hart; Sean C. Bendall; Peng Qiu; Zach Bjornson; Michael D. Linderman; Garry P. Nolan; Sylvia K. Plevritis


Collaboration


Dive into Michael D. Linderman's collaboration.

Top Co-Authors

Eric E. Schadt

Icahn School of Medicine at Mount Sinai

George A. Diaz

Icahn School of Medicine at Mount Sinai

Andrew Kasarskis

Icahn School of Medicine at Mount Sinai

Randi E. Zinberg

Icahn School of Medicine at Mount Sinai

Sabrina A. Suckiel

Icahn School of Medicine at Mount Sinai

Hardik Shah

Icahn School of Medicine at Mount Sinai

Milind Mahajan

Icahn School of Medicine at Mount Sinai