
Publications


Featured research published by Brian E. Smith.


Breast Cancer Research and Treatment | 2005

Effect of paroxetine hydrochloride (Paxil®) on fatigue and depression in breast cancer patients receiving chemotherapy

Joseph A. Roscoe; Gary R. Morrow; Jane T. Hickok; Karen M. Mustian; Jennifer J. Griggs; Sara Matteson; Peter Bushunow; Raman Qazi; Brian E. Smith

Background. Fatigue can significantly interfere with a cancer patient's ability to fulfill daily responsibilities and enjoy life. It commonly co-exists with depression in patients undergoing chemotherapy, suggesting that administration of an antidepressant that alleviates symptoms of depression could also reduce fatigue.

Methods. We report on a double-blind clinical trial of 94 female breast cancer patients receiving at least four cycles of chemotherapy, randomly assigned to receive either 20 mg of the selective serotonin re-uptake inhibitor (SSRI) paroxetine (Paxil®, SmithKline Beecham Pharmaceuticals) or an identical-appearing placebo. Patients began their study medication seven days following their first on-study treatment and continued until seven days following their fourth on-study treatment. Seven days after each treatment, participants completed questionnaires measuring fatigue (Multidimensional Assessment of Fatigue, Profile of Mood States-Fatigue/Inertia subscale, and Fatigue Symptom Checklist) and depression (Profile of Mood States-Depression subscale [POMS-DD] and Center for Epidemiologic Studies-Depression [CES-D]).

Results. Repeated-measures ANOVAs, after controlling for baseline measures, showed that paroxetine was more effective than placebo in reducing depression during chemotherapy as measured by the CES-D (p=0.006) and the POMS-DD (p=0.07), but not in reducing fatigue (all measures, ps > 0.27).

Conclusions. Although depression was significantly reduced in the 44 patients receiving paroxetine compared to the 50 patients receiving placebo, indicating that a biologically active dose was used, no significant differences between groups were observed on any of the measures of fatigue. Results suggest that modulation of serotonin may not be a primary mechanism of fatigue related to cancer treatment.


Blood | 2012

Mutations in the mechanotransduction protein PIEZO1 are associated with hereditary xerocytosis

Vincent P. Schulz; Brett L. Houston; Yelena Maksimova; Donald S. Houston; Brian E. Smith; Jesse Rinehart; Patrick G. Gallagher

Hereditary xerocytosis (HX, MIM 194380) is an autosomal dominant hemolytic anemia characterized by primary erythrocyte dehydration. Copy number analyses, linkage studies, and exome sequencing were used to identify novel mutations affecting PIEZO1, encoded by the FAM38A gene, in 2 multigenerational HX kindreds. Segregation analyses confirmed transmission of the PIEZO1 mutations and cosegregation with the disease phenotype in all affected persons in both kindreds. All patients were heterozygous for FAM38A mutations, except for 3 patients predicted to be homozygous by clinical and physiologic studies who were also homozygous at the DNA level. The FAM38A mutations were both in residues highly conserved across species and within members of the Piezo family of proteins. PIEZO proteins are the recently identified pore-forming subunits of channels that mediate mechanotransduction in mammalian cells. FAM38A transcripts were identified in human erythroid cell mRNA, and discovery proteomics identified PIEZO1 peptides in human erythrocyte membranes. These findings, the first report of mutation in a mammalian mechanosensory transduction channel associated with genetic disease, suggest that PIEZO proteins play an important role in maintaining erythrocyte volume homeostasis.


International Conference on Supercomputing | 2008

The deep computing messaging framework: generalized scalable message passing on the blue gene/P supercomputer

Sameer Kumar; Gabor Dozsa; Gheorghe Almasi; Philip Heidelberger; Dong Chen; Mark E. Giampapa; Michael Blocksome; Ahmad Faraj; Jeffrey J. Parker; Joseph D. Ratterman; Brian E. Smith; Charles J. Archer

We present the architecture of the Deep Computing Messaging Framework (DCMF), a message passing runtime designed for the Blue Gene/P machine and other HPC architectures. DCMF has been designed to easily support several programming paradigms such as the Message Passing Interface (MPI), Aggregate Remote Memory Copy Interface (ARMCI), Charm++, and others. This support is made possible because DCMF provides an application programming interface (API) with active messages and non-blocking collectives. DCMF is being open-sourced and has a layered, component-based architecture with multiple levels of abstraction, allowing members of the community to contribute new components to its design at the various layers. The DCMF runtime can be extended to other architectures through the development of architecture-specific implementations of interface classes. The production DCMF runtime on Blue Gene/P takes advantage of the direct memory access (DMA) hardware to offload message passing work and achieve good overlap of computation and communication. We take advantage of the fact that the Blue Gene/P node is a symmetric multi-processor with four cache-coherent cores and use multi-threading to optimize the performance on the collective network. We also present a performance evaluation of the DCMF runtime on Blue Gene/P and show that it delivers performance close to hardware limits.
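As a rough illustration of the active-message model that DCMF exposes, the single-process C sketch below registers a handler and "delivers" a payload to it; the am_register/am_send names are hypothetical stand-ins, not the actual DCMF API.

```c
#include <stddef.h>
#include <stdio.h>

/* Minimal single-process sketch of the active-message pattern that DCMF
 * generalizes: the sender names a handler, and the runtime invokes that
 * handler on the receiver when the payload arrives. The table-based
 * dispatch below stands in for the network; it is illustrative only. */

typedef void (*am_handler_t)(const void *payload, size_t len);

static am_handler_t handlers[16];

static void am_register(int id, am_handler_t fn) { handlers[id] = fn; }

/* A real runtime would inject a packet here; we "deliver" immediately
 * by calling the registered handler. */
static void am_send(int handler_id, const void *payload, size_t len) {
    handlers[handler_id](payload, len);
}

static void on_update(const void *payload, size_t len) {
    printf("handler ran on arrival: %zu bytes, value %.2f\n",
           len, *(const double *)payload);
}

int main(void) {
    double value = 3.14;
    am_register(7, on_update);
    am_send(7, &value, sizeof value); /* no matching receive required */
    return 0;
}
```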


International Parallel and Distributed Processing Symposium | 2012

PAMI: A Parallel Active Message Interface for the Blue Gene/Q Supercomputer

Sameer Kumar; Amith R. Mamidala; Daniel Faraj; Brian E. Smith; Michael Blocksome; Bob Cernohous; Douglas Miller; Jeffrey J. Parker; Joseph D. Ratterman; Philip Heidelberger; Dong Chen; Burkhard Steinmacher-Burrow

The Blue Gene/Q machine is the next generation in the line of IBM massively parallel supercomputers, designed to scale to 262144 nodes and sixteen million threads. With each BG/Q node having 68 hardware threads, hybrid programming paradigms, which use message passing among nodes and multi-threading within nodes, are ideal and will enable applications to achieve high throughput on BG/Q. With such unprecedented massive parallelism and scale, this paper is a groundbreaking effort to explore the challenges of designing a communication library that can match and exploit such massive parallelism. In particular, we present the Parallel Active Messaging Interface (PAMI) library as our BG/Q library solution to the many challenges that come with a machine at such scale. PAMI provides (1) novel techniques to partition the application communication overhead into many contexts that can be accelerated by communication threads, (2) client and context objects to support multiple and different programming paradigms, (3) lockless algorithms to speed up MPI message rate, and (4) novel techniques leveraging the new BG/Q architectural features such as the scalable atomic primitives implemented in the L2 cache, the highly parallel hardware messaging unit that supports both point-to-point and collective operations, and the collective hardware acceleration for operations such as broadcast, reduce, and allreduce. We experimented with PAMI on 2048 BG/Q nodes and the results show high messaging rates as well as low latencies and high throughputs for collective communication operations.
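A minimal sketch of the context-partitioning idea, assuming nothing about the real PAMI types: each hypothetical context owns its own state and is advanced by a dedicated communication thread, so no locking is needed between them.

```c
#include <pthread.h>
#include <stdio.h>

/* Communication state is partitioned into independent contexts, each
 * advanced by its own thread. The context_t type and advance() loop are
 * hypothetical stand-ins, not the real PAMI objects. */

#define NCONTEXTS 4

typedef struct {
    int id;
    long progressed; /* counts progress-engine iterations */
} context_t;

static void *advance(void *arg) {
    context_t *ctx = arg;
    for (int i = 0; i < 1000000; i++)
        ctx->progressed++;  /* stands in for polling one context's queues */
    return NULL;
}

int main(void) {
    context_t ctx[NCONTEXTS];
    pthread_t tid[NCONTEXTS];
    /* One communication thread per context: no shared state, no locks. */
    for (int i = 0; i < NCONTEXTS; i++) {
        ctx[i] = (context_t){ .id = i, .progressed = 0 };
        pthread_create(&tid[i], NULL, advance, &ctx[i]);
    }
    for (int i = 0; i < NCONTEXTS; i++) {
        pthread_join(tid[i], NULL);
        printf("context %d advanced %ld times\n", ctx[i].id, ctx[i].progressed);
    }
    return 0;
}
```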


IBM Journal of Research and Development | 2005

Blue Gene/L programming and operating environment

José E. Moreira; George S. Almasi; Charles J. Archer; Ralph Bellofatto; Peter Bergner; José R. Brunheroto; Michael Brutman; José G. Castaños; Paul G. Crumley; Manish Gupta; Todd Inglett; Derek Lieber; David Limpert; Patrick McCarthy; Mark Megerian; Mark P. Mendell; Michael Mundy; Don Reed; Ramendra K. Sahoo; Alda Sanomiya; Richard Shok; Brian E. Smith; Greg Stewart

With up to 65,536 compute nodes and a peak performance of more than 360 teraflops, the Blue Gene®/L (BG/L) supercomputer represents a new level of massively parallel systems. The system software stack for BG/L creates a programming and operating environment that harnesses the raw power of this architecture with great effectiveness. The design and implementation of this environment followed three major principles: simplicity, performance, and familiarity. By specializing the services provided by each component of the system architecture, we were able to keep each one simple and leverage the BG/L hardware features to deliver high performance to applications. We also implemented standard programming interfaces and programming languages that greatly simplified the job of porting applications to BG/L. The effectiveness of our approach has been demonstrated by the operational success of several prototype and production machines, which have already been scaled to 16,384 nodes.
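The "familiarity" principle means a plain standards-conforming MPI program runs on BG/L without modification; the generic example below uses only the standard MPI C API and nothing BG/L-specific.

```c
#include <mpi.h>
#include <stdio.h>

/* A generic MPI program of the kind the BG/L environment supports:
 * code written against standard interfaces ports unchanged. */
int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```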


Computing Frontiers | 2007

Parallel genomic sequence-search on a massively parallel system

Oystein Thorsen; Brian E. Smith; Carlos P. Sosa; Karl Jiang; Heshan Lin; Amanda Peters; Wu-chun Feng

In the life sciences, genomic databases for sequence search have been growing exponentially in size. As a result, faster sequence-search algorithms to search these databases continue to evolve to cope with algorithmic time complexity. The ubiquitous tool for such search is the Basic Local Alignment Search Tool (BLAST) [1] from the National Center for Biotechnology Information (NCBI). Despite continued algorithmic improvements in BLAST, it cannot keep up with the rate at which the database is exponentially increasing in size. Therefore, parallel implementations such as mpiBLAST have emerged to address this problem. The performance of such implementations depends on a myriad of factors, including the algorithm, the architecture, and the mapping of the algorithm to the architecture. This paper describes modifications and extensions to a parallel and distributed-memory version of BLAST called mpiBLAST-PIO and how it maps to a massively parallel system, specifically IBM Blue Gene/L (BG/L). The extensions include a virtual file manager, a multiple-master run-time model, efficient fragment distribution, and intelligent load balancing. In this study, we have shown that our optimized mpiBLAST-PIO on BG/L using a query with 28014 sequences and the NR and NT databases scales to 8192 nodes (two cores per node). The cases tested here are well suited for a massively parallel system.
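To make the fragment-distribution idea concrete, here is a minimal MPI sketch in which a pre-split database is assigned round-robin to ranks; the policy and constants are illustrative, not mpiBLAST-PIO's actual scheduler.

```c
#include <mpi.h>
#include <stdio.h>

/* Sketch of fragment distribution: the database is pre-split into
 * fragments and each rank searches only the fragments assigned to it.
 * Round-robin is the simplest policy; the real manager is more elaborate. */

#define NFRAGMENTS 64

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank takes every size-th fragment starting at its own rank. */
    for (int f = rank; f < NFRAGMENTS; f += size)
        printf("rank %d searches fragment %d\n", rank, f);

    MPI_Finalize();
    return 0;
}
```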


Journal of Pain and Symptom Management | 2010

An exploratory study on the effects of an expectancy manipulation on chemotherapy-related nausea.

Joseph A. Roscoe; Michael O'Neill; Pascal Jean-Pierre; Charles E. Heckler; Ted J. Kaptchuk; Peter Bushunow; Michelle Shayne; Alissa Huston; Raman Qazi; Brian E. Smith

CONTEXTnPrevious research has shown that the effectiveness of acupressure bands in reducing chemotherapy-related nausea is related to patients expectations of efficacy.nnnOBJECTIVEnTo test whether an informational manipulation designed to increase expectation of efficacy regarding acupressure bands would enhance their effectiveness.nnnMETHODSnWe conducted an exploratory, four-arm, randomized clinical trial in breast cancer patients about to begin chemotherapy. All patients received acupressure bands and a relaxation CD. This report focuses on Arm 1(expectancy-neutral informational handout and CD) compared with Arm 4 (expectancy-enhancing handout and CD). Randomization was stratified according to the patients level of certainty that she would have treatment-induced nausea (two levels: high and low). Experience of nausea and use of antiemetics were assessed with a five-day diary.nnnRESULTSnOur expectancy-enhancing manipulation resulted in improved control of nausea in the 26 patients with high nausea expectancies but lessened control of nausea in 27 patients having low nausea expectancies. This interaction effect (between expected nausea and intervention effectiveness) approached statistical significance for our analysis of average nausea (P=0.084) and reached statistical significance for our analysis of peak nausea (P=0.030). Patients receiving the expectancy-enhancing manipulation took fewer antiemetic pills outside the clinic (mean(enhanced)=12.6; mean(neutral)=18.5, P=0.003).nnnCONCLUSIONnThis exploratory intervention reduced antiemetic use overall and also reduced nausea in patients who had high levels of expected nausea. Interestingly, it increased nausea in patients who had low expectancies for nausea. Confirmatory research is warranted.


High Performance Interconnects | 2009

MPI Collective Communications on The Blue Gene/P Supercomputer: Algorithms and Optimizations

Ahmad Faraj; Sameer Kumar; Brian E. Smith; Amith R. Mamidala; John A. Gunnels

The IBM Blue Gene/P (BG/P) system is a massively parallel supercomputer succeeding BG/L, and it comes with many machine design enhancements and new architectural features at the hardware and software levels. This paper presents techniques leveraging such features to deliver high performance MPI collective communication primitives. In particular, we exploit BG/P's rich set of network hardware to explore three classes of collective algorithms: global algorithms on the global interrupt and collective networks for MPI_COMM_WORLD; rectangular algorithms for rectangular communicators on the torus network; and binomial algorithms for irregular communicators over the torus point-to-point network. We also utilize various forms of data movement, including the direct memory access (DMA) engine, collective network, and shared memory, to implement synchronous and asynchronous algorithms with different objectives and performance characteristics. Our performance study on BG/P hardware with up to 16K nodes demonstrates the efficiency and scalability of the algorithms and optimizations.
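From the application's point of view these algorithm classes are selected transparently by the communicator in use, as the generic MPI fragment below illustrates: the same MPI_Allreduce call can map to global-network algorithms on MPI_COMM_WORLD and to torus algorithms on a subcommunicator.

```c
#include <mpi.h>
#include <stdio.h>

/* The same collective call, issued on different communicators. On BG/P
 * the library would pick the algorithm class per communicator; this code
 * itself is generic MPI and not BG/P-specific. */
int main(int argc, char **argv) {
    int rank, size, local = 1, sum = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Collective over the full machine. */
    MPI_Allreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    /* Collective over an irregular subcommunicator (odd/even split). */
    MPI_Comm sub;
    MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &sub);
    MPI_Allreduce(&local, &sum, 1, MPI_INT, MPI_SUM, sub);

    MPI_Comm_free(&sub);
    MPI_Finalize();
    return 0;
}
```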


European PVM/MPI Users' Group Meeting on Recent Advances in Parallel Virtual Machine and Message Passing Interface | 2008

Architecture of the Component Collective Messaging Interface

Sameer Kumar; Gabor Dozsa; Jeremy Berg; Bob Cernohous; Douglas Miller; Joseph D. Ratterman; Brian E. Smith; Philip Heidelberger

Different programming paradigms utilize a variety of collective communication operations, often with different semantics. We present the Component Collective Messaging Interface (CCMI), which can support asynchronous non-blocking collectives and is extensible to different programming paradigms and architectures. CCMI is designed with components written in the C++ programming language, allowing reuse and extensibility. Collective algorithms are embodied in topological schedules and executors that execute them. Portability across architectures is enabled by the multisend data movement component. CCMI includes a programming-language adaptor used to implement different APIs with different semantics for different paradigms. We study the effectiveness of CCMI on Blue Gene/P and evaluate its performance for the barrier, broadcast, and allreduce collective operations. We also present the performance of the barrier collective on the Abe InfiniBand cluster.
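A compact sketch of the schedule/executor split, assuming a binomial broadcast: the schedule computes each phase's destinations and the executor drives the phases. CCMI's real components are C++ classes; this C version only illustrates the division of labor.

```c
#include <stdio.h>

/* Schedule: computes, per phase, which peer this rank sends to. In a
 * binomial broadcast from root 0, in phase p the ranks below 2^p send
 * to rank + 2^p. Returns -1 if this rank is idle in the phase. */
static int binomial_dest(int rank, int size, int phase) {
    int bit = 1 << phase;
    if (rank < bit && rank + bit < size)
        return rank + bit;
    return -1;
}

/* Executor: walks the phases and performs the data movement (here a
 * print stands in for the multisend component). */
static void execute_broadcast(int rank, int size) {
    for (int phase = 0; (1 << phase) < size; phase++) {
        int dest = binomial_dest(rank, size, phase);
        if (dest >= 0)
            printf("phase %d: rank %d -> rank %d\n", phase, rank, dest);
    }
}

int main(void) {
    int size = 8;
    for (int rank = 0; rank < size; rank++)
        execute_broadcast(rank, size);
    return 0;
}
```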


IEEE Transactions on Parallel and Distributed Systems | 2008

An Efficient Parallel Implementation of the Hidden Markov Methods for Genomic Sequence-Search on a Massively Parallel System

Karl Jiang; Oystein Thorsen; Amanda Peters; Brian E. Smith; Carlos P. Sosa

Bioinformatics databases used for sequence comparison and sequence alignment are growing exponentially, which has popularized programs that carry out database searches. Current implementations of sequence alignment methods based on hidden Markov models (HMMs) have proven to be computationally intensive and, hence, amenable to architectures with multiple processors. In this paper, we describe a modified version of the original parallel implementation of HMMs on a massively parallel system. This is part of the HMMER bioinformatics code. HMMER 2.3.2 uses profile HMMs for sensitive database searching based on statistical descriptions of a sequence family's consensus (Durbin et al., 1998). Two of the nine programs were further parallelized to take advantage of the large number of processors, namely hmmsearch and hmmpfam. For our study, we start by porting the parallel virtual machine (PVM) versions of these two programs currently available as part of the HMMER suite of programs. We report the performance of these nonoptimized versions as baselines. Our work also includes the introduction of an alternate sequence-file indexing, a multiple-master configuration, dynamic data collection, and, finally, load balancing via the indexed sequence files. This set of optimizations constitutes our modified version for massively parallel systems. Our results show parallel performance improvements of more than one order of magnitude (16 times) for hmmsearch and hmmpfam.
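The dynamic load-balancing idea reduces, at its core, to workers pulling chunks of the indexed sequence file on demand, so faster workers do more. The MPI sketch below shows that pull loop with a single master; the tags and chunk count are illustrative, not HMMER's actual protocol.

```c
#include <mpi.h>
#include <stdio.h>

/* Master-worker load balancing over an indexed sequence file: workers
 * request the next chunk index whenever they finish one; -1 means stop. */

#define NCHUNKS 100
#define TAG_REQ 1
#define TAG_WORK 2

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) { /* master: hand out chunk indices on demand */
        int next = 0, done = 0;
        while (done < size - 1) {
            int dummy;
            MPI_Status st;
            MPI_Recv(&dummy, 1, MPI_INT, MPI_ANY_SOURCE, TAG_REQ,
                     MPI_COMM_WORLD, &st);
            int chunk = (next < NCHUNKS) ? next++ : -1;
            if (chunk < 0) done++;
            MPI_Send(&chunk, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                     MPI_COMM_WORLD);
        }
    } else {         /* worker: pull chunks until told to stop */
        int chunk, dummy = 0;
        for (;;) {
            MPI_Send(&dummy, 1, MPI_INT, 0, TAG_REQ, MPI_COMM_WORLD);
            MPI_Recv(&chunk, 1, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            if (chunk < 0) break;
            /* ...run the HMM search over this chunk of sequences... */
        }
    }
    MPI_Finalize();
    return 0;
}
```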

Collaboration


Dive into Brian E. Smith's collaborations.

Top Co-Authors

Todd Inglett

University of Rochester
