
Publication


Featured research published by Quinn Snell.


Job Scheduling Strategies for Parallel Processing | 2001

Core Algorithms of the Maui Scheduler

David B. Jackson; Quinn Snell; Mark J. Clement

The Maui scheduler has received wide acceptance in the HPC community as a highly configurable and effective batch scheduler. It is currently in use on hundreds of SP, O2K, and Linux cluster systems throughout the world, including a high percentage of the largest and most cutting-edge research sites. While the algorithms used within Maui have proven themselves effective, nothing has been published to date documenting these algorithms or the configurable aspects they support. This paper focuses on three areas of Maui scheduling: backfill, job prioritization, and fairshare. It briefly discusses the goals of each component, the issues and corresponding design decisions, and the algorithms enabling the Maui policies. It also covers the configurable aspects of each algorithm and the impact of various parameter selections.
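
The scheduling components named above lend themselves to a small illustration. The sketch below shows priority-ordered backfill in roughly the style the abstract describes; the data structures, field names, and single-reservation policy are simplifying assumptions for illustration, not Maui's actual implementation or configuration interface, and fairshare is not modeled.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes: int       # nodes requested
    walltime: int    # requested run time, in arbitrary time units
    priority: float  # site policy value (queue time, fairshare usage, ...)

def backfill_schedule(queued, free_nodes, time_until_top_job_starts):
    """Toy backfill pass: the highest-priority job holds a future reservation;
    lower-priority jobs may start immediately only if they both fit in the
    idle nodes and finish before that reservation begins."""
    queue = sorted(queued, key=lambda j: j.priority, reverse=True)
    backfilled = []
    for job in queue[1:]:  # queue[0] waits for its reservation
        if job.nodes <= free_nodes and job.walltime <= time_until_top_job_starts:
            backfilled.append(job.name)
            free_nodes -= job.nodes
    return backfilled

# Example: 16 idle nodes; the top job's reservation starts in 10 time units.
queued = [
    Job("A", nodes=32, walltime=50, priority=100.0),  # holds the reservation
    Job("B", nodes=8,  walltime=8,  priority=40.0),   # fits -> backfilled
    Job("C", nodes=8,  walltime=20, priority=30.0),   # runs too long -> waits
]
print(backfill_schedule(queued, free_nodes=16, time_until_top_job_starts=10))  # ['B']
```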


Job Scheduling Strategies for Parallel Processing | 2000

The Performance Impact of Advance Reservation Meta-scheduling

Quinn Snell; Mark J. Clement; David B. Jackson; Chad Gregory

As supercomputing resources become more available, users will require resources managed by several local schedulers. To gain access to a collection of resources, current systems require metajobs to run during locked-down periods when the resources are only available for metajob use. It is more convenient and efficient if the user is able to make a reservation at the earliest time when all resources are available. System administrators are reluctant to allow reservations outside locked-down periods because of the impact reservations may have on utilization and the Quality of Service that the center is able to provide to its normal users. This research quantifies the impact of advance reservations on utilization and outlines the algorithms that must be used to schedule metajobs. The Maui scheduler is used to examine metascheduling using trace files from existing supercomputing centers. These results indicate that advance reservations can improve the response time for metajobs, while not significantly impacting overall system performance.
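
The core metascheduling step, finding the earliest time at which an advance reservation fits on every participating resource, can be sketched as follows. This is a minimal illustration that assumes each local scheduler can report the windows in which the requested nodes are free; it is not the reservation algorithm as implemented in Maui.

```python
def earliest_joint_start(availability, duration):
    """Given, for each local scheduler, a list of (start, end) windows in which
    the requested nodes are free, return the earliest start time at which an
    advance reservation of `duration` fits on every resource simultaneously."""
    # Candidate start times: the start of any free window on any resource.
    candidates = sorted(s for windows in availability for s, _ in windows)
    for t in candidates:
        if all(any(s <= t and t + duration <= e for s, e in windows)
               for windows in availability):
            return t
    return None  # no common slot; the metajob must be resubmitted later

# Example: three centers with different free windows; the metajob needs 4 units.
availability = [
    [(0, 5), (10, 30)],
    [(8, 20)],
    [(12, 40)],
]
print(earliest_joint_start(availability, duration=4))  # 12
```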


International Parallel and Distributed Processing Symposium | 2002

TCS: estimating gene genealogies

Mark J. Clement; Quinn Snell; P. Walke; David Posada; Keith A. Crandall

Phylogenetic analysis is becoming an increasingly important tool for customized drug treatments, epidemiological studies, and evolutionary analysis. The TCS method provides an important tool for dealing with genes at a population level. Existing software for TCS analysis takes an unreasonable amount of time for the analysis of significant numbers of taxa. This paper presents the TCS algorithms and describes initial attempts at parallelization. Performance results are also presented for the algorithm on several data sets.


Bioinformatics | 2010

The GNUMAP algorithm

Nathan L. Clement; Quinn Snell; Mark J. Clement; Peter C. Hollenhorst; Jahnvi Purwar; Barbara J. Graves; Bradley R. Cairns; W. Evan Johnson

MOTIVATION: The advent of next-generation sequencing technologies has increased the accuracy and quantity of sequence data, opening the door to greater opportunities in genomic research.

RESULTS: In this article, we present GNUMAP (Genomic Next-generation Universal MAPper), a program capable of overcoming two major obstacles in the mapping of reads from next-generation sequencing runs. First, we have created an algorithm that probabilistically maps reads to repeat regions in the genome on a quantitative basis. Second, we have developed a probabilistic Needleman-Wunsch algorithm which utilizes _prb.txt and _int.txt files produced in the Solexa/Illumina pipeline to improve the mapping accuracy for lower quality reads and increase the amount of usable data produced in a given experiment.

AVAILABILITY: The source code for the software can be downloaded from http://dna.cs.byu.edu/gnumap.
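
The probabilistic Needleman-Wunsch idea, scoring each read position against the reference using the full base-call probability distribution rather than a single called base, can be sketched roughly as follows. The scoring constants, function names, and input layout are illustrative assumptions, not GNUMAP's actual code.

```python
import numpy as np

MATCH, MISMATCH, GAP = 2.0, -1.0, -2.0
BASES = "ACGT"

def expected_score(prob_vector, ref_base):
    """Expected substitution score when the read position is described by a
    probability distribution over A/C/G/T instead of a single called base."""
    return sum(p * (MATCH if b == ref_base else MISMATCH)
               for p, b in zip(prob_vector, BASES))

def prob_needleman_wunsch(read_probs, ref):
    """Global alignment score between a probabilistic read (list of 4-vectors)
    and a reference string, using the standard NW recurrences."""
    n, m = len(read_probs), len(ref)
    dp = np.zeros((n + 1, m + 1))
    dp[:, 0] = GAP * np.arange(n + 1)
    dp[0, :] = GAP * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i, j] = max(
                dp[i - 1, j - 1] + expected_score(read_probs[i - 1], ref[j - 1]),
                dp[i - 1, j] + GAP,
                dp[i, j - 1] + GAP,
            )
    return dp[n, m]

# Example: a 3-base read with one ambiguous position aligned to "ACG".
read_probs = [(0.97, 0.01, 0.01, 0.01),   # confident A
              (0.40, 0.40, 0.10, 0.10),   # ambiguous A/C
              (0.02, 0.02, 0.94, 0.02)]   # confident G
print(prob_needleman_wunsch(read_probs, "ACG"))
```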


Bioinformatics | 2007

DNA reference alignment benchmarks based on tertiary structure of encoded proteins

Hyrum Carroll; Wesley A. Beckstead; Timothy O'Connor; Mark T. W. Ebbert; Mark J. Clement; Quinn Snell; David A. McClellan

MOTIVATION: Multiple sequence alignments (MSAs) are at the heart of bioinformatics analysis. Recently, a number of multiple protein sequence alignment benchmarks (i.e. BAliBASE, OXBench, PREFAB and SMART) have been released to evaluate new and existing MSA applications. These databases have been well received by researchers and help to quantitatively evaluate MSA programs on protein sequences. Unfortunately, analogous DNA benchmarks are not available, making evaluation of MSA programs difficult for DNA sequences.

RESULTS: This work presents the first known multiple DNA sequence alignment benchmarks that are (1) comprised of protein-coding portions of DNA and (2) based on biological features such as the tertiary structure of encoded proteins. These reference DNA databases contain a total of 3545 alignments, comprising 68,581 sequences. Two versions of the database are available: mdsa_100s and mdsa_all. The mdsa_100s version contains the alignments of the data sets for which TBLASTN found 100% sequence identity for each sequence. The mdsa_all version includes all hits with an E-value score above the threshold of 0.001. A primary use of these databases is to benchmark the performance of MSA applications on DNA data sets. The first such case study is included in the Supplementary Material.
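
The relationship between the two database versions can be illustrated with a small filtering sketch. The hit fields are hypothetical, the per-hit filtering is a simplification of the per-data-set rule described above, and the E-value comparison is assumed to run in the conventional direction (hits at or below the 0.001 cutoff are retained).

```python
def split_benchmark_hits(hits, evalue_threshold=0.001):
    """Partition TBLASTN hits into the two database variants described above.
    mdsa_all keeps every hit passing the E-value cutoff; mdsa_100s keeps only
    the subset with 100% sequence identity. Each hit is a dict with the
    hypothetical keys 'identity' (percent) and 'evalue'."""
    mdsa_all = [h for h in hits if h["evalue"] <= evalue_threshold]
    mdsa_100s = [h for h in mdsa_all if h["identity"] == 100.0]
    return mdsa_100s, mdsa_all

# Example with made-up hits.
hits = [{"identity": 100.0, "evalue": 1e-40},
        {"identity": 87.5,  "evalue": 1e-12},
        {"identity": 100.0, "evalue": 0.5}]   # fails the E-value cutoff
s100, all_hits = split_benchmark_hits(hits)
print(len(s100), len(all_hits))  # 1 2
```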


Proceedings of the ACM 1999 Conference on Java Grande | 1999

Design issues for efficient implementation of MPI in Java

Glenn Judd; Mark J. Clement; Quinn Snell; Vladimir Getov

While there is growing interest in using Java for high-performance applications, many in the high-performance computing community do not believe that Java can match the performance of traditional native message passing environments. This paper discusses critical issues that must be addressed in the design of Java-based message passing systems. Efficient handling of these issues allows Java-MPI applications to obtain performance which rivals that of traditional native message passing systems. To illustrate these concepts, the design and performance of a pure Java implementation of MPI are discussed.


Concurrency and Computation: Practice and Experience | 1998

DOGMA: distributed object group metacomputing architecture

Glenn Judd; Mark J. Clement; Quinn Snell

The performance of Java just-in-time compilers currently approaches native C++, making Java a serious contender for supercomputing application development. This paper presents DOGMA, a new Java-based system which enables parallel computing on heterogeneous computers. DOGMA supports parallel programming in both a traditional message-passing form and a novel object-oriented approach. DOGMA provides support for dedicated clusters as well as idle workstations through the use of a web-based browse-in feature or the DOGMA screen saver. This research provides a unified environment for developing high-performance supercomputing applications on heterogeneous systems.


Job Scheduling Strategies for Parallel Processing | 2002

Preemption Based Backfill

Quinn Snell; Mark J. Clement; David B. Jackson

Recent advances in DNA analysis, global climate modeling, and computational fluid dynamics have increased the demand for supercomputing resources. By increasing the efficiency and throughput of existing supercomputing centers, additional computational power can be provided for these applications. Backfill has been shown to increase the efficiency of supercomputer schedulers for large, homogeneous machines [1]. Utilizations can still be as low as 60% for machines with heterogeneous resources and strict administrative requirements. Preemption-based backfill allows the scheduler to be more aggressive in filling up the schedule for a supercomputer [2]. Utilization can be increased and administrative requirements relaxed if it is possible to preempt a running job to allow a higher priority task to run.
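
A minimal sketch of the preemption decision described above follows. The job attributes and the lowest-priority-first victim selection are illustrative assumptions rather than the scheduler's actual policy engine.

```python
from dataclasses import dataclass

@dataclass
class RunningJob:
    name: str
    nodes: int
    priority: float
    preemptible: bool  # whether site policy allows this job to be preempted

def start_with_preemption(req_nodes, req_priority, running, free_nodes):
    """Toy preemption-based backfill decision: if the candidate job does not
    fit in the idle nodes, preempt preemptible running jobs of lower priority
    (lowest priority first) until enough nodes are reclaimed."""
    if req_nodes <= free_nodes:
        return []  # plain backfill, nothing needs to be preempted
    victims, reclaimed = [], free_nodes
    for job in sorted(running, key=lambda j: j.priority):
        if not job.preemptible or job.priority >= req_priority:
            continue
        victims.append(job.name)
        reclaimed += job.nodes
        if req_nodes <= reclaimed:
            return victims  # preempt these, then start the candidate
    return None  # cannot start even with preemption

# Example: a 12-node, priority-50 job arrives with only 2 idle nodes.
running = [RunningJob("low", 8, 5.0, True),
           RunningJob("med", 8, 20.0, True),
           RunningJob("big", 16, 90.0, False)]
print(start_with_preemption(12, 50.0, running, free_nodes=2))  # ['low', 'med']
```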


BMC Bioinformatics | 2011

Accelerated large-scale multiple sequence alignment

Scott Lloyd; Quinn Snell

Background: Multiple sequence alignment (MSA) is a fundamental analysis method used in bioinformatics and many comparative genomic applications. Prior MSA acceleration attempts with reconfigurable computing have only addressed the first stage of progressive alignment and consequently exhibit performance limitations according to Amdahl's Law. This work is the first known to accelerate the third stage of progressive alignment on reconfigurable hardware.

Results: We reduce subgroups of aligned sequences into discrete profiles before they are pairwise aligned on the accelerator. Using an FPGA accelerator, an overall speedup of up to 150 has been demonstrated on a large data set when compared to a 2.4 GHz Core2 processor.

Conclusions: Our parallel algorithm and architecture accelerates large-scale MSA with reconfigurable computing and allows researchers to solve the larger problems that confront biologists today. Program source is available from http://dna.cs.byu.edu/msa/.
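
The profile reduction step, collapsing an already-aligned subgroup into per-column frequency vectors before the profiles are pairwise aligned, can be sketched in software as follows. The FPGA data path itself is not represented, and the profile representation here is an assumption for illustration.

```python
from collections import Counter

ALPHABET = "ACGT-"

def profile_from_alignment(aligned_seqs):
    """Collapse a subgroup of aligned sequences (equal-length strings over
    A/C/G/T/-) into one profile: a per-column frequency vector over the
    alphabet, which is what gets pairwise aligned in the later stage."""
    length = len(aligned_seqs[0])
    total = len(aligned_seqs)
    profile = []
    for col in range(length):
        counts = Counter(seq[col] for seq in aligned_seqs)
        profile.append({c: counts.get(c, 0) / total for c in ALPHABET})
    return profile

# Example: three aligned sequences collapse into a 5-column profile.
group = ["ACG-T",
         "ACGAT",
         "A-GAT"]
for column in profile_from_alignment(group):
    print(column)
```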


High Performance Distributed Computing | 2002

An enterprise-based grid resource management system

Quinn Snell; Kevin Tew; Joseph J. Ekstrom; Mark J. Clement

As the Internet began its exponential growth into a global information environment, software was often unreliable, slow, and had difficulty interoperating with other systems. Supercomputing node counts also continue to follow high growth trends. Supercomputer and grid resource management software must mature into a reliable computational platform in much the same way that web services matured for the Internet. DOGMA: The Next Generation (DOGMA-NG) improves on current resource management approaches by using tested off-the-shelf enterprise technologies to build a robust, scalable, and extensible resource management platform. Distributed web service technologies constitute the core of DOGMA-NG's design and provide fault tolerance and scalability. DOGMA-NG's use of open standard web technologies and efficient management algorithms promises to reduce management time and accommodate the growing size of future supercomputers. The use of web technologies also provides the opportunity for a new parallel programming paradigm, enterprise web services parallel programming, which also benefits from the scalable, robust component architecture.

Collaboration


Dive into Quinn Snell's collaborations.

Top Co-Authors

Hyrum Carroll, Brigham Young University
Paul Bodily, Brigham Young University
Glenn Judd, Brigham Young University
Keith A. Crandall, George Washington University
Nathan L. Clement, University of Texas at Austin
Scott Lloyd, Brigham Young University