Tim Mattson
Intel
Publications
Featured research published by Tim Mattson.
IEEE High Performance Extreme Computing Conference | 2013
Tim Mattson; David A. Bader; Jonathan W. Berry; Aydin Buluç; Jack J. Dongarra; Christos Faloutsos; John Feo; John R. Gilbert; Joseph E. Gonzalez; Bruce Hendrickson; Jeremy Kepner; Charles E. Leiserson; Andrew Lumsdaine; David A. Padua; Stephen W. Poole; Steven P. Reinhardt; Michael Stonebraker; Steve Wallach; Andrew Yoo
It is our view that the state of the art in constructing a large collection of graph algorithms in terms of linear algebraic operations is mature enough to support the emergence of a standard set of primitive building blocks. This paper is a position paper defining the problem and announcing our intention to launch an open effort to define this standard.
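The "graph algorithms as linear algebra" idea behind this effort can be made concrete with a small sketch (ours, not from the paper): one frontier-expansion step of breadth-first search is exactly a Boolean matrix-vector product over the graph's adjacency matrix, the kind of primitive building block the standard aims to capture.

```python
# Sketch (not from the paper): BFS expressed as repeated Boolean
# matrix-vector products, the style of primitive GraphBLAS standardizes.

def bfs_levels(adj, source):
    """Breadth-first search via Boolean mat-vec.
    adj[i][j] is True when there is an edge j -> i (columns are sources)."""
    n = len(adj)
    levels = [-1] * n          # -1 marks unvisited vertices
    frontier = [False] * n
    frontier[source] = True
    level = 0
    while any(frontier):
        for v in range(n):
            if frontier[v] and levels[v] == -1:
                levels[v] = level
        # next frontier = (adj OR.AND frontier), masked to unvisited vertices
        nxt = [False] * n
        for i in range(n):
            if levels[i] == -1:
                nxt[i] = any(adj[i][j] and frontier[j] for j in range(n))
        frontier = nxt
        level += 1
    return levels

# Path graph 0 -> 1 -> 2
A = [[False, False, False],
     [True,  False, False],
     [False, True,  False]]
print(bfs_levels(A, 0))  # [0, 1, 2]
```

An optimized library would replace the inner loops with a sparse Boolean semiring mat-vec; the point of the sketch is only that the traversal reduces to repeated matrix-vector products.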
Design Automation Conference | 2008
Tim Mattson; Michael Wrinn
The computer industry has a problem. As Moore's law marches on, it will be exploited to double cores, not frequencies. But all those cores, growing to 8, 16 and beyond over the next several years, are of little value without parallel software. Where will this come from? With few exceptions, only graduate students and other strange people write parallel software. Even for numerically intensive applications, where parallel algorithms are well understood, professional software engineers almost never write parallel software. Somehow we need to (1) design many-core systems programmers can actually use and (2) provide programmers with parallel programming environments that work. The good news is we have 25+ years of history in the HPC space to guide us. The bad news is that few people are paying attention to this experience. This talk looks at the history of parallel computing to develop a set of anecdotal rules to follow as we create many-core systems and their programming environments. A common theme is that just about every mistake we could make has already been made by someone. So rather than reinvent these mistakes, let's learn from the past and do it right this time.
IEEE High Performance Extreme Computing Conference | 2016
Vijay Gadepally; Peinan Chen; Jennie Duggan; Aaron J. Elmore; Brandon Haynes; Jeremy Kepner; Samuel Madden; Tim Mattson; Michael Stonebraker
Organizations are often faced with the challenge of providing data management solutions for large, heterogeneous datasets that may have different underlying data and programming models. For example, a medical dataset may have unstructured text, relational data, time series waveforms and imagery. Trying to fit such datasets in a single data management system can have adverse performance and efficiency effects. As a part of the Intel Science and Technology Center on Big Data, we are developing a polystore system designed for such problems. BigDAWG (short for the Big Data Analytics Working Group) is a polystore system designed to work on complex problems that naturally span across different processing or storage engines. BigDAWG provides an architecture that supports diverse database systems working with different data models, support for the competing notions of location transparency and semantic completeness via islands, and a middleware that provides a uniform multi-island interface. Initial results from a prototype of the BigDAWG system applied to a medical dataset validate polystore concepts. In this article, we will describe polystore databases, the current BigDAWG architecture and its application on the MIMIC II medical dataset, initial performance results and our future development plans.
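The "island" idea can be pictured as a thin dispatcher: each query is tagged with the island whose semantics it wants, and the middleware routes it to the engine registered for that island. The sketch below is a deliberate simplification with hypothetical names and query syntax; it is not BigDAWG's actual middleware API.

```python
# Hypothetical sketch of the polystore "island" routing idea; the class,
# method names, and query syntax are illustrative, not BigDAWG's actual API.

class Polystore:
    def __init__(self):
        self.islands = {}  # island name -> engine callable

    def register(self, island, engine):
        self.islands[island] = engine

    def execute(self, query):
        # Queries carry an island tag, e.g. "RELATIONAL: SELECT ..."
        island, _, body = query.partition(":")
        engine = self.islands.get(island.strip())
        if engine is None:
            raise ValueError(f"no engine registered for island {island!r}")
        return engine(body.strip())

store = Polystore()
store.register("RELATIONAL", lambda q: f"postgres ran: {q}")
store.register("ARRAY", lambda q: f"scidb ran: {q}")

print(store.execute("RELATIONAL: SELECT count(*) FROM waveforms"))
print(store.execute("ARRAY: filter(imagery, val > 0)"))
```

The real system also handles cross-island queries and data movement between engines, which this toy dispatcher omits entirely.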
International Workshop on OpenMP | 2010
Michael Wong; Michael Klemm; Alejandro Duran; Tim Mattson; Grant E. Haab; Bronis R. de Supinski; Andrey Churbanov
OpenMP lacks essential features for developing mission-critical software. In particular, it has no support for detecting and handling errors or even a concept of them. In this paper, the OpenMP Error Model Subcommittee reports on solutions under consideration for this major omission. We identify issues with the current OpenMP specification and propose a path to extend OpenMP with error-handling capabilities. We add a construct that cleanly shuts down parallel regions as a first step. We then discuss two orthogonal proposals that extend OpenMP with features to handle system-level and user-defined errors.
International Parallel and Distributed Processing Symposium | 2017
Aydin Buluç; Tim Mattson; Scott McMillan; José E. Moreira; Carl Yang
The purpose of the GraphBLAS Forum is to standardize linear-algebraic building blocks for graph computations. An important part of this standardization effort is to translate the mathematical specification into an actual Application Programming Interface (API) that (i) is faithful to the mathematics and (ii) enables efficient implementations on modern hardware. This paper documents the approach taken by the C language specification subcommittee and presents the main concepts, constructs, and objects within the GraphBLAS API. Use of the API is illustrated by showing an implementation of the betweenness centrality algorithm.
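Two of the API's central concepts, user-selectable semirings and write-masks, can be sketched in plain Python. The function below is an illustrative reduction of ours; its name and argument layout are not the C API's actual signatures.

```python
# Illustrative sketch of a GraphBLAS-style masked matrix-vector multiply
# over a user-supplied semiring; names and argument layout are ours, not
# the GraphBLAS C API's actual signatures.

def mxv(A, u, add, mul, zero, mask=None):
    """Compute y[i] = add-reduction over j of mul(A[i][j], u[j]),
    writing only positions where mask[i] is True (a write-mask)."""
    n = len(A)
    y = [zero] * n
    for i in range(n):
        if mask is not None and not mask[i]:
            continue  # masked-out entries keep the identity value
        acc = zero
        for j in range(len(u)):
            acc = add(acc, mul(A[i][j], u[j]))
        y[i] = acc
    return y

A = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]
u = [1, 2, 3]

plus_times = dict(add=lambda a, b: a + b, mul=lambda a, b: a * b, zero=0)
print(mxv(A, u, **plus_times))                            # [2, 3, 1]
print(mxv(A, u, mask=[True, False, True], **plus_times))  # [2, 0, 1]
```

Swapping the (add, mul, zero) triple changes the algorithm the same multiply computes, which is what lets one set of operations serve BFS, shortest paths, and centrality alike.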
High Performance Parallelism Pearls: Multicore and Many-Core Programming Approaches | 2015
Simon N McIntosh-Smith; Tim Mattson
This is an Intel Xeon Phi coprocessor Gem because we show the potential for using the OpenCL standard parallel programming language to deliver portable performance on Intel Xeon Phi coprocessors, Xeon processors, and many-core devices such as GPUs from multiple vendors. This portable performance can be delivered from a single program without needing multiple versions of the code, an advantage of OpenCL over most other approaches available today. As proof of OpenCL’s ability to deliver performance portability, we describe results from the BUDE molecular docking code, which sustains over 30% of peak floating-point performance on a wide variety of processors, including laptop CPUs, Xeon, Xeon Phi, and GPUs.
IEEE High Performance Extreme Computing Conference | 2017
Vijay Gadepally; Kyle O'Brien; Adam Dziedzic; Aaron J. Elmore; Jeremy Kepner; Samuel Madden; Tim Mattson; Jennie Rogers; Zuohao She; Michael Stonebraker
A polystore system is a database management system composed of integrated heterogeneous database engines and multiple programming languages. By matching data to the storage engine best suited to its needs, complex analytics run faster and flexible storage choices help improve data organization. BigDAWG (Big Data Working Group) is our prototype implementation of a polystore system. In this paper, we describe the current BigDAWG software release, which supports PostgreSQL, Accumulo and SciDB. We describe the overall architecture, API and initial results of applying BigDAWG to the MIMIC II medical dataset.
IEEE Hot Chips Symposium | 2009
Tim Mattson
Presents a collection of slides covering the following topics: OpenCL language; heterogeneous computing; central processing unit; data parallelism; implicit SIMD; portable performance; task parallelism; and parallel programming.
arXiv: Artificial Intelligence | 2018
Justin E. Gottschlich; Armando Solar-Lezama; Nesime Tatbul; Michael Carbin; Martin C. Rinard; Regina Barzilay; Saman P. Amarasinghe; Joshua B. Tenenbaum; Tim Mattson
In this position paper, we describe our vision of the future of machine programming through a categorical examination of three pillars of research. Those pillars are: (i) intention, (ii) invention, and (iii) adaptation. Intention emphasizes advancements in the human-to-computer and computer-to-machine-learning interfaces. Invention emphasizes the creation or refinement of algorithms or core hardware and software building blocks through machine learning (ML). Adaptation emphasizes advances in the use of ML-based constructs to autonomously evolve software.
International Parallel and Distributed Processing Symposium | 2015
Tim Mattson
Welcome to the third edition of the IEEE Graph Algorithm Building Blocks workshop (GABB'2016). This is our first year as a full-day workshop of peer-reviewed papers. We received many high-quality submissions resulting in an excellent program. Our final program covers several aspects of graph computations: benchmarking, systems, algorithms, and programming models. Many people contributed to making this diverse program a reality. The authors who submitted their high-quality papers deserve the most credit for GABB'2016. We are also grateful to our program committee members who produced excellent reviews in a tight timeframe.