Erika Fuentes
University of Tennessee
Publications
Featured research published by Erika Fuentes.
Proceedings of the IEEE | 2005
James Demmel; Jack J. Dongarra; Victor Eijkhout; Erika Fuentes; Antoine Petitet; Rich Vuduc; R. C. Whaley; Katherine A. Yelick
One of the main obstacles to the efficient solution of scientific problems is that of tuning software, both to the available architecture and to the user problem at hand. We describe approaches for obtaining tuned high-performance kernels and for automatically choosing suitable algorithms. Specifically, we describe the generation of dense and sparse Basic Linear Algebra Subprograms (BLAS) kernels, and the selection of linear solver algorithms. However, the ideas presented here extend beyond these areas, which can thus be considered a proof of concept.
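A minimal sketch of the empirical-tuning idea behind such generated kernels, assuming a blocked matrix multiply whose block size is the tuning parameter; the variants, block sizes, and problem size are invented for illustration and are not taken from the paper:

```python
import time
import numpy as np

def matmul_blocked(A, B, block):
    """Blocked matrix multiply; the block size is the tuning parameter."""
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    for i0 in range(0, n, block):
        for k0 in range(0, k, block):
            for j0 in range(0, m, block):
                C[i0:i0+block, j0:j0+block] += (
                    A[i0:i0+block, k0:k0+block] @ B[k0:k0+block, j0:j0+block]
                )
    return C

def time_variant(fn, A, B, repeats=3):
    """Median wall-clock time of a kernel variant on this machine."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(A, B)
        times.append(time.perf_counter() - t0)
    return sorted(times)[len(times) // 2]

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
candidates = {f"blocked-{b}": (lambda A, B, b=b: matmul_blocked(A, B, b))
              for b in (16, 32, 64, 128)}
candidates["vendor"] = np.matmul          # the installed BLAS, as a baseline
timings = {name: time_variant(fn, A, B) for name, fn in candidates.items()}
print("fastest variant:", min(timings, key=timings.get))
```

The same search-and-time loop, applied to many more code-generation parameters, is what lets a generator adapt a kernel to the machine it is installed on.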
IBM Journal of Research and Development | 2006
Jack J. Dongarra; George Bosilca; Zizhong Chen; Victor Eijkhout; Graham E. Fagg; Erika Fuentes; Julien Langou; Piotr Luszczek; Jelena Pješivac-Grbović; Keith Seymour; Haihang You; Sathish S. Vadhiyar
The challenge for the development of next-generation software is the successful management of the complex computational environment while delivering to the scientist the full power of flexible compositions of the available algorithmic alternatives. Self-adapting numerical software (SANS) systems are intended to meet this significant challenge. The process of arriving at an efficient numerical solution of problems in computational science involves numerous decisions by a numerical expert. Attempts to automate such decisions distinguish three levels: algorithmic decision, management of the parallel environment, and processor-specific tuning of kernels. Additionally, at any of these levels we can decide to rearrange the user's data. In this paper we look at a number of efforts at the University of Tennessee to investigate these areas.
ACM Transactions on Mathematical Software | 2009
Victor Eijkhout; Erika Fuentes
We propose a standard for generating, manipulating, and storing metadata describing numerical problems, in particular properties of matrices and linear systems. The standard comprises an API for software that generates and queries metadata, and an XML format for permanent storage of metadata. The API is open-ended, allowing other parties to define additional metadata categories to be generated and stored within this framework. Furthermore, we present two software libraries, NMD and AnaMod, that implement this standard and that contain a number of computational modules for numerical metadata. The libraries, more than simply illustrating the use of the standard, provide considerable utility to numerical researchers.
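A hedged sketch of the idea only, not the actual NMD/AnaMod API or XML schema; the element names and the chosen properties below are invented for illustration:

```python
import xml.etree.ElementTree as ET
import numpy as np
import scipy.sparse as sp

def matrix_metadata_xml(A):
    """Compute a few matrix properties and serialize them as an XML fragment."""
    A = sp.csr_matrix(A)
    diag = np.abs(A.diagonal())
    offdiag = abs(A).sum(axis=1).A1 - diag
    props = {
        "rows": A.shape[0],
        "nnz": A.nnz,
        "symmetric": (abs(A - A.T) > 1e-12).nnz == 0,
        "diagonally-dominant": bool(np.all(diag >= offdiag)),
    }
    root = ET.Element("metadata", attrib={"object": "matrix"})
    category = ET.SubElement(root, "category", attrib={"name": "structure"})
    for name, value in props.items():
        ET.SubElement(category, "property", attrib={"name": name}).text = str(value)
    return ET.tostring(root, encoding="unicode")

A = sp.random(100, 100, density=0.05, format="csr") + 10 * sp.eye(100)
print(matrix_metadata_xml(A))
```

Stored this way, the metadata can be computed once and reused by any tool that understands the format, which is the point of fixing a common schema.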
International Journal of Parallel Programming | 2005
Victor Eijkhout; Erika Fuentes; Thomas Eidson; Jack J. Dongarra
Self-Adapting Numerical Software (SANS) systems aim to automate some of the laborious human decision making involved in adapting numerical algorithms to problem data, network conditions, and computational platform. In this paper we describe the structure of a SANS system that tackles automatic algorithm choice, based on dynamic inspection of the problem data. We describe the various components of such a system, and their interfaces.
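A minimal sketch of the kind of component decomposition and interfaces such a system involves; the class names and the rule-based recommendation are invented, and SciPy solvers stand in for a real solver library:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

class Analyzer:
    """Extracts features from the user's problem data (here, a sparse matrix)."""
    def analyze(self, A):
        return {"order": A.shape[0],
                "nnz": A.nnz,
                "symmetric": (abs(A - A.T) > 1e-12).nnz == 0}

class Recommender:
    """Maps extracted features to a solver choice; the rules are placeholders."""
    def recommend(self, features):
        return {"method": "cg" if features["symmetric"] else "gmres"}

class Executor:
    """Runs the recommended method."""
    solvers = {"cg": spla.cg, "gmres": spla.gmres}
    def run(self, choice, A, b):
        return self.solvers[choice["method"]](A, b)

# Driver wiring the components together through their interfaces.
A = 4 * sp.eye(50, format="csr") + sp.random(50, 50, density=0.02, format="csr")
b = A @ np.ones(50)
choice = Recommender().recommend(Analyzer().analyze(A))
x, info = Executor().run(choice, A, b)
print(choice, "converged" if info == 0 else f"info={info}")
```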
Cluster Computing and the Grid | 2003
Micah Beck; Ying Ding; Erika Fuentes; Sharmila Kancherla
An exposed approach in computer service architecture is one that offers client software a primitive service whose semantics are closely based on the underlying physical infrastructure. The exposed approach relies on the client to build higher-level services, with more abstract semantics, out of such primitive tools using sophisticated compilation or run-time algorithms. Current approaches to reliable multicast focus on encapsulated algorithms for efficient retransmission of datagrams to the sets of receivers that require them. These approaches include augmenting the primary multicast data channel with direct TCP connections or with secondary multicast channels for retransmissions, and allowing retransmissions to originate from nodes in the middle of the network. In this paper we offer an exposed approach to multicast that uses an underlying Logistical Networking infrastructure, which makes possible the implementation of any of the current retransmission algorithms as well as new strategies yet to be devised.
Software Automatic Tuning: From Concepts to State-of-the-Art Results | 2011
Sanjukta Bhowmick; Victor Eijkhout; Yoav Freund; Erika Fuentes; David E. Keyes
The solution of sparse linear systems, a fundamental and resource-intensive task in scientific computing, can be approached through multiple algorithms. Using an algorithm well adapted to the characteristics of the task can significantly enhance performance, for example by reducing the time required for the operation, without compromising the quality of the result. However, the “best” solution method can vary even across linear systems generated in the course of the same PDE-based simulation, which makes solver selection a very challenging problem. In this paper, we use a machine learning technique, Alternating Decision Trees (ADT), to select efficient solvers based on the properties of sparse linear systems and runtime-dependent features, such as the stage of the simulation. We demonstrate the effectiveness of this method through empirical results over linear systems drawn from computational fluid dynamics and magnetohydrodynamics applications. The results also demonstrate that using ADT can alleviate the problem of “over-fitting”, which occurs when only a limited amount of data is available.
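An illustrative sketch of the learning setup only: scikit-learn has no alternating decision tree, so an ordinary decision tree stands in for ADT, and the features and “best solver” labels below are synthetic rather than drawn from the paper's applications:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500
# Synthetic per-system features: matrix order, fill ratio, a diagonal-dominance
# measure, a symmetry flag, and a runtime-dependent feature (simulation time step).
X = np.column_stack([
    rng.integers(1_000, 1_000_000, n),
    rng.uniform(1e-5, 1e-2, n),
    rng.uniform(0.0, 2.0, n),
    rng.integers(0, 2, n),
    rng.integers(0, 50, n),
])
# Synthetic "best solver" labels produced by a simple rule; real labels would come
# from timing candidate solvers on each linear system.
y = np.where(X[:, 3] == 1, "cg+icc",
             np.where(X[:, 2] > 1.0, "bicgstab+ilu", "gmres+ilu"))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```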
Archive | 2010
Victor Eijkhout; Erika Fuentes
In various areas of numerical analysis, there are several possible algorithms for solving a problem. In such cases, each method potentially solves the problem, but the runtimes can differ widely, and breakdown is possible. Also, there is typically no governing theory for finding the best method, or the theory is in essence uncomputable. Thus, the choice of the optimal method is in practice determined by experimentation and ‘numerical folklore’. However, a more systematic approach is needed, for instance because such choices may need to be made in a dynamic context such as a time-evolving system. Thus we formulate this as a classification problem: assign each numerical problem to a class corresponding to the best method for solving that problem. What makes this an interesting problem for Machine Learning is the large number of classes and their relationships. A method is a combination of (at least) a preconditioner and an iterative scheme, making the total number of methods the product of these individual cardinalities. Since this can be a very large number, we want to exploit the structure of the set of classes and find a way to classify the components of a method separately. We have developed various techniques for such multi-stage recommendations, using automatic recognition of super-classes. These techniques are shown to pay off very well in our application area of iterative linear system solvers. We present the basic concepts of our recommendation strategy and give an overview of the software libraries that make up the Salsa (Self-Adapting Large-scale Solver Architecture) project.
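A hedged sketch of the component-wise recommendation idea, not the Salsa implementation: one classifier per method component instead of a single classifier over the full product of classes, with synthetic features and labels:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 400
X = rng.random((n, 6))  # stand-in matrix/problem features
# Synthetic "best component" labels; in practice these come from timing runs.
best_prec = np.where(X[:, 0] > 0.5, "ilu", "jacobi")
best_iter = np.where(X[:, 1] > 0.5, "gmres", "bicgstab")

# One classifier per component instead of one over every (prec, scheme) pair.
prec_clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, best_prec)
iter_clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, best_iter)

def recommend(features):
    """Combine the per-component predictions into a full method."""
    f = np.asarray(features).reshape(1, -1)
    return prec_clf.predict(f)[0], iter_clf.predict(f)[0]

print(recommend(rng.random(6)))
```

Splitting the prediction this way keeps the number of labels per classifier small, which is the practical benefit of exploiting the structure of the class set.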
International Conference on Machine Learning and Applications | 2008
Victor Eijkhout; Erika Fuentes
In evolving applications, there is a need for the dynamic selection of algorithms or algorithm parameters. Such selection is hardly ever governed by exact theory, so intelligent recommender systems have been proposed. In our application area, the iterative solution of linear systems of equations, the recommendation process is especially complicated, since the classes have a multi-dimensional structure. We discuss different strategies of recommending the different components of the algorithms.
International Conference on Computational Science | 2002
Alessandro Bassi; Micah Beck; Erika Fuentes; Terry Moore; James S. Plank
It is commonly observed that the continued exponential growth in the capacity of fundamental computing resources — processing power, communication bandwidth, and storage — is working a revolution in the capabilities and practices of the research community. It has become increasingly evident that the most revolutionary applications of this superabundance use resource sharing to enable new possibilities for collaboration and mutual benefit. Over the past 30 years, two basic models of resource sharing with different design goals have emerged. The differences between these two approaches, which we distinguish as the Computer Center and the Internet models, tend to generate divergent opportunity spaces, and it therefore becomes important to explore the alternative choices they present as we plan for and develop an information infrastructure for the scientific community in the next decade.
Archive | 2003
Victor Eijkhout; Erika Fuentes