Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Tamara G. Kolda is active.

Publication


Featured research published by Tamara G. Kolda.


ACM Transactions on Information Systems | 1998

A semidiscrete matrix decomposition for latent semantic indexing information retrieval

Tamara G. Kolda; Dianne P. O'Leary

The vast amount of textual information available today is useless unless it can be effectively and efficiently searched. The goal in information retrieval is to find documents that are relevant to a given user query. We can represent a document collection by a matrix whose (i, j) entry is nonzero only if the ith term appears in the jth document; thus each document corresponds to a column vector. The query is also represented as a column vector whose ith entry is nonzero only if the ith term appears in the query. We score each document for relevancy by taking its inner product with the query. The highest-scoring documents are considered the most relevant. Unfortunately, this method does not necessarily retrieve all relevant documents because it is based on literal term matching. Latent semantic indexing (LSI) replaces the document matrix with an approximation generated by the truncated singular-value decomposition (SVD). This method has been shown to overcome many difficulties associated with literal term matching. In this article we propose replacing the SVD with the semidiscrete decomposition (SDD). We will describe the SDD approximation, show how to compute it, and compare the SDD-based LSI method to the SVD-based LSI method. We will show that SDD-based LSI does as well as SVD-based LSI in terms of document retrieval while requiring only one-twentieth the storage and one-half the time to compute each query. We will also show how to update the SDD approximation when documents are added to or deleted from the document collection.
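To make the vector-space setup concrete, here is a minimal Python sketch of LSI-style retrieval with a truncated SVD; the toy term-document matrix, query, and rank k are illustrative assumptions, not data from the paper. The SDD described in the article would replace the SVD factors with matrices whose entries are limited to -1, 0, and +1 (plus a diagonal scaling), which is the source of its storage savings.

# A minimal sketch of LSI-style scoring with a truncated SVD.
# The matrix, query, and rank below are toy values chosen for illustration.
import numpy as np

# Term-document matrix: A[i, j] is nonzero iff term i appears in document j.
A = np.array([
    [1, 0, 1, 0],   # term 0
    [1, 1, 0, 0],   # term 1
    [0, 1, 0, 1],   # term 2
    [0, 0, 1, 1],   # term 3
], dtype=float)

k = 2                                          # rank of the truncated approximation
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]    # rank-k LSI approximation of A

# Query vector: the ith entry is nonzero iff term i appears in the query.
q = np.array([1, 0, 0, 1], dtype=float)

# Score each document by its inner product with the query; rank high to low.
scores = q @ A_k
print("document ranking (best first):", np.argsort(scores)[::-1])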


SIAM Journal on Optimization | 1998

BFGS with update skipping and varying memory

Tamara G. Kolda; Dianne P. O'Leary; Larry Nazareth

We give conditions under which limited-memory quasi-Newton methods with exact line searches will terminate in n steps when minimizing n-dimensional quadratic functions. We show that although all Broyden family methods terminate in n steps in their full-memory versions, only BFGS does so with limited memory. Additionally, we show that full-memory Broyden family methods with exact line searches terminate in at most n + p steps when p matrix updates are skipped. We introduce new limited-memory BFGS variants and test them on nonquadratic minimization problems.
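As a concrete illustration of the n-step termination property, here is a short Python sketch (assumed, not the paper's code) of full-memory BFGS with an exact line search on a randomly generated convex quadratic; the paper's contribution concerns limited-memory variants, for which the abstract states BFGS retains this property.

# Full-memory BFGS with exact line search on f(x) = 0.5*x'Qx - b'x.
# Q, b, and n are illustrative; the gradient should vanish after n steps.
import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)          # symmetric positive definite Hessian
b = rng.standard_normal(n)

x = np.zeros(n)
H = np.eye(n)                        # inverse-Hessian approximation
g = Q @ x - b                        # gradient of the quadratic

for _ in range(n):
    if np.linalg.norm(g) < 1e-12:    # already at the minimizer
        break
    d = -H @ g                       # quasi-Newton search direction
    t = -(g @ d) / (d @ Q @ d)       # exact line search step for a quadratic
    s = t * d
    x = x + s
    g_new = Q @ x - b
    y = g_new - g
    rho = 1.0 / (y @ s)
    I = np.eye(n)
    H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
        + rho * np.outer(s, s)       # BFGS update of the inverse Hessian
    g = g_new

print("gradient norm after n steps:", np.linalg.norm(g))   # ~ 0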


Lecture Notes in Computer Science | 1998

Partitioning sparse rectangular matrices for parallel processing

Tamara G. Kolda

This paper considers partitioning sparse rectangular matrices for parallel processing. The partitioning problem has been well studied in the square symmetric case, but the rectangular problem has received very little attention. The rectangular matrix partitioning problem is formalized, and several methods for solving it are discussed. The spectral partitioning method for symmetric matrices is extended to the rectangular case and compared to three new methods: the alternating partitioning method and two hybrid methods. The hybrid methods are shown to be best.
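The spectral extension can be sketched in a few lines of Python: treat the rows and columns of the rectangular matrix as the two vertex sets of a bipartite graph, build its Laplacian, and split vertices by the sign of the Fiedler vector. The matrix size and density below are illustrative assumptions, and a real partitioner would refine this split for load balance.

# Spectral 2-way partitioning of a sparse rectangular matrix via its bipartite graph.
import numpy as np
import scipy.sparse as sp

A = sp.random(8, 5, density=0.4, random_state=1, format="csr")   # toy rectangular matrix
m, n = A.shape

# Bipartite adjacency: vertices 0..m-1 are rows of A, vertices m..m+n-1 are columns.
B = sp.bmat([[None, A], [A.T, None]], format="csr")
B.data[:] = 1.0                                   # treat each nonzero as an unweighted edge
degrees = np.asarray(B.sum(axis=1)).ravel()
L = sp.diags(degrees) - B                         # graph Laplacian of the bipartite graph

# Fiedler vector: eigenvector of the second-smallest Laplacian eigenvalue
# (a dense eigendecomposition is fine at this toy size).
vals, vecs = np.linalg.eigh(L.toarray())
fiedler = vecs[:, 1]

row_part = (fiedler[:m] >= 0).astype(int)         # 2-way split of the rows
col_part = (fiedler[m:] >= 0).astype(int)         # 2-way split of the columns
print("row partition:   ", row_part)
print("column partition:", col_part)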


Parallel Computing | 1998

Partitioning Sparse Rectangular Matrices for Parallel Computations of Ax and Aᵀv

Bruce Hendrickson; Tamara G. Kolda

This paper addresses the problem of partitioning the nonzeros of sparse nonsymmetric and nonsquare matrices in order to efficiently compute parallel matrix-vector and matrix-transpose-vector multiplies. Our goal is to balance the work per processor while keeping communication costs low. Although the symmetric partitioning problem has been well-studied, the nonsymmetric and rectangular cases have received scant attention. We show that this problem can be described as a partitioning problem on a bipartite graph. We then describe how to use (modified) multilevel methods to partition these graphs and how to implement the matrix multiplies in parallel to take advantage of the partitioning. Finally, we compare various multilevel and other partitioning strategies on matrices from different applications. The multilevel methods are shown to be best.
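The bipartite-graph formulation is easy to see in code: rows and columns are the two vertex sets, every nonzero A[i, j] is an edge, and edges whose endpoints are assigned to different processors approximate the communication incurred by parallel y = Ax and z = Aᵀv. The Python sketch below uses a hand-picked two-processor assignment purely for illustration; the paper uses (modified) multilevel methods to find good assignments.

# Edge cut of a row/column assignment as a proxy for communication volume.
import numpy as np
import scipy.sparse as sp

A = sp.random(8, 6, density=0.3, random_state=2, format="coo")   # toy sparse matrix
m, n = A.shape

# Toy 2-processor assignment of row and column vertices (illustrative only).
row_owner = np.array([0, 0, 0, 0, 1, 1, 1, 1])
col_owner = np.array([0, 0, 0, 1, 1, 1])

# Each nonzero (i, j) is a bipartite edge between row vertex i and column vertex j;
# it is "cut" if its endpoints live on different processors.
cut = sum(row_owner[i] != col_owner[j] for i, j in zip(A.row, A.col))
work = [np.count_nonzero(row_owner[A.row] == p) for p in (0, 1)]
print("edges cut (communication proxy):", cut)
print("nonzeros per processor (work for y = Ax):", work)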


Women of Applied Mathematics: Research and Leadership, University of Maryland, College Park, Maryland, October 8–10, 2003 | 2004

Workshop on Women of Applied Mathematics: Research and Leadership

Dianne P. O'Leary; Tamara G. Kolda

We held a two-and-a-half-day workshop on Women of Applied Mathematics: Research and Leadership at the University of Maryland in College Park, Maryland, October 8–10, 2003. The workshop provided a technical and professional forum for eleven senior women and twenty-four early-career women in applied mathematics. Each participant committed to an outreach activity and publication of a report on the workshop's web site. The final session of the workshop produced recommendations for future action.


Proposed for publication in arXiv. | 2013

Counting Triangles in Massive Graphs with MapReduce.

Tamara G. Kolda; Ali Pinar; Seshadhri Comandur; Todd D. Plantenga; Christine Task


Proposed for publication in arXiv. | 2013

On Reciprocity in Massively Multi-player Online Game Networks

Karthik Subbian; Ayush Singhal; Tamara G. Kolda; Ali Pinar; Jaideep Srivastava


Archive | 2011

The BTER Graph Model: Blocked Two-Level Erdos-Renyi.

Tamara G. Kolda; Ali Pinar; Seshadhri Comandur


Archive | 2015

Computing the Largest Entries in a Matrix Product via Sampling.

Tamara G. Kolda; Grey Ballard; Ali Pinar; Seshadhri Comandur


Archive | 2015

Modeling Large-Scale Networks.

Tamara G. Kolda; Sinan Aksoy; Ali Pinar; Todd D. Plantenga; Seshadhri Comandur; Dylan Stark

Collaboration


Dive into Tamara G. Kolda's collaborations.

Top Co-Authors

Todd D. Plantenga, Sandia National Laboratories
Bruce Hendrickson, Sandia National Laboratories
Cynthia A. Phillips, Sandia National Laboratories
Madhav Jha, Pennsylvania State University
Daniel M. Dunlavy, Sandia National Laboratories
Grey Ballard, Sandia National Laboratories
Jaideep Ray, Sandia National Laboratories