Publication


Featured research published by Amy N. Langville.


Computational Statistics & Data Analysis | 2007

Algorithms and applications for approximate nonnegative matrix factorization

Michael W. Berry; Murray Browne; Amy N. Langville; V. Paul Pauca; Robert J. Plemmons

The development and use of low-rank approximate nonnegative matrix factorization (NMF) algorithms for feature extraction and identification in the fields of text mining and spectral data analysis are presented. The evolution and convergence properties of hybrid methods based on both sparsity and smoothness constraints for the resulting nonnegative matrix factors are discussed. The interpretability of NMF outputs in specific contexts is discussed, along with opportunities for future work on modifying NMF algorithms for large-scale and time-varying data sets.
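
As a concrete reference point for the class of algorithms the paper surveys, here is a minimal sketch of the classic multiplicative-update NMF iteration (Lee and Seung), without the sparsity and smoothness constraints the paper adds; the function name and parameters are illustrative, not the authors' code.

```python
import numpy as np

def nmf(A, k, iters=200, eps=1e-9, seed=0):
    """Rank-k approximate NMF, A ~= W H with W, H >= 0, via the classic
    multiplicative updates that monotonically decrease ||A - W H||_F."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)   # eps guards against 0/0
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H
```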


SIAM Review | 2005

A Survey of Eigenvector Methods for Web Information Retrieval

Amy N. Langville; Carl D. Meyer

Web information retrieval is significantly more challenging than traditional information retrieval over well-controlled, small document collections. One main difference between traditional information retrieval and Web information retrieval is the Web's hyperlink structure. This structure has been exploited by several of today's leading Web search engines, particularly Google and Teoma. In this survey paper, we focus on Web information retrieval methods that use eigenvector computations, presenting the three popular methods of HITS, PageRank, and SALSA.
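
Of the three surveyed methods, PageRank is the easiest to state compactly. Below is a minimal dense power-method sketch of the basic PageRank iteration, assuming a row-normalized link matrix with all-zero rows for dangling pages; the names and default parameters are illustrative.

```python
import numpy as np

def pagerank(H, alpha=0.85, tol=1e-10):
    """Power method for the PageRank vector of a row-normalized link
    matrix H (all-zero rows mark dangling pages)."""
    n = H.shape[0]
    v = np.full(n, 1.0 / n)                 # uniform teleportation vector
    dangling = H.sum(axis=1) == 0
    pi = v.copy()
    while True:
        new = alpha * (pi @ H + pi[dangling].sum() * v) + (1 - alpha) * v
        if np.abs(new - pi).sum() < tol:
            return new
        pi = new
```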


SIAM Journal on Scientific Computing | 2005

A Reordering for the PageRank Problem

Amy N. Langville; Carl D. Meyer

We describe a reordering particularly suited to the PageRank problem, which reduces the computation of the PageRank vector to that of solving a much smaller system and then using forward substitution to get the full solution vector. We compare the theoretical rates of convergence of the original PageRank algorithm to that of the new reordered PageRank algorithm, showing that the new algorithm can do no worse than the original algorithm. We present results of an experimental comparison on five datasets, which demonstrate that the reordered PageRank algorithm can provide a speedup of as much as a factor of 6. We also note potential additional benefits that result from the proposed reordering.
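
The "smaller system plus forward substitution" structure can be illustrated for the simplest case, where the reordering moves dangling pages (the zero rows of the link matrix) last. This is a hedged sketch of that one-level reordering applied to the linear-system formulation of PageRank, not the paper's full recursive procedure.

```python
import numpy as np

def reordered_pagerank(H, alpha=0.85, v=None):
    """PageRank via the dangling-node reordering of the linear system
    pi^T (I - alpha H) = v^T (normalized afterward).

    H: row-normalized link matrix with all-zero rows for dangling pages."""
    n = H.shape[0]
    v = np.full(n, 1.0 / n) if v is None else v
    dang = np.flatnonzero(H.sum(axis=1) == 0)
    nd = np.setdiff1d(np.arange(n), dang)
    H11 = H[np.ix_(nd, nd)]                # nondangling -> nondangling links
    H12 = H[np.ix_(nd, dang)]              # nondangling -> dangling links
    # much smaller solve: pi1^T (I - alpha H11) = v1^T
    pi1 = np.linalg.solve((np.eye(len(nd)) - alpha * H11).T, v[nd])
    # forward substitution recovers the dangling block
    pi2 = v[dang] + alpha * pi1 @ H12
    pi = np.empty(n)
    pi[nd], pi[dang] = pi1, pi2
    return pi / pi.sum()
```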


SIAM Journal on Matrix Analysis and Applications | 2005

Updating Markov Chains with an Eye on Google's PageRank

Amy N. Langville; Carl D. Meyer

An iterative algorithm based on aggregation/disaggregation principles is presented for updating the stationary distribution of a finite homogeneous irreducible Markov chain. The focus is on large-scale problems of the kind characterized by Google's PageRank application, but the algorithm is shown to work well in general contexts. The algorithm is flexible in that it allows for changes to the transition probabilities as well as for the creation or deletion of states. In addition to establishing the rate of convergence, it is proven that the algorithm is globally convergent. Results of numerical experiments are presented.
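
A generic aggregation/disaggregation sweep conveys the principle the paper builds on: aggregate the chain under the current estimate, solve the small coupled chain exactly, disaggregate, and smooth with one power step. The sketch below is this textbook iteration, not the paper's specialized updating algorithm; the partition format and function name are assumptions.

```python
import numpy as np

def ad_step(P, pi, blocks):
    """One aggregation/disaggregation sweep toward solving pi P = pi.

    P: row-stochastic transition matrix; pi: current estimate;
    blocks: list of index arrays partitioning the states."""
    K = len(blocks)
    C = np.empty((K, K))                   # small coupling (aggregated) chain
    for I, bI in enumerate(blocks):
        w = pi[bI] / pi[bI].sum()          # censored weights within block I
        for J, bJ in enumerate(blocks):
            C[I, J] = w @ P[np.ix_(bI, bJ)].sum(axis=1)
    vals, vecs = np.linalg.eig(C.T)        # stationary vector of C: xi C = xi
    xi = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    xi /= xi.sum()
    new = np.empty_like(pi)
    for I, bI in enumerate(blocks):        # disaggregate block by block
        new[bI] = xi[I] * pi[bI] / pi[bI].sum()
    new = new @ P                          # one power-step smoothing
    return new / new.sum()
```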


International World Wide Web Conference | 2004

Updating PageRank with iterative aggregation

Amy N. Langville; Carl D. Meyer

We present an algorithm for updating the PageRank vector [1]. Due to the scale of the Web, Google only updates its famous PageRank vector on a monthly basis. However, the Web changes much more frequently. Drastically speeding up the PageRank computation can lead to fresher, more accurate rankings of the webpages retrieved by search engines. It can also put the goal of real-time personalized rankings within reach. On two small subsets of the Web, our algorithm updates PageRank using just 25% and 14%, respectively, of the time required by the original PageRank algorithm. Our algorithm uses iterative aggregation techniques [7, 8] to focus on the slow-converging states of the Markov chain. The most exciting feature of this algorithm is that it can be joined with other PageRank acceleration methods, such as the dangling node lumpability algorithm [6], quadratic extrapolation [4], and adaptive PageRank [3], to realize even greater speedups (potentially a factor of 60 or more when all algorithms are combined). Our solution harnesses the power of iterative aggregation principles for Markov chains to allow for much more frequent updates to these valuable ranking vectors.
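
In the updating setting, a natural partition keeps the changed (slow-converging) states as singleton blocks and lumps all remaining states into one superstate, so each sweep solves only a small chain. A minimal sketch, reusing the ad_step function from the aggregation/disaggregation sketch above; the partition choice and sweep count are illustrative, not the paper's exact procedure.

```python
import numpy as np
# assumes ad_step from the aggregation/disaggregation sketch above

def update_pagerank(P_new, pi_old, changed, sweeps=20):
    """Warm-start update after local changes to the chain: each changed
    (slow-converging) state is its own block; all other states are lumped
    into a single superstate, so each sweep solves only a small chain."""
    rest = np.setdiff1d(np.arange(P_new.shape[0]), changed)
    blocks = [np.array([i]) for i in changed] + [rest]
    pi = pi_old / pi_old.sum()             # stale vector as the starting point
    for _ in range(sweeps):
        pi = ad_step(P_new, pi, blocks)
    return pi
```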


Numerical Linear Algebra with Applications | 2004

A Kronecker product approximate preconditioner for SANs

Amy N. Langville; William J. Stewart

Many very large Markov chains can be modelled efficiently as stochastic automata networks (SANs). A SAN is composed of individual automata which, for the most part, act independently, requiring only infrequent interaction. SANs represent the generator matrix Q of the underlying Markov chain compactly as a sum of Kronecker products of smaller matrices. Thus, storage savings are immediate. The benefit of a SAN's compact representation, known as the descriptor, is often outweighed by its tendency to make analysis of the underlying Markov chain difficult. While iterative and projection methods have been used to solve the system πQ = 0, the time until these methods converge to the stationary solution π is still unsatisfactory. The SAN's compact representation has made the next logical research step, preconditioning, thorny. Several preconditioners for SANs have been proposed and tested, yet each has enjoyed little or no success. Encouraged by the recent success of approximate inverses as preconditioners, we have explored their potential as SAN preconditioners. One particularly relevant finding on approximate inverse preconditioning is the nearest Kronecker product approximation discovered by Pitsianis and Van Loan. In this paper, we extend the nearest Kronecker product technique to approximate the Q matrix for a SAN with a single Kronecker product, A1 ⊗ A2 ⊗ … ⊗ AN. We then take M = A1 ⊗ A2 ⊗ … ⊗ AN as our SAN NKP preconditioner.
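
For two factors, the nearest Kronecker product of Pitsianis and Van Loan reduces to a rank-1 SVD of a rearranged matrix. A minimal dense sketch of that two-factor case follows; the paper's actual contribution, extending this to the N-factor SAN descriptor, is not reproduced here, and the function name and signature are assumptions.

```python
import numpy as np

def nearest_kronecker(Q, shape_A, shape_B):
    """Best A (m1 x n1), B (m2 x n2) minimizing ||Q - A kron B||_F.

    Pitsianis / Van Loan: rearrange Q so the problem becomes a rank-1
    approximation, then take the leading SVD term."""
    (m1, n1), (m2, n2) = shape_A, shape_B
    R = np.empty((m1 * n1, m2 * n2))
    for i in range(m1):                    # each (i, j) block of Q ...
        for j in range(n1):                # ... becomes one row of R
            R[i * n1 + j] = Q[i * m2:(i + 1) * m2, j * n2:(j + 1) * n2].ravel()
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    A = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1)
    B = np.sqrt(s[0]) * Vt[0].reshape(m2, n2)
    if A.sum() < 0:                        # fix the joint sign ambiguity
        A, B = -A, -B
    return A, B
```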


Journal of Quantitative Analysis in Sports | 2009

Offense-Defense Approach to Ranking Team Sports

Anjela Govan; Amy N. Langville; Carl D. Meyer

The rank of an object is its relative importance to the other objects in the set. Often a rank is an integer assigned from the set 1, ..., n. A ranking model is a method of determining a way in which the ranks are assigned. Usually a ranking model uses information available on the objects to determine their respective ratings. The most recognized application of ranking is competitive sports. Numerous ranking models have been created over the years to compute team ratings for various sports. In this paper we propose a flexible, easily coded, fast, iterative approach, which we call the Offense-Defense Model (ODM), for generating team ratings. The convergence of the ODM is grounded in the theory of matrix balancing.
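
The ODM iteration is short enough to state directly. A minimal sketch under the usual convention that A[i, j] holds the points team j scored against team i, assuming every team has scored and conceded at least once; the function name and stopping rule are illustrative.

```python
import numpy as np

def odm(A, iters=500, tol=1e-9):
    """Offense-Defense Model ratings.

    A[i, j] = points team j scored against team i (0 if they never met).
    Offense o and defense d solve o = A^T (1/d), d = A (1/o); high o is
    good, low d is good, and r = o / d is the combined rating."""
    d = np.ones(A.shape[0])
    for _ in range(iters):
        o = A.T @ (1.0 / d)          # scoring on strong defenses -> high o
        d_new = A @ (1.0 / o)        # conceding to weak offenses -> high d
        if np.abs(d_new - d).max() < tol:
            d = d_new
            break
        d = d_new
    return o, d, o / d
```

This alternating scaling is exactly the Sinkhorn-type iteration of matrix balancing that the abstract points to for its convergence theory.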


SIAM Journal on Scientific Computing | 2011

Sensitivity and Stability of Ranking Vectors

Timothy P. Chartier; Erich Kreutzer; Amy N. Langville; Kathryn Pedings

We conduct an analysis of the sensitivity of three linear algebra-based ranking methods: the Colley, Massey, and Markov methods. Our analysis employs reverse engineering, in that we start with a simple input ranking vector that we use to build a perfect season, and we then determine the output rating vectors produced by the three methods. This analysis shows that the Markov (PageRank) rating vector is strongly nonuniformly spaced, while the Colley and Massey methods provide a uniformly spaced rating vector, which is more natural for a perfect season. We further extend our study of the sensitivity and rank stability of these three methods with a careful perturbation analysis of the same perfect-season dataset. We find that the Markov method is highly sensitive to small changes in the data and show with an example from the NFL that the Markov method's ranking vector displays some odd unstable behavior.
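
The perfect-season experiment is easy to reproduce in miniature. The sketch below builds a perfect season, then contrasts the evenly spaced Colley ratings with the strongly nonuniform stationary vector of a simple losers-vote-for-winners Markov chain; this Markov construction is one common variant and may differ in detail from the paper's.

```python
import numpy as np

def perfect_season(n):
    """losses[i, j] = 1 if team i lost to team j; team 0 beats everyone."""
    losses = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            losses[j, i] = 1.0             # the weaker team j lost to team i
    return losses

def colley(losses):
    n = losses.shape[0]
    games = losses + losses.T              # games[i, j] = meetings of i and j
    C = 2 * np.eye(n) + np.diag(games.sum(axis=1)) - games
    b = 1 + (losses.sum(axis=0) - losses.sum(axis=1)) / 2   # 1 + (w - l)/2
    return np.linalg.solve(C, b)

def markov(losses, tol=1e-12):
    n = losses.shape[0]
    V = losses.astype(float).copy()        # each loss is a vote for the winner
    undefeated = V.sum(axis=1) == 0
    V[undefeated] = 1.0 / n                # undefeated team votes uniformly
    V[~undefeated] /= V[~undefeated].sum(axis=1, keepdims=True)
    pi = np.full(n, 1.0 / n)
    while True:                            # power method for pi V = pi
        new = pi @ V
        if np.abs(new - pi).sum() < tol:
            return new
        pi = new

losses = perfect_season(8)
print(np.round(colley(losses), 3))         # evenly spaced Colley ratings
print(np.round(markov(losses), 3))         # strongly nonuniform Markov ratings
```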


INFORMS Journal on Computing | 2004

Testing the Nearest Kronecker Product Preconditioner on Markov Chains and Stochastic Automata Networks

Amy N. Langville; William J. Stewart

This paper is the experimental follow-up to Langville and Stewart (2002), where the theoretical background for the nearest Kronecker product (NKP) preconditioner was developed. Here we test the NKP preconditioner on both Markov chains (MCs) and stochastic automata networks (SANs). We conclude that the NKP preconditioner is not appropriate for general MCs, but is very effective for an MC stored as a SAN.


Journal of Quantitative Analysis in Sports | 2011

Sports Ranking with Nonuniform Weighting

Timothy P. Chartier; Erich Kreutzer; Amy N. Langville; Kathryn Pedings

This paper introduces the integration of nonuniform weighting into sports ranking. While the ideas of the paper can be applied to any ranking method, a careful investigation is conducted on incorporating weighting into two of today's most popular ranking algorithms: the Colley and Massey methods, which were both developed for sports ranking applications and have since become baseline standards there. The article introduces how to adapt both methods in order to weight some factors, such as late-season play, home-court advantage, or a winning streak, more heavily in computing the ratings of teams. To illustrate the utility of such weighting, the paper applies the algorithmic ideas in the area in which both Colley and Massey motivated their methods: sports. As such, the target application is producing brackets for the Division I NCAA Men's Basketball Tournament, also known as March Madness. All the methods were used to produce brackets for the 2010 tournament, and their results are given, including several mathematically produced brackets that were better than 90 percent of the nearly 5 million brackets submitted to ESPN's Tournament Challenge.
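
Weighting the Colley method amounts to a small change: every game contributes its weight, rather than 1, to the Colley matrix and to the win-loss differential. A hedged sketch of that idea, with the game-list format and weights chosen purely for illustration.

```python
import numpy as np

def weighted_colley(games, n):
    """Colley ratings where each game contributes its weight instead of 1.

    games: iterable of (winner, loser, weight); weight = 1 for every game
    recovers the standard Colley method."""
    C = 2 * np.eye(n)
    b = np.ones(n)
    for win, lose, wt in games:
        C[win, win] += wt                  # weighted total games played
        C[lose, lose] += wt
        C[win, lose] -= wt                 # weighted head-to-head count
        C[lose, win] -= wt
        b[win] += wt / 2                   # weighted (wins - losses)/2
        b[lose] -= wt / 2
    return np.linalg.solve(C, b)

# the late-season upset (weight 2) counts double
print(weighted_colley([(0, 1, 1.0), (1, 2, 1.0), (2, 0, 2.0)], 3))
```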

Collaboration


Dive into Amy N. Langville's collaborations.

Top Co-Authors

Carl D. Meyer

North Carolina State University


William J. Stewart

North Carolina State University

Anjela Govan

North Carolina State University
