Randall Bramley
University of Illinois at Urbana–Champaign
Publications
Featured research published by Randall Bramley.
SIAM Journal on Scientific Computing | 1993
Sirpa Saarinen; Randall Bramley; George Cybenko
The training problem for feedforward neural networks is a nonlinear parameter estimation problem that can be solved by a variety of optimization techniques. Much of the literature on neural networks has focused on variants of gradient descent. The training of neural networks using such techniques is known to be a slow process, with more sophisticated techniques not always performing significantly better. This paper shows that feedforward neural networks can have ill-conditioned Hessians and that this ill-conditioning can be quite common. The analysis and experimental results in this paper lead to the conclusion that many network training problems are ill-conditioned and may not be solved more efficiently by higher-order optimization methods. While the analyses used in this paper are for completely connected layered networks, they extend to networks with sparse connectivity as well. The results suggest that neural networks can have considerable redundancy in parameterizing the function space in a neighborhood of a local minimum.
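To make the conditioning claim concrete, here is a minimal sketch, not the paper's experimental setup: the network size, data, and finite-difference step below are arbitrary choices made for illustration. It estimates the Hessian of a tiny feedforward network's squared-error loss by central finite differences and reports its condition number.

```python
# Illustrative only: estimate the Hessian condition number of a tiny
# feedforward network's squared-error loss via central finite differences.
# Architecture, data, and step size are arbitrary choices for this sketch.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 2))          # 20 samples, 2 inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]       # a smooth target function

n_hidden = 4
n_params = n_hidden * 2 + n_hidden + n_hidden + 1  # W1, b1, w2, b2

def unpack(theta):
    i = 0
    W1 = theta[i:i + n_hidden * 2].reshape(n_hidden, 2); i += n_hidden * 2
    b1 = theta[i:i + n_hidden]; i += n_hidden
    w2 = theta[i:i + n_hidden]; i += n_hidden
    return W1, b1, w2, theta[i]

def loss(theta):
    W1, b1, w2, b2 = unpack(theta)
    h = np.tanh(X @ W1.T + b1)             # hidden layer activations
    pred = h @ w2 + b2
    return 0.5 * np.mean((pred - y) ** 2)

def hessian_fd(f, theta, eps=1e-4):
    """Dense Hessian by central differences; O(n^2) loss evaluations."""
    n = theta.size
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            t = theta.copy()
            t[i] += eps; t[j] += eps; fpp = f(t)
            t[j] -= 2 * eps; fpm = f(t)
            t[i] -= 2 * eps; fmm = f(t)
            t[j] += 2 * eps; fmp = f(t)
            H[i, j] = (fpp - fpm - fmp + fmm) / (4 * eps ** 2)
    return 0.5 * (H + H.T)                 # symmetrize numerical noise

theta = 0.1 * rng.standard_normal(n_params)
eig = np.linalg.eigvalsh(hessian_fd(loss, theta))
print("eigenvalue range:", eig.min(), eig.max())
print("condition number (|max|/|min|):", abs(eig).max() / abs(eig).min())
```

Even at this toy scale, the ratio of extreme Hessian eigenvalues is typically enormous, which is the redundancy-driven ill-conditioning the paper analyzes.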
SIAM Journal on Scientific and Statistical Computing | 1992
Randall Bramley; Ahmed H. Sameh
Three conjugate gradient accelerated row projection (RP) methods for nonsymmetric linear systems are presented and their properties described. One method is based on Kaczmarz's method and has an iteration matrix that is the product of orthogonal projectors; another is based on Cimmino's method and has an iteration matrix that is the sum of orthogonal projectors. A new RP method, which requires fewer matrix-vector operations, explicitly reduces the problem size, is error reducing in the two-norm, and consistently produces better solutions than other RP algorithms, is also introduced. Using comparisons with the conjugate gradient method applied to the normal equations, the properties of RP methods are explained. A row partitioning approach is described that yields parallel implementations suitable for a wide range of computer architectures, requires only a few vectors of extra storage, and allows computing the necessary projections with small errors. Numerical testing verifies the robustness of this approach.
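The two projector structures can be illustrated in a few lines. The sketch below shows one unaccelerated Kaczmarz sweep (a product of row projections) and one unaccelerated Cimmino step (a scaled sum of row projections); the paper's methods add conjugate gradient acceleration and block partitioning on top of these basic iterations, which this sketch does not reproduce.

```python
# Unaccelerated sketches of the two classical row projection schemes
# named above; the test matrix and iteration counts are illustrative.
import numpy as np

def kaczmarz_sweep(A, b, x):
    """One Kaczmarz sweep: project onto each row's hyperplane in turn,
    so the sweep's iteration matrix is a product of orthogonal projectors."""
    for i in range(A.shape[0]):
        a = A[i]
        x = x + ((b[i] - a @ x) / (a @ a)) * a
    return x

def cimmino_step(A, b, x, omega=1.0):
    """One Cimmino step: average the projections onto all row hyperplanes,
    so the iteration involves a scaled sum of orthogonal projectors.
    omega is a relaxation parameter, typically in (0, 2)."""
    row_norms_sq = np.einsum('ij,ij->i', A, A)
    corrections = ((b - A @ x) / row_norms_sq)[:, None] * A
    return x + omega * corrections.mean(axis=0)

rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n)) + 5 * np.eye(n)   # nonsymmetric test system
b = A @ rng.standard_normal(n)

x = np.zeros(n)
for _ in range(200):
    x = kaczmarz_sweep(A, b, x)
print("Kaczmarz residual after 200 sweeps:", np.linalg.norm(A @ x - b))
```

Note that the Cimmino corrections are mutually independent, which is what makes the sum-of-projectors form naturally parallel.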
International Symposium on Computer Architecture | 1993
David J. Kuck; Edward S. Davidson; Duncan H. Lawrie; Ahmed H. Sameh; Chuan-Qi Zhu; Alexander V. Veidenbaum; Jeff Konicek; Pen Chung Yew; Kyle A. Gallivan; William Jalby; Harry A. G. Wijshoff; Randall Bramley; Ulrike Meier Yang; Perry A. Emrath; David A. Padua; Rudolf Eigenmann; Jay Hoeflinger; Greg Jaxon; Zhiyuan Li; T. Murphy; John T. Andrews; Stephen W. Turner
In this paper, we give an overview of the Cedar multiprocessor and present recent performance results. These include the performance of some computational kernels and the Perfect Benchmarks. We also present a methodology for judging parallel system performance and apply this methodology to Cedar, Cray YMP-8, and Thinking Machines CM-5.
SIAM Journal on Scientific Computing | 1997
Xiaoge Wang; Kyle A. Gallivan; Randall Bramley
A new preconditioner for symmetric positive definite systems is proposed, analyzed, and tested. The preconditioner, compressed incomplete modified Gram–Schmidt (CIMGS), is based on an incomplete orthogonal factorization. CIMGS is robust both theoretically and empirically, existing (in exact arithmetic) for any full rank matrix. Numerically it is more robust than an incomplete Cholesky factorization preconditioner (IC) and a complete Cholesky factorization of the normal equations. Theoretical results show that the CIMGS factorization has better backward error properties than complete Cholesky factorization. For symmetric positive definite M-matrices, CIMGS induces a regular splitting and better estimates the complete Cholesky factor as the set of dropped positions gets smaller. CIMGS lies between complete Cholesky factorization and incomplete Cholesky factorization in its approximation properties. These theoretical properties usually hold numerically, even when the matrix is not an M-matrix. When the drop set satisfies a mild and easily verified (or enforced) property, the upper triangular factor CIMGS generates is the same as that generated by incomplete Cholesky factorization. This allows the existence of the IC factorization to be guaranteed, based solely on the target sparsity pattern.
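A rough sketch of the idea behind an incomplete orthogonal factorization follows: modified Gram–Schmidt applied to the columns of A, with entries of R dropped during the factorization. A simple threshold rule stands in here for CIMGS's sparsity-pattern-based dropping, so this conveys only the flavor of the method, not the paper's algorithm.

```python
# Sketch of an incomplete orthogonal factorization in the spirit of
# CIMGS: modified Gram-Schmidt on the columns of A, with small entries
# of R dropped. Threshold dropping is a stand-in for the paper's
# sparsity-pattern-based rule, chosen to keep the sketch short.
import numpy as np

def incomplete_mgs(A, drop_tol=0.1):
    m, n = A.shape
    Q = A.astype(float).copy()
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(Q[:, k])
        Q[:, k] /= R[k, k]
        for j in range(k + 1, n):
            r = Q[:, k] @ Q[:, j]
            if abs(r) >= drop_tol * R[k, k]:   # keep only "large" entries
                R[k, j] = r
                Q[:, j] -= r * Q[:, k]         # orthogonalize against q_k
    return R

rng = np.random.default_rng(2)
A = rng.standard_normal((80, 30))
R = incomplete_mgs(A, drop_tol=0.05)
# R'R approximates A'A, so R is a candidate preconditioner for the
# normal equations; with drop_tol=0 this reduces to a complete QR factor.
err = np.linalg.norm(A.T @ A - R.T @ R) / np.linalg.norm(A.T @ A)
print("relative error of R'R vs A'A:", err)
```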
SIAM Journal on Scientific Computing | 1996
Randall Bramley; Beata Winnicka
In 1980, S.-P. Han [Least-Squares Solution of Linear Inequalities, Tech. Report TR-2141, Mathematics Research Center, University of Wisconsin-Madison, 1980] described a finitely terminating algorithm for solving a system $Ax \leqslant b$ of linear inequalities in a least squares sense. The algorithm uses a singular value decomposition of a submatrix of A on each iteration, making it impractical for all but the smallest problems. This paper shows that a modification of Han's algorithm allows the iterates to be computed using QR factorization with column pivoting, which significantly reduces the computational cost and allows efficient updating/downdating techniques to be used. The effectiveness of this modification is demonstrated, implementation details are given, and the behaviour of the algorithm is discussed. Theoretical and numerical results are shown from the application of the algorithm to linear separability problems.
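The underlying problem is to minimize $\|(Ax - b)_+\|_2$, the norm of the violated part of the system. The sketch below conveys the active-set flavor of such methods: repeatedly solve a least squares problem on the currently violated rows. Here numpy.linalg.lstsq (an SVD-based minimum-norm solve) stands in for the QR-with-column-pivoting machinery of the paper, and the safeguards that give Han's method its finite termination are omitted.

```python
# Illustrative sketch of a Han-style active-set iteration for the least
# squares solution of Ax <= b. The safeguards that guarantee finite
# termination are omitted; the test problem is an arbitrary feasible system.
import numpy as np

def ls_inequalities(A, b, max_iter=100, tol=1e-10):
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        viol = A @ x - b > tol              # active set: violated rows
        if not viol.any():
            break                           # feasible: all inequalities hold
        # Minimum-norm least squares correction on the violated constraints.
        dx, *_ = np.linalg.lstsq(A[viol], b[viol] - A[viol] @ x, rcond=None)
        if np.linalg.norm(dx) < tol:
            break                           # stationary point of ||(Ax-b)+||
        x = x + dx
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 5))
b = A @ rng.standard_normal(5) + 0.5        # feasible by construction
x = ls_inequalities(A, b)
print("max violation:", np.maximum(A @ x - b, 0).max())
```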
International Conference on Supercomputing | 1988
Randall Bramley; Ahmed H. Sameh
Applied Numerical Mathematics | 1991
Randall Bramley; Ahmed H. Sameh
Row projection algorithms for solving nonsymmetric linear systems are theoretically robust, converging for systems with indefinite symmetric parts and arbitrarily distributed eigenvalues. Slow convergence, however, has prevented their widespread use. This paper shows that a domain decomposition approach gives row projection methods that allow parallelism in the computations. The level of concurrency and the size of the created subtasks can be chosen to suit the target machine, and the resulting algorithms have advantages over standard domain decomposition methods.
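A minimal sketch of the block form of such a method follows: the rows are partitioned into blocks, each block's projection is computed independently (these are the parallel subtasks), and the corrections are combined. The block count, relaxation parameter, and test matrix are illustrative choices, not the paper's.

```python
# Sketch of one block Cimmino step: rows of A are split into blocks, the
# projection of the error onto each block's row space is computed
# independently (the parallelizable subtasks), and the corrections summed.
import numpy as np

def block_cimmino_step(blocks, x, omega=1.0):
    """blocks: list of (A_i, b_i) row blocks of a system Ax = b."""
    correction = np.zeros_like(x)
    for A_i, b_i in blocks:                 # independent -> parallelizable
        d, *_ = np.linalg.lstsq(A_i, b_i - A_i @ x, rcond=None)
        correction += d                     # d = A_i^+ (b_i - A_i x)
    return x + omega / len(blocks) * correction

rng = np.random.default_rng(4)
n = 60
A = rng.standard_normal((n, n)) + 4 * np.eye(n)
b = A @ rng.standard_normal(n)

p = 4                                       # number of row blocks
blocks = [(A[i::p], b[i::p]) for i in range(p)]
x = np.zeros(n)
for _ in range(500):
    x = block_cimmino_step(blocks, x)
print("residual:", np.linalg.norm(A @ x - b))
```

As the abstract notes, the unaccelerated iteration converges slowly; in practice such steps are wrapped in a conjugate gradient acceleration.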
Proceedings of a conference on Preconditioned conjugate gradient methods | 1991
Randall Bramley; Hsin-Chu Chen; Ulrike Meier; Ahmed H. Sameh
An iterative method for the solution of nonsymmetric linear systems of equations is described and tested. The method, block symmetric successive over-relaxation with conjugate gradient acceleration (BSSOR), is remarkably robust and when applied to block tridiagonal systems allows parallelism in the computations. BSSOR compares favorably to unpreconditioned conjugate gradient-like algorithms in efficiency, and although generally slower than preconditioned methods it is far more reliable. The concept behind BSSOR can, in general, be applied to sparse linear systems (even if they are singular), sparse nonlinear systems of equations and least squares problems.
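For the symmetric positive definite case the basic ingredients can be sketched directly: an SSOR splitting applied as a preconditioner inside conjugate gradients. The pointwise SPD sketch below does not reproduce the paper's block partitioning or its treatment of nonsymmetric systems; it only shows the SSOR-plus-CG mechanics.

```python
# Sketch of SSOR-preconditioned conjugate gradients for an SPD system.
# The paper's BSSOR works with block partitions and nonsymmetric systems;
# this pointwise SPD version is for illustration only.
import numpy as np
from scipy.linalg import solve_triangular

def ssor_apply(A, r, omega=1.0):
    """Apply the inverse of the SSOR preconditioner
    M = (D/w + L) * (w/(2-w)) * D^{-1} * (D/w + U) to r."""
    D = np.diag(A)
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    y = solve_triangular(np.diag(D / omega) + L, r, lower=True)
    y = (2 - omega) / omega * D * y
    return solve_triangular(np.diag(D / omega) + U, y, lower=False)

def pcg(A, b, omega=1.0, tol=1e-10, max_iter=200):
    x = np.zeros_like(b)
    r = b - A @ x
    z = ssor_apply(A, r, omega)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = ssor_apply(A, r, omega)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test problem: the 1-D Laplacian.
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x = pcg(A, np.ones(n))
print("residual:", np.linalg.norm(A @ x - np.ones(n)))
```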
European Conference on Parallel Processing | 2001
Yoichi Muraoka; Randall Bramley; David Snelling; Harry A. G. Wijshoff
Applications of High-Performance Computers spans a large intellectual area and now includes all the traditional application science and engineering fields. The end goal of all high-performance computing research is ultimately to support applications, but those applications have traditionally had a strong feedback effect on computer architecture, hardware design, and systems.