Robert J. Melosh
Duke University
Publications
Featured research published by Robert J. Melosh.
Computers & Structures | 1987
U. Fischer; Robert J. Melosh
Abstract A new variant of the simplex algorithm is used to distinguish between contact and non-contact points in elastic contact problems. The variant develops a unique solution of the problem in a finite number of analysis cycles. Applications of the method are shown for both contact and non-contact problems. Use of the modified algorithm in other classes of scleronomic analysis is described. It is concluded that the reformulation not only leads to a proof of solution uniqueness but also provides a solution method using the Phase I stage of the simplex algorithm.
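The contact/no-contact decision described above can be cast as a linear complementarity problem. The sketch below is illustrative only: it enumerates candidate contact sets by brute force instead of the paper's simplex variant, and the flexibility matrix F and initial gaps g0 are invented for the example.

```python
import numpy as np
from itertools import chain, combinations

# Illustrative LCP statement of frictionless contact:
#   g = g0 + F p >= 0,  p >= 0,  p_i * g_i = 0
# F is a flexibility matrix, g0 the initial gaps, p the contact forces.
def solve_contact(F, g0):
    n = len(g0)
    for S in chain.from_iterable(combinations(range(n), k)
                                 for k in range(n + 1)):
        S = list(S)
        p = np.zeros(n)
        if S:
            # enforce gap closure on the tentative contact set S
            p[S] = np.linalg.solve(F[np.ix_(S, S)], -g0[S])
        g = g0 + F @ p
        if (p >= -1e-12).all() and (g >= -1e-12).all():
            return p, g          # admissible contact state found
    raise RuntimeError("no admissible contact state")

F = np.array([[2.0, 0.5],        # illustrative flexibility matrix
              [0.5, 1.0]])
g0 = np.array([-1.0, 0.2])       # point 0 initially penetrates
p, g = solve_contact(F, g0)      # point 0 in contact, point 1 open
```

Enumeration is exponential in the number of candidate points; the appeal of the paper's simplex reformulation is precisely that it resolves the contact set in a finite, polynomial-behaved number of pivot cycles.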
Computers & Structures | 1984
Senol Utku; Robert J. Melosh
Abstract The development of the finite element method so far indicates that it is a discretization technique especially suited for positive definite, self-adjoint, elliptic systems, or systems with such components. Application of the method leads to discretized equations of the form u = f(u), where u lists the response of the discretized system at n preselected points called nodes. Instead of explicit expressions, the vector function f and its Jacobian f,u are available only numerically, for a numerically given u. The solution of u = f(u) is usually obtained on a digital computer. Owing to the finiteness of the computer wordlength, the numerical solution u_c is in general different from u. Let u(x, t) denote the actual response of the continuum system at the points corresponding to those of u. In the literature, u(x, t) - u is called the discretization error, u - u_c the round-off error, and their sum, u(x, t) - u_c, the solution error. In this paper, a state-of-the-art survey is given of the identification, growth, relative magnitudes, estimation, and control of the components of the solution error.
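The trade-off between discretization and round-off error that this survey addresses can be seen in miniature with a forward-difference derivative. The example below is a generic illustration of the two error components, not taken from the paper.

```python
import numpy as np

# Forward-difference estimate of d/dx sin(x) at x = 1.0 (exact: cos 1).
# Shrinking h reduces the discretization error but amplifies round-off,
# so the total (solution) error is smallest at an intermediate h.
def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

exact = np.cos(1.0)
errors = {h: abs(forward_diff(np.sin, 1.0, h) - exact)
          for h in (1e-1, 1e-8, 1e-14)}
# h = 1e-1  : discretization-dominated error
# h = 1e-8  : near the optimum, both components small
# h = 1e-14 : round-off-dominated error
```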
Computers & Structures | 1982
Senol Utku; Robert J. Melosh; Munirul Islam; M. Salama
Numerical solution of nonlinear equilibrium problems of structures by means of Newton-Raphson type iterations is reviewed. Each step of the iteration is shown to correspond to the solution of a linear problem, thereby establishing the feasibility of the finite element method for nonlinear analysis. Organization and flow of data for various types of digital computers, such as single-processor/single-level-memory, single-processor/two-level-memory, vector-processor/two-level-memory, and parallel-processor machines, with and without substructuring (i.e. partitioning), are given. The effect of the relative costs of computation, memory, and data transfer on substructuring is shown. The idea of assigning substructures of comparable size to parallel processors is exploited. Under Cholesky-type factorization schemes, the efficiency of parallel processing is shown to decrease due to occasionally shared data, just as it does due to shared facilities.
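A minimal sketch of the Newton-Raphson iteration just described, in which each step reduces to a linear solve with the tangent stiffness. The two-degree-of-freedom system and all names below are illustrative, not from the paper.

```python
import numpy as np

# Newton-Raphson for the nonlinear equilibrium r(u) = p - q(u) = 0,
# where q(u) is the internal force and K_t = dq/du the tangent stiffness.
def newton_raphson(q, K_t, p, u0, tol=1e-10, max_iter=50):
    u = u0.copy()
    for _ in range(max_iter):
        r = p - q(u)                        # out-of-balance force
        if np.linalg.norm(r) < tol:
            break
        u += np.linalg.solve(K_t(u), r)     # each step: one linear problem
    return u

# Two-DOF toy with cubic hardening springs: q(u) = K u + u**3 (componentwise).
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
q = lambda u: K @ u + u**3
K_t = lambda u: K + np.diag(3.0 * u**2)
p = np.array([1.0, 0.5])
u = newton_raphson(q, K_t, p, np.zeros(2))
```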
Computers & Structures | 1990
H.A. Smith; Robert J. Melosh
Abstract The unsymmetric formulation is a dynamic model for computing numerically exact free vibration frequencies and modes of structural systems. Based on exact representation of element inertia forces and a frequency independent stiffness matrix, this formulation obtains discretization error-free solutions. Previous studies have shown that in modeling the vibration response of trusses, the unsymmetric formulation is more computationally efficient than the conventional finite element formulation, particularly when accuracy requirements are high. This study extends the development of the unsymmetric formulation to more general vibration problems. The unsymmetric formulation is developed and illustrated here for the vibration analysis of beams and frames with uniform and/or lumped mass distributions. The computational efficiency of the formulation is studied and compared to that of the exact displacement and conventional finite element models. Results from numerical examples suggest that the unsymmetric formulation provides more accurate free vibration modes and frequencies of frames in less computational time than do other dynamic models.
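For contrast with the discretization-error-free unsymmetric formulation, the sketch below shows the frequency error a conventional consistent-mass finite element model incurs for a fixed-free uniform bar, whose exact fundamental frequency is pi/2 under unit properties. This is a generic illustration, not the paper's formulation.

```python
import numpy as np

# Fixed-free uniform bar, EA = rho*A = L = 1, n linear elements with
# consistent mass. The Rayleigh-Ritz character of the conventional model
# means its frequencies approach the exact values from above.
def fundamental_frequency(n):
    h = 1.0 / n
    K = np.zeros((n + 1, n + 1))
    M = np.zeros((n + 1, n + 1))
    for e in range(n):
        K[e:e+2, e:e+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        M[e:e+2, e:e+2] += np.array([[2.0, 1.0], [1.0, 2.0]]) * h / 6.0
    Kc, Mc = K[1:, 1:], M[1:, 1:]           # clamp node 0
    lam = np.linalg.eigvals(np.linalg.solve(Mc, Kc)).real
    return np.sqrt(lam.min())

omega = fundamental_frequency(10)
# slightly above the exact value pi/2, converging from above
```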
Computers & Structures | 1985
Robert J. Melosh; Senol Utku; Moktar Salama
This paper presents and examines direct solution algorithms for the linear simultaneous equations that arise when finite element models represent an engineering system. It identifies the mathematical processing of four solution methods and assesses their data processing implications using concurrent processing.
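As background for the abstract above, here is a minimal sketch of one standard direct method for such systems: Cholesky factorization followed by forward and back substitution. The matrix and load vector are illustrative; the paper's four methods and their concurrent-processing assessment are not reproduced here.

```python
import numpy as np

# Direct solution of K u = f for a symmetric positive-definite stiffness
# matrix K, the kind of system finite element models produce.
def cholesky_solve(K, f):
    L = np.linalg.cholesky(K)       # factorization: K = L L^T
    y = np.linalg.solve(L, f)       # forward substitution: L y = f
    return np.linalg.solve(L.T, y)  # back substitution: L^T u = y

K = np.array([[4.0, 1.0, 0.0],      # illustrative SPD stiffness matrix
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
f = np.array([1.0, 2.0, 3.0])       # illustrative load vector
u = cholesky_solve(K, f)
```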
Finite Elements in Analysis and Design | 1993
Robert J. Melosh
Given the mathematical idealization of a structure, modeling introduces all the approximations which vanish as the number of unconstrained degrees of freedom of the finite element grid approaches infinity. As a practical matter, the computer code selected to support the analysis limits the definition of the idealized model and hence the accuracy of the analysis. Sensitivity analysis can furnish information on the effect of these limitations on structural response. However, by definition, the judgement exercised by the analyst in idealization is unassailable by the computer program. That is, we assume that during a finite element analysis the computer has no access to alternative unprogrammed idealizations. A very important example of idealization is the choice of the constitutive equations and of the coefficients that particularize them. Choices affecting the relevance of the element model include the use of Euclidean geometry to define the length, volume, and deformations of the structure and the coefficients chosen to particularize the geometry. The assumption of frictionless pins or spring supports, the limitation of the distribution of applied loadings, and the mathematical model of nodal displacement constraints are other examples of idealization choices. Examples of modeling approximations include element and/or node approximations. Use of straight lines to represent curves, polygons or hyperbolas to model circular edges, and stepped surface thicknesses typify approximations to the original geometry. Use of polynomials with few terms to represent deformations is the most important modeling approximation of the deformed geometry. Modeling applied loadings and settlements by linear functions of element local coordinates illustrates modeling approximations of boundary conditions. By definition, when the modeling is optimum, the required computer solution accuracy is developed with a minimum of computer resources.
The parameters of this optimization problem are the modeling choices, the analysis strategy and the efficiency of the computer configuration in implementing the choices. In this study, we assume that the constraint on the optimum is evaluation of the external work in the structure to a prespecified number of significant digits. We measure the efficiency of the computer configuration by the relative number of degrees of freedom in the analysis. Melosh [1] emphasizes that conventional analysis with element models satisfying minimum convergence requirements can be much less efficient than analysis using curve fitting. Examples suggest that use of fitting can reduce the number of calculations by more than two orders of magnitude and the storage requirements by more than three orders. Furthermore, the previous study indicates that using rational polynomials or hyperbolic estimating functions for curve fitting offers no advantage over polynomials.
Finite Elements in Analysis and Design | 1990
Robert J. Melosh
Abstract The finite element analysis convergence curve defines the relationship between the grid interval and the analysis accuracy. At issue is the change in discretization error as a function of the number of degrees of freedom of the analysis model. Study of this relationship will play an important role in improving quality assurance of response predictions. This paper illustrates the use of element testing for assessing convergence curve characteristics which depend on the choice of element model, examines the effects of changes in boundary conditions, and demonstrates how remodeling decisions affect the curve. Data from the element tests identify NASTRAN's QUAD4 model as “well behaved” for rectangles and not well behaved for parallelograms. Data from analyses of the effect of boundary conditions on the convergence curve prove that oscillatory convergence can occur with mixed conditions. The strategy of remeshing proves capable of destroying the convergence monotonicity ensured by well-behaved element models and boundary conditions.
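The convergence curve the paper studies can be sketched for the simplest possible case: a fixed-free bar under uniform load, modeled with linear elements, whose strain energy converges monotonically from below under nested refinement. This is a generic illustration of the well-behaved baseline, not the paper's element tests.

```python
import numpy as np

def bar_energy(n):
    """Strain energy of a fixed-free unit bar (EA = 1) under unit uniform
    load, modeled with n linear elements and consistent nodal loads.
    Exact value: 1/6."""
    h = 1.0 / n
    K = np.zeros((n + 1, n + 1))
    f = np.zeros(n + 1)
    for e in range(n):
        K[e:e+2, e:e+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        f[e:e+2] += h / 2.0               # consistent nodal loads
    u = np.zeros(n + 1)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])   # clamp node 0
    return 0.5 * f @ u                    # strain energy = (1/2) f^T u

energies = [bar_energy(n) for n in (2, 4, 8, 16)]
# monotone increase from below toward the exact value 1/6
```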
IEEE Transactions on Computers | 1989
Senol Utku; M. Salama; Robert J. Melosh
The inherent strong seriality of closely coupled systems is circumvented by defining a family of permutations for reordering equation sets whose matrix of coefficients is Hermitian block tridiagonal. The authors show how these permutations can be used to achieve relatively high concurrency in the Cholesky factorization of banded systems at the expense of introducing limited extra computations due to fill-in terms in the factors. Directed graphs are developed for the concurrent factorization of the transformed matrix of coefficients by the Cholesky algorithm. Expressions for speedup and efficiency are derived in terms of parameters of the permutation, set of equations, and machine architecture.
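The reordering idea can be sketched on a point-tridiagonal matrix (the paper treats the Hermitian block-tridiagonal case, but the principle is the same): an odd-even permutation makes the leading pivots mutually independent, so they can be eliminated concurrently, at the price of fill-in terms in the factor. All sizes and values below are illustrative.

```python
import numpy as np

# SPD tridiagonal test matrix, n = 7.
n = 7
A = (np.diag(4.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))

# Odd-even permutation: even-indexed unknowns first, odd-indexed second.
perm = list(range(0, n, 2)) + list(range(1, n, 2))
P = np.eye(n)[perm]
B = P @ A @ P.T

L = np.linalg.cholesky(B)
# The leading 4x4 block of B is diagonal, so those Cholesky columns are
# mutually independent and can be processed concurrently; the trailing
# block of L acquires fill-in terms where A had zeros.
```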
Finite Elements in Analysis and Design | 1986
Robert J. Melosh; R Araya; Charbel Farhat; J Garcelon; J Mora
Abstract A challenge in structural engineering of space and ground radio and light wave reflectors is to retain surface geometry to a high precision in the context of pinned connections and changing loads. This paper describes a direct approach to assessing worst-case geometric degradation of these structures for hypothesized slots in the members. The paper presents the structural model, an analysis procedure, and illustrative results. The model addresses articulated structures with slots small compared with member lengths. The analysis process implies stepwise linear behavior. The examples encompass determinate and indeterminate two-dimensional trusses. The approach results in an analysis process capable of predicting the extremes of slip accumulation. Like limit analysis, it provides behavior prediction in a finite number of steps with guaranteed success. Data from the sample analyses suggest that fabrication tolerances should be much less than allowable member elongations if long-time high precision is desired.
Finite Elements in Analysis and Design | 1985
Robert J. Melosh; Abdulati B. Bolkhir
Abstract Evaluations of finite element model discretization error using other finite element analysis results exhibit high promise but erratic fidelity. This paper provides data from carefully controlled experiments which demonstrate the possibility of highly accurate qualification predictions and suggest sources of failure in the past. The paper reviews the polynomial extrapolation theory, describes a procedure for adapting the general theory to a particular analysis, and illustrates application of the new approach to problems involving distributed loadings on beams. Predictions of accuracy for the beam agree with exact measures of error to as many as twelve digits. This success recommends high-precision finite element analysis and suggests that finite element analysis accuracy may be only weakly sensitive to the initial selection of mesh parameters.
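The polynomial extrapolation idea the paper adapts can be sketched in its simplest Richardson form. The assumed error model W(h) = W_exact + C h^p and all numbers below are illustrative, not the paper's beam data.

```python
import math

def extrapolate(W1, W2, W3):
    """Given results from grids with intervals h, h/2, h/4, recover the
    observed convergence rate p and the extrapolated limit, assuming the
    single-term error model W(h) = W_exact + C * h**p."""
    p = math.log2((W1 - W2) / (W2 - W3))    # observed convergence rate
    W_exact = W3 + (W3 - W2) / (2**p - 1)   # Richardson extrapolation
    return p, W_exact

# Manufactured data with W_exact = 1.0, C = 0.3, p = 2:
W = [1.0 + 0.3 * h**2 for h in (0.1, 0.05, 0.025)]
p, W_exact = extrapolate(*W)
# recovers p = 2 and W_exact = 1.0 to near machine precision
```

The same three-grid construction also yields an error estimate for each grid, W(h) - W_exact, which is the kind of accuracy prediction the paper compares against exact measures of error.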