
Publication


Featured research published by Toby Heyn.


ACM Transactions on Graphics | 2015

Using Nesterov's Method to Accelerate Multibody Dynamics with Friction and Contact

Hammad Mazhar; Toby Heyn; Dan Negrut; Alessandro Tasora

We present a solution method that, compared to the traditional Gauss-Seidel approach, reduces the time required to simulate the dynamics of large systems of rigid bodies interacting through frictional contact by one to two orders of magnitude. Unlike Gauss-Seidel, it can be easily parallelized, which allows for the physics-based simulation of systems with millions of bodies. The proposed accelerated projected gradient descent (APGD) method relies on an approach by Nesterov in which a quadratic optimization problem with conic constraints is solved at each simulation time step to recover the normal and friction forces present in the system. The APGD method is validated against experimental data, compared in terms of speed of convergence and solution time with the Gauss-Seidel and Jacobi methods, and demonstrated in conjunction with snow modeling, bulldozer dynamics, and several benchmark tests that highlight the interplay between the friction and cohesion forces.
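The APGD scheme described above can be sketched in a few lines. Below is a hedged, illustrative toy version: Nesterov-accelerated projected gradient applied to a tiny quadratic program with the cone projection simplified to projection onto the nonnegative orthant. All names, step sizes, and values are illustrative, not the paper's actual solver.

```python
# Toy sketch of accelerated projected gradient descent (APGD) in the spirit
# of Nesterov's method, for  min 0.5*x'Ax + b'x  subject to x >= 0.
# Projection onto the nonnegative orthant stands in for the conic
# projection used in frictional-contact dynamics.

def apgd(A, b, x0, steps=500, lr=0.1):
    n = len(b)
    x = list(x0)
    y = list(x0)          # extrapolation point
    t = 1.0               # momentum parameter
    for _ in range(steps):
        # gradient of the quadratic objective at the extrapolation point
        grad = [sum(A[i][j] * y[j] for j in range(n)) + b[i] for i in range(n)]
        # gradient step followed by projection onto the feasible set x >= 0
        x_new = [max(0.0, y[i] - lr * grad[i]) for i in range(n)]
        # Nesterov momentum update
        t_new = 0.5 * (1.0 + (1.0 + 4.0 * t * t) ** 0.5)
        y = [x_new[i] + ((t - 1.0) / t_new) * (x_new[i] - x[i]) for i in range(n)]
        x, t = x_new, t_new
    return x

# Example: unconstrained minimizer is (1, -0.5); projected solution is (1, 0)
x = apgd([[2.0, 0.0], [0.0, 2.0]], [-2.0, 1.0], [0.0, 0.0])
```

Each per-coordinate update is independent, which is the property that makes the method easy to parallelize, in contrast to the inherently sequential sweeps of Gauss-Seidel.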


Journal of Computational and Nonlinear Dynamics | 2014

Parallel Computing in Multibody System Dynamics: Why, When, and How

Dan Negrut; Radu Serban; Hammad Mazhar; Toby Heyn

This paper addresses three questions related to the use of parallel computing in Multibody Dynamics (MBD) simulation. The “why parallel computing?” question is answered based on the argument that in the upcoming decade parallel computing represents the main source of speed improvement in MBD simulation. The answer to “when is it relevant?” is built around the observation that MBD software users are increasingly interested in multi-physics problems that cross disciplinary boundaries and lead to large sets of equations. The “how?” question is addressed by providing an overview of the state of the art in parallel computing. Emphasis is placed on parallelization approaches and support tools specific to MBD simulation. Three MBD applications are presented where parallel computing has been used to increase problem size and/or reduce time to solution. The paper concludes with a summary of best practices relevant when mapping MBD solutions onto parallel computing hardware.


GPU Computing Gems Jade Edition | 2012

Solving Large Multibody Dynamics Problems on the GPU

Dan Negrut; Alessandro Tasora; Mihai Anitescu; Hammad Mazhar; Toby Heyn; Arman Pazouki

This chapter describes an approach for the dynamic simulation of large collections of rigid bodies interacting through millions of frictional contacts and bilateral mechanical constraints. The ability to efficiently and accurately simulate the dynamics of rigid multibody systems is relevant in computer-aided engineering design, virtual reality, video games, and computer graphics. Devices composed of rigid bodies interacting through frictional contacts and mechanical joints pose numerical solution challenges because of the discontinuous nature of the motion. Reports indicate that the most popular rigid-body software for engineering simulation, which uses an approach based on the so-called "discrete element method," runs into significant difficulties when handling problems involving thousands of contact events. Another example of commercially available rigid-body dynamics software is NVIDIA's PhysX. This software is commonly used in real-time applications where performance is the primary goal. The formulation of the equations of motion, that is, the equations that govern the time evolution of a multibody system, is based on the absolute, or Cartesian, representation of the attitude of each rigid body in the system. The GPU dynamics solver data structures are implemented as large arrays (buffers) to match the execution model associated with NVIDIA's CUDA. Four main buffers are used: the contacts buffer, the constraints buffer, the reduction buffer, and the bodies buffer. The data structure for the contacts has been mapped into columns of four floats.
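The "columns of four floats" layout mentioned above can be illustrated with a small packing sketch. The field names and encoding below are hypothetical, chosen only to show how one contact record might be split into two float4-sized columns; they are not the actual Chrono/CUDA structures.

```python
# Illustrative sketch: pack one contact record into two "float4" columns
# (16 bytes each), the layout style GPU buffers favor for coalesced access.
import struct

def pack_contact(normal, depth, point, body_a, body_b):
    """Column 0: (nx, ny, nz, penetration depth).
    Column 1: (px, py, pz, body-pair id encoded as a float)."""
    col0 = struct.pack("4f", normal[0], normal[1], normal[2], depth)
    col1 = struct.pack("4f", point[0], point[1], point[2],
                       float(body_a * 65536 + body_b))
    return col0 + col1

buf = pack_contact((0.0, 1.0, 0.0), -0.01, (1.0, 2.0, 3.0), 3, 7)
```

Packing records into fixed-width columns like this lets a GPU thread fetch a whole column in one vectorized load, which is why such buffers match the CUDA execution model well.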


ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference | 2009

A Parallel Algorithm for Solving Complex Multibody Problems With Stream Processors

Toby Heyn; Alessandro Tasora; Mihai Anitescu; Dan Negrut

This paper describes a numerical method for the parallel solution of the differential measure inclusion problem posed by mechanical multibody systems containing bilateral and unilateral frictional constraints. The method proposed has been implemented as a set of parallel algorithms leveraging NVIDIA's Compute Unified Device Architecture (CUDA) library support for multi-core stream computing. This allows the proposed solution to run on a wide variety of GeForce and TESLA NVIDIA graphics cards for high performance computing. Although the methodology relies on the solution of cone complementarity problems known to be fine-grained in terms of data dependency, a suitable approach has been developed to exploit parallelism with low overhead in terms of memory access and thread synchronization. Additionally, a parallel collision detection algorithm has been incorporated to further exploit available parallelism. Initial numerical tests described in this paper demonstrate a speedup of one order of magnitude for the solution time of both the collision detection and the cone complementarity problems when performed in parallel. Since stream multiprocessors are becoming ubiquitous as embedded components of next-generation graphic boards, the solution proposed represents a cost-efficient way to simulate the time evolution of complex mechanical problems with millions of parts and constraints, a task that used to require powerful supercomputers. The proposed methodology facilitates the analysis of extremely complex systems such as granular material flows and off-road vehicle dynamics.
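Parallel collision detection of the kind mentioned above is usually built on a uniform-grid broad phase: bodies are binned by cell so that only nearby bodies become candidate pairs. The sequential sketch below is illustrative only, not the paper's implementation, and for brevity it only pairs bodies whose centers fall in the same cell (a production version bins each body into every cell its bounding volume overlaps).

```python
# Illustrative uniform-grid broad phase for equal-radius spheres.
from collections import defaultdict
from itertools import combinations

def broad_phase(centers, radius, cell=1.0):
    # Bin each body by the grid cell containing its center.
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(centers):
        grid[(int(x // cell), int(y // cell), int(z // cell))].append(i)
    pairs = set()
    for ids in grid.values():
        # Narrow phase within each cell: exact sphere-sphere overlap test.
        for a, b in combinations(ids, 2):
            d = [centers[a][k] - centers[b][k] for k in range(3)]
            if sum(c * c for c in d) <= (2 * radius) ** 2:
                pairs.add((a, b))
    return pairs

# Bodies 0 and 1 share a cell and overlap; body 2 is far away.
pairs = broad_phase([(0.1, 0.1, 0.1), (0.3, 0.1, 0.1), (5.0, 5.0, 5.0)], 0.2)
```

On a GPU, the binning and the per-cell tests each map naturally onto one thread per body or per cell, which is the source of the order-of-magnitude speedup the paper reports.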


ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference | 2011

Enabling Computational Dynamics in Distributed Computing Environments Using a Heterogeneous Computing Template

Dan Negrut; Toby Heyn; Andrew Seidl; Dan Melanz; David Lamb

This paper describes a software infrastructure made up of tools and libraries designed to assist developers in implementing computational dynamics applications running on heterogeneous and distributed computing environments. Together, these tools and libraries compose a so-called Heterogeneous Computing Template (HCT). The heterogeneous and distributed computing hardware infrastructure is assumed herein to be made up of a combination of CPUs and GPUs. The computational dynamics applications targeted to execute on such a hardware topology include many-body dynamics, smoothed-particle hydrodynamics (SPH) fluid simulation, and fluid-solid interaction analysis. The underlying theme of the solution approach embraced by HCT is that of partitioning the domain of interest into a number of sub-domains that are each managed by a separate core/accelerator (CPU/GPU) pair. Five components at the core of HCT enable the envisioned distributed computing approach to large-scale dynamical system simulation: (a) a method for the geometric domain decomposition and mapping onto heterogeneous hardware; (b) methods for proximity computation or collision detection; (c) support for moving data among the corresponding hardware as elements move from subdomain to subdomain; (d) numerical methods for solving the specific dynamics problem of interest; and (e) tools for performing visualization and post-processing in a distributed manner. In this contribution, components (a) and (c) of the HCT are demonstrated via the example of the Discrete Element Method (DEM) for rigid body dynamics with friction and contact. The collision detection task required in frictional-contact dynamics, i.e., task (b) above, is discussed separately and in the context of GPU computing. This task is shown to benefit from a two-order-of-magnitude gain in efficiency when compared to traditional sequential implementations.
Note: Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer, or otherwise, does not imply its endorsement, recommendation, or favoring by the US Army. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Army, and shall not be used for advertising or product endorsement purposes.
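Components (a) and (c) of the HCT idea can be sketched in one dimension: a geometric decomposition maps each body to an owning subdomain, and a migration pass moves a body's record when it crosses a boundary. The real HCT works on 3-D domains managed by CPU/GPU pairs; the names and the 1-D simplification below are illustrative assumptions.

```python
# Illustrative 1-D domain decomposition with body migration.

def owner(x, x_min, x_max, n_sub):
    """Map a coordinate to the index of the subdomain that owns it."""
    width = (x_max - x_min) / n_sub
    return min(n_sub - 1, max(0, int((x - x_min) // width)))

def migrate(subdomains, positions, x_min, x_max):
    """Rebuild per-subdomain body lists after bodies have moved."""
    n_sub = len(subdomains)
    new_subs = [[] for _ in range(n_sub)]
    for bodies in subdomains:
        for body in bodies:
            new_subs[owner(positions[body], x_min, x_max, n_sub)].append(body)
    return new_subs

subs = [[0, 1], [2]]                   # bodies 0 and 1 start in subdomain 0
positions = {0: 0.2, 1: 0.7, 2: 0.6}   # body 1 has crossed into [0.5, 1.0)
subs = migrate(subs, positions, 0.0, 1.0)
```

In a distributed setting, the migration step becomes a message exchange between the hardware units owning neighboring subdomains, which is precisely the data-movement support that component (c) provides.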


ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference | 2013

Chrono: A Parallel Physics Library for Rigid-Body, Flexible-Body, and Fluid Dynamics

Toby Heyn; Hammad Mazhar; Arman Pazouki; Daniel Melanz; Andrew Seidl; Justin Madsen; Aaron Bartholomew; Dan Negrut; David Lamb; Alessandro Tasora

This contribution discusses a multi-physics simulation engine, called Chrono, that relies heavily on parallel computing. Chrono aims at simulating the dynamics of systems containing rigid bodies, flexible (compliant) bodies, and fluid-rigid body interaction. To this end, it relies on five modules: equation formulation (modeling), equation solution (simulation), collision detection support, domain decomposition for parallel computing, and post-processing analysis with emphasis on high quality rendering/visualization. For each component we point out how parallel CPU and/or GPU computing have been leveraged to allow for the simulation of applications with millions of degrees of freedom such as rover dynamics on granular terrain, fluid-structure interaction problems, or large-scale flexible body dynamics with friction and contact for applications in polymer analysis.


Thirteenth ASCE Aerospace Division Conference on Engineering, Science, Construction, and Operations in Challenging Environments, and the 5th NASA/ASCE Workshop On Granular Materials in Space Exploration | 2012

Using a Granular Dynamics Code to Investigate the Performance of a Helical Anchoring System Design

Hammad Mazhar; Marco B. Quadrelli; Toby Heyn; Justin Madsen; Dan Negrut

NASA is interested in designing a spacecraft capable of visiting a Near Earth Object (NEO), performing experiments, and then returning safely. Certain periods of this mission will require the spacecraft to remain stationary relative to the NEO. Due to the low gravity, such situations require an anchoring mechanism that is compact, easy to deploy and, upon mission completion, easily removed. In the proposed approach, using Chrono::Engine (Tasora 2008; Negrut, Tasora et al. 2011; SBEL 2011), a simulation package capable of utilizing massively parallel GPU hardware, extensive validation experiments will first be performed. A set of parametric studies will concentrate on the simulation of the anchoring system. The outcome of this effort will be a systematic study that considers several different anchor designs, along with a recommendation on which anchor design is better suited to the task of anchoring. The anchors will be tested against a range of parameters relating to soil, environment and anchor penetration angles/velocities on a NEO to better understand their performance characteristics.

SIMULATION CAPABILITY

The simulation of very large collections of rigid bodies is prohibitively time consuming if done on sequential processors. Until recently, the high cost of parallel computing limited the analysis of such large systems to a small number of research groups. This is rapidly changing, owing in large part to general-purpose computing on the GPU (GP-GPU). GP-GPU computing has been vigorously promoted by NVIDIA since the release of the CUDA development platform (NVIDIA 2011), an application interface for software development targeted to run on NVIDIA GPUs. A large number of scientific applications have been developed using CUDA, most of them dealing with problems that are quite easily parallelizable such as molecular dynamics or signal processing.
Few GP-GPU projects, however, are concerned with the dynamics of multibody systems; the two most significant are the Havok (Havok 2011) and NVIDIA PhysX (NVIDIA 2010) engines. Both are commercial, proprietary libraries used in the video-game industry, and their algorithmic details are not public. Typically, these physics engines trade precision for efficiency, as the priority is speed rather than accuracy. In this context, the goal of this effort is to moderately de-emphasize the efficiency attribute and instead implement a free, general-purpose, physics-based GPU solver for multibody dynamics backed by convergence results that guarantee the accuracy of the numerical solution. Unlike the so-called penalty or regularization methods, where the frictional interaction can be represented by a collection of stiff springs combined with damping


ASME 2006 International Mechanical Engineering Congress and Exposition | 2006

A Real-Space Parallel Optimization Model Reduction Approach for Electronic Structure Computation in Large Nanostructures Using Orbital-Free Density Functional Theory

Dan Negrut; Mihai Anitescu; Anter El-Azab; Steve Benson; Emil M. Constantinescu; Toby Heyn; Peter Zapol

The goal of this work is the development of a highly parallel approach to computing the electron density in nanostructures. In the context of orbital-free density functional theory, a model reduction approach leads to a parallel algorithm that mirrors the subdomain partitioning of the problem. The resulting form of the energy functional that is subject to the minimization process is compact and simple. Computation of gradient and Hessian information is immediate. The salient attribute of the proposed methodology is the use of model reduction (reconstruction) within the framework of electronic structure computation.


Multibody System Dynamics | 2012

Leveraging parallel computing in multibody dynamics

Dan Negrut; Alessandro Tasora; Hammad Mazhar; Toby Heyn; Philipp Hahn


International Journal for Numerical Methods in Engineering | 2013

Using Krylov subspace and spectral methods for solving complementarity problems in many-body contact dynamics simulation

Toby Heyn; Mihai Anitescu; Alessandro Tasora; Dan Negrut

Collaboration


Dive into Toby Heyn's collaboration.

Top Co-Authors

Dan Negrut (University of Wisconsin-Madison)
Hammad Mazhar (University of Wisconsin-Madison)
Mihai Anitescu (Argonne National Laboratory)
Arman Pazouki (California State University)
Aaron Bartholomew (University of Wisconsin-Madison)
Andrew Seidl (University of Wisconsin-Madison)
Daniel Melanz (University of Wisconsin-Madison)
Justin Madsen (University of Wisconsin-Madison)
Anter El-Azab (Florida State University)