E. Yilmaz
Indiana University – Purdue University Indianapolis
Publications
Featured research published by E. Yilmaz.
Parallel Computational Fluid Dynamics 2004: Multidisciplinary Applications | 2005
R.U. Payli; E. Yilmaz; A. Ecer; H.U. Akay; Stanley Chien
This chapter presents a version of the dynamic load balancing (DLB) environment developed at the IUPUI CFD Laboratory. Changes to the previous version make the package more compact and distributable, and a three-dimensional test case is included so that prospective users receive a complete package in compact form. The chapter demonstrates the applicability of the environment using a parallel example program for computational fluid dynamics (CFD) applications. System load measurement in DLB now relies on the average load history provided by Unix/Linux systems rather than on tracking processes through system agents of the DLB package. In addition, the load balancer program is implemented in Java, like all other units of DLB, so that the whole package is written in a single language. Parallel simulations show that DLB significantly improves the load balance of the system, and file I/O and the running of DLB are smooth. Using LAM/MPI makes starting and halting programs easier than with MPICH, because the parallel environment is started and halted through daemons running on all compute nodes. In this version, the user provides a script file to start and halt all programs and environments and to run DLB for several cycles; this could be automated further by providing a better user interface to the DLB package.
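The load-measurement change described above can be illustrated concretely. Below is a minimal C sketch, not the DLB package's actual code, of sampling the Unix/Linux load average through getloadavg() instead of tracking processes with agents; the 0.75 load-per-core threshold is a hypothetical choice.

```c
/* Minimal sketch: sampling system load from the OS-provided load
 * average (as the abstract describes) instead of tracking processes
 * with agents. The 0.75-per-core threshold is a hypothetical choice. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    double avg[3];                       /* 1-, 5-, 15-minute load averages */
    long   cores = sysconf(_SC_NPROCESSORS_ONLN);

    if (getloadavg(avg, 3) < 3) {
        fprintf(stderr, "load average unavailable\n");
        return 1;
    }
    /* A balancer could treat a node as overloaded when its recent
     * load per core exceeds a threshold, and migrate work away. */
    double load_per_core = avg[0] / (double)cores;
    printf("load/core = %.2f -> %s\n", load_per_core,
           load_per_core > 0.75 ? "candidate to shed work" : "ok");
    return 0;
}
```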
International Journal of Computational Fluid Dynamics | 2001
E. Yilmaz; M. S. Kavsaoglu; H.U. Akay; I. S. Akmandor
A parallel adaptive Euler flow solution algorithm is developed for 3D applications on distributed-memory computers. A significant contribution of this research is the development and implementation of a parallel grid adaptation scheme, together with an explicit cell-vertex-based finite volume 3D flow solver, on unstructured tetrahedral grids. Parallel adaptation of grids follows a grid-regeneration philosophy that uses an existing serial grid generation program; a general partitioner then repartitions the regenerated grid. An adaptive sensor value, a measure used to decide whether to refine or coarsen the grid, is calculated from the pressure gradients in all partitioned blocks of the grid. The parallel performance of the present approach was tested, with parallel computations performed on Unix workstations and a Linux cluster using the MPI communication library. The results show that the overall adaptation scheme developed in this study is applicable to any pair of flow solver and grid generator at affordable cost, and that parallel adaptation is necessary for accurate and efficient flow solutions.
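As a rough illustration of such a sensor (the paper's exact formula may differ), a common pressure-gradient form scales the gradient magnitude by a cell length scale and compares it with the mean and standard deviation over all cells:

```latex
% Hedged sketch of a pressure-gradient adaptation sensor; h_e is a
% cell length scale, N the number of cells, and c_r, c_c are
% user-chosen constants.
\sigma_e = \lVert \nabla p \rVert_e \, h_e, \qquad
\bar{\sigma} = \frac{1}{N}\sum_{e=1}^{N} \sigma_e, \qquad
s = \sqrt{\frac{1}{N}\sum_{e=1}^{N} \left(\sigma_e - \bar{\sigma}\right)^2}
```

Cells with sigma_e above bar-sigma + c_r s would then be flagged for refinement and cells below bar-sigma - c_c s for coarsening.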
Archive | 2009
E. Yilmaz; R.U. Payli; H.U. Akay; A. Ecer
In this paper, the performance of a hybrid MPI/OpenMP programming approach for a parallel CFD solver is studied on a single cluster of multi-core nodes. Timing costs for computation and communication are compared for different scenarios. The MPI-parallelizable sections of the solver were tuned with OpenMP directives and library functions. The BigRed parallel system at Indiana University was used for parallel runs on 8, 16, 32, and 64 compute nodes with 4 processors (cores) per node; four threads were used within each node, one per core. It was observed that pure MPI outperformed the hybrid with OpenMP in overall elapsed time, although the hybrid approach showed improved communication time in some cases. In terms of parallel speedup and efficiency, the hybrid results were close to MPI, and were higher for processor counts below 32. In general, MPI outperforms the hybrid approach for our applications on this particular computing platform.
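A minimal sketch of the hybrid pattern compared in the paper, assuming the common one-MPI-rank-per-node, one-OpenMP-thread-per-core layout; the loop body is a stand-in, not the solver's kernel:

```c
/* Hybrid MPI+OpenMP sketch: MPI between nodes, OpenMP threads
 * (one per core) inside each node. Compile with: mpicc -fopenmp */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank;
    /* FUNNELED: only the main thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = 0.0, global = 0.0;
    #pragma omp parallel for reduction(+:local) /* e.g. 4 threads per node */
    for (int i = 0; i < 1000000; i++)
        local += 1.0 / (1.0 + i);               /* stand-in compute kernel */

    /* Halo exchange / reduction stays in MPI, outside the OpenMP region. */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum = %f\n", global);
    MPI_Finalize();
    return 0;
}
```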
Parallel Computational Fluid Dynamics 2005: Theory and Applications | 2006
R.U. Payli; H.U. Akay; A.S. Baddi; A. Ecer; E. Yilmaz; E. Oktay
TeraGrid integrates a distributed set of the highest-capability computational, data management, and visualization resources through high-performance network connections, grid computing software, and coordinated services. Currently, eight U.S. supercomputing centers provide resources to the TeraGrid: the University of Chicago/Argonne National Laboratory (UC/ANL), Indiana University (IU), the National Center for Supercomputing Applications (NCSA), Oak Ridge National Laboratory (ORNL), the Pittsburgh Supercomputing Center (PSC), Purdue University (PU), the San Diego Supercomputing Center (SDSC), and the Texas Advanced Computing Center (TACC). The chapter explores potential applications of the TeraGrid, which offers researchers a good platform for running large-scale problems and visualizing the large data sets that result. Even though some features are still in the testing stage, the stability of the system has been observed to improve, and more performance tests remain to be conducted for large-scale applications.
Archive | 2009
R.U. Payli; E. Yilmaz; H.U. Akay; A. Ecer
TeraGrid is a National Science Foundation-supported computing grid available to scientists and engineers at U.S. universities and government research laboratories. It is the world's largest and most comprehensive distributed cyberinfrastructure for open scientific research. With its high-performance data management, computational, and visualization resources, TeraGrid has opened up new opportunities for scientists and engineers to solve large-scale problems with relative ease. To assess the performance of different resources available on the TeraGrid, the parallel performance of a flow solver was tested on Indiana University's IBM e1350 and the San Diego Supercomputing Center's IBM BlueGene/L systems, two TeraGrid computational resources with differing architectures. The results of a large-scale problem are visualized with ParaView, a parallel visualization toolkit available on the TeraGrid, to test its usability for large-scale problems.
Parallel Computing | 2007
E. Yilmaz; R.U. Payli; H.U. Akay; A. Ecer
Cross-cluster load distribution of a parallel job is studied in this paper. Two TeraGrid sites were pooled as a single resource to run 128 and 512 mesh blocks of a CFD problem with 18M mesh elements. Blocks were evenly distributed across the Grid clusters, and random block distribution was compared with two different graph-partitioning distributions, using interface size and block size as weighting factors to group blocks between the Grid sites. Communication between Grid sites is the major bottleneck, although there are minor timing differences between the interface-size and block-size weightings. As the number of mesh blocks increases, communication dependency across the Grid sites increases.
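As an illustration of weight-based grouping between two sites (a simplification of the graph-partitioning distributions the paper compares), a greedy scheme can hand the next-heaviest block to the currently lighter site; the block weights here are made-up:

```c
/* Hedged sketch of grouping mesh blocks between two Grid sites by a
 * weighting factor (block size or interface size): sort blocks by
 * weight, then always assign the next-heaviest to the lighter site. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_desc(const void *a, const void *b) {
    double d = *(const double *)b - *(const double *)a;
    return (d > 0) - (d < 0);
}

int main(void) {
    double w[] = { 9.0, 7.5, 6.0, 4.0, 3.5, 2.0, 1.0 }; /* block weights */
    int n = (int)(sizeof w / sizeof w[0]);
    double site[2] = { 0.0, 0.0 };

    qsort(w, n, sizeof w[0], cmp_desc);
    for (int i = 0; i < n; i++) {
        int s = (site[0] <= site[1]) ? 0 : 1;  /* pick the lighter site */
        site[s] += w[i];
        printf("block weight %.1f -> site %d\n", w[i], s);
    }
    printf("totals: site0=%.1f site1=%.1f\n", site[0], site[1]);
    return 0;
}
```

A real distribution would also weight the edges between blocks so that heavily coupled blocks stay on the same site, which is what the graph-partitioning variants in the paper address.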
Archive | 2010
E. Yilmaz; R.U. Payli; Hassan U. Akay; A. Ecer; Jingxin Liu
In this paper, we present the scalability characteristics of a parallel flow solver on two large computing systems. The flow solver is based on cell-centered finite volume discretization with explicit and implicit time integration methodologies; it can also solve moving-body problems using an overset grid approach, although the overset option is still sequential. The solver is compared with another in-house flow solver for parallel performance on two large-scale parallel computing platforms with up to 2048 processors. The parallel timing performance of the solver was analyzed using the Vampir timing tool for a DLR-F6 wing-body configuration with 18 million elements, and the timing of the overset component was tested for a butterfly-valve flow problem in a channel.
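The compute-versus-communication split that a tool such as Vampir visualizes can be approximated by hand with MPI_Wtime; a minimal sketch in which solve_step() and exchange_halos() are hypothetical placeholders, not the solver's routines:

```c
/* Minimal sketch of splitting elapsed time into compute and
 * communication phases with MPI_Wtime. */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

static void solve_step(void)     { usleep(1000); } /* stand-in compute */
static void exchange_halos(void) { usleep(200);  } /* stand-in comms   */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    double t_comp = 0.0, t_comm = 0.0;
    for (int step = 0; step < 100; step++) {
        double t0 = MPI_Wtime();
        solve_step();
        double t1 = MPI_Wtime();
        exchange_halos();
        t_comp += t1 - t0;            /* time spent in the kernel     */
        t_comm += MPI_Wtime() - t1;   /* time spent in communication  */
    }
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        printf("compute %.3fs, communication %.3fs\n", t_comp, t_comm);
    MPI_Finalize();
    return 0;
}
```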
Parallel Computational Fluid Dynamics 2003: Advanced Numerical Methods, Software and Applications | 2004
E. Yilmaz; A. Ecer; H.U. Akay; R.U. Payli; Stanley Chien; Y. Wang
This chapter discusses applications of parallel computing over grids, performed to demonstrate the compatibility and applicability of the parallel tools developed in the Computational Fluid Dynamics Laboratory, IUPUI, in the Grid environment. The objective is to use all available resources, distributed across and owned by different organizations: heterogeneous operating systems such as Unix, Linux, and Windows, and different resources such as parallel computers and clusters reached through networks. The new tools should respond to changing environments characterized by varying computer and network loads, restricted-access problems, and multiple high-demand users, and should support both shared and dedicated computer resources. A load management tool should be capable of hiding the complexity of the computing environment from the user. In a previous chapter, two gatekeepers were defined in the CFD Laboratory for the grid environment using Globus, and single job submission between the two gatekeepers in the Globus environment was achieved with its default scheduler.
Parallel Computational Fluid Dynamics 2002: New Frontiers and Multi-disciplinary Applications | 2003
E. Yilmaz; A. Ecer; H.U. Akay; Stanley Y. P. Chien; R.U. Payli
This work aims to understand the equilibrium situation in a multi-user parallel computing environment and to observe whether users can run their applications efficiently without cooperating with each other in the system. A dynamic load balancing program is used to submit the parallel jobs and perform automatic load balancing. Several different load and computer configurations are tested for these tasks. The computer cluster tested is composed of machines with heterogeneous operating systems: Unix, Linux, and Windows 2000.
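The non-cooperative setting studied here can be sketched as each user independently placing a job on the machine that currently looks least loaded, with no coordination; a toy C example with made-up machine loads and job costs:

```c
/* Toy sketch of non-cooperative placement: every job greedily picks
 * the currently least-loaded machine. Loads and costs are invented. */
#include <stdio.h>

#define NMACH 4
#define NJOBS 6

int main(void) {
    double load[NMACH] = { 0.2, 0.5, 0.1, 0.4 }; /* current machine loads */
    double cost[NJOBS] = { 1.0, 0.8, 0.6, 0.5, 0.4, 0.3 };

    for (int j = 0; j < NJOBS; j++) {
        int best = 0;
        for (int m = 1; m < NMACH; m++)          /* find least-loaded */
            if (load[m] < load[best]) best = m;
        load[best] += cost[j];                   /* selfish placement */
        printf("job %d -> machine %d\n", j, best);
    }
    for (int m = 0; m < NMACH; m++)
        printf("machine %d final load %.1f\n", m, load[m]);
    return 0;
}
```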
Parallel Computational Fluid Dynamics 2002: New Frontiers and Multi-disciplinary Applications | 2003
E. Yilmaz; A. Ecer; R.U. Payli; Isaac Lopez; Nan-Suey Liu; Kuo-Huey Chen
This paper discusses the parallel performance of the FLUX module of the National Combustion Code (NCC) in Dynamic Load Balancing (DLB) environments. The tools used for parallelization and load balancing were developed at IUPUI; their objective is to allow code developers to parallelize codes easily and effectively, without having to invest significant resources in learning parallel programming techniques. A parallelized version of the FLUX code has been tested for a sample case, and its parallel performance is presented in the paper. It was observed that increasing the number of parallel partitions improved the efficiency of the parallel FLUX code. In addition, individual users can benefit from improvements made to the DLB environment when submitting parallel jobs: the DLB tool package provides a non-cooperative load balancing environment that also ensures collective benefit in terms of computational cost.