Publication


Featured research published by Colin W. Glass.


Journal of Chemical Physics | 2006

Crystal structure prediction using ab initio evolutionary techniques: Principles and applications

Artem R. Oganov; Colin W. Glass

We have developed an efficient and reliable methodology for crystal structure prediction, merging ab initio total-energy calculations and a specifically devised evolutionary algorithm. This method allows one to predict the most stable crystal structure and a number of low-energy metastable structures for a given compound at any P-T conditions without requiring any experimental input. An extremely high (nearly 100%) success rate has been observed in a few tens of tests done so far, including ionic, covalent, metallic, and molecular structures with up to 40 atoms in the unit cell. We have been able to resolve some important problems in high-pressure crystallography and report a number of new high-pressure crystal structures (stable phases: epsilon-oxygen, a new phase of sulphur, new metastable phases of carbon, sulphur and nitrogen, stable and metastable phases of CaCO3). Physical reasons for the success of this methodology are discussed.
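To make the structure of such an evolutionary search concrete, here is a minimal, self-contained Python sketch of the generational loop: random initialization, energy-based ranking, and production of offspring by heredity and mutation. It is illustrative only, not the authors' implementation; the toy energy function stands in for the ab initio total-energy calculation and local relaxation, and all helper names are placeholders.

```python
import random

# Minimal conceptual sketch of an evolutionary structure search (not USPEX itself).
# A "structure" is reduced to a flat parameter vector and the ab initio energy is
# replaced by a toy function so that the example runs stand-alone.

def toy_energy(x):
    # Placeholder for an ab initio total-energy calculation plus local relaxation.
    return sum((xi - 0.5) ** 2 for xi in x)

def random_structure(dim=6):
    return [random.random() for _ in range(dim)]

def heredity(a, b):
    # Combine complementary "slices" of two parent structures.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(a, strength=0.1):
    return [xi + random.gauss(0.0, strength) for xi in a]

def evolve(pop_size=20, generations=30, survivors=10):
    population = [random_structure() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=toy_energy)        # rank by energy: lower is fitter
        parents = population[:survivors]       # keep the best candidates
        children = []
        while len(children) < pop_size:
            if random.random() < 0.7:
                children.append(heredity(*random.sample(parents, 2)))
            else:
                children.append(mutate(random.choice(parents)))
        population = children
    return min(population, key=toy_energy)

print(toy_energy(evolve()))
```

In the real method the expensive step is the energy evaluation, which is why a high success rate with a small number of generations matters; the loop structure itself is as simple as shown here.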


Nature | 2009

Ionic high-pressure form of elemental boron

Artem R. Oganov; Jiuhua Chen; Carlo Gatti; Yanzhang Ma; Yanming Ma; Colin W. Glass; Zhenxian Liu; Tony Yu; Oleksandr O Kurakevych; Vladimir L. Solozhenko

Boron is an element of fascinating chemical complexity. Controversies have shrouded this element since its discovery was announced in 1808: the new ‘element’ turned out to be a compound containing less than 60–70% of boron, and it was not until 1909 that 99% pure boron was obtained. And although we now know of at least 16 polymorphs, the stable phase of boron is not yet experimentally established even at ambient conditions. Boron’s complexities arise from frustration: situated between metals and insulators in the periodic table, boron has only three valence electrons, which would favour metallicity, but they are sufficiently localized that insulating states emerge. However, this subtle balance between metallic and insulating states is easily shifted by pressure, temperature and impurities. Here we report the results of high-pressure experiments and ab initio evolutionary crystal structure predictions that explore the structural stability of boron under pressure and, strikingly, reveal a partially ionic high-pressure boron phase. This new phase is stable between 19 and 89 GPa, can be quenched to ambient conditions, and has a hitherto unknown structure (space group Pnnm, 28 atoms in the unit cell) consisting of icosahedral B12 clusters and B2 pairs in a NaCl-type arrangement. We find that the ionicity of the phase affects its electronic bandgap, infrared absorption and dielectric constants, and that it arises from the different electronic properties of the B2 pairs and B12 clusters and the resultant charge transfer between them.


Computer Physics Communications | 2006

USPEX—Evolutionary crystal structure prediction

Colin W. Glass; Artem R. Oganov; Nikolaus Hansen

We approach the problem of computational crystal structure prediction, implementing an evolutionary algorithm—USPEX (Universal Structure Predictor: Evolutionary Xtallography). Starting from the chemical composition alone, we have tested USPEX on numerous systems (with up to 80 atoms in the unit cell) for which the stable structure is known and have observed a success rate of nearly 100%, simultaneously finding large sets of low-energy metastable structures.


Acta Crystallographica Section B-structural Science | 2012

Constrained evolutionary algorithm for structure prediction of molecular crystals: methodology and applications

Qiang Zhu; Artem R. Oganov; Colin W. Glass; Harold T. Stokes

Evolutionary crystal structure prediction proved to be a powerful approach for studying a wide range of materials. Here we present a specifically designed algorithm for the prediction of the structure of complex crystals consisting of well-defined molecular units. The main feature of this new approach is that each unit is treated as a whole body, which drastically reduces the search space and improves the efficiency, but necessitates the introduction of new variation operators described here. To increase the diversity of the population of structures, the initial population and part (~20%) of each new generation are produced using space-group symmetry combined with random cell parameters, and random positions and orientations of the molecular units. We illustrate the efficiency and reliability of this approach by a number of tests (ice, ammonia, carbon dioxide, methane, benzene, glycine and butane-1,4-diammonium dibromide). This approach easily predicts the crystal structure of methane A, which contains 21 methane molecules (105 atoms) per unit cell. We demonstrate that this new approach also has a high potential for the study of complex inorganic crystals, as shown for the complex hydrogen storage material Mg(BH4)2 and elemental boron.
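The key idea of treating each molecular unit as a whole body can be illustrated with a short sketch: a candidate structure is then parameterized by the position and orientation of each rigid molecule (six degrees of freedom) instead of the coordinates of all its atoms. The following Python sketch, with an assumed water-like template and hypothetical helper names, shows how a uniformly random orientation is applied to such a rigid unit; it is not code from the paper.

```python
import numpy as np

def random_rotation_matrix(rng):
    """Uniform random 3D rotation built from a random unit quaternion."""
    u1, u2, u3 = rng.random(3)
    w = np.sqrt(1 - u1) * np.sin(2 * np.pi * u2)
    x = np.sqrt(1 - u1) * np.cos(2 * np.pi * u2)
    y = np.sqrt(u1) * np.sin(2 * np.pi * u3)
    z = np.sqrt(u1) * np.cos(2 * np.pi * u3)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

def place_rigid_molecule(template, center, rng):
    """Rotate a rigid molecular template as a whole body and translate it to 'center'.

    'template' is an (n_atoms, 3) array of coordinates relative to the molecular
    center; only 3 translational + 3 rotational degrees of freedom enter the
    search, instead of 3 * n_atoms atomic coordinates.
    """
    R = random_rotation_matrix(rng)
    return template @ R.T + center

# Toy example: a rigid water-like 3-atom template placed at a random position.
rng = np.random.default_rng(0)
water = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]])
print(place_rigid_molecule(water, rng.random(3) * 5.0, rng))
```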


Computer Physics Communications | 2014

ms2: A molecular simulation tool for thermodynamic properties, new version release

Colin W. Glass; Steffen Reiser; Gábor Rutkai; Stephan Deublein; Andreas Köster; Gabriela Guevara-Carrion; Amer Wafai; Martin Horsch; Martin Bernreuther; Thorsten Windmann; Hans Hasse; Jadran Vrabec

A new version release (2.0) of the molecular simulation tool ms2 [S. Deublein et al., Comput. Phys. Commun. 182 (2011) 2350] is presented. Version 2.0 of ms2 features a hybrid parallelization based on MPI and OpenMP for molecular dynamics simulation to achieve higher scalability. Furthermore, the formalism by Lustig [R. Lustig, Mol. Phys. 110 (2012) 3041] is implemented, allowing for a systematic sampling of Massieu potential derivatives in a single simulation run. Moreover, the Green–Kubo formalism is extended for the sampling of the electric conductivity and the residence time. To remove the restriction of the preceding version to electro-neutral molecules, Ewald summation is implemented to consider ionic long-range interactions. Finally, the sampling of the radial distribution function is added.

Program summary

Program title: ms2
Catalogue identifier: AEJF_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJF_v2_0.html
Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 50375
No. of bytes in distributed program, including test data, etc.: 345786
Distribution format: tar.gz
Programming language: Fortran90
Computer: The simulation program ms2 is usable on a wide variety of platforms, from single-processor machines to modern supercomputers.
Operating system: Unix/Linux
Has the code been vectorized or parallelized?: Yes: Message Passing Interface (MPI) protocol and OpenMP. Scalability is up to 2000 cores.
RAM: ms2 runs on single cores with 512 MB RAM. The memory demand rises with increasing number of cores used per node and increasing number of molecules.
Classification: 7.7, 7.9, 12
External routines: Message Passing Interface (MPI)
Catalogue identifier of previous version: AEJF_v1_0
Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 2350
Does the new version supersede the previous version?: Yes
Nature of problem: Calculation of application-oriented thermodynamic properties for fluids consisting of rigid molecules: vapor–liquid equilibria of pure fluids and multi-component mixtures, thermal and caloric data as well as transport properties.
Solution method: Molecular dynamics, Monte Carlo, various classical ensembles, grand equilibrium method, Green–Kubo formalism, Lustig formalism.
Reasons for new version: The source code was extended to introduce new features.
Summary of revisions: The new features of Version 2.0 include: hybrid parallelization based on MPI and OpenMP for molecular dynamics simulation; Ewald summation for long-range interactions; sampling of Massieu potential derivatives; extended Green–Kubo formalism for the sampling of the electric conductivity and the residence time; radial distribution function.
Restrictions: None. The system size is user-defined. Typical problems addressed by ms2 can be solved by simulating systems containing typically 1000–4000 molecules.
Unusual features: Auxiliary tools are available for creating input files, analyzing simulation results and visualizing molecular trajectories.
Additional comments: Sample makefiles for multiple operating platforms are provided. Documentation is provided with the installation package and is available at http://www.ms-2.de.
Running time: The running time of ms2 depends on the specified problem, the system size and the number of processes used in the simulation. E.g. running four processes on a “Nehalem” processor, simulations calculating vapor–liquid equilibrium data take between two and 12 hours, calculating transport properties between six and 24 hours. Note that the examples given above stand for the total running time as there is no post-processing of any kind involved in property calculations.
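The Green–Kubo formalism referred to above expresses transport coefficients as time integrals of equilibrium autocorrelation functions. As a minimal illustration of that route (not ms2 code, which is Fortran90 and also covers electric conductivity and residence times), the following Python sketch estimates the self-diffusion coefficient D = (1/3) ∫⟨v(0)·v(t)⟩ dt from stored velocities:

```python
import numpy as np

def self_diffusion_green_kubo(velocities, dt):
    """Estimate D = 1/3 * integral of the velocity autocorrelation function.

    velocities: array of shape (n_steps, n_particles, 3) from an equilibrium
    MD run; dt: time between stored samples. Bare-bones illustration of the
    Green-Kubo route, not ms2 code.
    """
    n_steps = velocities.shape[0]
    max_lag = n_steps // 2
    vacf = np.empty(max_lag)
    for lag in range(max_lag):
        # <v(t0) . v(t0 + lag)>, averaged over time origins and particles
        dots = np.sum(velocities[: n_steps - lag] * velocities[lag:], axis=-1)
        vacf[lag] = dots.mean()
    # Trapezoidal integration of the autocorrelation function.
    integral = dt * (vacf[0] / 2.0 + vacf[1:-1].sum() + vacf[-1] / 2.0)
    return integral / 3.0

# Toy usage with random, uncorrelated velocities just to show the call signature.
rng = np.random.default_rng(1)
v = rng.normal(size=(2000, 100, 3))
print(self_diffusion_green_kubo(v, dt=1.0))
```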


Journal of Chemical Theory and Computation | 2014

ls1 mardyn: The Massively Parallel Molecular Dynamics Code for Large Systems

Christoph Niethammer; Stefan Becker; Martin Bernreuther; Martin Buchholz; Wolfgang Eckhardt; Alexander Heinecke; Stephan Werth; Hans-Joachim Bungartz; Colin W. Glass; Hans Hasse; Jadran Vrabec; Martin Horsch

The molecular dynamics simulation code ls1 mardyn is presented. It is a highly scalable code, optimized for massively parallel execution on supercomputing architectures and currently holds the world record for the largest molecular simulation with over four trillion particles. It enables the application of pair potentials to length and time scales that were previously out of scope for molecular dynamics simulation. With an efficient dynamic load balancing scheme, it delivers high scalability even for challenging heterogeneous configurations. Presently, multicenter rigid potential models based on Lennard-Jones sites, point charges, and higher-order polarities are supported. Due to its modular design, ls1 mardyn can be extended to new physical models, methods, and algorithms, allowing future users to tailor it to suit their respective needs. Possible applications include scenarios with complex geometries, such as fluids at interfaces, as well as nonequilibrium molecular dynamics simulation of heat and mass transfer.
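For reference, the site-based models mentioned above are built from interactions such as the 12-6 Lennard-Jones potential. The sketch below evaluates that pair potential and the corresponding force magnitude in Python; the argon-like parameter values are generic illustrations, not taken from ls1 mardyn.

```python
import numpy as np

def lennard_jones(r, epsilon=0.996, sigma=3.405):
    """12-6 Lennard-Jones potential U(r) and force magnitude F(r) = -dU/dr.

    epsilon (kJ/mol) and sigma (Angstrom) are generic argon-like values,
    used here only for illustration.
    """
    sr6 = (sigma / r) ** 6
    u = 4.0 * epsilon * (sr6**2 - sr6)
    f = 24.0 * epsilon * (2.0 * sr6**2 - sr6) / r
    return u, f

r = np.linspace(3.0, 10.0, 5)
u, f = lennard_jones(r)
print(np.round(u, 4))   # well depth is reached near r = 2**(1/6) * sigma
print(np.round(f, 4))
```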


Nature | 2009

Erratum: Ionic high-pressure form of elemental boron

Artem R. Oganov; Jiuhua Chen; Carlo Gatti; Yanzhang Ma; Yanming Ma; Colin W. Glass; Zhenxian Liu; Tony Yu; Oleksandr O. Kurakevych; Vladimir L. Solozhenko

This corrects the article DOI: 10.1038/nature07736


international supercomputing conference | 2013

591 TFLOPS Multi-trillion Particles Simulation on SuperMUC

Wolfgang Eckhardt; Alexander Heinecke; Reinhold Bader; Matthias Brehm; Nicolay Hammer; Herbert Huber; Hans-Georg Kleinhenz; Jadran Vrabec; Hans Hasse; Martin Horsch; Martin Bernreuther; Colin W. Glass; Christoph Niethammer; Arndt Bode; Hans-Joachim Bungartz

Anticipating large-scale molecular dynamics (MD) simulations in nano-fluidics, we conduct performance and scalability studies of an optimized version of the code ls1 mardyn. We present our implementation, which requires only 32 bytes per molecule and allows us to run the, to our knowledge, largest MD simulation to date. Our optimizations tailored to the Intel Sandy Bridge processor are explained, including vectorization as well as shared-memory parallelization to make use of Hyper-Threading. Finally, we present results for weak and strong scaling experiments on up to 146,016 cores of SuperMUC at the Leibniz Supercomputing Centre, achieving a speed-up of 133k, which corresponds to an absolute performance of 591.2 TFLOPS.
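As a rough illustration of what a 32-byte-per-molecule representation can look like, the following NumPy sketch packs single-precision positions and velocities plus a 64-bit identifier into exactly 32 bytes. The concrete field layout is an assumption made for illustration only; the actual memory layout used in the paper may differ.

```python
import numpy as np

# Hypothetical 32-byte-per-molecule record in single precision; the real
# ls1 mardyn layout may differ.
molecule_dtype = np.dtype([
    ("position", np.float32, 3),   # 12 bytes
    ("velocity", np.float32, 3),   # 12 bytes
    ("id",       np.uint64),       #  8 bytes -> 32 bytes total
])

assert molecule_dtype.itemsize == 32

# One million molecules then occupy 32 MB of memory.
molecules = np.zeros(1_000_000, dtype=molecule_dtype)
print(molecules.nbytes)  # 32000000
```

At this size per molecule, multi-trillion-particle runs become a question of aggregate memory across the machine, which is why the per-molecule footprint is stated so prominently in the abstract.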


european conference on parallel processing | 2014

DASH: Data Structures and Algorithms with Support for Hierarchical Locality

Karl Fürlinger; Colin W. Glass; José Gracia; Andreas Knüpfer; Jie Tao; Denis Hünich; Kamran Idrees; Matthias Maiterth; Yousri Mhedheb; Huan Zhou

DASH is a realization of the PGAS (partitioned global address space) model in the form of a C++ template library. Operator overloading is used to provide global-view PGAS semantics without the need for a custom PGAS (pre-)compiler. The DASH library is implemented on top of our runtime system DART, which provides an abstraction layer on top of existing one-sided communication substrates. DART contains methods to allocate memory in the global address space as well as collective and one-sided communication primitives. To support the development of applications that exploit a hierarchical organization, either on the algorithmic or on the hardware level, DASH features the notion of teams that are arranged in a hierarchy. Based on a team hierarchy, the DASH data structures support locality iterators as a generalization of the conventional local/global distinction found in many PGAS approaches.
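At the heart of the global-view abstraction is the mapping from a global element index to the unit that owns the element and its local offset. The following Python sketch shows that index arithmetic for a simple block distribution; it is a conceptual illustration only, not the DASH C++ API.

```python
def block_owner(global_index, n_elements, n_units):
    """Map a global index to (unit, local_offset) under a block distribution.

    Illustrates the global-view/local-view distinction behind PGAS containers;
    this is not the DASH interface, just the underlying index arithmetic.
    """
    block = -(-n_elements // n_units)     # ceiling division: elements per unit
    return global_index // block, global_index % block

# Example: 10 elements over 4 units -> blocks of size 3 (last unit holds 1).
for g in range(10):
    print(g, block_owner(g, 10, 4))
```

Locality iterators generalize this: instead of asking only "is this element local or remote?", iteration can be organized along a team hierarchy so that elements owned by nearby units are visited preferentially.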


arXiv: Distributed, Parallel, and Cluster Computing | 2014

DART-MPI: An MPI-based Implementation of a PGAS Runtime System

Huan Zhou; Yousri Mhedheb; Kamran Idrees; Colin W. Glass; José Gracia; Karl Fürlinger

A Partitioned Global Address Space (PGAS) approach treats a distributed system as if the memory were shared on a global level. Given such a global view on memory, the user may program applications very much like shared memory systems. This greatly simplifies the tasks of developing parallel applications, because no explicit communication has to be specified in the program for data exchange between different computing nodes. In this paper we present DART, a runtime environment, which implements the PGAS paradigm on large-scale high-performance computing clusters. A specific feature of our implementation is the use of one-sided communication of the Message Passing Interface (MPI) version 3 (i.e. MPI-3) as the underlying communication substrate. We evaluated the performance of the implementation with several low-level kernels in order to determine overheads and limitations in comparison to the underlying MPI-3.
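The one-sided MPI-3 communication that DART-MPI builds on can be illustrated independently of DART itself. The sketch below uses mpi4py to expose a memory window on every rank and to write into rank 1's window from rank 0 with a passive-target put; run it with at least two MPI processes. It shows the underlying RMA mechanism only, not DART's own interface.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank exposes a small buffer as a window in the shared global address space.
local = np.zeros(4, dtype='i')
win = MPI.Win.Create(local, comm=comm)

if rank == 0 and comm.Get_size() > 1:
    data = np.arange(4, dtype='i')
    # Passive-target one-sided access: rank 1 posts no matching receive.
    win.Lock(1)
    win.Put([data, MPI.INT], 1)
    win.Unlock(1)

comm.Barrier()           # make sure the put has completed before rank 1 reads
if rank == 1:
    print("rank 1 window:", local)

win.Free()
```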

Collaboration


Dive into Colin W. Glass's collaborations.

Top Co-Authors

Artem R. Oganov

Skolkovo Institute of Science and Technology

Hans Hasse

Kaiserslautern University of Technology

José Gracia

University of Stuttgart

Martin Horsch

Kaiserslautern University of Technology
