Georg Hahn
Imperial College London
Publications
Featured research published by Georg Hahn.
Conference on Scientific Computing | 2016
Hristo Djidjev; Georg Hahn; Susan M. Mniszewski; Christian F. A. Negre; Anders M. N. Niklasson; Vivek Sardeshmukh
We study a graph partitioning problem motivated by the simulation of the physical movement of multi-body systems on an atomistic level, where the forces are calculated from a quantum mechanical description of the electrons. Several advanced algorithms have been published in the literature for such simulations that are based on evaluations of matrix polynomials. We aim at efficiently parallelizing these computations by using a special type of graph partitioning. For this, we represent the zero-nonzero structure of a thresholded matrix as a graph and partition that graph into several components. The matrix polynomial is then evaluated for each separate submatrix corresponding to the subgraphs and the evaluated submatrix polynomials are used to assemble the final result for the full matrix polynomial. The paper provides a rigorous definition as well as a mathematical justification of this partitioning problem. We use several algorithms to compute graph partitions and experimentally evaluate their performance with respect to the quality of the partition obtained with each method and the time needed to produce it.
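As a rough illustration of the evaluate-and-assemble step described above (not the paper's actual partitioning algorithm, and with hypothetical function names), the following sketch evaluates a matrix polynomial block by block when the thresholded sparsity graph splits into disconnected components; the paper's partitions are more general and may overlap.

```python
# Minimal sketch (assumption: the thresholded sparsity graph decomposes into
# disconnected components; the paper's partitioning is more general).
import numpy as np
import networkx as nx

def blockwise_polynomial(A, coeffs, threshold=1e-5):
    """Evaluate p(A) = coeffs[0]*I + coeffs[1]*A + ... per connected component
    of the thresholded zero-nonzero structure of A, then assemble the result."""
    n = A.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    rows, cols = np.nonzero(np.abs(A) > threshold)      # thresholded structure
    G.add_edges_from(zip(rows.tolist(), cols.tolist()))

    result = np.zeros_like(A, dtype=float)
    for comp in nx.connected_components(G):
        idx = np.array(sorted(comp))
        sub = A[np.ix_(idx, idx)]                       # submatrix of one part
        p = np.zeros_like(sub, dtype=float)             # Horner evaluation of p(sub)
        for c in reversed(coeffs):
            p = p @ sub + c * np.eye(len(idx))
        result[np.ix_(idx, idx)] = p                    # assemble the full result
    return result
```

The result is exact only if the entries dropped by the threshold are truly zero; otherwise it is the kind of controlled approximation the abstract refers to.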
computing frontiers | 2017
Guillaume Chapuis; Hristo Djidjev; Georg Hahn; Guillaume Rizk
This paper assesses the performance of the D-Wave 2X (DW) quantum annealer for finding a maximum clique in a graph, one of the most fundamental and important NP-hard problems. Because the size of the largest graphs DW can directly solve is quite small (usually around 45 vertices), we also consider decomposition algorithms intended for larger graphs and analyze their performance. For smaller graphs that fit DW, we provide formulations of the maximum clique problem as a quadratic unconstrained binary optimization (QUBO) problem, which is one of the two input types (together with the Ising model) accepted by the machine, and compare several quantum implementations to current classical algorithms such as simulated annealing, Gurobi, and third-party clique finding heuristics. We further estimate the contributions of the quantum phase of the quantum annealer and the classical post-processing phase typically used to enhance each solution returned by DW. We demonstrate that on random graphs that fit DW, no quantum speedup can be observed compared with the classical algorithms. On the other hand, for instances specifically designed to fit the DW qubit interconnection network well, we observe substantial speed-ups in computing time over classical approaches.
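For reference, one standard QUBO encoding of maximum clique (shown here with a generic penalty weight; the paper's exact formulation and embedding details may differ) rewards selecting vertices and penalizes selecting any non-adjacent pair:

```python
# Standard QUBO sketch for maximum clique: minimize -sum_i x_i + P * sum x_i x_j
# over non-edges (i, j), with penalty P > 1, so all selected vertices are adjacent.
import itertools
import networkx as nx

def max_clique_qubo(G, penalty=2.0):
    """Return the QUBO as {(i, j): weight} with i <= j; minimizing x^T Q x
    over binary x selects a maximum clique of G (integer node labels assumed)."""
    Q = {(v, v): -1.0 for v in G.nodes}                  # reward chosen vertices
    for u, v in itertools.combinations(G.nodes, 2):
        if not G.has_edge(u, v):                         # forbid non-adjacent pairs
            Q[(min(u, v), max(u, v))] = penalty
    return Q

# Example: for a 4-cycle the minimum value is -2, attained by any two adjacent vertices.
Q = max_clique_qubo(nx.cycle_graph(4))
```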
Scandinavian Journal of Statistics | 2016
Axel Gandy; Georg Hahn
We are concerned with a situation in which we would like to test multiple hypotheses with tests whose p-values cannot be computed explicitly but can be approximated using Monte Carlo simulation. This scenario occurs widely in practice. We are interested in obtaining the same rejections and non-rejections as the ones obtained if the p-values for all hypotheses had been available. The present article introduces a framework for this scenario by providing a generic algorithm for a general multiple testing procedure. We establish conditions that guarantee that the rejections and non-rejections obtained through Monte Carlo simulations are identical to the ones obtained with the p-values. Our framework is applicable to a general class of step-up and step-down procedures, which includes many established multiple testing corrections such as the ones of Bonferroni, Holm, Sidak, Hochberg or Benjamini–Hochberg. Moreover, we show how to use our framework to improve algorithms available in the literature in such a way as to yield theoretical guarantees on their results. These modifications can easily be implemented in practice and lead to a particular way of reporting multiple testing results as three sets together with an error bound on their correctness, illustrated using a real biological dataset.
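A highly simplified illustration of the "three sets" reporting for a plain Bonferroni correction (the article's algorithm is generic over step-up and step-down procedures and samples sequentially; function and parameter names here are hypothetical): hypotheses whose confidence interval for the unknown p-value lies entirely on one side of the threshold are decided, the rest remain undecided.

```python
# Sketch only: classify hypotheses into rejected / non-rejected / undecided
# using Clopper-Pearson intervals for Monte Carlo p-value estimates.
from scipy.stats import beta

def three_sets_bonferroni(exceedances, samples, alpha, conf=1e-3):
    """exceedances[i]: Monte Carlo samples at least as extreme as the observed
    statistic for hypothesis i; samples[i]: total samples drawn for it."""
    m = len(samples)
    threshold = alpha / m                                  # Bonferroni level
    rejected, non_rejected, undecided = set(), set(), set()
    for i, (k, n) in enumerate(zip(exceedances, samples)):
        # Clopper-Pearson interval of level 1 - conf for the unknown p-value.
        lo = beta.ppf(conf / 2, k, n - k + 1) if k > 0 else 0.0
        hi = beta.ppf(1 - conf / 2, k + 1, n - k) if k < n else 1.0
        if hi < threshold:
            rejected.add(i)
        elif lo > threshold:
            non_rejected.add(i)
        else:
            undecided.add(i)
    return rejected, non_rejected, undecided
```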
Statistics and Computing | 2017
Axel Gandy; Georg Hahn
Multiple hypothesis testing is widely used to evaluate scientific studies involving statistical tests. However, for many of these tests, p values are not available and are thus often approximated using Monte Carlo tests such as permutation tests or bootstrap tests. This article presents a simple algorithm based on Thompson Sampling to test multiple hypotheses. It works with arbitrary multiple testing procedures, in particular with step-up and step-down procedures. Its main feature is to sequentially allocate Monte Carlo effort, generating more Monte Carlo samples for tests whose decisions are so far less certain. A simulation study demonstrates that for a low computational effort, the new approach yields a higher power and a higher degree of reproducibility of its results than previously suggested methods.
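A toy sketch of the allocation idea under a plain Bonferroni threshold (the article's method works with arbitrary step-up and step-down procedures; all names here are made up): each unknown p-value gets a Beta posterior from its Monte Carlo counts, and the hypotheses whose rejection decision varies most across posterior draws receive the next batch of samples.

```python
# Sketch only: Thompson-sampling-style allocation of Monte Carlo effort.
import numpy as np

rng = np.random.default_rng(0)

def allocate_next_batch(exceedances, samples, alpha, n_draws=100, batch=10):
    """Pick the hypotheses that receive the next round of Monte Carlo samples."""
    exceedances = np.asarray(exceedances, dtype=float)
    samples = np.asarray(samples, dtype=float)
    threshold = alpha / len(samples)                       # Bonferroni threshold
    # Beta(1 + k, 1 + n - k) posterior for each unknown p-value.
    draws = rng.beta(1 + exceedances, 1 + samples - exceedances,
                     size=(n_draws, len(samples)))
    reject = draws <= threshold                            # decision per posterior draw
    rate = reject.mean(axis=0)
    instability = rate * (1 - rate)                        # high when still undecided
    return np.argsort(instability)[-batch:]                # most uncertain hypotheses
```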
AStA Advances in Statistical Analysis | 2018
Georg Hahn
Statistical discoveries are often obtained through multiple hypothesis testing. A variety of procedures exists to evaluate multiple hypotheses, for instance the ones of Benjamini–Hochberg, Bonferroni, Holm or Sidak. We are particularly interested in multiple testing procedures with two desired properties: (solely) monotonic and well-behaved procedures. This article investigates to what extent the classes of (monotonic or well-behaved) multiple testing procedures, in particular the subclasses of so-called step-up and step-down procedures, are closed under basic set operations, specifically the union, intersection, difference and the complement of sets of rejected or non-rejected hypotheses. The present article proves two main results: First, taking the union or intersection of arbitrary (monotonic or well-behaved) multiple testing procedures results in new procedures which are monotonic but not well-behaved, whereas the complement or difference generally preserves neither property. Second, the two classes of (solely monotonic or well-behaved) step-up and step-down procedures are closed under taking the union or intersection, but not the complement or difference.
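One natural way to read these set operations, assuming for illustration (not necessarily the article's exact notation) that a procedure $h$ maps a vector of p-values $p \in [0,1]^m$ to the set $h(p) \subseteq \{1,\dots,m\}$ of rejected hypotheses:

```latex
(h_1 \cup h_2)(p) = h_1(p) \cup h_2(p), \qquad
(h_1 \cap h_2)(p) = h_1(p) \cap h_2(p), \qquad
(h_1 \setminus h_2)(p) = h_1(p) \setminus h_2(p), \qquad
h^{\mathsf{c}}(p) = \{1,\dots,m\} \setminus h(p).
```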
SIAM Journal on Scientific Computing | 2017
Purnima Ghale; Matthew P. Kroonblawd; Susan M. Mniszewski; Christian F. A. Negre; Robert Pavel; Sergio Pino; Vivek Sardeshmukh; Guangjie Shi; Georg Hahn
Quantum-based molecular dynamics (QMD) is a highly accurate and transferable method for material science simulations. However, the time scales and system sizes accessible to QMD are typically limited to picoseconds and a few hundred atoms. These constraints arise due to expensive self-consistent ground-state electronic structure calculations that can often scale cubically with the number of atoms. Linearly scaling methods depend on computing the density matrix $\mathbf{P}$ from the Hamiltonian matrix $\mathbf{H}$ by exploiting the sparsity in both matrices. The second-order spectral projection (SP2) algorithm is an …
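For context, a dense-matrix sketch of the SP2 recursion named in the truncated abstract above (drawn from the general literature on second-order spectral projection, not from the paper itself; spectral bounds and the occupation count are assumed to be given, and the paper's sparse, graph-partitioned, task-parallel evaluation is not reproduced here):

```python
# Sketch of the SP2 recursion: approximate the density matrix P = theta(mu*I - H)
# with trace(P) = n_occ by repeated second-order polynomial projections.
import numpy as np

def sp2_density_matrix(H, n_occ, eps_min, eps_max, max_iter=100, tol=1e-10):
    n = H.shape[0]
    # Map the spectrum of H into [0, 1], with occupied (low-energy) states near 1.
    X = (eps_max * np.eye(n) - H) / (eps_max - eps_min)
    for _ in range(max_iter):
        X2 = X @ X
        if abs(np.trace(X2) - np.trace(X)) < tol:          # X is (nearly) idempotent
            break
        # Pick the projection that moves trace(X) toward the target occupation.
        X = X2 if np.trace(X) >= n_occ else 2 * X - X2
    return X
```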
Scandinavian Journal of Statistics | 2014
Axel Gandy; Georg Hahn
Journal of Signal Processing Systems | 2018
Guillaume Chapuis; Hristo Djidjev; Georg Hahn; Guillaume Rizk
arXiv: Methodology | 2016
Dong Ding; Axel Gandy; Georg Hahn
arXiv: Computation | 2015
Axel Gandy; Georg Hahn