Marcello Benedetti
Ames Research Center
Publications
Featured research published by Marcello Benedetti.
Physical Review A | 2016
Marcello Benedetti; John Realpe-Gómez; Rupak Biswas; Alejandro Perdomo-Ortiz
An increase in the efficiency of sampling from Boltzmann distributions would have a significant impact on deep learning and other machine-learning applications. Recently, quantum annealers have been proposed as a potential candidate to speed up this task, but several limitations still bar these state-of-the-art technologies from being used effectively. One of the main limitations is that, while the device may indeed sample from a Boltzmann-like distribution, quantum dynamical arguments suggest it will do so with an instance-dependent effective temperature, different from its physical temperature. Unless this unknown temperature can be unveiled, it might not be possible to effectively use a quantum annealer for Boltzmann sampling. In this work, we propose a strategy to overcome this challenge with a simple effective-temperature estimation algorithm. We provide a systematic study assessing the impact of the effective temperatures in the learning of a special class of a restricted Boltzmann machine embedded on quantum hardware, which can serve as a building block for deep-learning architectures. We also provide a comparison to k-step contrastive divergence (CD-k) with k up to 100. Although assuming a suitable fixed effective temperature also allows us to outperform one-step contrastive divergence (CD-1), only when using an instance-dependent effective temperature do we find a performance close to that of CD-100 for the case studied here.
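As a rough illustration of the training loop this abstract describes, the sketch below trains a small restricted Boltzmann machine with CD-k and marks where an estimated effective inverse temperature would rescale the parameters before sampling. The layer sizes, toy data, and the fixed `beta_eff` value are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden, k, lr = 6, 4, 10, 0.05
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)

# Hypothetical effective inverse temperature; in the setting described above
# this would come from an instance-dependent estimation step, not a fixed guess.
beta_eff = 0.8

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_h(v):
    p = sigmoid(beta_eff * (v @ W + b_h))   # rescaled by the effective temperature
    return (rng.random(p.shape) < p).astype(float), p

def sample_v(h):
    p = sigmoid(beta_eff * (h @ W.T + b_v))
    return (rng.random(p.shape) < p).astype(float), p

data = (rng.random((32, n_visible)) < 0.5).astype(float)  # toy binary data

for epoch in range(100):
    v0 = data
    h0, p_h0 = sample_h(v0)
    vk, hk = v0, h0
    for _ in range(k):                      # k-step contrastive divergence (CD-k)
        vk, _ = sample_v(hk)
        hk, p_hk = sample_h(vk)
    # Gradient approximation: data statistics minus model statistics
    W   += lr * (v0.T @ p_h0 - vk.T @ p_hk) / len(data)
    b_v += lr * (v0 - vk).mean(axis=0)
    b_h += lr * (p_h0 - p_hk).mean(axis=0)
```

In the quantum-assisted setting, the model samples would come from annealer readouts rather than from the Gibbs chain above, and the effective temperature would be re-estimated during training rather than held fixed.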
Physical Review X | 2017
Marcello Benedetti; John Realpe-Gómez; Rupak Biswas; Alejandro Perdomo-Ortiz
arXiv: Quantum Physics | 2018
Marcello Benedetti; John Realpe-Gómez; Alejandro Perdomo-Ortiz
Archive | 2015
Marcello Benedetti; John Realpe-Gómez; Rupak Biswas; Alejandro Perdomo-Ortiz
arXiv: Quantum Physics | 2018
Alejandro Perdomo-Ortiz; Marcello Benedetti; John Realpe-Gómez; Rupak Biswas
Bulletin of the American Physical Society | 2017
John Realpe-Gómez; Marcello Benedetti; Rupak Biswas; Alejandro Perdomo-Ortiz
arXiv: Quantum Physics | 2018
Marcello Benedetti; Delfina Garcia-Pintos; Yunseong Nam; Alejandro Perdomo-Ortiz
Archive | 2015
Marcello Benedetti; John Realpe-Gómez; Rupak Biswas; Alejandro Perdomo-Ortiz
Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful generative unsupervised machine-learning models. Here we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning; it also mitigates the effect of noise in the control parameters, making it robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.
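A minimal sketch of the redundancy idea mentioned above, assuming a hypothetical embedding in which each logical binary variable is copied onto a short chain of physical qubits and decoded by majority vote; the chain length, noise rate, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
chain_length = 3          # physical qubits per logical variable (assumption)

def embed(logical_bits):
    """Replicate each logical bit across a chain of physical qubits."""
    return np.repeat(logical_bits, chain_length, axis=-1)

def unembed(physical_bits):
    """Recover logical bits by majority vote over each chain."""
    chains = physical_bits.reshape(*physical_bits.shape[:-1], -1, chain_length)
    return (chains.mean(axis=-1) > 0.5).astype(int)

logical = rng.integers(0, 2, size=(4, 8))    # 4 samples of 8 logical bits
physical = embed(logical)                    # 4 x 24 physical qubits

# Simulate read-out noise flipping a small fraction of physical qubits
noise = rng.random(physical.shape) < 0.1
readout = np.where(noise, 1 - physical, physical)

recovered = unembed(readout)
print((recovered == logical).mean())         # fraction of logical bits recovered
```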
arXiv: Quantum Physics | 2018
Marcello Benedetti; Edward Grant; Leonard Wossnig; Simone Severini
Machine learning has been presented as one of the key applications for near-term quantum technologies, given its high commercial value and wide range of applicability. In this work, we introduce the quantum-assisted Helmholtz machine: a hybrid quantum-classical framework with the potential of tackling high-dimensional real-world machine learning datasets on continuous variables. Instead of using quantum computers only to assist deep learning, as previous approaches have suggested, we use deep learning to extract a low-dimensional binary representation of data, suitable for processing on relatively small quantum computers. Then, the quantum hardware and deep learning architecture work together to train an unsupervised generative model. We demonstrate this concept using 1644 quantum bits of a D-Wave 2000Q quantum device to model a sub-sampled version of the MNIST handwritten digit dataset with 16x16 continuous-valued pixels. Although we illustrate this concept on a quantum annealer, adaptations to other quantum platforms, such as ion-trap technologies or superconducting gate-model architectures, could be explored within this flexible framework.
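A rough sketch of the classical side of the pipeline described above: continuous data vectors are compressed to low-dimensional binary codes that a small quantum device could then model. Here a random-projection-plus-threshold stand-in replaces the learned deep encoder, and all sizes and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

n_pixels, n_code = 256, 16              # 16x16 image flattened; 16-bit code (assumptions)
images = rng.random((100, n_pixels))    # stand-in for sub-sampled pixel data in [0, 1]

# Stand-in encoder: a fixed random projection followed by a sign threshold.
# In the framework described above, this role is played by a learned deep network.
projection = rng.standard_normal((n_pixels, n_code))

def encode(x):
    """Map continuous pixels to a binary code suitable for a small quantum device."""
    return (x @ projection > 0).astype(int)

codes = encode(images)                  # 100 x 16 binary vectors
print(codes.shape)

# The generative model over `codes` would be trained on quantum hardware, and a
# decoder network would map sampled codes back to pixel space.
```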
Archive | 2018
Radu Serban; Max Wilson; Marcello Benedetti; John Realpe-Gómez; Alejandro Perdomo-Ortiz; Andre Petukhov; Paramsothy Jayakumar