O. J. E. Maroney
University of Oxford
Publications
Featured research published by O. J. E. Maroney.
Physical Review Letters | 2013
Matthew S. Leifer; O. J. E. Maroney
We examine the relationship between quantum contextuality (in both the standard Kochen-Specker sense and in the generalized sense proposed by Spekkens) and models of quantum theory in which the quantum state is maximally epistemic. We find that preparation noncontextual models must be maximally epistemic, and these in turn must be Kochen-Specker noncontextual. This implies that the Kochen-Specker theorem is sufficient to establish both the impossibility of maximally epistemic models and the impossibility of preparation noncontextual models. The implication from preparation noncontextual models to maximally epistemic models then also yields a proof of Bell's theorem from an Einstein-Podolsky-Rosen-like argument.
Physical Review E | 2009
O. J. E. Maroney
In a recent paper [Stud. Hist. Philos. Mod. Phys. 36, 355 (2005)] it is argued that to properly understand the thermodynamics of Landauer's principle it is necessary to extend the concept of logical operations to include indeterministic operations. Here we examine the thermodynamics of such operations in more detail, extending the work of Landauer to include indeterministic operations and to include logical states with variable entropies, temperatures, and mean energies. We derive the most general statement of Landauer's principle and prove its universality, extending considerably the validity of previous proofs. This confirms earlier conjectures that all logical operations may, in principle, be performed in a thermodynamically reversible fashion, although logically irreversible operations would require special, practically rather difficult, conditions to do so. We demonstrate a physical process that can perform any computation without work requirements or heat exchange with the environment. Many widespread statements of Landauer's principle are shown to be special cases of our generalized principle.
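For orientation, the following is a minimal numerical sketch of the familiar special case of Landauer's principle that the generalized statement extends: heat dissipated to a reservoir at temperature T is bounded below by k_B T ln 2 times the reduction in the Shannon entropy of the logical states. The function names and the 300 K figure are illustrative and not taken from the paper.

```python
# Minimal sketch (not the paper's full generalized statement): the familiar
# special case in which heat dissipated to a reservoir at temperature T is
# bounded below by k_B * T * ln(2) * (H_in - H_out), with H the Shannon
# entropy (in bits) of the logical state before and after the operation.
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def shannon_bits(p):
    """Shannon entropy in bits of a probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def landauer_bound(p_in, p_out, T=300.0):
    """Lower bound (in joules) on heat dissipated by a logical operation
    mapping input distribution p_in to output distribution p_out."""
    return K_B * T * np.log(2) * (shannon_bits(p_in) - shannon_bits(p_out))

# Erasing one unbiased bit (RESET): 1 bit -> 0 bits of logical entropy.
print(landauer_bound([0.5, 0.5], [1.0, 0.0]))   # ~2.87e-21 J at 300 K

# An indeterministic operation that *increases* logical entropy gives a
# negative bound, i.e. it may in principle absorb heat from the reservoir.
print(landauer_bound([1.0, 0.0], [0.5, 0.5]))   # ~-2.87e-21 J
```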
Proceedings of the National Academy of Sciences of the United States of America | 2013
Richard E. George; Lucio Robledo; O. J. E. Maroney; Machiel Blok; Hannes Bernien; Matthew Markham; Daniel Twitchen; John J. L. Morton; G. Andrew D. Briggs; R. Hanson
One of the most striking features of quantum mechanics is the profound effect exerted by measurements alone. Sophisticated quantum control is now available in several experimental systems, exposing discrepancies between quantum and classical mechanics whenever measurement induces disturbance of the interrogated system. In practice, such discrepancies may frequently be explained as the back-action required by quantum mechanics adding quantum noise to a classical signal. Here, we implement the “three-box” quantum game [Aharonov Y, et al. (1991) J Phys A Math Gen 24(10):2315–2328] by using state-of-the-art control and measurement of the nitrogen vacancy center in diamond. In this protocol, the back-action of quantum measurements adds no detectable disturbance to the classical description of the game. Quantum and classical mechanics then make contradictory predictions for the same experimental procedure; however, classical observers are unable to invoke measurement-induced disturbance to explain the discrepancy. We quantify the residual disturbance of our measurements and obtain data that rule out any classical model by ≳7.8 standard deviations, allowing us to exclude the property of macroscopic state definiteness from our system. Our experiment is then equivalent to the test of quantum noncontextuality [Kochen S, Specker E (1967) J Math Mech 17(1):59–87] that successfully addresses the measurement detectability loophole.
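For context, the textbook three-box calculation (not the NV-centre protocol itself) can be reproduced in a few lines: with the usual pre- and post-selected states, the Aharonov-Bergmann-Lebowitz (ABL) rule assigns probability 1 to finding the ball in box 1 if box 1 alone is opened, and likewise for box 2. The states and formula below are the standard ones; everything else is illustrative.

```python
# Sketch of the standard "three-box" calculation: with pre-selection
# |psi> = (|1>+|2>+|3>)/sqrt(3) and post-selection |phi> = (|1>+|2>-|3>)/sqrt(3),
# the ABL rule gives probability 1 of finding the ball in box 1 if box 1 is
# opened, and likewise for box 2.
import numpy as np

psi = np.array([1, 1, 1]) / np.sqrt(3)   # pre-selected state
phi = np.array([1, 1, -1]) / np.sqrt(3)  # post-selected state

def abl_probability(box):
    """ABL probability of finding the particle in `box` (index 0, 1 or 2),
    conditioned on successful post-selection."""
    P = np.zeros((3, 3)); P[box, box] = 1.0        # projector onto that box
    found = abs(phi @ (P @ psi)) ** 2
    not_found = abs(phi @ ((np.eye(3) - P) @ psi)) ** 2
    return found / (found + not_found)

print(abl_probability(0))  # 1.0 -> certain to be found in box 1
print(abl_probability(1))  # 1.0 -> certain to be found in box 2
print(abl_probability(2))  # 0.2
```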
Physical Review Letters | 2014
Jonathan Barrett; Eric G. Cavalcanti; Raymond Lal; O. J. E. Maroney
According to a recent no-go theorem [M. Pusey, J. Barrett and T. Rudolph, Nat. Phys. 8, 475 (2012)], models in which quantum states correspond to probability distributions over the values of some underlying physical variables must have the following feature: the distributions corresponding to distinct quantum states do not overlap. In such a model, it cannot coherently be maintained that the quantum state merely encodes information about underlying physical variables. The theorem, however, considers only models in which the physical variables corresponding to independently prepared systems are independent, and this has been used to challenge the conclusions of that work. Here we consider models that are defined for a single quantum system of dimension d, such that the independence condition does not arise, and derive an upper bound on the extent to which the probability distributions can overlap. In particular, models in which the quantum overlap between pure states is equal to the classical overlap between the corresponding probability distributions cannot reproduce the quantum predictions in any dimension d ≥ 3. Thus any ontological model for quantum theory must postulate some extra principle, such as a limitation on the measurability of physical variables, to explain the indistinguishability of quantum states. Moreover, we show that as d→∞, the ratio of classical and quantum overlaps goes to zero for a class of states. The result is noise tolerant, and an experiment is motivated to distinguish the class of models ruled out from quantum theory.
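As a rough illustration of the quantities involved, the sketch below computes a quantum overlap using a common convention, namely the optimal single-shot probability of confusing two pure states; the paper's own definitions should be consulted for the precise statement, and the states chosen are arbitrary examples.

```python
# Illustrative sketch: the "quantum overlap" in arguments of this kind is
# commonly taken as the optimal single-shot probability of confusing two
# pure states, omega_Q = 1 - sqrt(1 - |<psi|phi>|^2). Any ontological model
# assigns a classical overlap omega_C <= omega_Q; the result discussed above
# forces omega_C strictly below omega_Q for d >= 3.
import numpy as np

def quantum_overlap(psi, phi):
    psi = psi / np.linalg.norm(psi)
    phi = phi / np.linalg.norm(phi)
    fidelity = abs(np.vdot(psi, phi)) ** 2
    return 1.0 - np.sqrt(1.0 - fidelity)

# Two states of a d = 3 system (chosen only as an example):
psi = np.array([1, 0, 0], dtype=complex)
phi = np.array([1, 1, 1], dtype=complex) / np.sqrt(3)
print(quantum_overlap(psi, phi))  # ~0.184

# A model in which the classical overlap equals this value for all pairs of
# pure states is the kind of model the theorem rules out for d >= 3.
```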
Archive | 2004
Ta Barrass; Y Wu; I N Semeniouk; D. Bonacorsi; D. M. Newbold; L. Tuura; T. Wildish; C. Charlot; N. De Filippis; S. Metson; I. Fisk; Jose M Hernandez; C. Grandi; A Afaq; J. Rehn; O. J. E. Maroney; K. Rabbertz; W Jank; P. Garcia-Abia; M Ernst; A. Fanfani
CMS currently uses a number of tools to transfer data which, taken together, form the basis of a heterogeneous datagrid. The range of tools used, and the directed, rather than optimized, nature of CMS's recent large-scale data challenge required the creation of a simple infrastructure that allowed a range of tools to operate in a complementary way. The system created comprises a hierarchy of simple processes (named ‘agents’) that propagate files through a number of transfer states. File locations and some application metadata were stored in POOL file catalogues, with LCG LRC or MySQL back-ends. Agents were assigned limited responsibilities, and were restricted to communicating state in a well-defined, indirect fashion through a central transfer management database. In this way, the task of distributing data was easily divided between different groups for implementation. The prototype system was developed rapidly, and achieved the required sustained transfer rate of ~10 MBps, with O(10) files distributed to 6 sites from CERN. Experience with the system during the data challenge raised issues with the underlying technology (MSS write/read, stability of the LRC, maintenance of file catalogues, synchronization of filespaces), all of which have been successfully identified and handled. The development of this prototype infrastructure allows us to plan the evolution of backbone CMS data distribution from a simple hierarchy to a more autonomous, scalable model drawing on emerging agent and grid technology.
DATA DISTRIBUTION FOR CMS
The Compact Muon Solenoid (CMS) experiment at the LHC will produce petabytes of data a year [1]. This data is then to be distributed to multiple sites which form a hierarchical structure based on available resources: the detector is associated with a Tier 0 site; Tier 1 sites are typically large national computing centres; and Tier 2 sites are institutes with a more restricted availability of resources and/or services. A core set of Tier 1 sites with large tape, disk and network resources will receive raw and reconstructed data to safeguard against data loss at CERN. Smaller sites, associated with certain analysis groups or universities, will also subscribe to certain parts of the data. Sites at all levels will be involved in producing Monte Carlo data for comparison with detector data. At the Tier 0 the raw experiment data undergoes a process called reconstruction, in which it is restructured to represent physics objects. This data will be grouped hierarchically by stream and dataset based on physics content, then further subdivided by finer-granularity metadata. There are therefore three main use cases for distribution in CMS. The first can be described as a push with high priority, in which raw data is replicated to tape at Tier 1s. The second is a subscription pull, where a site subscribes to all data in a given set and data is transferred as it is produced; this use case corresponds to a site registering an interest in the data produced by an ongoing Monte Carlo simulation. The third is a random pull, where a site or individual physicist simply wishes to replicate an extant dataset in a one-off transfer. Although these use cases are discussed here in terms of push and pull, these can be slightly misleading descriptions.
The key point is the effective handover of responsibility for replication between distribution components; for example, it is necessary to determine whether a replica has been created safely in a Tier 1 tape store before being able to delete it from a buffer at the source. This handover is enabled with well-defined handshakes or exchanges of state messages between distribution components. The conceptual basis of data distribution for CMS is then distribution through a hierarchy of sites, with smaller sites associating themselves to larger ones by subscribing to some subset of the data stored at the larger site. The management of this data poses two overall problems. The first problem is that sustained transfers at the 100+ MBps estimated for CMS alone are currently only approached by existing experiments. The second problem is one of managing the logistics of subscription transfer based on metadata at granularities between high
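The agent-plus-central-database pattern described above can be caricatured in a few lines: each agent owns one step of the transfer chain and communicates only by updating a file's state in a shared table. The state names and schema below are invented for illustration and are not the actual transfer management database layout.

```python
# Toy sketch of the pattern described above: independent "agents" each own
# one step of the transfer chain and communicate only by updating a file's
# state in a central transfer-management database. State names and schema
# are invented for illustration.
import sqlite3

STATES = ["registered", "staged", "transferring", "validated", "done"]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE transfers (lfn TEXT PRIMARY KEY, state TEXT, dest TEXT)")
db.execute("INSERT INTO transfers VALUES ('/store/raw/run001.root', 'registered', 'T1_UK')")

def agent(handles, produces):
    """An agent promotes every file it finds in state `handles` to `produces`.
    It never talks to other agents directly, only to the central table."""
    rows = db.execute("SELECT lfn FROM transfers WHERE state = ?", (handles,)).fetchall()
    for (lfn,) in rows:
        # ... a real agent would stage, copy or checksum the file here ...
        db.execute("UPDATE transfers SET state = ? WHERE lfn = ?", (produces, lfn))
    db.commit()

# Each agent runs independently (in practice as a separate polling process);
# the handover of responsibility is simply the state change in the table.
for handles, produces in zip(STATES[:-1], STATES[1:]):
    agent(handles, produces)

print(db.execute("SELECT * FROM transfers").fetchall())
# [('/store/raw/run001.root', 'done', 'T1_UK')]
```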
Foundations of Physics | 1999
O. J. E. Maroney; B. J. Hiley
Quantum state teleportation has focused attention on the role of quantum information. Here we examine quantum teleportation through the Bohm interpretation. This interpretation introduced the notion of active information and we show that it is this information that is exchanged during teleportation. We discuss the relation between our notion of active information and the notion of quantum information introduced by Schumacher.
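For readers unfamiliar with the protocol under discussion, the standard teleportation scheme itself (not its Bohmian analysis) can be written out as plain linear algebra; the input state below is an arbitrary example.

```python
# Sketch of the standard teleportation protocol in plain linear algebra
# (the textbook protocol the paper analyses, not its Bohmian treatment).
import numpy as np

# Unknown state to teleport: |psi> = a|0> + b|1> on Alice's qubit A.
a, b = 0.6, 0.8j
psi = np.array([a, b])

# Qubits B (Alice) and C (Bob) share a Bell pair (|00> + |11>)/sqrt(2).
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(psi, bell)                          # 8-dim state on A, B, C

# Bell basis on qubits A, B (rows: Phi+, Phi-, Psi+, Psi-).
bell_basis = np.array([[1, 0, 0, 1], [1, 0, 0, -1],
                       [0, 1, 1, 0], [0, 1, -1, 0]]) / np.sqrt(2)

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
corrections = [np.eye(2), Z, X, Z @ X]              # Bob's fix-up per outcome

for outcome in range(4):
    proj = np.kron(bell_basis[outcome], np.eye(2))  # project A, B; keep C
    c_state = corrections[outcome] @ (proj @ state) # Bob applies the correction
    c_state /= np.linalg.norm(c_state)
    print(outcome, np.allclose(c_state, psi))       # True: |psi> now on C
```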
Foundations of Physics | 2010
O. J. E. Maroney
Schulman (Entropy 7(4):221–233, 2005) has argued that Boltzmann’s intuition, that the psychological arrow of time is necessarily aligned with the thermodynamic arrow, is correct. Schulman gives an explicit physical mechanism for this connection, based on the brain being representable as a computer, together with certain thermodynamic properties of computational processes. Hawking (Physical Origins of Time Asymmetry, Cambridge University Press, Cambridge, 1994) presents similar, if briefer, arguments. The purpose of this paper is to critically examine the support for the link between thermodynamics and an arrow of time for computers. The principal arguments put forward by Schulman and Hawking will be shown to fail. It will be shown that any computational process that can take place in an entropy-increasing universe can equally take place in an entropy-decreasing universe. This conclusion does not automatically imply that a psychological arrow can run counter to the thermodynamic arrow. Some alternative possible explanations for the alignment of the two arrows will be briefly discussed.
Studies in History and Philosophy of Modern Physics | 2017
O. J. E. Maroney
Quantum pre- and post-selection (PPS) paradoxes occur when counterfactual inferences are made about different measurements that might have been performed, between two measurements that are actually performed. The 3 box paradox is the paradigm example of such a paradox, where a ball is placed in one of three boxes and it is inferred that it would have been found, with certainty, both in box 1 and in box 2 had either box been opened on its own. Precisely what is at stake in PPS paradoxes has been unclear, and classical models have been suggested which are supposed to mimic the essential features of the problem. We show that the essential difference between the classical and quantum pre- and post-selection effects lies in the fact that for a quantum PPS paradox to occur the intervening measurement, had it been performed, would need to be invasive but non-detectable. This invasiveness is required even for null result measurements. While some quasi-classical features (such as non-contextuality and macrorealism) are compatible with PPS paradoxes, it seems no fully classical model of the 3 box paradox is possible.
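The invasiveness of null-result measurements can be made concrete with the standard 3 box states: opening box 3 and finding it empty still changes the post-selection statistics. The calculation below is only an illustration with the usual states, not the paper's general argument.

```python
# Sketch (standard 3 box states): a null-result opening of box 3 still
# disturbs the quantum state, changing the post-selection statistics even
# though nothing was "seen".
import numpy as np

psi = np.array([1, 1, 1]) / np.sqrt(3)   # pre-selected state
phi = np.array([1, 1, -1]) / np.sqrt(3)  # post-selected state

# No intermediate measurement: probability of successful post-selection.
p_no_meas = abs(phi @ psi) ** 2                       # 1/9

# Open box 3 and find it empty (null result): project onto "not box 3".
P_not3 = np.diag([1.0, 1.0, 0.0])
collapsed = P_not3 @ psi
p_null = np.linalg.norm(collapsed) ** 2               # 2/3
collapsed /= np.linalg.norm(collapsed)
p_post_given_null = abs(phi @ collapsed) ** 2         # 2/3

print(p_no_meas)                    # ~0.111
print(p_null * p_post_given_null)   # ~0.444: the null result was invasive
```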
Foundations of Physics | 2005
O. J. E. Maroney
If the density matrix is treated as an objective description of individual systems, it may become possible to attribute the same objective significance to statistical mechanical properties, such as entropy or temperature, as to properties such as mass or energy. It is shown that the de Broglie-Bohm interpretation of quantum theory can be consistently applied to density matrices as a description of individual systems. The resultant trajectories are examined for the case of the delayed choice interferometer, for which Bell [Int. J. Quantum Chem. 155–159 (1980)] appears to suggest that such an interpretation is not possible. Bell’s argument is shown to be based upon a different understanding of the density matrix from that proposed here.
grid computing | 2004
S. Burke; F. J. Harris; Ian Stokes-Rees; I. Augustin; F. Carminati; J. Closier; E. van Herwijnen; A. Sciaba; D Boutigny; J. J. Blaising; Vincent Garonne; A. Tsaregorodtsev; Paolo Capiluppi; A. Fanfani; C. Grandi; R. Barbera; E. Luppi; Guido Negri; L. Perini; S. Resconi; M. Reale; A. De Salvo; S. Bagnasco; P. Cerello; Kors Bos; D.L. Groep; W. van Leeuwen; Jeffrey Templon; Oxana Smirnova; O. J. E. Maroney
An overview is presented of the characteristics of HEP computing and its mapping to the Grid paradigm. This is followed by a synopsis of the main experiences and lessons learned by HEP experiments in their use of DataGrid middleware using both the EDG application testbed and the LCG production service. Particular reference is made to experiment ‘data challenges’, and a forward look is given to necessary developments in the framework of the EGEE project.