Unifying Hidden-Variable Problems from Quantum Mechanics by Logics of Dependence and Independence
Rafael Albert
RWTH Aachen University, [email protected]
Erich Grädel
RWTH Aachen University, [email protected]
Abstract
We study hidden-variable models from quantum mechanics, and their abstractions in purely probabilistic and relational frameworks, by means of logics of dependence and independence, based on team semantics. We show that common desirable properties of hidden-variable models can be defined in an elegant and concise way in dependence and independence logic. The relationship between different properties, and their simultaneous realisability, can thus be formulated and proved on a purely logical level, as problems of entailment and satisfiability of logical formulae. Connections between probabilistic and relational entailment in dependence and independence logic allow us to simplify proofs. In many cases, we can establish results on both probabilistic and relational hidden-variable models by a single proof, because one case implies the other, depending on purely syntactic criteria. We also discuss the ‘no-go’ theorems by Bell and Kochen-Specker and provide a purely logical variant of the latter, introducing non-contextual choice as a team-semantical property.
2012 ACM Subject Classification Theory of computation → Logic
Keywords and phrases
Hidden-variables, logics of dependence and independence, relational versus probabilistic team semantics, Kochen-Specker Theorem, Bell’s Theorem
Hidden-variable models have been proposed since the 1920s as an alternative to the dominant interpretation of quantum mechanics, later called the Copenhagen interpretation, with the goal to explain and remove counterintuitive aspects of quantum mechanics. In particular, according to the Copenhagen interpretation, quantum systems behave probabilistically rather than deterministically, non-local interactions between agents or particles that are widely separated in space are possible through entanglement, and there is an unavoidable dependence between an observer of a quantum mechanical system and the observed properties. Due to such features, Einstein, Podolsky, and Rosen [15] considered a quantum mechanical state an ‘incomplete’ description of physical reality. The basic idea of hidden-variable models is to ‘complete’ quantum mechanics by adding unobservable, ‘hidden’, variables to the description of a system, to obtain models that are consistent with the predictions of quantum mechanics, but which do not exhibit counterintuitive behavior such as non-determinism.

The question to what extent hidden-variable models can indeed explain quantum mechanical effects in a satisfactory way has been studied for a long time by many researchers. John von Neumann [30], who coined the term ‘hidden-variable models’, claimed to have established the impossibility of such an endeavour:

  It should be noted that we need not go any further into the mechanism of the ‘hidden parameters’, since we now know that the established results of quantum mechanics can never be re-derived with their help.
However, this claim was not generally accepted. Some of the most outspoken criticism came from Grete Hermann, David Mermin, and John Bell [2], who even went as far as calling von Neumann’s proof “not merely false, but foolish” (see [10]). Further, David Bohm and Louis de Broglie developed non-standard, deterministic interpretations of quantum mechanics using hidden variables, referred to as Bohmian mechanics or de Broglie-Bohm theory [6, 7, 13].

Many different variants of the desirable properties of determinism, locality, and independence of hidden-variable models have been studied. A comprehensive survey of such properties has been given in an influential paper by Brandenburger and Yanofsky [9], in an abstract and purely probabilistic framework that does not make explicit reference to quantum mechanics. It makes precise the relationship between the different properties and discusses the question which combinations of them can be realised simultaneously in a probabilistic hidden-variable model. Indeed, the famous ‘no-go’ theorems of quantum mechanics, such as the ones by Bell [4] and Kochen-Specker [25], imply that there are severe limitations for this, and thus for the hidden-variable programme in general. However, certain interesting combinations of properties are jointly realisable, and the work of Brandenburger and Yanofsky [9] gives a detailed account of what is possible in a probabilistic setting.

A further important step for the understanding of hidden-variable phenomena has been Abramsky’s proposal of a purely relational (rather than probabilistic) framework [1] for hidden-variable models, with discrete analogues of the probabilistic dependence and independence properties studied in [9]. He showed that the main structure of the theory is preserved under this simplification.
Abramsky’s work opens the possibility to study hidden-variable questions on a much more general level, leaving aside quantum mechanical details.

We propose here a study of hidden-variable properties by means of modern logics of dependence and independence. These logics are based on team semantics, introduced by Hodges [24]. While a classical logical formula (from first-order logic, for instance) is evaluated for a single assignment, mapping its free variables to values in some mathematical structure, team semantics evaluates a formula for a set of such assignments, called a team. Further, while previous formalisms for studying dependence and independence (such as Henkin quantifiers or independence-friendly logic) had modeled dependencies by special quantifiers, the modern dependence and independence logics treat them, following a proposal by Väänänen [29], as atomic properties of teams. While these logics are syntactically very simple, extending the atomic team properties by the standard first-order operators $\vee, \wedge, \exists$ and $\forall$, they are semantically rather powerful. Indeed, team semantics admits the manipulation of second-order objects by first-order syntax and, in fact, independence logic [19] has the full expressive power of existential second-order logic and can thus define all team properties in the complexity class NP [16].

There are several reasons why logics of dependence and independence are natural tools for reasoning about hidden-variable models. First of all, the desirable properties of hidden-variable models, in particular the ones studied in [1] and [9], are obviously properties of dependence and independence. We shall see that in the logics we are using, their definition is extremely simple and transparent, mostly just conjunctions of dependence or independence atoms. Second, the models studied in hidden-variable theories, both empirical models and hidden-variable models, and both in the relational and the probabilistic setting, can readily be understood and presented as teams.
This means that formulae in dependence and independence logic can be directly evaluated on the models we are interested in. Moreover, it turns out that operators on the team side are naturally compatible with the structure of hidden-variable models – for instance, the connection between probabilistic and relational models corresponds directly to the relationship between relational and probabilistic team semantics.

Although our study does not establish new results on quantum mechanical hidden-variable models as such, we think that our spelling out of the connections between logics with team semantics and hidden-variable models in detail is useful and provides the following contributions:

We show that the properties of empirical and hidden-variable models can be defined in a very concise and elegant way in logics of dependence and independence. This also helps to make implicit assumptions in the definition of such properties explicit.

Our team-semantical framework enables a full unification of the probabilistic and relational theory. In fact, for all the properties that we study, the same formula can be used for both settings, evaluated over relational teams in the first case, and over probabilistic teams in the second case.

Connections between different properties of hidden-variable models can be formulated in terms of logical entailment between the team-semantical formulae defining these properties and can thus be proved on a purely logical level. This may also be beneficial for formalising such proofs in an appropriate proof system or theorem prover.

Similarly, the existence of an empirically equivalent hidden-variable model that satisfies some combination of desirable properties corresponds to the satisfiability of a suitable formula by an extension of the team which represents a given empirical model.

On a purely logical level, we establish connections between probabilistic entailment and relational entailment of formulae from dependence and independence logic, which we believe to be of independent interest. Applying these connections to the entailment and satisfiability problems related to properties of hidden-variable models, we often need only one proof to establish both the probabilistic and the relational case because, depending on purely syntactic criteria, one case implies the other. This provides more general reasons why the relational variant of hidden-variable models is so closely related to its probabilistic counterpart, and further motivates Abramsky’s translation as a special case of a more general framework.

We further show that the famous ‘no-go’ theorems by Bell and Kochen-Specker can be formulated in a natural way in the team-semantical framework. In particular, we provide logical variants of the Kochen-Specker Theorem, highlighting the aspect of contextuality in a different way compared to the classical formulation, and discuss the related notion of non-contextual choice.

We remark that researchers studying logics of dependence and independence have been aware for some time of the possibility to use these logics for reasoning about hidden-variable models in quantum mechanics, and this idea was informally discussed in this research community on several occasions. However, when we started our work, no systematic study in this direction had been conducted yet.
Only recently, when this paper was almost finished, it was brought to our attention that Joni Puljujärvi and Jouko Väänänen at the University of Helsinki have simultaneously pursued a similar line of research [27].

We introduce the setup required to understand both relational and probabilistic hidden-variable models. We also define teams and explain how hidden-variable models can be cast as teams. Finally, we show that the structure of hidden-variable models is compatible with operators on the team side and thereby demonstrate that teams are very well suited to express hidden-variable phenomena.
A purely relational setting to speak about hidden-variable models was introduced by Abramsky [1].
Definition 2.1.
Let $M_1, \dots, M_n$ and $O_1, \dots, O_n$ be finite sets. We set $M = \prod_{i=1}^n M_i$ and $O = \prod_{i=1}^n O_i$. An arbitrary relation $e \subseteq M \times O$ is called an empirical model over $(M, O)$. In this setting, $n$ is called the arity of the system, $M$ the measurement set, and $O$ the set of outcomes.

We usually interpret such a system as having $n$ components which may, but need not, be separated in space. We interpret $M_i$ as the set of measurements that can be performed in component $i$ and $O_i$ as the set of potential outcomes in that component. The measurement-outcome pairs $(m, o) \in e$ are interpreted as being possible in the model. To make sure that the underlying sets $M$ and $O$ can be inferred from the empirical model, we tacitly assume that every value $m_i \in M_i$ and $o_j \in O_j$ appears in at least one tuple of the relation $e$; in database terms this means that $\bigcup_{i \le n} M_i \cup \bigcup_{i \le n} O_i$ coincides with the active domain of $e$.

Example 2.2.
Let us consider a system with 2 components, called Alice and Bob. Let $M_1 = \{a_1, a_2\}$ and $M_2 = \{b\}$. Thus, Alice has a choice between two measurements while Bob can perform only one. In this example, all measurements reveal a single bit of information, i.e. we let $O_1 = O_2 = \{+, -\}$. An example of an empirical model over $(M, O)$ is

  $e$          $(+,+)$  $(+,-)$  $(-,+)$  $(-,-)$
  $(a_1, b)$      1        0        0        1
  $(a_2, b)$      0        1        1        0

In this example, the outcome of Alice’s measurement is always identical to Bob’s if Alice chooses $a_1$ and always opposite to Bob’s if she chooses $a_2$.
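For concreteness, this empirical model can be written down as a plain relation. The following is a minimal sketch; the encoding and all names are ours, not notation from the paper.

```python
# Example 2.2 as a concrete relation e ⊆ M × O with M = M1 × M2, O = O1 × O2.
M1, M2 = {"a1", "a2"}, {"b"}
O1 = O2 = {"+", "-"}

# The possible measurement-outcome pairs ((m1, m2), (o1, o2)).
e = {
    (("a1", "b"), ("+", "+")), (("a1", "b"), ("-", "-")),  # identical bits under a1
    (("a2", "b"), ("+", "-")), (("a2", "b"), ("-", "+")),  # opposite bits under a2
}

# Sanity checks mirroring the table: Alice's bit equals Bob's under a1,
# and is flipped under a2.
assert all(o1 == o2 for ((m1, m2), (o1, o2)) in e if m1 == "a1")
assert all(o1 != o2 for ((m1, m2), (o1, o2)) in e if m1 == "a2")
```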
Definition 2.3. Let $M$ and $O$ be as in Definition 2.1, and let $\Lambda$ be a finite set. A hidden-variable model over $(M, O, \Lambda)$ is a relation $h \subseteq M \times O \times \Lambda$. The elements $\lambda \in \Lambda$ are called hidden variables.

The interpretation of hidden-variable models is similar to that of empirical models. The hidden variables are assumed to be unobservable parameters of the system that may influence outcomes. A triple $(m, o, \lambda) \in h$ means that the measurement-outcome pair $(m, o)$ is possible in the system when the hidden variable takes the value $\lambda$. Again, we tacitly assume that every $\lambda \in \Lambda$ appears in at least one tuple of $h$. The original motivation behind the introduction of hidden variables was that seemingly unintuitive phenomena on the empirical side in quantum mechanics might be explained by ignorance about the hidden variables and not by an intrinsically non-classical system. However, as we shall see later, certain non-classical phenomena remain even if hidden variables are introduced.

Hidden-variable models induce empirical models by projecting back onto the empirically observable parameters $M \times O$, giving rise to a notion of empirical equivalence, i.e. a notion of which systems we can distinguish by experiment and which we cannot.

Definition 2.4.
Let $h$ be a hidden-variable model over $(M, O, \Lambda)$. We call $e := \{(m, o) \in M \times O : (\exists \lambda \in \Lambda)\, (m, o, \lambda) \in h\}$ its induced empirical model, and say that $h$ and $e$ are empirically equivalent.

Now that we have defined relational models, we present them as teams.
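The projection in Definition 2.4 is a one-liner on finite models; a sketch with a toy model (names and encoding are ours):

```python
# Induced empirical model of a hidden-variable model h ⊆ M × O × Λ:
# project away the hidden value (Definition 2.4).
def induced_empirical_model(h):
    return {(m, o) for (m, o, lam) in h}

# Toy model: a hidden bit lam deterministically fixes a shared outcome.
h = {(("a1", "b"), ("+", "+"), 0), (("a1", "b"), ("-", "-"), 1)}
e = induced_empirical_model(h)
assert e == {(("a1", "b"), ("+", "+")), (("a1", "b"), ("-", "-"))}
```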
Definition 2.5.
A team is a set $X$ of assignments $s \colon D \to A$ with a common finite domain $D = \operatorname{dom}(X)$ of variables and values in a set $A$. For a tuple $x = (x_1, \dots, x_m)$ of variables in $D$, we write $X(x) := \{(s(x_1), \dots, s(x_m)) : s \in X\} \subseteq A^m$ for the set of values of $x$ in $X$. Thinking of an arbitrary but fixed enumeration of the finite domain of a team $X$ as $\operatorname{dom}(X) = \{x_1, \dots, x_k\}$, we often identify $X$ with its relational encoding $X(x) = \{s(x) : s \in X\} \subseteq A^k$ and assignments $s \in X$ with the corresponding tuples.

With that in mind, the interpretation of hidden-variable models as teams is very natural. To make sure that the underlying sets of measurements $M = \prod_{i=1}^n M_i$, outcomes $O = \prod_{i=1}^n O_i$, and hidden variables $\Lambda$ can be inferred from the team, we again assume that every $m_i \in M_i$, $o_j \in O_j$ and $\lambda \in \Lambda$ appears in at least one assignment of the team. Then we have a one-to-one correspondence between hidden-variable models and teams (and similarly for empirical models).
Definition 2.6.
A team $X$ over the variables $\mathrm{Var}^n_h := \{m_1, \dots, m_n, o_1, \dots, o_n, \lambda\}$ induces a hidden-variable model, denoted $h_X$, with $M_i := X(m_i)$, $O_i := X(o_i)$, and $\Lambda := X(\lambda)$, such that

$(a, b, c) \in h_X :\iff (\exists s \in X)\; s(m) = a,\ s(o) = b,\ \text{and}\ s(\lambda) = c.$

We denote by $e_X$ the empirical model that is induced by $h_X$. Analogously, empirical models $e \subseteq M \times O$ are represented by teams over $\mathrm{Var}^n_e := \{m_1, \dots, m_n, o_1, \dots, o_n\}$.

Systems arising from quantum mechanics are probabilistic in nature. Thus, the literature primarily investigates probabilistic hidden-variable models. A comprehensive discussion of such models can be found in [9]. However, the relevant definitions of probabilistic models and their properties are presented here in a somewhat different way, on the basis of our team-semantical framework.
Definition 2.7.
Let $(M, O)$ be as in Definition 2.1. A probability distribution $e_P \colon M \times O \to [0, 1]$ is called a probabilistic empirical model over $(M, O)$. Analogously, a probabilistic hidden-variable model is a probability distribution $h_P$ over $M \times O \times \Lambda$. We make use of standard notation for marginalization and conditionals when speaking about probabilistic models. For example, for a probabilistic hidden-variable model $h_P \colon M \times O \times \Lambda \to [0, 1]$, we call the marginalization $e_P \colon M \times O \to [0, 1]$ with $e_P(m, o) := h_P(m, o) = \sum_{\lambda \in \Lambda} h_P(m, o, \lambda)$ its induced empirical model. In this case, $h_P$ and $e_P$ are called empirically equivalent.

The intuition behind empirical equivalence is that empirically equivalent models cannot be distinguished by experiment, as they agree on the observable parameters of the system. For this purpose, the conditional distributions $h_P(o \mid m)$ are actually slightly more relevant than the joint probabilities $h_P(o, m)$, as they define the outcome distributions for fixed measurements; $h_P(m)$ has no meaningful interpretation without going into discussions about the experimenters’ free will. Thus, the literature considers a slightly different notion of empirical equivalence: [9] defines an empirical model $e_P$ and a hidden-variable model $h_P$ to be empirically equivalent if they agree on all suitable conditional probabilities, i.e. if $h_P(o \mid m) = e_P(o \mid m)$ always holds. We think that the definition we have chosen, i.e. requiring the slightly stronger $h_P(o, m) = e_P(o, m)$, is more natural and elegant for our purposes, since empirical equivalence reduces to marginal equivalence in the standard probability-theoretic sense. For all relevant results it makes no difference which choice is made. The probability distribution $h_P(m, o, \lambda)$ can be decomposed into $h_P(m)$, $h_P(\lambda \mid m)$ and $h_P(o \mid m, \lambda)$. For all properties of interest, only the latter two components matter.
Thus, we could adjust $h_P(m)$ to enforce agreement with the empirical model without harming the properties of $h_P$. In particular, all existence and non-existence results that we show later in this work are independent of this technical detail.

The connection between probabilistic and relational models is one of possibilistic collapse, i.e. we consider the set of tuples with probability greater than zero. This corresponds to our understanding of relational models, where we interpret $(m, o) \in e$ as the statement “it is possible in $e$ to obtain the measurement-outcome pair $(m, o)$”.

Definition 2.8.
A probabilistic empirical model $e_P$ over $(M, O)$ induces the relational model $e := \{(m, o) \in M \times O : e_P(m, o) > 0\}$. Analogously, probabilistic hidden-variable models induce relational hidden-variable models.

Again, we cast probabilistic models as teams, using the notion of probabilistic teams, which have been considered for instance in [14, 20, 21].
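Possibilistic collapse is equally direct to state in code; a sketch under our own encoding of probabilistic models as dicts:

```python
# Definition 2.8: the relational model induced by a probabilistic empirical
# model is the support of the distribution.
def possibilistic_collapse(e_P):
    return {mo for mo, p in e_P.items() if p > 0}

e_P = {
    (("a1", "b"), ("+", "+")): 0.5,
    (("a1", "b"), ("-", "-")): 0.5,
    (("a1", "b"), ("+", "-")): 0.0,   # probability-zero pairs disappear
}
assert possibilistic_collapse(e_P) == {(("a1", "b"), ("+", "+")),
                                       (("a1", "b"), ("-", "-"))}
```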
Definition 2.9. A probabilistic team is a pair $\mathbb{X} = (X, P_X)$, where $X$ is a relational team and $P_X \colon X \to (0, 1]$ is a probability distribution over $X$. $X$ is referred to as the underlying team of $\mathbb{X}$. We write $P$ instead of $P_X$ if the context is clear.

As in the relational case, there is a one-to-one correspondence between probabilistic hidden-variable models and probabilistic teams.
Definition 2.10.
A probabilistic team $\mathbb{X} = (X, P)$ over the variables $\mathrm{Var}^n_h = \{m_1, \dots, m_n, o_1, \dots, o_n, \lambda\}$ induces a probabilistic hidden-variable model $h_{\mathbb{X}}$ with $M_i := X(m_i)$, $O_i := X(o_i)$, and $\Lambda := X(\lambda)$, such that, for every $a \in M$, $b \in O$, and $c \in \Lambda$,

$h_{\mathbb{X}}(a, b, c) := \begin{cases} P(s) & \text{if } s(m, o, \lambda) = (a, b, c) \text{ for some } s \in X, \\ 0 & \text{if } (a, b, c) \notin X(m, o, \lambda). \end{cases}$

We denote the induced probabilistic empirical model by $e_{\mathbb{X}}$.

Team semantics admits elegant formulations of hidden-variable phenomena because the structure of hidden-variable and empirical models interplays nicely with operators on the team side. First, we observe the following relationship between probabilistic and relational hidden-variable models.
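As a concrete illustration of Definition 2.10, consider a single-component system ($n = 1$); the sketch below encodes a probabilistic team as a dict from assignment tuples $(m, o, \lambda)$ to probabilities (the encoding is ours):

```python
# A probabilistic team over {m, o, lam}, with the hidden bit lam
# determining the outcome.
P = {("a1", "+", 0): 0.5, ("a1", "-", 1): 0.5}

def h_X(a, b, c):
    # Definition 2.10: the induced probabilistic hidden-variable model.
    return P.get((a, b, c), 0.0)      # tuples outside the team get mass 0

def e_X(a, b):
    # Marginalising out lam yields the induced empirical model.
    return sum(p for (m, o, lam), p in P.items() if (m, o) == (a, b))

assert h_X("a1", "+", 0) == 0.5 and h_X("a1", "+", 1) == 0.0
assert e_X("a1", "+") == 0.5 and e_X("a1", "-") == 0.5
```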
Lemma 2.11.
Let $\mathbb{X} = (X, P)$ be a probabilistic team over $\mathrm{Var}^n_h$ and let $h_{\mathbb{X}}$ be the corresponding probabilistic hidden-variable model. Then $X$ is the team representation of the relational model $h$ induced by $h_{\mathbb{X}}$.

Indeed, $X$ is the underlying support team of $\mathbb{X}$, containing all assignments with non-zero probability, and this corresponds to the notion of possibilistic collapse. This holds, of course, also for empirical models. Further, we observe that the induced empirical model of a hidden-variable model corresponds to the restriction to $\mathrm{Var}^n_e$ on the team side.

Lemma 2.12.
Let $X$ be a relational team over $\mathrm{Var}^n_h$, let $h_X$ be its corresponding hidden-variable model and $e_X$ its induced empirical model. Then $X \restriction \mathrm{Var}^n_e$ is the team representation of $e_X$. Similarly, let $\mathbb{X}$ be a probabilistic team over $\mathrm{Var}^n_h$ representing the probabilistic hidden-variable model $h_{\mathbb{X}}$, and let $e_{\mathbb{X}}$ be its induced empirical model. Then $\mathbb{X} \restriction \mathrm{Var}^n_e$ is the team representation of $e_{\mathbb{X}}$.

This allows us to cast properties of the underlying model as properties of the hidden-variable model. We can also formulate a sort of converse.
Corollary 2.13.
Let $X$ be a team over $\mathrm{Var}^n_e$ with corresponding model $e_X$. A team $Y$ over $\mathrm{Var}^n_h$ represents an empirically equivalent hidden-variable model if, and only if, $Y \restriction \mathrm{Var}^n_e = X$. Analogously, let $\mathbb{X}$ be a probabilistic team over $\mathrm{Var}^n_e$. A probabilistic team $\mathbb{Y}$ over $\mathrm{Var}^n_h$ represents an empirically equivalent probabilistic hidden-variable model if, and only if, $\mathbb{Y} \restriction \mathrm{Var}^n_e = \mathbb{X}$.

All these correspondences are neatly summarised by the commutative diagram in Figure 1.
Figure 1: Compatibility of team and model structure (a commutative diagram relating $X$, $X \restriction \mathrm{Var}^n_e$, $\mathbb{X}$, $\mathbb{X} \restriction \mathrm{Var}^n_e$ to $h_X$, $e_X$, $h_{\mathbb{X}}$, $e_{\mathbb{X}}$).
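The restriction operation $X \restriction V$ used in Lemma 2.12, Corollary 2.13 and Figure 1 can be sketched as follows (teams as collections of dicts; the encoding and names are ours):

```python
# Restriction of a (relational) team to a subset V of its variables.
# Since a team is a *set* of assignments, assignments that differ only
# on the dropped variables collapse into one.
def restrict(team, V):
    return {tuple(sorted((v, s[v]) for v in V)) for s in team}

X = [{"m1": "a1", "o1": "+", "lam": 0},
     {"m1": "a1", "o1": "+", "lam": 1},   # differs from the first only on lam
     {"m1": "a2", "o1": "-", "lam": 0}]

# Dropping lam merges the first two assignments, so the restricted team
# has two assignments, matching the induced empirical model.
assert restrict(X, ["m1", "o1"]) == {(("m1", "a1"), ("o1", "+")),
                                     (("m1", "a2"), ("o1", "-"))}
```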
We now introduce the logical machinery that we need to reason about teams and the hidden-variable models that they represent. We first recall the main definitions for logics with relational and probabilistic team semantics. Then, we develop novel correspondences between the two frameworks that will allow us to unify relational and probabilistic hidden-variable models and enable us to develop their theory in parallel.
Modern logics of dependence and independence are based on atomic properties of teams and on an interpretation of the classical logical operators $\wedge, \vee, \exists$ and $\forall$ in the framework of teams. There are many atomic team properties that have been studied in the context of such logics. In this paper, we shall need only dependence, inclusion, and independence, which are defined as follows, for any team $X$, and for tuples $x, y, z$ of variables in the domain of $X$:

Dependence: $X \models \operatorname{dep}(x, y)$ if for all $s, s' \in X$ such that $s(x) = s'(x)$, also $s(y) = s'(y)$.

Inclusion: $X \models x \subseteq y$ if $X(x) \subseteq X(y)$.

Independence: $X \models x \perp_z y$ if for all $s, s' \in X$ such that $s(z) = s'(z)$, there exists some $s'' \in X$ such that $s''(z) = s(z)$, $s''(x) = s(x)$, and $s''(y) = s'(y)$.

Thus $\operatorname{dep}(x, y)$ describes functional dependence of $y$ on $x$, in the sense that there exists a function $f \colon X(x) \to X(y)$ such that $s(y) = f(s(x))$ for every assignment $s \in X$. Independence, however, is more than the absence of dependence. The atom $x \perp_z y$ expresses that for any fixed value of $z$ in the team, the values of $x$ and $y$ are completely independent, in the sense that additional information about the value of $x$ does not constrain the possible values of $y$ in any way, and vice versa. This variant of independence is sometimes called conditional independence. A simple independence atom instead has the form $x \perp y$. We then have that $X \models x \perp y$ if, and only if, $X(xy) = X(x) \times X(y)$, so any value for $x$ in the team co-exists with any value for $y$ in a common assignment in $X$.

Syntactically, dependence logic FO(dep) is built from dependence atoms $\operatorname{dep}(x, y)$ and first-order literals, i.e. atoms and negated atoms of a fixed vocabulary $\tau$, by the operators $\wedge, \vee, \exists$ and $\forall$. All formulae are written in negation normal form and negation is only applied to first-order atoms, not to dependence atoms.
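All three atoms are directly executable on finite teams. The following sketch (teams as lists of dicts, variable tuples as tuples of names; all function names are ours) checks them on the empirical model of Example 2.2, viewed as a team over $\{m_1, m_2, o_1, o_2\}$:

```python
from itertools import product

def val(s, xs):
    """Value tuple s(x) of a variable tuple xs under assignment s."""
    return tuple(s[x] for x in xs)

def dep(X, x, y):
    """X |= dep(x, y): s(x) = s'(x) forces s(y) = s'(y)."""
    return all(val(s, y) == val(t, y)
               for s, t in product(X, X) if val(s, x) == val(t, x))

def incl(X, x, y):
    """X |= x ⊆ y: every value of x also occurs as a value of y."""
    return {val(s, x) for s in X} <= {val(s, y) for s in X}

def indep(X, x, y, z):
    """X |= x ⊥_z y: for s, s' agreeing on z there is a witness s''
    with s''(z) = s(z), s''(x) = s(x), s''(y) = s'(y).
    Use z = () for the simple independence atom x ⊥ y."""
    return all(any(val(u, z + x + y) == val(s, z + x) + val(t, y) for u in X)
               for s, t in product(X, X) if val(s, z) == val(t, z))

# The empirical model of Example 2.2 as a team.
X = [{"m1": "a1", "m2": "b", "o1": "+", "o2": "+"},
     {"m1": "a1", "m2": "b", "o1": "-", "o2": "-"},
     {"m1": "a2", "m2": "b", "o1": "+", "o2": "-"},
     {"m1": "a2", "m2": "b", "o1": "-", "o2": "+"}]

assert dep(X, ("m1", "o1"), ("o2",))            # Alice's choice and bit fix Bob's bit
assert not dep(X, ("m1",), ("o1",))             # the choice alone does not fix o1
assert incl(X, ("o1",), ("o2",))                # both outcome slots take the same values
assert indep(X, ("m1",), ("o1",), ())           # marginally, choice and outcome vary freely
assert not indep(X, ("o1",), ("o2",), ("m1",))  # but the outcomes are correlated given m1
```

The last two assertions illustrate the remark above that independence is more than the absence of dependence: here $o_1$ is not determined by $m_1$, yet $o_1$ and $o_2$ are perfectly correlated once $m_1$ is fixed.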
Independence logic FO($\perp$) is built analogously, using independence atoms $x \perp_z y$ instead.

We now present the relational team semantics of FO(dep) and FO($\perp$). Recall that the traditional Tarski semantics for first-order formulae $\varphi(x)$ is based on single assignments $s$ whose domain must comprise the variables in $\operatorname{free}(\varphi)$; we write $\mathfrak{A} \models \varphi[s]$ for saying that $\mathfrak{A}$ satisfies $\varphi$ with the assignment $s$. The team semantics for a formula $\varphi(x)$ in a logic of dependence and independence is instead defined by inductive clauses for the satisfaction relation $\mathfrak{A} \models_X \varphi$, saying that the team $X$ satisfies $\varphi$ in $\mathfrak{A}$. Since the underlying structure $\mathfrak{A}$ is not of interest for the purposes of this paper, we shall simplify notation and simply write $X \models \varphi$ instead. For a classical first-order literal $\varphi$, i.e. an atom or its negation, we simply say that $X \models \varphi$ if $\varphi[s]$ is true in the underlying structure for all $s \in X$. To present the inductive rules of team semantics, we shall need two basic operations that (possibly) extend the domain of a given team $X$ (with values in $A$) to new variables.

Definition 3.1.
Given an assignment $s$, a variable $x$, and a value $a \in A$, we write $s[x \mapsto a]$ for the assignment that extends, or updates, $s$ by mapping $x$ to $a$ (and leaving the values of all other variables unchanged). The unrestricted generalisation of $X$ over $A$ is $X[x \mapsto A] := \{s[x \mapsto a] : s \in X, a \in A\}$, and the Skolem extension for a function $F \colon X \to \mathcal{P}(A) \setminus \{\emptyset\}$ is $X[x \mapsto F] := \{s[x \mapsto a] : s \in X, a \in F(s)\}$.

The following inductive rules then extend the team semantics of atomic team properties and first-order literals to arbitrary formulae in FO(dep) or FO($\perp$):

$X \models \varphi_1 \wedge \varphi_2$ if $X \models \varphi_i$ for $i = 1, 2$;
$X \models \varphi_1 \vee \varphi_2$ if $X = X_1 \cup X_2$ for two teams $X_i$ such that $X_i \models \varphi_i$;
$X \models \forall x \varphi$ if $X[x \mapsto A] \models \varphi$;
$X \models \exists x \varphi$ if $X[x \mapsto F] \models \varphi$, for some suitable Skolem extension of $X$ by a function $F \colon X \to \mathcal{P}(A) \setminus \{\emptyset\}$.

Since $\operatorname{dep}(x, y) \equiv y \perp_x y$, we can freely use dependence atoms in FO($\perp$), and it is known [16] that inclusion atoms $x \subseteq y$ are definable in FO($\perp$) as well. In fact, the logic FO(dep, $\subseteq$) with both inclusion and dependence atoms is equivalent to FO($\perp$), and it suffices to use simple independence atoms to get the full power of FO($\perp$).

To understand probabilistic hidden-variable phenomena, we make use of a probabilistic variant of team semantics. Our definitions are essentially the same as in [14, 20, 21], with deviations in some details. However, we stress the fact that we consciously select notations that invite jumping back and forth between probabilistic and relational semantics. In particular, we use precisely the same syntax for the probabilistic variants of FO(dep) and FO($\perp$), but we now evaluate the formulae over probabilistic teams $\mathbb{X} = (X, P_X)$. To define the probabilistic semantics for these logics, we first have to describe the meaning of first-order literals and of dependence and independence atoms.

For first-order literals $\varphi$, we let $\mathbb{X} \models \varphi$ if, and only if, $X \models \varphi$ in the relational sense.
$\mathbb{X} \models \operatorname{dep}(x, y)$ if for all $a \in X(x)$ there is a $b \in X(y)$ such that $P(y = b \mid x = a) = 1$. Note that this corresponds to a deterministic sense of dependence.

$\mathbb{X} \models x \perp_z y$ if we have conditional stochastic independence between $x$ and $y$ for any given $z$. Formally, this means that for all $a \in X(x)$, $b \in X(y)$, $c \in X(z)$,

$P(x = a, y = b \mid z = c) = P(x = a \mid z = c) \cdot P(y = b \mid z = c).$

The probabilistic team semantics of the logical operators generalises the relational semantics. To make this precise, we have to give appropriate definitions for the split, the generalisation, and the Skolem extension of a probabilistic team.
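The probabilistic independence atom is exactly the usual conditional-independence equation and can be checked numerically. A sketch in which a probabilistic team over three variable groups $(x, y, z)$ is a dict from value triples to probabilities (the encoding is ours); the equation is tested in the multiplicative form $P(x{=}a, y{=}b, z{=}c) \cdot P(z{=}c) = P(x{=}a, z{=}c) \cdot P(y{=}b, z{=}c)$ to avoid division:

```python
from itertools import product

def prob_indep(P, tol=1e-12):
    """Check x ⊥_z y: P(x=a, y=b | z=c) = P(x=a | z=c) * P(y=b | z=c)."""
    def marg(pred):
        return sum(p for key, p in P.items() if pred(key))
    xs = {k[0] for k in P}; ys = {k[1] for k in P}; zs = {k[2] for k in P}
    for a, b, c in product(xs, ys, zs):
        p_z   = marg(lambda k: k[2] == c)
        p_xyz = marg(lambda k: k == (a, b, c))
        p_xz  = marg(lambda k: k[0] == a and k[2] == c)
        p_yz  = marg(lambda k: k[1] == b and k[2] == c)
        if abs(p_xyz * p_z - p_xz * p_yz) > tol:
            return False
    return True

# Two independent fair bits (z constant): the atom holds.
assert prob_indep({(x, y, 0): 0.25 for x, y in product([0, 1], [0, 1])})
# Perfectly correlated bits: the atom fails.
assert not prob_indep({(0, 0, 0): 0.5, (1, 1, 0): 0.5})
```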
Definition 3.2.
Let $\mathbb{X} = (X, P_X)$ be a probabilistic team with $A$ as its set of values, and let $\Delta(A)$ denote the set of all probability distributions over $A$. Consider a function $F \colon X \to \Delta(A)$ that maps every $s \in X$ to a probability distribution $F_s \in \Delta(A)$ on $A$. It induces a function $\tilde{F} \colon X \to \mathcal{P}(A) \setminus \{\emptyset\}$ via the supports of the distributions $F_s$, i.e. by setting $\tilde{F}(s) := \{a \in A : F_s(a) > 0\}$. Then, the Skolem extension $\mathbb{X}[x \mapsto F]$ is defined as the probabilistic team over $X[x \mapsto \tilde{F}]$ with distribution

$P(s[x \mapsto a]) := \sum_{t \in X,\; t[x \mapsto a] = s[x \mapsto a]} P_X(t) \cdot F_t(a).$

Note that the right-hand side simplifies to $P_X(s) \cdot F_s(a)$ if $x$ is a new variable. Intuitively, this means that the probability mass of $s$ is split over multiple extensions $s[x \mapsto a]$, where the proportion assigned to each $a$ is given by $F_s(a)$. Thus, we can interpret $F_s(a)$ as the probability of the event $x = a$ conditional on agreement with $s$ on $\operatorname{dom}(s)$. The function $\tilde{F}$ tells us how we have to extend the underlying relational support team.

Note that marginalising a Skolem extension by $F$ gives back the original team. More formally, for $\mathbb{X} = (X, P)$ with $x \notin \operatorname{dom}(X)$ and a Skolem extension $\mathbb{Y} = \mathbb{X}[x \mapsto F]$, it holds that $\mathbb{X} = \mathbb{Y} \restriction \operatorname{dom}(X)$.

Definition 3.3.
The uniform extension $\mathbb{X}[x \mapsto A]$ of $\mathbb{X}$ is the special case of a Skolem extension $\mathbb{X}[x \mapsto F]$ where $F$ maps every $s \in X$ to the uniform distribution over $A$. Explicitly, we get the distribution

$P(s[x \mapsto a]) = \frac{1}{|A|} \sum_{t \in X,\; t[x \mapsto a] = s[x \mapsto a]} P_X(t).$

Again, this simplifies to $P(s[x \mapsto a]) = P_X(s) / |A|$ if $x$ is a new variable. This corresponds to a uniform split of the probability mass of $s$ over all $|A|$ extensions.
Definition 3.4. Let $\mathbb{X}$ be a probabilistic team with $X$ as its underlying team. We now define the probabilistic team semantics of the logical operators as follows:

$\mathbb{X} \models \varphi_1 \wedge \varphi_2$ if $\mathbb{X} \models \varphi_i$ for $i = 1, 2$. Thus, conjunctions are defined in the straightforward way, as in relational team semantics.

$\mathbb{X} \models \varphi_1 \vee \varphi_2$ if there are probabilistic teams $\mathbb{X}_1, \mathbb{X}_2$ and $\lambda \in [0, 1]$ with $X = X_1 \cup X_2$ and $P_{\mathbb{X}} = (1 - \lambda) P_{\mathbb{X}_1} + \lambda P_{\mathbb{X}_2}$ such that $\mathbb{X}_i \models \varphi_i$ for $i = 1, 2$. In other words, this means that we can split $\mathbb{X}$ into a convex combination of two teams that satisfy the formulae $\varphi_i$.

$\mathbb{X} \models \forall x \varphi$ if $\mathbb{X}[x \mapsto A] \models \varphi$, i.e. if the uniform extension of $\mathbb{X}$ satisfies $\varphi$.

$\mathbb{X} \models \exists x \varphi$ if $\mathbb{X}[x \mapsto F] \models \varphi$ for some $F \colon X \to \Delta(A)$, i.e. if a suitable Skolem extension of $\mathbb{X}$ satisfies $\varphi$.

The existential quantifier is the most relevant for our purposes, and we want to provide more intuition for it. For a variable $x \notin \operatorname{dom}(X)$, $\mathbb{X} \models \exists x \varphi$ expresses the existence of another probabilistic team $\mathbb{Y}$ over $\operatorname{dom}(X) \cup \{x\}$ such that $\mathbb{Y}$ satisfies $\varphi$ and $\mathbb{Y}$ marginalises to $\mathbb{X}$ if restricted to $\operatorname{dom}(X)$. This means that for all $s \in X$ we have $P_{\mathbb{X}}(s) = \sum_{a \in A} P_{\mathbb{Y}}(s[x \mapsto a])$.

We now compare probabilistic and relational team semantics. We start with the satisfaction relation. Note that it is essential for the formulation of this theorem that we use the same syntax for both variants.
Theorem 3.5.
Let $\mathbb{X}$ be a probabilistic team and $X$ its underlying team. For every formula $\psi \in$ FO($\perp$), we have that $\mathbb{X} \models \psi$ implies $X \models \psi$. For every $\varphi \in$ FO(dep), we also have the converse, so that $\mathbb{X} \models \varphi$ if, and only if, $X \models \varphi$.

Proof.
We proceed by induction.

For first-order literals $\varphi$, we obviously have that $\mathbb{X} \models \varphi$ if, and only if, $X \models \varphi$.

Assume that $\mathbb{X} \models x \perp_z y$. Now let $s, s' \in X$ with $s(z) = s'(z) = c$. With $a = s(x)$ and $b = s'(y)$ we get that

$P(x = a, y = b \mid z = c) = P(x = a \mid z = c) \cdot P(y = b \mid z = c) > 0,$

since the first factor is at least $P(s) > 0$ and the second factor is at least $P(s') > 0$. Since $X$ is the support team of $\mathbb{X}$, there exists $s'' \in X$ with $s''(x, y, z) = (a, b, c)$.

For dependence atoms, the implication from left to right follows from the fact that dependence atoms can be understood as special cases of independence atoms; the other direction is straightforward, since $X$ is defined to be the support of $P$.

It remains to be shown that the logical operators preserve the implications in both directions. This is obvious for conjunctions.

For $\mathbb{X} \models \psi \vee \vartheta$, we choose $\mathbb{Y} \models \psi$ and $\mathbb{Z} \models \vartheta$ with $\mathbb{X} = (1 - \lambda)\mathbb{Y} + \lambda\mathbb{Z}$, so that $X = Y \cup Z$. By the induction hypothesis, $Y \models \psi$ and $Z \models \vartheta$, hence $X \models \psi \vee \vartheta$. Conversely, given a probabilistic team $\mathbb{X}$ and a decomposition $X = Y \cup Z$ of the underlying team with $Y \models \psi$ and $Z \models \vartheta$, it is straightforward to construct $\mathbb{Y}$ and $\mathbb{Z}$ with underlying teams $Y$ and $Z$ such that $P_{\mathbb{X}} = (1 - \lambda) P_{\mathbb{Y}} + \lambda P_{\mathbb{Z}}$ for some $\lambda \in [0, 1]$, $\mathbb{Y} \models \psi$ and $\mathbb{Z} \models \vartheta$, hence $\mathbb{X} \models \psi \vee \vartheta$.

For formulae $\forall x \psi$, the induction step follows from the fact that the underlying team of $\mathbb{X}[x \mapsto A]$ is $X[x \mapsto A]$.

If $\mathbb{X} \models \exists x \psi$ by means of $F \colon X \to \Delta(A)$, we put $\tilde{F}(s) = \{a \in A : F_s(a) > 0\}$ to get the Skolem extension $X[x \mapsto \tilde{F}]$ as the underlying team of $\mathbb{X}[x \mapsto F]$, so $X[x \mapsto \tilde{F}] \models \psi$ and hence $X \models \exists x \psi$. Conversely, let $X \models \exists x \psi$, hence $X[x \mapsto F] \models \psi$ for some $F \colon X \to \mathcal{P}(A) \setminus \{\emptyset\}$. Let $F' \colon X \to \Delta(A)$ be the function that maps every $s \in X$ to the uniform distribution over $F(s)$. Then $\mathbb{X}[x \mapsto F']$ has $X[x \mapsto F]$ as underlying team and the induction hypothesis is applicable.

We next address the matter of logical entailment.
Definition 3.6.
Let ψ, ϕ ∈ FO(⊥). We write ψ ⊨rel ϕ if for all suitable relational teams X with X ⊨ ψ also X ⊨ ϕ holds. Analogously, we define ψ ⊨prob ϕ as entailment according to probabilistic team semantics. If both ψ ⊨rel ϕ and ψ ⊨prob ϕ hold, we write ψ ⊨all ϕ.

Results that are analogous to the theorem below have been established for databases but apparently not applied to teams so far.
Theorem 3.7.
1. For formulae in FO(dep), ⊨rel and ⊨prob coincide.
2. If ψ ∈ FO(dep) and ϕ ∈ FO(⊥), then ψ ⊨prob ϕ ⟹ ψ ⊨rel ϕ.
3. If ψ ∈ FO(⊥) and ϕ ∈ FO(dep), then ψ ⊨rel ϕ ⟹ ψ ⊨prob ϕ.
4. For ϕ, ψ ∈ FO(⊥), neither implication holds in general. Conjunctions of conditional independence atoms suffice to generate counterexamples.

Proof.
Notice that (2) and (3) follow immediately from Theorem 3.5 and that (1) follows by combining (2) and (3), since FO(dep) ⊆ FO(⊥). It remains to show (4). We adapt formulae and counterexamples from [31].

We first show that, in general, ψ ⊨rel ϕ does not imply ψ ⊨prob ϕ. For this, let ψ = z ⊥_x w ∧ z ⊥_y w ∧ x ⊥_{wz} y and ϕ = z ⊥_{xy} w. The intuition behind ψ ⊨rel ϕ is that we can combine z ⊥_x w and z ⊥_y w to get z ⊥_{xy} w by using x ⊥_{wz} y to enforce agreement on xy.

Let X ⊨ ψ and s_1, s_2 ∈ X with (a, b) = s_1(x, y) = s_2(x, y). Further, let c := s_1(z) and d := s_2(w). Since X ⊨ z ⊥_x w, there exists s_3 ∈ X with s_3(x) = a and s_3(z, w) = (c, d). Similarly, we obtain s_4 ∈ X with s_4(y) = b and s_4(z, w) = (c, d) via X ⊨ z ⊥_y w. By applying X ⊨ x ⊥_{wz} y to s_3 and s_4, we finally obtain s_5 ∈ X with s_5(x, y, z, w) = (a, b, c, d), which shows that X ⊨ ϕ. Thus, ψ ⊨rel ϕ. However, it is straightforward to verify that the probabilistic team given below satisfies ψ but not ϕ.

[Table omitted: two counterexample teams over x, y, z, w, the left one probabilistic (witnessing ψ ⊭prob ϕ), the right one relational (witnessing ψ ⊭rel ϕ); their entries were lost in extraction.]

To prove that ψ ⊨prob ϕ does not necessarily imply that ψ ⊨rel ϕ, we use the fact from [28, equation (A3), p. 15] that ψ ⊨prob ϕ, for ψ = x ⊥_{yz} y ∧ z ⊥_x w ∧ z ⊥_y w ∧ x ⊥ y and ϕ = z ⊥ w (proven in a different context with techniques employing measure theory and information theory). On the other side, a team proving that ψ ⊭rel ϕ is given above (to the right). ◀

We remark that if ϕ, ψ ∈ FO(⊥) are conjunctions of simple independence atoms, then ψ ⊨rel ϕ ⟺ ψ ⊨prob ϕ. This follows by observing that the proof calculi provided in the literature, see [17] for the relational and [18] for the probabilistic case, coincide.

We now define and investigate properties of hidden-variable models by means of dependence and independence logic.
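The difference between relational and probabilistic satisfaction of independence atoms can be made concrete computationally. The following sketch is our own illustration, not from the paper: the encoding of teams as dicts, the variable names, and the example weighting are all assumptions.

```python
from itertools import product

# A relational team is a list of assignments (dicts); a probabilistic team
# is a list of (assignment, probability) pairs. This encoding is our own.

def support(pteam):
    """Underlying (support) team of a probabilistic team."""
    return [s for s, p in pteam if p > 0]

def rel_ci(team, x, y, z):
    """Relational independence atom x ⊥_z y: any two assignments agreeing
    on z can be combined into one agreeing with the first on xz and with
    the second on y."""
    for s, t in product(team, repeat=2):
        if all(s[v] == t[v] for v in z):
            want = {**{v: s[v] for v in x + z}, **{v: t[v] for v in y}}
            if not any(all(u[v] == w for v, w in want.items()) for u in team):
                return False
    return True

def prob_ci(pteam, x, y, z):
    """Probabilistic independence atom x ⊥_z y:
    P(xz) · P(yz) = P(xyz) · P(z) for all value combinations."""
    def marginal(vs):
        m = {}
        for s, p in pteam:
            k = tuple(s[v] for v in vs)
            m[k] = m.get(k, 0.0) + p
        return m
    pz, pxz = marginal(z), marginal(x + z)
    pyz, pxyz = marginal(y + z), marginal(x + y + z)
    for kxz, pa in pxz.items():
        for kyz, pb in pyz.items():
            if kxz[len(x):] != kyz[len(y):]:
                continue  # different values for z
            kz = kxz[len(x):]
            joint = pxyz.get(kxz[:len(x)] + kyz[:len(y)] + kz, 0.0)
            if abs(joint * pz[kz] - pa * pb) > 1e-9:
                return False
    return True

# A biased weighting of a full relational team over x, y: the atom x ⊥ y
# holds relationally on the support but fails probabilistically, so the
# converse of Theorem 3.5 fails for independence atoms.
biased = [({'x': 0, 'y': 0}, 0.4), ({'x': 0, 'y': 1}, 0.1),
          ({'x': 1, 'y': 0}, 0.1), ({'x': 1, 'y': 1}, 0.4)]
```

Here `rel_ci(support(biased), ('x',), ('y',), ())` holds while `prob_ci(biased, ...)` fails; with uniform weights 0.25 the probabilistic atom holds as well, in line with Theorem 3.5.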
In the literature, many properties of hidden-variable and empirical models are investigated. Given that these models can be seen as (relational or probabilistic) teams, we can define properties of such models by logical formulae that are evaluated over such teams. Actually, since natural properties are defined for models of arbitrary arity, their logical definition is not given by a single formula, but by a uniform family of such formulae, one for each arity. That is, for a property P of probabilistic hidden-variable models, we present a family of formulae ψ_n ∈ FO(⊥), with free variables in Var^h_n, such that for any probabilistic team 𝕏 that represents a hidden-variable model h_𝕏 of arity n, we have that h_𝕏 has property P if, and only if, 𝕏 ⊨ ψ_n. Similarly for empirical models, and for the relational case.

Actually, it is a fundamental observation that for all elementary properties, syntactically identical formulae work for the relational and the probabilistic case simultaneously. Also, our formulae are in fact very simple: essentially just conjunctions of atoms. This is quite elegant and another indication that team semantics is very well suited to express hidden-variable phenomena.

Properties of empirical models
We start with presenting three fundamental properties that empirical models may or may not have. Note that in our formulae, the role of n is hidden in notations such as m for m_1 . . . m_n and o for o_1, . . . , o_n. The following notation is helpful: for a tuple z = (z_1 . . . z_n), let z_{−i} = (z_1 . . . z_{i−1} z_{i+1} . . . z_n).

Weak Determinism:
An empirical model is weakly deterministic if the combined measurements m in all components deterministically determine the outcome o of all components. However, this property does not forbid that the outcome of the i-th component depends on the measurement choice in components other than i. This property is defined by

WeakDet^e_n := dep(m, o).

Strong Determinism:
If we strengthen Weak Determinism and require that the outcome of the i-th component is uniquely determined by the measurement choice in that component, we obtain the notion of Strong Determinism, defined by

StrongDet^e_n := ⋀_{i=1}^n dep(m_i, o_i).

No-Signalling:
This property is a bit more complicated. Roughly speaking, the idea is to formalize that no information can be transmitted to component i by the choice of measurement in components other than i. Thus, the outcome of the i-th component (which for example Alice receives) may, conditional on her choice of measurement, not depend on other measurements. An illustration of this point is given in Example 4.1 below. No-Signalling is closely related to the famous No-Communication Theorem in quantum mechanics. Formally,

NoSig^e_n := ⋀_{i=1}^n o_i ⊥_{m_i} m_{−i}.

It may not be completely obvious why the formulae NoSig^e_n have the intended meaning. Intuitively, an independence atom x ⊥_z y expresses that, conditional on knowing z, getting information about x does not give additional information about y. Thus, NoSig^e_n states that if m_i is known, the other measurements m_{−i} do not give additional information about o_i. Example 4.1.
To illustrate how a violation of No-Signalling allows communication, consider the empirical model defined by the following team X:

m_1  m_2  o_1  o_2
a    b_1  +    −
a    b_2  −    −

In this case, X ⊭ o_1 ⊥_{m_1} m_2, which allows Bob to instantly transmit information to Alice by the choice of his measurement. If Bob makes measurement b_1, Alice receives a + in her experiment; if he chooses b_2, Alice receives a −. In this configuration, Bob can send a full bit of information to Alice by means of his measurement choice; No-Signalling is rightfully violated. Also note that this example satisfies Weak Determinism but not Strong Determinism.

It is a relevant feature of team semantics, often called locality, that the meaning of a formula depends only on those variables that occur free in it. Thus, even if we evaluate a formula over a hidden-variable model, we can express properties of the underlying empirical model (if the variable λ is not used in the formula). Properties of the underlying empirical model can therefore be cast as properties of the hidden-variable model itself. This is made precise in the following proposition. Proposition 4.2.
Let 𝕏 be a probabilistic team over Var^h_n, and let ψ_e ∈ FO(⊥) be a formula over Var^e_n that formalizes a property P of probabilistic empirical models. Then 𝕏 ⊨ ψ_e if, and only if, e_𝕏 has property P.

Proof.
By locality, 𝕏 ⊨ ψ_e is equivalent to 𝕏 ↾ Var^e_n ⊨ ψ_e. By Lemma 2.12, 𝕏 ↾ Var^e_n is the team representation of e_𝕏. ◀

By the same argument, the analogous statement for relational teams and models also holds.
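The three empirical-model properties above are directly checkable on small relational teams. In the following sketch (our own illustration; the dict encoding, the variable names `m1`, `o1`, ..., and the outcome symbols are assumptions, not the paper's), we verify the claims of Example 4.1 about the signalling team:

```python
from itertools import product

def rel_dep(team, xs, ys):
    """Relational dependence atom dep(xs, ys): assignments that agree on
    xs must also agree on ys."""
    return not any(
        all(s[v] == t[v] for v in xs) and any(s[v] != t[v] for v in ys)
        for s, t in product(team, repeat=2))

def rel_ci(team, x, y, z):
    """Relational independence atom x ⊥_z y."""
    for s, t in product(team, repeat=2):
        if all(s[v] == t[v] for v in z):
            want = {**{v: s[v] for v in x + z}, **{v: t[v] for v in y}}
            if not any(all(u[v] == w for v, w in want.items()) for u in team):
                return False
    return True

def weak_det(team, n):    # WeakDet^e_n := dep(m, o)
    return rel_dep(team, [f'm{i}' for i in range(1, n + 1)],
                   [f'o{i}' for i in range(1, n + 1)])

def strong_det(team, n):  # StrongDet^e_n := conjunction of dep(m_i, o_i)
    return all(rel_dep(team, [f'm{i}'], [f'o{i}']) for i in range(1, n + 1))

def no_sig(team, n):      # NoSig^e_n := conjunction of o_i ⊥_{m_i} m_{-i}
    return all(
        rel_ci(team, (f'o{i}',),
               tuple(f'm{j}' for j in range(1, n + 1) if j != i),
               (f'm{i}',))
        for i in range(1, n + 1))

# The signalling team of Example 4.1 (outcome symbols chosen by us):
signalling = [{'m1': 'a', 'm2': 'b1', 'o1': '+', 'o2': '-'},
              {'m1': 'a', 'm2': 'b2', 'o1': '-', 'o2': '-'}]
```

As claimed in Example 4.1, this team satisfies Weak Determinism but violates Strong Determinism and No-Signalling.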
Properties of hidden-variable models
We now show, in a similar fashion, that common properties of hidden-variable models are definable in dependence and independence logic. The reader may want to compare our formulae with the definitions in [1, 9]. Except in the case of Locality (which we shall discuss separately), it is straightforward to see that our definitions capture the properties from the literature. We feel that our definitions by formulae of team semantics are more compact and precise, and that they highlight the essential features better than an explicit unrolling of the semantics of dependence and independence atoms in every instance.
Weak Determinism:
For hidden-variable models, Weak Determinism means that for every fixed value of the hidden variable, the chosen measurements in all components determine the outcomes. It is essential that the outcomes may also depend on the hidden variable, which allows explaining non-deterministic empirical models by deterministic hidden-variable models with more fine-grained internal states. Formally,

WeakDet^h_n := dep(mλ, o).

Strong Determinism:
Strong Determinism strengthens Weak Determinism by requiring that for every fixed value of the hidden variable, the measurement in the i-th component determines the outcome in the i-th component. Formally,

StrongDet^h_n := ⋀_{i=1}^n dep(m_i λ, o_i).

Single-Valuedness:
A hidden-variable model is single-valued if the hidden-variable set Λ has only one element. Such a model is essentially just an empirical model, because a single-valued variable does not provide any additional freedom.

SingVal^h_n := dep(−, λ).

λ-Independence: This formalizes the idea that the measurement process and the hidden variable should be independent. It is related to the intuition that the physical reality we measure shall be independent of the experimenters' choices. A particularly pathological counterexample to λ-Independence would be a system where the hidden variable of the system uniquely determines which measurements will be done by the experimenters.

λ-Indep^h_n := m ⊥ λ.

Outcome Independence:
This property states that outcomes of measurements in different components shall be independent from each other when conditioning on mλ. This means that when we perform the same measurements in the same hidden states, the outcomes of the various components shall not provide information about each other. Note that violation of this property is related to the phenomenon of quantum entanglement, where we obtain correlations between outcomes even in the case of spatially separated particles.

Out-Indep^h_n := ⋀_{i=1}^n o_i ⊥_{mλ} o_{−i}.

Parameter Independence:
This is essentially the hidden-variable analogue of No-Signalling. For fixed m_i λ, the outcome o_i of component i shall be independent of the measurements taken in the other components:

Par-Indep^h_n := ⋀_{i=1}^n o_i ⊥_{m_i λ} m_{−i}.

Locality:
A hidden-variable model is local if all components are independent of each other. Roughly speaking, to understand a system satisfying Locality, one only needs to understand every component and piece them together independently. We claim that this is adequately expressed by the conjunction of Parameter Independence and Outcome Independence:

Loc^h_n := Out-Indep^h_n ∧ Par-Indep^h_n.

Heuristically speaking, Outcome Independence gives independence of the i-th component from the other outcomes, while Parameter Independence gives independence from the other measurements. Put together, they give full independence from the other components. This argument is made precise below. Example 4.3.
We adapt Example 4.1 to a hidden-variable model satisfying Single-Valuedness:

m_1  m_2  o_1  o_2  λ
a    b_1  +    −    λ_0
a    b_2  −    −    λ_0

The resulting model satisfies neither Parameter Independence nor Strong Determinism, but it does satisfy Weak Determinism and Outcome Independence.

An intuitive and precise semantic characterization of Locality is shown in the following two lemmata, for the relational and the probabilistic case respectively.
Lemma 4.4.
Let X be a relational team of arity n over Var^h_n. Then X ⊨ Loc^h_n if, and only if, for every tuple (a, c) ∈ X(m, λ) and for all b_1, . . . , b_n with b_i ∈ X(o_i) the following condition (*) holds: if there is an assignment s_i ∈ X with s_i(m_i, o_i, λ) = (a_i, b_i, c) for all i, then there is an s ∈ X with s(m, o, λ) = (a, b, c).

Proof.
Assume that X ⊨ Loc^h_n, i.e., X ⊨ Out-Indep^h_n ∧ Par-Indep^h_n. Let (a, c) ∈ X(m, λ) and b_i ∈ X(o_i) for every i ≤ n. Let s_i ∈ X with s_i(m_i, o_i, λ) = (a_i, b_i, c). We need to construct some t ∈ X with t(m, o, λ) = (a, b, c). Since (a, c) ∈ X(m, λ), there exists s̃ ∈ X with s̃(m, λ) = (a, c). By applying Parameter Independence to s_i and s̃ we get s̃_i ∈ X which fulfills s̃_i(m, λ) = (a, c) and s̃_i(o_i) = b_i. By inductively applying Outcome Independence, we can now construct a sequence t_1, . . . , t_n with t_i(m, λ) = (a, c) and t_i(o_1, . . . , o_i) = (b_1, . . . , b_i). Choosing t := t_n completes the argument for the first implication.

For the converse, suppose that X satisfies condition (*). We prove that X ⊨ Out-Indep^h_n. Let s_1, s_2 ∈ X with s_1(m, λ) = s_2(m, λ) and let i ≤ n. Condition (*) immediately gives that s_3, given by s_3(m, λ) = s_1(m, λ), s_3(o_i) = s_1(o_i), and s_3(o_{−i}) = s_2(o_{−i}), is indeed in X; this proves Outcome Independence. Parameter Independence is shown similarly. ◀

Condition (*) in Lemma 4.4 is a common alternative definition of Locality. In words, it essentially states that the question whether a measurement-outcome pair (a, b) is possible for a fixed value of λ reduces to whether the measurement-outcome pair (a_i, b_i) is possible in every component i. However, a slight technical complication is that we require this only for measurements a that are possible for the fixed value of λ: we explicitly do not require that (a_i, c) ∈ X(m_i, λ) for all i implies (a, c) ∈ X(m, λ). The latter condition is called Measurement Locality in [1]; it is rather natural but not entailed by the usual definition of Locality. Example 4.5.
Consider the hidden-variable model given by the following team: [table omitted: six assignments over m_1, m_2, o_1, o_2, λ, with measurements ab and ac appearing for both hidden values; the outcome entries were lost in extraction]. This model does not satisfy Locality: for one value of the hidden variable, it is possible that a results in 0 and that b results in 1; however, the corresponding combined assignment is not an element of the team, even though (ab, λ) ∈ X(m, λ). Thus, Locality is falsified according to the previous lemma. Note that, in this instance, Parameter Independence is violated, since the choice of measurement in the second component influences the possible results in the first component. However, Outcome Independence holds. Also, this team satisfies λ-Independence, since all possible measurements, i.e. ab and ac, appear for both values of the hidden variable.

In the probabilistic case, Locality means that the probability factors over the various components. This is, in a sense, a quantitative analogue of the qualitative statement from the previous lemma. Lemma 4.6.
Let 𝕏 = (X, P) be a probabilistic team over Var^h_n. Then 𝕏 ⊨ Loc^h_n if, and only if, the following condition (**) holds: for all (a, c) ∈ X(m, λ) and all b_1, . . . , b_n with b_i ∈ X(o_i),

P(o = b | m = a, λ = c) = ∏_{i=1}^n P(o_i = b_i | m_i = a_i, λ = c).

Proof.
Assume that 𝕏 ⊨ Loc^h_n. Let (a, c) ∈ X(m, λ) and b_i ∈ X(o_i) for i ≤ n. To simplify notation, let o^k := o_1 . . . o_k and b^k := b_1 . . . b_k. We apply induction over k to obtain

P(o^{k+1} = b^{k+1} | m = a, λ = c)
  = P(o_{k+1} = b_{k+1} | o^k = b^k, m = a, λ = c) · P(o^k = b^k | m = a, λ = c)
  = P(o_{k+1} = b_{k+1} | m = a, λ = c) · ∏_{i=1}^{k} P(o_i = b_i | m = a, λ = c)
  = ∏_{i=1}^{k+1} P(o_i = b_i | m = a, λ = c) = ∏_{i=1}^{k+1} P(o_i = b_i | m_i = a_i, λ = c),

by applying Outcome Independence, Parameter Independence, and the induction hypothesis. Setting k + 1 = n completes the argument.

For the converse, let 𝕏 satisfy condition (**). We first show that 𝕏 ⊨ Par-Indep^h_n:

P(o_i = b_i | m = a, λ = c) = Σ_{b_{−i}} P(o = b | m = a, λ = c)
  = Σ_{b_{−i}} ∏_{j=1}^{n} P(o_j = b_j | m_j = a_j, λ = c)
  = P(o_i = b_i | m_i = a_i, λ = c) · ∏_{j≠i} Σ_{b_j} P(o_j = b_j | m_j = a_j, λ = c)
  = P(o_i = b_i | m_i = a_i, λ = c).

Outcome Independence is then shown by the following equation for all i:

P(o_i = b_i, o_{−i} = b_{−i} | m = a, λ = c) = ∏_{j=1}^{n} P(o_j = b_j | m_j = a_j, λ = c)
  = P(o_i = b_i | m_i = a_i, λ = c) · ∏_{j≠i} P(o_j = b_j | m_j = a_j, λ = c)
  = P(o_i = b_i | m_i = a_i, λ = c) · P(o_{−i} = b_{−i} | m_{−i} = a_{−i}, λ = c)
  = P(o_i = b_i | m = a, λ = c) · P(o_{−i} = b_{−i} | m = a, λ = c),

where the last equality follows by Parameter Independence. ◀

Unifying Relational and Probabilistic Properties.
A major advantage of our framework is that it gives a fundamental reason why properties of probabilistic models carry over to the underlying relational models. This has been observed by Abramsky [1] for the particular properties discussed there (and here), but needed to be shown in every specific instance. Our approach gives a general theoretical reason for this behavior: by Theorem 3.5, it is an immediate consequence of the fact that the properties are defined by the same syntactic formulae in independence logic.
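On the probabilistic side, the factorization condition (**) of Lemma 4.6 is easy to test numerically. The sketch below is our own illustration (the dict-based team encoding, the variable names, and the helper functions are assumptions, not the paper's):

```python
from itertools import product

def cond_prob(pteam, event, given):
    """P(event | given) in a probabilistic team, given as a list of
    (assignment, probability) pairs."""
    def matches(s, cond):
        return all(s[v] == w for v, w in cond.items())
    den = sum(p for s, p in pteam if matches(s, given))
    num = sum(p for s, p in pteam if matches(s, given) and matches(s, event))
    return num / den

def factorizes(pteam, n):
    """Condition (**) of Lemma 4.6: conditional on every (m, λ) pair in the
    support, the outcome distribution is the product of the per-component
    conditionals P(o_i = b_i | m_i = a_i, λ = c)."""
    ms = [f'm{i}' for i in range(1, n + 1)]
    os = [f'o{i}' for i in range(1, n + 1)]
    outcomes = [sorted({s[o] for s, _ in pteam}) for o in os]
    for ml in {tuple(s[v] for v in ms + ['lam']) for s, _ in pteam}:
        given = dict(zip(ms + ['lam'], ml))
        for b in product(*outcomes):
            lhs = cond_prob(pteam, dict(zip(os, b)), given)
            rhs = 1.0
            for i in range(n):
                rhs *= cond_prob(pteam, {os[i]: b[i]},
                                 {ms[i]: given[ms[i]], 'lam': given['lam']})
            if abs(lhs - rhs) > 1e-9:
                return False
    return True
```

A team whose outcomes behave like independent coins factorizes, while perfectly correlated outcomes violate (**), mirroring the entanglement remark under Outcome Independence.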
Proposition 4.7.
Let ψ ∈ FO(⊥) capture a property P of both probabilistic and relational models. If a probabilistic model satisfies P, its underlying relational model also satisfies P. For ψ ∈ FO(dep), the converse also holds.
In particular, this holds for every single property defined in Section 4.1.
Corollary 4.8.
If a probabilistic hidden-variable model satisfies any of the elementary properties defined above (e.g. λ-Independence, Locality, No-Signalling, . . . ), its underlying relational model also satisfies the corresponding property. For the properties which rely only on dependence atoms (Strong Determinism, Weak Determinism, Single-Valuedness), the converse also holds.

The proposition applies much more generally than just to the handful of properties that we listed explicitly. Every family of formulae in FO(⊥) induces properties of relational and probabilistic models in a way that satisfies the assumptions of Proposition 4.7. Since independence logic is a rather powerful logic (recall that it can express all NP properties of teams by [16]), this induces a very large class of properties with the aforementioned relationship between probabilistic and relational models.

Implications of the form that “all models with property P also satisfy property Q” have been proved for instance in [1] and [9]. In our framework, such statements are elegantly cast as entailments between formulae of independence logic FO(⊥). Thus, they can be proven directly on the logical level. This allows us to make use of the relationship between ⊨rel and ⊨prob to develop the results for both cases in parallel. Theorem 4.9.
1. Single-Valuedness implies λ-Independence: SingVal^h_n ⊨all λ-Indep^h_n.
2. Weak Determinism implies Outcome Independence: WeakDet^h_n ⊨all Out-Indep^h_n.
3. Strong Determinism corresponds to the conjunction of Weak Determinism and Parameter Independence: StrongDet^h_n ≡all WeakDet^h_n ∧ Par-Indep^h_n.

Proof.
By Theorem 3.7, the probabilistic case suffices for (1) and (2). Claim (1) immediately follows from the fact that point distributions are stochastically independent of all distributions. For (2) we need to show that WeakDet^h_n ⊨prob Out-Indep^h_n. Let 𝕏 = (X, P) ⊨ WeakDet^h_n. For every (a, c) ∈ X(m, λ) the distribution P(o | m = a, λ = c) is a point distribution. This implies that 𝕏 ⊨ o_i ⊥_{mλ} o_{−i} and thereby 𝕏 ⊨ Out-Indep^h_n.

For (3) we first observe that the entailment StrongDet^h_n ⊨ WeakDet^h_n is obvious in both semantics. For StrongDet^h_n ⊨all WeakDet^h_n ∧ Par-Indep^h_n, it remains to show that StrongDet^h_n ⊨prob Par-Indep^h_n. But this is trivial as well, since the distribution over o_i for a fixed pair (m_i, λ) assigns, by Strong Determinism, probability 1 to a single value. This suffices because point distributions are stochastically independent of every other distribution, and thus in particular of the conditional distribution on m_{−i}.

For the converse, we show that WeakDet^h_n ∧ Par-Indep^h_n ⊨rel StrongDet^h_n. Let X ⊨ WeakDet^h_n ∧ Par-Indep^h_n, and let s_1, s_2 ∈ X agree on m_i and λ. Then, by Parameter Independence, there exists s_3 ∈ X that takes the same values on m_i and λ and also satisfies s_3(o_i) = s_1(o_i) and s_3(m_{−i}) = s_2(m_{−i}). This entails that s_3(m) = s_2(m), which in turn gives, by Weak Determinism, that s_3(o) = s_2(o) and in particular s_1(o_i) = s_3(o_i) = s_2(o_i). ◀

It is instructive to think about the intuition for the classification of Strong Determinism as the conjunction of Weak Determinism and Parameter Independence: consider a model which satisfies Weak Determinism but not Strong Determinism. In this case, full measurements m uniquely determine outcomes o, but single m_i do not determine o_i. However, it is then necessary that the other measurements m_{−i} contain information about o_i, which contradicts Parameter Independence.
In a sense, the independence of other measurements is precisely the missing component to go from Weak Determinism to Strong Determinism.

Recall that we defined Locality as the conjunction of Parameter Independence and Outcome Independence. Thus, we immediately obtain the following. Corollary 4.10.
Strong Determinism implies Locality:
StrongDet^h_n ⊨all Loc^h_n.

Because of Proposition 4.2, we can formulate implications of the form “if h is a hidden-variable model satisfying P^h, then its underlying empirical model e satisfies P^e” also by entailment of properties. Theorem 4.11.
The underlying empirical models of hidden-variable models that satisfy Parameter Independence and λ-Independence fulfill No-Signalling:

λ-Indep^h_n ∧ Par-Indep^h_n ⊨all NoSig^e_n.

Before we prove this, we want to give some intuition. Recall that the only difference between Parameter Independence and No-Signalling is that the former requires independence of o_i and m_{−i} conditional on (m_i, λ) instead of just m_i. Intuitively, we use λ-Independence to obtain a “copy” of s_2, namely s̃, which agrees with s_1 on the hidden variable, allowing us to apply Parameter Independence.

Proof (relational case).
Let X ⊨ Par-Indep^h_n ∧ λ-Indep^h_n. Let i ∈ {1, . . . , n} and s_1, s_2 ∈ X with s_1(m_i) = s_2(m_i). Choose, by λ-Independence, an assignment s̃ ∈ X with s̃(λ) = s_1(λ) and s̃(m) = s_2(m). Then s_1(m_i, λ) = s̃(m_i, λ), and by applying Parameter Independence to s_1 and s̃, we get s_3 ∈ X with s_3(o_i) = s_1(o_i) and s_3(m_{−i}) = s̃(m_{−i}) = s_2(m_{−i}). Thus, X ⊨ NoSig^e_n. ◀

Proof (probabilistic case).
Let 𝕏 = (X, P) ⊨ Par-Indep^h_n ∧ λ-Indep^h_n and i ∈ {1, . . . , n}. Let o ∈ X(o_i), a_i ∈ X(m_i), and a_{−i} ∈ X(m_{−i}). We put Λ = X(λ). We have

P(o_i = o | m = a) = Σ_{c∈Λ} P(o_i = o, λ = c | m = a)
  = Σ_{c∈Λ} P(o_i = o | λ = c, m = a) · P(λ = c | m = a)
  = Σ_{c∈Λ} P(o_i = o | λ = c, m_i = a_i) · P(λ = c | m = a)     (by Par-Indep^h_n)
  = Σ_{c∈Λ} P(o_i = o | λ = c, m_i = a_i) · P(λ = c | m_i = a_i)  (by λ-Indep^h_n)
  = P(o_i = o | m_i = a_i),

and thus 𝕏 ⊨ o_i ⊥_{m_i} m_{−i}. ◀

We now investigate the question whether a given empirical model admits an empirically equivalent hidden-variable model with certain given properties. Again, our team-semantical framework allows us to treat this in an elegant fashion. Essentially, the question reduces to the satisfiability of formulae of the form ∃λψ on extensions of the given team by a finite set of values for the hidden variable λ, where ψ encodes the properties of interest.

Notice that a team X of assignments s : dom(X) → A can of course also be understood as a team of assignments s : dom(X) → A ∪ Λ for any set Λ, and since elements of Λ do not occur in the team, this does not change any of the atomic dependence and independence properties of X. However, for an existential formula ∃λϕ (or a universal one), the universe of values that are available for λ does of course matter. Definition 5.1.
For a team X with values in A and a set Λ, we write X + Λ for the team with the additional supply Λ of values. We say that a formula ϕ ∈ FO(⊥) is satisfiable by a (finite) extension of X if there exists a (finite) set Λ such that X + Λ ⊨ ϕ. All of this applies as well to probabilistic teams.

The following observation, which we formulate for probabilistic teams, connects the existence of suitable hidden-variable models with team semantics.
Proposition 5.2.
Let 𝕏 be a probabilistic team representing an empirical model e_𝕏. For an arbitrary property P of hidden-variable models captured by a formula ψ ∈ FO(⊥), the following two statements are equivalent:
1. There is a hidden-variable model h_P which satisfies P and is empirically equivalent to e_𝕏.
2. ∃λψ is satisfiable by a finite extension of 𝕏.

Proof.
A hidden-variable model h_P (where λ takes values in Λ) which is empirically equivalent to e_𝕏 corresponds to a team that can be written as a Skolem extension 𝕐 = (𝕏 + Λ)[λ ↦ F] for a suitable function F : X → Δ(Λ). Conversely, every Skolem extension of 𝕏 + Λ represents a hidden-variable model that is equivalent to e_𝕏. The proposition follows since 𝕏 + Λ ⊨ ∃λψ expresses precisely that a suitable Skolem extension of 𝕏 + Λ satisfies ψ. ◀

We immediately get the following connection between probabilistic and relational existence results by our general results on the level of team semantics.
Proposition 5.3.
Let ψ ∈ FO(⊥) formalize a property P of both relational and probabilistic hidden-variable models. If a probabilistic empirical model has an empirically equivalent hidden-variable model satisfying P, then this also holds for its induced relational model. If ψ ∈ FO(dep), the converse also holds.
Proof.
Assume that the probabilistic case holds for a probabilistic team 𝕏 = (X, P) over Var^e_n. Thus, there is a finite set Λ such that 𝕏 + Λ ⊨ ∃λψ. This implies by Theorem 3.5 that X + Λ ⊨ ∃λψ, which was to be shown. For ψ ∈ FO(dep), the converse follows by similar arguments. ◀
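For relational teams and small value sets, the satisfiability of ∃λψ by a finite extension, as in Proposition 5.2, can be checked by brute force. A minimal sketch, entirely our own: the dict encoding, the variable name `lam`, and the restriction to one λ-value per assignment (which suffices for downward-closed properties, in particular FO(dep)-definable ones) are assumptions.

```python
from itertools import product

def rel_dep(team, xs, ys):
    """Relational dependence atom dep(xs, ys)."""
    return not any(
        all(s[v] == t[v] for v in xs) and any(s[v] != t[v] for v in ys)
        for s, t in product(team, repeat=2))

def exists_hidden_extension(team, k, holds):
    """Search for an extension of a relational team by a hidden variable
    λ with at most k values such that the predicate `holds` is satisfied.
    We try one λ-value per assignment, which is complete for
    downward-closed (e.g. dependence-logic) properties."""
    for choice in product(range(k), repeat=len(team)):
        ext = [{**s, 'lam': c} for s, c in zip(team, choice)]
        if holds(ext):
            return ext
    return None

def strong_det(ext, n=2):
    """StrongDet^h_n: dep(m_i λ, o_i) for every component i."""
    return all(rel_dep(ext, [f'm{i}', 'lam'], [f'o{i}'])
               for i in range(1, n + 1))

# The signalling team of Example 4.1 (our encoding):
sig = [{'m1': 'a', 'm2': 'b1', 'o1': '+', 'o2': '-'},
       {'m1': 'a', 'm2': 'b2', 'o1': '-', 'o2': '-'}]
```

On this team, one hidden value does not suffice for Strong Determinism, but two do; the witness found matches the construction of Proposition 5.5 below.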
Again, our elementary properties can be treated as special cases of this general result. If a probabilistic empirical model possesses an equivalent hidden-variable model satisfying any of the elementary properties defined above (such as Locality, λ-Independence, Strong Determinism, . . . ), then the same holds for its underlying relational model. For the properties that rely only on dependence atoms (Strong Determinism, Weak Determinism, Single-Valuedness), the converse also holds.

We next give logical proofs, in our framework, of some existence theorems which are already known from [1] and [9]. We will start with a trivial one. Proposition 5.4.
Every (relational and probabilistic) empirical model is realized by an equivalent hidden-variable model satisfying Single-Valuedness.
Proof.
Recall that SingVal^h_n := dep(−, λ), and for any team X, we can take an arbitrary set Λ to get that X + Λ ⊨ ∃λ dep(−, λ). This establishes the relational case, which also implies the probabilistic one. ◀

Proposition 5.5.
Every (relational and probabilistic) empirical model is realized by an empirically equivalent hidden-variable model satisfying Strong Determinism.
Proof.
Let X be a team and put Λ := X. We need to show that

X + Λ ⊨ ∃λ ⋀_{i=1}^n dep(m_i λ, o_i).

Consider the Skolem extension Y = (X + Λ)[λ ↦ F] with F(s) = {s}. For all s, s′ ∈ Y, s(λ) = s′(λ) implies that s = s′. This shows that Y satisfies Strong Determinism and thus proves the relational case. Since StrongDet^h_n ∈ FO(dep), the probabilistic case also follows. ◀
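The Skolem extension of this proof can be written out directly. In the following sketch (our own illustration; dicts encode assignments, and a row index stands in for the hidden value λ := s), we extend the signalling team of Example 4.1:

```python
from itertools import product

def rel_dep(team, xs, ys):
    """Relational dependence atom dep(xs, ys)."""
    return not any(
        all(s[v] == t[v] for v in xs) and any(s[v] != t[v] for v in ys)
        for s, t in product(team, repeat=2))

def strongly_det_extension(team):
    """Λ := X with F(s) = {s}: every assignment is tagged with (an index
    standing in for) itself as its hidden value."""
    return [{**s, 'lam': i} for i, s in enumerate(team)]

emp = [{'m1': 'a', 'm2': 'b1', 'o1': '+', 'o2': '-'},
       {'m1': 'a', 'm2': 'b2', 'o1': '-', 'o2': '-'}]
hv = strongly_det_extension(emp)
```

Strong Determinism dep(m_i λ, o_i) holds in `hv`, and forgetting λ recovers `emp`, so the extension is empirically equivalent. At the same time, λ even determines the measurements, so λ-Independence fails; this is exactly the defect discussed next.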
This construction shows that Strong Determinism is hardly a satisfying property in itself; letting every fixed hidden value deterministically determine a measurement-outcome pair is far from desirable. A natural additional assumption is λ-Independence, which states that the measurement process is independent from the hidden variables of the system to be measured. This is by no means satisfied in the construction above; indeed, the hidden variables deterministically determine the measurement taken.

Proposition 5.5 implies an existence result for Locality, since StrongDet^h_n ⊨all Loc^h_n. Corollary 5.6.
Every (relational and probabilistic) empirical model is realized by an empirically equivalent hidden-variable model satisfying Locality.
Terms such as “local realism” appearing in the literature refer in fact to the existence of hidden-variable models satisfying the conjunction of Locality and λ-Independence. This is not proven above, and it is indeed false in general, as we shall see in Section 6 when we discuss Bell’s Theorem. But if we weaken Strong Determinism to Weak Determinism, we can realize it together with λ-Independence. Proposition 5.7.
Every relational empirical model, and every probabilistic empirical model with rational probabilities, is realized by an empirically equivalent hidden-variable model satisfying Weak Determinism and λ-Independence.

Proof.
Note that we only need to prove the probabilistic case, as the restriction to rational probabilities does not impair the argument in the proof of Proposition 5.3. We adapt a construction from [9] to our team-semantic framework.

Let 𝕏 = (X, P_X) be a probabilistic team over Var^e_n, with |X(m)| = L and |X(o)| = K. For every pair z = (a, b) ∈ X(m) × X(o), let

P_X(o = b | m = a) = p(z) = r(z)/s(z)

with r(z), s(z) ∈ ℕ such that s(z) ≠ 0 and r(z) and s(z) are co-prime for each z. Let N be the least common multiple of all numbers s(z), and choose a set Λ with N points. The idea is to make all λ ∈ Λ equally likely and independent from the measurements. However, we assign a different set of hidden values to each measurement-outcome pair, as follows. For every z, let N(z) := p(z)N ∈ ℕ. Clearly, for every a ∈ X(m) we have that Σ_{b∈X(o)} N(a, b) = N. For every a ∈ X(m) we thus get a partition of Λ into a collection (Λ(a, b))_{b∈X(o)} of disjoint sets, where Λ(z) has N(z) elements. We now define the Skolem extension 𝕐 = 𝕏[λ ↦ F] by the function F : X → Δ(Λ) that maps s ∈ X with s(m, o) = z to the uniform distribution over Λ(z). We claim that 𝕐 satisfies Weak Determinism and λ-Independence.

Weak Determinism:
Weak Determinism follows immediately from the construction of 𝕐. Regard arbitrary a ∈ X(m), c ∈ Λ. There exists a unique b ∈ X(o) such that c ∈ Λ(a, b). This is by construction the unique b ∈ X(o) with P(o = b, λ = c | m = a) > 0.

λ-Independence: To prove λ-Independence we have to show that 𝕐 ⊨ m ⊥ λ. Regard arbitrary a ∈ X(m) and c ∈ Λ, and choose the unique b with non-zero probability implied by Weak Determinism. We observe that

P(λ = c | m = a) = P(o = b, λ = c | m = a) = P(λ = c | o = b, m = a) · P(o = b | m = a) = (1/N(z)) · p(z) = 1/N,

independent of a. This completes the proof. ◀

Next we provide a normal form for hidden-variable models satisfying Locality and λ-Independence, i.e. “local realism”. The relational case is adapted from [1] to our framework, while the elementary team-semantical proof for the probabilistic case is original to our work. The probabilistic result is known from [8] by a rather involved and non-elementary proof using measure and integration theory. Theorem 5.8.
Every relational empirical model, and every probabilistic empirical model with rational probabilities, which admits an empirically equivalent hidden-variable model satisfying Locality and λ-Independence also admits an equivalent model satisfying Strong Determinism and λ-Independence.

Proof (relational case).
Let X be a team over Var^e_n such that X + Λ ⊨ ∃λ (λ-Indep^h_n ∧ Loc^h_n). Thus, there is a Skolem extension Y = (X + Λ)[λ ↦ F] such that Y ⊨ λ-Indep^h_n ∧ Loc^h_n. For a ∈ X(m), c ∈ Λ we let

O(a, c) := {b ∈ X(o) : (a, b, c) ∈ Y(m, o, λ)}.

By λ-Independence, O(a, c) ≠ ∅. Also, for each i ≤ n and a ∈ X(m_i), we set

O_i(a, c) := {b ∈ X(o_i) : (a, b, c) ∈ Y(m_i, o_i, λ)}.

Note that Y ⊨ Loc^h_n and Lemma 4.4 imply that O(a, c) = ∏_{i=1}^n O_i(a_i, c).

Let F^i_c be the set of all functions f_i : X(m_i) → X(o_i) where f_i(a) ∈ O_i(a, c) for all a ∈ X(m_i). The set Λ′ is now defined to contain all pairs (c, f) = (c, (f_1, . . . , f_n)) where c ∈ Y(λ) and f_i ∈ F^i_c for i ≤ n. Let F′ : X → P⁺(Λ′) be the function with

F′(s) = {(c, f) ∈ Λ′ : f_i(s(m_i)) = s(o_i) for all i}.

It is straightforward to see that F′(s) is indeed non-empty. We claim that Z := X[λ ↦ F′] satisfies Strong Determinism and λ-Independence.

Strong Determinism:
Let s, s' ∈ Z with s(λ) = s'(λ) = (c, f) and s(m_i) = s'(m_i) = a. By definition of F, it holds that s(o_i) = f_i(s(m_i)) = f_i(s'(m_i)) = s'(o_i).

λ-Independence: We show the slightly stronger claim that for every a ∈ X(m) and (c, f) ∈ Λ' there exists some s ∈ Z with s(m) = a and s(λ) = (c, f). By definition of Λ', it holds that b_i := f_i(a_i) ∈ O_i(a_i, c) for all i ≤ n. Thus, b ∈ ∏_{i=1}^{n} O_i(a_i, c) = O(a, c). Choose s ∈ X with s(m) = a, s(o) = b. Since (c, f) ∈ F(s), our claim and thereby λ-Independence follows. ◀

Proof (probabilistic case).
Let now X = (X, P_X) be a probabilistic team over Var_en with X + Λ |= (∃λ ∈ Λ)(λ-Indep_hn ∧ Loc_hn), and let Y = (X + Λ)[λ ↦ F] be a Skolem extension with Y |= λ-Indep_hn ∧ Loc_hn. As in the relational case we set, for a ∈ X(m), c ∈ Λ,

O(a, c) = {b ∈ X(o) : (a, b, c) ∈ Y(m, o, λ)} = {b ∈ X(o) : P_Y(o = b | m = a, λ = c) > 0},

which is non-empty by λ-Independence. Also, we let for all i ≤ n and a ∈ X(m_i)

O_i(a, c) := {b ∈ X(o_i) : (a, b, c) ∈ Y(m_i, o_i, λ)},

such that, by Locality, O(a, c) = ∏_{i=1}^{n} O_i(a_i, c).

Let Z_i = Y(m_i) × Y(o_i) × Λ. For every triple z_i = (a_i, b_i, c) ∈ Z_i, let

P_Y(o_i = b_i | m_i = a_i, λ = c) = p_i(z_i) = r_i(z_i) / s_i(z_i),

where r_i(z_i) and s_i(z_i) are co-prime natural numbers with s_i(z_i) ≠ 0. Let N_i be the least common multiple of the numbers s_i(z_i), let Λ_i be a set with N_i elements, and define Λ̃ = Λ × (Λ_1 × · · · × Λ_n). We then put N_i(z_i) = p_i(z_i) · N_i and construct, for every pair (a_i, c) ∈ Y(m_i) × Λ, a partition (Λ_i(a_i, b_i, c))_{b_i ∈ Y(o_i)} of Λ_i with |Λ_i(z_i)| = N_i(z_i) for all z_i ∈ Z_i. This is possible because

∑_{b_i ∈ Y(o_i)} N_i(a_i, b_i, c) = ∑_{b_i ∈ Y(o_i)} p_i(a_i, b_i, c) · N_i = N_i · ∑_{b_i ∈ Y(o_i)} P_Y(o_i = b_i | m_i = a_i, λ = c) = N_i.

We now define a function F : X → Δ(Λ̃) as follows. An assignment s ∈ X with s(m, o, λ) = (a, b, c) defines the tuple (z_1, . . . , z_n) where z_i = (a_i, b_i, c) ∈ Z_i. The function F maps s to the probability distribution F_s with

F_s(c, (c_1, . . . , c_n)) = P_Y(λ = c | m = a, o = b) / (N_1(z_1) · · · N_n(z_n)),

if (c_1, . . . , c_n) ∈ Λ_1(z_1) × · · · × Λ_n(z_n), and F_s(c, (c_1, . . . , c_n)) = 0 otherwise. It is straightforward to see that this is indeed always a probability distribution. We then define Z = X[λ ↦ F] and claim that Z satisfies Strong Determinism and λ-Independence:

Strong Determinism:
Regard an arbitrary a_i ∈ X(m_i) and (c, (c_1, . . . , c_n)) ∈ Λ̃. Suppose that P_Z(o_i = b_i | m_i = a_i, λ = (c, (c_1, . . . , c_n))) > 0. By definition of F it then follows that c_i ∈ Λ_i(z_i) for z_i = (a_i, b_i, c). Since a_i, c, c_i are fixed and the sets Λ_i(a_i, b_i, c) partition Λ_i, this uniquely determines b_i and shows Strong Determinism.

λ-Independence: Here, we abuse notation a bit and write λ = (λ_1, λ_2) to refer to the two components of elements of Λ̃. Let a ∈ X(m) and (c, (c_1, . . . , c_n)) ∈ Λ̃. There is a unique b ∈ X(o) with c_i ∈ Λ_i(a_i, b_i, c) for all i. Thus,

P_Z(λ = (c, (c_1, . . . , c_n)) | m = a) = P_Z(λ = (c, (c_1, . . . , c_n)), o = b | m = a) = P_Z(λ_2 = (c_1, . . . , c_n) | λ_1 = c, o = b, m = a) · P_Z(λ_1 = c, o = b | m = a).

▶ Lemma 5.9. P_Z(o = b, λ_1 = c | m = a) = P_Y(λ = c) · ∏_{i=1}^{n} p_i(z_i).

Indeed, P_Z(λ_1 = c, o = b, m = a) = P_Y(λ = c, o = b, m = a), since marginalizing Z to (m, o, λ_1) gives rise to Y. Hence

P_Z(o = b, λ_1 = c | m = a) = P_Y(o = b, λ = c | m = a) = P_Y(o = b | λ = c, m = a) · P_Y(λ = c | m = a),

and by Locality and λ-Independence of Y this coincides with

∏_{i=1}^{n} P_Y(o_i = b_i | m_i = a_i, λ = c) · P_Y(λ = c) = P_Y(λ = c) · ∏_{i=1}^{n} p_i(z_i).

This proves the lemma. Putting it together with the equation above, and observing that F_s spreads its mass uniformly, so that P_Z(λ_2 = (c_1, . . . , c_n) | λ_1 = c, o = b, m = a) = ∏_{i=1}^{n} 1/N_i(z_i) with N_i(z_i) = N_i · p_i(z_i), we get

P_Z(λ = (c, (c_1, . . . , c_n)) | m = a) = ∏_{i=1}^{n} (1/(N_i · p_i(z_i))) · P_Y(λ = c) · ∏_{i=1}^{n} p_i(z_i) = P_Y(λ = c) / (N_1 · · · N_n),

which is independent of a. This proves that Z |= λ-Indep_hn. ◀

The famous theorem by Bell, originally formulated in the groundbreaking work [4], showed that a certain flavor of local hidden-variable theories cannot reproduce some predictions of quantum mechanics. In our framework, this flavor corresponds to the conjunction of Strong Determinism and λ-Independence – which by Theorem 5.8 is closely related to the conjunction of Locality and λ-Independence. Commonly, the class of theories refuted by Bell's Theorem is referred to by the name of "local realism" (cf.
[12]). Since Bell's work contains one of the most influential results on hidden-variables, we think it is worthwhile to discuss how its assumptions compare to our modern formulation. All important assumptions of Bell, which are led to a contradiction, are contained in equation (2) of the original paper [4], namely

P(a⃗, b⃗) = ∫ P(λ) A(a⃗, λ) B(b⃗, λ) dλ.

There, a⃗, b⃗ range over measurements of the two components of the system, λ ranges over a space of hidden-variables, and A and B denote functions that provide the outcomes of the respective components deterministically, depending on their arguments. Thus, P(a⃗, b⃗) denotes the expectation value of the product of the outcomes of the two components when the measurements a⃗, b⃗ are chosen.

The assumed properties are implicitly encoded in the dependencies of the equation above. Writing A(a⃗, λ) and B(b⃗, λ) entails the assumption that the outcome of a component is deterministically given by the measurement in that component together with the hidden-variable, i.e. Strong Determinism. Writing P(λ) instead of P(λ | a⃗, b⃗) assumes that the distribution over λ is independent of the measurements chosen, i.e. λ-Independence. Thus, proving that Strong Determinism and λ-Independence cannot generally be realized together constitutes a proof of Bell's no-go theorem.

The proof of Bell's Theorem is probabilistic in nature. By Proposition 5.3, a relational analogue implies the probabilistic formulation. Therefore, we provide a relational no-go theorem based on the Hardy Paradox from [22] instead of the original argument due to Bell. It shall be noted, however, that the significance of Bell's Theorem goes far beyond the theoretical no-go result, as it allows a way to experimentally test Non-Locality (cf. for example [12]). In the scope of this work, we are just interested in the theoretical result, for which Hardy's construction more than suffices. Our presentation of the Hardy Paradox is based on [1].

▶ Theorem 6.1.
There exists a relational empirical model without an equivalent hidden-variable model satisfying the conjunction of Strong Determinism and λ-Independence.

Proof.
Let X be an arbitrary team over Var_e2 which satisfies the following assumptions:

(1) X(m_1, m_2) = {(a_1, b_1), (a_1, b_2), (a_2, b_1), (a_2, b_2)},
(2) X(o_1, o_2) ⊆ {R, G}^2, where R ≠ G,
(3) (a_1 b_1, RR) ∈ X(m_1 m_2, o_1 o_2),
(4) (a_1 b_2, RR) ∉ X(m_1 m_2, o_1 o_2),
(5) (a_2 b_1, RR) ∉ X(m_1 m_2, o_1 o_2),
(6) (a_2 b_2, GG) ∉ X(m_1 m_2, o_1 o_2).

Of course many such teams exist. Assume that X has an equivalent hidden-variable model represented by a team Y with Y |= StrongDet_h2 ∧ λ-Indep_h2. Note that Y |= Par-Indep_h2, since StrongDet_hn |=_rel Par-Indep_hn.

Due to (3), there exists s_1 ∈ Y with s_1(m_1 m_2) = a_1 b_1 and s_1(o_1 o_2) = RR. Define c = s_1(λ). There also exists an s_2 ∈ Y with s_2(m_1 m_2) = a_1 b_2 and s_2(λ) = c, due to λ-Independence and (a_1, b_2) ∈ X(m) = Y(m). Applying Parameter Independence to s_1 and s_2 gives s_3 ∈ Y with s_3(m) = s_2(m) and s_3(o_1) = s_1(o_1) = R. Assumption (4) gives s_3(o_2) = G.

Analogously to s_2, we have s_4 ∈ Y with s_4(m_1 m_2) = a_2 b_1 and s_4(λ) = c. Applying Parameter Independence to s_1 and s_4 gives s_5 ∈ Y with s_5(m_1 m_2) = a_2 b_1 and s_5(o_2) = s_1(o_2) = R. Assumption (5) gives s_5(o_1) = G.

Next, take s_6 ∈ Y with s_6(m_1 m_2) = a_2 b_2 and s_6(λ) = c, again by λ-Independence. Applying Parameter Independence to s_3 and s_6 gives s_7 ∈ Y with s_7(m_1 m_2) = a_2 b_2 and s_7(o_2) = s_3(o_2) = G; assumption (6) then gives s_7(o_1) = R. However, s_7(m_1) = a_2 = s_5(m_1) and s_7(λ) = c = s_5(λ), but s_7(o_1) = R ≠ G = s_5(o_1), which contradicts Strong Determinism. ◀

By Theorem 5.8, we immediately obtain a no-go result for local realism, formalizing the notion that quantum mechanics is non-local. ▶
Corollary 6.2. There exist a relational empirical model and a probabilistic empirical model without empirically equivalent hidden-variable models satisfying the conjunction of Locality and λ-Independence.

We want to conclude this section with a historical note: Bell himself did not regard his theorem as a refutation of hidden-variables. In fact, Bell was a proponent of Bohmian Mechanics, introduced in [6, 7] – an explicitly non-local hidden-variable interpretation of quantum mechanics. Bell clearly emphasizes (see [5], p. 53) that the essential argument of his theorem does not depend on determinism or any other property of hidden-variables, but just on the kind of correlations that quantum mechanics predicts. Thus, he said, "it is a merit of the de Broglie-Bohm version to bring this out so explicitly that it cannot be ignored" ([3], p. 159).
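The combinatorial core of the Hardy argument in the proof of Theorem 6.1 is small enough to check by brute force. The following sketch (the encoding of measurements and outcomes as strings is our own choice for this illustration, not part of the paper's framework) uses the observation that under Strong Determinism and λ-Independence every hidden value c induces a pair of local outcome functions f_1, f_2 that must be admissible in all four measurement contexts; enumerating all sixteen such pairs shows that none of them can realise the outcome RR at (a_1, b_1):

```python
from itertools import product

R, G = "R", "G"
measurements_1, measurements_2 = ["a1", "a2"], ["b1", "b2"]

# Hardy-style possibility constraints (4), (5), (6) from the proof of
# Theorem 6.1: these context/outcome combinations are impossible.
forbidden = {("a1", "b2", R, R), ("a2", "b1", R, R), ("a2", "b2", G, G)}

def admissible(f1, f2):
    """A hidden value with outcome functions f1, f2 (Strong Determinism)
    may never produce a forbidden outcome in any measurement context,
    since by lambda-Independence it is available in every context."""
    return all((m1, m2, f1[m1], f2[m2]) not in forbidden
               for m1 in measurements_1 for m2 in measurements_2)

# Enumerate all 16 deterministic local outcome assignments.
witnesses = [
    (f1, f2)
    for f1 in (dict(zip(measurements_1, v)) for v in product([R, G], repeat=2))
    for f2 in (dict(zip(measurements_2, v)) for v in product([R, G], repeat=2))
    if admissible(f1, f2)
]

# No admissible assignment realises RR at (a1, b1), so condition (3)
# cannot be met: the Hardy model has no strongly deterministic,
# lambda-independent hidden-variable model.
realises_rr = any(f1["a1"] == R and f2["b1"] == R for f1, f2 in witnesses)
print(realises_rr)  # False
```

Admissible pairs do exist (for instance f_1 constantly G with f_2(b_2) = R), so the failure really is specific to condition (3), exactly as in the proof above.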
The celebrated Kochen-Specker Theorem about the contextuality of quantum mechanics is one of the most important results about hidden-variable models. It is rather difficult to understand, which is also due to the fact that it is often formulated in an imprecise way and that the notion of (non-)contextuality is used inconsistently in the literature. We attempt to shed light on the Kochen-Specker Theorem by discussing several variants of it, including a formulation purely in terms of linear algebra and two different formulations in terms of logics with team semantics, highlighting (non-)contextuality as a team-semantical property.
We first recall the Kochen-Specker Theorem in (essentially) its original form. A measurement context on a Hilbert space is a set X of observables (i.e. Hermitian operators) that are compatible in the sense that they share a common eigenbasis. In quantum mechanical terms, the Kochen-Specker Theorem states that there is a finite set M of observables such that it is impossible to assign a unique meaningful value to every observable in M in a non-contextual way, i.e. independent of its measurement context X ⊆ M.

▶ Definition 7.1.
Let M be a set of linear operators on a Hilbert space H. A valuation v : M → R respects the algebraic structure of H if for all A, B ∈ M we have that if A + B ∈ M, then v(A) + v(B) = v(A + B), and if A · B ∈ M, then v(A) · v(B) = v(A · B).

▶ Theorem 7.2 (Kochen-Specker). For every Hilbert space H with dim H ≥ 3, there exists a finite set M of Hermitian operators on H such that no v : M → R with v ≠ 0 respects the algebraic structure of H.

A modern proof [11] for dim H ≥ 4 is based on the following lemma about vectors in R^4.

▶ Lemma 7.3.
There exists a set Z = {a_1, . . . , a_18} ⊆ R^4 and nine sets X_1, . . . , X_9, each consisting of four elements of Z that form an ONB of R^4, such that no subset S ⊆ Z fulfills |S ∩ X_j| = 1 for all j.

This lemma relies on an explicit construction of vectors a_1, . . . , a_18 ∈ R^4, and a collection of subsets X_1, . . . , X_9 that use every a_i exactly twice. That is, for every 1 ≤ i ≤ 18 there exist precisely two indices j with a_i ∈ X_j. Once this construction is complete, the lemma follows by a simple parity argument: if |S ∩ X_j| = 1 for all j, then ∑_{j=1}^{9} |S ∩ X_j| = 9 is odd, but since every element of S lies in exactly two of the sets X_j, this sum equals 2·|S| and is even. The explicit construction is not important for our purposes and can be found in [11].

The Kochen-Specker Theorem (for dim H ≥ 4) can now be established as follows: Each vector a ∈ Z induces the orthogonal projection operator A onto the span of a. Let M be the collection of all operators A_i, induced by a_i ∈ Z, together with the identity operator I. For an ONB {a, b, c, d} the induced projection operators A, B, C, D, together with I, form a measurement context of compatible observables, since they share {a, b, c, d} as a common eigenbasis. Let now v : M → R be a valuation that respects the algebraic structure. We have to show that v = 0. Otherwise, assume that v(A) ≠ 0 for some A ∈ M. Since v(I) · v(A) = v(I · A) = v(A), it follows that v(I) = 1. Further, every projection operator A ∈ M is idempotent, so v(A)^2 = v(A) and hence v(A) ∈ {0, 1}. Finally, if A, B, C, D are projection operators corresponding to an orthonormal basis, it holds that A + B + C + D = I and thereby that v(A) + v(B) + v(C) + v(D) = 1. Since v(A), . . . , v(D) ∈ {0, 1}, it follows that precisely one of the four projections is mapped to 1. But then the set S := {a_i ∈ Z : v(A_i) = 1} would have the property that |S ∩ X_j| = 1 for all j, contradicting Lemma 7.3. For the proof of dimension three, we refer the interested reader to [23, 25].

We now formulate the Kochen-Specker theorem in the language of teams, focusing on the notion of non-contextuality. We provide two alternative formulations, each of which has its own advantages. To formulate this as elegantly as possible, we first define new team-semantic atoms. ▶
Definition 7.4.
For k-tuples x_1, x_2 and m-tuples y_1, y_2 of variables, let X |= dep((x_1, x_2), (y_1, y_2)) if for all s, t ∈ X with s(x_1) = t(x_2), also s(y_1) = t(y_2). Further, we define that X |= nc(x_1 . . . x_k, y) if for all s, t ∈ X with t(y) ∈ {s(x_1), . . . , s(x_k)} it follows that s(y) = t(y). We use this to define the atom ncc(x_1 . . . x_k) for non-contextual choice, with semantics given by

ncc(x_1 . . . x_k) ≡ ∃y ( (⋁_{i=1}^{k} y = x_i) ∧ nc(x_1 . . . x_k, y) ).

Obviously, this new dependence atom is an extension of the regular one, with dep(x, y) ≡ dep((x, x), (y, y)). The atom ncc(x_1 . . . x_k) for non-contextual choice expresses that one can choose for every assignment s ∈ X an element s(y) ∈ {s(x_1), . . . , s(x_k)}, with the additional constraint that an element that is selected in an assignment s is also selected in every other assignment t in which it appears. This explains the name "non-contextual choice". Since all these atoms are obviously in NP and downwards-closed, it follows by [26] that they are definable in dependence logic. Explicit formulae for them are given in Appendix A.

▶ Definition 7.5.
A team X over variables Var_en = {m_1, . . . , m_n, o_1, . . . , o_n} (and the empirical model it represents) is non-contextual if there exists in every component i ≤ n a valuation v_i : X(m_i) → X(o_i) that is consistent with the empirical model in the sense that for measurements m = a the outcome o = v_1(a_1) . . . v_n(a_n) is possible. Formally, for every n ∈ N, non-contextuality can be defined by the formula

NonContext_en := ∃v_1 . . . v_n ( ⋀_{1 ≤ i ≤ j ≤ n} dep((m_i, m_j), (v_i, v_j)) ∧ mv ⊆ mo ).

The generalized dependence atom ensures that if the same measurement appears in components i and j, then the valuations v_i, v_j agree there. Notice that NonContext_en uses both (extended) dependence atoms and inclusion atoms, so it is a formula of independence logic.

Thus, X ⊭ NonContext_en means that there is no single, non-contextual assignment of values to all measurements that is consistent with the empirical model. The term contextuality refers to the fact that in this situation measurement outcomes are inherently dependent on their measurement context, i.e. the other measurements performed simultaneously. There is no way to assign values to measurements independently of each other in a consistent way.

With this formula we can formulate an analogue of the Kochen-Specker Theorem. Let e_i = (δ_i1, . . . , δ_i4) ∈ {0, 1}^4 with δ_ij = 1 if, and only if, i = j.

▶ Theorem 7.6 (Logical formulation of the Kochen-Specker Theorem). There exists a team X over variables {m_1, . . . , m_4} such that every extension Y of X to variables {m_1, . . . , m_4, o_1, . . . , o_4} which satisfies the constraint that Y(o) ⊆ {e_1, . . . , e_4} violates non-contextuality, i.e. Y ⊭ NonContext_e4.

The Kochen-Specker Theorem gives us a class of empirical models which are necessarily contextual: no non-contextual outcome of measurements is possible, due to the violation of NonContext_e4. Note that the constraint Y(o) ⊆ {e_1, . . . , e_4} corresponds to the requirement that valuations must respect the algebraic structure of the measurement operators. This implies that in each ONB, the valuations actually define a choice of one of the basis vectors in the ONB. It is a very remarkable feature of the theorem that the measurement setup alone, encoded by X, suffices to ensure that all compatible extensions are contextual.

Using our non-contextual choice atom we can give a different team-semantical formulation of the Kochen-Specker Theorem that makes this aspect of choice more explicit. While the version given above is closer to the rest of our framework and structurally more similar to the general formulation of the Kochen-Specker Theorem, the alternative formulation below directly corresponds to the linear-algebraic argument formalized by Lemma 7.3 and is more compact and elegant.

▶ Theorem 7.7.
There is a team X over the variables {m_1, . . . , m_4} such that X ⊭ ncc(m_1 . . . m_4). Furthermore, it is possible to choose X with values in R^4 so that for every s ∈ X the set {s(m_1), . . . , s(m_4)} forms an ONB of R^4.

Proof.
By Lemma 7.3 we can form the team X consisting of assignments s_1, . . . , s_9 with values in R^4 such that for each i ≤ 9, the set B_i := {s_i(m_1), . . . , s_i(m_4)} is an ONB, consisting of four vectors from the collection Z = {a_1, . . . , a_18} constructed in the proof of Lemma 7.3. We claim that X ⊭ ncc(m_1 . . . m_4). Otherwise there exists a choice function f mapping each of the sets B_1, . . . , B_9 to one of its elements such that, whenever f(B_i) = a and a ∈ B_k, then also f(B_k) = a. But this means that the image of f forms a set S ⊆ Z such that |S ∩ X_i| = 1 for all i ≤ 9, contradicting Lemma 7.3. ◀
Thus, Theorem 7.7 is a team-semantical formulation of Lemma 7.3, which readily implies the Kochen-Specker Theorem.
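The parity argument behind Lemma 7.3 and Theorem 7.7 is easy to machine-check. The sketch below does not reproduce Cabello's concrete 18 vectors [11]; instead it uses an abstract stand-in with the same incidence structure (nine blocks of four points, every point in exactly two blocks), obtained from a 4-regular graph on nine vertices whose edges play the role of the vectors a_1, . . . , a_18. A brute-force search over all 4^9 candidate choices confirms that no non-contextual choice in the sense of ncc exists:

```python
from itertools import product

# Abstract stand-in for the nine ONBs of Lemma 7.3: the circulant graph
# on vertices 0..8 with differences 1 and 2 has 18 edges and is 4-regular,
# so block X_j (edges at vertex j) has four points and every point (edge)
# lies in exactly two blocks -- the incidence pattern the lemma requires.
vertices = range(9)
edges = sorted({tuple(sorted((v, (v + d) % 9))) for v in vertices for d in (1, 2)})
blocks = [[e for e in edges if j in e] for j in vertices]

def non_contextual(choice):
    """ncc would demand one point per block such that a point chosen in one
    block is chosen in every block containing it, i.e. the chosen points
    form a set S with |S intersect X_j| = 1 for all j."""
    chosen = set(choice)
    return all(sum(1 for e in block if e in chosen) == 1 for block in blocks)

# Exhaustively test every choice of one point per block.
found = any(non_contextual(c) for c in product(*blocks))
print(found)  # False
```

The search necessarily fails for the parity reason given after Lemma 7.3: a successful choice would cover each of the nine vertices exactly once by edges, each of which covers two vertices, so the covered count would be both 9 and even.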
References

1. S. Abramsky. Relational Hidden Variables and Non-Locality. Studia Logica, 101(2):411–452, 2013. doi:10.1007/s11225-013-9477-4.
2. J. S. Bell. On the Problem of Hidden-Variables in Quantum Mechanics. Reviews of Modern Physics, 38:447–452, 1966. doi:10.1103/RevModPhys.38.447.
3. J. S. Bell. de Broglie-Bohm, Delayed-Choice, Double-Slit Experiment, and Density Matrix. International Journal of Quantum Chemistry, 18(S14):155–159, 1980. doi:10.1002/qua.560180819.
4. J. S. Bell. On the Einstein Podolsky Rosen paradox. Physics Physique Fizika, 1(3):195–200, 1964. doi:10.1103/PhysicsPhysiqueFizika.1.195.
5. J. S. Bell. Bertlmann's Socks and the Nature of Reality. Journal de Physique Colloques, 42(C2):41–62, 1981. doi:10.1051/jphyscol:1981202.
6. D. Bohm. A Suggested Interpretation of the Quantum Theory in Terms of "Hidden" Variables. I. Physical Review, 85(2):166–179, 1952. doi:10.1103/PhysRev.85.166.
7. D. Bohm. A Suggested Interpretation of the Quantum Theory in Terms of "Hidden" Variables. II. Physical Review, 85(2):180–193, 1952. doi:10.1103/PhysRev.85.180.
8. A. Brandenburger and J. Keisler. Observable Implications of Unobservable Variables. Accessible via http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.167.3791 (August 2020), 2010.
9. A. Brandenburger and N. Yanofsky. A classification of hidden-variable properties. Journal of Physics A: Mathematical and Theoretical, 41(42), 2008. doi:10.1088/1751-8113/41/42/425302.
10. J. Bub. Von Neumann's "No Hidden Variables" Proof: A Re-Appraisal. Foundations of Physics, 40(9-10):1333–1340, 2010. doi:10.1007/s10701-010-9480-9.
11. A. Cabello, J. Estebaranz, and G. García-Alcaine. Bell-Kochen-Specker theorem: A proof with 18 vectors. Physics Letters A, 212(4):183–187, 1996. doi:10.1016/0375-9601(96)00134-X.
12. J. Clauser and A. Shimony. Bell's Theorem. Experimental tests and implications. Reports on Progress in Physics, 41(12):1881–1927, 1978. doi:10.1088/0034-4885/41/12/002.
13. L. de Broglie. La mécanique ondulatoire et la structure atomique de la matière et du rayonnement. Journal de Physique et le Radium, 8(5):225–241, 1927. doi:10.1051/jphysrad:0192700805022500.
14. A. Durand, M. Hannula, J. Kontinen, A. Meier, and J. Virtema. Probabilistic Team Semantics. In F. Ferrarotti and S. Woltran, editors, Foundations of Information and Knowledge Systems. FoIKS 2018, volume 10833 of Lecture Notes in Computer Science, pages 186–206. Springer International Publishing, 2018. doi:10.1007/978-3-319-90050-6_11.
15. A. Einstein, B. Podolsky, and N. Rosen. Can Quantum-Mechanical Description of Physical Reality be Considered Complete? Physical Review, 47(10):777–780, 1935. doi:10.1103/PhysRev.47.777.
16. P. Galliani. Inclusion and exclusion in team semantics — On some logics of imperfect information. Annals of Pure and Applied Logic, 163(1):68–84, 2012. doi:10.1016/j.apal.2011.08.005.
17. P. Galliani and J. Väänänen. On Dependence Logic. In A. Baltag and S. Smets, editors, Johan van Benthem on Logical and Informational Dynamics, volume 5 of Outstanding Contributions to Logic, pages 101–119. Springer, 2014. doi:10.1007/978-3-319-06025-5_4.
18. D. Geiger, A. Paz, and J. Pearl. Axioms and algorithms for inferences involving probabilistic independence. Information and Computation, 91(1):128–141, 1991. doi:10.1016/0890-5401(91)90077-F.
19. E. Grädel and J. Väänänen. Dependence and Independence. Studia Logica, 101(2):399–410, 2013. doi:10.1007/s11225-013-9479-2.
20. M. Hannula, A. Hirvonen, J. Kontinen, V. Kulikov, and J. Virtema. Facets of Distribution Identities in Probabilistic Team Semantics. In F. Calimeri, N. Leone, and M. Manna, editors, Logics in Artificial Intelligence. JELIA 2019, volume 11468 of Lecture Notes in Computer Science, pages 304–320. Springer, 2019. doi:10.1007/978-3-030-19570-0_20.
21. M. Hannula, J. Kontinen, J. Van den Bussche, and J. Virtema. Descriptive complexity of real computation and probabilistic independence logic. In Proceedings of the 35th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS '20, pages 550–563. Association for Computing Machinery, 2020. doi:10.1145/3373718.3394773.
22. L. Hardy. Nonlocality for two particles without inequalities for almost all entangled states. Physical Review Letters, 71(11):1665–1668, 1993. doi:10.1103/PhysRevLett.71.1665.
23. C. Held. The Kochen-Specker Theorem. In E. Zalta, editor, The Stanford Encyclopedia of Philosophy (Spring 2018 Edition). Metaphysics Research Lab, Stanford University, 2018. URL: https://plato.stanford.edu/archives/spr2018/entries/kochen-specker/.
24. W. Hodges. Compositional semantics for a logic of imperfect information. Logic Journal of IGPL, 5(4):539–563, 1997. doi:10.1093/jigpal/5.4.539.
25. S. Kochen and E. Specker. The Problem of Hidden Variables in Quantum Mechanics. Journal of Mathematics and Mechanics, 17(1):59–87, 1967.
26. J. Kontinen and J. Väänänen. On Definability in Dependence Logic. Journal of Logic, Language, and Information, 18(3):317–332, 2009. doi:10.1007/s10849-009-9082-0.
27. J. Puljujärvi and J. Väänänen. Team Semantics and Independence Notions in Quantum Physics (Personal communication), 2021.
28. M. Studeny. Multiinformation and the Problem of Characterization of Conditional Independence Relations. Problems of Control and Information Theory, 18(1):3–16, 1989. URL: http://ftp.utia.cas.cz/pub/staff/studeny/multiinformation-PCIT-89.pdf.
29. J. Väänänen. Dependence Logic: A New Approach to Independence Friendly Logic. London Mathematical Society Student Texts. Cambridge University Press, 2007. doi:10.1017/CBO9780511611193.
30. J. von Neumann. Mathematische Grundlagen der Quantenmechanik. Springer, 1932. doi:10.1007/978-3-642-61409-5.
31. M. Wong and C. Butz. On the Implication Problem for Probabilistic Conditional Independency. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, 30(6):785–805, 2000. doi:10.1109/3468.895901.

A Explicit Formulae for Extended Dependence and Non-Contextual Choice

▶
Proposition A.1.
The following formula ψ ∈ FO(dep) is equivalent to the generalized dependence atom dep((x_1, x_2), (y_1, y_2)):

ψ = ∀z_1 z_2 ∀w_1 w_2 ∃u_1 ∃u_2 ∃u_3 ( ⋀_{i=1}^{3} dep(z_1 z_2 w_1 w_2, u_i) ∧ [ (u_1 = u_2 ≠ u_3 ∧ z_1 w_1 | x_1 y_1) ∨ (u_1 ≠ u_2 = u_3 ∧ z_2 w_2 | x_2 y_2) ∨ (u_1 = u_3 ≠ u_2 ∧ (z_1 ≠ z_2 ∨ w_1 = w_2)) ] )

Proof.
In team semantics, we can compare equality only within an assignment. However, we need to compare values of x_1 y_1 and x_2 y_2 across different tuples s, t. We therefore create copies of x_1 y_1 and x_2 y_2 and regard all possible recombinations: z_1 w_1 is to be interpreted as a copy of x_1 y_1 and similarly z_2 w_2 as a copy of x_2 y_2. By 'recombining' we mean that if ac appears as a value of s(x_1 y_1) and bd as a value of t(x_2 y_2), we want abcd to appear as the value of z_1 z_2 w_1 w_2 within one assignment v. We can then check whether s(x_1) = t(x_2) → s(y_1) = t(y_2) holds by checking whether v(z_1) ≠ v(z_2) ∨ v(w_1) = v(w_2) – and this is what is done in the third disjunct. The role of the u_i and the first two disjuncts is to ensure that in the third disjunct we only regard values that are actually taken on by x_1 y_1 and x_2 y_2.

We first extend the team via all possible values of z_1, z_2, w_1, w_2 using the universal quantifiers. We then add flags u_1, u_2, u_3 to every assignment that are functionally dependent on z_1 z_2 w_1 w_2. This ensures that all assignments that agree on these copies share the same flags. The disjuncts then ensure that all assignments with identical flags must be assigned to the same disjunct. Therefore, the disjunction does not really partition our assignments, but rather the values of our copies.

The first disjunct handles all copies where the value of z_1 w_1 does not in fact appear as a value of x_1 y_1. Similarly, the second disjunct handles all copies where the value of z_2 w_2 does not appear as a value of x_2 y_2. The elements that cannot be handled in either the first or the second disjunct are precisely those of interest to us and must satisfy the z_1 = z_2 → w_1 = w_2 condition from the final disjunct. This gives us the semantics of the generalized dependence atom. ◀

A similar approach works for the atom nc(x_1 . . . x_k, y). Recall that X |= nc(x_1 . . . x_k, y) if for all s, t ∈ X with t(y) ∈ {s(x_1), . . . , s(x_k)} it follows that s(y) = t(y).

▶ Proposition A.2.
The following ψ ∈ FO(dep) is equivalent to nc(x_1 . . . x_k, y):

ψ = ∀z_1 . . . z_k ∀w_1 w_2 ∃u_1 ∃u_2 ∃u_3 ( ⋀_{i=1}^{3} dep(z_1 . . . z_k w_1 w_2, u_i) ∧ [ (u_1 = u_2 ≠ u_3 ∧ z_1 . . . z_k w_1 | x_1 . . . x_k y) ∨ (u_1 ≠ u_2 = u_3 ∧ w_2 | y) ∨ (u_1 = u_3 ≠ u_2 ∧ (w_1 = w_2 ∨ ⋀_{i=1}^{k} w_2 ≠ z_i)) ] )
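The relational semantics of the atoms from Definition 7.4 is straightforward to implement directly, which can help in checking small examples by hand. The following sketch models teams as lists of assignments (dicts from variable names to values) and, for simplicity, uses single variables instead of tuples – both are encoding choices of this illustration, not part of the paper's formalism:

```python
# Teams are modelled as lists of assignments (dicts: variable -> value).

def dep2(team, x1, x2, y1, y2):
    """X |= dep((x1, x2), (y1, y2)): for all s, t in X with s(x1) = t(x2),
    also s(y1) = t(y2)."""
    return all(s[y1] == t[y2]
               for s in team for t in team if s[x1] == t[x2])

def nc(team, xs, y):
    """X |= nc(x1...xk, y): whenever t(y) appears among s(x1), ..., s(xk),
    the two assignments agree on y."""
    return all(s[y] == t[y]
               for s in team for t in team
               if t[y] in {s[x] for x in xs})

# The ordinary dependence atom dep(x, y) is the special case
# dep((x, x), (y, y)): y is functionally determined by x.
team = [{"x": 0, "y": 1}, {"x": 0, "y": 1}, {"x": 1, "y": 2}]
print(dep2(team, "x", "x", "y", "y"))  # True

team2 = [{"x": 0, "y": 1}, {"x": 0, "y": 2}]
print(dep2(team2, "x", "x", "y", "y"))  # False
```

The quadratic pair loop mirrors the "for all s, t ∈ X" quantification in the definitions; an evaluator for ncc would additionally search for a suitable interpretation of the quantified variable y.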