Counting Steps: A New Approach to Objective Probability in Physics

Amit Hagar* and Giuseppe Sergioli†

August 20, 2021

*Indiana University, Department of History & Philosophy of Science, Bloomington
†Università di Cagliari, Sardegna, Italy

Abstract

We propose a new interpretation of objective deterministic chances in statistical physics based on physical computational complexity. This notion applies to a single physical system (be it an experimental set-up in the lab, or a subsystem of the universe), and quantifies (1) the difficulty to realize a physical state given another, (2) the 'distance' (in terms of physical resources) of a physical state from another, and (3) the size of the set of time-complexity functions that are compatible with the physical resources required to reach a physical state from another. This view (a) exorcises "ignorance" from statistical physics, and (b) underlies a new interpretation of non-relativistic quantum mechanics.

1 Introduction

Probabilistic statements in a deterministic dynamical setting are commonly understood as epistemic (Lewis, 1986). Since in such a setting a complete specification of the state of the system at one time, together with the dynamics, uniquely determines the state at later times, the inability to predict an outcome exactly (with probability 1) is predicated on the notion of ignorance, or incomplete knowledge. Such a subjective interpretation is natural in the context of classical statistical mechanics (SM), where a physical state is represented as a point in phase space and the dynamics is a trajectory in that space, or in the context of Bohmian mechanics, where the phase space is replaced with a configuration space and the ontology is augmented with the quantum potential. Recently it has been suggested as a viable option also in the context of orthodox non-relativistic quantum mechanics (QM), where the state is represented as a vector in Hilbert space, and the dynamics is a unitary transformation, i.e., a rotation, in that space (Caves, Fuchs, & Schack, 2002). In all three cases the dynamics is strictly deterministic, and the only difference, apart from the representation of the state, is in the character of the probabilities: in classical SM or Bohmian mechanics they are subsets of phase space (or configuration space) obeying a Boolean structure; in QM they are angles between subspaces in Hilbert space obeying a non-Boolean structure, whence the famous non-locality, contextuality, and the violation of Bell's inequalities.

Such an epistemic notion of probability in statistical physics appears to many inappropriate. The problem is not how lack of knowledge can bring about physical phenomena (Albert, 2000, p. 64); it can't. Neither is it a problem about ontological vagueness (Hagar, 2003). Rather, the problem is that an epistemic interpretation of probability in statistical physics, be it classical SM or QM, turns these theories into a type of statistical inference: while applied to physical systems, these theories become theories about epistemic judgments in the light of incomplete knowledge, and the probabilities therein do not represent or influence the physical situation, but only represent our state of mind (Frigg, 2007; Uffink, 2011).
In recent years an alternative, objective view of probability has been defended, both in the foundations of classical SM and in the context of Bohmian mechanics in the foundations of QM, based on the notion of typicality (Maudlin, 2007). Typicality claims tell us what most physical states are, or which dynamical evolutions are overwhelmingly more likely, by assigning measure 1 to the set of such states or the set of such dynamical evolutions.^1 Such claims make an analytical connection between a deterministic dynamics and a characterization of certain empirical distributions, hence can be interpreted as objective, having nothing to do with one's credence or state of knowledge. With this notion, or so the story goes, one can treat probabilistic statements in a deterministic physical setting as arising from an objective state of affairs, and the theories that give rise to these statements as theories about the physical world, rather than theories about our state of knowledge.

^1 Examples are "most quantum states are mixed, i.e., entangled with their environment", or "most systems relax to thermodynamic equilibrium if left to themselves".

This objective view of probability, however, is not problem-free. First, as its proponents admit (Goldstein et al., 2010), the notion of typicality is too weak: a theorem saying that a condition is true of the vast majority of systems does not prove anything about a concrete given system. Next, the notion lacks logical closure: a pair of typical states is not necessarily a typical pair of states, which means that "being typical" is not an intrinsic property of an initial condition, not even for a single system, but depends on the relation between the state and other possible initial conditions (Pitowsky, 2011). Possible ways around these difficulties have been suggested,^2 but while these problems may be circumvented, there exists a deeper lacuna underlying the notion of typicality which threatens the entire project.

^2 Maudlin (2007, p. 287), for example, rejects the requirement to assign probabilities to single systems, and Pitowsky (2011) proposes to retain most of the advantages of typicality but to retreat to a full-fledged Lebesgue measure, with its combinatorial interpretation.

The point is that typicality claims depend on a specific choice of measure, usually the Lebesgue measure or any other measure absolutely continuous with it. But what justifies this choice of measure when an infinite number of possible measures are equally plausible (Hemmo & Shenker, 2011a)? Moreover, even if we have established somehow that, relative to a preferred choice of measure, a certain set of states T is typical, i.e., its members are overwhelmingly more probable with respect to all possible states, what justifies the claim that we are likely to observe, or "pick up", members of that set T more often than members of the complement set ~T? After all, the measure we have imposed on the space of all possible potential states (T ∪ ~T) need not dictate the measure we impose on the space of our actual observations. Indeed, while under the choice of the Haar measure "most" quantum states are mixed, hence entangled with their environment, we can still realize (hence observe) pure states in the lab, at least to a certain extent. In what sense, then, are these states "rare"?

The twofold problem of justification of the measure is, on final account, a manifestation of the problem of induction (Pitowsky, 1985, pp. 234-238). On a strictly empiricist view, the effort to justify typicality claims is just another (futile) attempt to give demonstrations to matters of fact, or to derive contingent conclusions from necessary truths. The point here is that there is no surrogate for experience in the empirical sciences, and that inductive reasoning is the best one can do in one's attempts to understand the world. We tend to agree with the above criticism, but we also believe the alternative (or rather the lack thereof) it leaves us with is equally unsatisfactory. While typicality arguments do seem to achieve too much, the above pessimism seems to leave us with too little: not only are we unable to make sense of objective deterministic chances on this empiricist account, we are also unable to justify the standard statistical methods in scientific practice. These methods are commonly understood within the context of the frequentist approach to probability, and yet the latter approach is based on typicality arguments (these appear, e.g., implicitly in the weak law of large numbers, or explicitly in the definition of a random sequence), and so the criticism raised against typicality equally undercuts the attempts to apply frequentist methods in the empirical sciences (Hemmo & Shenker, 2011b).

Since we believe there is more to statistical physics than statistical inference, we here propose a new non-frequentist interpretation of physical probability as an alternative to typicality. Our notion is objective, dynamical, applies to individual states or systems, and its definition requires no convergence theorems. Die-hard empiricists as we are, we offer little relief from the problem of justification of the measure imposed on the space of all possible states. We do address directly, however, the problem of justification of the measure imposed on the space of all actual states: on our view, objective physical probabilities are transition probabilities that supervene on the time-complexity of the actual dynamical evolution. Our proposal is thus consistent with the above criticism marshaled against typicality and frequentism, and can serve as a viable alternative to the current epistemic view of probability in statistical physics, turning the latter once again into a physical theory about the natural world.

The paper is structured as follows. In section 2 we spell out the basic assumptions behind our proposal. Warming up, in section 3 we show how, by equating "probable" with "easy" (in terms of computational complexity), one can assign measure 1 (0) to a set of states whose realization requires a dynamical evolution with polynomial (exponential) time-complexity. In section 4 we move to a full-fledged definition of objective probability that quantifies how hard it is to realize a physical state, and measures (in terms of physical resources) the 'distance' between any such pair of states. In section 5 we apply our new interpretation to classical, statistical, and quantum mechanics. In doing so we introduce a new interpretation of the probabilities of QM. Section 6 concludes.

2 Assumptions

We start by spelling out the five basic assumptions that underlie our models. They are Determinism, P ⊂ EXPTIME, Boundedness, Discreteness, and Locality.
These assumptions are working hypotheses in the framework from which our interpretation of probability stems, namely, physical computational complexity. In this framework (Geroch & Hartle, 1986; Pitowsky, 1990; Pitowsky, 1996), the performance of physical systems is analyzed with notions and concepts that originate in computational complexity theory, by approximating dynamical evolutions with a discrete set of computational steps to an arbitrary degree of accuracy. These assumptions help us delineate the two probability spaces in our models: the space of physically allowable states, and the space of physically allowable dynamical evolutions.

A. Determinism

Our models rest on the assumption of strict determinism. This assumption follows from the strong physical Church-Turing thesis (PCTT henceforth),^3 according to which actual dynamical evolutions of physical systems in our world can be regarded as computations carried out by deterministic Turing machines. Agreed, some physical theories do allow in principle for non-Turing-computable phase trajectories (trajectories that cannot be represented by recursive functions), and, in addition, there exists a vast literature on the physical possibility of supertasks and "hypercomputation", which aims to show that Turing-computability is not a natural property, and need not apply a priori in the physical world. Nevertheless, if the strong PCTT holds, then, as a contingent matter of fact, non-Turing-computable trajectories are ruled out, and the above, rather contrived, counterexamples^4 are not realizable in the actual universe. In what follows we thus disregard naked singularities, closed timelike curves, non-globally hyperbolic spacetime models, ill-posed problems, divergences, and the like, adhering to the idea that every dynamical evolution takes a physical state to one and only one physical state.^5

^3 The physical Church-Turing thesis is logically independent of the original Church-Turing thesis. See, e.g., (Pitowsky & Shagrir, 2003).
^4 So far there are two such counterexamples: Pour-El & Richards' (1989) wave equation in 3 dimensions and Pitowsky's (1990) spacetime model that allows finite-time execution of an infinite number of computational steps. See also (Hogarth, 1994) for an elaboration on the latter, and (Earman & Norton, 1993) for further discussion.
^5 Note that from a strictly dynamical perspective, quantum dynamics is fully deterministic: Schrödinger's equation takes any quantum state to one and only one quantum state.

B. P ⊂ EXPTIME

The fact that each computation requires physical resources (energy and time) that increase with the size (the number of degrees of freedom) of the system allows us to classify different dynamical evolutions as either "easy" (i.e., having polynomial time-complexity such as O(n^c)) or "hard" (i.e., having exponential time-complexity such as O(c^n)).^6 That there exists a meaningful difference between these two classes (and between different degrees of time-complexity within each class) is a consequence of the Time Hierarchy Theorem (Hartmanis & Stearns, 1965).

^6 Here n is the input size (in our case the dimension of the system at hand), and c is a (rational, as we shall assume below) coefficient.

C. Boundedness

Assumption (A) allows us to apply the machinery of complexity theory to dynamical evolutions, by treating them as computations. Assumption (B) allows us to classify states (and the dynamical evolutions that realize them) as "easy" or "hard". Assumption (C) allows us to impose upper and lower bounds on the set of all possible dynamical evolutions in the actual universe, based on the following two facts (a numerical sketch follows the list):

- The physical resources (energy and number of particles) in the universe are bounded from above; beyond a certain degree of an exponential or a polynomial time evolution, the next computational step would require resources that would supersede this bound.
- The minimum number of computational steps is 1, and so for a given n (the size of the system) there always exists a lower bound on the set of possible dynamical evolutions below which the number of steps required for this input size n is smaller than 1.
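To get a feel for how assumptions (B) and (C) interact, here is a minimal numerical sketch (our illustration, not the authors'; the step bound and the system size are invented for the example) of how a finite budget of computational steps caps the realizable degrees of polynomial and exponential time-complexity:

```python
import math

# Illustrative only: a toy rendering of assumptions (B) and (C). We cap the
# total number of computational steps by a resource bound (numbers invented).
STEP_BOUND = 10 ** 120   # hypothetical upper bound on steps in the universe
n = 100                  # degrees of freedom of the system at hand

def max_poly_degree(n, bound):
    """Largest degree c such that an O(n^c) evolution stays below the bound."""
    return math.floor(math.log(bound) / math.log(n))

def max_exp_base(n, bound):
    """Largest base c such that an O(c^n) evolution stays below the bound."""
    return bound ** (1.0 / n)

print(max_poly_degree(n, STEP_BOUND))  # 60: higher polynomial degrees are unrealizable
print(max_exp_base(n, STEP_BOUND))     # ~15.8: higher exponential bases are unrealizable
```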
D. Discreteness

Assumption (D) allows us to discretize the set of the physically allowable dynamical evolutions. Two facts warrant the elimination of real coefficients in our classification of dynamical evolutions into time-complexity classes. First, each dynamical evolution is governed by a Hamiltonian (the total energy function). Second, the time-energy uncertainty relation limits the ability to resolve arbitrary energy differences between any two Hamiltonians (Childs, Preskill & Renes, 2000; Aharonov, Massar, & Popescu, 2002). This means that we cannot distinguish between two unknown Hamiltonians with infinite precision, hence the space of possible Hamiltonians is discrete.

E. Locality

Finally, and consistent with the current state of affairs in physics, in physically realizing the Hamiltonians that govern the dynamical evolutions, we allow only local interactions.

Models

The above assumptions allow us to propose two possible models for objective physical probability. We do not claim that these models are unique, optimal, or in any sense canonical. Our purpose is only to demonstrate that it is possible to define a finite notion of objective probability in physics on the basis of considerations from physical computational complexity.

Our first model is constructed on the space of all possible physical states of a given physical system with a given number of degrees of freedom n at a given moment in time t, confined to a given energy shell E. Each such state, given assumption (A), is the result of a certain dynamical evolution, which, in turn, is generated by a certain Hamiltonian. If we assume further that all dynamical evolutions start from a common "mother" state, say, the initial state of the universe, we can then assign (using assumptions (B)-(E)) a non-uniform probability distribution on the set of all possible states according to the time-complexity of the dynamical evolution that realizes each state.^7 As we shall see below, for a large number of degrees of freedom, this assignment has interesting consequences.

^7 The move from the space of states to the space of dynamical evolutions is licit given our assumptions (A) and (B) above: at any given moment in time, and for each physical state, there can be one and only one time-complexity class of dynamical evolutions that "realizes" it from the common "mother" state. This means that while many time-complexity classes of dynamical evolutions can realize the same state, no two evolutions that belong to different time-complexity classes can do so at the same moment if they start at the same common "mother" state.

Our second model is constructed on the space of all possible dynamical evolutions. Here, again, to precisely define the notion of objective physical probability we require the above triplet, i.e., the number of degrees of freedom n, time t, and energy E. Given such a triplet, we construct a probability space out of a functional that relates the power (E/t) of a computation, seen as a dynamical evolution from one state to another, to the relative size of the set of the possible dynamical evolutions that are compatible with it. Our probability function is thus a distance measure on the above functional, which quantifies how hard it is to realize a state, or how far a given system is from that state, in terms of the physical resources available to it, relative to the required resources.

3 Warming Up: Not All States Are Born Equal

The standard story about typicality (in classical SM, or in Bohmian mechanics in the context of QM) requires the notion of equiprobability, or a uniform measure, relative to which a set is declared typical. The foundations of SM are saturated with failed attempts to justify this choice of measure,^8 the most famous of which is Boltzmann's ergodic hypothesis.^9 Our first stab at the notion of physical probability based on computational complexity suggests how to deflate this problem.

^8 See, e.g., (Sklar, 1993, pp. 156-195) for a summary.
^9 That the ergodic hypothesis falls short of justifying the assumption of equiprobability follows from three facts: (1) many thermodynamic systems are not ergodic, (2) ergodicity holds only at infinite time scales, and (3) such a justification is plainly circular, as it is valid only for a set of "normal" states whose measure 1 is fixed, again, relative to the choice of measure we are trying to justify from the outset. The last two facts are, essentially, equivalent to the recent criticism against typicality.

At the crux of the matter lie the primitive notions of number and counting. The choice of the Lebesgue measure is deemed "natural" when one extends the standard notion of counting from the finite case to the infinite one (Pitowsky, 2011). But why treat each state as equal (in number) to another even in the finite case? Agreed, there are physical situations involving symmetries, such as the case of a fair die, that warrant such a treatment (Strevens, 1998), but in general there is no a priori reason to count this way. Consequently, in what follows we shall treat physical states in an utterly politically incorrect manner, assigning states non-equal weights in inverse proportion to the degree of time-complexity of the dynamical evolution they are associated with. As it turns out, a few empirical facts allow us to get as close to equiprobability as one can get in a finite setting, without relying on any a priori notion of equiprobability or uniform probability distribution from the outset.
Our probability triplet thus consists of the following elements:

- Ω is the state space of all physical states of a system with n degrees of freedom on a given energy shell at a given moment in time, obeying the current laws of physics, i.e., realized by a certain physical process whose time-complexity is either polynomial or exponential ("easy" or "hard"). Note that Ω is a bounded and discrete set of polynomial functions ∈ O(n^c) and exponential functions ∈ O(c^n).

- F is the σ-algebra of Ω, i.e., a non-empty class of subsets of Ω, containing Ω itself and the empty set, and closed under the formation of complements, finite unions, and finite intersections (i.e., F is a discrete, bounded subset of the power set of Ω). The elements of F are possible physical states, realized by physical processes with a combined time-complexity, either exponential or polynomial.

- P is the probability measure that maps members of F onto [0, 1], where P(∅) = 0, P(F) = 1, and P is additive.

We now proceed to the assignment of measures on sets of states. Let us denote by Exp and Poly a partition of F into two subsets with some prior measures μ_e and μ_p = 1 − μ_e. Next, for every function f ∈ (Poly ∪ Exp) we define the weight ξ of f at an arbitrary (natural) point n as:^10

ξ_f(n) = μ_e − (arctan f′(n) / (π/2)) μ_e    (1)

when f(n) ∈ Exp, and

ξ_f(n) = (1 − μ_e) + [(1 − μ_e) − (arctan f′(n) / (π/2)) (1 − μ_e)]    (2)

when f(n) ∈ Poly. For every function f ∈ (Poly ∪ Exp) we define the probability of f at an arbitrary (natural) point n as

P(f(n)) = α ξ_f(n),    (3)

where α is a normalization parameter given by:

α = 1 / Σ_i ξ_{f_i}(n).    (4)

^10 Metaphorically, dynamical evolutions in Exp "lose weight" in direct proportion to their degree of time-complexity. This total lost weight is then (non-uniformly) distributed over Poly in such a way that the dynamical evolutions in Poly gain relative weights in inverse proportion to their degree of time-complexity.

It is easy to show that P(f(n)) ∈ [0, 1], that P(∅) = 0, and that, by construction, the total probability is equal to 1. Furthermore, if we take into account the complexity degree of each element of the bounded set (Poly ∪ Exp), we can obtain a partition of (Poly ∪ Exp) where every element of the partition (denoted I_1, ..., I_n) corresponds to a different degree of complexity. We now define P(I_1) = Σ_{f_i ∈ I_1} P(f_i). It follows that:

∀ i, j:  P(I_i ∪ I_j) = Σ_{f_i ∈ I_i} P(f_i) + Σ_{f_j ∈ I_j} P(f_j).    (5)

Thus P satisfies Kolmogorov's axioms, and as such it is an admissible probability function.

Note that ξ is inversely proportional to the first derivative of the time-complexity function of the dynamical evolution that realizes the state. Consequently, for a large input size (i.e., a large number of degrees of freedom), the following results hold (see appendix A for details):

I. The set of states whose time-complexity is exponential gets assigned a measure close to zero, while the set of states whose time-complexity is polynomial gets assigned a measure close to one.

II. The polynomial states are (almost) uniformly distributed, i.e., their distribution is (almost) a resolution of the identity.
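The following toy computation (a sketch of ours, not the authors'; the prior μ_e = 0.5, the handful of sample functions, and n = 50 are invented for the example) implements equations (1)-(4) and reproduces results (I) and (II) numerically:

```python
import math

# Toy instantiation of the first model; all particular values are invented.
mu_e = 0.5            # prior measure on Exp; mu_p = 1 - mu_e on Poly
n = 50                # number of degrees of freedom

# Sample representatives of the two classes and their first derivatives at n.
poly = {f"n^{c}": c * n ** (c - 1) for c in range(2, 6)}          # d/dn of n^c
exp_ = {f"{c}^n": (c ** n) * math.log(c) for c in range(2, 6)}    # d/dn of c^n

def weight(deriv, in_exp):
    """Weights per eqs. (1)-(2): arctan(f'(n))/(pi/2) -> 1 as f'(n) grows."""
    damp = math.atan(deriv) / (math.pi / 2)
    if in_exp:
        return mu_e - damp * mu_e                        # eq. (1): shrinks toward 0
    return (1 - mu_e) + ((1 - mu_e) - damp * (1 - mu_e)) # eq. (2): tends to 1 - mu_e

xi = {f: weight(d, False) for f, d in poly.items()}
xi.update({f: weight(d, True) for f, d in exp_.items()})

alpha = 1.0 / sum(xi.values())                # eq. (4): normalization
P = {f: alpha * x for f, x in xi.items()}     # eq. (3)

print(sum(P[f] for f in exp_))                # result (I): measure of Exp is ~0
print({f: round(P[f], 3) for f in poly})      # result (II): ~uniform on Poly
```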
We would like to clarify the following two points:

- Since the σ-algebra F is bounded and discrete, α is always finite (albeit very small for a large n). For this reason, in a finite model such as ours, the assignment of an actual measure 0 (or 1, for that matter), as well as the assignment of equiprobability (as a resolution of the identity on Poly), are fictions: the set Exp remains with a finite (albeit very small) measure, and the partition on the set Poly is never strictly uniform. For a macroscopic system, however, 'very small' is an understatement, and the above partition is very close to uniform. When the state at hand is that of the universe as a whole (where n ≈ 10^80), the understatement is literally a cosmic one, and equiprobability of all polynomial time-complexity functions is a practical certainty.

- The normalization factor α is inversely proportional to the number of possible time-complexity functions, and yet, for combinatorial reasons, this number i is directly related to n, the number of degrees of freedom; hence, in effect, α is inversely proportional to n. One can object here that we implicitly "sneak in" an assumption about equal weights by treating each degree of freedom on an equal footing.^11 In response, we stress that this result rests not on an a priori notion of counting, but rather on our assumption (E) above and on the empirical fact that a concatenation of nearest-neighbor interactions is not physically equivalent to one nonlocal interaction.^12 In other words, it is because of this (contingent) nature of physical interactions (and the resources they require) that results (I) and (II) above hold.

^11 Recall that each time-complexity function represents a specific interaction Hamiltonian that generates it. For the normalization factor to be inversely proportional to the size of the system, i.e., α ∝ n^−1, we must assume that a two-body system possesses fewer possible interactions (hence fewer possible dynamical evolutions) than a many-body system. This assumption, we argue, follows from the locality condition.
^12 For an analysis of the complexity costs involved in simulating a nonlocal operator with local ones see, e.g., (Vidal & Cirac, 2002).

Are these results sufficient to support the common lore, according to which "most" states are thermodynamically normal, i.e., typical, hence more likely to be observed? The answer is clearly negative, but the reasons for the insufficiency are quite subtle.

First, note that we have deliberately distributed weights on different time-complexity functions in a way that favors members of Poly and disfavors members of Exp (see equations (1) and (2) above), but nothing justifies such a distribution: we could just as well have constructed a symmetric model in which Exp turned out to be assigned a measure close to 1 and Poly a measure close to 0. This is exactly the problem typicality arguments face (Hemmo & Shenker, 2011a), and in this respect, our model fares no better. What our model does show, however, is that for a large dimension, and given several plausible assumptions such as ours, equiprobability (or some distribution close to it) holds among members of one of the two sets. It is still a contingent matter of fact which set "wins over", and in this sense, experience remains the only source for justifying the choice of measure.

In order to highlight another interesting feature of our model, let us assume that (I) and (II) above hold.
One could still argue that, in order to support the common lore, it is necessary to demonstrate that the time-complexity of the dynamical evolutions associated with those normal states is polynomial; according to our model this would endow such states with high probability. The problem is that the common lore also associates thermodynamically normal behavior with non-integrable systems, yet our probability model is discrete. As such, it harbors only integrable, or periodic, dynamical systems. One could still observe chaotic behavior in this context,^13 but this would require redefining notions such as "sensitivity to initial conditions" or "dynamical instability" to fit the discrete background, and would also require a careful analysis of time scales.^14

^13 There is no compelling reason to associate chaos only with the cardinality of the reals. See (Winnie, 1992) on the idea of "computable chaos".
^14 At this point, and for the record, let us acknowledge the discrepancy between our discrete model and the continuous nature of the time evolutions. The former is used in computer science; the latter in physics. Both are consistent, and the question whether the former approximates the latter or vice versa, i.e., which is more fundamental, seems to us, at least at the current stage of physics, purely metaphysical.

But even such a demonstration would still fall short of supporting the common lore. The standard notion of probability concerns a sequence of events, and in particular, a random choice of such sequences. We could, of course, translate this notion to fit our new definition by exchanging events with physical states, yet nothing constrains us to treat the above probability space of all possible states as isomorphic to the probability space that contains our actual observations. In particular, even if on phase space thermodynamically normal states were members of a set of measure 1 and thermodynamically abnormal states were members of a set of measure 0, the measure imposed on the space of our actual observations could be different: we could, for example, choose between thermodynamically normal or abnormal states by tossing a fair coin, thus endowing them with equiprobability! In fact, we know from experience that thermodynamically abnormal states can be realized in the lab. The most famous examples of these are the spin echo experiment (Hahn, 1950) and the Fermi-Pasta-Ulam (1955) discovery of a violation of the equipartition theorem. If we call such 'anomalies' "rare", we must explain how it is that we can repeat such "rare" events ever so often. In what follows we shall demonstrate how our new view on physical probability can meet these challenges.

4 One Step at a Time

For an empiricist, the problem of justification of the choice of measure is just a pseudo-problem, experience being the only source of justification required. Our first model, however, may serve as a consistency proof, demonstrating how equiprobability may arise from certain contingent assumptions about (local) physical interactions and complexity considerations. It thus motivates us to propose a new interpretation of objective probability based on computational time-complexity. We emphasize again that all we offer here is a new meaning for the term "probability".
Quantitatively, at this stage we can only propose the conjecture that, if worked out to its finest details, our proposal will converge to the actual (and so far well-confirmed) probabilities that are currently being employed in statistical physics. We shall say more on this in section 5.

We thus suggest interpreting objective probability as a physical magnitude that quantifies how hard it is to realize a physical state, given a triplet of physical resources (energy, time, space). Equivalently, this magnitude quantifies how 'far' a given physical system is from a certain state in terms of the physical resources available to it, relative to those required for that state's realization.

- Take any physical system with dimension n in a given energy state E and at a given moment in time t, and let Ω be the bounded and discrete set of possible dynamical evolutions obeying the current laws of physics, whose time-complexity is either polynomial or exponential ("easy" or "hard"), that may govern the system's behavior. The set Ω contains all possible dynamical evolutions that can realize a single actual state.

- Given a certain couple (n, E/t), where n is the dimension of the state, E is the total possible energy, and t is the total possible time, we consider the set

  S_n̄ = { g_n̄ ∈ Ω | O(g_n̄) ≤ #(E/t) }    (6)

  where g_n̄ is a dynamical evolution that for a given n "consumes" at most the resources E in time t (we denote the power allowed for the computation by Pw = E/t).^15

- F is the σ-algebra of S, i.e., a non-empty class of subsets of S, containing S itself and the empty set, and closed under the formation of complements, finite unions, and finite intersections. The elements of F are dynamical evolutions with a combined time-complexity, either exponential or polynomial. F is thus a subset of the power set of S, and is bounded and discrete.

^15 By "consumes" we mean the following. Take an arbitrary computation. Each computational step "costs" the same amount of time; but if, as in our case, the total time allowed for the computation is fixed, the difference in time-complexity is cashed out in terms of the difference in the frequency of the computation, i.e., the time-difference between any two computational steps. Thus, for a given n and for a given t, the higher the degree of time-complexity of the function, the higher the frequency of the computation. Since higher frequency means higher power, by setting a bound on Pw, one immediately sets a bound on the number of computational steps allowed for the computation (#), and subsequently, a bound on the number of time-complexity functions that can realize the computation.

Our probability measure P is given by the mapping:

∀ A ∈ F:  P_{n_A, (E/t)_A}(A) = |A| / |S|    (7)

where (E/t)_A is the available power.^16 To calculate this magnitude we embed it in a continuous function of the general concave form # = (n^α Pw)^{1/β}, where α and β are free parameters. By construction P(A) ∈ [0, 1], P(∅) = 0, P(S) = 1, and P is additive: for all A, B ∈ F such that A ∩ B = ∅, P(A ∪ B) = P(A) + P(B). In order to satisfy further constraints imposed by the axioms of probability theory, the curvature of (n^α Pw)^{1/β}, controlled by α and β, must satisfy further conditions. We spell these out in the appendix.

^16 Mathematically speaking, the mapping between # and Pw is discontinuous. We can still define P with an integral, however, using an approximation, by embedding this mapping into the continuous function f. This embedding has one advantage, namely, it allows us to constrain our model: the curve # = f_n̄(Pw) relates time-complexity functions (indirectly, via the number of steps they require for the computation) with the power of the computation, and is of the general concave form (n^α Pw)^{1/β} (since all time-complexity functions are monotonic, have a common origin, and are otherwise non-intersecting: for all Pw_i, Pw_j, x such that Pw_i < Pw_j and x ∈ N, f_n̄(Pw_i + x) − f_n̄(Pw_i) > f_n̄(Pw_j + x) − f_n̄(Pw_j)), where α and β are free parameters, constrained by the theorems of probability theory (e.g., independence, conditional probability). See appendix B-E.

[Figure 1: Probability from time-complexity]
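Here is a minimal toy instantiation of the second model (ours, not the authors'; the inventory of evolutions and the values of n, α, β, and Pw are all invented): the step budget affordable at power Pw, computed from the concave form above, determines the set S of eq. (6), and P(A) is the counting ratio of eq. (7).

```python
# Toy instantiation of eqs. (6)-(7); all numerical values are invented.
ALPHA, BETA = 2.0, 3.0   # free parameters of the concave form # = (n^a * Pw)^(1/b)
n = 20                   # dimension of the system

# A bounded, discrete inventory of candidate time-complexity functions at n.
omega = {f"n^{c}": n ** c for c in range(1, 8)}
omega.update({f"{c}^n": c ** n for c in range(2, 8)})

def step_budget(n, pw):
    """The concave embedding # = (n^alpha * Pw)^(1/beta) from fn. (16)."""
    return (n ** ALPHA * pw) ** (1.0 / BETA)

def S(n, pw):
    """Eq. (6): the evolutions whose step count fits within the budget."""
    budget = step_budget(n, pw)
    return {g for g, steps in omega.items() if steps <= budget}

def P(A, n, pw):
    """Eq. (7): probability as the relative size |A| / |S|."""
    s = S(n, pw)
    return len(A & s) / len(s)

s = S(n, pw=1e9)                       # evolutions realizable at this power
easy = {g for g in omega if g.startswith("n^")}
print(sorted(s))                       # which evolutions are affordable
print(P(easy, n, pw=1e9))              # probability of the 'easy' set
```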
From a computational complexity perspective, the meaning of P is straightforward:

- P = 0 means that the desired state is non-Turing-computable, i.e., its realization requires a non-algorithmic process, such as a measurement with infinite precision.

- P = 1 means that we are 'at' the desired state, hence its realization requires constant resources. In complexity theory, such a process would be assigned complexity O(1).

- 0 < P < 1 means that the given system is "P-distant" from the desired final state. In other words, P denotes the transition probability between the intermediate states, captured by the size of the set of possible time-complexity functions that are compatible with the resources required for that transition.

5 Ignorance of what?

Consistent with our goal to turn epistemic probability in statistical physics into an objective one, the notion of physical probability here proposed has nothing to do with one's credence or degrees of belief. It measures, as we have seen, three equivalent physical properties that each pair of physical states objectively possesses:

- The difficulty (in terms of physical resources) of realizing the transition from one state to another. The more probable a state, the easier it is to reach it from a given state with a given amount of resources.

- The distance (in terms of physical resources) between one state and another. The more probable a state, the shorter its distance (in terms of physical resources) from the initial state.

- The relative size of the set of time-complexity functions that are compatible with the above two properties. The more probable a state, the larger this size.

To see how this notion of probability can turn subjective "ignorance" in statistical physics into an objective feature of the world, we propose the following intuition.

5.1 Classical mechanics

Start with classical mechanics. Here the common lore traces probabilistic statements to dynamical sensitivity to initial conditions. Omniscient beings such as Laplace's demon, or so the story goes, could predict with certainty any possible outcome of a dynamical evolution.^17 Short of this power, finite creatures such as ourselves are constrained to introduce error into their predictions. What we suggest here is that this error has physical meaning, captured by our notion of probability.^18

^17 Given our assumption (A), such evolutions are restricted to Turing-computable ones. See (Pitowsky, 1996).
^18 On another interesting relation between error and complexity costs see (Traub & Werschulz, 1999).

Suppose we would like to predict a certain outcome of an experiment in the lab, described by classical mechanics. Unless we already possess the initial state of our experiment, we need to create, or prepare, it.
The preparation of the initial state, starting from a specific state we do possess, requires physical resources. If these are limited, then our probability P_complexity quantifies how far we are from the ideal initial state, or equivalently, what the error ε in our preparation is, where P_complexity = 1 − ε. Thus if resources are insufficient, we start an experiment not in the ideal initial state, but in another, actual state, ε-distant from the ideal state, and so, even with deterministic dynamics, we have an error in the final state, which turns out to be different from the one we'd expected. This actual error in the preparation of the state doesn't mean that the system possesses no definite state. On the contrary, the system is always in a definite state; it is just in a state distant from the desired state by a certain error ε, which in turn is (inversely) proportional to the resources we employ in the preparation. As our second model shows, this distance can be regarded as a probability measure.

When we move from mechanics to statistical mechanics, we introduce a distinction between micro-states and macro-states. The evolution of the former on phase space is constrained by Liouville's theorem, which tells us that a region of phase space (call it "a blob"), occupied by a set of micro-states all compatible with a certain macro-state, may change its shape but not its volume. The "evolution" of the latter is dictated by the kind of measurements we make, i.e., by the different partitions we impose on phase space. These two evolutions are independent,^19 and they allow us to define the transition probability of a physical system from one macro-state to another as the partial overlap between the blobs and the macro-states:

P([M_1]_{t_1} | [M_0]_{t_0}) = μ(B_{t_1} ∩ [M_1])    (8)

This means that the probability that a system that starts in a macro-state [M_0] at time t_0 (when the size of the dynamical blob B completely saturates the volume [M_0]) will end in a macro-state [M_1] at time t_1 is given by the partial overlap (the relative size) μ of the dynamical blob B at t_1 with the macro-state [M_1]. Note that there is nothing subjective in this kind of transition probability. "Ignorance" here simply means lack of resolution power, i.e., lack of precision or lack of control, which is expressed by the relation between dynamical blobs and macro-states, both of which are objective features of the physical world.

^19 See Hemmo and Shenker (2012) for the trouble one gets into when one ignores this independence.

One can describe the evolution of a dynamical system either by following its dynamical blob, or, equivalently, by following the macro-states to which the exact state belongs. In the first description probability signifies lack of precision; in the second, lack of control. We have already shown that our probability measure describes the amount of missing resources for an exact description in the first case. In the second case, we can define our probability as an objective physical magnitude, a transition probability between two macro-states M_0 and M_1, that signifies how "far" M_1 is from M_0, where the "distance" P(M_0, M_1) is defined in terms of the physical resources (energy, space, and time) that an observer who observes M_0 has, relative to what she needs in order to observe M_1.
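The overlap measure of eq. (8) is easy to instantiate on a toy discrete phase space (our own illustration; the volume-preserving map and the macro-state partition are invented for the example):

```python
# Toy discretized phase space for eq. (8); dynamics and partition invented.
# Points are integers 0..99; the deterministic map is a fixed bijection
# (volume-preserving, a discrete stand-in for Liouville's theorem).
N = 100
evolve = lambda x: (7 * x + 3) % N        # bijective since gcd(7, 100) = 1

M0 = set(range(0, 20))                    # initial macro-state [M0]
M1 = set(range(10, 40))                   # target macro-state [M1]

blob = set(M0)                            # B at t0 saturates [M0]
for _ in range(5):                        # five deterministic time steps
    blob = {evolve(x) for x in blob}

# Eq. (8): transition probability as the relative overlap of B(t1) with [M1].
p = len(blob & M1) / len(blob)
print(len(blob), p)                       # blob volume is conserved (20); p in [0, 1]
```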
In this sense, probability is an objective measure of the difficulty of producing the macro-state M_1 from the macro-state M_0 given the physical resources (energy, space, and time) at one's disposal. Moreover, this measure is identical, conceptually and formally, to the one used in the foundations of statistical mechanics (eq. (8)), as one can interpret any probability less than 1 as signifying the lack of physical resources that would allow one to partition phase space into a macro-state more accurately, in such a way that it would include all of the dynamical blob.

Here is how: when the system starts in a given initial macro-state M_0, its dynamical blob completely saturates the volume [M_0]; this is what we mean when we say that we know the system to be in state M_0.^20 We now make a measurement, hoping to find the system in M_1 after it. In other words, we now carve up phase space into a different macro-state. If we have enough resources available, we can accurately carve the macro-state M_1, so that the dynamical blob will, again, saturate its volume [M_1]; if we do not have enough resources, only part of the dynamical blob will overlap with [M_1]. The empirical conjecture we make, over and above the requirement of conformity with the observed relative frequencies, is that this relative volume of the dynamical blob in [M_1] should be a function of the physical resources we have, relative to what we need in order to observe M_1 with certainty. This conjecture is in principle testable in many scenarios within control theory, where one is trying to steer a physical process to a desirable outcome.

^20 The opposite case, when we are uncertain of the initial state, can also be accommodated in this framework, by identifying the error ε with lack of physical resources, and by defining the probability P = 1 − ε. In this case P measures the distance between the state we think the system is in and the actual state it is in, i.e., the macro-state that the dynamical blob does saturate. In other words, in this case M_0 and M_1 switch places.

5.2 Quantum mechanics

In quantum mechanics the situation is no different. Here the (deterministic!) evolution of a physical system is given by the propagator U_β = e^{−iHt}, where H is the Hamiltonian of the system. Now, our attempt to estimate this propagator introduces an error, and results in an approximate propagator U_α, where the error is given by the distance between the two propagators:

d(U_β, U_α) = ||U_β − U_α||_op = sup_{|| |ψ⟩ || = 1} ||(U_β − U_α)|ψ⟩||    (9)

where U_α and U_β are two different rotation operators along, say, the y axis, and 0 < α < β < π/2. Here, again, the error ε, and therefore our notion of probability, P_complexity = 1 − ε, quantify the distance (in terms of energy and time) between two physical states; in this case, the distance, relative to a common initial energy state |ψ_0⟩, between the "ideal" energy state |ψ_2⟩ and the energy state |ψ_1⟩ we can prepare with the physical resources that are available to us (see figure 2). One can prove that d(U_β, U_α) = 2(1 − cos(β − α)). By denoting the error ε = d(U_β, U_α)/2, it follows that our probability is related to the quantum Born rule:

P_complexity = ⟨ψ_2|ψ_1⟩ = √(P_QM).    (10)

[Figure 2: Quantum probabilities as distance measures]
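The chain from eq. (9) to eq. (10) can be spot-checked numerically for planar rotations. One caveat: the identity d(U_β, U_α) = 2(1 − cos(β − α)) comes out exactly when d is read as the squared operator norm of U_β − U_α, and the sketch below (ours, with arbitrary angles) adopts that reading:

```python
import numpy as np

# Numerical check of eqs. (9)-(10) for planar rotations (our own sketch).
# We read d as the SQUARED operator norm, which makes the identity
# d = 2(1 - cos(beta - alpha)) exact. Angles arbitrary, 0 < alpha < beta < pi/2.
alpha, beta = 0.3, 0.7

def R(theta):
    """Rotation by theta in the real plane (a stand-in for a y-rotation)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

d = np.linalg.norm(R(beta) - R(alpha), 2) ** 2          # squared operator norm
print(np.isclose(d, 2 * (1 - np.cos(beta - alpha))))    # True

eps = d / 2                                             # the preparation error
P_complexity = 1 - eps                                  # = cos(beta - alpha)

psi0 = np.array([1.0, 0.0])                             # common initial state
psi1, psi2 = R(alpha) @ psi0, R(beta) @ psi0            # prepared vs. ideal state
overlap = psi2 @ psi1                                   # <psi2|psi1>
P_QM = overlap ** 2                                     # Born rule

print(np.isclose(P_complexity, overlap))                # True: eq. (10)
print(np.isclose(P_complexity, np.sqrt(P_QM)))          # True: P = sqrt(P_QM)
```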
What is the physical meaning of "preparation" and "estimation"? Physically, these are measurements, in our case, measurements of energy. But a proper energy measurement probes the time evolution, and therefore cannot be done in arbitrary time, unless one knows in advance the Hamiltonian of the system at hand.^21 Rather, the time required to perform the measurement and the precision of the measurement are constrained by the time-energy uncertainty relation. For this reason, unless we know in advance the Hamiltonian that governs the dynamics of a physical system, we can never predict its future behavior with probability 1. To know the Hamiltonian, however, we need to measure the system's energy, i.e., to prepare it in a certain energy state, and this preparation will not be error-free, whence objective uncertainty.

^21 Aharonov & Bohm (1961), for example, proposed a clever way to bypass the time-energy uncertainty principle by measuring a spin-half particle in a magnetic field. Later it was shown that this ability to bypass the time-energy uncertainty relation rests on prior knowledge of the Hamiltonian: if one knows the Hamiltonian, one need not spend any time (hence computational, qua physical, resources) in estimating it, but whenever the Hamiltonian of a system is unknown, determining it to precision ΔH requires a time Δt given by Δt ΔH ≥ 1. See (Aharonov, Massar, & Popescu, 2002).

In this sense our proposal can serve as a basis for a new interpretation of QM. Recall that on the subjective view of QM, the quantum state is treated as a state of knowledge, and quantum probabilities (calculated by the Born rule) are interpreted as "gambling bets" of agents on results of experiments, à la Ramsey-De Finetti (Fuchs, 2010). In contrast, in Bohmian mechanics, the alternative epistemic approach in the foundations of QM, the probabilities are for particles to have certain positions; they signify our ignorance thereof. The above insistence on "agents", or "observers", considered by proponents of the subjective view as their claim to fame (Fuchs & Peres, 2000), is rejected by Bohmians as ontologically vague; the whole point behind arguments about typicality is to free the discussion from such notions. Our new idea about probability simply avoids this debate altogether by supplying a possible third way: what quantum probabilities are probabilities for is neither the positions of particles, nor the gambling bets of learned observers. Rather, quantum probabilities simply quantify how hard it is to realize a physical state; they measure the 'distance' between the current state of a physical system and any other state thereof, given the resources (energy/time) that are available to that system at that moment. This alternative allows us to interpret quantum probabilities as objective deterministic chances (and in so doing to turn QM once again into a physical theory about the world), without having to support typicality and nonlocal hidden variables.

These two examples demonstrate the promise of our approach. In both cases, classical and quantum mechanics, the dynamics is deterministic, and yet probability arises from measurement errors which in turn result from lack of sufficient physical resources. In the classical framework resources are in principle unbounded (potentially infinite) but finite in practice; in the quantum case they are finite in principle.
5.3 Quantum vs. classical probabilities

In the approach presented here, the origins of both quantum and classical probabilities are identical: they both stem from objective deterministic chances which supervene on time-complexity classes and the relative availability of physical resources. This is not surprising. The physical Church-Turing thesis collapses two separate distinctions, namely, (un)predictability and (in)determinism, into one (Earman, 1986), treating both on a par as epistemic. Some maintain (Pitowsky, 1996) that such an alleged subordination of metaphysics to epistemology makes it impossible to distinguish between quantum and classical probabilities, and stands in flat contrast to the famous no-hidden-variables results and the received wisdom about the difference between classical and quantum probabilities, captured by the violations of Bell's inequalities.

That metaphysics is ignored is a natural consequence of our empiricist framework. But what the critics fail to notice is that an identity in kind need not entail an identity in measure, and that one can still distinguish between quantum and classical probability measures despite their common origin; metaphysics, we suggest, has nothing to do with this.^22

^22 To be fair to Pitowsky (1996), his analysis concerned the notion of computability, while ours concerns the notion of complexity, or efficiency.

In the classical case the error (and hence the probability) can decrease (increase) arbitrarily with the increase of these resources. In contrast, in the quantum case the uncertainty principle imposes an actual cut-off on such a potential infinity of resources, which makes it impossible in principle to eliminate the error. Thus the inability of Laplace's demon to predict quantum phenomena results not from some vague metaphysical notion of quantum indefiniteness ("no hidden variables"), but from the actual finite bounds on physical resources in our world. Consequently, while the notion of probability here proposed is predicated in both the quantum and the classical cases on error, or lack of sufficient physical resources, this shared origin results not from subordinating ontology to epistemology. Rather, it is the product of a physical cut-off which excludes an unbounded increase of physical resources in the preparation of any given state.

That quantum and classical probabilities may be traced to the same origins stems from our assumptions about the actual bound imposed on physical resources in the world, and the discreteness of energy. In such a finite and discrete world, classical unpredictability that results from "lack of knowledge" is on a par with quantum uncertainty; both arise from measurement errors as a consequence of insufficient physical resources. But the new view also changes the rules of the game, as it simply rejects vague metaphysical notions such as "quantum indefiniteness" as possible criteria for distinguishing quantum and classical probabilities. On the other hand, that a physical system is always in a definite state need not entail hidden variables, as on the view proposed here it is simply not the case that the system is in a definite but unknown state. Rather, the actual definite state the system possesses is just different from the ideal state we would like it to be in. It is this discrepancy between the actual and the ideal that generates probabilities in both the quantum and the classical cases.
Finally, despite the lack of metaphysical difference, we can still distinguish between quantum and classical probabilities. Instead of "hidden variables" vs. "indefiniteness" we suggest a difference in measure, which in our case is complexity-induced. Indeed, it is a working hypothesis among quantum information scientists that any classical computation harnessed for the simulation of quantum phenomena would do so inefficiently.^23 Our approach can easily accommodate such a putative difference: the notion of probability we propose here is defined as the relative size of the set of time-complexity classes that can realize a physical state. That quantum and classical probabilities share the same origins need not entail that for every physical state the above relative size is also identical. Quantum dynamical evolutions may "consume" (in the sense developed in fn. (15)) fewer resources than classical ones, and so the probability of some physical states may well be different when realized by quantum or by classical dynamics. We suggest viewing the violations of Bell's inequality as designating exactly this difference; a difference in complexity, not in metaphysics (Buhrman et al., 1998). Note, moreover, that such a criterion is completely in accord with our current empirical knowledge, and yet, contrary to its metaphysical counterpart, it leaves open the question of the universality of quantum theory.^24

^23 This conjecture was first voiced by Feynman (1982). Computer scientists have formalized it as BPP ⊆ BQP (Aaronson, 2009).
^24 Our probability measure depends on the dimension of the system, which appears to be a key factor in the open problem of scaling up quantum information processing devices.

6 Conclusion: Probability as Distance Measure

Our distance measure satisfies Kolmogorov's axioms (see appendix B-E), hence, at least mathematically, it is worthy of the name "probability". It also explains away ignorance by tying error (in the preparation of the initial state, or propagator) to probability (of the desired state, or propagator), and by letting this probability supervene on time-complexity and physical resources. In the classical context, it appears to be a natural physical interpretation of the epistemic probabilities that arise in statistical mechanics (see eq. (8)). Preliminary results (see eqs. (9) and (10)) suggest that such a distance measure is also a natural interpretation of quantum probabilities.

We emphasize again that we are only proposing a new interpretation of the meaning of probability: instead of interpreting probability as a measure of ignorance (which is the standard way in a deterministic dynamical context), we propose an interpretation in terms of the distance (in terms of the relative physical resources) between an actual state and an ideal one. In this sense our proposal is only qualitative. Quantitatively, we conjecture that one can reproduce the observed relative frequencies and standard results in statistical physics from such a notion which supervenes on time-complexity classes; our two models should thus be seen as plausibility arguments in support of this conjecture.^25 Moreover, we do not pretend in any way to go beyond objective probabilities in statistical physics in our interpretation.
Whatever problems exist in connecting these with the ordinary notion of probability, namely the connection to relative frequencies, or to betting behavior, also exist in our interpretation, and we do not purport to solve them here.

^25 Albert (2000, ch. 5) offers a similar conjecture when he proposes that the probabilities of SM supervene on transition probabilities of a more fundamental collapse dynamics. Our view is deterministic, hence excludes collapse, but we too suggest that probabilities in statistical physics are dynamical transition probabilities. In our story, however, they supervene on time-complexity and relative physical resources.

Concluding, we have argued that the amount of physical resources that separates two physical states is an objective feature of the world, and that computational complexity theory allows us to map this feature onto [0, 1]. This mapping, as we have shown, has all the characteristics of a discrete probability function, and can be interpreted as a measure of precision and control.

Appendix

A. Measures

Let us consider an arbitrary exponential function c^n and its associated probability:

P(c^n) = [μ − (arctan(c^n ln c) / (π/2)) μ] α,    (11)

where

α = 1 / (Σ_{f_i ∈ Poly} ξ_{f_i} + Σ_{f_i ∈ Exp} ξ_{f_i}).    (12)

It is straightforward to show that for a large dimension n,

∀c: P(c^n) ≈ (μ − μ) α = 0.    (13)

In contrast, since for a large dimension arctan(c n^{c−1}) / (π/2) ≈ 1, the unnormalized measure ξ_p on Poly doesn't change and is 1 − μ_e, but when we normalize, we get a resolution of the identity:

P(f(n)) = α ξ_p = (1 − μ_e) / (i (1 − μ_e)) = i^{−1},    (14)

where i is the number of polynomial functions in Poly.

B. Joint Probability

To calculate joint probability in the case of independence one needs to realize a new probability space S_3 from two given probability spaces S_1 and S_2, where the new couple {n_3, (E/t)_3} is given by the respective sums, i.e., n_3 = n_1 + n_2 and (E/t)_3 = (E/t)_1 + (E/t)_2.

- S_3 is defined accordingly as:

  S_{n̄_3} = { g ∈ Ω | O(g_{n̄_3}) ≤ #((E/t)_3) }    (15)

- The probability measure P is given, again, by the mapping

  ∀ A ∈ F:  P_{n_A, (E/t)_A}(A) = |A| / |S_3|    (16)

- One can calculate the joint probability for a combined state C = A ∩ B by taking into consideration the way the physical resources are distributed between the two components of the combined system.

[Figure 3: Independence]

Since we are constructing a new probability space, additional constraints must be satisfied to maintain the appropriate relations between this new space and the earlier, atomic ones.

1. We define equivalent processes to have the same power and the same dimension:
   - (E/t)_1 = (E/t)_2;
   - n_{A∩B} = n_1 + n_2 = 2n_1 = 2n_2.
   In this case, since all interactions are local, the total available power is
   (E/t)_{A∩B} = (E/t)_1 · n_1/(n_1 + n_2) + (E/t)_2 · n_2/(n_1 + n_2).

2. For such processes, since n increases and the power remains the same, it follows that P_3(A ∩ B) < P_1(A) and P_3(A ∩ B) < P_2(B) (where the Ps are calculated for each case separately).

3. If B depends on A, then P_3(A ∩ B) > P_1(A) P_2(B). This case is accommodated in our model by noticing that the computation time for B already includes the computation time for A, hence the total computation time is shorter than in the case of independence, hence the power of the computation is higher than in the case of independence, as required.
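The pooling rule in item 1, and the intermediacy claim proved in appendix E (eq. (17) below), reduce to simple arithmetic; here is a quick sketch (ours, with arbitrary values):

```python
# Sketch of the resource-pooling rule for a combined system (values arbitrary).
n1, n2 = 30, 30                 # equivalent processes: same dimension
pw1, pw2 = 5.0, 5.0             # ... and the same power (E/t)

pw_joint = pw1 * n1 / (n1 + n2) + pw2 * n2 / (n1 + n2)
print(pw_joint)                 # 5.0: pooling equivalent processes keeps Pw fixed

# Non-equivalent processes, pw_A = delta * pw_B with 0 < delta < 1:
delta = 0.4
pw_B = 5.0
pw_A = delta * pw_B
pw_joint = pw_A * n1 / (n1 + n2) + pw_B * n2 / (n1 + n2)
print(pw_A < pw_joint < pw_B)   # True: the pooled power is intermediate (eq. (17))
```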
6.3 Conditional Probability

[Figure 4: Conditional probability]

Conditional probability $P(A|B)$ is calculated by rescaling $S_1$ to fit $S_2$ (in terms of $n$), and by calculating the ratio $P(A\cap B)/P(B)$.

6.4 Constraints

These requirements can be used to constrain $f$: when we blow up the dimension by a factor of $\gamma$ and keep the power fixed, the derivative goes down by a factor of $\gamma^{\alpha}$; when we blow up the dimension and keep the derivative fixed, the power goes up by a factor of $\gamma^{\alpha/(\beta-1)}$. For non–equivalent processes, where $\langle Et\rangle_A = \delta\,\langle Et\rangle_B$ ($0 < \delta < 1$), we can show that

$$\langle Et\rangle_A < \langle Et\rangle_{A\cap B} < \langle Et\rangle_B. \qquad (17)$$

If we require further that the power of $A$ grows less than the "blow–up effect" (that is, that as $n$ increases the power axis increases less rapidly than the number–of–steps axis),

$$\frac{\langle Et\rangle_{A\cap B}}{\langle Et\rangle_A} < 1 + \frac{1-\delta}{\delta\gamma}, \qquad (18)$$

we get a constraint on $\alpha$ and $\beta$:

$$\frac{\alpha}{\beta-1} > \log_\gamma\Big(1 + \frac{1-\delta}{\delta\gamma}\Big). \qquad (19)$$

6.5 Proofs

For the sake of simplicity, and without loss of generality, instead of taking the function $\# = (n^\alpha\,Pw)^{1/\beta}$ we concentrate on the inverse function $Pw = n^{-\alpha}\,\#^\beta$, while recalling that for any continuous, differentiable, and monotonic function $g$, $(g^{-1})'(x) = 1/g'(y)$ with $y = g^{-1}(x)$.

Let $\langle Et\rangle_A = Pw_A$, $\langle Et\rangle_B = Pw_B$, and $n_A = \Delta n_B$, where $\Delta \in \mathbb{R}$, $\Delta > 1$, and let $0 < Pw_A/Pw_B = \delta \le 1$. We prove that $Pw_A \le Pw_{A\cap B} \le Pw_B$ (equality holds only when $Pw_A = Pw_B$):

$$Pw_{A\cap B} = Pw_A\,\frac{\Delta n_B}{(\Delta+1)\,n_B} + Pw_B\,\frac{n_B}{(\Delta+1)\,n_B} \qquad (20)$$

$$= \delta\,Pw_B\,\frac{\Delta}{\Delta+1} + Pw_B\,\frac{1}{\Delta+1} \qquad (21)$$

$$= \frac{\Delta\delta+1}{\Delta+1}\,Pw_B = \frac{\Delta\delta+1}{\Delta\delta+\delta}\,Pw_A. \qquad (22)$$

But, since $0 < \delta \le 1$,

$$\frac{\Delta\delta+1}{\Delta+1} \le 1; \qquad \frac{\Delta\delta+1}{\Delta\delta+\delta} \ge 1. \qquad (23)$$

If we blow up $n$ by a real factor $\gamma > 1$ and keep the power fixed:

$$f_{\bar n}(\#) = \frac{\#^\beta}{\bar n^\alpha}; \qquad f'_{\bar n}(\#) = \frac{\beta\,\#^{\beta-1}}{\bar n^\alpha}. \qquad (24)$$

Analogously:

$$f'_{\gamma\bar n}(\#) = \frac{\beta\,\#^{\beta-1}}{(\gamma\bar n)^\alpha}. \qquad (25)$$

So,

$$\frac{f'_{\gamma\bar n}(\#)}{f'_{\bar n}(\#)} = \gamma^{-\alpha}. \qquad (26)$$

If we blow up $n$ by a real factor $\gamma > 1$ and keep the derivative fixed:

$$Pw = f_{\bar n}(\#) = \frac{\#^\beta}{\bar n^\alpha}. \qquad (27)$$

The derivative for a number of steps $\bar\#$ will be

$$f'_{\bar n}(\bar\#) = \frac{\beta\,\bar\#^{\beta-1}}{\bar n^\alpha}. \qquad (28)$$

Analogously, for the blown–up dimension and a number of steps $\tilde\#$,

$$f'_{\gamma\bar n}(\tilde\#) = \frac{\beta\,\tilde\#^{\beta-1}}{(\gamma\bar n)^\alpha}. \qquad (29)$$

In order to maintain the same derivative, we find the value of $\tilde\#$ such that

$$\frac{\beta\,\tilde\#^{\beta-1}}{(\gamma\bar n)^\alpha} = \frac{\beta\,\bar\#^{\beta-1}}{\bar n^\alpha}. \qquad (30)$$

We obtain $\tilde\# = \gamma^{\frac{\alpha}{\beta-1}}\,\bar\#$. So the new (blown–up) power $\tilde{Pw}$ is

$$\tilde{Pw} = f_{\gamma\bar n}\big(\gamma^{\frac{\alpha}{\beta-1}}\,\bar\#\big) = \frac{\big(\gamma^{\frac{\alpha}{\beta-1}}\,\bar\#\big)^\beta}{(\gamma\bar n)^\alpha} = \gamma^{\frac{\alpha\beta}{\beta-1}-\alpha}\,f_{\bar n}(\bar\#) = \gamma^{\frac{\alpha}{\beta-1}}\,f_{\bar n}(\bar\#). \qquad (31)$$

Hence $\tilde{Pw}/Pw = \gamma^{\frac{\alpha}{\beta-1}}$.

Finally, we know that $Pw_{A\cap B}/Pw_A = \frac{\Delta\delta+1}{(\Delta+1)\delta}$ and that $\tilde{Pw}_A/Pw_A = (\Delta+1)^{\frac{\alpha}{\beta-1}}$. Let us recall that $\gamma = \Delta+1$ and that $0 < \delta \le 1$. We have to prove that there is some constraint on $\alpha$ and $\beta$ such that $Pw_{A\cap B}/Pw_A < \tilde{Pw}_A/Pw_A$. From

$$\frac{\Delta\delta+1}{(\Delta+1)\delta} < (\Delta+1)^{\frac{\alpha}{\beta-1}} \qquad (32)$$

we obtain

$$\frac{\alpha}{\beta-1} > \log_\gamma\frac{\Delta\delta+1}{\Delta\delta+\delta}. \qquad (33)$$

But

$$\frac{\Delta\delta+1}{\Delta\delta+\delta} = \frac{\Delta\delta+1+\delta-\delta}{\delta(\Delta+1)} = \frac{\delta(\Delta+1)+1-\delta}{\delta(\Delta+1)} = 1 + \frac{1-\delta}{\delta\gamma}, \qquad (34)$$

so our constraint is

$$\frac{\alpha}{\beta-1} > \log_\gamma\Big(1 + \frac{1-\delta}{\delta\gamma}\Big). \qquad (35)$$
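The algebra above can be spot-checked numerically. In the sketch below the values of alpha, beta, gamma, Delta, and delta are arbitrary illustrative choices (nothing in the model fixes them); the checks correspond to eqs. (26), (31), and (34):

# Pw = n^-alpha * #^beta, as in section 6.5; illustrative parameter values.
alpha, beta, gamma = 2.0, 3.0, 4.0
n, steps = 50.0, 100.0

f  = lambda n_, s: s**beta / n_**alpha               # the power Pw
df = lambda n_, s: beta * s**(beta - 1) / n_**alpha  # its derivative in #

# Power fixed, dimension blown up by gamma: derivative drops by gamma^-alpha, eq. (26).
print(df(gamma * n, steps) / df(n, steps), gamma**-alpha)          # 0.0625 0.0625

# Derivative fixed: the matching step count is gamma^(alpha/(beta-1)) * steps, eq. (30),
# and the power then grows by the same factor, eq. (31).
new_steps = gamma**(alpha / (beta - 1)) * steps
print(df(gamma * n, new_steps) / df(n, steps))                     # 1.0
print(f(gamma * n, new_steps) / f(n, steps), gamma**(alpha / (beta - 1)))  # 4.0 4.0

# The identity of eq. (34), with gamma = Delta + 1:
Delta, delta = gamma - 1, 0.25
print((Delta * delta + 1) / (Delta * delta + delta),
      1 + (1 - delta) / (delta * gamma))                           # 1.75 1.75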
References

[1] Aaronson, S. (2009). BQP and the polynomial hierarchy. http://arxiv.org/abs/0910.4698.

[2] Aharonov, Y., and D. Bohm (1961). Time in the quantum theory and the uncertainty relation for time and energy. Physical Review, 122(5), 1649–1658.

[3] Aharonov, Y., S. Massar, and S. Popescu (2002). Measuring energy, estimating Hamiltonians, and the time–energy uncertainty relation. Physical Review A, 66, 052107.

[4] Albert, D. (2000). Time and Chance. Cambridge, MA: Harvard University Press.

[5] Buhrman, H., R. Cleve, and A. Wigderson (1998). Quantum vs. classical communication and computation. Proceedings of the 30th Annual ACM Symposium on Theory of Computing, 63–68.

[6] Caves, C., C. Fuchs, and R. Schack (2002). Quantum probabilities as Bayesian probabilities. Physical Review A, 65, 022305.

[7] Childs, A., J. Preskill, and J. Renes (2000). Quantum information and precision measurement. Journal of Modern Optics, 47(2–3), 155–176.

[8] Earman, J., and J. Norton (1993). Forever is a day: supertasks in Pitowsky and Malament–Hogarth spacetimes. Philosophy of Science, 60, 22–42.

[9] Feynman, R. (1982). Simulating physics with computers. International Journal of Theoretical Physics, 21, 467–488.

[10] Fermi, E., J. Pasta, S. Ulam, and M. Tsingou (1955). Studies of nonlinear problems I. Los Alamos preprint LA-1940.

[11] Frigg, R. (2007). Probability in Boltzmannian statistical mechanics. In G. Ernst and A. Hüttemann (eds.), Time, Chance and Reduction: Philosophical Aspects of Statistical Mechanics. Cambridge: Cambridge University Press.

[12] Fuchs, C. (2010). QBism, the perimeter of Quantum Bayesianism. http://arxiv.org/abs/1003.5209.

[13] Fuchs, C., and A. Peres (2000). Quantum theory needs no interpretation. Physics Today, 53, 70–71.

[14] Geroch, R., and J. Hartle (1986). Computability and physical theories. Foundations of Physics, 16(6), 533–550.

[15] Goldstein, S., J. L. Lebowitz, R. Tumulka, and N. Zanghì (2010). Long–time behavior of macroscopic quantum systems: commentary accompanying the English translation of John von Neumann's 1929 article on the quantum ergodic theorem. http://arxiv.org/abs/1003.2129v1.

[16] Hagar, A. (2003). A philosopher looks at quantum information theory. Philosophy of Science, 70, 752–775.

[17] Hahn, E. (1950). Spin echoes. Physical Review, 80, 580–594.

[18] Hartmanis, J., and R. E. Stearns (1965). On the computational complexity of algorithms. Transactions of the American Mathematical Society, 117, 285–306.

[19] Hemmo, M., and O. Shenker (2011a). Probability and typicality in physics. In M. Hemmo and Y. Ben Menachem (eds.), Probability in Physics: Essays in Memory of Itamar Pitowsky. The Frontiers Collection, Berlin: Springer.

[20] Hemmo, M., and O. Shenker (2011b). A new problem in the frequentist approach to probability. Forthcoming in Mind.

[21] Hemmo, M., and O. Shenker (2012). The Road to Maxwell's Demon. Cambridge: Cambridge University Press.

[22] Hogarth, M. (1994). Non–Turing computers and non–Turing computability. In D. Hull, M. Forbes, and R. M. Burian (eds.), PSA 1994, Vol. 1, pp. 126–138. East Lansing: Philosophy of Science Association.

[23] Lewis, D. (1986). Philosophical Papers (Vol. 2). Oxford: Oxford University Press.

[24] Maudlin, T. (2007). What could be objective about probabilities? Studies in History and Philosophy of Modern Physics, 38, 275–291.

[25] Pitowsky, I. (1985). On the status of statistical inferences. Synthese, 63(2), 233–247.

[26] Pitowsky, I. (1990). The physical Church thesis and physical computational complexity. Iyyun, 39, 87–99.

[27] Pitowsky, I. (1996). Laplace's demon consults an oracle: the computational complexity of prediction. Studies in History and Philosophy of Modern Physics, 27, 161–180.

[28] Pitowsky, I. (2011). Typicality and the role of the Lebesgue measure in statistical mechanics. In M. Hemmo and Y. Ben Menachem (eds.), Probability in Physics: Essays in Memory of Itamar Pitowsky. The Frontiers Collection, Berlin: Springer.

[29] Pitowsky, I., and O. Shagrir (2003). Physical hypercomputation and the Church–Turing thesis. Minds & Machines, 13, 87–101.
[30] Pour-El, M., and I. Richards (1989). Computability in Analysis and Physics. Berlin: Springer.

[31] Sklar, L. (1993). Physics and Chance. Cambridge: Cambridge University Press.

[32] Strevens, M. (1998). Inferring physical probabilities from symmetries. Noûs, 32, 231–246.

[33] Traub, J., and A. Werschulz (1999). Complexity and Information. Cambridge: Cambridge University Press.

[34] Uffink, J. (2011). Subjective probability and statistical physics. In C. Beisbart and S. Hartmann (eds.), Probabilities in Physics. Oxford: Oxford University Press.

[35] Vidal, G., and I. Cirac (2002). Optimal simulation of nonlocal Hamiltonians using local operations and classical communication. Physical Review A, 66, 022315.

[36] Winnie, J. A. (1992). Computable chaos. Philosophy of Science, 59(2), 263–275.
