Friday, November 18, 2011

Are quantum states only figments of imagination?

The article The quantum state cannot be interpreted statistically by Pusey, Barrett and Rudolph has created a lot of turmoil after Nature's hype for it. Lubos reacted strongly. Also Sean Carroll commented on the article, in such a politically correct manner that it remained unclear what he was actually saying.

The starting point is the orthodox probabilistic interpretation stating that quantum states are only mathematical constructs allowing one to make predictions and therefore lack a genuine ontological status. The main point of the authors is that quantum states are real and that this destroys the probabilistic interpretation. They argue that if quantum states are only probabilistic constructs, one ends up with a contradiction with quantum measurement theory. I share the belief that quantum states are real, but I cannot understand how this could destroy the probabilistic interpretation.

The argument

I scanned through the paper and indeed found it very difficult to understand how their argument could prove that the probabilistic interpretation of quantum theory is wrong.

  1. The authors assume that the probabilistic interpretation of quantum theory makes it possible to prepare the system in two different manners, yielding two non-orthogonal states which cannot be distinguished by quantum measurements. The reason for having several preparation processes producing different states which cannot be distinguished by quantum measurements would be that the state preparation process is probabilistic. Why this should be the case I cannot understand: in fact, they use an argument based on classical probability, but quantum probability is not classical!

  2. The authors then consider a pair of such systems, giving rise to four indistinguishable states, and apply quantum measurement theory to show that each measurement outcome has vanishing probability for one of these states, so that the states are actually distinguishable after all (see the numerical sketch below). From this reductio ad absurdum they conclude that the probabilistic interpretation is wrong.
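For concreteness, here is a small numerical check in Python (a sketch of mine, reconstructing the standard two-qubit example the argument uses, with preparations |0> and |+>; variable names are mine). Each outcome of the entangled measurement has vanishing Born probability for exactly one of the four product states:

    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)
    plus = (ket0 + ket1) / np.sqrt(2)
    minus = (ket0 - ket1) / np.sqrt(2)

    # The four preparations of the pair: |00>, |0+>, |+0>, |++>
    preps = [np.kron(a, b) for a in (ket0, plus) for b in (ket0, plus)]

    # An orthonormal, entangled measurement basis of the PBR type
    xi = [
        (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2),
        (np.kron(ket0, minus) + np.kron(ket1, plus)) / np.sqrt(2),
        (np.kron(plus, ket1) + np.kron(minus, ket0)) / np.sqrt(2),
        (np.kron(plus, minus) + np.kron(minus, plus)) / np.sqrt(2),
    ]

    # Born probabilities: row i = outcome i, column j = preparation j
    P = np.array([[abs(np.vdot(x, p)) ** 2 for p in preps] for x in xi])
    print(np.round(P, 3))  # the diagonal vanishes: outcome i never occurs for preparation i

If the two preparations corresponded to overlapping classical probability distributions over underlying real states, every outcome would have to occur with nonzero probability for some preparation; the vanishing diagonal is the contradiction the authors exploit.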

What goes wrong with the argument?

What could go wrong with this argument? I think that the problem is that the notions used are not defined precisely and classical and quantum probabilities are confused with each other. The following questions might help to clarify what I mean.

  1. What does one mean with "probabilistic interpretation"? The idea that probability amplitudes can be coded by classical probabilities is certainly wrong: the coefficients in a quantum superposition correspond to probabilities only in the state basis used, there is an infinite number of such bases, and also the phases of the coefficients matter. A good example comes from classical physics: the relative phases of the Fourier components of a sound wave are very important for how the sound is heard; the mere intensities of the Fourier components are not enough. For instance, time-reversed speech has the same power spectrum as ordinary speech but sounds totally different (see the small numerical check after this list). The authors however use a plausibility argument based on classical probability to argue that the system can be prepared in two non-orthogonal states. This destroys the already foggy argument.

  2. What really happens in quantum measurement? What do state function reduction and state preparation really mean? Does state function reduction really occur? The authors do not bother to ponder these questions. The same can be said about any mainstream physicist who has a healthy dose of opportunism in his genes. State function reduction is in blatant conflict with the determinism of the Schrödinger equation, and this alone is an excellent reason to shut up and calculate. This is also what Lubos does, although "shut up" in the case of Lubos has only a symbolic meaning.

  3. If one begins to ponder what might happen in state function reduction and preparation, one soon ends up asking questions about the relationship between experienced time and the time of physicists, and about free will, and eventually one becomes a consciousness theorist. The price paid is lifelong unemployment as a pariah of the academic community, regarded as an intellectually retarded individual by the brahmins. I can tell this on the basis of personal experience.
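The claim about time-reversed speech is easy to verify numerically. A minimal sketch in Python (mine; white noise stands in for a speech signal):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1024)                  # stand-in for a speech signal
    power = np.abs(np.fft.rfft(x)) ** 2            # power spectrum of the signal
    power_rev = np.abs(np.fft.rfft(x[::-1])) ** 2  # power spectrum of the reversed signal
    print(np.allclose(power, power_rev))           # True: reversal changes only the phases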

Why do I believe that quantum states are real?

Although I was not convinced by the argument of the authors, I agree with their main point that quantum states are indeed real.

  1. In the TGD Universe quantum states are as real as anything can be. Mathematically quantum states correspond to modes of WCW ("world of classical worlds") spinor fields. They are definitely not fuzzy thought constructs of the theorist needed to predict probabilities (ironically enough, WCW spinor fields allow one to model Boolean cognition, among other things). And the all-important point is that this in no way excludes the quantum probabilistic interpretation. There is also ontological minimalism: WCW spinor fields are the fundamental objects, and there is no physical reality behind them: they are the physical realities. More about this aspect in the next section.

  2. The non-determinism of state function reduction is the stone in the shoe of the mainstream physicist, and the claim that quantum states represent an outcome of probabilistic imagination is an attempt to get rid of this stone or at least forget its painful presence. The idea that the Schrödinger equation would temporarily cease to hold, but in such a manner that the various conservation laws are obeyed, is of course nonsense, and one manner to escape this conclusion is to give up the notion of objective reality altogether. This does not make sense physically. The alternative trick is to assume an infinitude of quantum parallel realities. This does not make sense mathematically. What is the preferred basis for these parallel realities, or are all bases allowed? The attempts to answer these questions make clear that the idea is absurd.

What is the anatomy of quantum jump?

What is the anatomy of the quantum jump in the TGD Universe?

  1. In the TGD Universe the quantum jump consists of a unitary process - I call it U - followed by a state function reduction. The unitary process acts on the initial prepared state by a universal unitary matrix U and generates a lot of entanglement. The quantum jump as a whole replaces the WCW spinor field with a new one, and the laws of physics determining the modes of the WCW spinor field are not given up temporarily in the quantum jump. One can say that the quantum jump replaces the entire time evolution with a new one, and this non-determinism lies outside the realm of geometric time and state space. It is this non-determinism which corresponds to the non-determinism associated with subjective time, identified as a sequence of quantum jumps. The reduction of the act of what we call free will to the quantum jump is an attractive starting point for quantum consciousness theorizing.
  2. Our view about the world is created by quantum jumps between quantum states, in accordance with the basic finding that conscious experience is always accompanied by change (visual consciousness disappears if saccadic motion is made impossible). Consciousness is between two worlds, not in the world.
  3. There is no need to assume a physical reality behind WCW spinor fields: they are the physical realities. Note however that I use the plural: the idea of a unique physical reality, introduced into physics by Galilei, must be given up. This does not require giving up the notion of objective reality completely. One only assumes that there is an infinitude of them, as any physical theory indeed predicts. The key aspect of consciousness is that it makes it possible to study these realities by simply staying conscious! Every quantum jump recreates the Universe, and cosmic, biological, and all other evolutions reduce to this endless re-creation. God as a clocksmith who built the clock and forgot the whole thing after that is replaced with the Divine identified as a moment of re-creation. The new ontology also liberates us from the jailhouse of materialism.
  4. What really happens in the state function reduction? The U process in general entangles the initial state with the environment and creates an enormously entangled universe. State function reduction proceeds for a given CD as a cascade of state function reductions downwards in the fractal hierarchy of CDs inside CDs inside... For a given subsystem the process stops when it is no longer possible to reduce the entanglement entropy by state function reduction (by the Negentropy Maximization Principle). If one accepts the notion of number theoretic entropy, making sense in the intersection of the real and p-adic worlds, the entanglement entropy can also be negative (see the sketch after this list). This kind of entanglement is stable against state function reduction and is a carrier of conscious information. Hence the reduction cascade stops when the system is negentropic. The conjecture is that living matter resides in the intersection of the real and p-adic worlds (matter and cognition) and is therefore a carrier of negentropic entanglement.
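A minimal sketch of how the number theoretic entanglement entropy can become negative, assuming the definition S_p = - sum_n p_n log(|p_n|_p) used in the TGD literature, with |.|_p the p-adic norm (the function names are mine):

    from fractions import Fraction
    import math

    def padic_valuation(q, p):
        # exponent of the prime p in the rational q (negative if p divides the denominator)
        v, num, den = 0, q.numerator, q.denominator
        while num % p == 0:
            num //= p
            v += 1
        while den % p == 0:
            den //= p
            v -= 1
        return v

    def number_theoretic_entropy(probs, p):
        # S_p = -sum p_n log|p_n|_p = sum p_n v_p(p_n) log p, since |q|_p = p^(-v_p(q))
        return sum(float(q) * padic_valuation(q, p) * math.log(p) for q in probs)

    # maximal entanglement between two 5-dimensional systems: all p_n = 1/5
    probs = [Fraction(1, 5)] * 5
    print(number_theoretic_entropy(probs, 5))                  # -log 5 < 0: negentropic
    print(-sum(float(q) * math.log(float(q)) for q in probs))  # ordinary Shannon entropy: +log 5

The ordinary Shannon entropy of the same state is positive, so it is precisely the maximally entangled rational states that become stable carriers of information in this sense.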

Zero energy ontology unifies state function reduction and state preparation

Zero energy ontology (ZEO) brings in additional constraints and actually unifies state function reduction and preparation.

  1. Zero energy states are pairs of positive and negative energy states at the opposite boundaries of a causal diamond (CD), identified as the Cartesian product of CP_2 and the intersection of future and past directed light cones of M^4. The counterparts of positive (negative) energy states in positive energy ontology are initial (final) states. In ZEO quantum jumps have a fractal structure: quantum jumps occur within quantum jumps. This corresponds also to a hierarchy of conscious entities: selves having sub-selves as mental images. The CD is the imbedding space correlate of self.

  2. The arrow of time is broken already at the level of zero energy states, and even at the single particle level: in the standard ontology it is broken at the level of the quantum dynamics defined by state function reductions. The positive energy part (initial state) of the zero energy state corresponds to a prepared state having, among other things, well-defined particle numbers. For a non-trivial M-matrix (and S-matrix) the negative energy part of the state (final state) cannot have these properties. In state function reduction the negative energy part (final state) obtains these properties.

  3. The following question shows how a genuinely new idea immediately leads to science fictive considerations. Does ZEO imply that state preparation corresponds to state function reduction for the positive energy part of the state (initial state) and state function reduction to the same process for the negative energy part of the state (final state)? Could these processes be completely symmetrical? What would this symmetry imply? Can one imagine a sequence reduction-preparation-reduction-... performing the reduction alternately for the negative and positive energy parts of the state? One would have a kind of time flip-flop: state function reduction at one end of the CD would generate a non-prepared state at the other end. The arrow of time would change in each reduction.

    A mystic might talk about the cycle of birth and rebirth as an eternal return to youth. Eternal return to youth would be nice but would make sense only if one forgets that the quantum jump involves the unitary process U. If one assumes that each quantum jump involves also U, the situation changes. Some processes in living matter - say spontaneous self-assembly - seem to have a reversed geometric arrow of time, and one might wonder whether this kind of flip-flop can occur in living matter. Here is a home exercise for the alert reader;-).

15 comments:

Ulla said...

http://www.scottaaronson.com/blog/?p=822

Orwin said...

There is now a lot of evidence that life accumulates information while passing entropy down the solar energy cascade. But Shannon information is not negative entropy, as Schroedinger conceded in a footnote, so the situation is not clear.

Quarks fracture charge by thirds, and carbon rings allow frictionless packaging, as demonstrated in graphene superconduction. Enzymes are built around symmetry-broken rings: if this facilitates operating with fractional charge effects, intact parity would pass the effect to time.

The Fungi that establish circadian rhythm also differentiate cell types, and thereby organic time, at the cellular level. That's the original declaration of independence, and the rest is (natural) history.

Orwin said...

The other option is to conjure a monopole and appropriate the charge quantization (Dirac's trick), so contesting the CPT constraint at semantic level. Such, if you like, is the guise of mind.

Satama said...

Lee Smolin, who also rejects bird's-eye-view universalism and lives in a participatory, localized, self-organizing space, has a nice definition of "objectivity": getting similar answers to similar *questions*. Quantum state reductions of shared realities of similar questions and answers, sharing the same causal narratives - not just as individuals but as cultures, languages, etc. Smolin's narrative on causality, however, is still a web of only unidirectional relations (cf. Feynman diagrams of causal statistical arrows), whereas Matti's zero-energy ontology comes very close to what Buddha said about causality: "If this arises, that arises; if this ceases, that ceases." How do Feynman diagrams arise, what was the question and where did it come from? Predictability in psychological time requires a shared history in psychological time. Predictability and testability - in psychological time - are cornerstones of the so-called scientific method, shared assumptions behind the questions science generally asks. How does gradable hbar affect the Bekenstein bound (http://en.wikipedia.org/wiki/Bekenstein_bound: "This implies that the number of different states (Ω = 2^I) of the human brain is at most 10^(7.79640×10^41).")? If space-time relations reduce to generalized number theory, what is the number theoretical Bekenstein bound? Is there such a bound between p-adics and reals? The known unknowns and unknown unknowns of irrational and transcendental numbers?
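As a quick check of the Wikipedia figure: the bound is I <= 2*pi*R*E/(hbar*c*ln 2) with E = m*c^2, and with the brain values used there (m = 1.5 kg, R of about 6.7 cm) a short sketch reproduces the quoted number:

    import math

    hbar, c = 1.054571817e-34, 2.99792458e8    # SI values

    def bekenstein_bits(R, m):
        # Bekenstein bound: I <= 2*pi*R*E / (hbar*c*ln 2), with E = m*c^2
        return 2 * math.pi * R * m * c / (hbar * math.log(2))

    I = bekenstein_bits(0.067, 1.5)                  # R ~ 6.7 cm, m = 1.5 kg
    print(f"I <= {I:.2e} bits")                      # ~2.59e42 bits
    print(f"Omega <= 10^({I * math.log10(2):.5e})")  # ~10^(7.796e41) states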

matpitka@luukku.com said...

Some comments to Santeri.

*It seems that Buddha said it millennia before me;-). If I were not a zero energy ontologist, I might easily fail to realize how incredibly deep this statement of Buddha is.

*Feynman diagrams pop up from the vacuum in ZEO, in accordance with what Buddha said.

*I would be cautious in applying the Bekenstein bound to the human brain. Even the existence of black holes is highly questionable, and in TGD a more natural structure is a space-time region with Euclidian signature of the metric: just a 4-D generalized Feynman diagram!

I will respond separately on the Bekenstein bound. Very interesting question.

matpitka@luukku.com said...

Continuation....

I would not take the precise value of the Bekenstein bound too seriously. For some reason quite few of us question the notion of a black hole, although it is a singularity of Einstein's theory and all the evidence comes from the existence of objects with very high mass having an exterior region behaving like a black hole exterior. About the interior the observations say nothing.

The question about the Bekenstein bound is very interesting. There are two notions of information: classical and quantal. Classical information is a property of a system. Quantum information is assignable to entanglement characterizing the relationship between two systems. I try to answer in the classical framework first.

If one assumes that information is classical information representable as bits assignable to a single system, the Bekenstein bound is the simplest dimensionally consistent guess based on the additivity of information. The idea of the partonic 2-surface as a collection of bits with a constant maximum bit density would give an area law. The maximum information would be proportional to the area S of the partonic 2-surface, and the coefficient of proportionality would tell what is the minimum area needed by a bit. The naive guess is that this area corresponds to the CP_2 scale R, so that the maximum information would be about G/R^2 = about 10^(-7) times that of a black hole.
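The arithmetic behind the last estimate, as a short sketch (the value R ~ 10^3.5 Planck lengths for the CP_2 scale is an illustrative assumption of mine):

    # One bit per CP_2 area R^2 instead of per Planck area ~G:
    # for a fixed area the information drops by the factor G/R^2.
    R_over_lP = 10 ** 3.5            # assumed CP_2 radius in Planck lengths
    ratio = 1.0 / R_over_lP ** 2     # = G/R^2 in Planck units
    print(f"{ratio:.0e}")            # 1e-07, the factor quoted above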

matpitka@luukku.com said...

Continuation....

In the case of quantum information one cannot start from a direct generalization of the Bekenstein formula.

a) Suppose that one identifies information as entanglement negentropy in the intersection of real and p-adic worlds (so that we would be talking about living systems). How should one define entanglement negentropy?

*Should one imagine decomposing the system in all possible manners into two parts, choose the pair with maximum entanglement negentropy, continue the process in a similar manner for the parts, and define the entanglement negentropy as the sum of negentropies over all steps?

*Or could one use the quantum computer based vision and ideas from the DNA as topological quantum computer vision? I choose the latter option.


b) The braiding of the braid strands emerging from partonic 2-surfaces is responsible for quantum-computation-like processing.

*For a space-like surface the braid strands connecting different partonic 2-surfaces inside it give rise to a space-like braiding: the braiding is associated with the threads connecting the dancers.

*The 2-D ends of the orbit of the partonic 2-surface - a light-like 3-surface - at the upper and lower boundaries of the CD are connected by a light-like braid. The braiding is the temporal dancing pattern. These would be the dual manners to see the situation if I accept the idea that the system is like a group of dancers (light-like braid strands) connected by threads (space-like braid strands).

c) Duality would mean that the number theoretic negentropy can be assigned either with the light-like entanglement or with the space-like entanglement. Information carrying entanglement would have either space-like or light-like braiding as a correlate [recall the DNA as topological quantum computer model: there braids connect DNA to the nuclear or cell membrane, and the flow of lipids defines the dance].

d) Both the number of braid strands (the number of DNA nucleotides) and the amount of braiding matter. Classically only the number of braid strands would matter. Is there an upper bound on the complexity of braiding, perhaps coming from measurement resolution? Dancers cannot dance too fast / the threads connecting the dancers cannot get too twisted? This looks very plausible.

e) Does the density of braid strands for a given 2-surface have a maximum coming from an upper bound on measurement resolution? Does stability give an upper bound on the density of braid strands? Intuitively, the density of dancers cannot be too high, since dancing becomes impossible if there are always dancers like me on the parquet;-).

Consider now the upper bound.

a) The analog of the Bekenstein bound would say that the maximum information is proportional to the area S of the dancing parquet: each dancer takes some minimum room.

b) This formula effectively assumes that the dancers are like bits and does not take into account the complexity of the dance patterns. The information capacity due to braiding/entanglement increases exponentially with the number of qubits. Does it give a multiplicative term proportional to exp(N/N_0), with N the number of braid strands?

c) Could this mean something like

S --> S exp(S/S_0)

in the Bekenstein formula?

To sum up, could this mean that the information storage capacity of a system in the quantum TGD Universe is exponentially larger than in the standard model world? I am sorry if this sounds like a commercial: in a market economy everyone must sell his own Universe;-).

Orwin said...

Read the exponential as a Boltzmann temperature, and you have a temperature gradient or potential, the problem that blew up between Fourier and Biot. Reaction rates are similar. But the length-scale in the gradient raises further problems, signified by Bjerrum length for ion-dressed colloids, or renormalization with partons. The logic here is breaking of radial symmetry for an appearance/substance distinction. Hence my interest in observer-space, but as I said, I found it only in Wikibin.

So we have to scout our own way here.

*Brillouin parsed experiment not as gathering information but reducing uncertainty, a distinct construct.

*ZEO seems to converge on continuum mechanics?

Satama said...

Smolin gives an exact definition: "The Bekenstein bound means that the quantity of information in bits can be at most a quarter of the surface area, when the surface area is expressed in squares of Planck lengths."

That is why I asked, how does gradable hbar change the picture - for classical information?

For quantum information (or "active information", as Bohm called it), the situation is of course very different. Bohm's intuition about active/quantum information was etymological and as deep as ZEO: giving form. The analogy with biological processes brings to mind formative processes from larger inclusive wholes to their parts, e.g. biological self-organisation. To paraphrase Buddha, if reductionistic causation arises, then holistic causation arises (and vice versa). With this in mind, the holistic analogue of the Bekenstein bound would be that of a part towards a larger inclusive whole. In terms of classical information an observer looks at a black hole or some other surface from outside and deciphers the information content by reduction to atomistic Planck scales. Asking the other way around, what can a cell deduce and know about the information amount and content of the organism it is part of, according to some holography principle? And, probably most importantly, how can we get rid of any and all Newtonian notions of absolute (Euclidean or other) space when thinking about insides and outsides and hierarchies of Russian dolls? Hence the question about number theory, preferably in terms of pure math instead of biological similes... :)

matpitka@luukku.com said...

Years ago I pondered what happens to black hole entropy when hbar is increased.

*My first answer was that it is reduced by a factor 1/n, hbar = n*hbar_0.

*It took some time to realize that this happens to a single sheet of the n-sheeted structure assignable to the sector of the imbedding space in question (using more precise jargon: the many-sheeted covering describing the situation when the canonical momentum densities for Kähler action correspond locally to n different values of the time derivatives of imbedding space coordinates). Each of the n sheets gives a 1/n-fold contribution, so that the sum is just the ordinary entropy.

*By the way, the Planck length scales as sqrt(hbar), and this would mean that for a typical astrophysical Planck constant the Planck length would be of the order of the black hole horizon radius.
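A back-of-the-envelope version of the last remark (the numbers are illustrative choices of mine):

    import math

    l_P = 1.616e-35                    # ordinary Planck length, metres

    def scaled_planck_length(n):
        # l_P is proportional to sqrt(hbar), so hbar = n*hbar_0 gives sqrt(n)*l_P
        return math.sqrt(n) * l_P

    r_sun = 2.95e3                     # Schwarzschild radius of the Sun, metres
    n = (r_sun / l_P) ** 2             # n needed to stretch l_P to the horizon radius
    print(f"n ~ {n:.1e}")              # ~3.3e76
    print(scaled_planck_length(n))     # ~2.95e3 m: of the order of the horizon radius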


I do not really believe in black holes in the TGD context. Apologies to M-theorists, but a theory that "predicts" the singularities of the previous theory as such is not a real theory. All physical predictions of M-theory are based on "effective theory" thinking: a convenient intellectual loophole when the theory does not work.

I admitted to myself around 1986 that if I wanted to proceed I had to give up the attempt to apply path integrals to TGD, since they simply fail, and this led to the geometry of the world of classical worlds: around 1990 I found what could be called the first formulation capturing the most essential aspects of the problem.


I do not believe that one can do without biological similes. Pure mathematics as such is completely useless here. One should have a concrete quantum model for the cell, and I have proposed such a model. To me the most interesting questions relate to the role of braidings in the buildup of sensory and cognitive representations. I think that entanglement is absolutely essential, and the idea of neuroscience that a neuron is a single classical bit or a collection of bits is fatally wrong.

Each cell is genetically a hologram of the organism. I have guessed that cell (neuronal) membranes are like computer monitors, with lipids acting as pixels representing qualia in the sensory representations of the organism itself (organism and environment).

If one wants to answer how a cell can get information about the external world, it is best to take neuroscience seriously. The best manner to get it wrong is to use pure mathematics alone; this would create the illusion of being "scientific". Sorry for my skepticism: I have read far too many nonsense papers full of nonsense formulas.

*Nerve pulses, and very probably also light signals consisting of dark photons, are used to build up representations of the external world on the tiny computer monitors defined by the neurons.

*Think of a gigantic hall of computer monitors showing images of details of the environment, abstractions, overall views.

*This is very much what we do with data nowadays, but using much smaller computer monitors. Our computer monitors would be like neuronal membranes but on a bigger scale.

It would do neuroscientists good to ask seriously: do they really believe that a net consisting of, say, 10 billion bits can really write poems or have spiritual experiences? Can they write a formula in terms of these 10 billion bits for what they experience when they listen to Chopin's Polonaise brillante?

Ulla said...

This has popped up in my mind too. Can it be true? It seems not?

http://www.science20.com/hammock_physicist/square_root_supernova-84806

Satama said...

Matti, plants and mushrooms don't have neurons, but there are no good reasons to suppose that neural networks are in any way superior to plants and mushrooms, or that humans are the most intelligent life form on this planet. Does your theory have something to say about plant and mushroom consciousness and the role of neural networks compared to those? Neuroplasticity seems to work rather as a filter than a creator of information - classical or quantum.

matpitka@luukku.com said...

Santeri,

As I mentioned, both communications by nerve pulses and by dark photons are possible between cells, and dark photons would be responsible for the sensory feedback needed to generate standardized sensory mental images. In plants dark photons could be responsible for the analog of nerve pulse based communications.

Nerve pulse patterns would be needed when the organism must both receive data about the environment and perform motor actions. The life of a mushroom is not very hectic, and if there are motor actions they are very, very slow and do not require lightning fast reactions. Communications between the cells would serve basically to keep the mushroom a quantum coherent unit. Do not forget the entanglement. Dark photons propagating along the magnetic flux tubes defining the braids would be the natural guess for the analog of the predecessor of the nervous system in a mushroom.

Nerve pulse patterns would in my own model be more analogous to memory representations than mere communications - not forgetting the control aspect. Dark photons would provide the fastest manner to communicate sensory data used to form cognitive representations (such as the decomposition of the sensory field into objects, etc.) and perhaps also secondary sensory qualia at the level of cells.

Neuroplasticity - the generation of neural connections - is certainly associated with the formation of associations, which indeed relate to memory. The generation of new magnetic flux tubes between two parts of the organism might be the analog of neuroplasticity and also the mechanism behind neuroplasticity.

What functions does the magnetic flux tube network make possible? What new does the nervous system bring in? These are interesting questions. Certainly new kinds of highly standardized sensory and cognitive representations are what neural systems provide. Some kind of digitalization helping to standardize?

matpitka@luukku.com said...

To Ulla:

The speculations about magic particle numbers and masses defining thresholds for life look like one more example of the numerology which tends to drive me into a very negative mood;-).

One must not forget that these speculations are inspired by the belief that life and consciousness suddenly and magically emerge in some scale as the complexity of the system reaches some threshold. These beliefs are the materialist's variants of the idea of virginal birth. The emergence of space-time is the latest variant of the virginal birth belief. Also this idea tends to spoil my mood.

If I take seriously the interpretation of p-adic physics as a physics of cognition, I must conclude that even elementary particles possess elementary cognitive abilities. Therefore the emergence paradigm in its simplest form falls down. It can make sense only if one considers levels of complexity.

*Certainly the complexity of braidings increases exponentially as the particle number increases, but I am unable to see any obvious thresholds here.


*The evolution as an increase of the p-adic prime and of algebraic complexity, and as the generation of larger values of Planck constant, suggests that some primes might define thresholds. Mersenne primes and Gaussian Mersennes indeed do, and in the length scale range 10 nm - 2.5 micrometers crucial for life there are as many as four Gaussian Mersennes (see the small check at the end of this comment): this is a real number theoretic miracle and must be highly relevant for life.

I have discussed the longer length scales associated with Gaussian Mersennes somewhere, but I am too lazy to find where the discussion is. The highest-order Mersenne that can be considered corresponds to the electron (M_127) and indeed defines the fundamental time scale of life (0.1 seconds, the 10 Hz biorhythm, alpha rhythm, ...).
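A small check of the Gaussian Mersenne remark above, assuming the p-adic length scale convention L(k) = 2^((k-151)/2) * L(151) with L(151) ~ 10 nm used in the TGD literature (the helper names are mine):

    from sympy import isprime

    def gaussian_mersenne_norm(k):
        # integer norm a^2 + b^2 of the Gaussian Mersenne (1+i)^k - 1
        a, b = 1, 0                    # exact arithmetic for a + b*i
        for _ in range(k):             # multiply by (1 + i), k times
            a, b = a - b, a + b
        a -= 1                         # form (1+i)^k - 1
        return a * a + b * b

    for k in (151, 157, 163, 167):
        L_nm = 2 ** ((k - 151) / 2) * 10.0
        print(k, isprime(gaussian_mersenne_norm(k)), f"{L_nm:.0f} nm")
    # all four norms are prime; the scales 10, 80, 640 and 2560 nm span 10 nm - 2.5 um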

Ulla said...

This high number pops up when shifts or big changes happen. From black holes on. That was my interest.

I don't want to put you in a bad mood. What does it depend on?

Are masses as contributors to consciousness as an emergent phenomenon due to a wrong definition, or to confusion from overlapping definitions? Do collapsed consciousness (present) containers (measurements?) create awareness (past) as a sum? So it is the awareness that is emergent. Consciousness is a flow in the now?