Wednesday, December 31, 2014

Transition from quantum to classical as a transition from P to NP?

There was an interesting link in Thinking Allowed Original related to computation. The argument is that the Schrödinger equation becomes computationally so complex above some particle number (and the related scale) that as a computational problem it belongs to the class NP above this scale, thus requiring in practice a time increasing exponentially as a function of the number of points involved in the discretization defining the calculational accuracy. In class P the dependence would be only polynomial.
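To make the practical content of the claim concrete, here is a minimal sketch (my own illustration in Python, not from the linked article) of how the number of grid points in a naive discretization explodes with the particle number N: each particle lives in 3 dimensions, so M points per dimension give M^(3N) values for the wave function.

```python
# Naive grid discretization of an N-particle wave function:
# the configuration space has dimension 3N, so M points per
# dimension require M**(3*N) stored values.

M = 10  # a very modest 10 grid points per dimension

for N in (1, 2, 5, 10, 20):
    points = M ** (3 * N)
    print(f"N = {N:2d} particles -> {points:.3e} grid points")

# N =  1 particles -> 1.000e+03 grid points
# N =  2 particles -> 1.000e+06 grid points
# N =  5 particles -> 1.000e+15 grid points
# N = 10 particles -> 1.000e+30 grid points
# N = 20 particles -> 1.000e+60 grid points
```

Already for 20 particles the grid is astronomically beyond anything storable, quite independently of any P vs NP subtleties.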

For some systems exact solutions are known, and they would be computationally simple. The hydrogen atom and the harmonic oscillator are the standard examples. If NP does not reduce to P by some ingenious computational trick, one can conclude that above some critical particle number the Schrödinger equation cannot be solved to a given accuracy in a time depending only polynomially on the accuracy, characterized as the number of points involved in the discretization.
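As a hedged illustration of what "computationally simple" means for the harmonic oscillator, the following Python sketch (my own, in units with hbar = m = omega = 1 so that the exact levels are E_n = n + 1/2) compares a finite-difference discretization against the exact spectrum; the numerical answer approaches the exact one only as the grid is refined.

```python
import numpy as np

# 1-D harmonic oscillator with hbar = m = omega = 1: exact levels n + 1/2.
# Build the finite-difference Hamiltonian on a uniform grid and diagonalize.
L, M = 10.0, 400                       # half-width of the box, number of grid points
x = np.linspace(-L, L, M)
h = x[1] - x[0]

# Kinetic term -(1/2) d^2/dx^2 via the standard 3-point stencil.
T = (-0.5 / h**2) * (np.diag(np.ones(M - 1), -1)
                     - 2.0 * np.eye(M)
                     + np.diag(np.ones(M - 1), 1))
V = np.diag(0.5 * x**2)               # potential term x^2 / 2

E = np.linalg.eigvalsh(T + V)[:4]
print("numeric:", np.round(E, 4))     # close to [0.5, 1.5, 2.5, 3.5]
print("exact:  ", [n + 0.5 for n in range(4)])
```

Here the exact solution is a one-line formula, while the discretized problem already involves diagonalizing an M×M matrix; the article's point is that for many particles even this brute-force route becomes hopeless.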

The author concludes that the world becomes classical above this particle number, to which a scale can also be associated if the system has constant density. For instance, macroscopic quantum coherence (superconductivity, superfluidity, and quantum biology!) would be impossible because it would make the solution of the Schrödinger equation computationally so complex that it would not be possible in polynomial time. I definitely refuse to swallow this argument! I do not believe in arguments about what is physically possible based on what we can do with our present technology and our limited ideas about what computation is.

Some objections

Below are some more detailed objections to the arguments of the article.

  1. The notion of computation developed by Turing might be too simplistic: it began as a model of how the women in the offices of that time did calculations (men preferred warfare to boring calculations!). It was not based on physics, and the philosophy of consciousness behind the Turing test is behavioristic. The view describing us as automatons reacting to inputs has been dead for a long time.

    What these women manipulated were basically natural numbers. I believe that reduction to the manipulation of rational numbers, or of numbers in their extensions, is an essential element not only of computation but also of the formation of cognitive representations (to be distinguished from cognition!). The reason would be that cognition has p-adic physics as a physical correlate, and the systems doing calculations are in the intersection of realities and p-adicities, where everything is expressible in terms of rationals or numbers in extensions of rationals. Algebraic geometers have for centuries studied this intersection of cognition and matter, consisting of the subsets of rational or algebraic points of algebraic surfaces. Fermat's last theorem says that the 2-D surface x^n + y^n = z^n in 3-D space has no non-trivial rational points for n > 2 (a toy search for such points is sketched after this list).

  2. Very intricate questions relate to what is discretized. The most naive discretization is at the level of space-time, where one has a continuum. A less naive discretization is at the level of the Hilbert space of solutions of the Schrödinger equation, where discretization is very natural. For instance, superpositions of basis states can have complex rational coefficients while at the level of the individual solution ansätze everything remains smooth and continuous.

  3. Problem solving is much more than the application of an algorithm. For instance, the known exact solutions of the Schrödinger equation were not found numerically but by discovery involving conceptual thinking totally outside the capabilities of a classical computer. Here, I think, lies the fundamental limitation of the AI approach to the brain.

  4. The Schrödinger equation is assumed to be a universal equation applying in all scales. This is a highly questionable assumption, yet it is blindly made even in quantum gravity at the Planck scale, although the Schrödinger equation is non-relativistic. This is one of the strange implications of formula blindness.
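As a toy illustration of the Fermat example in objection 1 (my own sketch, not from the article): rational points on x^n + y^n = z^n correspond, after clearing denominators, to positive integer solutions, and a brute-force search finds plenty of them for n = 2 (the Pythagorean triples) but, in accordance with Fermat's last theorem, none for n = 3.

```python
# Brute-force search for positive integer points on x^n + y^n = z^n.
# For n = 2 these are the Pythagorean triples; for n > 2 Fermat's
# last theorem guarantees that the search comes up empty.

def fermat_points(n, bound):
    nth_powers = {x ** n: x for x in range(1, 2 * bound)}  # z may exceed bound
    hits = []
    for x in range(1, bound):
        for y in range(x, bound):
            z = nth_powers.get(x ** n + y ** n)
            if z is not None:
                hits.append((x, y, z))
    return hits

print("n=2:", fermat_points(2, 20))   # (3, 4, 5), (5, 12, 13), ...
print("n=3:", fermat_points(3, 200))  # [] -- empty, as Fermat promises
```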

Does a phase transition from P to NP occur for some critical particle number?

Above some scale corresponding to a critical particle number N, P would transform to NP in a kind of phase transition. Mathematically this looks imaginable if one talks only about the particle number, which translates to the dimension D = 3N of the effective configuration space E^(3N) of the N-particle system for which one is solving the Schrödinger equation. One could speak of a critical dimension. A P-to-NP phase transition could occur, but the assumption that it would imply that the world becomes classical looks like nonsense to me.

Personally I do not believe that dimension is relevant at the fundamental level. In my opinion, the symmetries of the system are the relevant thing. For the Schrödinger equation they would be the symmetries of the potential function, and in arbitrary dimensions the potential can have symmetries making the equation exactly solvable. The complexity of the potential function would be the relevant thing. It is easy to believe that a randomly chosen potential function is not equivalent to the Coulomb potential, but whether a computer could ever discover this by running a discrete algorithm is far from clear to me. Perhaps here lies the basic difference between conscious intellect and computer!

In the TGD framework the geometry of the infinite-dimensional world of classical worlds (WCW) exists only if it has a maximal group of isometries, and there are excellent reasons to believe that this geometry, and thus physics, is unique. If hyperfinite factors of type II1 describe the physics of the TGD world, then these infinite-dimensional systems are finite-dimensional to an arbitrarily good approximation. The resulting physics is calculable. If the calculations we want to do reduce to simulations of physics, P might always be achieved. There could of course also be calculations which do not reduce to P! The finite measurement resolution to be considered later could reduce NP to P and also prevent drowning in irrelevant information.

Does a phase transition from P to NP occur in some scale?

The authors assign a critical scale to the critical particle number N. That a transition from quantum to classical occurs at some scale has been a belief for a long time, but we have begun to learn that the idea is wrong. Already superconductivity is in conflict with it, and bio-systems are probably macroscopically quantum coherent. The only logical conclusion is that the world is quantal in all scales. It is our conscious experience that makes the world look classical (and only in those aspects in which it does look classical!). When Schrödinger's cat is perceived, it becomes either dead or alive.

  1. I believe that the notion of scale is fundamental, but in the sense of fractality, according to which the world looks roughly the same in all scales. The notion of finite measurement/cognitive resolution is the essential concept implicitly involved. Resolution hierarchies allow one to avoid the weird conclusion that the world becomes classical in some scale or for some critical particle number. This is not taken into account in the formulation of the Schrödinger equation. The Turing paradigm does not speak about computational precision either.

  2. In quantum field theories the notion of length scale resolution cannot be avoided and is part of the renormalization program, but it has remained the ugly duckling of theoretical physics. I have worked with the problem in the TGD framework and proposed that the von Neumann algebras known as hyperfinite factors of type II1, already mentioned, provide an extremely elegant realization of a hierarchy of resolutions in terms of their inclusions.

  3. Length scale resolution means that one is not interested in details below some resolution scale. There would be an infinite hierarchy of resolution scales. Actually several kinds of them: the p-adic length scale hierarchy, the dark matter hierarchy labelled by the values of Planck constant heff = n×h, the hierarchy of causal diamonds (CDs) of ZEO with increasing sizes, and the self hierarchy representing a hierarchy of abstractions. The longer the scale, the more abstract and less detailed the representation of the world by conscious experience: a good boss at the top does not waste time on details inessential for the job.

  4. Sticking to rational numbers in a given measurement resolution is a kind of gauge fixing and is also forced by cognitive representability (what is cognitively representable must be in the intersection of p-adicities and reality!). There are indeed good arguments suggesting that finite measurement resolution realized in terms of inclusions of hyperfinite factors can be represented physically as a dynamical gauge invariance. Answers to a computation in finite resolution would be gauge equivalence classes. Usually finite measurement resolution is regarded as a limitation. It however makes it possible to avoid drowning in inessential information, and the connection with dynamical gauge invariance fits this aspect nicely.
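A minimal sketch of what "rational numbers in a given measurement resolution" could mean in practice (my own illustration using Python's standard fractions module, nothing TGD-specific): bounding the allowed denominator plays the role of the resolution, and all real numbers mapped to the same rational approximant form one "equivalence class" of that resolution.

```python
from fractions import Fraction
import math

# Finite resolution as a bound q_max on the denominator: each real
# number is replaced by its best rational approximation with
# denominator <= q_max; all reals sharing an approximant are
# indistinguishable at this resolution.

def at_resolution(x, q_max):
    return Fraction(x).limit_denominator(q_max)

for q_max in (10, 1000, 10**6):
    r = at_resolution(math.pi, q_max)
    print(f"q_max = {q_max:>7}: pi ~ {r} (error {abs(math.pi - r):.1e})")

# q_max =      10: pi ~ 22/7    (error 1.3e-03)
# q_max =    1000: pi ~ 355/113 (error 2.7e-07)
# coarser resolution -> fewer distinguishable numbers, less information
```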

Could one achieve P=NP by some new physics?

  1. It is not known whether P is different from NP even within the Turing paradigm: we cannot exclude the possibility that all NP problems could be solved in polynomial time even using Turing machines. Nor do we know whether there could exist much more effective manners to compute, allowing the reduction of NP to P. If this is the case, the argument of the article fails. Here biology might teach something to us.

  2. Zero Energy Ontology (ZEO) suggests a possible mechanism for transforming NP to P, which I considered seriously for some time. One can imagine doing the calculation in a finite geometric time by decomposing it into parts which happen in opposite time directions: rather than proceeding in a single time direction, one goes forth and back in time until the calculation is done. This looked nice, but a problem emerged when I understood in detail how the arrow of time emerges. The quantum state is a superposition over CDs with various sizes, and in a given sequence of state function reductions at a fixed boundary the average distance of the non-fixed boundary from it increases in a statistical sense. This gives rise to the self and to the flow of time as an increase of the average CD size. It also means that one cannot do calculations instantaneously by this time zig-zag mechanism.

Is Turing machine the final model for computation?

Turing machines can emulate, that is mimic, each other. Mimicry is a basic manner of learning. Could one generalize computation to simulation? Could simulation allow one to achieve what one achieves by computing? Computer as a simulator simulating other physical systems? Or can one imagine other kinds of computation-like activities in which one could avoid the unreasonable restrictions of unnecessary numerical accuracy?

In the TGD framework one can consider a view about computation-like processes which need not reduce to a Turing machine or its quantum analog realized as a unitary evolution for qubits.

  1. A concrete realization for an analog of a topological quantum computer in the TGD Universe might be in terms of braids realized as magnetic flux tubes. The braiding of flux tubes induced by the interaction with the external world would form topological representations of the interaction. For instance, a nerve pulse pattern passing through an axon would generate a braiding of the flux tubes connecting the lipids of the lipid layers of the axonal membrane to the DNA codons of the cell nucleus or to the tubulins of microtubules.

    One could call these braidings quantum computer programs, but the computation would be topological and could be much more general than numerical computation and free from the slavery of precise numbers. Rational numbers in a given resolution realized in terms of HFFs would be only representatives of numbers in a given computational resolution. A kind of gauge fixing would be in question.

  2. Communication is part of computation, and reconnections of flux tubes making possible the transfer of dark photons or dark charged particles between subsystems would build communication lines. Large heff would make possible macroscopic quantum coherence. The computer could modify itself by heff-changing phase transitions, which in biology would be essential for basic processes such as DNA replication. Resonant communication, occurring only between systems with the same cyclotron frequency, would become possible. This kind of problem solving might be very different from that using a Turing machine or quantum computation as it is usually understood.

  3. Zero energy ontology forces a change in the view about computation. In ZEO the space-time surface connecting the boundaries of a causal diamond would be the analog of a classical computer program. The counterpart of quantum computation in the TGD Universe would be a self/mental image and would correspond to a sequence of repeated state function reductions to a fixed boundary of the CD leaving the part of the state at it unchanged. It would define the counterpart of the unitary time evolution of the Schrödinger equation.

  4. In quantum computation the outcome is represented in terms of probabilities. In ZEO these probabilities would be for states associated with ensembles of sub-CDs affected by the classical fields associated with the CD. A kind of quantum-classical correspondence, in which field patterns (wavelengths, frequencies) correspond to states, would be realized. EEG could be an example of this in the case of the brain.

  5. In classical computation the question is local with respect to time: what is the final state at a later time, given the initial state? Field equations such as the Schrödinger equation would give the algorithm for calculating the final state.

    In ZEO the notion of classical computation is different. The question is formulated in terms of boundary values and is non-local with respect to time: can one find a space-time surface connecting two 3-surfaces at the opposite boundaries of the CD as a solution of the field equations? (A small numerical contrast between the two formulations is sketched after this list.)

    How does this view affect the notion of computation? In mathematics, theorem proving corresponds to the first kind of classical computation; answering the question of whether some theorem holds true or not would be more in the spirit of the TGD view.

  6. Can one speak about P and NP if a conscious entity becomes the counterpart of a computation? What about the quantum jump to the opposite boundary of the CD at some level of the hierarchy of selves? Could it correspond to a eureka moment having no representation as a classical computation? Could these eureka moments bring in discovery (of a new axiom) as something transcending the limitations of a scientist who only computes and is bound to remain within the confines of what is already known?
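The contrast in item 5 between time-local and boundary-value formulations can be made concrete with an ordinary differential equation; the following SciPy sketch (my own analogy, nothing TGD-specific) solves the same equation y'' = -y first as an initial value problem, marching forward from t = 0, and then as a boundary value problem with data fixed at both ends, the structural analog of fixing 3-surfaces at the two boundaries of a CD.

```python
import numpy as np
from scipy.integrate import solve_ivp, solve_bvp

# The equation y'' = -y as the first-order system (y, y')' = (y', -y).

# 1) Initial value problem: the state at t = 0 determines all later times.
ivp = solve_ivp(lambda t, u: [u[1], -u[0]],
                (0.0, np.pi / 2), [0.0, 1.0], dense_output=True)
print("IVP: y(pi/2) =", float(ivp.sol(np.pi / 2)[0]))  # ~ 1, since y = sin t

# 2) Boundary value problem: data fixed at BOTH ends, y(0) = 0, y(pi/2) = 1;
#    the solver must find a whole trajectory compatible with both at once.
def rhs(t, u):
    return np.vstack([u[1], -u[0]])

def bc(ua, ub):
    return np.array([ua[0] - 0.0, ub[0] - 1.0])

t = np.linspace(0.0, np.pi / 2, 20)
bvp = solve_bvp(rhs, bc, t, np.zeros((2, t.size)))
print("BVP: y'(0)   =", float(bvp.sol(0.0)[1]))        # ~ 1, recovered
```

In the first formulation the algorithm is local in time; in the second the "answer" is the existence and construction of an entire trajectory connecting the two ends, which is closer in spirit to asking whether a space-time surface connecting given 3-surfaces exists.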

To conclude, the Turing paradigm gains its power from precise axiomatics, and I can only admire the incredibly refined theorems of the people studying quantum computation theoretically. In my opinion the Turing paradigm and its quantum counterpart are however quite too limited approaches, based on questionable assumptions (a behavioristic view about consciousness, the reduction of problem solving to the application of an algorithm, the complete separation of computation from physics and biology), and the less axiomatic approaches of the physicist and the consciousness theorist look more attractive to me.
