Sunday, July 16, 2017

Generalization of Riemann zeta to Dedekind zeta and adelic physics

A further insight to adelic physics comes from the possible physical interpretation of the L-functions appearing also in the Langlands program (see this). The most important L-function would be the generalization of Riemann zeta to extensions of rationals. I have proposed several roles for ζ, which would be the simplest L-function assignable to rational primes, and for its zeros.

  1. Riemann zeta itself could be identifiable as an analog of a partition function for a system with energies given by logarithms of primes. In ZEO this function could be regarded as a complex square root of a thermodynamical partition function, in accordance with the interpretation of quantum theory as a complex square root of thermodynamics.

  2. The zeros of zeta could define the conformal weights for the generators of the super-symplectic algebra, so that the number of generators would be infinite. The rough idea - certainly not correct as such except at the limit of an infinitely large CD - is that the corresponding generators would correspond to functions of the radial light-like coordinate r_M of the light-cone boundary (boundary of causal diamond) of the form (r_M/r_0)^{s_n}, where s_n = 1/2 + iy is the radial conformal weight. Periodic boundary conditions for the CD do not allow all possible zeros as conformal weights, so that for a given CD only a finite subset corresponds to generators of the super-symplectic algebra. Conformal confinement would hold true in the sense that the sum of the s_n for physical states would be integer. Roots and their conjugates should therefore appear as pairs in physical states.

  3. On the basis of numerical evidence Dyson (see this) has conjectured that the Fourier transform for the set formed by the zeros of zeta is supported by the logarithms of primes and their powers, so that one could regard the zeros as a one-dimensional quasi-crystal. This hypothesis makes sense if the zeros of zeta decompose into disjoint sets such that each set corresponds to its own prime (and its powers) and one has p^{iy} = U_{m/n} = exp(i2π m/n) (see the appendix of this). This hypothesis is motivated by number theoretical universality.

  4. I have considered the possibility (see this) that the inverse of the electro-weak U(1) coupling constant for the gauge field assignable to the Kähler form of CP_2 corresponds to the poles of the fermionic zeta ζ_F(s) = ζ(s)/ζ(2s): these come from the zeros s_n of ζ in the denominator, giving poles at s = s_n/2, and from the pole of ζ at s = 1 in the numerator. Here one can also consider a scaling of the argument of ζ_F(s). More general coupling constant evolutions could correspond to ζ_F(m(s)), where m(s) = (as+b)/(cs+d) is a Möbius transformation of the argument mapping the upper complex plane to itself, so that a, b, c, d are real and, by number theoretical universality, also rational.
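Both readings above can be checked numerically with the standard library alone: ζ(s) is simultaneously a "partition function" sum over energies E_n = log n and an Euler product over bosonic prime modes, while ζ_F(s) = ζ(s)/ζ(2s) has Euler product ∏_p (1 + p^{-s}), i.e. a fermionic sum in which each prime mode is occupied at most once, so only squarefree integers contribute. A minimal sketch (function names and cutoffs are my own choices):

```python
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, flag in enumerate(sieve) if flag]

def zeta_sum(s, n_max=100000):
    # partition function: Boltzmann weights exp(-s E_n) with E_n = log n
    return sum(n ** -s for n in range(1, n_max + 1))

def zeta_euler(s, p_max=100000):
    # Euler product: independent bosonic oscillators with energies log p
    prod = 1.0
    for p in primes_up_to(p_max):
        prod /= 1.0 - p ** -s
    return prod

def zeta_f_series(s, n_max=100000):
    # fermionic occupation numbers 0 or 1: only squarefree n contribute
    flags = [True] * (n_max + 1)
    d = 2
    while d * d <= n_max:
        flags[d * d::d * d] = [False] * len(flags[d * d::d * d])
        d += 1
    return sum(n ** -s for n in range(1, n_max + 1) if flags[n])

s = 3.0
print(zeta_sum(s), zeta_euler(s))                       # both ≈ 1.2020569
print(zeta_f_series(s), zeta_sum(s) / zeta_sum(2 * s))  # both ≈ 1.1815650
```

The agreement of the squarefree sum with the ratio ζ(s)/ζ(2s) is the fermionic statement in concrete form.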

Suppose for a moment that more precise formulations of these physics inspired conjectures hold true, and even that their generalization to extensions K/Q of the rationals holds true. This would solve quite a portion of adelic physics! Not surprisingly, the generalization of the zeta function was proposed already by Dedekind (see this).
  1. The definition of the Dedekind zeta function ζ_K relies on a product representation, and analytic continuation allows to deduce ζ_K elsewhere. One has a product over the prime ideals of K, with the factors 1/(1-p^{-s}) associated with ordinary primes in Riemann zeta replaced by the factors X(P) = 1/(1 - N_{K/Q}(P)^{-s}), where P is a prime ideal for the integers O(K) of the extension and N_{K/Q}(P) is the norm of P in the extension. In the region s > 1, where the product converges, ζ_K is non-vanishing, and s = 1 is a pole of ζ_K. The functional identities of ζ hold true for ζ_K as well, and the Riemann hypothesis generalizes to ζ_K.

  2. It is possible to interpret ζ_K in terms of a physical picture. By the general results (see this) one has N_{K/Q}(P) = p^r, with r > 0 an integer, and one can deduce a general expression for r. This implies that one can arrange in ζ_K all primes P for which the norm is a power of a given p into the same group. The prime ideals p of ordinary integers decompose into products of prime ideals P of the extension: one has p = ∏_{r=1}^g P_r^{e_r}, where e_r is the so-called ramification index. One can say that each factor of ζ decomposes into a product of factors associated with the corresponding primes P with norm a power of p. In the language of physics, the particle state represented by p decomposes in an improved resolution into a product of many-particle states consisting of e_r particles in the states P_r, very much like a hadron decomposes into quarks.

    The norms N_{K/Q}(P_r) = p^{d_r} satisfy the condition ∑_{r=1}^g d_r e_r = n. A mathematician would say that the prime ideals of Q modulo p decompose in the n-dimensional extension K into products of prime power ideals P_r^{e_r}, and that P_r corresponds to a finite field G(p, d_r) with extension degree d_r. The formula ∑_{r=1}^g d_r e_r = n reflects the fact that the dimension n of the extension is the same independently of p, even when one has g < n and ramification occurs.

    A physicist would say that the number of degrees of freedom is n and is preserved, although one has only g < n different particle types with e_r particles having d_r internal degrees of freedom. The factor replacing 1/(1-p^{-s}) for a general prime p is given by ∏_{r=1}^g 1/(1 - p^{-d_r s}).

  3. There is only a finite number of ramified primes p having e_r > 1 for some r: they are the primes dividing the so-called discriminant D of the irreducible polynomial P defining the extension. D mod p obviously vanishes if D is divisible by p. For a second order polynomial P = x^2 + bx + c the discriminant equals the familiar D = b^2 - 4c, and when D vanishes the two roots indeed coincide. For quadratic extensions with D = b^2 - 4c > 0 the ramified primes divide D.

    Remark: The resultant R(P,Q) and the discriminant D(P) = R(P, dP/dx) are elegant tools used by number theorists to study extensions of rationals defined by irreducible polynomials. From Wikipedia articles one finds elegant proofs for the facts that R(P,Q) is proportional to the product of the differences of the roots of P and Q, and D(P) to the product of the squares of the differences of the distinct roots. R(P,Q) = 0 tells that the two polynomials have a common root. D mod p = 0 tells that the polynomial and its derivative have a common root modulo p, so that there is a degenerate root modulo p and the prime is indeed ramified. For the modulo p reduction of P the vanishing of D(P) mod p follows if D is divisible by p. Clearly there exists only a finite number of primes of this kind.

    Most primes are unramified. If one has the maximum number n of factors in the decomposition and e_r = 1, maximal splitting of p occurs: the factor 1/(1-p^{-s}) is replaced with its n:th power 1/(1-p^{-s})^n. The geometric interpretation is that the space-time sheet is replaced with an n-fold covering and each sheet gives one factor in the power. It is also possible to have a situation in which no splitting occurs: one has g = 1 and e_1 = 1 for the single prime P_1 = (p). The factor is in this case equal to 1/(1-p^{-ns}).
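These factorization patterns are easy to verify computationally in the two simplest quadratic settings: the splitting of p in Z[i] is read off from the roots of x^2 + 1 mod p and always satisfies ∑ d_r e_r = 2, and the criterion "p ramifies iff p divides D = 4m" can be brute-forced for x^2 - m by looking for a common root of the polynomial and its derivative. A stdlib-only sketch (function names are my own):

```python
def splitting_type_gaussian(p):
    # How the rational prime p factors in Z[i] (n = 2); list of (d_r, e_r).
    if p == 2:
        return [(1, 2)]           # ramified: (2) = (1+i)^2 up to a unit
    n_roots = sum(1 for x in range(p) if (x * x + 1) % p == 0)
    if n_roots == 2:
        return [(1, 1), (1, 1)]   # split: two distinct prime ideals of norm p
    return [(2, 1)]               # inert: (p) stays prime, residue field F_{p^2}

def has_degenerate_root(m, p):
    # common root of x^2 - m and its derivative 2x modulo p
    return any((r * r - m) % p == 0 and (2 * r) % p == 0 for r in range(p))

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

for p in PRIMES:
    # the dimension n = 2 of the extension is preserved for every p
    assert sum(d * e for d, e in splitting_type_gaussian(p)) == 2

for m in [2, 3, 5, 6, 7, 10, 11, 13]:     # Q(m^(1/2)) with squarefree m
    for p in PRIMES:
        # ramified primes are exactly the prime divisors of D = 4m
        assert ((4 * m) % p == 0) == has_degenerate_root(m, p)

print("sum d_r e_r = 2 and 'ramified iff p | D' verified")
```

Here the discriminant criterion is applied to the polynomial x^2 - m itself; the field discriminant of Q(m^(1/2)) can differ from 4m by a square factor, which does not affect the set of odd ramified primes.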

From Wikipedia one learns that for Galois extensions L/K the ratio ζ_L/ζ_K is the so-called Artin L-function for the regular representation (group algebra) of the Galois group, factorizing in terms of the irreps of Gal(L/K), and that this ratio is holomorphic (no poles!), so that ζ_L must have also the zeros of ζ_K. This holds in particular in the special case K = Q. Therefore the extension of rationals can only bring in new zeros but no new poles!
  1. This result is quite far reaching if one accepts the hypothesis about super-symplectic conformal weights as zeros of ζ_K and the conjecture about coupling constant evolution. In the case of ζ_{F,K} this means new poles, meaning new conformal weights due to the increased complexity, and a modification of the conjecture for the coupling constant evolution due to the new primes of the extension. The outcome looks physically sensible.

  2. The quadratic field Q(m^{1/2}) serves as an example. Quite generally, the factorization of rational primes into the primes of the extension corresponds to the factorization of the minimal polynomial of the generating element θ for the integers of the extension, and one has p = ∏_i P_i^{e_i}, where e_i is the ramification index. The norm of p factorizes into the product of the norms of the P_i^{e_i}.

    A rational prime can either remain prime (inert case), in which case x^2 - m does not factorize mod p, split, when x^2 - m factorizes mod p, or ramify, when it divides the discriminant D = 4m of x^2 - m. From this it is clear that for unramified primes the factors in ζ are replaced by either 1/(1-p^{-s})^2 or 1/(1-p^{-2s}) = 1/[(1-p^{-s})(1+p^{-s})]. For the finite number of ramified primes one can have something different.

    For Gaussian integers with m = -1 the prime stays inert for p mod 4 = 3 and splits into two primes for p mod 4 = 1 (p = 2 ramifies). ζ_K therefore decomposes into two factors corresponding to the primes p mod 4 = 3 and p mod 4 = 1. One can extract out Riemann zeta, and the remaining factor

    ∏_{p mod 4=1} 1/(1-p^{-s}) × ∏_{p mod 4=3} 1/(1+p^{-s})

    should be holomorphic and without poles, but can have additional zeros at the critical line. That ζ_K should inherit the zeros of ζ as zeros (and its pole at s = 1 as a pole) looks therefore highly non-trivial.
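For Q(i) the factorization ζ_K(s) = ζ(s) × L(s, χ_4), with χ_4 the non-trivial Dirichlet character mod 4, can be verified coefficient by coefficient: the number of ideals of Z[i] with norm n equals ∑_{d|n} χ_4(d). A stdlib-only check (counting ideals via lattice points divided by the 4 units is my own shortcut):

```python
def ideals_of_norm(n):
    # ideals of Z[i] of norm n = (lattice points with a^2 + b^2 = n) / 4 units
    count = sum(1 for a in range(-n, n + 1) for b in range(-n, n + 1)
                if a * a + b * b == n)
    return count // 4

def chi4(d):
    # non-trivial Dirichlet character mod 4
    return {0: 0, 1: 1, 2: 0, 3: -1}[d % 4]

for n in range(1, 60):
    # Dirichlet series coefficient of zeta(s) * L(s, chi4)
    coeff = sum(chi4(d) for d in range(1, n + 1) if n % d == 0)
    assert ideals_of_norm(n) == coeff
print("zeta_K = zeta * L(chi4) holds for the first 59 coefficients")
```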

See the article p-Adization and adelic physics or chapter Philosophy of adelic physics.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Thursday, July 13, 2017

How to demonstrate quantum superposition of classical gravitational fields?

There was a rather interesting article in Nature (see this) by Marletto and Vedral about the possibility of demonstrating the quantum nature of gravitational fields by using weak measurement of a classical gravitational field, affecting it only very weakly. There is also an article in arXiv by the same authors (see this). The approach relies on quantum information theory.

The gravitational field would serve as a measurement interaction, and the weak measurements would be applied to a gravitational witness serving as a probe - the technical term is ancilla. The authors claim that weak measurements giving rise to an analog of the Zeno effect could be used to test whether the quantum superposition of classical gravitational fields (QSGR) does take place. One can however argue that the extreme weakness of gravitation implies that other interactions and thermal perturbations mask it completely in the standard physics framework. Also the decoherence of gravitational quantum states could be argued to make the test impossible.

One must however take these objections with a big grain of salt. After all, we do not have a theory of quantum gravity, and all assumptions made about quantum gravity might not be correct. For instance, the vision about reduction to Planck length scale might be wrong. There is also the mystery of dark matter, which might force a considerable modification of the views about dark matter. Furthermore, General Relativity itself has conceptual problems: in particular, the classical conservation laws playing a crucial role in quantum field theories are lost. Superstrings were a promising candidate for a quantum theory of gravitation but failed as a physical theory.

In TGD, which was born as an attempt to solve the energy problem of General Relativity and soon extended to a theory unifying gravitation and standard model interactions and also generalizing string models, the situation might however change. In zero energy ontology (ZEO) the sequence of weak measurements is more or less equivalent to the existence of self, identified as a generalized Zeno effect! The value of h_eff/h = n characterizes the flux tubes mediating various interactions and can be very large for gravitational flux tubes (n proportional to GMm/v_0, where v_0 < c has dimensions of velocity, and M and m are the masses at the ends of the flux tube) with Mm > v_0 m_Pl^2 (m_Pl denotes Planck mass) at their ends. This means a long coherence time characterized in terms of the scale of the causal diamond (CD). The lifetime T of self is proportional to h_eff, so that for a gravitational self T is very long as compared to that for an electromagnetic self. Sub-selves of self could be identifiable as sensory mental images, so that sensory perception would correspond to weak measurements; for gravitation the time scales would be long: we indeed feel the gravitational force all the time. Consciousness and life would provide a basic proof for QSGR (note that a large neuron has mass of order Planck mass!).

See the article How to demonstrate quantum superposition of classical gravitational fields? or the chapter Quantum criticality and dark matter of "Hyper-finite factors, p-adic length scale hypothesis, and dark matter hierarchy".


Wednesday, July 12, 2017

Retrocausality and TGD

The comments below were inspired by a popular article "Physicists provide support for retrocausal quantum theory, in which the future influences the past" (see this) telling about the preprint "Is a time symmetric interpretation of quantum theory possible without retrocausality?" of Leifer and Pusey related to the notion of retrocausality (I am grateful to Maria Vihervaara for the link).

Retrocausality means the possibility of causal influences propagating in the non-standard time direction. Retrocausality has also been proposed by Cramer as a possible manner to obtain deterministic quantum mechanics and to make it possible to interpret wave functions as real objects. The Bell theorem and the Kochen-Specker theorem however pose difficult challenges for this program, and the condition that the theory is classical in a strong sense (all observables have well-defined values) seems impossible to satisfy.

The work is interesting from TGD view point for several reasons.

  1. TGD leads to a new view about reality solving the basic problem of quantum measurement theory. In ZEO quantum states are replaced by zero energy states which are analogous to pairs of initial and final states in ordinary ontology and can be regarded as superpositions of classical deterministic time evolutions. The sequence of state function reductions means sequence of re-creations of the superpositions of classical realities. The TGD based view about scattering amplitudes has a rather concrete connection with the view of Cramer as I interpret it. There is however no attempt to reduce quantum theory to a purely classical theory. The notion of "world of classical worlds" consisting of classical realities identified as space-time surfaces replaces space-time as a fixed observer independent reality in TGD.

  2. Retrocausality is a basic aspect of TGD. Zero Energy Ontology (ZEO) predicts that both arrows of time are possible. In this sense TGD is time symmetric. On the other hand, the twistor lift of TGD predicts a violation of time reflection T, and this might imply that the second arrow of causality dominates in some sense. The ZEO based view about state function reduction, essential for the TGD inspired theory of consciousness and implying a generalized Zeno effect giving rise to conscious entities - "selves" - is also essential. One might say that when a conscious entity dies it re-incarnates as a time-reversed self.

  3. The possibility of superposing states with opposite causal arrows (see this) is a fascinating idea, and its plausibility has been discussed already earlier in the TGD framework (see this).

In the article Retrocausality and TGD I discuss these articles from the TGD point of view, criticizing the hidden assumptions about the nature of time leading to the well-known problems of quantum measurement theory, and consider also the concrete implications for theories of consciousness. Also the empirical evidence for retrocausality is discussed briefly. Contrary to the articles, the discussion is non-technical: I do not believe that the introduction of technicalities helps to understand the deep conceptual problems involved and the possible solutions to them.

See the article Retrocausality and TGD or the chapter Topological quantum computation in TGD Universe of "Genes and Memes".


Sunday, July 09, 2017

Encountering the puzzle of inert neutrinos once again

Sabine Hossenfelder had an interesting link to a Quanta Magazine article "On a Hunt for a Ghost of a Particle" telling about the plans of particle physicist Janet Conrad to find the inert neutrino.

The attribute "sterile" or "inert" (I prefer the latter since it is more respectful) comes from the assumption that this new kind of neutrino does not have even weak interactions and feels only gravitation. There are indications for the existence of the inert neutrino from LSND experiments and some MiniBooNE experiments. In the standard model it would be interpreted as a fourth generation neutrino, which would suggest also the existence of other fourth generation fermions. For this there is no experimental support.

The problem of the inert neutrino is very interesting also from the TGD point of view. TGD predicts also a right-handed neutrino with no electroweak couplings, but it mixes with the left-handed neutrino by a new interaction produced by the mixing of M^4 and CP_2 gamma matrices: this is a unique feature of the induced spinor structure, serves as a signature of sub-manifold geometry, and is one signature distinguishing TGD from the standard model. Only a massive neutrino with both helicities remains and behaves in good approximation as a left-handed neutrino.

There are indeed indications in both LSND and MiniBooNE experiments for the inert neutrino. But only in some of them. And not in the IceCube experiment performed at the South Pole. Special circumstances are required. "Special circumstances" need not mean bad experimentation. Why this strange behavior?

  1. The evidence for the existence of the inert neutrino, call it ν̄_I, came from antineutrino mixing ν̄_μ → ν̄_e manifesting as a mass squared difference between muonic and electronic antineutrinos. This difference was Δm^2(LSND) = 1-10 eV^2 in the LSND experiment. The other two mass squared differences, deduced from solar neutrino mixing and atmospheric neutrino mixing, were Δm^2(sol) = 8×10^-5 eV^2 and Δm^2(atm) = 2.5×10^-3 eV^2 respectively.

  2. The inert neutrino interpretation would be that actually ν̄_μ → ν̄_I takes place and the mass squared difference between ν̄_μ and ν̄_I determines the mixing.
1. The explanation based on several p-adic mass scales for neutrinos

The first TGD inspired explanation, proposed a long time ago, relies on the p-adic length scale hypothesis predicting that neutrinos can exist in several p-adic length scales, for which the mass squared scale ratios come as powers of 2. Mass squared differences would also differ by powers of two. Indeed, the mass squared differences from solar and atmospheric experiments are in the ratio 2^-5, so that the model looks promising!

Writing Δm^2(LSND) = x eV^2, the condition Δm^2(LSND)/Δm^2(atm) = 2^k has 2 possible solutions, corresponding to k = 10 with x = 2.5 and k = 9 with x = 1.25. The corresponding mass squared differences are 2.5 eV^2 and 1.25 eV^2.
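These power-of-two claims are elementary arithmetic and can be checked directly; a small sketch:

```python
from math import log2

dm2_sol = 8e-5    # eV^2, solar
dm2_atm = 2.5e-3  # eV^2, atmospheric

# the solar/atmospheric ratio is close to 2^-5
print(log2(dm2_atm / dm2_sol))   # ≈ 4.97

# candidate LSND values Delta m^2(LSND) = x eV^2
for x in (2.5, 1.25):
    print(x, log2(x / dm2_atm))  # ≈ 9.97 for x = 2.5, ≈ 8.97 for x = 1.25
```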

The interpretation would be that the three measurement outcomes correspond to 3 neutrinos with nearly identical masses in a given p-adic mass scale but having different p-adic mass scales. The atmospheric and solar p-adic length scales would come as powers (L(atm), L(sol)) = (2^{n/2}, 2^{(n+10)/2}) × L(k(LSND)), n = 9 or 10. For n = 10 the mass squared scales would come as powers of 2^{10}.

How to estimate the value of k(LSND)?

  1. Empirical data and p-adic mass calculations suggest that the neutrino mass is of order .1 eV. The most natural candidates for p-adic mass scales would correspond to k = 163, 167 or 169. The first two, k = 163 and 167, correspond to Gaussian Mersenne primes M_{G,k} = (1+i)^k - 1 and to the p-adic length scales L(163) = 640 nm and L(167) = 2.56 μm.

  2. p-Adic mass calculations predict that the ratio x = Δm^2/m^2 for the μ-e system has an upper bound x ∼ .4. This does not take into account the mixing effects but should give an upper bound for the mass squared difference affected by the mixing.

  3. The condition Δm^2/m^2 = .4×x, where x ≤ 1 parametrizes the mass difference, together with Δm^2(LSND) = 2.5 eV^2, gives m^2(LSND) ∼ 6.25 eV^2/x.

    x = 1/4 would give (k(LSND), k(atm), k(sol)) = (157, 167, 177). k(LSND) and k(atm) label two Gaussian Mersenne primes M_{G,k} = (1+i)^k - 1 in the series k = 151, 157, 163, 167 of Gaussian Mersennes. The scale L(151) = 10 nm defines the cell membrane thickness. All these scales could be relevant for DNA coiling. k(sol) = 177 is neither a Mersenne prime nor even a prime. The corresponding p-adic length scale is 82 μm, perhaps assignable to a neuron. Note that k = 179 is a prime.
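The quoted length scales all follow from the p-adic length scale formula L(k) = 2^{(k-151)/2} L(151) with L(151) = 10 nm; a quick numerical check:

```python
L151 = 10e-9  # meters; p-adic length scale L(151), cell membrane thickness

def L(k):
    # p-adic length scale hypothesis: scales come as half-octaves of L(151)
    return 2 ** ((k - 151) / 2) * L151

for k in (151, 157, 163, 167, 177):
    print(k, L(k))
# reproduces L(163) = 640 nm, L(167) = 2.56 micrometers, L(177) ≈ 82 micrometers
```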

What really happens when a neutrino characterized by p-adic length scale L(k_1) transforms to a neutrino characterized by p-adic length scale L(k_2)?
  1. The simplest possibility would be that k_1 → k_2 corresponds to a 2-particle vertex. The conservation of energy and momentum however prevents this process unless one has Δm^2 = 0. The emission of a weak boson is not kinematically possible since the Z^0 boson is so massive: for instance, solar neutrinos have energies in the MeV range. The presence of a classical Z^0 field could make the transformation possible, and TGD indeed predicts classical Z^0 fields with long range. The simplest assumption is that all classical electroweak gauge fields except the photon field vanish at string world sheets. This could in fact be guaranteed by a gauge choice analogous to the unitary gauge.

  2. The twistor lift of TGD however provides an alternative option. The twistor lift predicts that also M^4 has the analog of a Kähler structure, characterized by the Kähler form J(M^4), which is covariantly constant and self-dual and thus corresponds to parallel electric and magnetic components of equal strength. One expects that this gives rise to both a classical and a quantum field coupling to fermion number; call this U(1) gauge field U. The presence of J(M^4) induces P, T, and CP breaking and could be responsible for CP breaking in both the leptonic and quark sectors; it could also explain matter antimatter asymmetry (see this and this) as well as the large parity violation in living matter (chiral selection). The coupling constant strength α_1 is rather small due to the constraints coming from atomic physics (U couples to fermion number and this causes a small scaling of the energy levels). One has α_1 ∼ 10^-9, which is also the number characterizing matter antimatter asymmetry as the ratio of the baryon density to the CMB photon density.

    Already the classical long ranged U field could induce the neutrino transitions. The k_1 → k_2 transition could become allowed by conservation laws also through the emission of a massless U boson. The simplest situation corresponds to parallel momenta for the neutrinos and U. Conservation of energy and momentum gives E_1 = (p_1^2 + m_1^2)^{1/2} = E_2 + E(U) = (p_2^2 + m_2^2)^{1/2} + E(U) and p_1 = p_2 + p(U). Masslessness gives E(U) = p(U). This would give in good approximation
    p_2/p_1 = m_2^2/m_1^2 and E(U) = p_1 - p_2 = p_1(1 - m_2^2/m_1^2).

    One can ask whether the CKM mixing for quarks could involve a similar mechanism explaining the CP breaking. Also the transitions changing h_eff/h = n could involve U boson emission.
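The collinear kinematics above can in fact be solved exactly: E(U) = p(U) implies E_1 - p_1 = E_2 - p_2, and writing c = E_1 - p_1 = m_1^2/(E_1 + p_1) gives p_2 = (m_2^2 - c^2)/(2c). A numerical sketch (the mass and momentum values are arbitrary illustrations):

```python
from math import sqrt

def emit_U(p1, m1, m2):
    # neutrino (p1, m1) -> neutrino (p2, m2) + massless U, all momenta parallel
    E1 = sqrt(p1 * p1 + m1 * m1)
    c = m1 * m1 / (E1 + p1)        # = E1 - p1, computed without cancellation
    p2 = (m2 * m2 - c * c) / (2 * c)
    EU = p1 - p2                   # = p(U) = E(U)
    return p2, EU

p1, m1, m2 = 1e6, 0.1, 0.05       # eV; ultrarelativistic, m2 < m1 so EU > 0
p2, EU = emit_U(p1, m1, m2)
print(p2 / p1)                     # ≈ m2^2/m1^2 = 0.25
print(EU / p1)                     # ≈ 1 - m2^2/m1^2 = 0.75
```

The exact solution reproduces the ultrarelativistic approximation p_2/p_1 ≈ m_2^2/m_1^2 to high accuracy for MeV-scale momenta and sub-eV masses.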

This explanation looks rather nice because the mass squared difference ratios come as powers of two and one ends up with a detailed mechanism for the transition changing the p-adic length scale.

2. The explanation based on dark neutrinos

The second TGD inspired interpretation would be as a transformation of an ordinary neutrino to a dark variant of the ordinary neutrino with h_eff/h = n, occurring only if the situation is quantum critical (what would this mean now?). The dark neutrino would behave like an inert neutrino.

This proposal need not however be in conflict with the first one, since the transition k(LSND) → k_1 could produce a dark neutrino with a different value of h_eff/h = 2^{Δk}, scaling up the Compton scale by this factor. This transition could be followed by a transition back to a particle with p-adic length scale scaled up by 2^{Δk}. I have proposed that p-adic phase transitions occurring at criticality and requiring h_eff/h > 1 are important in biology.

There is evidence for a similar effect in the case of neutron decays. The neutron lifetime is found to be considerably longer than predicted. The TGD explanation is that part of the protons resulting in the beta decays of neutrons transform to dark protons and remain undetected, so that the lifetime looks longer than it really is. Note however that also now conservation laws give constraints, and the emission of a U photon might be involved also in this case. As a matter of fact, one can consider the possibility that the phase transitions changing h_eff/h = n involve the emission of a U photon too. The mere mixing of the ordinary and dark variants of a particle would induce a mass splitting, and the U photon would take care of energy momentum conservation.

See the chapter New Physics Predicted by TGD: I or the article Encountering the inert neutrino once again.


Tuesday, July 04, 2017

Could McKay correspondence generalize in TGD framework?

McKay correspondence states that the McKay graph for the irreducible representations (irreps) of a finite subgroup G ⊂ SU(2), characterizing their fusion algebra, is given by the extended Dynkin diagram of an ADE type Lie group. Minimal conformal models with SU(2) Kac-Moody algebra (KMA) allow a classification by the same diagrams in terms of the fusion algebras of their primary fields. The resolution of the singularities of complex algebraic surfaces in C^3 by blowing up implies the emergence of complex lines CP^1. The intersection matrix for the CP^1s is the Dynkin diagram of an ADE type Lie group. These results are highly inspiring concerning adelic TGD.
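The classical statement can be verified directly for the smallest interesting case, the quaternion group Q_8 ⊂ SU(2): tensoring each irrep with the defining 2-dimensional representation and decomposing yields the extended (affine) D_4 Dynkin diagram. A character-theoretic sketch, with the standard character table of Q_8 hard-coded:

```python
# Conjugacy classes of Q8: {1}, {-1}, {±i}, {±j}, {±k}; sizes 1, 1, 2, 2, 2
SIZES = [1, 1, 2, 2, 2]
ORDER = 8
IRREPS = {
    "1":  [1, 1, 1, 1, 1],
    "1i": [1, 1, 1, -1, -1],
    "1j": [1, 1, -1, 1, -1],
    "1k": [1, 1, -1, -1, 1],
    "2":  [2, -2, 0, 0, 0],   # defining 2-dim irrep, the embedding Q8 < SU(2)
}

def mult(chi_a, chi_b):
    # multiplicity <chi_a, chi_b>; all characters here are real-valued
    return sum(s * x * y for s, x, y in zip(SIZES, chi_a, chi_b)) // ORDER

V = IRREPS["2"]
# McKay graph: edge i -> j iff irrep j occurs in V (tensor) irrep i
adj = {i: [j for j in IRREPS
           if mult([v * x for v, x in zip(V, IRREPS[i])], IRREPS[j])]
       for i in IRREPS}
print(adj)
# the 2-dim node links to all four 1-dim nodes and vice versa: affine D4 diagram
```

The resulting graph, a central node of degree four with four leaves, is exactly the extended Dynkin diagram of D_4, illustrating the ADE classification the correspondence asserts.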

  1. The appearance of Dynkin diagrams in the classification of minimal conformal field theories (CFTs) inspires the conjecture that in adelic physics Galois groups Gal, or semidirect products of Gal with a discrete subgroup G of the automorphism group SO(3) (having SU(2) as its double covering!), classify the TGD generalizations of minimal CFTs. Also discrete subgroups of the octonionic automorphism group can be considered. The fusion algebra of the irreps of Gal would define also the fusion algebra of the KMA for the counterparts of the minimal fields. This would provide deep insights into the general structure of adelic physics.

  2. One cannot avoid the question whether the extended ADE diagram could code for a dynamical symmetry of a minimal CFT or its modification. If the Gal singlets formed from the primary fields of the minimal model define primary fields in the Cartan algebra of the ADE type KMA, then the standard free field construction would give the charged KMA generators. In the TGD framework this conjecture generalizes.

  3. A further conjecture is that the singularities of the space-time surface, imbedded as a 4-surface in its 6-D twistor bundle with the twistor sphere as fiber, could be classified by the McKay graph of Gal. The singular intersection of the Euclidian and Minkowskian regions of the space-time surface is especially interesting: the twistor spheres at the common points defining the light-like partonic orbits need not be the same but can have intersections with intersection matrix given by the McKay graph of Gal. The basic information about the adelic CFT would be coded by the general character of the singularities of the twistor bundle.

  4. In TGD also singularities in which the group Gal is reduced to its subgroup Gal/H, where H is a normal subgroup, are possible and would correspond to phase transitions reducing the value of Planck constant. What happens in these phase transitions to single particle states would be dictated by the decomposition of the representations of Gal to those of Gal/H, and the transition matrix elements could be evaluated.

See the new chapter Are higher structures needed in the categorification of TGD? or the article Could McKay correspondence generalize in TGD framework?.
