Tuesday, March 21, 2017

Early galactic collision gives support for TGD based model of galactic dark matter

The discoveries related to galaxies and dark matter emerge at an accelerating pace, and from the TGD point of view it seems that the puzzle of galactic dark matter is now solved.

The newest finding is described in the popular article This Gigantic Ring of Galaxies Could Bring Einstein's Gravity Into Question. What has been found is that in a local group of 54 galaxies, with Milky Way and Andromeda near its center, the other dwarf galaxies recede outwards as a ring. The local group lies in a good approximation in a plane, and the situation is said to look like a spinning umbrella from which water droplets fly radially outwards.

The authors of the article Anisotropic Distribution of High Velocity Galaxies in the Local Group argue that the finding can be understood if Milky Way and Andromeda had a nearly head-on collision about 10 billion years ago. Milky Way and Andromeda would have lost the radially moving dwarf galaxies in this collision during the rapid acceleration turning the direction of motion of both. A Coulomb collision is a good analog.

There are however problems. The velocities of the dwarfs are far too high, and the colliding Milky Way and Andromeda should have fused together due to the friction caused by their dark matter halos.

What does TGD say? In TGD galactic dark matter (actually also dark energy) resides at cosmic strings thickened to magnetic flux tubes, with galaxies along them like pearls along a necklace. The finding could perhaps be explained if the galaxies in the same plane make a near hit and generate the dwarf galaxies in the collision by the spinning umbrella mechanism.

In the TGD Universe dark matter is at cosmic strings, and this automatically predicts a constant velocity distribution. The friction created by a dark matter halo is absent, and the scattering in the proposed manner could be possible. The scattering event would basically be a scattering of approximately parallel cosmic strings, with Milky Way and Andromeda each forming one pearl in their respective cosmic necklaces.

But were Milky Way and Andromeda already associated with cosmic strings at that time, about 10 billion years ago? One cannot exclude this possibility. Note however that the binding to strings might have helped to avoid the fusion. The recent finding about the effective absence of dark matter about 10 billion years ago - velocity distributions decline at large distances - suggests that galaxies formed bound states with cosmic strings only later. This would be like the formation of neutral atoms from ions once energies are not too high! How fast things develop becomes clear from the fact that I posted the TGD explanation to my blog yesterday and replaced it with a corrected version this morning!

See the chapter TGD and Astrophysics of "Physics in Many-Sheeted Space-time" or the article TGD interpretation for the new discovery about galactic dark matter.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, March 20, 2017

Velocity curves of galaxies flatten for large redshifts

Sabine Hossenfelder gave a link to a popular article "Declining Rotation Curves at High Redshift" (see this) telling about a strange new finding about galactic dark matter. The rotation curves are declining in the early Universe, meaning distances of about 10 billion light years (see this). In other words, the rotation velocity of distant stars decreases with radius rather than approaching a constant - as if dark matter were absent and galaxies were baryon dominated. This challenges the halo model of dark matter. For illustrations of the rotation curves see the article. Of course, the conclusions of the article are uncertain.

Some time ago also a finding about the correlation of baryonic mass density with the density of dark matter emerged: the ScienceDaily article "In rotating galaxies, distribution of normal matter precisely determines gravitational acceleration" can be found here. The original article can be found in arXiv.org (see this). The TGD explanation (see this) involves only the string tension of cosmic strings and predicts the behavior of baryonic matter as a function of the distance from the center of the galaxy.

In standard cosmology, based on single-sheeted GRT space-time, large redshifts mean very early cosmology at the counterpart of a single space-time sheet, and the findings are very difficult to understand. What about the interpretation of the results in the TGD framework? Let us first summarize the basic assumptions behind TGD inspired cosmology and the view about galactic dark matter.

  1. The basic difference between TGD based and standard cosmology is that many-sheeted space-time brings in fractality and length scale dependence. In zero energy ontology (ZEO) one must specify in what length scale the measurements are carried out. This means specifying the causal diamond (CD) parameterized by moduli including its size. The larger the size of the CD, the longer the scale of the physics involved. This is of course not new for quantum field theorists. It is however news for cosmologists. The twistorial lift of TGD allows one to formulate the vision quantitatively.

  2. The TGD view resolves the paradox due to the huge value of the cosmological constant in very small scales. Kähler action and volume energy compensate each other, so that the effective cosmological constant decreases like the inverse of the p-adic length scale squared. The effective cosmological constant thus suffers a huge reduction in cosmic scales, which solves the greatest (the "most gigantic" would be a better attribute) quantitative discrepancy that physics has ever encountered. The smaller value of the Hubble constant in long length scales also finds an explanation (see this): the acceleration of cosmic expansion due to the effective cosmological constant decreases in long scales.

  3. In the TGD Universe galaxies are located along cosmic strings, which have thickened to magnetic flux tubes, like pearls in a necklace. The string tension of cosmic strings is proportional to the effective cosmological constant. There is no dark matter halo: dark matter and energy are at the magnetic flux tubes and automatically give rise to a constant velocity spectrum for the distant stars of galaxies, determined solely by the string tension. The model also allows one to understand the above mentioned finding about the correlation of baryonic and dark matter densities (see this).
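A back-of-the-envelope Newtonian sketch (using the standard straight-string approximation, not a TGD computation) shows why string tension alone gives a flat velocity spectrum. The gravitational acceleration at distance ρ from a long straight string with mass per unit length T is g(ρ) = 2GT/ρ, so that the circular orbit condition v^2/ρ = g(ρ) gives

v = (2GT)^{1/2} = constant ,

independent of the distance ρ from the string: the asymptotic rotation velocity is fixed by the string tension alone.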

What could be the explanation for the new findings about galactic dark matter?
  1. The idea of the first day is that the string tension of cosmic strings depends on the scale of observation, so that the asymptotic velocity of stars decreases in long length scales. The asymptotic velocity would be constant but smaller than for galaxies in smaller scales. The velocity graphs show that in the velocity range considered the velocity decreases. One cannot of course exclude the possibility that the velocity is asymptotically constant.

    The grave objection is that the scale in question is the galactic scale and is the same for all galaxies irrespective of distance: the scale characterizes the object rather than its distance from the observer. Fractality suggests a hierarchy of string like structures such that the string tension decreases in long scales and the asymptotic velocity associated with them decreases with the scale.

  2. The idea of the next day is that the galaxies at very early times have not yet formed bound states with cosmic strings, so that the velocities of stars are determined solely by the baryonic matter and approach zero at large distances. Only later would the galaxies condense around cosmic strings - somewhat like water droplets around a blade of grass. The formation of these gravitationally bound states would be analogous to the formation of bound states of ions and electrons below the ionization temperature, or to the formation of hadrons from quarks but taking place in a much longer scale. The early galaxies are indeed baryon dominated, and the decline of the rotation velocities would be real, as the toy rotation curves sketched below illustrate.
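The two alternatives can be contrasted with a toy calculation. The following minimal sketch assumes an exponential baryonic disk and adds the string contribution as a constant asymptotic velocity v_inf = (2GT)^{1/2}; all parameter values are illustrative only, not fits to data:

    import numpy as np

    # Toy rotation curves: baryons-only (early, unbound galaxies) versus
    # baryons plus a cosmic string contribution (string-bound galaxies).
    G = 4.30e-6      # gravitational constant in kpc*(km/s)^2/M_sun
    M_b = 5e10       # baryonic mass in solar masses (assumed)
    r_d = 3.0        # disk scale length in kpc (assumed)
    v_inf = 200.0    # asymptotic velocity (2GT)^(1/2) in km/s (assumed)

    def v_baryon(r):
        """Baryons alone: the enclosed mass saturates, so v declines at large r."""
        M_enc = M_b * (1.0 - np.exp(-r / r_d) * (1.0 + r / r_d))
        return np.sqrt(G * M_enc / r)

    def v_with_string(r):
        """String tension adds a constant term in quadrature: flat rotation curve."""
        return np.sqrt(v_baryon(r)**2 + v_inf**2)

    print("r [kpc]  baryons-only  baryons+string  [km/s]")
    for r in np.linspace(1.0, 50.0, 6):
        print(f"{r:7.1f}  {v_baryon(r):12.1f}  {v_with_string(r):14.1f}")

The baryons-only column declines with radius, as in the high-redshift data, while the string term flattens the curve at v_inf.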

See the chapter TGD and Astrophysics of "Physics in Many-Sheeted Space-time" or the article TGD interpretation for the new discovery about galactic dark matter.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Getting quantitative about violations of CP, T, and P

The twistor lift of TGD led to the introduction of a Kähler form also in the M4 factor of the imbedding space M4×CP2. The moduli space of causal diamonds (CDs), introduced already early on, allows one to save Poincare invariance at the level of WCW. One of the very nice things is that the self-duality of J(M4) leads to a new mechanism of breaking for P, CP, and T in long scales, where these breakings indeed take place. P corresponds to chirality selection in living matter, CP to matter antimatter asymmetry, and T could correspond to a preferred arrow of clock time. TGD allows both arrows, but T breaking could make one arrow dominant. Also the hierarchy of Planck constants is expected to be important.

Can one say anything quantitative about these various breakings?

  1. J(M4) is proportional to Newton's constant G in the natural scale of Minkowski coordinates defined by the twistor sphere of T(M4). Therefore CP breaking is expected to be proportional to l_P^2/R^2 or to its square root l_P/R. The estimate for l_P/R is X ≡ l_P/R ≈ 2^{-12} ≈ 2.5×10^{-4}.

    The determinant of the CKM matrix is equal to a phase factor by unitarity (UU^† = 1), and its imaginary part characterizes CP breaking. The imaginary part of the determinant should be proportional to the Jarlskog invariant J = ± Im(V_us V_cb V*_ub V*_cs) characterizing the CP breaking of the CKM matrix (see this).

    The recent experimental estimate is J ≈ 3.0×10^{-5}. This gives J/X ≈ 0.1, so that there is an order of magnitude deviation. The earlier experimental estimate used in p-adic mass calculations was almost an order of magnitude larger, consistent with the value of X. For B mesons CP breaking is about 50 times larger than for kaons, and it is clear that the Jarlskog invariant does not distinguish between different mesons, so that it is better to talk about orders of magnitude only.

    The parameter used to characterize matter antimatter asymmetry (see this) is the ratio R = [n(B) - n(B̄)]/n(γ) ≈ 9×10^{-11} of the difference of baryon and antibaryon densities to the photon density in cosmological scales. One has X^3 ≈ 1.4×10^{-11}, which is an order of magnitude smaller than R (see the numerical check after this list).

  2. What is interesting is that P is badly broken in long length scales, as is CP. The same could be true for T. Could this relate to the thermodynamical arrow of time? In ZEO state function reductions to the opposite boundary change the direction of clock time. Most physicists believe that the arrow of thermodynamical time, and thus also of clock time, is always the same. There is evidence that in living matter both arrows are possible. For instance, Fantappie has introduced the notion of syntropy as time reversed entropy. This suggests that the thermodynamical arrow of time could correspond to the dominance of the second arrow of time and be due to the self-duality of J(M4) leading to the breaking of T. For instance, the clock time spent in the time reversed phase could be considerably shorter than in the dominant phase. A quantitative estimate for the ratio of these times might be given by some power of the ratio X = l_P/R.
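A quick numerical check of the order-of-magnitude claims above (a sketch; the inputs are just the values quoted in the text):

    # Order-of-magnitude checks for the CP breaking estimates quoted above.
    X = 2.0**-12        # X = l_P/R
    J = 3.0e-5          # Jarlskog invariant, recent experimental estimate
    R = 9.0e-11         # baryon-to-photon ratio (n(B) - n(Bbar))/n(gamma)

    print(f"X     = {X:.2e}")        # ~ 2.4e-4
    print(f"J/X   = {J / X:.2f}")    # ~ 0.1: an order of magnitude deviation
    print(f"X^3   = {X**3:.2e}")     # ~ 1.5e-11
    print(f"R/X^3 = {R / X**3:.1f}") # R is an order of magnitude larger than X^3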
For background see chapter Some questions related to the twistor lift of TGD of "Towards M-matrix" or the article with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Saturday, March 18, 2017

Is there a duality between associative and co-associative space-time surfaces?

A more appropriate title of this posting would be "A new duality or an old duality seen from a number theoretic perspective?". The original proposal turned out to be partially wrong and I can only blame myself for breaking the rule "Wait for a week before posting!".

M8-H duality maps the preferred extremals in M8 to those in H = M4×CP2 and vice versa. The tangent spaces of an associative space-time surface in M8 would be quaternionic (Minkowski) spaces.

In M8 one can consider also co-associative space-time surfaces having an associative normal space. Could the co-associative normal spaces of associative space-time surfaces in the case of preferred extremals form an integrable distribution, therefore defining a space-time surface in M8 mappable to H by M8-H duality? This might be possible, but the associative tangent space and the normal space correspond to the same CP2 point, so that an associative space-time surface in M8 and its possibly existing co-associative companion would be mapped to the same surface of H.

This dead idea however inspires an idea about a duality mapping Minkowskian space-time regions to Euclidian ones. This duality would be analogous to inversion with respect to the surface of a sphere, which is a conformal symmetry. Maybe this inversion could be seen as the TGD counterpart of the finite-D conformal inversion at the level of space-time surfaces. There is also an analogy with the method of images used in some 2-D electrostatic problems to reflect the charge distribution outside a conducting surface to its virtual image inside the surface. The 2-D conformal invariance would generalize to its 4-D quaternionic counterpart. Euclidian/Minkowskian regions would be kind of Leibniz monads, mirror images of each other.

  1. If strong form of holography (SH) holds true, it would be enough to have this duality at the informational level relating only 2-D surfaces carrying the holographic information. For instance, Minkowskian string world sheets would have duals at the level of space-time surfaces in the sense that their 2-D normal spaces in X4 form an integrable distribution defining tangent spaces of a 2-D surface. This 2-D surface would have induced metric with Euclidian signature.

    The duality could relate either a) Minkowskian and Euclidian string world sheets or b) Minkowskian/Euclidian string world sheets and partonic 2-surfaces common to Minkowskian and Euclidian space-time regions. Option a) combined with b) is apparently the most powerful option information theoretically, but a) is actually implied by b) due to the transitivity of the duality: Minkowskian string world sheets are dual with partonic 2-surfaces, which in turn are dual with Euclidian string world sheets.

    1. Option a): The dual of a Minkowskian string world sheet would be a Euclidian string world sheet in a Euclidian region of the space-time surface, most naturally in the Euclidian "wall neighbour" of the Minkowskian region. At the parton orbits defining the light-like boundaries between the Minkowskian and Euclidian regions the signature of the 4-metric is (0,-1,-1,-1), and the induced 3-metric has signature (0,-1,-1) allowing light-like curves. Minkowskian and Euclidian string world sheets would naturally share these light-like curves as common parts of their boundaries.

    2. Option b): Minkowskian/Euclidian string world sheets would have partonic 2-surfaces as duals. The normal space of the partonic 2-surface at the intersection of a string world sheet and a partonic 2-surface would be the tangent space of the string world sheet, so that this duality could make sense locally. The different topologies for string world sheets and partonic 2-surfaces force one to challenge this option globally, but it might hold in some finite region near the partonic 2-surface. The weak form of electric-magnetic duality could closely relate to this duality.
    In the case of elementary particles, regarded as pairs of wormhole contacts connected by flux tubes and associated strings, this would give a rather concrete space-time view about the stringy structure of an elementary particle. One would have a pair of relatively long (Compton length) Minkowskian string world sheets at parallel space-time sheets, completed to a parallelepiped by adding Euclidian string world sheets connecting the two space-time sheets at two extremely short (CP2 size scale) Euclidian wormhole contacts. These parallelepipeds would define the lines of scattering diagrams analogous to the lines of Feynman diagrams.
This duality looks like new but as already noticed is actually just the old electric-magnetic duality seen from number-theoretic perspective.

For background see chapter Some questions related to the twistor lift of TGD of "Towards M-matrix" or the article with the same title.

About the generalization of dual conformal symmetry and Yangian in TGD

The discovery of the dual of the conformal symmetry of gauge theories was crucial for the development of the twistor Grassmannian approach. The D=4 conformal generators acting on twistors have a dual representation in which they act on momentum twistors: one has dual conformal symmetry, which becomes manifest in this representation. These two separate symmetries extend to Yangian symmetry providing a powerful constraint on the scattering amplitudes in the twistor Grassmannian approach for N=4 SUSY.

In TGD the conformal Yangian extends to the super-symplectic Yangian - actually, all symmetry algebras have a Yangian generalization, with locality generalized to multi-locality with respect to partonic 2-surfaces. The generalization of the dual conformal symmetry has however remained obscure. In the following I describe what the generalization of the two conformal symmetries and Yangian symmetry would mean in the TGD framework.

One also ends up with a proposal of an information theoretic duality between Euclidian and Minkowskian regions of the space-time surface inspired by number theory: one might say that the dynamics of Euclidian regions is a mirror image of the dynamics of Minkowskian regions. In question is a generalization of the conformal reflection on a sphere and of the method of image charges in 2-D electrostatics to the level of space-time surfaces, allowing a concrete construction recipe for both Euclidian and Minkowskian regions of preferred extremals. One might say that Minkowskian and Euclidian regions are analogous to Leibnizian monads reflecting each other in their internal dynamics.

See the chapter Some Questions Related to the Twistor Lift of TGD of "Towards M-matrix" or the article with the same title.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Tuesday, March 14, 2017

Could second generation of weak bosons explain the reduction of proton charge radius?

The discovery by Pohl et al (2010) was that the charge radius of the proton deduced from the muonic version of hydrogen atom - .842 fm - is about 4 per cent smaller than the charge radius .875 fm deduced from ordinary hydrogen atom. This is in complete conflict with the cherished belief that atomic physics belongs to the museum of science (for details see the Wikipedia article). The title of the article Quantum electrodynamics-a chink in the armour? published in Nature expresses well the possible implications, which might actually extend well beyond QED.

Quite recently (2016) new, more precise data has emerged from Pohl et al (see this). Now the reduction of the charge radius of the muonic variant of deuterium has been measured. The charge radius is reduced from 2.1424 fm to 2.1256 fm, a reduction of about .017 fm or .8 per cent (see this). The charge radius of the proton deduced from it is reported to be consistent with the charge radius deduced from muonic hydrogen. The anomaly seems therefore to be real. Deuterium data provide a further challenge for various models.

The finding is a problem either for QED or for the standard view about what the proton is. The Lamb shift is the effect distinguishing between states of hydrogen atom having otherwise the same energy but different angular momentum. The effect is due to the quantum fluctuations of the electromagnetic field. The energy shift factorizes into a product of two expressions. The first one describes the effect of these zero point fluctuations on the position of the electron or muon, and the second one characterizes the average of the nuclear charge density as "seen" by the electron or muon. The latter should be the same as in the case of ordinary hydrogen atom, but it is not. Does this mean that the presence of the muon reduces the charge radius of the proton as determined from the muon wave function? This of course looks implausible, since the radius of the proton is so small. Note that a compression of the muon's wave function would have the same effect.

Before continuing it is good to recall that QED and quantum field theories in general have difficulties with the description of bound states: something which has not received too much attention. For instance, the van der Waals force at molecular scales is a problem. A possible TGD based explanation, and a possible solution of the difficulties, proposed two decades ago, is that for bound states the two charged particles (say nucleus and electron, or two atoms) correspond to two 3-D surfaces glued by flux tubes rather than being idealized to points of Minkowski space. This would make the non-relativistic description based on the Schrödinger amplitude natural and replace the description based on the Bethe-Salpeter equation having horrible mathematical properties.

The basic idea of the original model of the anomaly (see this) is that the muon has some probability to end up at the magnetic flux tubes assignable to the proton. In this state it would not contribute to the ordinary Schrödinger amplitude. The effect of this would be a reduction of |Ψ|^2 near the origin and an apparent reduction of the charge radius of the proton. The weakness of the model is that it cannot make a quantitative prediction for the size of the effect. Even the sign is questionable. Only the S-wave binding energy is affected considerably, but does the binding energy really increase by the interaction of the muon with the quarks at the magnetic flux tubes? Is the average of the charge density seen by the muon in the S-wave state larger; in other words, does it spend more time near the proton, or do the quarks spend more time at the flux tubes?


In the following a new model for the anomaly will be discussed.

  1. The model is inspired by data about the breaking of universality of weak interactions in neutral B decays, possibly manifesting itself also in the anomaly in the magnetic moment of the muon. Also the different values of the charge radius deduced from hydrogen atom and muonic hydrogen could reflect the breaking of universality. In the original model the breaking of universality is only effective.

  2. TGD indeed predicts a dynamical U(3) gauge symmetry whose 8+1 gauge bosons correspond to pairs of fermion and anti-fermion at opposite throats of a wormhole contact. Throats are characterized by genus g = 0,1,2, so that bosons are superpositions of states labelled by (g_1,g_2). Fermions correspond to a single wormhole throat carrying fermion number and behave as a U(3) triplet labelled by g.

    The charged gauge bosons with different genera for the wormhole throats are expected to be very massive. The 3 neutral gauge bosons, which are superpositions of states (g,g) with the same genus at both throats, are expected to be lighter. Their charge matrices are orthogonal and necessarily break the universality of electroweak interactions. For the lowest boson family - the ordinary gauge bosons - the charge matrix is proportional to the unit matrix. The exchange of the second generation bosons Z^0_1 and γ_1 would give rise to a Yukawa potential increasing the binding energies of S-wave states. Therefore the Lamb shift, defined as the difference between the energies of S and P waves, is increased, and the charge radius deduced from the Lamb shift becomes smaller.

  3. The model thus predicts the correct sign for the effect, but the size of the effect from a naive estimate assuming only γ_1 exchange and α_1 = α for M = 2.9 TeV is almost an order of magnitude too small. The values of the gauge couplings α_1 and α_Z,1 are free parameters, as are the mixing angles between the states (g,g). The effect is also proportional to the ratio (m_μ/M(boson))^2. It turns out that the inclusion of the Z^0_1 contribution and the assumption that α_1 and α_Z,1 are near the color coupling strength α_s gives a correct prediction.

Motivations for the breaking of electroweak universality

The anomaly of the charge radius could be explained also as a breaking of the universality of weak interactions. Also other anomalies challenging universality exist. By universality, the decays of the neutral B-meson to lepton pairs should be the same apart from corrections coming from different lepton masses, but this does not seem to be the case (see this). There is also an anomaly in the muon's magnetic moment, discussed briefly here. This leads one to ask whether these anomalies could be due to the failure of universality of electroweak interactions.

The proposal for the explanation of the muon's anomalous magnetic moment and the anomaly in the decays of the B-meson is inspired by a recent very special di-electron event and involves higher generations of weak bosons predicted by TGD, leading to a breaking of lepton universality. Both Tommaso Dorigo (see this) and Lubos Motl (see this) tell about a spectacular 2.9 TeV di-electron event not observed in previous LHC runs. A single event of this kind is of course most probably just a fluctuation, but the human mind is such that it tries to see something deeper in it - even if practically all trials of this kind are chasing of mirages.

Since the decay is leptonic, the typical question is whether the dreamed-for state could be an exotic Z boson. This is also the reaction in the TGD framework. The first question to ask is whether the weak bosons assignable to the Mersenne prime M_89 have scaled up copies assignable to the Gaussian Mersenne M_79. The scaling factor for mass would be 2^{(89-79)/2} = 32. When applied to the Z mass, equal to about .09 TeV, one obtains 2.88 TeV, not far from 2.9 TeV. Eureka!? Looks like a direct scaled up version of Z!? W should have a similar variant around 2.6 TeV.
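The scaling arithmetic in a few lines (a sketch; the W estimate simply assumes the same factor applied to m_W ≈ 0.080 TeV):

    # p-Adic mass scaling from Mersenne prime M_89 to Gaussian Mersenne M_G,79:
    scale = 2 ** ((89 - 79) / 2)          # = 32
    m_Z, m_W = 0.091, 0.080               # ordinary Z and W masses in TeV
    print(f"scale factor = {scale:.0f}")
    print(f"scaled Z: {scale * m_Z:.2f} TeV")  # ~ 2.9 TeV
    print(f"scaled W: {scale * m_W:.2f} TeV")  # ~ 2.6 TeV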

TGD indeed predicts exotic weak bosons and also gluons.

  1. The TGD based explanation of the family replication phenomenon in terms of the genus-generation correspondence forces one to ask whether gauge bosons, identifiable as pairs of fermion and antifermion at opposite throats of a wormhole contact, could exhibit a bosonic counterpart of family replication. The dynamical SU(3) assignable to the three lowest fermion generations, labelled by the genus of the partonic 2-surface (wormhole throat), means that fermions are combinatorially SU(3) triplets. Could the 2.9 TeV state - if it exists - correspond to this kind of state in the tensor product of triplet and antitriplet? The mass of the state should depend, besides the p-adic mass scale, also on the structure of the SU(3) state, so that the mass would be different. This difference should be very small.

  2. The dynamical SU(3) could be broken so that wormhole contacts with different genera for the throats would be more massive than those with the same genera. This would give an SU(3) singlet and two neutral states, which are analogs of η', η, and π^0 in Gell-Mann's quark model. The analogs of η and π^0 and the analog of η', which I have identified as the standard weak boson, would have different masses. But how large is the mass difference?

  3. These 3 states are expected to have identical masses for the same p-adic mass scale if the mass comes mostly from the analog of the hadronic string tension assignable to the magnetic flux tube connecting the two wormhole contacts associated with any elementary particle in the TGD framework (this is forced by the condition that the flux tube carrying monopole flux is closed and makes a very flattened square shaped structure with the long sides of the square at different space-time sheets). p-Adic thermodynamics would give only a very small genus dependent contribution to the mass if the p-adic temperature is T = 1/2, as one must assume for gauge bosons (T = 1 for fermions). Hence the 2.9 TeV state could indeed correspond to this kind of state.

The sign of the effect is predicted correctly and the order of magnitude comes out correctly

Could the exchange of massive M_G,79 photon and Z^0 give rise to an additional electromagnetic interaction inducing the breaking of universality? The first observation is that the binding energy of the S-wave state increases, but there is practically no change in the energy of the P wave state. Hence the effective charge radius r_p, as deduced from the parameterization of the binding energy in terms of the proton charge radius, indeed decreases.

Also the order of magnitude for the effect must come out correctly.

  1. The additional contribution to the effective Coulomb potential is a Yukawa potential. In the S-wave state it gives a contribution to the binding energy, which in a good approximation is given by the expectation value of the Yukawa potential, parameterized as

    V(r) = g^2 e^{-Mr}/r , g^2 = 4π kα .

    The expectation value differs from zero significantly only in S-wave states characterized by the principal quantum number n. Since the exponential goes to zero within the p-adic length scale associated with the 2.9 TeV mass, which is roughly a factor 32 shorter than the intermediate boson length scale, the hydrogen atom wave function is constant in excellent approximation over the effective integration volume. This gives for the energy shift

    Δ E = g^2 |Ψ(0)|^2 × I ,

    |Ψ(0)|^2 = [2^2/n^2] × (1/a_0^3) ,

    a_0 = 1/(mα) ,

    I = ∫ (e^{-Mr}/r) r^2 dr dΩ = 4π/3M^2 .

    For the energy shift and its ratio to the ground state energy

    E_n = (α^2/2n^2) × m

    one obtains the expressions

    Δ E_n = 64π^2 (α/n^2) α^3 (m/M)^2 × m ,

    Δ E_n/E_n = (2^7/3) π^2 α^2 k^2 (m/M)^2 .

    For k=1 and M = 2.9 TeV one has Δ E_n/E_n ≈ 3×10^{-11} for the muon.
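The final formula is easy to check numerically (a sketch assuming m_μ = 105.7 MeV and M = 2.9 TeV):

    import math

    # Relative S-wave shift: dE_n/E_n = (2^7/3)*pi^2*alpha^2*k^2*(m/M)^2
    alpha = 1 / 137.036
    m_mu = 105.66e-3          # muon mass in GeV
    M = 2.9e3                 # exchanged boson mass in GeV
    k = 1.0

    ratio = (2**7 / 3) * math.pi**2 * alpha**2 * k**2 * (m_mu / M)**2
    print(f"dE_n/E_n = {ratio:.1e}")   # ~ 3e-11 for k = 1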

Consider next the Lamb shift.

  1. The Lamb shift as the difference of energies between S and P wave states (see this) is approximately given by

    Δ_n(Lamb)/E_n = 13α^3/2n .

    For n=2 this gives Δ_2(Lamb)/E_2 = 4.9×10^{-7}.

  2. The parameterization for the Lamb shift reads as

    Δ E(r_p) = a - b r_p^2 + c r_p^3
    = 209.968(5) - 5.2248 × r_p^2 + 0.0347 × r_p^3 meV ,

    where the charge radius r_p = .8750 is expressed in femtometers and the energy in meV.

  3. The reduction of r_p by 3.3 per cent allows one to estimate the reduction of the Lamb shift (the attractive additional potential reduces it). The relative change of the Lamb shift is

    x = [Δ E(r_p) - Δ E(r_p(exp))]/Δ E(r_p)

    = [- 5.2248 × (r_p^2 - r_p(exp)^2) + 0.0347 × (r_p^3 - r_p(exp)^3)]/[209.968(5) - 5.2248 × r_p^2 + 0.0347 × r_p^3] .

    The estimate gives x = 1.2×10^{-3}.

This value can be compared with the prediction. For n=2 the ratio of Δ E_n to Δ E_n(Lamb) is

x = Δ E_n/Δ E_n(Lamb) = k^2 × [2^9 π^2/(3×13α)] × (m/M)^2 .

For M = 2.9 TeV the numerical estimate gives x ≈ (1/3)×k^2×10^{-4}. The value of x deduced from the experimental data is x ≈ 1.2×10^{-3}. There is a discrepancy of one order of magnitude. For k ≈ 5 the correct order of magnitude is obtained. There are thus good hopes that the model works.
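Both sides of the comparison can be reproduced with a few lines (a sketch; radii and masses as quoted above):

    import math

    alpha = 1 / 137.036
    m_mu, M = 105.66e-3, 2.9e3          # GeV

    # Experimental side: relative Lamb shift change for a 3.3 per cent
    # reduction of r_p, using the parameterization given above.
    def dE(r):                          # meV, r in fm
        return 209.968 - 5.2248 * r**2 + 0.0347 * r**3

    rp = 0.8750
    rp_exp = rp * (1 - 0.033)
    x_exp = abs(dE(rp) - dE(rp_exp)) / dE(rp)
    print(f"x from data:   {x_exp:.1e}")     # ~ 1.2e-3

    # Theory side for k = 1: x = [2^9*pi^2/(3*13*alpha)] * (m/M)^2
    x_th = (2**9 * math.pi**2 / (3 * 13 * alpha)) * (m_mu / M)**2
    print(f"x theory, k=1: {x_th:.1e}")      # ~ 0.2e-4
    print(f"required k:    {math.sqrt(x_exp / x_th):.1f}")  # ~ 5-7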

The contribution of Z^0_1 exchange was neglected in the above estimate. Is it present, and can it explain the discrepancy?

  1. In the case of deuterium the weak isospins of the proton and the neutron are opposite, so that their contributions to the Z^0_1 vector potential cancel. If the Z^0_1 contribution for the proton could be neglected, one would have Δr_p = Δr_d.

    One however has Δr_p ≈ 2.75 Δr_d. Hence the Z^0_1 contribution to Δr_p should satisfy Δr_p(Z^0_1) ≈ 1.75×Δr_p(γ_1). This requires α_Z,1 > α_1, which is true also for the ordinary gauge bosons. The weak isospins of electron and proton are opposite, so that the atom is a weak isospin singlet in the Abelian sense, and one has I^3_p I^3_μ = -1/4 and an attractive interaction. The condition relating Δr_p(Z^0_1) and Δr_p(γ_1) suggests

    α_Z,1/α_1 ≈ 28/6 = 4+2/3 .

    In the standard model one has α_Z/α = 1/[sin^2(θ_W)cos^2(θ_W)] = 5.6 for sin^2(θ_W) = .23. One has the bound α_Z,1/α_1 ≥ 4, saturated for sin^2(θ_W,1) = 1/2. The Weinberg angle can be expressed as

    sin^2(θ_W,1) = (1/2)[1 - (1 - 4(α_1/α_Z,1))^{1/2}] .

    α_Z,1/α_1 ≈ 28/6 gives sin^2(θ_W,1) = (1/2)[1 - (1/7)^{1/2}] ≈ .31 (see the numerical check after this list).

    The contribution to the axial part of the potential depending on spin need not cancel and could give a spin dependent contribution for both proton and deuteron.

  2. If the scale of α_1 and α_Z,1 is that of α_s, and if the factor 2.75 emerges in the proposed manner, one has k^2 ≈ 2.75×10 = 27.5, rather near to the rough estimate k^2 ≈ 27 from the data for the proton.

    Note however that there are mixing angles involved, corresponding to the diagonal hermitian family charge matrix Q = (a,b,c) satisfying a^2+b^2+c^2 = 1 and the condition a+b+c = 0 expressing the orthogonality with the electromagnetic charge matrix (1,1,1)/3^{1/2}, which expresses the electroweak universality of the ordinary electroweak bosons. For instance, one could have (a,b,c) = (0,1,-1)/2^{1/2} for the second generation and (a,b,c) = (2,-1,-1)/6^{1/2} for the third generation. In the latter case the above estimate would be scaled down: α_1 → 2α_1/3 ≈ 1/20.5.
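The Weinberg angle formula and the orthogonality of the family charge matrices are easy to verify numerically (the charge matrices below are the illustrative choices given above, not derived values):

    import math
    import numpy as np

    # sin^2(theta_W,1) = (1/2)*[1 - (1 - 4*alpha_1/alpha_Z,1)^(1/2)]
    ratio = 28 / 6                            # alpha_Z,1/alpha_1
    sin2 = 0.5 * (1 - math.sqrt(1 - 4 / ratio))
    print(f"sin^2(theta_W,1) = {sin2:.2f}")   # ~ 0.31

    # Orthogonality with the electromagnetic charge matrix (1,1,1)/sqrt(3):
    em = np.array([1, 1, 1]) / math.sqrt(3)
    q2 = np.array([0, 1, -1]) / math.sqrt(2)     # second generation (assumed)
    q3 = np.array([2, -1, -1]) / math.sqrt(6)    # third generation (assumed)
    for q in (q2, q3):
        print(f"{np.dot(em, q):+.1e}")   # ~ 0: universality broken orthogonally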

To sum up, the proposed model is successful at the quantitative level, allowing one to understand the different changes of the charge radius for proton and deuteron and to estimate the values of the electroweak couplings of the second generation of weak bosons, apart from the uncertainty due to the family charge matrix. The muon's magnetic moment anomaly and the decays of neutral B mesons allow one to test the model and perhaps fix the remaining two mixing angles.

See the article Could second generation of weak bosons explain the reduction of proton charge radius?

For background see the chapters New Physics Predicted by TGD: Part I and New Physics Predicted by TGD: Part II.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Monday, March 13, 2017

What about actual realization of Lorentz invariant synchronization?

I wrote one day ago about the synchronization of clocks and found that clocks distributed at the hyperboloids of the light-cone assignable to a CD can in principle be synchronized in a Lorentz invariant manner (see this). But what about the actual Lorentz invariant synchronization of the clocks? Could TGD say something non-trivial about this problem? I received an interesting link relating to this (see this). The proposed theory deals with the fundamental uncertainty of clock time due to quantum-gravitational effects. There are of course several uncertainties involved, since a quantum theory of gravity does not (officially) exist yet!

  1. An operationalistic definition of time is adopted in the spirit of the empiricist tradition. Einstein was also an empiricist and talked about networks of synchronized clocks. Nowadays particle physicists do not talk much about them. Symmetry based thinking dominates, and Special Relativity is taken as a postulate about symmetries.

  2. In quantum gravity the situation becomes rather complex. If the quantization attempt tries to realize quantum states as superpositions of 3-geometries, one loses time totally. If GRT space-time is taken to be a small deformation of Minkowski space, one has a path integral, and classical solutions of Einstein's equations define the background.

    The difficult problem is the identification of the Minkowski coordinates unless one regards GRT as QFT in Minkowski space. Beyond the QFT picture, in astrophysical scales one must consider solutions of Einstein's equations representing astrophysical objects. For the basic solutions of Einstein's equations the identification of the Minkowski coordinates is obvious, but in the general case, such as a many-particle system, this is not anymore so. This is a serious obstacle in the interpretation of the classical limit of GRT and its application to planetary systems.

What about the situation in TGD? The particle physicist inside me trusts symmetry based thinking and has been somewhat reluctant to fill space-time with clocks, but I am ready to start the job if necessary! Since I am lazy, I of course hope that Nature might have done this already, and the following argument suggests that this might be the case!
  1. In quantum TGD quantum states can be regarded as superpositions of space-time surfaces inside a causal diamond of the imbedding space H = M4×CP2. This raises the question of how one can define a universal time coordinate for them. Some kind of absolute time seems to be necessary.

  2. In TGD the introduction of zero energy ontology (ZEO) and causal diamonds (CDs) as perceptive fields of conscious entities certainly brings in something new, which might help. A CD is the intersection of future and past directed light-cones, analogous to a big bang followed by a big crunch. This is however only an analogy, since a CD represents only a perceptive field, not the entire Universe.

    The imbeddability of space-time to CD×CP2 ⊂ H = M4×CP2 allows the light-cone proper time a, a^2 = t^2 - r^2, near either CD boundary as a universal time coordinate, "cosmic time". At a = constant hyperboloids Lorentz invariant synchronisation is possible. The coordinate a is a kind of absolute time near a given boundary of the CD representing the perceptive field of a particular conscious observer, and serves as a common time for all space-time surfaces in the superposition. Newton would not have been so wrong after all.

    Also the adelic vision involving number theoretic arguments selects a as a unique time coordinate. In the p-adic sectors of the adele, number theoretic universality (NTU) forces discretization, since the coordinates of the hyperboloid consist of a hyperbolic angle and ordinary angles. p-Adically one can realize neither the angles nor their hyperbolic counterparts. This demands discretization in terms of roots of unity (phases) and roots of e (exponents of hyperbolic angles), inducing a finite-D extension of p-adic number fields in accordance with the finiteness of cognition. a as a Lorentz invariant would be a genuine p-adic coordinate, which can in principle be continuous in the p-adic sense. Measurement resolution however discretizes also a.

    This discretization leads to tessellations of the a = constant hyperboloid having an interpretation in terms of a cognitive representation in the intersection of the real and various p-adic variants of the space-time surface, with points having coordinates in the extension of rationals involved. There are two choices for a. The correct choice corresponds to the passive boundary of the CD, unaffected in state function reductions.

  3. Clearly, the vision about space-time as a 4-surface of H and NTU show their predictive power. Even more, adelic physics itself might solve the problem of Lorentz invariant synchronization in terms of a clock network assignable to the nodes of a tessellation!

    Suppose that a tessellation defines a clock network. What could synchronization mean? Certainly strong correlations between the nodes of the network. Could the correlation be due to maximal quantum entanglement (maximal at least in the p-adic sense), so that the network of clocks would behave like a single quantum clock? A Bose-Einstein condensate of clocks, as one might say? Could quantum entanglement in astrophysical scales, predicted by TGD via the h_gr = h_eff = n×h hypothesis, help to establish synchronized clock networks even in astrophysical scales? Could Nature guarantee Lorentz invariant synchronization automatically?

    What would be needed would be not only a 3-D lattice but also oscillatory behaviour in time. This is more or less a time crystal (see this and this)! Time crystal like states have been observed, but they require a feed of energy, in contrast to what Wilczek proposed. In the TGD Universe this would be due to the need to generate large h_eff/h = n phases, since the energy of the states increases with n. In biological systems this requires metabolic energy feed. Can one imagine even a cosmic 4-D lattice for which there would be an analog of metabolic energy feed?

    I already have a model for tensor networks, and also here a appears naturally (see this). Tensor networks would correspond at the imbedding space level to tessellations of the hyperboloid t^2 - r^2 = a^2, analogous to 3-D lattices but with recession velocity taking the role of quantized position for the points of the lattice (a minimal sketch of such a tessellation is given below). They would induce tessellations of the space-time surface: the space-time surface would go through the points of the tessellation (having also a CP2 counterpart). The number of these tessellations is huge. Clocks would be at the nodes of these lattice like structures. Maximal entanglement would be the key feature of this network. It would make the clocks at the nodes one big cosmic clock.

    If astrophysical objects serving as clocks tend to be at the nodes of the tessellation, a quantization of cosmic redshifts is predicted! What is fascinating is that there is evidence for this (for the TGD based model see this and this)! Maybe the dark matter fraction of the Universe has taken care of the Lorentz invariant synchronization, so that we need not worry about it!
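As a minimal illustration of such a tessellation-based clock network (a sketch only: the quantization of the hyperbolic angle eta and the phase phi below is an illustrative choice, not derived from number theory), one can generate nodes on an a = constant hyperboloid and check that they are all synchronized in the Lorentz invariant sense:

    import numpy as np

    # Nodes on a 2+1-D hyperboloid t^2 - x^2 - y^2 = a^2 with quantized
    # hyperbolic angle eta and quantized phase phi (roots-of-unity analog).
    a = 1.0            # light-cone proper time of the hyperboloid
    N, M = 4, 8        # discretization depths (assumed)

    nodes = []
    for k in range(N + 1):
        eta = k / N                    # quantized hyperbolic angle
        for j in range(M):
            phi = 2 * np.pi * j / M    # quantized phase
            t = a * np.cosh(eta)
            x = a * np.sinh(eta) * np.cos(phi)
            y = a * np.sinh(eta) * np.sin(phi)
            nodes.append((t, x, y))

    # Every node satisfies t^2 - x^2 - y^2 = a^2: a clock at each node shows
    # the same Lorentz invariant time a.
    dev = max(abs(t**2 - x**2 - y**2 - a**2) for t, x, y in nodes)
    print(f"{len(nodes)} nodes, max deviation from a^2: {dev:.1e}")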

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Sunday, March 12, 2017

Is Lorentz invariant synchronization of clocks possible?

I participated in an FB discussion with several anti-Einsteinians. As a referee I have expressed my opinion about numerous articles claiming that Einstein's special or general relativity contains a fatal error not noticed by anyone before. I have tried to tell that colleagues are extremely eager to find a mistake in the work of a colleague (unless they can silence the colleague), so that logical errors can be safely excluded: if something goes wrong, it is at the level of basic postulates. In vain.

Once I had a long email discussion with a professor of logic who claimed to have found a logical mistake in the deduction of the time dilation formula. It was easy to find that he thought in terms of Newtonian space-time, which is of course in conflict with the relativistic view. The logical error was his, not Einstein's. I tried to tell this. In vain again.

This time I was demanded to explain what goes wrong in the 2-page article of Stephen Crothers (see this). This article was a good example of one's own logical error projected onto Einstein. The author assumed, besides the basic formulas for the Lorentz transformation, also a synchronization of clocks so that they show the same time everywhere (about how this is achieved see this).

Even more: Crothers assumes that Einstein assumed that this synchronization is Lorentz invariant. Lorentz invariant synchronization of clocks is however not possible for the linear time coordinate of Minkowski space, as also Crothers demonstrates. Einstein was wrong! Or was he? No: Einstein of course did not assume Lorentz invariant synchronization!

The assumption that the synchronization of a clock network is invariant under Lorentz transformations is of course in conflict with SR. In a Lorentz boosted system the clocks are not in synchrony. This expresses just Einstein's basic idea about the relativity of simultaneity. The basic message of Einstein is misunderstood! The Newtonian notion of absolute time again!

The basic predictions of SR - time dilation and Lorentz contraction - do not depend on the model of synchronization of clocks. Time dilation and Lorentz contraction follow from basic geometry of Minkowskian space-time extremely easily.

Draw a system K and a system K' moving with constant velocity with respect to K. The t'- and x'-axes of K' make an angle smaller than π/2 with each other and lie in the first quadrant.

  1. Assume first that K' corresponds to the rest system of the particle. You see that the projection of the segment (0,t') of the t'-axis to the t-axis is shorter than the segment (0,t'): time dilation.

  2. Take K to be the system of the stationary observer. Project the segment L = (0,x') of the x'-axis to a segment on the x-axis. It is shorter than L: Lorentz contraction.

There is therefore no need to build synchronized networks of clocks to deduce time dilation and Lorentz contraction. They follow from Minkowskian geometry.
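For completeness, the same results as one-line formulas following from the invariance of the interval (standard textbook material, in units with c = 1): for a clock moving with velocity v = x/t one has

τ^2 = t^2 - x^2 = t^2(1 - v^2) ⇒ τ = t (1 - v^2)^{1/2} (time dilation) ,

and for a rod of rest length L_0, measured at fixed t, one obtains L = L_0 (1 - v^2)^{1/2} (Lorentz contraction).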

This however raises a question: is it possible to find a system in which synchronization is possible in a Lorentz invariant manner? The quantity a^2 = t^2 - x^2 defines the proper time coordinate a along time like geodesics as a Lorentz invariant time coordinate of the light-cone. The a = constant hyper-surfaces are now hyperboloids. If you have a synchronized network of clocks, its Lorentz boost is also synchronized. General coordinate invariance of course allows this choice of time coordinate.

For Robertson-Walker cosmologies with sub-critical mass density the time coordinate a is Lorentz invariant, so that one can have Lorentz invariant synchronization of clocks. General Coordinate Invariance allows infinitely many choices of time coordinate, and the condition of Lorentz invariant synchronization fixes the time coordinate to cosmic time (or a function of it, to be precise). To my opinion this is a rather interesting fact.

What about TGD? In TGD space-time is a 4-D surface in H = M4×CP2. a^2 = t^2 - r^2 defines a Lorentz invariant time coordinate a in the future light-cone M4_+ ⊂ M4, which can be used as a time coordinate also for space-time surfaces.

Robertson-Walker cosmologies can be imbedded as 4-surfaces to H = M4×CP2. The empty cosmology would be just the light-cone M4_+ imbedded in H by putting the CP2 coordinates constant. If the CP2 coordinates depend on the M4_+ proper time a, one obtains more general expanding RW cosmologies. One can also have critical and over-critical cosmologies, for which Lorentz transformations are not isometries of the a = constant section. Also in this case clocks are synchronized in a Lorentz invariant manner. The duration of these cosmologies is finite: the mass density diverges after a finite time.

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.

Saturday, March 11, 2017

Are viruses fragments of topological quantum computer code?

I was listening to a highly interesting talk about viruses in Helsinki by Dr. Matti Jalasvuori, a molecular biologist working as a researcher at the University of Jyväskylä (see this). He has published a book about viruses in Finnish titled "Virus. Elämän synnyttäjä, kuoleman kylväjä, ajatusten tartuttaja" ("Virus. Begetter of life, sower of death, infector of thoughts") (see this).

I learned an extremely interesting new-to-me fact about viruses. They might be far from a mere nuisance: in the TGD Universe they could be quantum memes, short pieces of quantum computer code, wandering around and attaching to the existing quantum computer code represented by DNA! Replication of viruses would be replication of memes. If the infected organism survives the virus attack by taming the virus and making it part of its non-coding DNA, it will gain more strength! If my computer survives the updating of the operating system, it works better!

Some basic facts

Viruses are very small, with a size scale of tens of nanometers. A virus contains a short piece of RNA or DNA coding for the virus, in particular for the protein shell around it, which the virus must have in the "non-living" state outside the host cell into which it can penetrate. Inside its host this shell melts, and the virus attaches to DNA and is able to replicate. The copies of the virus leave the host cell to search for their own host cells.

Usually viruses are regarded as a nuisance. But a new, more holistic vision about viruses and their actual role is evolving. Viruses have been present perhaps even before the cell existed in its recent form; they might have been crucial for the emergence of life as we know it and would be crucial even now. The system would consist of various kinds of cells, not necessarily those of a single organism. These contain several kinds of DNA and RNA: cell nucleus and mitochondria contain their own genomes; there are circular plasmids, and also viruses.

There is a continual exchange of information between cells, with viruses as one form of information exchange. In this framework a virus represents a meme, represented by the part of its DNA which does not code for the protein shell. This meme wants to replicate and must use the genetic machinery to achieve this. But does the virus do this only to replicate and produce more nuisance?

The organism manages to survive the virus attack if it is able to transform the virus so that it cannot replicate. One manner to achieve this would be the transformation of the DNA portion due to the attached virus DNA (possibly reverse transcribed from the RNA of the virus) to non-coding DNA, often referred to as "junk" DNA. Non-coding DNA includes both intragenic regions - introns - and intergenic regions containing for instance promoters and enhancers crucial for the control of gene expression as proteins (see this). Introns are portions of genes whose contribution to mRNA is spliced away in the translation to proteins. The decomposition into introns and translated regions is dynamical, which gives rise to a rich spectrum of different translations of the gene.

In fact, most of the non-coding DNA might be due to viruses! The portion of non-coding DNA increases for species at higher evolutionary levels. For our species it is estimated to be 98 per cent! Most of our genome is "junk", as many biologists would still put it. But can this really be the case? One might think that the immune system would have invented some mechanism to prevent the infection of DNA by junk DNA. The size of the trash bin cannot be a measure of evolutionary level! It is also known that virus infections force the organism to change and in some cases to become a better survivor. Viruses would drive evolution.

One can speculate that during the very early period of evolution there were only viruses and proto-cells. There is no need for the latter to be coded by genes: self-organization can produce cell membrane like structures - soap films are an example. The DNA fragments could survive inside these proto-cells, but according to simulations done by the Jyväskylä group in which Matti Jalasvuori is working, the evolution would eventually lead to the emergence of parasitic DNA strands, which would soon begin to dominate and kill the proto-cell.

Viruses might solve the problem. Viruses would attract DNA fragments and replicate with them to build a protein wall around the fragment, containing also a piece of DNA of the proto-cell. Viruses would leave the proto-cell before its death and find another proto-cell. Gradually the genome would be formed as viruses would steal pieces of DNA fragments from proto-cells. One step in the later evolution could be the elimination of the part of the virus coding for the protein shell and the use of the rest as protein coding DNA. For eukaryotes the transformation to non-coding DNA, including intronic and intergenic DNA, becomes possible.

Viruses as pieces of quantum computer code?

Computational thinking suggests that viruses might make possible the emergence of new biological program modules allowing one to use the existing program modules coding for proteins more effectively. The different splicings of mRNA dropping some pieces away would correspond to different manners to transform DNA sequences to proteins. But what about the intragenic portions of DNA: are they just junk?

Could the non-coding DNA and viruses have a much deeper purpose of existence than mere replication? In the TGD Universe this kind of purpose is easy to imagine if the system formed by DNA - say the intragenic portions of DNA - and the nuclear membrane (or cell membrane) serves as a topological quantum computer. DNA codons would be connected to the lipids of the lipid layer of the cell nucleus by magnetic flux tubes carrying dark charged particles. These connections could be also to the cell membrane and even to the cell membranes of other cells.

The braiding of the flux tubes would define the space-time realization of a quantum computer program (a toy illustration is sketched below). This would represent a new expression of DNA and would explain why so small differences between our DNA and that of our cousins give rise to so huge differences. What is important is that the genetic code itself would not be terribly important: it is the braiding that matters now. The realization as quantum computer programs would give rise to cultural evolution, the realization as proteins to biological evolution. There would be a transition from the level of genes to that of memes.
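As a toy illustration of "braiding as a program" (nothing TGD specific: just the standard idea that a braid word acts as an ordered product of unitary gates; the phase below is an arbitrary illustrative choice), one can let each elementary braiding of neighbouring strands act as a swap-with-phase gate on qubits:

    import numpy as np
    from functools import reduce

    # Elementary braiding of neighbouring strands = 2-qubit swap-with-phase gate.
    PHASE = np.exp(1j * np.pi / 4)       # illustrative anyonic phase (assumed)
    B = np.array([[1, 0, 0, 0],
                  [0, 0, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, PHASE]], dtype=complex)

    def braid_unitary(n_strands, braid_word):
        """Unitary for a braid word, e.g. [0, 1] = braid strands (0,1), then (1,2)."""
        U = np.eye(2 ** n_strands, dtype=complex)
        for i in braid_word:
            ops, k = [], 0
            while k < n_strands:
                if k == i:
                    ops.append(B); k += 2          # B acts on qubits i and i+1
                else:
                    ops.append(np.eye(2, dtype=complex)); k += 1
            U = reduce(np.kron, ops) @ U
        return U

    # Different braiding orders give different unitaries: the braiding,
    # not the strand content, is the program.
    print(np.allclose(braid_unitary(3, [0, 1]), braid_unitary(3, [1, 0])))  # False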

Viruses would correspond to pieces of quantum computer code - memes. They would wander between cells, infecting them to get fused to the DNA. If the DNA is able to transform them to introns, it gets the code. Otherwise it dies. Infection is the necessary price for achieving meme replication. Living cells could be seen as quantum computers updating their programs continually. Sounds somehow familiar!

See the chapters DNA as topological quantum computer, Three new physics realizations of the genetic code and the role of dark matter in bio-systems, and More Precise TGD Based View about Quantum Biology and Prebiotic Evolution of "Genes and Memes".

For a summary of earlier postings see Latest progress in TGD.

Articles and other material related to TGD.