Friday, September 23, 2011

More on Superluminal Neutrinos

The vixra blog has the most comprehensive update on today's official announcement and scholarly pre-print from the OPERA experiment, whose data supports an inference that neutrinos are moving at speeds faster than the speed of light.

The Effect Observed

The news is now officially out with a CERN press release and an arXiv submission at http://arxiv.org/abs/1109.4897. The result they have obtained is that the neutrinos arrive early by 60.7 ns ± 6.9 ns (statistical) ± 7.4 ns (systematic). On the face of it this is a pretty convincing result for faster than light travel, but such a conclusion is so radical that higher than usual standards of scrutiny are required.

The deviation from the speed of light in relative terms is (v-c)/c = (2.48 ± 0.28 ± 0.30) x 10^-5 for neutrinos with an average energy of 28.1 GeV. The neutrino energy was in fact variable, and they also split the sample into two bins, for energies above and below 20 GeV, to get two results.

13.9 GeV: (v-c)/c = (2.16 ± 0.76 ± 0.30) x 10^-5

42.9 GeV: (v-c)/c = (2.74 ± 0.74 ± 0.30) x 10^-5

These can be compared with the independent result from MINOS, a similar experiment in the US with a baseline of almost exactly the same length but lower energy beams.

3 GeV: (v-c)/c = (5.1 ± 2.9) x 10^-5

. . .

We also have a constraint from supernova SN1987A, where measurement of neutrino arrival times compared to optical observations sets |v-c|/c < 2 x 10^-9 for neutrino energies on the order of 10 MeV.

Note that the electron neutrino is believed to have a rest mass of not more than about 1 eV. So the supernova neutrinos carry relativistic kinetic energy on the order of 10^7 times their rest mass, while the Earth-based experiments' neutrinos have kinetic-energy-to-rest-mass ratios of roughly 3 x 10^9 to 4.3 x 10^10.

Kinetic energy in special relativity is E_k = mc^2[(1 - v^2/c^2)^(-1/2) - 1], where m is the rest mass, c is the idealized speed of light, and v is velocity. So the differences in speed between kinetic-energy-to-mass ratios of 10^7, 3 x 10^9, and 4.3 x 10^10 correspond to far smaller differences in velocity. In general, the higher the energy, the more slight the velocity difference between one energy level and a higher one should be.
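To put rough numbers on this, here is a minimal Python sketch (my own back-of-envelope, not from the paper) using the large-gamma expansion (c - v)/c ≈ 1/(2 gamma^2), where gamma is the energy-to-rest-mass ratio, and assuming a ~1 eV neutrino rest mass:

```python
# Velocity deficit (c - v)/c for a massive particle of Lorentz factor gamma.
# The exact form 1 - sqrt(1 - 1/gamma^2) underflows double precision at these
# huge gamma factors, so use the leading-order expansion 1/(2 gamma^2).
def velocity_deficit(gamma):
    return 0.5 / gamma**2

# Energy-to-rest-mass ratios from the text, assuming a ~1 eV rest mass:
for label, gamma in [("SN1987A, ~10 MeV", 1e7),
                     ("MINOS, ~3 GeV", 3e9),
                     ("OPERA high bin, ~42.9 GeV", 4.3e10)]:
    print(f"{label}: (c - v)/c ~ {velocity_deficit(gamma):.1e}")
```

All of these deficits fall far below both the SN1987A bound of 2 x 10^-9 and the ~2.5 x 10^-5 excess that OPERA reports, which is the point of the comment quoted next.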

As a comment at the post notes: "For CNGS neutrino energies, ⟨E⟩ = 17 GeV, the relative deviation from the speed of light c of the neutrino velocity due to its finite rest mass is expected to be smaller than 10^-19, even assuming the mass of the heaviest neutrino eigenstate to be as large as 2 eV [4] (Ch. Weinheimer et al., Phys. Lett. B 460 (1999) 219; but this one is old; is it used here, in this new paper?)." Thus, we do not naively expect to see any measurable deviation from "c" in neutrino speed for neutrinos of this energy in this experimental setup, and we probably do not even expect a measurable difference for the ~10 MeV neutrinos of the supernova observation.

If c is truly about (1 + 3 x 10^-5) times the value for c used in these calculations, and if the very high energy Earth-based measurements are only infinitesimally, and experimentally invisibly, different from c across that entire energy range, while the lower energy supernova-based measurement is a notch lower than c, then one could in principle infer the mass of the electron neutrino from the difference, and that inferred mass is about right given measurements based on other methodologies.

Theoretical Analysis

If we believe in a tachyonic theory, with neutrinos of imaginary mass, the value of (v-c)/c would decrease as the inverse square of the energy. This is inconsistent with the results above, where the velocity excess is more consistent with a constant independent of energy, or a slower variation. . . . For smaller energies we should expect a more significant anomaly . . . perhaps the energy dependence is very different from this expectation.
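To see how badly the simplest tachyonic scaling fits the two energy bins, here is a quick numerical check (my own sketch, assuming the naive dispersion (v - c)/c ≈ m^2/(2E^2) and the paper's central values):

```python
import math

mean_E, mean_excess = 28.1, 2.48e-5        # GeV and (v-c)/c from the paper
m = math.sqrt(2 * mean_excess) * mean_E    # implied imaginary-mass scale, ~0.2 GeV

for E, observed in [(13.9, 2.16e-5), (42.9, 2.74e-5)]:
    predicted = m**2 / (2 * E**2)          # naive tachyonic 1/E^2 scaling
    print(f"E = {E:4.1f} GeV: predicted {predicted:.2e}, observed {observed:.2e}")
```

The naive tachyon fit predicts roughly 1.0 x 10^-4 in the low-energy bin and 1.1 x 10^-5 in the high-energy bin, a factor-of-ten swing that the nearly equal measured values do not show.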

So if this is a real effect, it has to be something that does not affect the cosmic neutrinos in the same way. For example, it may only happen over short distances or in the presence of a gravitational field. It would still be a strong violation of Lorentz invariance of a type for which we do not really have an adequate theory. . . .

The most striking thing for me was the lack of any energy dependence in the result, a confirmation of what I noted this morning. The energy of the neutrinos has a fairly wide spread. If these were massive particles or light being refracted by a medium, there would be a very distinct dependence between the speed and the energy of the particles, but no such dependence was observed. . . .

Most physical effects you could imagine would have an energy dependence of some sort. A weak energy dependence is possible in the data but that would still be hard to explain. On the other hand, any systematic error in the measurement of the time or distance would be distinguished by just such a lack of energy dependence.

The only physical idea that would correspond to a lack of energy dependence would be if the universe had two separate fixed speeds, one for neutrinos and one for photons. I don’t think such a theory could be made to work, and even if it did you would have to explain why the SN1987A neutrinos were not affected. I think the conclusion has to be that there is no new physical effect, just a systematic error that the collaboration needs to find.

There are several theoretical concepts that make the most sense to me, if the effect is real:

(1) the notion that non-speed-of-light paths (greater than and less than the speed of light), whose importance in the photon propagator is proportional to the inverse of "the Interval" (i.e., the deviation of the squared space distance less the squared time distance), could carry over to the neutrino's quantum mechanical amplitude to appear at new locations. This could flow from inherent uncertainty in time and space, or from not-quite-perfect locality in time and space; we might be learning that there are an average of 20 meters of "wormholes" over the 730 km path, and the law of averages would make the long-range deviation much smaller than the short-range deviation.

The problem with that idea, however, is that this effect should influence only the statistical variability of the observed speed, not the average speed observed.

(2) the possible effects of a gravity field that insert general relativity effects into the mix. In general relativity, time moves more slowly the deeper you are in a gravity well. The neutrinos travel through that gravity well at (and below) the level of the Earth's surface, while the GPS synchronization signals and the precision distance measurements are made by light in a shallower part of the Earth's gravity well, where time passes somewhat faster. Depending on the specifics of the synchronization and distance-measurement layout, one can imagine that general relativistic differences in the rate at which time passes, due to gravitational field strength, cause a systematic underestimate of distance and of elapsed time in the relevant reference frames. While the calculations aren't quite back-of-napkin, an order-of-magnitude estimate of this effect should be possible in a quite short academic paper, and it could materially change the measured effect.

I don't have a good intuition for how strong this effect could be in the scenarios where we have data. But gravity-well effects on the rate at which time passes are directly observable with portable atomic clocks over height differences on the order of tens of meters, and given the accuracy of the clocks described above, gravity-well effects might, at least in theory, be of an order of magnitude large enough that they need to be accounted for expressly in this experiment's measurements.
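For what it's worth, a crude weak-field estimate is easy to write down. The sketch below assumes an illustrative ~1 km height difference between the surface equipment and the underground detectors (my assumption, not a surveyed figure) and uses the standard g*dh/c^2 clock-rate shift:

```python
g = 9.81             # m/s^2, surface gravity
c = 299_792_458.0    # m/s, speed of light
dh = 1_000.0         # m, assumed (illustrative) height difference

rate_shift = g * dh / c**2     # fractional clock-rate difference, ~1.1e-13
flight = 730_000.0 / c         # neutrino time of flight, ~2.44 ms
print(f"fractional rate shift: {rate_shift:.1e}")
print(f"shift over one flight: {rate_shift * flight:.1e} s")
```

On this naive accounting, the shift over a single time of flight is roughly seven orders of magnitude below a nanosecond, so if gravity matters here it would have to enter through the synchronization and survey chain rather than through the flight itself.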

(3) The platonic ideal physical constant "c", called the speed of light, differs from the actual speed of photons in the real world whenever photons pass through some medium other than a pure vacuum, whenever electromagnetic fields have an impact, and perhaps, via the gravitational-well time dilation effect, whenever there are discrepancies between the measuring reference frame and the calculational reference frame. It could be that the experimental measurements of "c" used in these calculations, by neglecting these effects, were actually measuring photons travelling at something slightly less than "c", and that at the 730 km scale, high energy neutrinos are not as strongly influenced by these effects.

For example, experimental measurements of "c" may not account for very small but measurable impacts on the measured speed of a photon from Earth's magnetic field, or from interactions with the electromagnetic fields of protons and neutrons (which have magnetic dipole moments), effects that do not touch electrically neutral neutrinos.

Perhaps in Earth's magnetic fields and its cluttered mass field, which are locally not electromagnetically neutral (and in the similar fields found in supernovae), extremely high energy neutrinos actually do travel faster than photons: as in the tortoise and the hare, the theoretically faster photons get bogged down in "conversations" with other photons and electrons in the vicinity of their path, while the slightly slower high energy neutrinos, which do not collide with other matter, are not diverted, even though in a mass-free, charge-free vacuum photons actually travel slightly faster than high energy neutrinos.

In this case, the issue is not that neutrinos travel faster than "c", which would be a huge theoretical quandary, but that photons in real life settings, which are less than ideal matter-free vacuums, frequently travel more slowly than "c", which doesn't pose the same deep theoretical issues. After all, we already know that photons travel at speeds slower than "c" in all sorts of non-vacuum media.

Since most speed-of-light experiments involve photons in less than idealized conditions, and not every experiment may adequately adjust for these effects, estimates of "c" from photons may systematically show greater consistency when the adjusted, rather than the true, value of "c" is used in engineering applications.

If this is what is happening, the slighter supernova effect can be explained by the fact that only a very brief part of the neutrino's trip (at the beginning, in the star where it originates, and at the end, in the immediate vicinity of the Earth) takes place in less than idealized conditions, where neutrinos move faster than photons because photons interact more with the things around them, while the vast majority of the trip takes place in a nearly ideal, mass-free, electromagnetic-field-free vacuum, where high energy neutrinos travel at a velocity only infinitesimally different from that of photons.

NOTE: The more I think about it, the more I like theoretical scenario (3).

Indeed, theoretical scenario (3) could also explain "the Interval" effect incorporated in QED not as something truly fundamental reflecting the non-locality of space-time, but as reflecting the fact that in QED applications the "effective" value of "c" in ordinary Earth-vicinity settings, which is lower than the true "c", varies randomly above and below its average due to slight differences in photon and charged-matter density. Real world QED applications don't involve true vacuums, and in deep space astronomy observations the scales are so great that the Interval effect, which is relevant only at small distances, disappears from all observables at the level of accuracy possible in those observations.

Absent this term in the QED propagator, more and more evidence seems to point to spacetime not being discrete at even a scale as fine as the Planck scale, although a "point-like" fundamental particle still creates general relativity contradictions.

In favor of the pro-"new physics" conclusion (maybe), another comment to the blog post notes that "an exact value for this ratio greater than light speed and with the 3 neutrino flavors is: (v-c)/c = 3 x exp[-(1/alpha)^(1/2)] = 2.47 x 10^-5", where alpha is the electromagnetic coupling constant. But a "slow photons in low density photon and matter fields" scenario also makes an effect with some functional relationship to the electromagnetic coupling constant plausible, even without new physics, although the relationship would not be so clean and exact in that scenario.
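That numerology is easy to check; a one-liner using the standard value of alpha reproduces the quoted figure:

```python
import math

alpha = 1 / 137.035999  # fine-structure constant
print(f"3 * exp(-sqrt(1/alpha)) = {3 * math.exp(-math.sqrt(1 / alpha)):.3e}")
# -> 2.473e-05
```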

Error Source Analysis

So obviously there could be some error in the experiment, but where?

The distances have been measured to 20 cm accuracy, and even earthquakes during the course of the experiment can only account for 7 cm variations. The Earth moves about 1 m around its axis in the time the neutrinos travel, but this does not need to be taken into account in the reference frame fixed to the Earth. The excess distance by which the neutrinos arrive ahead of where they should be is on the order of 20 meters, so distance measurements are unlikely to be a significant source of error.
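For concreteness, the conversion from the reported early arrival to an equivalent path length is trivial:

```python
C = 299_792_458.0   # m/s, speed of light in vacuum
early = 60.7e-9     # s, the reported early arrival
print(f"equivalent path-length discrepancy: {early * C:.1f} m")  # ~18.2 m
```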

Timing is more difficult. You might think that it is easy to synchronise clocks by sending radio waves back and forth and taking half the two-way travel time, but these experiments are underground, and radio waves from the ground would have to bounce off the upper atmosphere or be relayed by a series of transceivers. . . . The best atomic clocks lose or gain about 20 picoseconds per day, but portable atomic clocks at best lose a few nanoseconds in the time it would take to get them from one end to the other. . . . The best way to synchronise clocks over such distances is to use GPS, which sends signals from satellites in medium Earth orbit. Each satellite has four atomic clocks which are constantly checked against better ground-based clocks. The ground positions are measured very accurately with the same GPS, and in this way a synchronisation of about 0.1 ns accuracy can be obtained at ground level. The communication between ground and experiment adds delay and uncertainty, but this part has been checked several times over the course of the experiment with portable atomic clocks and is good to within a couple of nanoseconds.
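As an aside, the logic of two-way synchronisation is simple enough to sketch; the hard part in practice is the asymmetric, underground signal path, not the arithmetic. A minimal illustration (the function and the numbers are mine, purely for exposition):

```python
def two_way_offset(t_send_A, t_recv_B, t_send_B, t_recv_A):
    """Offset of clock B relative to clock A from a two-way signal exchange,
    assuming a symmetric path; the one-way delay cancels out."""
    return ((t_recv_B - t_send_A) - (t_recv_A - t_send_B)) / 2.0

# Example in nanoseconds: true offset +40 ns, one-way delay 2,440,000 ns
# (about 730 km at light speed).
print(two_way_offset(0.0, 2_440_040.0, 100.0, 2_440_060.0))  # -> 40.0
```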

The largest timing uncertainties come from the electronic systems that time the pulses of neutrinos from the source at CERN. The overall systematic error is the quoted 7.4 ns, well within the 60 ns deviation observed. Unless a really bad error has been made in the calculations, these timings must be good enough.

The rest of the error is statistical [and the variation does not obviously suggest an error]. . . .

[Background sources:] The speaker showed how the form of the pulse detected by OPERA matched very nicely the form measured at CERN. If there were any kind of spread in the speed of the neutrinos, this shape would be blurred a little, and this is not seen.

The error source analysis is sufficiently convincing to suggest that the observed effect may not derive from errors in distance measurement, synchronization of clocks, equipment timing, background sources of neutrinos, or statistical variation. But, as the comment below from the blog post notes, precision and accuracy are not necessarily the same thing, and a bad distance formula could lead to this result:

According to the paper, the distance measurement procedure uses the geodetic distance in the ETRF2000 (ITRF2000) system as given by some standard routine. The European GPS ITRF2000 system is used for geodesy, navigation, et cetera, and is conveniently based on the geoid.

I get the difference between measuring distance along an Earth-radius perfect sphere (roughly the geoid) and measuring the distance of travel, for neutrinos the chord through the Earth, as 22 m over 730 km. A near light speed beam would appear to arrive ~60 ns early, give or take.
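Taking the commenter's 22 m figure at face value, the implied timing offset is indeed in the right neighborhood:

```python
C = 299_792_458.0     # m/s, speed of light
excess_path = 22.0    # m, the commenter's claimed geoid-vs-chord difference
print(f"apparent early arrival: {excess_path / C * 1e9:.0f} ns")  # ~73 ns
```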

Also, as I noted above, the lack of experimental error does not necessarily imply that we truly have tachyons, as opposed to some other, less theoretically interesting effect.

5 comments:

Maju said...

Your reference says:

"As far as I can tell no measurement of neutrino speed or mass refutes the claim that they are tachyons, it’s just the theory that’s a problem".

The theory should not be a problem for science: theories are here to explain facts - and not the other way around.

However, exceptional claims need exceptional evidence. It's an intriguing finding anyhow, especially because it opens the possibility of some form of time travel and "regular" faster-than-light travel, even if these kinds of "travel" are restricted to tachyons (not matter).

With these results, it'd be possible for a scientist to build a tachyon receiver device (TRD) and get messages from his elder self or whoever has the TRD in the future, maybe his/her grandchildren or whatever twisted plot you may imagine.

We could hypothetically document the future, assuming some collaboration from there. That's intriguing but until a TRD is created nobody will know.

Andrew Oh-Willeke said...

I'm intrigued by the implications, but I think it is almost certainly not a case of neutrinos traveling at more than the special relativistic limit. Honestly, assuming a lack of error in this report (which isn't obvious and deserves the greatest attention), what it says about photons is probably more interesting than what it says about neutrinos.

Maju said...

A problem I see with your explanation #3 is that the same hypothetical vacuum effects that would apply to light measurements are there for neutrinos. Neutrinos would still travel faster than light in certain media, like interstellar space or the near-perfect vacuum of the LHC. It looks too suspicious.

Light in glass "only" suffers a 33% delay (200,000 km/s compared to almost 300,000 km/s in vacuum); I doubt that in a slightly imperfect (???) vacuum light would suffer more than, say, a 1% delay (probably not enough to explain the phenomenon attested).

But your call: I understand that the logical challenge posed by tachyons and reversible time is mind-boggling. However, I have no big problem with a multidimensional universe, or with time becoming space at each increase of dimension, like a cartoon going from 2D to 3D. If so, there's no reason why time can't be reversed and the "laws" of our perception can't be overwhelmed by the laws of reality.

Leonardo Rubino said...

I do not agree with the superluminal neutrinos news, for very simple reasons. The difference they found with respect to the speed of light is very small, so some errors in the calculations must have been made. The neutrino is not faster than light.

The Special Theory of Relativity (STR) of Einstein, through the principle of the speed limit, makes the magnetic force come from the electric one, and the magnetic force is an electric force, as physicists know; an easy demonstration of that can be found in chapter 3 of my file at the following link (also English inside):

http://www.fisicamente.net/FISICA_2/UNIFICAZIONE_GRAVITA_ELETTROMAGNETISMO.pdf

If you get rid of the speed limit principle, the magnetic field cannot exist anymore.

Moreover, as c = 1/sqrt(epsilon x µ), if you change c to a c' > c, then you have to accept a µ' < µ, so you have to accept different intensities of magnetic fields from a given electric current, so you have to get rid of electromagnetism; but it describes so well the currents, the fields, the real world, etc. Therefore, there's a mistake in the computation of the speed of the neutrinos: in the calculations of the run length, in the interaction time calculations, during the generation, and also in the detection of those evanescent particles!

(another interesting file, also related to this subject):

http://www.mednat.org/new_scienza/strani_legami_numerici_universo.pdf

Regards.

Leonardo Rubino.

leonrubino@yahoo.it

Andrew Oh-Willeke said...

"as c=1/square root of(epsilon x µ), if you change c with a c'>c, then you have to accept a µ'<µ, so you have to accept different intensities of magnetic fields from a given electric current, so you have to get rid of the electromagnetism, but it's describing so well the currents, the fields, the real world etc."

The neutrino-based value of "c", which is higher by a relative amount of order 10^-5, is within the margin of error of the "c" calculated from the constants you identify. The canonical value of "c" from the Particle Data Group is calculated by other means with far greater accuracy.

If the "c" in the electromagnetism equations and in special relativity is really the canonical value of "c" times 1.00003, and there is a systematic error in the methodology used to empirically determine the canonical value of "c", then the framework of science as we know it won't fall apart.

The effect that I imagine would be functionally equivalent to a refractive index of 1 + 3 x 10^-5 that takes hold only near massive bodies such as stars or the Earth and that is not accounted for someplace or other in the calculations of either the canonical "c" or the GPS data. The target refractivity (n - 1) is about 10% of that of air and is on the same order of magnitude as that of helium gas at 1 atmosphere of pressure (one of the lowest refractive indexes found in nature). See, e.g., http://en.wikipedia.org/wiki/Refractive_index
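For comparison, the arithmetic behind that last claim, using standard handbook refractivities (n - 1) for air and helium at visible wavelengths and roughly 1 atmosphere (a minimal sketch; the gas values are approximate):

```python
hypothesized = 3e-5    # n - 1 implied by the OPERA excess
air = 2.93e-4          # n - 1 for air (visible light, ~1 atm)
helium = 3.5e-5        # n - 1 for helium, among the lowest known

print(f"hypothesized / air    = {hypothesized / air:.2f}")     # ~0.10
print(f"hypothesized / helium = {hypothesized / helium:.2f}")  # ~0.86
```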