"Local Causality in a Friedmann-Robertson-Walker Spacetime"

Comments (11):

Richard Gill:

( May 19th, 2015 5:02am UTC )

Write up of Pearle's model: http://arxiv.org/abs/1505.04431

Unregistered Submission:

( May 29th, 2015 10:17pm UTC )

Nice write-up! It's amazing that nobody had tried to implement Pearle before. Can you outline the theoretical differences between this kind of detection-loophole simulation and this one: https://rpubs.com/gill1109/epr-clocked ?

Richard Gill:

( May 31st, 2015 6:27am UTC )

The difference is that Pearle's is an archetypical "detection loophole" model, while the simulation in https://rpubs.com/gill1109/epr-clocked is a "coincidence loophole" model. The difference is all about time. Michel Fodje, whose algorithm I believe I faithfully followed, talks about a "clocked" EPR experiment. It is clocked, in a sense, but it is not pulsed like good modern experiments. There is no independent external frame of time-slots defining separate trials.

In a coincidence loophole LHV, two particles leave the source at an *unpredictable* time. They carry hidden variables with them. At each detector, as they arrive, there is interaction between the hidden variables and the local detector setting causing, in local fashion, a smaller or larger delay. Also a measurement outcome is generated. We get to observe, at the measurement station, a "detection time" and a "measurement outcome".

While this is going on, the measurement settings are getting switched, randomly, from time to time. These times and setting values are also observed.

This continues for a long time. Each of Alice and Bob separately "see" a sequence of times of events: some events are detection events, some are detector setting events.

After the experiment, Alice and Bob get together. Each has a long time series of discrete times and types of events. Some of the events are "new detector setting, value (say) 1 or 2", some are "particle detected, outcome (say) +1 or -1". The data-analyst tries to correlate the two time series. He chooses a so-called "coincidence window" of length delta, say, and selects those pairs of detection events, one by Alice and one by Bob, which are within delta time units of one another. These pairs are what go into the four measured correlations (for each pair we know which pair of settings is in force at the time). Because of how Michel Fodje has tuned the parameters, many detection events on either side of the experiment remain unpaired. They are discarded.
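The pairing step just described can be sketched as follows. This is an illustrative greedy matcher, not Michel Fodje's actual code; the function name and data layout are made up:

```python
# Illustrative sketch of coincidence-window pairing (hypothetical
# helper, not the real simulation code): match two sorted lists of
# detection times when they fall within a window `delta`; unpaired
# detections on either side are simply discarded.

def pair_coincidences(times_a, times_b, delta):
    """Greedily match sorted detection times within `delta` of each other.

    Returns a list of (i, j) index pairs; detections left unmatched
    are dropped, exactly the discarding described above.
    """
    pairs = []
    i = j = 0
    while i < len(times_a) and j < len(times_b):
        dt = times_a[i] - times_b[j]
        if abs(dt) <= delta:
            pairs.append((i, j))
            i += 1
            j += 1
        elif dt > delta:      # Bob's event is too early: skip it
            j += 1
        else:                 # Alice's event is too early: skip it
            i += 1
    return pairs

# Example: only the detections near times 1.0 and 5.0 pair up;
# the others stay unpaired and would be discarded.
a = [1.00, 2.50, 5.00]
b = [1.02, 3.90, 5.03, 7.00]
print(pair_coincidences(a, b, delta=0.1))  # [(0, 0), (2, 2)]
```

The four correlations are then computed from these pairs only, with each pair tagged by the settings in force at its detection times.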

One can say that there is 100% detector efficiency: each emission leads to two particles being measured. But the "experimental efficiency" is not 100%: unpaired detection events are discarded. It turns out that the coincidence loophole is roughly twice as serious as the detection loophole: for the same proportion of unpaired events, LHV models can emulate a roughly twice as large violation of CHSH; see http://arxiv.org/abs/quant-ph/0312035

The recent experiment by Giustina et al. (2014) was subject to the coincidence loophole. One can abolish the loophole simply by reanalysing the experimental records, using fixed short time intervals as time-slots, instead of time intervals co-determined by the two particles themselves. One simultaneously abolishes the detection loophole by defining the outcomes to be "+1" versus "-1 or no event at all". When this is done with Giustina's data we still, fortunately for those experimenters, have a statistically significant violation of CHSH. https://pubpeer.com/publications/BAA200E6CDDE845FD20E2149E0F998
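The fixed time-slot reanalysis can be sketched like this. The slot length, data layout, and function name are hypothetical, for illustration only:

```python
# Sketch of the loophole-free reanalysis described above (assumed
# details: slot length, data layout, and function name are made up).
# Time is cut into fixed slots defined by an external clock, and each
# slot gets an outcome "+1" versus "-1 or no event at all".

def slot_outcomes(detections, slot_length, n_slots):
    """detections: list of (time, outcome) with outcome +1 or -1.

    Every slot gets a definite value, so nothing is discarded:
    +1 if the slot contains a +1 detection, otherwise -1. Merging
    "-1" and "no detection" into one outcome is what simultaneously
    closes the detection loophole.
    """
    out = [-1] * n_slots
    for t, o in detections:
        k = int(t // slot_length)
        if k < n_slots and o == +1:
            out[k] = +1
    return out

alice = [(0.4, +1), (2.7, -1), (3.1, +1)]
print(slot_outcomes(alice, slot_length=1.0, n_slots=5))
# [1, -1, -1, 1, -1]
```

Because the slots are fixed in advance, they are not co-determined by the particles, and CHSH can be applied to the slot outcomes directly.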


Unregistered Submission:

( June 1st, 2015 8:38pm UTC )

Thank you very much, Richard! My question is: in Pearle's "detection loophole" model we can prove analytically that the correlation is -a.b; can we prove analytically that in the "coincidence loophole" model the correlation is also -a.b?

Richard Gill:

( June 5th, 2015 5:50am UTC )

Very good question.

Pearle presents a very large class of detection loophole models, and he shows that within that class there exists a subclass of models which reproduces exactly the correlation - a . b. It is clear from his proof that this can be done in very many different ways. He just exhibits the simplest one he could find which does the job.
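For the detection loophole itself, the rejection mechanism can be illustrated with a small Monte Carlo sketch. To be clear: this is not Pearle's model (his hidden-variable density is specific and reproduces - a . b exactly); the shared-threshold acceptance rule below is a made-up stand-in, restricted to planar settings, that merely shows how purely local non-detection strengthens the observed correlation:

```python
# Minimal Monte Carlo sketch of the detection-loophole mechanism.
# CAUTION: NOT Pearle's exact model -- his hidden-variable density is
# specific; the shared threshold rule here is an illustrative stand-in
# and does not reproduce - a . b exactly. Detection decisions are
# purely local: each depends only on the local setting and on the
# hidden variables (lam, r) carried by the particle pair.

import math
import random

def simulate(alpha, beta, n=100_000, seed=1):
    """Return (correlation of post-selected outcomes, pair efficiency)."""
    random.seed(seed)
    kept = 0
    s = 0
    for _ in range(n):
        lam = random.uniform(0.0, 2.0 * math.pi)  # hidden angle
        r = random.uniform(0.0, 1.0)              # shared hidden threshold
        ca = math.cos(lam - alpha)
        cb = math.cos(lam - beta)
        a_out = 1 if ca >= 0 else -1              # Alice's local outcome
        b_out = -1 if cb >= 0 else 1              # Bob's local outcome
        if abs(ca) >= r and abs(cb) >= r:         # both locally detected
            kept += 1
            s += a_out * b_out
    return s / kept, kept / n

# Without any rejection this model gives the "straight-line"
# correlation, -0.5 at 45 degrees; post-selecting on joint detection
# strengthens it, which is exactly the detection-loophole trick.
corr, eff = simulate(alpha=0.0, beta=math.pi / 4)
print(f"correlation {corr:+.3f} at pair efficiency {eff:.2f}")
```

The point of the sketch is only the mechanism: the pairs rejected are disproportionately those whose hidden angle lies in the "mismatch" wedge between the two settings, so the surviving sample shows a stronger correlation than any non-rejecting LHV model could.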

As far as I know, no-one has done something similar for the coincidence loophole. None of the published coincidence loophole models reproduces the singlet correlations *exactly*. They just come very very close. How people do it: make some ad-hoc choices for various distributions, do a simulation and look at the result, and then start tweaking parameters. So one does it by trial and error and one is just after a good numerical approximation.

I am sure that an exact analytic solution is possible - there is more room to play in. On the other hand, model calculations are more complicated, and Pearle's proof of existence in a much more simple context is already as hard as they get.

Jan-Åke Larsson may know the answer. He has another exact solution for the detection loophole model (at least for measurement directions on the circle, not on the sphere).

Larsson, J.-Å., 1999. Modeling the singlet state with local variables. Phys. Lett. A 256, 245–252. doi:10.1016/S0375-9601(99)00236-4

http://arxiv.org/abs/quant-ph/9901074


Richard Gill:

( June 5th, 2015 11:38am UTC )

Christian has posted a new version of his simulation model: http://rpubs.com/jjc/84238

I have added two lines to "his" code in order to save the number of pairs of particles which are accepted, for each pair of settings, in a large matrix. At the end I draw a contour plot of the resulting fractions of the original sample, used for each setting pair. The vertical axis runs from 0 to 1.

I am thinking about how to also visualise the overlap between different samples.

http://rpubs.com/gill1109/ChristianSampleSizes


Peer 3:

( June 5th, 2015 5:59pm UTC )

Fascinating to see how Christian's "Friedmann-Robertson-Walker Spacetime" model is equivalent to the detection loophole trick. I never saw him mention that in any of his arXiv papers.

Richard Gill:

( July 2nd, 2016 4:39pm UTC )

In the meantime, version 4 of this paper has been posted to arXiv: http://arxiv.org/pdf/1405.2355v4.pdf

According to the author it has been accepted for publication now.

The paper has changed quite a lot since version 1. There is a new computation of the singlet correlation in "Christian's model" at the end of the paper, starting at around equation (53).

According to (55) and (56), A(a, lambda) = lambda and B(b, lambda) = - lambda where lambda = +/-1. This should lead to E(a, b), computed in (60)-(68), equal to -1. But instead the author gets the result - a . b. How is it done?
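Spelled out, writing the correlation as the usual ensemble average over the hidden variable (which is what (60)-(68) compute), substituting (55) and (56) directly gives

```latex
E(a,b) \;=\; \lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n} A(a,\lambda^k)\,B(b,\lambda^k)
       \;=\; \lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n} \lambda^k\,\bigl(-\lambda^k\bigr)
       \;=\; -1,
```

since (lambda^k)^2 = 1 for lambda^k = +/-1. So the claimed result - a . b cannot follow without some extra ingredient.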

Notice formula (58) where s_1 and s_2 are argued to be equal, leading in (59) to L(s_1, lambda)L(s_2, lambda) = -1. This result is then substituted inside a double limit as s_1 converges to a and s_2 converges to b in the transition from equation (62) to (63).

So s_1 and s_2 are equal yet converge to different limits a and b.

But that is not enough. A second trick is put into play a few lines later. According to (57) we should have L(a, lambda)L(b, lambda) = D(a)D(b) independent of lambda, which means that the step from (65) to (66) can't be correct.

I started a new thread linked to the published paper here: https://pubpeer.com/publications/AEF49D3399CFA02CAC40BA21F824B4


Richard Gill:

( October 10th, 2016 1:21pm UTC )

Now version 5 is online: https://arxiv.org/pdf/1405.2355v5.pdf

It contains some new material, in particular, the derivations (57)-(60) and (61)-(64) on page 8. In (57)-(60), the limit is taken as s_1 converges to a of some expression which depends on s_1, a and lambda. Up to (59) everything seems to be OK. In (59) we have a limit of a sum of two terms. Going from (59) to (60) the following seems to have happened. The limit of a sum of two terms is rewritten as a sum of two limits, one for each of the two terms separately. The second of the two limits is evaluated, the result is zero. The first limit is however not evaluated: instead, no limit is taken at all, so that the end result still depends on the dummy variable s_1.

The expression concerned is continuous in s_1, a and lambda and the limit can therefore be computed by simply evaluating it with s_1 set equal to a. That results in lambda, as already claimed in (54).
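In symbols, writing f(s_1, a, lambda) as a generic stand-in for the expression in (59) (not reproduced here), continuity means

```latex
\lim_{s_1 \to a} f(s_1, a, \lambda) \;=\; f(a, a, \lambda) \;=\; \lambda .
```

And while the limit of a sum of two terms is indeed the sum of the limits when both exist,

```latex
\lim_{s_1 \to a}\bigl(f_1(s_1) + f_2(s_1)\bigr)
  \;=\; \lim_{s_1 \to a} f_1(s_1) \;+\; \lim_{s_1 \to a} f_2(s_1),
```

an "evaluated limit" that still contains the dummy variable s_1 is not a limit at all.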

So the step from (59) to (60) is nonsense, and the final result moreover contradicts (54).



Equation (22): if z is a fixed element of the unit sphere S^2 and s_0 is uniformly distributed over S^2, then the angle eta between s_0 and z (denoted eta_{z, s_0} in the paper) is not uniformly distributed between 0 and pi. In fact, the cosine of this angle is uniformly distributed on [-1, 1].
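This is easy to check numerically. A small sketch (sampling uniform directions on S^2 by normalising Gaussian coordinates; variable names are of course illustrative):

```python
# Numerical check of the claim above: for s_0 uniform on the sphere
# S^2, cos(eta) = z . s_0 is uniform on [-1, 1] (mean 0, variance 1/3),
# so the angle eta itself is NOT uniform on [0, pi] -- it has density
# sin(eta)/2.

import math
import random

random.seed(0)
n = 100_000
cosines = []
for _ in range(n):
    # Uniform direction on S^2 via normalised Gaussian coordinates
    x, y, z = (random.gauss(0, 1) for _ in range(3))
    norm = math.sqrt(x * x + y * y + z * z)
    cosines.append(z / norm)   # cosine of the angle to the fixed axis

mean = sum(cosines) / n
var = sum(c * c for c in cosines) / n - mean * mean
mean_angle = sum(math.acos(c) for c in cosines) / n

print(f"mean cos(eta) {mean:+.3f}  (uniform on [-1,1] gives 0)")
print(f"var  cos(eta) {var:.3f}   (uniform on [-1,1] gives 1/3)")
print(f"mean eta      {mean_angle:.3f} (pi/2, but with density sin(eta)/2)")
```

The mean and variance of cos(eta) match the uniform distribution on [-1, 1], not anything one would get from eta itself being uniform.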

Equation (19): for a given e_0 and s_0, equation (19) manifestly does *not* hold for all n.

The paper refers to a simulation experiment, whose code and results have been posted on the internet at http://rpubs.com/jjc/13965 . The author claims that particle pairs have not been rejected, yet the published code actually rejects a number of particle pairs for each pair of measurement directions; the number of pairs accepted is printed out ... and it varies with the measurement directions. From the published code one can moreover see that different "pre-ensemble" states are being rejected for different measurement settings.

The first part of the code computes correlations for a wide range of both measurement settings, using a double "for" loop. In the inner iteration, a new number N of accepted particle pairs is calculated, for each new measurement setting pair. Yet the author only has a single value of N printed out, outside of the two loops (in other words, he only prints out the value of N corresponding to the last values of the settings a and b).

One gets the impression that the author's grasp of the statistical programming language R is at a similar level to his mathematics.

The code has in fact been largely composed by copy-paste from an earlier simulation of the Pearle (1970) detection loophole model: http://rpubs.com/gill1109/Pearle

It is amusing that Pearle actually exhibited a whole class of local hidden variable models reproducing the singlet correlations through data rejection. His paper only explicitly wrote down one member of the class, and this is the one which Christian claims turns up through his own calculations. Yet Christian's magic formula (16) (where he inserts Pearle's special choice of distribution) is conjured up out of nowhere, not derived from Christian's model assumptions.
