"Experimental loophole-free violation of a Bell inequality using entangled electron spins separated by 1.3 km"

Comments (165):

Unregistered Submission:

( August 31st, 2015 4:57am UTC )

Can anyone provide details of their calculation of the second p-value 0.039 (what they called "complete analysis")? Which statistic was used to compute this p-value?

Richard Gill:

( August 31st, 2015 10:43am UTC )

We have to wait for publication of the "supplementary details", but the answer is something very like: Peter Bierhorst, "A rigorous analysis of the Clauser-Horne-Shimony-Holt inequality experiment when trials need not be independent", Foundations of Physics 44(7):736-761, 2014; Peter Bierhorst, "A robust mathematical model for a loophole-free Clauser-Horne experiment", Journal of Physics A: Mathematical and Theoretical 48(19):195302, 2015. Basically, a much-refined analysis along the lines of the martingale-based probability inequality developed in Richard Gill, "Time, finite statistics, and Bell's fifth position", in Proceedings of Foundations of Probability and Physics, volume 5 of Math. Modelling in Phys., Engin., and Cogn. Sc. 2002, Växjö Univ. Press, pages 179-206, 2003. http://arxiv.org/abs/quant-ph/0301059

Unregistered Submission:

( August 31st, 2015 1:11pm UTC )

Unfortunately, because their sample size is small (n = 245), the easy-to-understand bound in Gill's well-known "Statistics, Causality and Bell's Theorem" is not useful: Pr(S >= 2.42) <= 8*exp(-245*(0.42^2)/256) ≈ 6.76, which exceeds 1 and so says nothing. I'll check the bound in "Time, Finite Statistics, and Bell's fifth position", and also read Bierhorst. Thanks for the references.
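
For what it's worth, the arithmetic is easy to check in a couple of lines, assuming the bound has the form Pr(S >= 2 + eta) <= 8*exp(-N*eta^2/256) as written above:

```python
import math

# Gill's rough bound, as quoted above: Pr(S >= 2 + eta) <= 8 * exp(-N * eta^2 / 256)
N = 245           # number of trials in the Delft experiment
eta = 2.42 - 2.0  # observed CHSH excess over the local bound of 2
bound = 8 * math.exp(-N * eta**2 / 256)
print(bound)  # about 6.76: greater than 1, so the bound is vacuous at this N
```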

Richard Gill:

( August 31st, 2015 1:40pm UTC )

Indeed, you have to do a lot of careful probability theory to refine Gill's rough-and-ready result before it says anything interesting at this value of N.

Peer 3:

( August 31st, 2015 1:40pm UTC )

FYI, Gill's paper has been thoroughly discredited. See the following link here at PubPeer:

https://pubpeer.com/publications/D985B475C637F666CC1D3E3A314522#fb23569

Unregistered Submission:

( August 31st, 2015 4:31pm UTC )

If so, where is your published peer-reviewed rebuttal of Gill's paper?

Peer 3:

( August 31st, 2015 8:53pm UTC )

"If so, where is your published peer reviewed rebuttal of Gill's paper?"

Here:

https://pubpeer.com/publications/D985B475C637F666CC1D3E3A314522#fb23569

The rebuttal is published on PubPeer, and it is being peer-reviewed here as we speak.

The advantage of PubPeer publication is that the paper is protected from the dirty politics and vested interests of the brain-washed editors and referees of the mainstream journals, whose main interest is in pocketing vast sums of our hard-earned money.

Unregistered Submission:

( August 31st, 2015 10:47pm UTC )

Nonsense. What's the interest of Statistical Science, one of the most prestigious _Statistics_ journals, in participating in your imaginary "Bell mafia conspiracy"? If you think Gill's paper is bogus, send your rebuttal to Statistical Science. Let's see if your schoolboy howlers can get published there.

Peer 3:

( September 1st, 2015 10:21am UTC )

Is this a threat? Do you also have the so-called "prestigious journal" in your pocket?

https://pubpeer.com/publications/D985B475C637F666CC1D3E3A314522#fb23569

Gill's silly mistakes have been exposed for all to see. If you are so "no-nonsense", then go to the link above and show us that Gill has not made any silly mistakes. I doubt very much that you will be able to do that, because that requires more than making anonymous threats.

Donald A. Graft:

( August 31st, 2015 12:50pm UTC )

The raw data and analysis code are not available for inspection, and the corresponding author does not respond to requests for access. It's just more shameful anti-science from the quantum mysterians.

The paper itself suggests that they are simply post-selecting the data to manufacture a false violation (using the discredited Larsson-Gill counting trick), just as was done by Christensen et al (clearly shown by Graft). Without access to the raw data, these matters cannot be properly assessed. I call upon the authors to immediately make public their raw data and analysis code.

Richard Gill:

( August 31st, 2015 12:59pm UTC )

The preprint on arXiv refers several times to "supplementary material". I guess we must just be patient and wait for peer review to take its course. Hopefully it won't take long.

I would say that Graft's claims about the Christensen et al. experiment have been thoroughly discredited. That experiment was not loophole-free, so it does have its issues. But they were not the issues raised by Graft.

The new experiment is simply the one proposed by Zukowski, Zeilinger, Horne and Ekert (1993), PRL vol. 71, "'Event-ready detectors' Bell experiment via entanglement swapping". Hardly a new idea. https://vcq.quantum.at/fileadmin/Publications/1993-06.pdf http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.71.4287

Donald A. Graft:

( August 31st, 2015 1:05pm UTC )

"I would say that Graft's claims about the Christensen et al. experiment have been thoroughly discredited."

That's easy to claim but where is the published discrediting? My demonstration is published in J. Adv. Phys. 4 (2015) 284-300.

Do you also claim to have discredited the emission-rate mechanism (not the same as the production-rate mechanism) I have recently demonstrated?

And here's another radical thought for you: If the authors of this paper were truly interested in the science, they would welcome having a third party perform an independent analysis during or even prior to peer review. Do you think that the chosen peer reviewers are going to request the raw data and do a full analysis? I stand ready to assist the authors with this. Given that this experiment is being touted as the long-sought "loophole-free" experiment, isn't it crucial to have independent checks?

Richard Gill:

( August 31st, 2015 1:19pm UTC )

Yes the Christensen et al. experiment is vulnerable to the emission rate loophole. We knew that in advance. It does not have rapid random switching to new measurement settings, one for each new pair of "time slots". Instead the measurement settings are kept constant for 1000 time-slots at a time, if I recall correctly.

Yes it is vital that there are independent checks on the new (Delft) experiment.

I suspect that as long as the paper is under peer review at a (I suppose highly reputable) journal, the authors are simply not able to do anything. In fact, they are not even allowed to talk to journalists. There has been no press release from Delft. We all just have to be patient.

I like Donald Graft's paper J. Adv. Phys. 4 (2015) http://dx.doi.org/10.1166/jap.2015.1198 http://arxiv.org/abs/1409.5158 even though I disagree with his conclusions. We could discuss it some more on PubPeer https://pubpeer.com/publications/E0F8384FC19A6034E86D516D03BB38

Donald A. Graft:

( August 31st, 2015 1:24pm UTC )

That is nonsense. It is already being touted in Nature and elsewhere as the finding of the holy grail.

And why did Zeilinger et al waste all that time and money on an experiment you claim that they knew was flawed and therefore inconclusive? I suppose something must be delivered when large grants are involved, eh? You acknowledge that the experiment is flawed, but you don't ask for its retraction, or for it to be no longer cited. What game are you playing?

I point out for the users here that you did not respond to my question about the discrediting of my paper that you claimed. It's not surprising because there is no such discrediting, except in your imagination. On the other hand, the Larsson-Gill counting method (plain old data discarding) is discredited in a peer-reviewed journal.

Finally, in my most recent arXiv paper, I show that the Larsson-Gill coincidence window analyses do not address the fundamental problem with coincidence counting. The "fair coincidences" assumption does nothing to exclude local models. Don't you have any comment on that? You can ignore things for only so long before people start to think that you are disingenuous. And in your infamous paper, why did you not cite the person that first discovered the coincidence window mechanism -- Arthur Fine 1980 and 1982 (and shortly thereafter by several others)? I cannot imagine that you would falsely claim priority for it.

Richard Gill:

( August 31st, 2015 1:50pm UTC )

I contributed a lot to the earlier discussion of your paper on PubPeer, Donald, and of the Christensen et al paper also on PubPeer, and I'm sorry you apparently can't follow the logic of Larsson-Gill.

You mention Zeilinger, are you referring to the Giustina et al experiment? Four long runs each with a different fixed pair of settings. That was the other big experiment in 2013 which came close but which has obvious failings.

At last we have an experiment where the rigorous protocol carefully described by Bell (1981) "Bertlmann's socks" has been adhered to. Everyone has known for more than 30 years that this is the experiment we really needed to do. Everyone is welcome to their own analysis and own conclusions.

Too bad. Let's see what other scientists make of the controversy. We can both bluster away, but that's pretty pointless; other people have to inform themselves carefully and draw their own conclusions.

Here is a nice survey paper of the whole loopholes business: http://arxiv.org/abs/1407.0363 J. Phys. A 47 424003 (2014)

BTW I would appreciate a reference to Fine if indeed he also noticed the coincidence loophole before other people. Another such early reference is Pascazio, Physics Letters A, 1986. The point is not that Larsson and Gill pointed out this loophole: they showed how to modify the CHSH inequality to take account of it. They were inspired by Hess and Philipp's claim that Bell did not take account of time. Nowadays Karl Hess claims that he and Walter Philipp discovered the coincidence loophole and that Larsson and Gill stole it from him. I would challenge anyone to show us where in the collected works of Hess and Philipp there is anything like a description of the coincidence loophole. (Their LHV model reproducing the EPR-B correlations depended on a mathematical error, a forgotten third subscript in the middle of horrendous and delicate computations. Later they claimed that the model was OK because the forgotten subscript was not an "element of reality".)

Donald A. Graft:

( August 31st, 2015 2:00pm UTC )

"I'm sorry you apparently can't follow the logic of Larsson-Gill. "

Here we go again, the personal attacks have started. You should be ashamed of yourself.

The Fine citation is in my arXiv paper, where I set the record straight on the history. Try reading it. Don't divert this into a discussion of Hess and Philipp.

"the whole loopholes business"

Sums up nicely the corrupt attitude of the quantum mysterians to reasoned challenges.

Richard Gill:

( August 31st, 2015 2:03pm UTC )

Could you please give the reference to the arXiv paper you are talking about? And stop making wild accusations of corruption and throwing about value-judgements like "quantum mysterian". It's not civilized. You mean that if someone writes something about QM which you don't understand, they are a quantum mysterian? It's not a helpful attitude.

Donald A. Graft:

( August 31st, 2015 2:09pm UTC )

http://arxiv.org/abs/1507.06231

"You mean that if someone writes something about QM which you don't understand, they are a quantum mysterian?"

No, that is just something you made up to try to impugn me. A quantum mysterian is one who believes that something mysterious is going on, i.e., nonlocality. Surely that is more helpful than simply calling it nonsense.

Richard Gill:

( August 31st, 2015 2:10pm UTC )

Thank you. Please do the same. At least you now give me something to work on.

BTW, I do not believe that something non-local or mysterious is going on. I believe that what is going on is just what quantum mechanics predicts: "Nature produces chance events (irreducibly chance-like!) which can occur at widely removed spatial locations without anything propagating from point to point along any path joining those locations. ... The chance-like character of these effects prevents any possibility of using this form of non locality to communicate, thereby saving from contradiction one of the fundamental principles of relativity theory according to which no communication can travel faster than the speed of light" -- Nicolas Gisin, "Quantum Chance: Nonlocality, Teleportation and Other Quantum Marvels". Springer, 2014

This is what QM predicts and it is what experiment seems to confirm. But it would be good to have some more loophole-free successful experiments performed, or the Delft experiment repeated a few times.

Donald A. Graft:

( August 31st, 2015 2:25pm UTC )

First you say:

"I do not believe that something non-local or mysterious is going on."

Then you quote:

"The chance-like character of these effects prevents any possibility of using this form of non locality to communicate..."

Is that the nonlocality that you don't believe in? We are back to that "passion" stuff again? Even the title of the citation you appealed to refers to "nonlocality" and "quantum marvels"!

Furthermore, QM does not predict what you claim it predicts. And the experiments when properly interpreted confirm locality. I explain all this clearly in my peer-reviewed papers. You should read them.

Richard Gill:

( August 31st, 2015 2:30pm UTC )

I would urge everyone to read Donald Graft's papers. They are excellent. And read the literature on the loopholes (the detection loophole and the coincidence loophole), especially Larsson's 2014 survey paper.

Next time I write about the coincidence loophole I will refer to Fine's papers. I have always found those papers wonderful, a mine of information and wisdom. Obviously I missed some of the many nuggets of gold in them.

Perhaps Karl Hess should apologize to Arthur Fine.

Richard Gill:

( August 31st, 2015 2:28pm UTC )

OK this is off topic since it is about the coincidence loophole and Donald Graft's nice paper http://arxiv.org/abs/1507.06231 and the Christensen et al experiment of two years ago.

I would say that there is no need to worry about the coincidence loophole anymore. Just analyse the data using a pre-fixed lattice of time windows, but insist on new randomly chosen settings per time-slot, and insist on a pre-chosen function of the data (detection events) collected in each wing of the experiment per time-slot with values in {-1, +1}. In other words: the coincidence loophole can be banned entirely simply by insisting on Bell's (1981) experimental design and proper randomisation of the settings.

Christensen et al's data can be analysed this way (with a fixed lattice of time windows) because it is a pulsed experiment. One finds a violation of CHSH (CH is just another version of CHSH: if there is no signalling between the two wings of the experiment, they are algebraically equivalent, though obviously not statistically identical). Unfortunately, Christensen et al did not have new, rapidly generated settings per time slot, because the technology wasn't far enough developed yet. Maybe this has changed now (two years on). I expect there will be some more loophole-free violations of Bell reported in the next few months. The Delft group wasn't the only one in the race. It seems to be within reach.
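
Concretely, the fixed-lattice analysis described above reduces to computing CHSH from one ±1 outcome per wing per time slot, conditioned on the slot's setting pair. A minimal sketch (the simulated data is a toy shared-coin model invented for illustration, not anything from the actual experiments):

```python
import random

def chsh(trials):
    """CHSH statistic S = E(0,0) + E(0,1) + E(1,0) - E(1,1) from a list of
    (a, b, x, y) tuples: settings a, b in {0, 1}, outcomes x, y in {-1, +1}."""
    sums = {(a, b): [0, 0] for a in (0, 1) for b in (0, 1)}  # [sum of x*y, count]
    for a, b, x, y in trials:
        sums[(a, b)][0] += x * y
        sums[(a, b)][1] += 1
    E = {ab: s / n for ab, (s, n) in sums.items()}
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

# Toy data: fresh random settings each time slot, and a pre-chosen +/-1
# outcome per wing per slot. Here the outcomes are just correlated coin
# flips, a local shared-randomness model, so |S| stays at or below 2 on average.
random.seed(1)
trials = []
for _ in range(245):
    a, b = random.randint(0, 1), random.randint(0, 1)
    x = random.choice([-1, 1])
    y = x if random.random() < 0.85 else -x  # settings-independent correlation
    trials.append((a, b, x, y))

print(chsh(trials))
```

A real analysis would replace the toy generator with the per-slot detection records; the point is only that once each slot yields a setting pair and a ±1 pair, no coincidence selection enters anywhere.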

Donald A. Graft:

( August 31st, 2015 2:40pm UTC )

There is nothing wrong with the Christensen et al experiment and it is fully adequate to decide the matter. When correctly interpreted it decisively confirms locality. The data discarding associated with the pulsed analysis has been discredited. Please refer to my papers for a correct interpretation.

"In other words: the coincidence loophole can be banned entirely simply by insisting on Bell's (1981) experimental design and proper randomisation of the settings."

That is simply wrong, as I show in my last paper. I urge you to study it carefully as it introduces a totally new mechanism.

Richard Gill:

( August 31st, 2015 2:42pm UTC )

We can discuss all that here: https://pubpeer.com/publications/208D050E6B33907F7F24D2F273B575

Unfortunately we can't ask Christensen et al to redo their experiment with a new randomly generated setting pair per time-slot. But I am sure this experiment is being redone right now, in the proper way.

Donald A. Graft:

( August 31st, 2015 2:53pm UTC )

"Unfortunately we can't ask Christensen et al to redo their experiment with a new randomly generated setting pair per time-slot. But I am sure this experiment is being redone right now, in the proper way."

It is not necessary to re-do the experiment. If you read my papers you will see why. And if your "proper way" is the post-selected time slot method, then you are deluding yourself.

Richard Gill:

( August 31st, 2015 3:25pm UTC )

Well, Donald, your and my opinions are clear. I wonder what others will have to say.

Donald A. Graft:

( August 31st, 2015 6:17pm UTC )

There is an objective way to decide the matter. We don't have to take a vote. I describe it in my last arXiv paper.

Unregistered Submission:

( September 1st, 2015 12:32am UTC )

Do any of the discussants here know the stopping rule used in this experiment? Did they plan the sample size in advance? Or did they stop when the "desired" significance level was achieved? Or did they just run out of resources/money and stop? Of course, each stopping rule gives a different p-value. It's nice to see physics crucially depending on statistical methods, but now physicists will be faced with the fundamental questions of our field.
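
The stopping-rule point is easy to demonstrate: under a true null, "peeking" after every trial and stopping at the first nominally significant result inflates the false-positive rate far above the nominal level. A toy simulation (a fair ±1 coin with a z-test; nothing here models the experiment's actual statistic):

```python
import random

def peeking_false_positive_rate(n_max=200, z_crit=1.96, n_sims=2000, n_min=10):
    """Fraction of null runs (i.i.d. fair +/-1 outcomes: true mean 0,
    variance 1) that ever cross the nominal 5% threshold |z| > z_crit
    when the z-statistic is checked after every trial from n_min onwards."""
    hits = 0
    for _ in range(n_sims):
        s = 0
        for n in range(1, n_max + 1):
            s += random.choice([-1, 1])
            if n >= n_min and abs(s) / n**0.5 > z_crit:
                hits += 1  # "significance" reached; a peeker would stop here
                break
    return hits / n_sims

random.seed(0)
print(peeking_false_positive_rate())  # well above the nominal 0.05
```

With the sample size fixed in advance, the same test rejects about 5% of null runs; optional stopping multiplies that several-fold, which is why the stopping rule matters for the reported p-value.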

Richard Gill:

( September 1st, 2015 12:15pm UTC )

Very good point. I believe that the sample size (or rather - the time period that the experiment was due to run) was fixed in advance. So based on earlier trial runs and experience, the actual sample size of the "definitive" experiment was chosen in advance, as far as one can do these things in the real world. I hope this will become clear if/when the paper is published and also supplementary materials are presented and the authors can actually answer questions openly about their work.

Donald A. Graft:

( September 2nd, 2015 2:45pm UTC )

Since the corresponding author R. Hanson declines to correspond I have invited him to join the discussion here. Surely, he must be interested in knowing about possible flaws in his work *before* it is published. He touts the paper on his university website but won't respond to any queries/concerns about it. Is this science?

Donald A. Graft:

( September 3rd, 2015 12:23pm UTC )

It is important to realize that the inclusion of the phrase "loophole-free" in the title of this paper is gross over-reaching. First, let's be honest and recognize a "loophole" for what it really is. It is a magic trick used by the quantum mysterians to fake nonlocality. Local realists do not appeal to loopholes; they are things that the mysterians invoke. Realists argue that QM does not predict nonlocality and that experiments do not show it -- when tricks are not used. So the mysterians try to "close" these loopholes, i.e., they try to show nonlocality without the tricks.

They say they have closed all the loopholes, except not yet in a single experiment. That's like a magician using different sleight-of-hand techniques in different shows to disappear an apple and claiming that therefore tricks aren't being used and he really did make the apple disappear. It is serial cheating by the magician; when one trick is exposed, another is substituted. So the mysterians realized that they have to eliminate all the tricks in one experiment if they want to be taken seriously. Obviously they cannot do that, so they are forced to use dubious logic and tactics.

Consider the experiment of this paper. The authors claim to have eliminated the detection trick and the locality trick, and therefore claim that their experiment is trick-free. But that is nonsense, because these are not the only two tricks that can be used to falsely show a violation. Beyond the well-known production-rate mechanism and the coincidence-window mechanism, and others, we also have my recently discovered emission-rate mechanism and a subtle little trick called post-selection. This paper admits to post-selecting the data (using the discredited Larsson-Gill counting trick)! So here we have a dubious over-reaching to attempt to achieve the holy grail of a trick-free experiment by fiat, by simply decreeing that they have done so. The mysterians have to try to sell this nonsense because there is no other way out for them.

Meanwhile, the ra-ra articles in the popular press are multiplying. But the authors decline to discuss the experiment "while it is under review". Are they kidding? Are they afraid of risking their publication in Nature by allowing deficiencies to come to light? It can't be a matter of priority, because they have already secured that by their arXiv publication. And while the authors decline media interviews while the paper is under review, they do not request that the ra-ra articles be held in abeyance until the reviews are complete. The media will not be denied, however, so they instead trot out prominent mysterians like A. Zeilinger, who blithely assures us everything is in order, before reviews are complete! Now the reviewers are under great pressure not to rain on the wondrous parade. And silly rabbit, no interviews for skeptical realists. Getting the picture, everybody?

Stand up for science and don't be taken in by nonsense. If you remove the "loophole-free" claim from this paper, it becomes just another in a long line of shabby magic shows, lavishly funded by the public purse.

Richard Gill:

( September 4th, 2015 8:51am UTC )

"Loophole free" simply means: adheres to the precise experimental protocol carefully described by Bell (1981) "Bertlmann's socks". There are no magic tricks there, and no-one claiming magic tricks, as far as I know. What they saw in Delft is allowed by (S < = 2.828...) and predicted by QM, but hard to understand from an Einstein pseudo-deterministic view of what was going on. If you try to simulate it in a local realistic way (network of classical computers) you'll only observe S > 2.4 (with N approx 250) in one out of forty repeats of the experiment.

Yes: some science journalists tend to go over the top and prove that they have no understanding at all of what they are talking about. And indeed a lot of high profile researchers have an interest in promoting this experiment. Especially since we can expect even better experimental results to be coming out soon.

Be patient, let's hope the reviews allow the supplementary material and the data to be published and then sceptics will have their opportunity to hunt for loopholes. I would also simply like to see the experiment repeated half a dozen times.

Peer 3:

( September 4th, 2015 9:37am UTC )

It is not at all hard to understand the observed results of the experiment from a manifestly local and deterministic point of view. For example, one such explicitly local and deterministic model that reproduces the strong correlation -a.b *exactly* (in three dimensions) has long been presented in this paper (as well as in the references therein):

http://arxiv.org/pdf/1501.03393.pdf

See, especially, the explicit derivation of E(a, b) = -a.b in the last appendix of the paper, and the derivation of the upper bound 2 root 2 in the main text [ eq.(26) ].

Unregistered Submission:

( September 4th, 2015 1:22pm UTC )

And yet you can't write a local realistic simulation of your "local realistic model". All the simulations you presented can be interpreted as either pre-selecting an ensemble of particle pairs using information about the detectors' directions (hence, it's nonlocal), or using the data-discarding mechanism of Pearle (by the way, it's funny that you think that an experiment which closes the detection loophole confirms your ideas). And please, stop saying this is not the case, and pasting textual "evidence" from your papers. Your ugly R code is out there for anyone to read.

Peer 3:

( September 4th, 2015 2:12pm UTC )

UnReg ( September 4th, 2015 1:22pm UTC ) ,

your claims are manifestly false, as anyone competent in physics and mathematics can see for themselves:

http://rpubs.com/jjc/105450

http://rpubs.com/jjc/99993

http://rpubs.com/jjc/84238

But suppose, for the sake of argument, that your false claims about the above simulations were true. So what?

A simulation of a model is not a model itself. A simulation is a numerical implementation of a model. The analytical model itself stands on its own, as anyone can see for themselves:

http://arxiv.org/pdf/1501.03393.pdf

http://arxiv.org/pdf/1405.2355.pdf

So, UnReg, bogus and false claims will not get you anywhere. The Bell game is long finished.

Richard Gill:

( September 6th, 2015 9:51am UTC )

Regarding computer simulations of Bell-EPRB experiments:

This is what Joy Christian (Peer 3?) wrote in 2013 about computer simulations:

Reply to John Reed in the usenet group sci.physics.foundations (2013) when John tried to event-by-event simulate his theory:

Hi John,

I do not wish to discourage you, but this will be a rather negative comment. Regardless of the details of your simulation, I can guarantee you that you will not be able to get either the sin^2 curve for the probabilities or the cosine curve for the correlations in your simulation. The reason is pure and simple: Bell's theorem. It is a mathematical theorem that proves, ones [sic. once?] and for all, that no procedure like what you are considering will ever produce the cosine correlation. There are also some corollaries of Bell's theorem, specifically aimed at simulation attempts of the kind you are considering, which prove that you will not be able to generate correlations or probabilities stronger than the straight line you are getting. Of course, if you somehow let some information pass on from one side to the other non-locally, or exploit some kind of a loophole like the detector loophole, then you will be able to get stronger correlations. One such example which exploits the time-window loophole can be found here: http://rugth30.phys.rug.nl/eprbdemo/simulation.php

Best,

Joy

https://groups.google.com/d/msg/sci.physics.foundations/bOzRTwrF-Es/uIDL0ZobQ2oJ

Peer 3:

( September 6th, 2015 8:18pm UTC )

It appears that John Reed (whoever that is) was going about his simulation all wrong. :-)

Unregistered Submission:

( September 7th, 2015 3:11am UTC )

It's the same John Reed that you (Peer 3 = Joy Christian) have been talking with here:

http://www.sciphysicsforums.com/spfbb1/viewtopic.php?f=6&t=192

Don't be ridiculous.

Peer 3:

( September 7th, 2015 8:41pm UTC )

Is this the same John Reed that he (Peer 2 = Richard Gill) has been talking about in his post?

Unregistered Submission:

( September 7th, 2015 3:35am UTC )

The TU Delft loophole-closing experiment, in addition to its strictly intellectual importance, is directly relevant to the securing of transmitted data against hacking. So a couple of questions concerning quantum key distribution:

(1.) Is it real ... that is, does it perform the function it advertises for itself (guaranteed secure communication)?

(2.) If so, does it depend on what the authors of the paper under discussion undoubtedly regard as purely quantum principles (entanglement, superposition, irreducible randomness) or could it in theory be simulated classically?

Richard Gill:

( September 7th, 2015 1:42pm UTC )

There are many different techniques of quantum key distribution. One proposal has been to incorporate a loophole-free Bell violation experiment into quantum key distribution. The whole point is that this *cannot* be simulated classically. At least, not reliably.

A classical simulation of a loophole-free CHSH-type experiment will tend to violate the CHSH inequality with probability 50%. The interesting thing is a violation by a substantial amount, based on a large number of trials.

http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.67.661

Quantum cryptography based on Bell's theorem

Artur K. Ekert (1991)

Phys. Rev. Lett. 67, 661
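
The "probability 50%" claim can be illustrated with a small Monte Carlo sketch (a purely hypothetical toy model, not the Delft analysis). A local strategy that mixes, trial by trial, two deterministic response rules each achieving S = 2 exactly produces an empirical S that fluctuates around 2, so finite-N runs exceed 2 roughly half the time:

```python
import random

def run_experiment(n_trials, rng):
    """One CHSH run: fair-coin settings, local strategy mixing two
    deterministic response rules that each achieve S = 2 exactly."""
    prods = {(x, y): [] for x in (1, 2) for y in (1, 2)}
    for _ in range(n_trials):
        x, y = rng.choice((1, 2)), rng.choice((1, 2))  # Alice's and Bob's settings
        a = 1                                          # Alice always outputs +1
        if rng.random() < 0.5:
            b = 1 if y == 1 else -1                    # rule 1: correlations (1, -1, 1, -1)
        else:
            b = 1                                      # rule 2: correlations (1, 1, 1, 1)
        prods[(x, y)].append(a * b)
    E = {xy: sum(v) / len(v) if v else 0.0 for xy, v in prods.items()}
    return E[(1, 1)] + E[(1, 2)] + E[(2, 1)] - E[(2, 2)]

rng = random.Random(0)
reps = 2000
frac = sum(run_experiment(245, rng) > 2 for _ in range(reps)) / reps
print(f"fraction of runs with S > 2: {frac:.2f}")  # close to one half
```

Here N = 245 matches the Delft trial count; everything else is illustrative only.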

Unregistered Submission:

( September 7th, 2015 4:33pm UTC )

The Ekert paper is available open-source here:

http://faeuat0.us.es/Qubit/Carpetas/Material/Ekert91.pdf

According to Wikipedia there have been successful quantum key distribution experiments:

https://en.wikipedia.org/wiki/Quantum_key_distribution#Implementations

The anti-Bell camp have claimed that "quantum voodoo" will be definitively disproved by an eventual failure to implement quantum computation. There might, however, turn out to be purely engineering reasons why QC wouldn't be accomplished (see for instance Sorin Paraoanu's prize-winning FQXi essay of a few years ago). In the meantime, as we wait, it'd be interesting to read informed technical arguments contending that the seemingly already-achieved implementation of quantum key distribution demonstrates nothing in support of QM "mysterianism" (the principle of entanglement in particular). Or, more persuasive still, maybe someone could successfully hack such a system.

Unregistered Submission:

( September 7th, 2015 11:13pm UTC )

How does Gill's

http://arxiv.org/abs/quant-ph/0301059

apply to Hensen et al.'s results? Using the data in their Figure 4, Gill's Z statistic (see p. 8) is negative, equal to -39, but then his inequality Pr(Z >= sqrt{N}*k) <= exp(-k^2/2) makes no sense, because k must be positive to use Azuma-Hoeffding (note that N = 245).

Unregistered Submission:

( September 9th, 2015 1:31pm UTC )

How can Hensen's data be analysed within the martingale framework developed in Gill's paper [1]? Using Hensen's data displayed on Figure 4 (page 6), Gill's statistic Z, defined on page 8 of [1], is equal to Z = 63 - 46 - 46 - 41 = -70. But then his inequality, stated also on page 8 of [1], P(Z >= k * sqrt(N)) <= exp(-k^2 / 2), in which k >= 0 and N = 245, doesn't give us a useful upper bound for the tail probability of Z. [1] http://arxiv.org/abs/quant-ph/0301059
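
For concreteness, the arithmetic in this question can be spelled out in a few lines (the four counts are the ones quoted above from Figure 4, not independently re-derived from the paper):

```python
import math

N = 245                # number of event-ready trials reported by Hensen et al.
Z = 63 - 46 - 46 - 41  # the commenter's reading of the Figure 4 counts
k = Z / math.sqrt(N)   # write Z = k * sqrt(N)

# Azuma-Hoeffding gives the one-sided tail bound
#   P(Z >= k * sqrt(N)) <= exp(-k**2 / 2)   for k >= 0,
# so with a negative Z (hence negative k) the bound says nothing useful.
if k >= 0:
    print("tail bound:", math.exp(-k ** 2 / 2))
else:
    print(f"Z = {Z}, k = {k:.2f} < 0: the tail bound is not applicable")
```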

Richard Gill:

( September 9th, 2015 5:43pm UTC )

Gill's result was not sharp. It has since been much improved, among others by Peter Bierhorst http://arxiv.org/find/quant-ph/1/au:+Bierhorst_P/0/1/0/all/0/1

Unregistered Submission:

( September 29th, 2015 3:02am UTC )

Any updates about the refereeing of this paper? To which journal was it sent?

Unregistered Submission:

( September 29th, 2015 11:18pm UTC )

It's published in Journal of Physics A: Mathematical and Theoretical:

http://iopscience.iop.org/article/10.1088/1751-8113/48/19/195302/pdf

Donald A. Graft:

( September 30th, 2015 12:50pm UTC )

That link is for the Bierhorst paper, but the request seemed to pertain to the Hensen paper that this thread is about.

Unregistered Submission:

( September 30th, 2015 9:17pm UTC )

The review process of the Hensen et alia paper seems to be a closely held secret. Not unlike the progress of Dr Joy Christian's macroscopic experiment. We must be patient.

Peer 3:

( October 1st, 2015 2:05pm UTC )

UnReg: "The review process of the Hensen et alia paper seems to be a closely held secret. Not unlike the progress of Dr Joy Christian's macroscopic experiment. We must be patient."

There is at least continuing theoretical progress in support of the latter experiment:

https://www.academia.edu/19235737/Macroscopic_Observability_of_Spinorial_Sign_Changes_A_Simplified_Proof

https://www.academia.edu/16328957/A_simplified_local-realistic_derivation_of_the_EPR-Bohm_correlation

Peer 3:

( October 21st, 2015 7:38pm UTC )

The experiment is now published in Nature:

http://www.nature.com/articles/nature15759.epdf?shared_access_token=6CjbR0A2u5QlSXPXHmKif9RgN0jAjWel9jnR3ZoTv0Pfu6MWINxm4Io03p2jIRZ8ofLHk3Dh0IiXHJuQ-MHd1fN4D0Dr0vSS9MWXU23Be4wXJgMx820q5NyNKLFo9uHQQ6MeKlvPcvzmZ0MxCGcH_0JL--oy0pBIDDIGLgUcPCmweppKlp7KsjItSGV_gnLF

Donald A. Graft:

( October 22nd, 2015 6:24pm UTC )

Now that the paper has been published, presumably any embargo on the data and analyses has been lifted. Therefore, I sent a request through Nature for access to the raw data and analyses leading to the published claims. I also sent it by email to the corresponding author. The published paper and supplementary information do not contain links or references to the raw data and analyses. If I have overlooked them, please advise.

Donald A. Graft:

( October 22nd, 2015 6:54pm UTC )

Prof. Hanson reports that the raw data etc. "will come online within a week, linked via arxiv version".

Donald A. Graft:

( October 30th, 2015 3:35pm UTC )

Prof. Hanson has sent me the link to the data and is in the process of linking it through Nature.

http://doi.org/8rn

Note that to download the dataset a login is required. I was able to use a gmail account to get in and then I had to answer some personal details (nothing onerous) to create an access account, after which I was able to download the dataset.

Thank you, Prof. Hanson. I'm sure a lot of interesting discussion will be enabled by your data release.

Unregistered Submission:

( November 1st, 2015 7:18am UTC )

Good luck with your analysis of the raw data. But if you are doing it to stop the nonsense about non-locality, it is not necessary. I doubt if an experiment like this will validate it anyways.

http://www.sciphysicsforums.com/spfbb1/viewtopic.php?f=6&t=212

Peer 5:

( November 1st, 2015 7:48pm UTC )

And learning how to spell his name correctly (it's Henson) would be a good start.

Peer 3:

( November 1st, 2015 8:01pm UTC )

It appears that it is Peer 5 who needs to learn how to spell names correctly. It is either Hensen (the first author) or Hanson (the last author), but certainly not "Henson" as Peer 5 claims. :-)

Peer 6:

( November 4th, 2015 12:40am UTC )

First of all thanks to the authors for making the data available. I have a few questions though:

1) The data in bell_open_data.txt appears to contain not "raw" data but pre-processed data, since the raw data from A and B have already been brought together into a single file. This is good for simplicity, but it would be preferable to provide the true raw data without any pre-processing, or in addition to the pre-processed file.

2) The file contains only 4746 events. Where are the rest? I thought the paper reports orders of magnitude more trials. It would be nice to see the raw data and the methods used to reduce it to this set, to be able to independently verify the data analysis.

Hope you will clarify these questions but in any case, I applaud your effort to be forthcoming by providing the data soon after publication.

Donald A. Graft:

( November 4th, 2015 1:16am UTC )

Peer 6: "It would be nice to see the raw data and the methods used to reduced it to this set to be able to independently verify the data analysis."

Not just nice! It's absolutely crucial for Hensen et al to release the full raw data and analyses leading to the published claims. Post-selection of the data itself is a gaping loophole (or should I say, another cheap magic trick used by nonlocalists to fake violations). We must take the position that if the full data and analyses are not released, then the experiment is null and void.

Peer 6:

( November 4th, 2015 2:35am UTC )

Well, so let us give them the benefit of the doubt and simply ask that they release the raw data in full. Who knows, they may be able to do that in short order, and then it would be a moot issue. But I won't yet go as far as to suggest "cheap magic tricks".

The data they have released do not answer the questions I have. For example, why is the experiment so inefficient? In the supplementary information (page 11), they say:

*Every few hundred milliseconds, the recorded events are transferred to the PC. During the experiment, about 2 megabyte of data is generated every second. To keep the size of the generated data-set manageable, blocks of about 100000 events are saved to the hard drive only if an entanglement heralding event (E) is present in that block.*

Therefore what we have (4746 events, ~420 kilobytes) is about 5% of a single block.

Donald A. Graft:

( November 4th, 2015 4:20am UTC )

I had asked Prof. Hanson in email prior to the release to publish the full raw data, which he has evidently declined, so I don't think there is any doubt to give him. The lack of response by the authors thus far in this thread doesn't augur well for you getting any answers to your questions. I argue that the full data will show no violation when correctly analyzed, and that only dubious post-selection can show a (manufactured) violation. Prof. Hanson cannot release the full data set, because it would confirm locality.

Peer 3:

( November 4th, 2015 5:07am UTC )

Even if there were no "dubious post-selection" and no attempt to "manufacture" a violation, the experiment does not disconfirm locality, as can be easily seen from Appendix C of this paper:

http://arxiv.org/pdf/1501.03393.pdf

Unregistered Submission:

( November 5th, 2015 7:31pm UTC )

Well, a problem with the experiment is that they do not actually violate the Bell-CHSH inequality. They shift to a different inequality in the analysis. But this is typical of all the quantum experiments.

Unregistered Submission:

( November 6th, 2015 12:24am UTC )

The original Bell-CHSH inequality involves four mathematical expectations of four incompatible experiments. Of course, it's impossible to observe mathematical expectations, because what you have in your hand is a finite sample. I believe Gill did some work on a finite sample version of Bell-CHSH, but his theory is a mess and gives no hint on the results of Hensen.

Richard Gill:

( November 7th, 2015 10:21am UTC )

Gill published three finite sample versions of Bell-CHSH: http://arxiv.org/abs/quant-ph/0110137, http://arxiv.org/abs/quant-ph/0301059, and http://arxiv.org/abs/1207.5103

The first two versions (from 2001 and 2003) use martingale theory, in particular, martingale inequalities due to W. Hoeffding, to take account not only of finite samples but also of time-variation and memory.

The latest version from 2012 is simplified, allows time variation but does not allow memory. It was constructed for pedagogical purposes. It uses some simple classical probability inequalities also due to W. Hoeffding (bounds on the tail of the binomial and of the hypergeometric distribution). It is not relevant to the Hensen et al. experiment. The earlier papers are relevant, but as part of the history.

None of my results is very sharp. They were adequate for my purpose, which was, back in 2001 and 2003, to set up the parameters of a bet against someone who claims to be able to simulate violations of Bell's inequality in a local way, so that my chance of losing our bet was negligible. How large should N be such that the chance my opponent would win (which would happen if the observed value of S was, say, larger than 2.4) is less than one in a million?

Hensen et al. use refinements of those oldest (martingale based) results from 2001 and 2003. What happened was that my results were taken up and refined and improved by other researchers, in particular, by Zhang et al (2011) and by Peter Bierhorst (2014). Those authors wanted sharper results, which would be useful in real experiments. The final steps were taken by Elkouss and Wehner (2015), coauthors from Delft of the Hensen et al paper we discuss here. Please see their paper http://arxiv.org/abs/1510.07233 where they make use of a martingale probability inequality due to Bentkus (2004), Ann. Probab. 32, 1650-1673 https://projecteuclid.org/euclid.aop/1084884866

Elkouss and Wehner (2015) contains references to the other papers I have mentioned here: Zhang et al (2011) and Bierhorst (2014).

The paper of Bentkus (2004) gives us optimally sharpened versions of Hoeffding's martingale inequality, and Elkouss and Wehner have applied these in optimal fashion to the Delft experiment.

Bentkus essentially bounds the probabilities of large deviations of a martingale with increments +/-1 by the probabilities of the same event when the increments are independent and identically distributed, i.e., Bernoulli.

The CHSH quantity S is the sum of three correlations minus a fourth. Now, if settings have been chosen time and time again by fair coin tosses, the four correlations all have denominators which are random but all close to N / 4. If we neglect the difference between these four denominators we get an alternative to S, let's call it Z, which can be written as a sum over all the trials of a quantity +/-1 for each trial. (So Z is approximately N / 4 times S.) Now we are in business to apply martingale inequalities for martingales with bounded increments. We just have to study the conditional expectation of the value of the contribution from the j'th trial, conditional on the results of the preceding j - 1 trials, j = 1, ..., N. The probability calculation is done with respect to the completely random choice of settings at the j'th trial (two fair coin tosses), taking the four possible pairs of measurement outcomes as being fixed (possibly dependent on the past, but otherwise fixed by local realist physics). Thus we assume counterfactual definiteness and locality of measurement outcomes at the j'th trial, given the past up to that j'th trial. And we assume that measurement settings at the j'th trial are chosen by new independent fair coin tosses.

I think that my 2003 paper is perhaps the best introduction to my approach, for the present context. I'm sorry that "Unreg" thinks that this work is a mess. Anyway: this was just history: the good reference now is Elkouss and Wehner (2015), http://arxiv.org/abs/1510.07233
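
The relation Z ≈ (N/4) S sketched above can be checked numerically. The snippet below is a toy illustration using synthetic data drawn with the quantum singlet correlations (three of +1/sqrt(2) and one of -1/sqrt(2)) at the standard CHSH settings; it uses nothing from the Delft data set:

```python
import math
import random

# Target correlations at the optimal CHSH angles: three are +1/sqrt(2),
# and the (2,2) pair (the one subtracted in S) is -1/sqrt(2).
E_QM = {(1, 1): 1 / math.sqrt(2), (1, 2): 1 / math.sqrt(2),
        (2, 1): 1 / math.sqrt(2), (2, 2): -1 / math.sqrt(2)}

def simulate(n_trials, rng):
    """Fair-coin settings each trial; the outcome product a*b equals +1
    with probability (1 + E_xy) / 2, matching the target correlations."""
    trials = []
    for _ in range(n_trials):
        x, y = rng.choice((1, 2)), rng.choice((1, 2))
        ab = 1 if rng.random() < (1 + E_QM[(x, y)]) / 2 else -1
        trials.append((x, y, ab))
    return trials

def chsh_Z(trials):
    """Martingale statistic: each trial contributes +/-1, with the sign of
    the (2,2) setting pair flipped, so large positive Z signals violation."""
    return sum(-ab if (x, y) == (2, 2) else ab for x, y, ab in trials)

def chsh_S(trials):
    """Usual CHSH estimate: three empirical correlations minus the fourth."""
    sums = {xy: [0, 0] for xy in E_QM}
    for x, y, ab in trials:
        sums[(x, y)][0] += ab
        sums[(x, y)][1] += 1
    E = {xy: s / n for xy, (s, n) in sums.items()}
    return E[(1, 1)] + E[(1, 2)] + E[(2, 1)] - E[(2, 2)]

rng = random.Random(42)
N = 10000
trials = simulate(N, rng)
S, Z = chsh_S(trials), chsh_Z(trials)
print(f"S = {S:.3f}, Z = {Z}, (N/4) * S = {N / 4 * S:.0f}")  # Z tracks (N/4) * S
```

With fair-coin settings the four cell counts are each close to N/4, which is exactly why Z and (N/4) S nearly coincide.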

The first two versions (from 2001 and 2003) use martingale theory, in particular, martingale inequalities due to W. Hoeffding, to take account not only of finite samples but also of time-variation and memory.

The latest version from 2012 is simplified, allows time variation but does not allow memory. It was constructed for pedagogical purposes. It uses some simple classical probability inequalities also due to W. Hoeffding (bounds on the tail of the binomial and of the hypergeometric distribution). It is not relevant to the Hensen et al. experiment. The earlier papers are relevant, but as part of the history.

None of my results is very sharp. They were adequate for my purposes which was, back in 2001 and 2003, to set up the parameters of a bet against someone who claims to be able to simulate violations of Bell's inequality in a local way, so that my chance of losing our bet was negligable. How large should N be such that the chance my opponent would win (which would happen if the observed value of S was, say, larger than 2.4) is less than one in a million?

Hensen et al. use refinements of those oldest (martingale based) results from 2001 and 2003. What happened was that my results were taken up, refined and improved by other researchers, in particular by Zhang et al (2011) and by Peter Bierhorst (2014). Those authors wanted sharper results, which would be useful in real experiments. The final steps were taken by Elkouss and Wehner (2015), coauthors from Delft of the Hensen et al. paper we discuss here. Please see their paper http://arxiv.org/abs/1510.07233 where they make use of a martingale probability inequality due to Bentkus (2004), Ann. Probab. 32, 1650-1673 https://projecteuclid.org/euclid.aop/1084884866

Elkouss and Wehner (2015) contains references to the other papers I have mentioned here: Zhang et al (2011) and Bierhorst (2014).

The paper of Bentkus (2004) gives us optimally sharpened versions of Hoeffding's martingale inequality, and Elkouss and Wehner have applied these in optimal fashion to the Delft experiment.

Bentkus essentially bounds probabilities of large deviations of a martingale with increments = +/-1 by the probabilities of the same event when the increments are independent and identically distributed, i.e., Bernoulli.

Peer 3:

( November 7th, 2015 11:19am UTC )

Elsewhere on PubPeer critics seem to have found serious mistakes in Gill's papers and his whole line of reasoning, which amount to inherent contradictions in some of his equations:

https://pubpeer.com/publications/D985B475C637F666CC1D3E3A314522#fb27706

Richard Gill:

( November 7th, 2015 12:25pm UTC )

Fortunately, the present paper does not depend on Gill's work at all, but on Elkouss and Wehner (2015), http://arxiv.org/abs/1510.07233 and on Bentkus (2004), Ann. Probab. 32, 1650-1673 https://projecteuclid.org/euclid.aop/1084884866.

Peer 3 is the scientific director of the Einstein Centre for Local Realistic Physics in Oxford (the city, not the university) http://libertesphilosophica.info/blog/lpmain/ His most recent work is discussed on PubPeer at https://pubpeer.com/publications/E1EE7E4ECBA5A6412954577F0C42B6

Peer 3:

( November 7th, 2015 1:06pm UTC )

Gill seems to be working as a web-admin at PubPeer, since he claims to know the identities of all the anonymous posters here. Does he mean something like the "Institute for Advanced Study, Princeton, where Einstein worked --- Princeton the city, not the university"? The link he has posted seems to be broken in any case, but at the bottom of the following link there seems to be a much better discussion:

http://libertesphilosophica.info/blog/

Unregistered Submission:

( November 13th, 2015 11:48am UTC )

This new experiment is a gazillion times more significant:

http://arxiv.org/abs/1511.03190

Hensen et al.: p = 0.039.

Giustina et al.: p <= 3.74 x 10^{-31}

Peer 3:

( November 13th, 2015 3:53pm UTC )

All this new experiment (and indeed any such experiment) does is confirm a well-known prediction of quantum mechanics. That is nice, but it does not by any means prove that it is impossible to reproduce that quantum mechanical prediction in a purely local-realistic way. If you do not believe this, then just check out these local-realistic predictions yourself:

https://www.academia.edu/17783877/Disproof_of_Bells_Theorem_page_7_of_arXiv_1501.03393_

https://www.academia.edu/19235737/Macroscopic_Observability_of_Spinorial_Sign_Changes_A_Simplified_Proof

Now you might say that the "new experiment" uses photons, not fermions (spin 1 particles, not spin 1/2 particles). Well, that is fine too, as you can verify for yourself:

http://arxiv.org/pdf/1106.0748v6.pdf

In short, both the "old" and the "new" experiments with electrons and photons are wonderful, but they do not have any relevance for the question of local realism. In fact, they simply confirm the above local realistic model of the underlying physics.

Donald A. Graft:

( November 13th, 2015 4:17pm UTC )

Continuing with discussion of the subject paper: in the following paper Bednorz shows a significant violation of no-signaling in the Hensen et al. experiment. Given that signaling is forbidden by quantum theory, this suggests either that the experiment is not properly designed, or that signaling has been generated through the post-selection. Neither of these is a good thing.

http://arxiv.org/pdf/1511.03509v1.pdf

Now that it is clear that the Hensen et al. experiment is grossly deficient, the other groups are getting onto the "loophole-free" bandwagon with new magic tricks of their own. It's tedious to keep whacking the moles but I suppose we have to do it.

Peer 3:

( November 13th, 2015 7:31pm UTC )

If Bednorz's analysis is correct, then we should demand retraction of the Hensen et al. paper from Nature, so that next time they are not so eager to publish such nonsense while at the same time rejecting any legitimate challenge to Bell's disproved theorem without even sending the paper out for review.

Richard Gill:

( November 17th, 2015 10:38am UTC )

The four sample sizes are 53, 79, 62 and 51. It looks as if the largest is too large and the two smallest too small. The four together should have the multinomial distribution with N = 245 and probabilities p = (1/4, 1/4, 1/4, 1/4). It is easy by simulation to compute the probability that the maximum minus the minimum of such a multinomial sample is greater than or equal to the observed max minus min = 79 - 51 = 28. I did this a few days ago, because I had already noticed this feature of the numbers. The answer is: 6.1%. I don't think this is very disturbing - we see some extreme feature of the data, compute how extreme it is by looking at a p-value, and not surprisingly it is moderately extreme. There are many other features which we might have noticed. We are very likely to find something. Bednorz goes on to analyse the data in a biased way and finds an estimate of "S" smaller than 2. I am not impressed. This is called p-hacking: look at your data, see what you want to see, and compute a p-value as if you had thought of that hypothesis in advance (independently of seeing the data).
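
The 6.1% can be reproduced with a few lines of simulation. A sketch in Python (standard library only; the seed and the number of repetitions are arbitrary choices):

```python
import random

random.seed(1)

def range_of_counts(n=245, cells=4):
    """Draw one multinomial(n; 1/4, 1/4, 1/4, 1/4) sample and return max - min."""
    counts = [0] * cells
    for _ in range(n):
        counts[random.randrange(cells)] += 1
    return max(counts) - min(counts)

reps = 20_000
hits = sum(range_of_counts() >= 28 for _ in range(reps))
print(hits / reps)  # close to the 6.1% quoted above
```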

Sure: the experiment needs to be replicated with larger N; or simply, repeated ten times. That should be enough to reduce standard errors by a factor of at least 3 (3 ^ 2 = 9 < 10) and turn this from a "two sigma" into a "six sigma" experiment.

Peer 6:

( November 19th, 2015 11:22am UTC )

Why are the authors not releasing ALL the raw data? What are they trying to hide?

Unregistered Submission:

( November 26th, 2015 6:34pm UTC )

At the end of the supplementary information they write: "... approximation holds since for large n the number of trials at each setting should be approximately n/4 ...". Hence, it's implicit in their analysis that the four detector settings (0,0), (0,1), (1,0), (1,1) are selected randomly in a "fair" way (each one selected with probability 1/4). But their data does not support that. To test the null hypothesis H_0 : p_i = 1/4, for i = 1, ..., 4, we can compute the Pearson statistic Q = sum_{i=1}^{4} (n_i - n/4)^2 / (n/4) = 7.9795. Pearson tells us that, as n grows, the distribution of Q approaches a chi-squared distribution with 3 degrees of freedom. This gives Pr(Q > 7.9795) = 0.046, which falls below the conventional 0.05 level. Sorry folks. Beautiful experiment, but you need a bigger sample size.
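
The arithmetic is easy to verify. A sketch in Python; since the chi-squared survival function with 3 degrees of freedom has a closed form, the standard library suffices:

```python
import math

counts = [53, 79, 62, 51]  # trials per setting pair (0,0), (0,1), (1,0), (1,1)
n = sum(counts)            # 245
expected = n / 4           # 61.25 under H_0: each setting pair has probability 1/4

Q = sum((c - expected) ** 2 / expected for c in counts)

# Survival function of chi-squared with 3 degrees of freedom:
# Pr(Q > q) = erfc(sqrt(q/2)) + sqrt(2q/pi) * exp(-q/2)
p = math.erfc(math.sqrt(Q / 2)) + math.sqrt(2 * Q / math.pi) * math.exp(-Q / 2)

print(round(Q, 4), round(p, 3))  # 7.9796 and 0.046, matching the numbers above
```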

Peer 6:

( December 6th, 2015 10:04pm UTC )

I'm deeply disappointed by the so-called "release of data". It is anything but. There should be three raw data files released, not pre-processed in any way, corresponding to the three stations A, B, C as described in the supplementary information (Figure S5).

For A and B, they only saved data in blocks of 100000 events. All of those saved events should be provided, in two files with, at the very least, the following columns: (I, S, R0, R1, P, E). One file for A and another file for B.

For C, they saved all the data in columns (S, P0, E, P1). They should provide all of that information in a separate file.

Their own supplementary information states that they recorded the data as above. Why are the authors not releasing ALL the raw data?

Donald A. Graft:

( December 7th, 2015 12:34am UTC )

"Their own supplementary information states that they recorded the data as above. Why are the authors not releasing ALL the raw data?"

Because then it would be obvious that they achieved a violation only by post-selecting the data. All these shabby magic tricks are by now quite well known. It's baffling why these people think they can continue to fool us in such a manner.

Unregistered Submission:

( December 10th, 2015 8:24pm UTC )

wondering what you "physics" guys think of this paper reporting a loophole-free bell test?:

http://arxiv.org/abs/1511.03189

Peer 3:

( December 11th, 2015 10:58am UTC )

They write "We therefore reject the hypothesis that local realism governs our experiment."

Regardless of the experimental details and their analysis, this statement of theirs is simply false, as one can easily see from studying the following papers:

http://arxiv.org/abs/1106.0748

http://arxiv.org/abs/1501.03393 ( see Appendix C )

http://arxiv.org/abs/1103.1879

http://arxiv.org/abs/1405.2355

Unregistered Submission:

( December 13th, 2015 1:05am UTC )

If you dig deep into their analysis, you will find that they shifted to a different inequality than the one they were supposed to be testing and supposedly "violated". As Peer 3 shows at the links above, there is no such thing as loopholes. Bell type inequalities are mathematically impossible to violate by anything.

Donald A. Graft:

( December 14th, 2015 1:19pm UTC )

" Bell type inequalities are mathematically impossible to violate by anything."

That is not true. I show in the following paper, using a mathematical proof as well as a simulation, that the CH inequality can be violated in an experiment if the emission rate is too high. The violation, however, is purely artifactual and is not due to the (incoherent) notion of quantum nonlocality.

http://arxiv.org/abs/1507.06231

Unregistered Submission:

( December 15th, 2015 11:23pm UTC )

Nope, you did the same thing the experimenters do by shifting to a different inequality. The bound on your inequality in your analysis is 1, not 0. Easy to demonstrate by simple inspection. The 4 paired terms range from 0 to 1 and the single terms range from 0 to 0.5, so we could have, for your eq. (1),

1 + 1 + 1 - 1 - 1/2 - 1/2 = 1, not 0,

for independent terms.

Donald A. Graft:

( December 16th, 2015 9:09pm UTC )

I did not shift to another inequality. My equation 1 is the CH inequality.

The single terms are probabilities and thus range from 0 to 1. Given an emission event, what is the probability that a given side detects it? That probability can range from 0 to 1.

Are you questioning the derivation of the CH inequality? If so, on what grounds? CH is a combination of a tautologous inequality plus an hypothesis about locality (factorizability).

Unregistered Submission:

( December 17th, 2015 7:17am UTC )

Sorry, but the highest probability you can get on the single counts is 1/2. Say you have 1000 emissions: you will only get, on average, a maximum of 500 hits for single counts per specified term. The derivation of the CH inequality is fine with dependent terms. I am pointing out that the experiments are shifting to a different inequality with a bound of 1 instead of 0. You also claimed you violated the CH inequality; that is mathematically impossible, so you must have also shifted to the independent-term inequality with a bound of 1.

Donald A. Graft:

( December 17th, 2015 1:42pm UTC )

I don't care to spend any more time interacting with someone who claims it is mathematically impossible to violate the CH inequality. I've pointed you to the refutation of that wrong and naive view over at your own forum, but instead of addressing my arguments, you keep repeating your bald assertion.

Unregistered Submission:

( December 19th, 2015 2:35am UTC )

Oops, let me fix this as Don pointed out above. It should be that we could have, for the CH inequality with independent terms,

+1 - 0 + 1 + 1 - 1 - 1 = +1, and not 0.

Or we could have,

+1 - 0 + 1 + 1 - 1/2 - 1/2 = +2.

It is well known that the probability average for single counts is 1/2, so we could have an absolute bound on the CH inequality of +2 and not +1 or 0. Some might think we could have,

+1 - 0 + 1 + 1 - 0 - 0 = 3,

but I don't think so. Comments?

Unregistered Submission:

( December 17th, 2015 7:34pm UTC )

Donald: you're right. This discussion is a waste of time. These guys don't understand that the terms in the inequality are **mathematical expectations**, and the fact that they appear in the same expression doesn't mean that we are considering different incompatible experiments simultaneously.

Peer 3:

( December 17th, 2015 9:34pm UTC )

None of that matters. The only thing that matters is that --- contrary to the naive claims made by Bell and his followers --- the strong EPR-Bohm correlations can be quite easily reproduced in a manifestly local-realistic (and even deterministic) manner:

https://www.academia.edu/19235737/Macroscopic_Observability_of_Spinorial_Sign_Changes_A_Simplified_Proof

https://www.academia.edu/7024415/Local_Causality_in_a_Friedmann-Robertson-Walker_Spacetime

If this discussion is "a waste of time", then why are you in this discussion? Your participation proves that either this discussion is *not* a waste of time, or you have a lot of time to waste.

Donald A. Graft:

( December 17th, 2015 9:55pm UTC )

To be precise, in CH the terms are probabilities, not expectations, but your point remains valid.

Peer 3:

( December 17th, 2015 10:15pm UTC )

The UnReg's point is not valid. His point is just a typical double talk by a Bell sympathizer. The inequality, whether Bell-CHSH or CH, is derived by appealing to *counterfactually possible* incompatible set of experiments, whereas its violations are demonstrated by applying it to *actually occurring* incompatible set of experiments. It is a deception of a most vulgar kind.

Unregistered Submission:

( December 20th, 2015 2:33am UTC )

Probabilities are expectations of indicator random variables: P(A) = E[I_A].

Unregistered Submission:

( December 25th, 2015 10:53am UTC )

Unregistered Submission ( December 20th, 2015 2:33am UTC ) wrote: "Probabilities are expectations of indicator random variables: P(A) = E[I_A]."

True, but probabilities are more fundamental than expectations. Therefore it is better to express important hypotheses and results in terms of probabilities.

Unregistered Submission:

( December 26th, 2015 6:45pm UTC )

Not always. CHSH is better expressed in terms of expectations.

Unregistered Submission:

( April 14th, 2016 2:36am UTC )

Relevant: https://arxiv.org/pdf/1603.05705

Unregistered Submission:

( April 14th, 2016 3:08am UTC )

In this new experiment the authors fixed the bias in the selection of the detectors' directions.

Using R, compare:

first_experiment = c(53, 79, 62, 51)

chisq.test(first_experiment)

with

second_experiment = c(69, 69, 78, 84)

chisq.test(second_experiment)

Also:

chisq.test(first_experiment + second_experiment)

Congratulations!

Peer 3:

( April 14th, 2016 1:05pm UTC )

It is good to know that there is now increasing experimental support for local realism, as demonstrated, for example, in the following work in several different ways: http://arxiv.org/abs/1405.2355

It is remarkable that once again Einstein's local-realistic view of fundamental physics has been vindicated, both theoretically and experimentally: http://einstein-physics.org/

Donald A. Graft:

( June 8th, 2016 11:33pm UTC )

Adenier and Khrennikov argue that the Hensen experiment is faulty because it shows a clear violation of no-signaling:

http://arxiv.org/abs/1606.00784

Note that this finding confirms the earlier finding of Bednorz, discussed above.

http://arxiv.org/pdf/1511.03509

It would be interesting to hear the response of Hensen et al, although based on their previous nonresponsiveness I am not holding my breath.

Donald A. Graft:

( August 1st, 2016 2:42pm UTC )

I have just completed a local realist model that violates CHSH with statistical significance better than that of the Hensen et al experiments (5% of the runs violate at S >= 2.4 with N = 245). Hensen et al claim that no more than 3.9% of runs can violate at S >= 2.4 for local hidden variable models (p-value = 0.039). This means that the Hensen et al p-value is a fantasy. Given that it is derived using 10 pages of dense and impenetrable mathematics, that is not a surprise. I am writing this up now and will post to arXiv when completed. The model does not use any variable detection, coincidence windowing, memory effects, setting predictability, or other "loopholes" typically appealed to in order to justify complex statistical analyses (analyses that serve only to obscure matters and to exclude "loopholes" that local realists need not appeal to). I will also discuss the applicability/inapplicability of null hypothesis testing to EPR. I believe the whole program is misguided. What is a correct test for quantum nonlocality? I believe the answer is similar to the answer to the question "What is a correct test for perpetual motion?".

Richard Gill:

( August 1st, 2016 4:45pm UTC )

Hensen et al. don't compute a p-value for the statistic S, but instead a p-value for the statistic k = number of times the CHSH game is won = 196 times in n = 245 trials, see equation (6) in Supplementary Information to their paper.
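
Under the idealized assumption of independent trials with no memory, a local hidden variable strategy wins each round of the CHSH game with probability at most 3/4, so a first approximation to this p-value is the exact upper tail of a binomial distribution (the martingale analysis in the paper extends this to models with memory). A sketch:

```python
from math import comb

n, k, q = 245, 196, 3 / 4  # trials, wins, maximal LHV win probability per trial

# Pr(Bin(n, q) >= k): exact upper tail of the binomial distribution
p = sum(comb(n, i) * q**i * (1 - q) ** (n - i) for i in range(k, n + 1))
print(p)
```

This lands in the same region as the P = 0.039 quoted in the abstract.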

Donald A. Graft:

( August 1st, 2016 4:57pm UTC )

No. The Nature paper abstract clearly states:

"We performed 245 trials that tested the CHSH inequality S <= 2 and found S = 2.42 +/- 0.20 (where S quantifies the correlation between measurement outcomes). A null-hypothesis test yields a probability of at most P = 0.039 that a local realist model for space-like separated sites could produce data with a violation at least as large as we observe, even when allowing for memory in the devices."

I have demonstrated that a local model can do better than their experiment.

Richard Gill:

( August 1st, 2016 5:01pm UTC )

The abstract is not quite correct then. It does not correspond to what is actually done in the paper.

Yes they did find S = 2.42 etc etc. But the null-hypothesis test was however not aimed at S, but at k.

Donald A. Graft:

( August 1st, 2016 5:10pm UTC )

I'm flattered that you think I have found a flaw in the paper.

Please tell us, what is the p-value for the statistic S = 2.42 and how do you derive it?

I will have to study the data and analysis to see if they support the claim about k and that k means what you say it means (what I have called 'positivity' for CH). Thank you for mentioning it. It's good to see that my prior paper on positivity versus the raw metric for CH has had some influence.

Also, don't forget that the full raw data has not been released. Only a post-selected subset has been released. This may make it difficult or impossible to assess the claim. I ask again, why won't Hensen et al release the full raw data and analysis code? I can easily post-select the data from my simulation to give any k that I want. Finally, how do we know that the data that was used for the determination of k is the same as that used for S? These experiments must release raw data and actual analysis code. Otherwise, we have to just trust them, and with such a significant matter in foundations at stake, we shouldn't have to just trust them.

Richard Gill:

( August 1st, 2016 5:39pm UTC )

I don't know how to find an exact p-value for S. The point of looking instead at k is that, with k, an exact analysis is possible and that moreover it allows for memory and time-variation in the LHV model.

Donald A. Graft:

( August 1st, 2016 5:44pm UTC )

OK, thank you, I will look into the matter of k. For the record, the literature tells how to calculate the p-value for the S statistic. You have cited it in your own papers.

Just out of interest, as you have been involved in the analysis, is there any prospect of release of the full raw data and analysis code? Recall that Hensen et al do not respond to my inquiries, and your post above ignores this matter. This attitude is unscientific, and a skeptic might well conclude that when the experimenters discovered no statistically significant violation of S, they decided to post-select the data to have the desired result. I have previously shown that the Christensen et al experiment can be interpreted this way. Properly analyzed, Christensen et al shows no statistically significant violation of CH or the positivity. This determination was only possible for me because the full raw data and analysis code were supplied to me.

Richard Gill:

( August 1st, 2016 6:14pm UTC )

It is not difficult to calculate an approximate p-value for S, under assumptions of independence and constant probability distributions. The idea of using k instead is that one can give exact results under much weaker assumptions.
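
Under those assumptions the approximate calculation is short: with the reported S = 2.42 and standard error 0.20, a one-sided normal test of S <= 2 gives roughly

```python
import math

S, se = 2.42, 0.20   # reported CHSH value and its standard error
z = (S - 2) / se     # 2.1 standard errors above the local-realist bound
p = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided normal tail probability
print(round(p, 3))   # 0.018
```

This is the "two sigma" significance referred to earlier in the thread; it leans on independence and constant distributions in exactly the way the analysis via k avoids.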

I have not been involved in the Hensen et al. analysis.

Donald A. Graft:

( August 1st, 2016 6:39pm UTC )

So you do know how to calculate the p-value for S. If I claim that the p-value for S is not significant, the assumptions you mentioned are irrelevant, because their violation would increase the p-value.

It's strange that the paper includes you in the acknowledgements, rather than just citing your papers. A reasonable person would conclude that you were involved.

So we can conclude that the experiment is meaningless, because the raw data and analysis code are being withheld. It's a really proud moment for the quantum mysterians!

Thank you for your responses, Richard. I wish Hensen et al were as honorable.

Unregistered Submission:

( August 1st, 2016 5:39pm UTC )

"p-value = 0.039"

As a biologist I thought physicists were serious and required 5-sigma proofs...

I'm sure you are both well aware of the issues, but for other readers (and Nature editors):

http://rsos.royalsocietypublishing.org/content/1/3/140216

Richard Gill:

( August 3rd, 2016 7:29am UTC )

After the Hensen et al. experiment, two further experiments were done with well above 5-sigma significance. The Hensen et al. experiment has also been successfully repeated, though of course it needs to be repeated with a roughly ten times larger sample size.

Delft:

Hensen et al. (2015)

http://www.nature.com/nature/journal/v52...15759.html

http://arxiv.org/abs/1508.05949

http://arxiv.org/abs/1603.05705

Vienna:

Giustina et al. (2015)

http://journals.aps.org/prl/abstract/10....115.250401

http://arxiv.org/abs/1511.03190

NIST:

Shalm et al. (2015)

http://journals.aps.org/prl/abstract/10....115.250402

http://arxiv.org/abs/1511.03189

Richard Gill:

( August 1st, 2016 8:36pm UTC )

S and k are different statistics and there is no simple relationship between their p-values. I discussed the martingale techniques underlying the use of k (which go back to papers of mine) with one of the authors in Delft, that's the reason I am mentioned in the acknowledgements.

With S = 2.4225 with an estimated standard error of 0.1920743 (using the usual binomial variance formulas for each of the four correlations involved) and using the Gaussian approximation, one gets a p-value of 0.01391518. However, I estimated the standard error using the correlations as observed. If local realism were true, the "true" correlations are closer to 0 and the variances of their estimates are larger. Hence a p-value computed using a normal approximation and the variance estimated under local realism could well be a lot larger.

There are quite a few reasons to prefer k. No approximations, weaker assumptions.
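For readers who want to check the Gaussian figure quoted above: a minimal sketch, using only the stated S value and standard error (it assumes the usual one-sided normal tail above the local bound S <= 2).

```python
from math import erf, sqrt

# Values quoted above: observed CHSH statistic and its estimated
# standard error (from the binomial variance of each correlation).
S, se = 2.4225, 0.1920743

# One-sided Gaussian tail probability for exceeding the local bound S <= 2.
z = (S - 2) / se
p = 0.5 * (1 - erf(z / sqrt(2)))
print(p)  # ~0.0139, matching the 0.01391518 quoted above
```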

Donald A. Graft:

( August 1st, 2016 8:51pm UTC )

Nothing to say about the data hiding?

Richard Gill:

( August 1st, 2016 9:10pm UTC )

In the experiment there are repeated attempts to create an entangled pair of NV spins at Alice and Bob's locations but only occasionally is there a signal at a third location that this has likely succeeded. However, every attempt does include a new random choice of measurement settings and two measurement outcomes. So the complete data set is perhaps billions of times larger than the subset of data corresponding to "heralded" trials. Yes it would be good if the complete data set were made available. I'm not sure how feasible that would be. I hope the experiment is repeated with a much larger N. There already has been one repetition but N is still small: http://arxiv.org/abs/1603.05705

Donald A. Graft:

( August 1st, 2016 9:48pm UTC )

It's OK to omit failed trials, but I am concerned that not all the valid trials are included. Thank you for your response.

Donald A. Graft:

( August 2nd, 2016 5:28am UTC )

I have considered your point about S versus k and find it to be irrelevant, because S and k are related by k ~= n (S + 4) / 8. This explains why Hensen et al conflate them in discussing p-values in the main paper. It also means that if my model violates CHSH with S >= 2.4 with some probability, it will also violate with k >= 196 with approximately the same probability. S and k are just two approximately equivalent ways of representing the same basic metric, that is, violation of CHSH. I will modify my simulation model to show both S and k and you will see that this is true. The model produces a CHSH violation with statistics better than those of the experiments! This is not surprising, because I have performed a search of the angle space to maximize the violation for the local model, whereas the experiments used angles determined from the quantum joint prediction, which I have shown cannot be applied to EPR (http://arxiv.org/abs/1607.01808).

I agree with you, however, that using k in the p-value analysis might be preferred as it can allow for non-uniform settings choices, but the nonuniformity is not sufficient here to undermine the relation between S and k, as my model will show.

It turns out, additionally, that k is *not* equivalent to what I have called positivity. k is the number of single-event wins of the CHSH game, while positivity refers to the proportion of runs of M events (M >> 1) that violate CHSH overall. I continue to maintain that positivity is preferable to S (or k) as a metric for CHSH/CH violation. Unfortunately, it is difficult or impossible to apply to the Hensen et al experiment due to the extremely low rate of valid event generation; that is, multiple runs of M events are not currently feasible.

In view of all of this, there is no need to wonder if unreported post-selection has occurred and the released data appears to be adequate to support the following conclusion: the Hensen et al Bell test does not reject local realism.
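The claimed correspondence between S and k is easy to tabulate. A quick sketch, using n = 245 as in the experiment (both conversion functions simply restate the relation k ~= n (S + 4) / 8):

```python
# Sketch of the claimed correspondence between the CHSH statistic S
# and the number of CHSH-game wins k, at the experiment's n = 245.
n = 245

def k_from_S(S, n):
    return n * (S + 4) / 8

def S_from_k(k, n):
    return 8 * k / n - 4

print(k_from_S(2.4225, n))  # ~196.7 wins correspond to the observed S
print(S_from_k(196, n))     # k = 196 corresponds to S = 2.4 exactly
```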

Richard Gill:

( August 2nd, 2016 8:02am UTC )

k ~= n (S + 4) / 8 is a good approximation when n is large. However, in this experiment n is rather small. In other words, 0.039 and 0.05 are "approximately the same probability" at such values of n.

k >= 196 when n = 245 has probability less than or equal to 0.039 when local realism is true and the settings are repeatedly chosen anew by fair coin tosses. I think that your model will agree with this not too difficult mathematical result.

Donald A. Graft:

( August 2nd, 2016 8:34am UTC )

Well, I'm not so sure about that. Let's see tomorrow what my model produces for k. If I exceed probability 0.039 that k >= 196 will you be satisfied? You've previously stated the Hensen results as k >= 196 with p-value 0.039.

Richard Gill:

( August 2nd, 2016 9:05am UTC )

Any LHV model gives at most a 3/4 chance of winning a single trial of the Bell game when each of the four setting pairs has probability 1/4, even when the LHV model depends on time and on past settings and outcomes. The probability of achieving k = 196 or more successes in 245 independent trials, each with success probability 3/4, is 0.03907767 (from the binomial distribution).

The Bell game is won if the two outcomes are unequal when the setting pair is (2, 2) but equal if the setting pair is any of the other three possibilities (1, 1), (1, 2), (2, 1).

So if your model does better, then either you are cheating in some way, or the statistical theory section in Hensen et al. (Supplementary) is wrong. See also Elkouss and Wehner http://arxiv.org/abs/1510.07233, and Bierhorst http://arxiv.org/abs/1312.2999
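The binomial figure can be verified directly. A sketch (an exact tail sum, no approximations; it assumes only the binomial model stated above):

```python
from math import comb

# Exact binomial tail P(K >= k) for K ~ Binomial(n, p).
def binom_tail(k, n, p):
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Chance of 196 or more wins in 245 trials when each trial is won
# with probability at most 3/4 (the LHV bound for the CHSH game).
print(binom_tail(196, 245, 0.75))  # ~0.0391, the quoted 0.03907767
```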

Donald A. Graft:

( August 2nd, 2016 2:22pm UTC )

The math looks OK in Elkouss and Wehner so it all depends on your definition of cheating, or whether you demand stricter conditions than the experiments deliver.

Richard Gill:

( August 2nd, 2016 2:52pm UTC )

The mathematics of the Bierhorst/Elkouss and Wehner results makes various assumptions. So the question is whether those assumptions are true for your simulation model. If your model satisfies those assumptions and the maths is correct then the result applies to your model.

Assumptions: binary settings, binary outcomes. n = 245 trials. Per trial, Alice and Bob's setting chosen independently of one another and completely at random. In each trial: information is sent from a central source to each of two measurement stations. The two measurement stations each receive their setting. Each outputs an outcome. We count the number k of successes; a success = the setting pair is (1, 1), (1, 2), or (2, 1) and the two outcomes are equal, or the setting pair is (2, 2) and the two outcomes are unequal.

Between each two consecutive trials the source and the measurement stations are allowed to exchange any information you like.

The maths says that for each j, the conditional probability of success at trial j given everything that went on in the preceding trials is not more than 3/4.
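Those assumptions are easy to respect in a toy simulation. A sketch, using the trivial local strategy "both stations always output +1", which wins exactly when the setting pair is not (2, 2) and therefore saturates the 3/4 bound:

```python
import random

# One trial of the CHSH game under the assumptions above: fair-coin
# settings, deterministic local outputs (here both stations output +1).
def play_trial(rng):
    a, b = rng.randint(1, 2), rng.randint(1, 2)  # Alice's and Bob's settings
    x, y = 1, 1                                  # fixed local outcomes
    if (a, b) == (2, 2):
        return x != y   # win = unequal outcomes for settings (2, 2)
    return x == y       # win = equal outcomes otherwise

rng = random.Random(42)
n = 100_000
wins = sum(play_trial(rng) for _ in range(n))
print(wins / n)  # close to the LHV bound of 3/4
```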

Donald A. Graft:

( August 2nd, 2016 3:11pm UTC )

Thank you, it agrees with my understanding. One important question: do you consider the use of a pseudorandom generator to be cheating? If not, how do you operationally define "true randomness"? My model requires a pseudorandom generator and would not work with "true randomness". Is there any mathematics that accounts for failure of the assumption that the generator is "truly random"? Note that I do not refer to bias of the generator. I suppose that it is related to predictability of the settings. How do you prove that the generators in the experiments are truly random? Has anything been done in the Hensen analysis regarding this?

I just found this: https://arxiv.org/pdf/1411.4787v4.pdf

Richard Gill:

( August 2nd, 2016 3:36pm UTC )

There are two places where pseudorandom generators can be used in a simulation model: to generate the settings; and to generate source information and measurement outcomes. The maths we are talking about assumes true randomness in the choice of detection settings. What goes on in the source and in the measurement devices is not relevant. I don't have an operational definition of "true randomness". Stefano Pironio has written a nice essay, I think, about what is required in real experiments http://arxiv.org/abs/1510.00248

In my opinion, for generating Alice and Bob's settings in real experiments, two state of the art pseudo-random generators would be much better than a recently developed QRNG. I think this would have improved the Hensen et al. experiment. (But I don't know how well the two options compare for speed).

As you mentioned in your addition to the last posting, you can adapt the mathematics to the situation where the pseudo-random generators have some (known, maximum) degree of predictability. This is also done in the Supplementary material of Hensen et al. But of course the p-value then gets worse.

Donald A. Graft:

( August 2nd, 2016 11:23pm UTC )

The lack of an operational definition of true randomness is troubling for me. Inspired by the Pironio paper that you linked, for which thank you, I offer the following scenario. We are going to run a Hensen-like experiment with N=245 one thousand times. And our metric will be the percentage of runs that have k >= 196, as before. I will supply 2 lists of random bits of sufficient length to perform all the runs. These lists will be the setting inputs for the two sides.

I stipulate that the lists will individually satisfy any and all statistical tests for randomness that you care to apply. I further stipulate that there is zero correlation between the two lists that you can find using any statistical analysis that you care to apply. I stipulate that the experimental arrangement excludes all loopholes other than these matters related to randomness.

I claim to be able to provide lists that result in the observed probability of k >= 196 being 0.05, exceeding the theoretically possible 0.039. Would that be sufficient to win a Quantum Randi Challenge? If not, what further stipulations do you require? If I read Pironio correctly, he argues that no further stipulations are needed.

Richard Gill:

( August 3rd, 2016 7:24am UTC )

Of course you can do it if you are the one who engineers the lists of settings. You also engineer the simulation program of your local hidden variables model. Your model has to work with settings which are provided by third parties, outside of your control.

Donald A. Graft:

( August 3rd, 2016 10:04am UTC )

Fine, but was that done in the Hensen experiment? This is what I meant when I asked above whether you are demanding conditions more extreme than are satisfied by the experiments.

Richard Gill:

( August 3rd, 2016 10:39am UTC )

In the second Hensen et al. experiment various different ways to select the settings are tested including one of Pironio's proposals. The NIST experiment also uses a number of different ways of generating random settings.

Pironio explains very well that the precautions one might need to take in a physical experiment and in a computer experiment are different. "We cannot rule out every local hidden-variable model that is logically possible, but we should at least aim to rule out every model that is physically plausible. This is the best that we can do. As professional physicists, this is also the only thing that should matter to us."

Donald A. Graft:

( August 4th, 2016 11:45am UTC )

So now, while overlooking and discounting the physical implausibility of "spooky action at a distance", you claim that my use of a pseudorandom generator that passes all statistical tests is physically implausible. Therefore, you rig the QRC because you can always claim that the method used to beat it is physically implausible (which is tantamount to adding new requirements after the fact). You have touted the QRC many times, so it is reasonable, indeed essential, for you to give a definition of "physically plausible". Please do so, or admit that the QRC is a useless piece of propaganda.

In spite of your objections to my generator, I can let you generate the lists and still beat the QRC as defined above. I suppose you'll call it physically implausible, because for you, anything that beats the QRC and is not quantum nonlocality is necessarily physically implausible. You will use a microscope to examine a local model claiming to violate QRC, but avert your eyes to such questions for the experiments. If my interpretation of your position is not correct, please clarify it by defining "physically plausible".

Richard Gill:

( August 4th, 2016 12:50pm UTC )

We were not talking about the QRC. We were talking about the p = 0.039 result for k = 196, n = 245. You want to convince the world that it is wrong. So please go ahead and try.

It seems very implausible to me that the 0.039 result is wrong. If you want to convince the world that it is wrong, then you have to convince the world that your simulation is not rigged. I find it very suspicious that you insist on generating the settings yourself, in advance. That gives you huge scope for cheating.

"A pseudorandom generator that passes all statistical tests" is fine. If you allow me to supply the settings, that is exactly what I will use.

Donald A. Graft:

( August 4th, 2016 1:14pm UTC )

I don't deny your 0.039 mathematical result and of course we are talking about the QRC, because it defines what you will accept as a valid local model. I'm asking for your definition of "physically plausible". If you can't supply that definition then it's a rigged game.

Please provide your definition of "physically plausible" and post a link to your lists of random bits (enough for 1000 runs of N = 245). Then we will have a fully defined and fair challenge. If you won't do it then you are rigging the game and QRC is propaganda. We could conclude that "theorems do not rule" (hat tip Arthur Fine).

And why don't you apply the same level of scrutiny to the experiments? The validity of quantum nonlocality should not hang upon who Richard S. Gill trusts not to "cheat".

Richard Gill:

( August 4th, 2016 3:31pm UTC )

QRC was defined by Sascha Vongehr, http://arxiv.org/pdf/1207.5294.pdf

It seems to me to be fully defined and fair. Go ahead and try to win it, if you can. There is no criterion "physically plausible" involved. The settings are generated by calls to Mathematica's pseudo random number generator, while the game is in course, *after* you have supplied the realised values of the hidden variables in your model.

Years ago I had a bet with Luigi Accardi. You can find the details of the protocol of our challenge in my paper https://arxiv.org/pdf/quant-ph/0110137v4.pdf

I think it is also fully defined and fair. The notion of "physical plausibility" does not come up.

Stefano Pironio http://arxiv.org/pdf/1510.00248v1.pdf explains why the *kind* of scrutiny needed to rule out loopholes (possibility of local realistic explanation of the violation of a Bell inequality) depends on the experimental context. In particular, last line of page 6 and what follows on page 7, he explains why different precautions might be needed in the case of a computer bet and in the case of a lab experiment.

Donald A. Graft:

( August 4th, 2016 4:31pm UTC )

It is perfectly clear that you are not answering my questions, and are obfuscating to try to hide that. I suppose that effectively ends the discussion. Thank you for playing.

Richard Gill:

( August 4th, 2016 6:08pm UTC )

Well, I am not going to define "physically plausible" since I am not a physicist, and I am not going to post measurement settings (as inputs for your computer simulation) in advance for rather obvious reasons. If you don't understand that we should indeed stop the discussion.

Per trial, Alice's setting should not be known in advance (or be predictable) at Bob's location. Otherwise the experiment doesn't prove anything. Now if we are doing an experiment with photons then I don't think the experiment becomes less convincing if the settings are posted in advance on internet, as long as they are only entered into the measurement devices when they are needed, one at a time. It's physically implausible that this would spoil the experiment. As Pironio explains.

But if we are doing an experiment with some computer programs which implement some LHV model, then there are obvious dangers in giving the local realist computer programmer all the future settings in advance.

Donald A. Graft:

( August 9th, 2016 1:32am UTC )

Don't worry about the metaphysics of randomness, Richard, things are much more interesting than that, because now we know why Hensen et al. will not release their full data:

http://arxiv.org/abs/1608.02404

By the way, the dataset for the second experiment is even smaller than that of the first experiment. And one of their APDs was found to be defective in the middle of the experimental run, yet they still relied on the data and published it! Oh, what a tangled web we weave...

Donald A. Graft:

( August 10th, 2016 1:55am UTC )

There is another way to calculate the probability that the biased numbers we see in the experiment could be obtained by chance. For side A, the probability of getting >= 2406 1's out of 4746 trials is 0.1727. The probability for side B to get >= 2401 1's is 0.2123. The RNGs are independent so the probability of both happening is 0.0367. This agrees with the calculation in my paper linked above.
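The two tail probabilities can be checked in a few lines. A sketch using a normal approximation to the binomial with continuity correction (it assumes 4746 trials on each side and fair 50/50 generators under the null):

```python
from statistics import NormalDist

# Null hypothesis: each side's setting generator is a fair coin over n trials.
n = 4746
dist = NormalDist(mu=n / 2, sigma=(n * 0.25) ** 0.5)

# P(count >= k) via a normal approximation with continuity correction.
def tail(k):
    return 1 - dist.cdf(k - 0.5)

p_a, p_b = tail(2406), tail(2401)
print(p_a, p_b, p_a * p_b)  # ~0.173, ~0.212, ~0.037
```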

The clearest give-away to the data postselection is that Hensen et al. document that the overall randomness distribution for all events (not just those with entanglement success) is very close to uniform (p-value = 0.57) but the published subset upon which the CHSH calculation is made is grossly biased with p-value 0.036. I call upon Hensen et al. to account for this serious anomaly (which may have been inadvertent), and to release the full raw data without further postselection. Saying the data is too large is a very lame excuse.

As if that isn't enough, there is statistically significant violation of no-signaling (Bednorz; Adenier and Khrennikov). That is a side effect of the data postselection, and for local realists it is highly fortuitous, because it further strengthens the demonstration of postselection. Neither quantum nor classical theory permits signaling.

If the experiment was correctly analyzed and reported, it would confirm locality. I previously demonstrated the same conclusion for the Christensen et al. experiment. Quantum nonlocality is the modern-day perpetual motion. It is a tragedy that so much money and talent is wasted on it.

What do you think, Richard? You seem to have gone silent.

Richard Gill:

( August 10th, 2016 10:15am UTC )

I hope that the experiment will be repeated with a (say) ten times larger sample size.

Donald A. Graft:

( August 10th, 2016 10:24am UTC )

There is no point to that if they again won't release the raw data, and postselect it, thereby producing only an artifactual violation. You hope they will release the raw data? You hope it won't show the same strong evidence of postselection?

If the experiment is so deficient that you think it requires 10 times more data, why was it published? I can prove that unicorns exist with such a methodology.

The obvious conclusion is that the full results failed to violate CHSH and when Hensen et al. realized this, they postselected the data, thereby producing a violation. Where is the full data, Richard? Where is the response by Hensen et al.? I don't think they can just brazen this out. There is too much at stake for the foundations of physics.

You lecture me above about conditions for an LHV model to be taken seriously in the QRC, but you won't apply the same criteria to the experiments. If I claimed a model violated CHSH and then said I can't release the code because it is too big, you would laugh me off the stage. The Delft experiment is laughable by your own criteria.

Richard Gill:

( August 10th, 2016 10:44am UTC )

For you it obviously would have no point at all.

Donald A. Graft:

( August 10th, 2016 11:20am UTC )

Richard now resorts to inanity, turning a scientific matter into a personal one.

There would be a point to running a longer experiment if they released the full raw data and did not postselect the data in the analysis. But then no violation of CHSH would be observed, and, together with my demonstration of data massaging, the Nature paper would certainly need to be retracted. In my view, there is already enough evidence to justify a retraction, or at a minimum, a correction.

Peer 7:

( August 11th, 2016 9:34pm UTC )

Multiplying p-values from independent tests does not yield a legitimate p-value (if it did, you could always get as small a p-value as desired, even with true null hypotheses, by multiplying a large enough number of tests). You need to use something like Fisher's method (https://en.wikipedia.org/wiki/Fisher%27s_method).
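For two independent tests, Fisher's method has a simple closed form: X = -2(ln p1 + ln p2) is compared against a chi-square distribution with 4 degrees of freedom, whose survival function is exp(-X/2)(1 + X/2), so the combined p-value is p1*p2*(1 - ln(p1*p2)). A minimal Python sketch (the input p-values below are illustrative, not taken from any analysis in this thread):

```python
import math

def fisher_two(p1, p2):
    """Combine two independent p-values by Fisher's method.

    X = -2*(ln p1 + ln p2) ~ chi-square with 4 df under the null;
    for df = 4 the survival function is exp(-X/2) * (1 + X/2),
    which reduces algebraically to p1*p2*(1 - ln(p1*p2)).
    """
    x = -2.0 * (math.log(p1) + math.log(p2))
    return math.exp(-x / 2.0) * (1.0 + x / 2.0)

# Two marginal tests at p = 0.05 each do NOT combine to p = 0.0025:
print(fisher_two(0.05, 0.05))   # ~0.017, much larger than the bare product
```

The combined p-value is always larger than the bare product, which is exactly why multiplying p-values overstates significance.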

Also, you've used one-sided p-values with no apparent justification. If the counts of 1s had been smaller than half rather than larger, presumably you would have counted that as a deviation of the same magnitude. So, you need to multiply both p-values by 2.

Donald A. Graft:

( August 11th, 2016 9:46pm UTC )

We have here only two sides so we have only two tests. Perhaps you are correct about the two-sided point (I have to look at it further and will post again) but it seems pointless to quibble about a few percent for the p-value. The point is that it is unlikely that the distribution could be obtained by chance, and relying upon a sample with such nonuniformity to decide an important matter in the foundations of physics is highly questionable. Releasing the full raw data could help to resolve matters, but Hensen et al. refuse to do so.

We also have to consider that both sides show the deviation in the same direction. This will reduce the p-value.

It is also suspicious that only one of the 4 experiments shows the loss of events. I am not sure how these considerations affect the statistical calculation. Surely it would be much more likely for all 4 experiments to show deviations of similar size in random directions.

Please feel free to provide your own estimate of the p-value for us that allows for all these things. I appreciate your feedback.

Peer 7:

( August 11th, 2016 9:57pm UTC )

Multiplying the two p-values is not legitimate. Do proper two-sided tests (i.e., double the p-values for the two tests) and apply Fisher's method, and you will see that the overall p-value does not even approach statistical significance.

Donald A. Graft:

( August 11th, 2016 10:18pm UTC )

You are neglecting the other factors I just mentioned. I accept that the analysis is difficult, but eyeballing the data is enough to see that it is strange looking, and it is consistent with the hypothesis that postselection occurred. The hypothesis could be directly tested but Hensen et al. refuse to release the raw data.

Peer 7:

( August 11th, 2016 10:30pm UTC )

If the two deviations had gone in opposite directions, presumably you would have argued that there was selection of the 10 or 01 results, so that's not relevant.

You have presented a doubly invalid p-value, and used it to make the unjustified claim that the authors have, at best, made a fatal error.

The numbers do not look "strange" to me. I think if you generate a bunch of sets of pseudorandom numbers from the relevant distribution, you will see that the deviations from uniformity are not unusual. In any case, the burden is on you to show this. "Eyeballing" is not a substitute for a legitimate statistical test.

Donald A. Graft:

( August 11th, 2016 10:39pm UTC )

I already provided a simulation showing that the probability of obtaining the distribution is low. Did you consider it?

Again, I test the specific hypothesis that the {00} experiment was postselected. You appear to be confusing the outcomes with the settings.

Given your feedback and the difficulty of the calculation, I am willing to eschew an analytical calculation and rely instead on the simulation, as I did in the arxiv paper, and which you have suggested above.

Peer 7:

( August 11th, 2016 10:43pm UTC )

In your arxiv manuscript? Yes. That is invalid on three counts. I can elaborate, but is that what you are referring to?

Donald A. Graft:

( August 11th, 2016 10:49pm UTC )

Yes, that is the simulation I referred to: calc.cpp.

Do you agree that there is no reason that this should be a matter of statistical tests, when Hensen et al. could simply release the raw data to show that the analysis does not discard events? I am forced into this because Hensen et al. refuse to release the raw data.

Peer 7:

( August 11th, 2016 11:07pm UTC )

Three problems with that analysis:

1. Here you have also used a one-sided test. If the {00} count had been higher, rather than lower, than the expected value by the same amount, presumably you would still be crying foul. So, you really should use a two-sided test, which doubles the p-value.

2. You focused on the {00} count because that was the one that deviated most from the expectation. So, you have a multiple-tests problem that you haven't accounted for. You need to calculate the probability that at least one of the four counts would deviate by ~43 or more from expectation. If you do this for your simulations, you will get a much higher p-value.

3. Even with these problems, you don't obtain significance at the 5% level. Then you do something bizarre, and circular:

"... if the data was indeed postselected, the number of events must be adjusted upwards by approximately (4746/4 - 1143 = ~43)."

There is no justification for this. Clearly this will give you a p-value of 0.036 much more than 3.6% of the time if the null hypothesis is true. You assumed that your conclusion (postselection) is true in order to "prove" that it is true.

Correcting any one of these problems will make the p-value non-significant.
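Point 2 is easy to check by simulation: draw 4746 settings uniformly over the four pairs and ask how often at least one of the four counts lands roughly 43 or more away from the expected 1186.5. A rough Python sketch, using the totals quoted from the manuscript above (the number of simulation runs is arbitrary):

```python
import random

random.seed(1)
N, SIMS, DEV = 4746, 2000, 43.5     # totals and deviation from the quoted analysis
expected = N / 4                     # 1186.5
hits = 0
for _ in range(SIMS):
    draw = random.choices(range(4), k=N)           # uniform random settings
    counts = [draw.count(i) for i in range(4)]
    # two-sided, and corrected for looking at all four cells:
    if any(abs(c - expected) >= DEV for c in counts):
        hits += 1
phat = hits / SIMS
print(phat)   # well above the 5% level
```

Each individual cell deviates this far with probability around 0.15 (two-sided), so the chance that at least one of the four does is far from significant.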

Donald A. Graft:

( August 11th, 2016 11:17pm UTC )

1. I test the specific hypothesis that the {00} experiment was postselected *out*. Postselecting events out cannot increase the count, so a one-sided test is appropriate. Hensen et al. could release the raw data to disprove my hypothesis but they refuse to do so.

2. I test the specific hypothesis that the {00} experiment was postselected. The {00} experiment has special significance in the CHSH equation. Perhaps you were not aware of that. Hensen et al. could release the raw data to disprove this but they refuse to do so.

3. I am happy to accept the 0.07 significance level that results if you do not accept my reasoning for the lower value. It clearly shows that the distribution seen is of low probability, which is the only point I am trying to make. It suggests that postselection may have occurred, and I go on to investigate the consequences of it. There is nothing magical about the 0.05 significance level; it is a mere convention. Do you want to hang the foundations of physics on a 0.02 difference in a p-value?

Remember, Hensen et al. claim that the data is uniform, with a p-value of 0.57. Yes, that is for the full data, but why would they show that when they know that the CHSH metric is calculated on the subset? Why didn't they say the data is biased with p-value 0.07? It appears to me like an intent to mislead.

Moreover, my argument does not depend exclusively on this single p-value being less than some threshold value. It appeals also to the local model that produces similar results as the experiment, and to the presence of no-signaling violation in the data.

Again, this could all be so easily resolved with the release of the full data, and I call upon Hensen et al to do so.

Thank you for your feedback; it will help me to improve my presentation.

Peer 7:

( August 11th, 2016 11:26pm UTC )

You chose which hypothesis to test based on the data: if, say, the {11} count had been 1143, you would be claiming that the {11} results were postselected. This is a classic, all-too-common illegitimate practice that leads to bogus "significance"; look up "HARKing".

Donald A. Graft:

( August 11th, 2016 11:47pm UTC )

I am not harking. As I said, I chose {00} in advance because it has special significance for CHSH (only one of the terms is weighted negatively). For example, the local model would not reproduce the experiment if I postselected a different experiment. The fact that {00} is the only one of the four experiments showing the large event deficit is telling.

Look at the whole picture, including the local simulation and the signaling; it's not all about a p-value for the settings distribution. Hensen et al. are hiding their raw data and I call upon them to release it.

Richard Gill:

( August 12th, 2016 9:37am UTC )

The correlations for 00, 01, and 10 are large and positive; the correlation for 11 is large and negative. In this experiment it is the {11} experiment which has special significance.

The four sample sizes are 53, 79, 62, 51. According to the chi-square test, they are just significantly different from a uniform distribution at the 5% level.
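That computation can be reproduced in a few lines. A sketch of the chi-square goodness-of-fit test on the four sample sizes, using the closed-form survival function for 3 degrees of freedom so that no statistics library is needed:

```python
import math

counts = [53, 79, 62, 51]                  # events per setting combination
expected = sum(counts) / 4                  # 61.25 under uniformity
chi2 = sum((c - expected) ** 2 / expected for c in counts)

# Survival function of the chi-square distribution with 3 degrees of
# freedom: Q(x) = erfc(sqrt(x/2)) + sqrt(2x/pi) * exp(-x/2)
p = math.erfc(math.sqrt(chi2 / 2)) + math.sqrt(2 * chi2 / math.pi) * math.exp(-chi2 / 2)

print(round(chi2, 2), round(p, 3))   # ~7.98, p ~ 0.046: just under the 5% level
```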

Regarding no-signalling, there are four no-signalling conditions. We can estimate four deviations from no-signalling from the data in the obvious way. Using standard formulas for multinomial variances and covariances (the data counts form four multinomial experiments each with four categories) one can compute an estimate of the 4x4 covariance matrix Sigma of those four "discrepancies". Finally, assuming an approximate Gaussian distribution of the discrepancies "d", we can compute the quadratic form d-transpose Sigma-inverse d and compare it with the chi-square distribution with four degrees of freedom. The result I found was not significant at all.

Adenier and Khrennikov use Poisson variances instead of multinomial and do not take account of the correlations between the four no-signalling discrepancies. Moreover, they look at the results obtained with very different parameters used to define heralding than those used by Hensen et al.

If you are a priori convinced that local realism is true and that experimenters cheat, then you can come up with an interpretation of any set of experimental data which fits your prior beliefs. Unfortunately the attitude will not encourage experimenters to collaborate with you.

Donald A. Graft:

( August 12th, 2016 11:32am UTC )

That is an unfair characterization of my attitude. I could retaliate in a similar manner regarding quantum nonlocalists, but it is unscientific. If you are suggesting that Hensen et al. are hiding their raw data because they think I have a bad attitude, you people are truly beyond salvation.

The fact of the matter is that the available data suggests harmful post-selection. The data set is too small to prove it outright, but Hensen et al. could easily resolve the reasonable doubt by releasing the full raw data. They refuse to do so without valid reason. It is a shameful display of arrogance and antiscience.

Donald A. Graft:

( August 11th, 2016 2:21pm UTC )

It is not difficult to see how the postselection leads to the artifactual violation of no-signaling discovered in the experiment by Bednorz and later Adenier and Khrennikov. Consider side A's counts of 1's for the {00} and {01} experiments. If side B's setting is not affecting the counts, as required by no-signaling, then these two side A counts should be close to each other. However, postselection of {00} events will selectively reduce the first side A count, thereby biasing the results and producing an artifactual violation of no-signaling. This is confirmed by numerical simulation, and of course, it has already been demonstrated in the experimental data.

The Hensen et al. experiment is arguably the worst Bell test ever performed. When the real detection loophole is excluded by efficient detectors, one has to simulate it with postselection, in order to achieve a violation of CHSH.

Richard Gill:

( August 11th, 2016 6:56pm UTC )

No-signalling is about relative frequencies, not about absolute frequencies. Postselection (removal) of +- and -+ joint outcomes from the set of 00 events does not have to change Prob(Alice outcome is + | settings are 00).
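A toy calculation with hypothetical counts for the {00} setting makes both halves of this point concrete: removing the +- and -+ events shifts Alice's conditional marginal when those events are unbalanced relative to the ++/-- split, and leaves it unchanged when they are balanced (all numbers below are made up for illustration):

```python
def alice_plus(pp, pm, mp, mm):
    """P(Alice = + | this setting pair) from joint outcome counts."""
    return (pp + pm) / (pp + pm + mp + mm)

def after_removal(pp, pm, mp, mm):
    """The same marginal after discarding all +- and -+ events."""
    return pp / (pp + mm)

# Unbalanced +-/-+ counts: the marginal moves (0.625 -> ~0.545)
print(alice_plus(30, 45, 20, 25), after_removal(30, 45, 20, 25))

# Balanced case: the marginal is unchanged (0.5 -> 0.5)
print(alice_plus(30, 20, 20, 30), after_removal(30, 20, 20, 30))
```

So removal of +- and -+ events can bias the relative frequencies, but it does not have to; which case obtains depends on the joint counts.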

Donald A. Graft:

( August 11th, 2016 7:11pm UTC )

Simulation proves you wrong. The experimental data itself also shows this effect, i.e., no-signaling violation, as shown by Bednorz, and Adenier and Khrennikov.

Stop trying to distract us with trivia. Tell your pals to release the raw data...without editing it! Shalm et al. have set an honorable precedent in this regard. Even though their data set is many gigabytes, it has all been made available. People download multi-gigabyte video files without thinking twice about it; it is nonsensical and disingenuous for Hensen et al. to claim that they can't make the data available because it is too big. Hensen et al. are hiding their raw data! The longer they withhold it, the more suspicious things look.

Richard Gill:

( August 11th, 2016 8:04pm UTC )

There is a lot wrong with Adenier and Khrennikov's statistics. The no-signalling violation is not present, when you do the statistics properly. (Bednorz is also not very convincing).

Donald A. Graft:

( August 11th, 2016 8:29pm UTC )

Publish your critique of the work by Bednorz and Adenier et al. if you think it is wrong. Start new threads here for their papers. It's so easy to claim things, but not so easy to prove them. Anyway, it's just another distraction. I'll try to get you back on track...

Where is the raw data? Why is it being hidden? When will it be released?

Pierre-Yves Longaretti:

( August 17th, 2016 10:44am UTC )

Would you mind specifying what in your opinion is wrong with the Adenier and Khrennikov analysis? Or refer to a paper or preprint in which you do so? Thanks

Richard Gill:

( August 17th, 2016 11:16am UTC )

I wrote the following in an earlier posting: "Regarding no-signalling, there are four no-signalling conditions. We can estimate four deviations from no-signalling from the data in the obvious way. Using standard formulas for multinomial variances and covariances (the data counts form four multinomial experiments each with four categories) one can compute an estimate of the 4x4 covariance matrix Sigma of those four "discrepancies". Finally, assuming an approximate Gaussian distribution of the discrepancies "d", we can compute the quadratic form d-transpose Sigma-inverse d and compare it with the chi-square distribution with four degrees of freedom. The result I found was not significant at all. Adenier and Khrennikov use Poisson variances instead of multinomial and do not take account of the correlations between the four no-signalling discrepancies. Moreover, they look at the results obtained with very different parameters used to define heralding than those used by Hensen et al."

Pierre-Yves Longaretti:

( August 17th, 2016 11:24am UTC )

Thanks for repeating the argument (this is a long post and I am afraid I skipped some of it). I understand your point. I will look into this when I find the time (QM nonlocality being only a hobby of mine).

"The simulations presented in the referred site are either explicitly non local, or depend on the well known data discarding mechanism first proposed by Pearle (which opens a detection loophole):

http://www.sciphysicsforums.com/spfbb1/viewtopic.php?f=6&t=197#p5345 "

This claim is evidently false. Anyone with any knowledge of the subject can check for themselves that the simulations presented at the referred site are *manifestly local*, and they have *nothing whatsoever* to do with *any loopholes*, let alone the "detection loophole."

http://rpubs.com/jjc/105450

http://rpubs.com/jjc/99993

There is one-to-one correspondence in these simulations between the event-ready initial states "e" and the measurement results A and B. Thus there is no question of any "loopholes."

The UnReg above further claims that

"In fact, Delft's experiment is the last nail in the coffin of the S^3 cult."

This is a strange comment. S^3 is part of a well known solution of Einstein's field equations of general relativity. Is Einstein's general relativity a "cult"? http://arxiv.org/abs/1405.2355

The use of such unprofessional language, with derogatory words like "cult", suggests that the UnReg is unable to find anything wrong with the physics and mathematics of Einstein's S^3 model of the universe, or with the corresponding simulations of the EPR-Bell correlations presented at the referred site above.

Admittedly, however, it requires some knowledge of basic physics to understand all this.

That excuse is even lamer. Why not simply admit that the experimenters failed to verify the quantum mechanical prediction? Better luck next time, I'd say.

Interesting. So there is now a "theorem" that prevents realization of the quantum mechanical prediction of S = 2.828 and the experimenters have verified this so-called theorem. In that case those who proved that "theorem" and the experimenters who verified it in their experiment should be awarded Nobel Prizes at once for exhibiting violation of quantum mechanics.

In fact, this is a major revolution in physics. Please report this to the New York Times.
