"Analysis of the Christensen et al Clauser-Horne (CH)-Inequality-Based Test of Local Realism"

Comments (240):

Unregistered Submission:

( January 6th, 2015 9:29pm UTC )

BTW I think the end of this Bell business will be when it is finally understood that post-selection is implied in the QM framework, i.e., you can only obtain results matching QM by post-selecting, with absolutely no impact on local causality. Your work is definitely a great contribution towards this.

Donald Graft:

( January 7th, 2015 2:22am UTC )

Thank you. Far from retracting Christensen et al, we should memorialize it. Christensen et al brilliantly performed a crucial experiment at an historic moment for our understanding of quantum foundations. They reported experimental results and supplied the data and materials to allow multiple independent analyses. Brad Christensen is a great scientist, brilliant really I think, and he encouraged and supported me even until this very moment (check my Acknowledgments). They are an experimental group. Theorists need the data! It is the first decisive EPRB experiment; that Christensen et al confirms local realism is huge.

I am writing a new paper called "The Rise and Fall of Quantum Nonlocality". Once nonlocality is eliminated, all the rest (teleportation, swapping, quantum speedup to the extent that it relies on nonlocal entanglement, etc.) goes too. It is all a delusion. We have to move on.

Quantum computation without nonlocality reduces to analog computation. It has its place, but the requirement for stable super-high precision makes it impractical. That's why the modern world turned to digital representations. Engineers are all too well aware of these things. I tried, without luck, to recommend to one prominent quantum mysterian that he include local realists in his team. That means it is not about an objective pursuit of truth. Now it's clear that for the quantum mysterians it is not about truth; it is about positions and funding, external consulting contracts, startups, prizes, and other sinister things. Don't get me started on the state of modern "science". It has been mangled by government funding.

To be frank, I did not post this under the Christensen paper DOI specifically because I so highly value the work and would not wish to impugn it in any way. Theoretical fashions come and go; it is an experiment...with real data!

You're absolutely correct about post-selection and thank you for bringing it up. I just read a paper that bemoaned the fact that their "quantum circuits" could show nonlocality only after post-selection. The quantum mysterians know they are in a weak position; that's why they now talk about "hidden nonlocality", i.e., nonlocality that shows only after you cheat. If it wasn't such a tragedy for science and the public purse, it would be laughable.

John Bell's work is not challenged in any way. His is a monumental contribution to physics. Here's the takeaway: even quantum theory must face the no-go results. It is only the incoherent idea that a joint distribution can be sampled with marginal measurements that led to the giant mistake of thinking QM predicts a violation. I cover this in detail in my paper "On reconciling quantum mechanics and local realism" [arXiv:1309.1153].

This "rational interpretation" in my estimation completely resolves the EPR paradox in the Bohm-Aharonov form. The original position-momentum formulation of EPR is easily resolved by Einstein's statistical interpretation. It has always been the irrational idea of quantum nonlocality that blocked proper understanding.

Local realists need to go on the offensive now. Our position is sound and that of our opponents is weak. The idea of "loopholes" is just a way the mysterians use to subtly ridicule us. That kind of talk is not scientific. But now we can turn it back on them. Zeilinger told me "oh you've found another loophole" and I responded "no, I have exposed yours".

Richard Gill:

( January 7th, 2015 9:07am UTC )

You start with points a, b, c.

Point a is meaningless.

Point b is imprecise: *past* experiments (restricting attention to those performed under a strict, proper protocol) did not violate appropriate generalised Bell-type inequalities, with the possible exception of the Christensen et al. and Giustina et al. experiments of last year. The data analysis needs to be checked, and there are different, legitimate ways to process the data, so the jury is still out on this. Your new analysis is an interesting contribution. Even if those experiments did not quite achieve significant violations, it is clear that they came extremely close; these experiments are a huge improvement on the previous generation. Unless one is a dogmatic local realist, it would not seem implausible that a "next generation" of experiments will get us a whole lot further.

Point c is what I called Bell's fifth position. It is indeed logically possible that QM itself, through uncertainty relations, prevents us from engineering "to order" well-entangled states in distant, space-time-localized, and rapidly-measurable subsystems.

I would say that right now, all options are wide open, and we need to urge the experimentalists to work hard on one final major push forward. Only if they fail does the "fifth position" start to gain credibility. Anyway it needs serious theoretical investigation and analysis, which so far has hardly been done.

Donald Graft:

( January 7th, 2015 4:28pm UTC )

a) is not "meaningless" and it seems trite and unscientific for anybody to leave it at that. It is very clear. To a statistician it should even be obvious. It's clearly addressed in my paper "On reconciling quantum mechanics and local realism". Please re-read the section covering separated measurements and advise me if there is not a demonstration of the irrationality of applying a joint prediction in a case of separated measurements.

Regarding experiments: sure, the earlier ones are useless. Everybody knows the detection threshold for QM to predict a violation of CHSH, yet Weihs does an experiment with 5% efficiency and then reports a visibility of 98%. And people keep citing it. It's absurd on its face. How much money was wasted on this? For the new experiments, we have Christensen et al and Giustina et al. Christensen et al decisively confirms local realism. The Giustina et al data was recently released (January 2015) and is being analyzed independently.

The idea of "fifth position" is incoherent, especially when an experiment has already been performed that decides the issue. Nature is conspiring to prevent a decisive experiment? Some hand-waving about the uncertainty principle?

One has to adjust one's ways to the realities of the experimental evidence. Faith is not the right way. The great Max Planck faced a crisis when experiment disconfirmed the Planck-Wien radiation law (which Planck had claimed to derive from the second law of thermodynamics!). But he absorbed the blow and adjusted, contributed brilliantly to the resolution of that stark confrontation between theory and experiment, and went down in history for his contributions to physics. We can all do the same.

Richard Gill:

( January 7th, 2015 6:52pm UTC )

The issue here is not faith, but logic. The Christensen et al. experiment might be consistent with local realism if, when analysed properly, it turns out not to have violated any of the appropriate generalised Bell-type inequalities.

Therefore, if you have prior *faith* in local realism, then your faith is *confirmed*. Well that is nice for those with such faith. They can continue to sleep soundly in their beds.

The "fifth position" is Santos' position: quantum mechanics is true, and quantum mechanics itself, through typical QM uncertainty relations, prevents a successful loophole-free experiment.

If the Christensen et al. experiment, and the Giustina et al. experiment too, is indeed consistent with local realism, then Bell's fifth position (Santos' position) also remains a logical option. QM can be true *and* every experiment will always have a "local realist" explanation (and quantum computers will never get off the ground, because entanglement cannot be maintained over large enough distances and for long enough times).

But that does not mean that there ever has to be a theory better than QM at predicting experimental results. The local realist explanation of each entanglement-requiring prediction of QM might be totally unphysical and, moreover, different for each different experiment.

I put my money on a next generation of experiments. Christensen and Giustina have gone a huge distance beyond Weihs, and now they are right there at the touch line. Is no more improvement possible? I keep my mind open, I am not blinded by some faith or other.

Donald Graft:

( January 7th, 2015 9:03pm UTC )

I don't challenge QM. I have said it explicitly multiple times (see point c in the first post).

They're at the "touch line". Nonsense. Christensen et al have published a decisive experiment (with real data). How can it be faulted?

Heine Rasmussen :

( January 7th, 2015 9:29pm UTC )

The premise of Graft's article, that we should expect to see marginal and not joint distributions in Bell-type experiments, and that this is actually what is predicted by QM, indicates that the author is a crackpot. His increasingly heated responses in this forum tend to reinforce that impression.

On the other hand, he has acquired the raw data from the Christensen et al. experiment, and apparently done a thorough job analysing it (misguided or not, too early for me to tell; lots of data to go through).

Since Graft's analysis seems to be explained rather clearly in his paper, I suggest that further feedback be targeted at actual faults of his analysis (if there are any), and that we try to avoid a general discussion of the foundations of QM.

Richard Gill:

( January 7th, 2015 10:11pm UTC )

I agree: "further feedback to be targeted to actual faults of his analysis" ... or confirmation of his analysis, of course! And that will take some time.

Discussing foundations of QM is like discussing religion - with the best of intentions, you still very quickly tread on someone's toes.

Donald Graft:

( January 7th, 2015 10:31pm UTC )

We would all like to avoid a discussion of quantum foundations! It is so clear, how could I have forgotten?

"And that will take some time"

How much time does it take to look at the two short and very clear MATLAB .m files I have published, and to compare their outputs: the original compared to the fixed version?

Donald Graft:

( January 7th, 2015 10:32pm UTC )

"Discussing foundations of QM is like discussing religion"

Some cannot distinguish between Science and Religion.

Heine Rasmussen :

( January 7th, 2015 11:52pm UTC )

Graft, the way this forum works, Peer nicknames are assigned per thread, so Peer 1 in one thread will be a completely different person from Peer 1 in another thread. Anyway, my posts in this thread now show my real name.

And your attitude in this thread is doing you a terrible disservice if your goal is to be taken seriously by the professional community.

Donald Graft:

( January 8th, 2015 4:37pm UTC )

Thank you for the advice, but it's not about serving me; my goal is to discover and reveal the truth and to add to human knowledge to the extent that my soul and the creative spark of life allow. Always embrace and sustain the divine spark of life. Remain indifferent to what others think of one personally or if they deign to accept one. It has been said before, some clubs are not worth belonging to. I will use any tactics to defeat anti-science.

Like you, I also try to be open to any feedback, positive or negative. Free, honest, open, scientific exchange, a feast for the scientific soul. What a wonderful resource PubPeer is, at once manifesting and embracing the brilliant spark of creative thought, dazzling, warming, and sustaining us.

Jan-Åke Larsson:

( January 7th, 2015 11:16am UTC )

Let me adjust a statement attributed to me: I think that even if the Christensen et al data indeed does not show a violation of CH, one cannot take that as evidence that local realism is confirmed. What one can conclude is that local realism is not rejected. This is the nature of the test: "violation"="reject local realism", "no violation"="no conclusion".

It is correct that postselection is a dangerous thing to do. For example, a detector could postselect "detection" or "no detection" depending on a hypothesized hidden variable and the local measurement setting. This is the reason for the modified inequality needed when detectors are inefficient. Another example is time delays in the experiment that can (and will) depend on the local settings, which causes the coincidence-time loophole, here because of postselection on coincidence. A third example is the Franson interferometer, where postselection gives an apparent violation of the CHSH inequality in spite of the existence of a local realist model for the experiment.
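For reference, the unmodified CH inequality at issue throughout this thread can be stated as follows (my transcription of the standard Clauser-Horne 1974 probability form; here p(a,b) is the probability of a joint detection at settings a and b, and p(a'), p(b) are single-wing detection probabilities):

```latex
-1 \;\le\; p(a,b) - p(a,b') + p(a',b) + p(a',b') - p(a') - p(b) \;\le\; 0
```

Local realism implies both bounds; the experiments under discussion aim to push the middle expression above 0.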

However, postselection *is* allowed if the postselection is taken into account in the data analysis. For the efficiency problem this results in a modified (more difficult) bound for the CHSH inequality; the CH inequality is unchanged (but in itself more difficult to violate). For the coincidence problem, this also results in a modified bound (even more difficult to violate). Under certain restrictions on the experiment, a somewhat modified CH inequality is still valid; the simplest way to reach this is to clock externally, as Richard suggests. For these modern experiments, there are very few time windows with several clicks, few enough not to influence the statistics significantly (or almost at all). Don should be able to see this in Christensen's data, and he will see it in Giustina's when (not if) he gets it.
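Larsson's remark that there are "very few time windows with several clicks" is directly checkable from released timestamp data. A minimal sketch of that check on toy data (the list-of-windows representation here is my assumption, not the actual Christensen file format):

```python
# Count the fraction of externally clocked trial windows that contain
# more than one detector click in a given wing. If multi-click windows
# are rare, this fraction should come out negligible on the real data.

def multi_click_fraction(windows):
    """windows: list of lists; each inner list holds the click
    timestamps recorded in one trial window."""
    if not windows:
        return 0.0
    multi = sum(1 for clicks in windows if len(clicks) > 1)
    return multi / len(windows)

# Hypothetical toy run: 6 windows, exactly one of which has two clicks.
trials = [[0.10], [], [0.30, 0.31], [], [0.70], []]
print(multi_click_fraction(trials))  # 1/6, i.e. about 0.167
```

Running this over the real per-trial click lists would settle, for that data set, whether multi-click windows are frequent enough to matter statistically.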

Unregistered Submission:

( January 7th, 2015 2:36pm UTC )

I thought that was precisely what Don analysed in the Christensen data, finding that, correctly analysed, it does not violate CH?

Within each time window they have multiple clicks, not one, but they counted only a single click, thereby over-counting coincidences and under-counting singles. So I'm afraid that in the Christensen case they unknowingly post-processed the data.

I wonder why it is so difficult for the Giustina group to share their data and detailed methods. I asked them for the "special" coincidence-matching algorithm mentioned in the paper, and their response was that it would be published soon. That was almost a year ago. It is very disturbing that Don has been asking for the data for 7 months. This is why raw data should be required to be deposited in a public repository with a DOI prior to publication, to be held in confidence until the paper is published (or a short time afterwards). This is already common practice in many disciplines and is long overdue in foundations research. It is strange that the results of taxpayer-funded research should be withheld from taxpayers after publication. The data does not belong to the experimenters; it belongs to the funders of the research.

Donald Graft:

( January 7th, 2015 4:36pm UTC )

JA, good to see you post here. Thank you for sharing your thoughts. Seems to me though that your clarification just repeats what I had already described as your position, i.e., Christensen et al shows no violation and so it is inadequate.

Your reasoning about the experiments is unscientific (amusing and revealing in a way too). You say that experiments can only confirm nonlocality; if they don't then the experiment is bad. But surely an experiment can confirm locality. And I claim that Christensen et al is that experiment. So, specifically, I would like to ask you: what is bad or inadequate about the Christensen et al experiment?

Your suggestion to modify the inequalities to allow for post-selection is bizarre but perhaps not uninteresting. However, the Giustina et al paper does not do that. They claim to have eliminated all post-selection through the experimental design. That is why it is so important for them to publish their data and analyses, even if so egregiously belatedly. Immunity to detection losses is not the same thing as immunity to post-selection (this is discussed further in the thread with excellent analysis). I cannot even imagine any way that post-selection can be excluded; I can always throw out some data to produce whatever result I want. But that is not science; that is chicanery. It's like Maxwell's demon: losing events randomly is one thing, but picking which ones to lose is quite another.

The external clocking has already been done by Christensen et al via the Pockels cell openings. I am baffled that you aren't aware of this. And the suggestion that there are not enough multiple detections to affect the statistics is ridiculous. Look at the histogram I published and look at the critique section, where I show that it is beyond doubt that the multiple detections cause the apparent violation, through post-selection, and that the experiment when properly analyzed does not violate CH (just as rational theorists expect).

Can you please tell us your modified CH inequality so that I can test it against available data? Or maybe you really do claim that the unmodified CH excludes post-selection?! But that is manifestly not the case, as I have clearly shown in my paper. Can you fault it in some way? Thank you.

Richard Gill:

( January 7th, 2015 6:41pm UTC )

Unregistered Submitter wrote: "I though that was precisely what Don analysed in the Christensen data and found that correctly analysed it does not violate the CH".

There is not one unique correct way to analyse that data. In particular, CH is not the only game in town. If we have externally determined time windows (so no "coincidence loophole"), we still have to decide on a single "outcome" per time window. For instance, we could take the *first* event, with value "+" or "-" if indeed there is any detection event at all, and otherwise the value "none". Now we have three possible outcomes for each time window in each wing of the experiment. Local realism is characterised by a (finite) family of generalised Bell inequalities, of which CH is only one: they form the faces of a convex polytope, the so-called local polytope. If we use detection-time-determined windows then we are subject to the coincidence loophole, but we can take that into account, too.
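The "first event per window" rule described above is easy to make concrete. A sketch under my own assumptions about the data representation (each click as a (timestamp, sign) pair; illustrative only, not any published analysis code):

```python
# Assign one outcome per externally clocked time window: the sign of
# the earliest click in the window, or 'none' if the window is empty.
# This yields three possible outcomes per window in each wing.

def window_outcome(clicks):
    """clicks: list of (time, sign) tuples for one wing in one window,
    where sign is '+' or '-'. Returns '+', '-', or 'none'."""
    if not clicks:
        return 'none'
    first = min(clicks, key=lambda c: c[0])  # earliest event wins
    return first[1]

print(window_outcome([]))                          # 'none'
print(window_outcome([(0.40, '-')]))               # '-'
print(window_outcome([(0.32, '+'), (0.35, '-')]))  # '+' (earliest click)
```

Note that no window is discarded under this rule: every experimental unit (time window) contributes an outcome, which is the distinction between outcome assignment and post-selection drawn later in this thread.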

So Don's analysis is fascinating and important but it is not the last word.

Unregistered Submission:

( January 7th, 2015 8:01pm UTC )

Richard Gill wrote: "There is not one unique correct way to analyse that data"

Let me get this straight. You believe Don's analysis is correct, but you also believe there are other correct ways of analysing the data which will reach a different conclusion? How is that not the same as simultaneously believing two contradictory propositions? In any case, only one of the two methods under discussion can be correct. Either Don's or that of the original authors.

Donald Graft:

( January 7th, 2015 9:10pm UTC )

There are multiple correct analyses. There are also multiple incorrect ones, such as resorting to post-selection (data discarding).

"we could take the *first* event, value"

Still can't give up that data discarding? I'm sorry that the experimental data does not support nonlocality.

"we could take the *first* event, value"

Still can't give up that data discarding? I'm sorry that the experimental data does not support nonlocality.

Richard Gill:

( January 7th, 2015 9:37pm UTC )

Donald: I said that your point "a" was meaningless. I should have been more precise and said that I found it meaningless. It did not make any sense to me. It still doesn't make any sense to me.

Then on some other issues which have come up: taking the first event in a time interval to define an outcome for that interval is not post-selection. Post-selection refers to selecting a subset of the experimental units. The experimental units are time-intervals. Not particles, not entangled pairs, ... read Bertlmann's socks.

The CGLMP inequality doesn't allow for cheating. CH is just CHSH. They represent the two kinds of faces of the local polytope: the set of generalised Bell inequalities which bound all sets of local realist probability distributions for a given type of experiment. Here: a two-party, two-settings-per-party, three-outcomes-per-setting experiment. For a 2x2x3 experiment, the only generalised Bell inequalities are the CHSH inequalities (obtained by merging some of the outcomes) and the CGLMP inequalities. But you say you already know all this stuff.

And we therefore agree that there are multiple correct analyses. It's basic knowledge in this field, I wouldn't call it "overblown rhetoric", just important knowledge.

Unregistered Submitter: it looks to me as if Don's analysis is correct but it need not be the optimal correct way to analyse this data.

Donald Graft:

( January 7th, 2015 9:57pm UTC )

You can't understand an idea, therefore it is meaningless?!

My way is not the optimal way? Nobody claimed it. There are multiple ways to demonstrate things. Isn't my simple way satisfactory? In what way can it be faulted?

"But you say you already know all this stuff."

That's right, and so do all the other competent foundational researchers and independent thinkers around the globe.

Jan-Åke Larsson:

( January 8th, 2015 8:44am UTC )

CH and CHSH are equivalent.

Donald Graft:

( January 8th, 2015 4:38pm UTC )

Are you sure? CHSH is subject to detection losses and CH is not. This is well-known. Surely you know about the "no enhancement" assumption and all that? CH requires no such dubious additional assumptions. That is its strength, and that is the downfall for the quantum mysterians.

Jan-Åke Larsson:

( January 8th, 2015 6:26pm UTC )

CH and CHSH are mathematically equivalent. CHSH can be obtained by proper combination of several CH inequalities. CH can be obtained via proper labeling of the outcomes in a CHSH inequality, even when detection losses are present. This is an exercise in transforming expectation values into probabilities and back. Properly done, you will recover CH from CHSH, including the 2/3 efficiency limit in the non-maximal entanglement case. I did not stress this enough in my loopholes review doi:10.1088/1751-8113/47/42/424003.
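For the ideal binary-outcome case, one direction of the equivalence Larsson states is a short textbook calculation (the lossy case needs the relabeling argument from his review). Writing $P_{ij} = P({+}{+} \mid a_i b_j)$:

```latex
\begin{align}
E(a_i,b_j) &= 4P_{ij} - 2P(+\mid a_i) - 2P(+\mid b_j) + 1,\\
S &= E(a_1,b_1) + E(a_1,b_2) + E(a_2,b_1) - E(a_2,b_2)\\
  &= 4\bigl[P_{11} + P_{12} + P_{21} - P_{22} - P(+\mid a_1) - P(+\mid b_1)\bigr] + 2,
\end{align}
```

so CHSH, $S \le 2$, holds if and only if the CH expression $P_{11} + P_{12} + P_{21} - P_{22} - P(+\mid a_1) - P(+\mid b_1) \le 0$ does.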

Donald Graft:

( January 8th, 2015 7:08pm UTC )

I'll have a look but it appears doubtful to me. Anyway, it doesn't affect my argument at all. I apply the simple unadorned CH inequality. There is no need to combine inequalities; that is the simple beauty of CH. If you find them equivalent, you should have no problem using CH.

Richard Gill:

( January 10th, 2015 5:17am UTC )

This is not about "combining inequalities". It is about coarse-graining the data obtained in each time-slot in each wing of the experiment, separately. CH results from coarse-graining a ternary outcome with possible values "+", "-", "no detection" to binary, by merging two of the three together.

Such an operation is "reducing the data" but it is local: it could have been done *inside* the box in Figure 7 of Bertlmann's socks. It is not *post-selection*. Post-selection is the technical term for discarding some time-slots altogether. That can bias the correlations, since the fact of discarding depends on settings in both wings of the experiment and the hidden variables in the system.
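The local coarse-graining described above can be sketched as follows (my own toy illustration, not code from any of the papers):

```python
# Merging '-' and 'none' locally turns the ternary per-window outcomes into
# the binary data CH uses: only '+' counts, for singles and coincidences alike.
# Note each wing's outcomes are reduced without reference to the other wing.

def ch_counts(a_outcomes, b_outcomes):
    """Count '+' singles and '++' coincidences across paired windows."""
    n_a  = sum(1 for a in a_outcomes if a == '+')
    n_b  = sum(1 for b in b_outcomes if b == '+')
    n_ab = sum(1 for a, b in zip(a_outcomes, b_outcomes) if a == b == '+')
    return n_a, n_b, n_ab

a = ['+', '-', 'none', '+']
b = ['+', 'none', '+', '-']
print(ch_counts(a, b))   # → (2, 2, 1)
```

No window is ever discarded; every time-slot contributes an outcome, which is why this reduction cannot bias the correlations the way post-selection can.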

Unregistered Submission:

( January 14th, 2015 2:56pm UTC )

Unregistered Submission: ( January 7th, 2015 2:36pm UTC ) said:

"The data does not belong to the experimenters, it belongs to the funders of the research."

Donald Graft: ( January 7th, 2015 6:22pm UTC ) said:

"A point that is sadly lost on some research groups."

And apparently to some participants of this discussion. An obvious corollary is that the experimenters may not be free or able to distribute the data. It all depends on agreements with the funders.

"The data does not belong to the experimenters, it belongs to the funders of the research."

Donald Graft: ( January 7th, 2015 6:22pm UTC ) said:

"A point that is sadly lost on some research groups."

And apparently to some participants of this discussion. An obvious corollary is that the experiments may not be free or able to distribute the data. It all depends on agreements with the funders.

Unregistered Submission:

( January 15th, 2015 2:53pm UTC )

Jan-Åke Larsson: ( January 7th, 2015 11:16am UTC ): "Let me adjust a statement attributed to me: I think that even if the Christensen et al data indeed does not show a violation of CH, one cannot take that as evidence that local realism is confirmed. What one can conclude is that local realism is not rejected. This is the nature of the test: "violation"="reject local realism", "no violation"="no conclusion"."

Some people use the word "confirmed" in the sense "not rejected" as in "We observed a black cat, which confirms the hypothesis that all crows are white."

Donald Graft:

( January 15th, 2015 6:00pm UTC )

The strong sense is logically correct. The Christensen et al experiment both confirms locality and disconfirms nonlocality. If A is true, then notA is false; and if notA is true, then A is false.

Unregistered Submission:

( January 16th, 2015 12:40pm UTC )

Donald Graft: "The Christensen et al experiment both confirms locality and disconfirms nonlocality. If A is true, then notA is false; and if notA is true; then A is false."

And if A is unknown then notA is unknown, too. Which is what shall be concluded, unless one can show that the result contradicts one of the hypotheses with sufficient significance.

Donald Graft:

( January 16th, 2015 2:12pm UTC )

"And if A is unknown then notA is unknown, too. Which shall be concluded unless one can show that the result contradicts one of the hypotheses with sufficient significance."

I believe that I have shown that.

Unregistered Submission:

( January 16th, 2015 6:08pm UTC )

Donald Graft: ""And if A is unknown then notA is unknown, too. Which shall be concluded unless one can show that the result contradicts one of the hypotheses with sufficient significance."

I believe that I have shown that."

On which significance level? And where is that result (page and paragraph or table number)? In the article under discussion there are several mentions of insignificant results but I can't spot any significant one.

Donald Graft:

( January 16th, 2015 6:36pm UTC )

If you have a violation of CH you may consider the significance level thereof. If you have no violation, there is no significance level to consider.

Unregistered Submission:

( January 16th, 2015 9:40pm UTC )

Donald Graft: ( January 16th, 2015 6:36pm UTC ): "If you have a violation of CH you may consider the significance level thereof. If you have no violation, there is no significance level to consider."

If you have no significance level, then A is unknown and nonA is unknown too.

Unregistered Submission:

( January 18th, 2015 5:13pm UTC )

If I open my lunchbox and my sandwich is missing, I don't start thinking about standard deviations.

Donald Graft:

( January 8th, 2015 4:24am UTC )

As the current interpretations of EPR leave us unsatisfied and new thinking is therefore needed, I offer the latest executive summary of my "rational interpretation":

a. The quantum joint prediction cannot be recovered in an experiment with separated (marginal) measurements, just as for classical probability. Quantum mechanics correctly applied does not predict a violation of CH.

b. Valid experiments properly interpreted do not violate the CH inequality and therefore confirm local realism.

c. That does not mean quantum mechanics is wrong. The correct quantum mechanics prediction for an EPRB experiment must use the marginals (via reduced density matrices) and not the joint distribution. The essence of quantum mechanics is just fine; we need only to be careful about separated measurement situations, just as we are in classical probability theory. Just as we would not blindly expect the joint prediction to apply in the presence of heavy decoherence, we should not expect it to apply in a case of separated measurement.

d. John Bell's work is not challenged in any way. Even quantum theory must face the no-go results. It is only the misguided idea that a joint distribution can be sampled with marginal measurements that led to the mistake of thinking QM predicts a violation.

e. This "rational interpretation" completely resolves the EPR paradox in the Bohm-Aharonov formulation. The original position-momentum formulation of EPR is easily resolved by Einstein's statistical interpretation. It has always been the nonsensical idea of quantum nonlocality that blocked proper understanding. Without nonlocality, a consistent axiom set for physics is restored.

Richard Gill:

( January 8th, 2015 8:07am UTC )

Thanks for your new summary.

This is what doesn't make sense to me in point (a): "just as for classical probability".

Also in (d) the same thing comes up: "the misguided idea that a joint distribution can be sampled with marginal measurements".

Two marginal measurements carried out on the same system are one joint measurement. There is no difference here between classical and quantum: and there is no formal prohibition, in either case (in the quantum situation restricting attention, of course, to the case of compatible measurements). But perhaps you are using the words in a different way from how they are used in probability and statistics.

I think that what you are saying is that by the time the two particles have separated, their joint quantum state has decohered into a so-called separable state. The marginal state of each particle separately is not changed, but their joint state has changed. So the issue is not that the two measurements can't be done; it is that the state of the system is no longer what it first was, by the time the two measurements can be done.

This is compatible with experimental results to date, and it is a logical possibility (Bell's fifth position, i.e. Santos' position) that, precisely because of QM, this will remain the case forever.

Jan-Åke Larsson:

( January 8th, 2015 8:41am UTC )

Incidentally, that the quantum state could evolve into a separable state was already discussed in Furry, W. H. "Note on the Quantum-Mechanical Theory of Measurement." Physical Review 49, no. 5 (1936): 393, 476. doi:10.1103/PhysRev.49.393. See especially the note on page 476.

Donald Graft:

( January 8th, 2015 2:27pm UTC )

Good morning, all!

I understand probability perfectly well. Read my papers carefully, especially "On reconciling quantum mechanics and local realism". It is explained very clearly there, and I have referred to it once already. I'll be happy to discuss it if it can be faulted. The point is basic and obvious, and has devastating implications, and that is why the quantum mysterians pretend not to understand. It doesn't matter if the source state is nonseparable; any experiment that measures marginals can deliver only separable statistics.

Study the field of copulas, for example. One will find methods that attempt to recover joint distributions from marginals. Of course, it's possible only in certain cases, and the result is an approximation. Why would such a field exist? But there is no rescue via copulas; I have already looked into that. In any case, it would not be legitimate to insert copulas into the correlation analysis.
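The marginals-versus-joint point can be illustrated with a toy example (my own construction, not taken from any of the papers under discussion):

```python
# Toy illustration: two different joint distributions over a pair of binary
# variables share identical marginals, so the marginals alone cannot determine
# the joint -- which is exactly the gap the field of copulas tries to bridge.

def marginals(joint):
    # joint[i][j] = P(A = i, B = j) for outcomes indexed 0 and 1
    p_a = [sum(row) for row in joint]
    p_b = [sum(col) for col in zip(*joint)]
    return p_a, p_b

correlated  = [[0.5, 0.0], [0.0, 0.5]]      # A and B always agree
independent = [[0.25, 0.25], [0.25, 0.25]]  # A and B statistically independent

print(marginals(correlated))    # → ([0.5, 0.5], [0.5, 0.5])
print(marginals(independent))   # → ([0.5, 0.5], [0.5, 0.5])  -- same marginals
```

Both joints reproduce the same single-wing statistics, yet their correlations differ completely.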

Give my ideas a fair consideration, that is all I ask.

@JA

Thank you for your comment. Please realize that I am not talking about decoherence or any kind of state evolution. And I am quite familiar with the classic literature. I mentioned decoherence only as an analogy to show that we do not always blindly expect the joint prediction to apply. By what special powers do the quantum mysterians seek to perfectly recover joints from marginals? I'm really curious to hear your answer. Thanks for directing more attention to that as it is crucial.

Jan-Åke Larsson:

( January 8th, 2015 2:30pm UTC )

I should point out that neither Christensen et al nor Giustina et al claim to perform tests of local realism. Both experiments change the measurement setting too slowly. Both groups clearly state this in their papers:

Christensen et al write "...the high system efficiency allowed us to truly violate a CH Bell inequality with no fair-sampling assumption (but still critically relying on the no-signaling assumption that leaves the causality loophole open)."

Giustina et al write "Using photons, we have demonstrated an experimental Bell inequality violation that closes the fair-sampling loophole. ... To achieve a loophole-free Bell test as described above, it will be necessary to introduce space-like separation sufficient to prohibit unwanted communication between Alice, Bob, the measurement decisions, and the photon emission event."

Thus, neither experiment can lead to the conclusion that local realism is rejected. However, this does not imply that local realism is "confirmed"; it rather tells us that no conclusion can be drawn.

Donald Graft:

( January 8th, 2015 3:37pm UTC )

JA, respectfully, that's nonsense. The experiment collected data (real data!). I analyzed it. It doesn't matter what they claim or believe. If no experiment can confirm locality, then nonlocality is unfalsifiable and only a faith. "Our experiment disconfirmed the joint prediction, therefore no conclusion can be drawn." Are you kidding? Just trying to keep the gravy train going indefinitely? I am shocked that a scientist can post what you have posted, that you place such thoughts in public.

I would like to point you to my supplementary materials, placed online when my paper was first submitted to arxiv. Now there you have sound science. Christensen et al also followed this process of sound science. Maybe you and the Zeilinger group can learn from it.

You are repeatedly evading the question about the "special window method". Aren't you able to respond to reasonable questions about your work, let alone provide the data and analyses supporting your conclusions?

I sense desperation among the quantum mysterians. It's time for them to follow Max Planck's principled and courageous approach.

Jan-Åke Larsson:

( January 8th, 2015 4:17pm UTC )

It is correct that nonlocality cannot be falsified; what you can do is argue that it is not needed, since there are no experiments that violate local realism. Experimenters aim to violate local realism but have not yet. I am agreeing with you, please accept that. I am trying to give you a better way of presenting your conclusion.

I have been trying to find what "special" could possibly refer to. I cannot find the word in the Giustina paper, or their later comment on arxiv, nor is it in our paper. I have read every part of the papers involving "coincidence" and "count", and found nothing out of the ordinary.

At http://people.isy.liu.se/jalar/belltiming/ you can find a small demo of coincidence counting. It uses data from the Weihs experiment. This has been present since circa 2005, so I am not evading anything.

Donald Graft:

( January 8th, 2015 4:33pm UTC )

The Weihs experiment is an amusement and I know all about coincidence counting. How can an experiment with 5% efficiency violate CHSH (with 98% visibility), given the detection efficiency threshold for QM to violate CHSH? It's only through the detection loophole, for which I presented a plausible, non-conspiratorial explanation in my paper on the Weihs experiment.

Let's drop "special" then, no problem. Can you answer the poster's query about the "window method" used? Thank you. Actually, if you just give us the data and your analysis code, we can take it from there.

Unregistered Submission:

( January 8th, 2015 4:49pm UTC )

On page 6 of 1212.0533 they say:

"Algorithms identify photon signatures in the analog output signal, determine an arrival time for each event, and count two-photon coincidences, all without requiring additional information from the user about the data."

What algorithm? Is it published or described anywhere? Since you have worked closely with the authors, I presume you know the details of these algorithms. What are they?

At the very least, do you agree that in order to reproduce their results other scientists need access to these algorithms as well as the raw data? I asked them for the algorithms one year ago and they told me it was being written up for publication.

Secondly, do you have algorithms and data to support your conclusions in the paper co-authored with them? You were the first author of that paper, so why can't you release the data that is the basis of your claims? Apparently, you had access to the data to be able to do the analysis, therefore the problem of access does not appear to be logistical. Does the Zeilinger group only provide data/supplementary materials from published articles to friendly collaborators?

It is disturbing and I implore you guys to clean this up promptly.

"Algorithms identify photon signatures in the analog output signal, determine an arrival time for each event, and count two-photon coincidences, all without requiring additional information from the user about the data."

What algorithm? Is it published/described anywhere. Since you have worked closely with the authors, I presume you know the details of these algorithms. What are they

At the very least, do you agree that in order to reproduce their results other scientists need access to these algorithms as well as the raw data? I asked them for the algorithms one year ago and they told me it was being written up for publication.

Secondly, do you have algorithms and data to support your conclusions in the paper co-authored with them. You were the first author of that paper so why can't you release the data that is the basis of your claims? Apparently, you had access to the data to be able to do the analysis, therefore the problem of access does not appear to be logistical. Does the Zeilinger group only provide data/supplementary materials from published articles to friendly collaborators?

It is disturbing and I implore you guys to clean this up promptly.

Jan-Åke Larsson:

( January 8th, 2015 6:09pm UTC )

The Weihs experiment is no joke, it constitutes crucial progress towards a proper violation. Not that you think proper violation is possible. But I would nevertheless advise a less hostile tone, this would help your credibility.

You have the code already. It is that program, rebuilt to handle CH, and read Giustina's data. Anton Zeilinger has repeatedly assured you that you will get the data within a few days (ten days from when your request reached him, he tells me); you will have to wait for that.

Jan-Åke Larsson:

( January 8th, 2015 6:44pm UTC )

I know the details of the coincidence counting, see my other posts. I do not have detailed knowledge of the analog-to-digital conversion and time tagging, but it involves distinguishing electronic noise from photon detection pulses, and setting a time stamp at a standardized point in the pulses. On the rising edge, I believe. I have a master's degree in Electrical Engineering (and Applied Physics), and to me it seems pretty standard.

Donald Graft:

( January 8th, 2015 8:50pm UTC )

Jan-Åke Larsson:

( January 8th, 2015 4:17pm UTC )

"nonlocality cannot be falsified"

This is one for the ages. Thank you for so clearly demonstrating the anti-scientific nature of the quantum mysterians.

Jan-Åke Larsson:

( January 8th, 2015 9:29pm UTC )

I should be clear here: I am of course talking about nonlocal realism being rejected the way you propose.

Local realism can be falsified, properly Popper-falsified, by a proper Bell violation.

Nonlocal realism cannot be falsified by not having a violation. I repeat: what you can do is argue that it is not needed, since there are no experiments that violate local realism. If you ask experimenters: there are none, yet.

Donald Graft:

( January 8th, 2015 10:36pm UTC )

Your game has no end and won't play in Peoria. Nobody is talking about nonlocal realism. That is more quantum mysterian obfuscation.

Christensen et al decisively disconfirms the quantum joint prediction and thus confirms local realism. Did you ask Christensen et al if they think their experiment is inadequate? I clearly showed that the Christensen et al data disconfirms quantum nonlocality. Are you able to fault it in any way? What would count as a confirmation of locality for you?

Unregistered Submission:

( January 8th, 2015 11:00pm UTC )

"Local realism can be falsified, properly Popper-falsified, by a proper Bell violation."

This is false. A proper Bell violation simply falsifies the suggestion that the inequalities apply to the experiment. The central assumption for deriving the inequalities is:

P(AB|a,b,l) = P(A|a,l)*P(B|b,l)

But it is ambiguous which of the following claims is meant:

a) If P(AB|a,b,l) = P(A|a,l)*P(B|b,l) then local realism is True.

b) If Local realism is True then P(AB|a,b,l) = P(A|a,l)*P(B|b,l).

(a) is correct, and (b) is false. It is very easy to find a situation in which local realism is true but the factorization does not hold, which falsifies (b). And if you start with (a), the only conclusion you can draw from a violation of the inequality is that P(AB|a,b,l) =/= P(A|a,l)*P(B|b,l) for the experiment.

This is why the loophole business is very misguided. Every loophole describes a possible locally realistic situation for which P(AB|a,b,l) =/= P(A|a,l)*P(B|b,l). If statement (b) above were correct to start with, why should any loophole exist at all? (b) is false; it is (a) that is correct.

Instead we have "frankenstein-like" inequalities patched with duct tape to address different loopholes, but never a single inequality that addresses them all. What should be done is to go back to the drawing board and formulate a proper loophole-free local realistic assumption, then derive new inequalities from it, and then test them experimentally. If this cannot be done, then it is also impossible to demonstrate that all possible loopholes have been found.

Or better still, simply test the assumption directly by measuring P(A|a,l) and P(A|B,a,b,l), without any use of inequalities. If the results are equal, then local realism is true.
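To illustrate the factorization condition discussed above: any model in which P(AB|a,b,l) = P(A|a,l)*P(B|b,l), with all probabilities in [0,1], satisfies the CH bound on average over the hidden variable. A minimal Monte Carlo sketch (the cosine response functions and the 0.75 efficiency factor are hypothetical, chosen only for illustration, and model neither experiment discussed here):

```python
import math
import random

def lhv_ch_value(pa, pb, n=200_000, seed=1):
    """Estimate the CH quantity for a factorizable LHV model.

    pa(setting, lam) and pb(setting, lam) are local detection
    probabilities; the joint probability factorizes as
    P(AB|a,b,lam) = pa(a,lam) * pb(b,lam).
    """
    rng = random.Random(seed)
    tot = 0.0
    for _ in range(n):
        lam = rng.uniform(0.0, 2.0 * math.pi)  # hidden variable

        def p(a, b):
            return pa(a, lam) * pb(b, lam)

        # CH combination: algebraically <= 0 for any x, y in [0,1]
        tot += (p(1, 1) + p(1, 2) + p(2, 1) - p(2, 2)
                - pa(1, lam) - pb(1, lam))
    return tot / n

# Hypothetical local responses: cosine-modulated detection, 75% ceiling.
angles = {1: 0.0, 2: math.pi / 4}
pa = lambda a, lam: 0.375 * (1.0 + math.cos(lam - angles[a]))
pb = lambda b, lam: 0.375 * (1.0 + math.cos(lam - angles[b]))

print(lhv_ch_value(pa, pb))  # non-positive for any factorized model
```

The point of the sketch is only that the bound follows from the factorized form (a) alone; no detail of the detectors matters.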

Donald Graft:

( January 8th, 2015 11:26pm UTC )

I appreciate your comment and sentiments but CH is not "frankenstein-like", nor is it patched with duct tape. Loopholes are only relevant if you doubt a claimed CH violation. If you claim and report no violation, then loopholes arise only on the quantum mysterian side, embodied in such tricks as post-selection. Local realists simply report that the data shows no violation and so locality is confirmed.

I too have thought about direct tests like you mention. It is a fruitful direction, but has a lot of problems. All avenues should be pursued, but we have a decisive one in hand: Christensen et al.

At some point, nonlocality becomes like perpetual motion. Sure, do an experiment to see what is possible if you like, but really, why bother?

I too have thought about direct tests like you mention. It is a fruitful direction, but has a lot of problems. All avenues should be pursued, but we have a decisive one in hand: Christensen et al.

At some point, nonlocality becomes like perpetual motion. Sure, do an experiment to see what is possible if you like, but really, why bother?

Unregistered Submission:

( January 9th, 2015 3:17am UTC )

I was referring to the inequalities patched to account for various loopholes, not the CH.

Donald Graft:

( January 9th, 2015 3:10pm UTC )

"I was referring to the inequalities patched to account for various loopholes, not the CH."

Ah, I see now. Thank you for clarifying that and sorry for my misunderstanding.

Unregistered Submission:

( January 8th, 2015 2:37pm UTC )

Questions for Don, Jan and Richard:

In the Christensen experiment, they state that the detection efficiency must be greater than 2/3.

1. What is the detection efficiency of the Christensen experiment in terms of total photon counts? (They report ~77%.)

2. What is the detection efficiency of the Christensen experiment in terms of "Pockels cells", or "time intervals"?

3. Given that they only consider one particle in any "Pockels cell", what, therefore, is the EFFECTIVE detection efficiency?

4. Given the answers to the above, can they still claim the "detection efficiency" loophole is closed?

Jan-Åke Larsson:

( January 8th, 2015 6:56pm UTC )

1. They estimate 75%+-2%; the estimate is based on coarse estimates for each component (source, transmission line, analyzer, and detectors). A proper evaluation would involve two detectors at each site (both experiments have only one detector at each site) and the quotient P(coincidence)/P(single); see my paper "Bell's Inequality and Detector Inefficiency," Phys. Rev. A 57:3304-8 (1998), doi:10.1103/PhysRevA.57.3304. It is important to point out that Christensen et al only need the estimate to see that they are in the good range.

2. A proper estimate of this would need two detectors at each site; perhaps a coarse estimate is possible somehow.

3. See 2.

4. Yes. None of 1,2,3 are needed, this only requires a violation of the CH inequality. (In this case, Donald's analysis would also play a role)

Donald Graft:

( January 8th, 2015 7:13pm UTC )

I concur with JA but add some things too.

1. They claim ~75%. But even here we need to also consider what portion of this is IID losses versus losses common to the two sides. Nobody ever talks about that. Common losses can occur as a function of a shared hidden variable, for instance. Common losses do not affect the CH metric; they only force you to run a longer experiment. How can we detect common losses? Well, that is like your questions 2 and 3. Everybody is open to suggestions.

2. It's not possible to answer without source tomography or something like JA suggests.

3. See 2. I agree with JA here.

4. I agree with JA's answer. These things affect the normalization, and hence the absolute value of the metric, not whether it is positive or negative. I factor out the absolute value of the metric by considering positivity, and it would be exciting to apply the positivity analysis to the Giustina et al data. That is why I asked for it so long ago. Be careful, though, not to confuse detection loss with post-selection, as I have previously mentioned here. Arbitrary post-selection (cheating) is not excluded by CH.

Jan-Åke Larsson:

( January 8th, 2015 6:14pm UTC )

I would like to turn to the actual content of the paper. I read the statements there along with statements in this thread in the following way:

"If there are several events at Alice (or Bob) in a given timeslot, and only the first of these events are used in the data analysis, the result could be a false violation of the CH inequality."

Is this a correct re-statement of what you have presented?

Donald Graft:

( January 8th, 2015 7:05pm UTC )

I don't need to re-state anything, but your understanding seems approximately correct, though I prefer my precise formulation. And I have shown that it is not just a possibility, but that this inadvertent post-selection actually occurs in the Christensen et al analysis. Again, I don't fault them for that. They performed a crucial experiment at a crucial time and offered the data to all theoreticians and analysts around the globe to chew on. They report on their own mastications, too. I view those as a valuable starting point.

Jan-Åke Larsson:

( January 8th, 2015 8:00pm UTC )

It is always good to re-state your claim in a simple manner. It makes it simpler to argue for or against it.

And I would never ridicule someone in science. Science is much too important for that.

Donald Graft:

( January 8th, 2015 8:18pm UTC )

Oops, I love simplicity too. Sorry for misunderstanding you, and thank you for giving us the precise statement you did. However, we're not talking about false violations; we're talking about an experimentally observed lack of violation (and nonquantum behavior in a positivity analysis, etc.) disconfirming quantum nonlocality. No rational person can believe in nonlocality anymore.

Regarding my approach to anti-science, I humbly follow Einstein's lead. Like the great man, you and I would never ridicule a real scientist, an honest, rational one, dedicated to the furtherance of human knowledge, cradling and sustaining the divine creative spark.

Christensen et al confronts us with a stark experimental refutation of quantum nonlocality. I have demonstrated this and offer a new, rational, theoretical interpretation that explains the results of this experiment and resolves the EPR paradox. What's not to like about that?

Jan-Åke Larsson:

( January 9th, 2015 9:33am UTC )

Donald, I have tried to find the supporting argument in your papers for the statement "If there are several events at Alice (or Bob) in a given timeslot, and only the first of these events are used in the data analysis, the result could be a false violation of the CH inequality."

I have found no such support; the present paper only shows that if you include the extra events, there is no violation. This is not enough. The violation may still be a true violation, and disappear because of added noise. To show that this cannot be the case, the above statement needs a supporting argument.

I would kindly ask you to provide such an argument. Please be advised that arguments of the kind "Obviously, ...", "Everybody knows that ...", or even "Jan-Ake is a sloppy anti-scientist" do not provide scientific support for that statement.

The best kind of support would be an example of a local hidden-variable model that gives a false violation when only the first event in each time-slot is used. Somewhat weaker support would be to point at the place where the proof of the CH inequality fails in this situation (this is because a different proof might still hold).

The alternative that the above statement is not true is a possibility, but let us leave that aside for the time being, and first concentrate on scientific support for the statement.

Donald Graft:

( January 9th, 2015 11:00am UTC )

All the support I need already exists in my paper and posts here. I leave it to the readers to judge that.

"the present paper only shows that if you include the extra events, there is no violation"

Thanks for the support! But they are not "extra" events. They are simply events. And it doesn't just show there is no violation; it decisively disconfirms the quantum joint prediction. This is why I resisted your restatement of my precise formulation. The analysis must include all the events on an equal basis. If you don't, it is (inadvertent) cheating. And the paper shows much more than that. It shows highly nonquantum behavior in the positivity analysis. It describes lots of useful things analysts should know about. It presents some new technical results too, such as the demonstration that singles-rate averaging deteriorates the CH metric. Finally, the supplementary material is not vaporware, and it provides some very cool things, like a general code for obtaining the QM prediction under different experimental conditions. You should be saying "thank you!", not trying to minimize my work.

You want me to show a local model? How hard is that and why bother? If you allow me unlimited post-selection, I can show that the Pope is Jewish. So what? There's no need for a model when we already have the real thing! I have shown beyond a doubt that there is no violation without (inadvertent) cheating. Why do you remain in denial? It would be interesting to speculate on local models in a followup paper, but at this time it is a distraction and not needed to demonstrate the failure of the quantum joint prediction. If the CH metric is not violated, there must be a local model of it. You know that. I'm pretty sure the legendary Arthur Fine proved that.

Christensen et al have reported on noise levels, and they are low enough that the quantum joint prediction would remain strong enough to verify easily, if it were true. I reported on this in the paper. I always try to be thorough in my work. Did you ask Christensen et al if they think their experiment is too noisy? Brad never confided any concerns about that to me.

Could there still be invisible fairies? This is the sad thing about quantum mysterians; they remain in denial even in the face of obvious facts and demonstrations perfectly satisfactory to rational people.

JA, does the Zeilinger analysis code use post-selection in any way, inadvertent or not?

Richard Gill:

( January 9th, 2015 12:01pm UTC )

I think we should go back to Bertlmann's socks and in particular Figure 7 and the surrounding text.

Here is the figure: http://www.math.leidenuniv.nl/~gill/Images/ch16fig7.png

Here is the text:

=================================

You might suspect that there is something specially peculiar about spin-1/2 particles. In fact there are many other ways of creating the troublesome correlations. So the following argument makes no reference to spin-1/2 particles, or any other particular particles.

Finally you might suspect that the very notion of particle, and particle orbit, freely used above in introducing the problem, has somehow led us astray. Indeed did not Einstein think that fields rather than particles are at the bottom of everything? So the following argument will not mention particles, nor indeed fields, nor any other particular picture of what goes on at the microscopic level. Nor will it involve any use of the words ‘quantum mechanical system’, which can have an unfortunate effect on the discussion. The difficulty is not created by any such picture or any such terminology. It is created by the predictions about the correlations in the visible outputs of certain conceivable experimental set-ups.

Consider the general experimental set-up of Fig. 7. To avoid inessential details it is represented just as a long box of unspecified equipment, with three inputs and three outputs. The outputs, above in the figure, can be three pieces of paper, each with either ‘yes’ or ‘no’ printed on it. The central input is just a ‘go’ signal which sets the experiment off at time t_1. Shortly after that the central output says ‘yes’ or ‘no’. We are only interested in the ‘yes’s, which confirm that everything has got off to a good start (e.g., there are no ‘particles’ going in the wrong directions, and so on). At time t_1 + T the other outputs appear, each with ‘yes’ or ‘no’ (depending for example on whether or not a signal has appeared on the ‘up’ side of a detecting screen behind a local Stern–Gerlach magnet). The apparatus then rests and recovers internally in preparation for a subsequent repetition of the experiment. But just before time t_1 + T, say at time t_1 + T – delta, signals a and b are injected at the two ends. (They might for example dictate that Stern–Gerlach magnets be rotated by angles a and b away from some standard position). We can arrange that c delta << L, where c is the velocity of light and L the length of the box; we would not then expect the signal at one end to have any influence on the output at the other, for lack of time, whatever hidden connections there might be between the two ends.

===================

This is for a CHSH-type analysis. One could also do a CGLMP analysis, with three possible outcomes at the two measurement stations. The point is that there is no law saying how the experimenter arranges that there is a binary (or ternary) outcome. They should of course have a definite, pre-written-down protocol.

In medical statistics this is called analysing data according to the "intention to treat" principle. We avoid bias by insisting on a binary outcome for each experimental unit. The experimental unit is not the pair of particles, but the time interval, the "time slot".

There simply is no supporting argument for the statement "If there are several events at Alice (or Bob) in a given timeslot, and only the first of these events are used in the data analysis, the result could be a false violation of the CH inequality."

The result could be no violation because of a sub-optimal protocol. One can think of a number of different rules to legitimately determine one binary outcome from each wing of the experiment for each time-slot. Different choices will lead to different statistical power to reject the null hypothesis, if it is indeed not true. Different choices have no influence on the significance level (the chance to incorrectly reject the null hypothesis when it is true).
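The "one binary outcome per experimental unit" protocol can be made concrete. A minimal sketch, with a hypothetical event encoding (each time slot holds the list of detector channels that fired, in time order; this is not the Christensen et al pipeline): the rule is declared in advance and defined for every slot, so nothing is post-selected away, and different rules change only the statistical power, not the significance level.

```python
def digitize(slots, rule):
    """Apply a pre-declared rule mapping each time slot's raw events
    to exactly one binary outcome. Every slot yields an outcome, so
    no slot or event is discarded after the fact."""
    return [rule(events) for events in slots]

# Two legitimate pre-declared rules (hypothetical):
any_event = lambda events: int(len(events) > 0)       # any detection -> 1
first_plus = lambda events: int(bool(events) and events[0] == "+")
                                                      # first event in '+' channel -> 1

raw = [[], ["+"], ["-", "+"], ["+", "+"]]
print(digitize(raw, any_event))   # [0, 1, 1, 1]
print(digitize(raw, first_plus))  # [0, 1, 0, 1]
```

Either rule is admissible under the intention-to-treat principle, because each is total and fixed before the data are seen; what is not admissible is choosing which events to keep after looking at them.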

Jan-Åke Larsson:

( January 9th, 2015 1:13pm UTC )

Donald: "The analysis must include all the events on an equal basis. If you don't it is (inadvertent) cheating." This is true in some situations, and I have given examples of this in earlier posts in this thread. What support do you have that it is true in this situation?

What you offer as support consists of the same statements (or equivalent claims) repeated several times in this thread and in that paper. I have yet to see a proper supporting argument. (I do agree with what Richard states above.)

For those who wonder (Donald is of course aware of this), the standard reference in this situation is what Richard points to: Bell, "Bertlmann's Socks and the Nature of Reality." J de Physique, Colloques C2, vol 42:41-62 (1981) doi:10.1051/jphyscol:1981202. It is also reprinted in Bell's book "Speakable and Unspeakable in Quantum Mechanics".

Heine Rasmussen :

( January 9th, 2015 1:40pm UTC )

Richard Gill wrote:

“The point is that there is no law saying how the experimenter arranges that there is a binary (or ternary) outcome. They should of course have a definite, pre-written down, protocol.”

Yes, I agree. If we start with an LHV model and assume such a local arrangement made the results violate the inequality, we could then incorporate the arrangement into the model, and the result would be a new LHV model with only binary outcomes that violated the inequality, which we know is impossible.

Jan-Åke Larsson:

( January 9th, 2015 1:58pm UTC )

An equivalent statement, proving that the CH inequality applies to Christensen's data even when using only the first event in each time-slot (at Alice and Bob separately) is in Larsson et al, "Bell-Inequality Violation with Entangled Photons, Free of the Coincidence-Time Loophole." Phys Rev A 90:032107 (2014) doi:10.1103/PhysRevA.90.032107. We thought it best to be explicit, see page 3, second column, last paragraph.

Besides, the coincidence identification in Giustina et al does not delete any singles. This would be clear if you try out the demo I provided a link for above. (Giustina et al did not use fixed time-windows, so do not check that checkbox in the demo.)

Donald Graft:

( January 9th, 2015 2:22pm UTC )

JA, you still haven't told us what would constitute a disconfirmation of the QM joint prediction! The last I heard from you was "quantum nonlocality is not falsifiable", followed by some hand-waving about "nonlocal realism". The former clause ensures that your funding is not threatened. The latter is a deflection tactic of the quantum mysterians; no rational realist would resort to such an internally inconsistent construct, nor does the experimental data demand it. To a realist, it's like saying I'm going to buy a "black white" car. We can have nice philosophical debates about it, but it's a red herring as far as the science is concerned. Lorentz invariance is real; quantum nonlocality is not.

I'm in the mood for a picturesque metaphor, so I'll say it is wonderful even to just puncture the hull of the quantum mysterian ship. Frantic bailing is going on, but the water pours in relentlessly, and we will see the eventual sinking with all hands on board (at least those without enough sense to jump off).

Richard Gill:

( January 9th, 2015 4:26pm UTC )

What you call post-selection, Don, isn't what we call post-selection. The experimental unit is not the pair of particles, but the time slot. In principle, it is good that you already know "Bertlmann's socks" and don't need to listen to me repeat what is already explained there. But it seems to me you haven't understood what was explained there.

You are not the only one. Statistical thinking is difficult for physicists, and of course, vice versa. We both need to understand each other better.

Donald Graft:

( January 9th, 2015 4:38pm UTC )

My definition and use of "post-selection" is perfectly clear in my paper and rational readers without an agenda will have no difficulty understanding it. And I don't care if anybody thinks I have trouble understanding anything. It is not scientific discourse.

Jan-Åke Larsson:

( January 9th, 2015 4:41pm UTC )

It would be easier to respond to your comments if you did not keep editing them.

Even postselection has limitations. It all depends on the situation and the amount of postselection.

The link to the demo is higher up in the thread. The coincidence identification procedure is present; Giustina uses Matlab, so the actual code is different, but it uses the same coincidence identification: if two remote events are close enough in time, they are deemed to be coincident. A single event cannot occur in two coincidences. Nothing more.
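For concreteness, the rule just described (nearest remote events within a window are deemed coincident, and no event enters two coincidences) can be sketched as a greedy pairing over sorted timestamps. This is only an illustration in Python, not the actual Matlab code:

```python
def match_coincidences(t_alice, t_bob, window):
    """Pair events from the two wings whose timestamps differ by at
    most `window`; each event enters at most one coincidence."""
    pairs = []
    i, j = 0, 0
    t_alice, t_bob = sorted(t_alice), sorted(t_bob)
    while i < len(t_alice) and j < len(t_bob):
        dt = t_bob[j] - t_alice[i]
        if abs(dt) <= window:
            pairs.append((t_alice[i], t_bob[j]))  # coincidence found
            i += 1
            j += 1  # consume both events: no event is reused
        elif dt > 0:
            i += 1  # Alice's event is too early; advance her pointer
        else:
            j += 1  # Bob's event is too early; advance his pointer
    return pairs
```

For example, `match_coincidences([0, 10, 20], [1, 25], 2)` pairs only the events at 0 and 1; the unmatched singles remain in the record, they are not deleted.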

Concerning postselection, I will be complete, so please excuse another lecture.

1. Single events. The Giustina paper has less than 100% detector efficiency, which can be thought of as postselection; the local hidden variable could influence detection/non-detection. This is taken care of by use of the CH inequality.

2. Coincidences. The above procedure as used in the Giustina paper is vulnerable to the coincidence loophole, which can be thought of as postselection; the local hidden variable could influence local time-delays. This is taken care of in our 2014 PRA paper, referenced earlier.

3. The Giustina paper uses no further postselection. For example, no singles are deleted.

For completeness, there are two methods for taking care of the coincidence loophole:

2a. Use movable time-windows, where the size of one of the four windows is the sum of the other three (for details see the paper). This can also be thought of as postselection; the local hidden variable could influence local time-delays. We prove in the paper that the CH inequality still holds for the postselected data. There is no further postselection in this case.

2b. Use fixed time-slots. This can also be thought of as postselection; the local hidden variable could influence local time-delays. We prove in the paper that the CH inequality still holds for the postselected data. After this, the same coarse-graining procedure as in Christensen et al is used. We prove in the paper that the CH inequality still holds for the postselected coarse-grained data (using the same argument as in Bertlmann's socks).

The violation is larger in 2a (movable time-windows, no further postselection) than in 2b (fixed time-slots, coarse-graining). If you intend to use fixed time-slots for the Giustina data, please be aware that this is a suboptimal analysis method in that situation. If you do not coarse-grain, it will be even worse, and unnecessarily so. If you do not get a violation it is your own analysis method that is at fault.
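As an illustration of option 2b, each wing can be reduced, purely locally, to at most one outcome per fixed time-slot by keeping the first event in each slot. A minimal Python sketch (the function and its interface are my own, not from any of the papers):

```python
def slot_outcomes(timestamps, outcomes, slot_width):
    """Coarse-grain one wing's event list to one outcome per fixed
    time-slot: the first event in a slot represents that slot, and
    slots with no event count as 'no detection'.  Only local data
    (this wing's timestamps and outcomes) is used."""
    per_slot = {}
    for t, r in sorted(zip(timestamps, outcomes)):
        slot = int(t // slot_width)
        per_slot.setdefault(slot, r)  # first event in the slot wins
    return per_slot
```

With `slot_outcomes([0.5, 0.7, 2.3], ['+', '-', '+'], 1.0)` the later event at 0.7 is discarded by a purely local criterion; comparing slots between the two wings happens only afterwards.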

I do not know who Ms Titova is.

It is possible to falsify QM; you could, for example, violate the Tsirelson bound. But that has nothing to do with the locality/nonlocality question. I have no further suggestions. Note that there exists a nonlocal realist theory: Bohmian mechanics. Not that I am especially fond of that.

Donald Graft:

( January 9th, 2015 4:49pm UTC )

I like to correct typos, etc. As long as the site offers me an Edit button I'd like to use it. You can use it too! It's actually extremely helpful for the discourse in my opinion, as it allows us to refine our arguments and progress toward a definitive rendering thereof, a sort of crystallization of the divine creative spark.

I am a bulldog against what I perceive as anti-science. I will use any tactics to expose the quantum mysterians. I regret that it offends you.

I too don't know anyone fond of Bohmian nonlocality. But it's not needed to explain the experiment.

"If you do not get a violation it is your own analysis method that is at fault."

Of course! It couldn't possibly be that nonlocality doesn't exist.

Heine Rasmussen :

( January 9th, 2015 4:44pm UTC )

Donald Graft wrote:

"Anything is possible with post-selection in the data analysis. This is my major point, but you still pretend not to understand."

No, this is where you are mistaken. The kind of post-selection employed by Christensen et al. is in fact harmless, since it follows a strictly local criterion.

An example of potentially harmful post-selection would be removing all the singles from a CHSH experiment, since this opens up a loophole so that the inequality can be violated by a local realistic model. But the post-selection is here nonlocal, since removing the singles requires comparing results from both wings simultaneously.

Donald Graft:

( January 9th, 2015 4:52pm UTC )

"But the post-selection is here nonlocal, since removing the singles requires comparing results from both wings simultaneously."

I'm not fully following you, but yes, that is what is implicitly done in the original Christensen data analysis. The (inadvertent) post-selection occurs as part of the coincidence calculation, leading to an artifactual violation. It would be trivial to make a local realist simulation that does the same thing.

Heine Rasmussen :

( January 9th, 2015 5:50pm UTC )

Selecting the first of multiple events in a given time slot as the representative of that time slot is a strictly local selection, i.e. it can be done without any information about what is happening in the other wing of the experiment. With some natural assumptions, this kind of selection is harmless: it cannot make a local realistic model "artificially" violate the inequality.

The only way a relevant Bell inequality can be violated with a local realist model by post-selection is for the post-selection procedure to sneak in non-locality through the back door, so to speak, i.e., the post-selection procedure must itself take advantage of nonlocal data treatment, directly or indirectly. This seems not to be the case in Christensen et al., since the selection procedure depends only on data from the local wing.

Richard Gill:

( January 9th, 2015 5:57pm UTC )

If it is trivial, Donald, to make a local realist simulation that does the same thing, please go ahead and do so. I don't think it is easy. I think it is impossible.

Your move.

Jan-Åke Larsson:

( January 9th, 2015 6:00pm UTC )

It is impossible. We have a formal proof.

You can't say "Your move", Richard. It is checkmate.

Richard Gill:

( January 10th, 2015 8:50am UTC )

I know it was Donald's move, Jan-Ake, and that he doesn't have one. I didn't want to appear arrogant by saying "checkmate". (I unfortunately appear arrogant anyway and it pisses him off).

Jan-Åke Larsson:

( January 10th, 2015 11:56am UTC )

My patience had run out.

Unregistered Submission:

( January 14th, 2015 3:20pm UTC )

Jan-Åke Larsson: "You can't say "Your move", Richard. It is checkmate."

Chess doesn't look like a good metaphor for these discussions. They are more like go: the game is not over before the players agree that it is over.

Richard Gill:

( February 4th, 2015 10:44am UTC )

The game will be truly over when a successful loophole free Bell-CHSH type experiment is performed (and preferably reproduced a few times in different laboratories). According to those who "know" that local realism is true (Santos, Graft, ...), that experiment will never ever be done.

It has taken 50 years to bring us to the threshold. I would be interested to hear from present-day top quantum experimentalists how many years they think it will be before we cross the threshold. I wouldn't call such people "quantum mysterians". These are experimentalists, engineers, building real stuff, pushing the envelope further all the time. Very practical people.

I do know that the PhD students whose job it is to do those experiments have already been recruited, in several labs around the world.

I try to keep an open mind, and I concentrate on studying (and trying to contribute to) the logic and the statistics. There's no need to pick sides. In fact I think it would be foolish to pick either side right now; it certainly leads to tunnel vision.

Heine Rasmussen :

( January 9th, 2015 5:49pm UTC )

Selecting the first of multiple events in a given time slot as the representative of that time slot is a strictly local selection, i.e. it can be done without any information about what is happening in the other wing of the experiment. With some natural assumptions, this kind of selection is harmless: it cannot make a local realistic model "artificially" violate the inequality.

The only way a relevant Bell inequality can be violated with a local realist model by post-selection is for the post-selection procedure to sneak in non-locality through the back door, so to speak, i.e., the post-selection procedure must itself take advantage of nonlocal data treatment, directly or indirectly. This seems not to be the case in Christensen et al., since the selection procedure depends only on data from the local wing.

Donald Graft:

( January 9th, 2015 8:37pm UTC )

Again, Christensen et al do in fact sneak nonlocality in through the back door. Don't fault them for it; they were unaware of it. Thank you for your feedback.

Heine Rasmussen :

( January 9th, 2015 9:02pm UTC )

Yes, I read your replies. This was just accidentally double posted, and now it is too late to delete it.

Donald Graft:

( January 9th, 2015 9:11pm UTC )

IID post-selection might be thought of as just like IID detection losses. But that is not what the Christensen et al analysis does; the post-selection is not IID. I was shocked to discover that, but we have to play the cards we are dealt. And again, I don't fault Christensen et al.

Heine Rasmussen :

( January 9th, 2015 9:50pm UTC )

Your apology accepted. Good that you found my feedback useful. You say that the Christensen et al. post-selection procedure is not local; can you elaborate on that, since this is not clear from your paper? It is bedtime in my time zone, so I am afraid I will not be able to reply until tomorrow.

Donald Graft:

( January 9th, 2015 10:38pm UTC )

The discarding of events is a function of the results from both sides. Please look at the two .m files and the resulting event selections to see this. It actually happens; what else could explain the resulting selections? Discarding doesn't occur independently at the two sides during detection, it occurs during the correlation process.

Richard Gill:

( January 10th, 2015 6:04am UTC )

Discarding "events" within a time-slot, on the basis of local criteria, is not evil. Discarding time-slots is evil. Any procedure which computes a single binary outcome from each wing of the experiment and for each time-slot, using data only from that wing of the experiment and that time-slot, is legitimate. One can imagine it "inside" the measurement device. The measurement device outputs one measurement outcome per time-slot. That is the picture we should have in mind. Forget about particles, forget about QM. We have a macroscopic experimental arrangement (buttons, output screens, clocks). It is a huge black box. We imagine that what goes on in the black box has an LR description. We deduce consequences for the observed data. We find out whether the data agree, yes or no.

Donald's QM physics has an entangled state decohering to a separable state. Once this has happened, the QM predictions of what gets observed can be replicated by an LR model. So the source might just as well have generated decohered particle pairs, as far as experimental predictions are concerned. So Donald's physical predictions can be replicated by LR. So if his model is correct, CH can't be violated. But it was violated. So Donald's model has been experimentally disproved.

Heine Rasmussen :

( January 10th, 2015 7:21pm UTC )

Donald wrote:

"Discarding doesn't occur independently at the two sides during detection, it occurs *during the correlation process*, which is necessarily nonlocal. I will try to create a simple scenario to make it really clear, while you sleep."

The post-selection can very well be performed long after the detection (even after the physical part of the experiment is over) and still be harmless, as long as the selection procedure for wing A does not use data generated at wing B, and vice versa. Looking forward to seeing your simple scenario demonstrating that this is not the case in Christensen et al., since then you might have something publishable.

Donald Graft:

( January 14th, 2015 4:20pm UTC )

"Looking forward to seeing your simple scenario demonstrating that this is not the case in Christensen et al., since then you might have something publishable."

Please refer to the Christensen et al data and the two .m files I published. The effect cannot be denied. I am preparing a further paper with a local model and further analysis on this effect.

Donald Graft:

( January 9th, 2015 7:07pm UTC )

If you grant me license to post-select (doing no more than what Christensen et al do), it is trivial to make a local model. It is good to have the mysterians on record claiming it is impossible. As a sketch, in case someone wants to work on it before I am able to do so: Start with a local model that saturates the CH metric. Make a source that generates a Poissonian distribution of source events. Do windowing that includes only a maximum of one event per side. Voila! A mathematician could show it with straightforward math. Engineers may prefer simulations to show it. You can easily see the mechanism operating by comparing the two MATLAB .m files I have cited several times. It is child's play. The quantum mysterians pretend not to understand, and petulantly demand demonstration of the obvious. Fear not, Planck showed that one can recover. First the mysterians pretended not to understand joint versus marginal probabilities and sampling, copulas, etc. Now the desperation is apparent, as they deny simple demonstrations of fact.

The quantum mysterians have claimed proof, based only on mathematics ("a formal proof"). You see, their physics is unfalsifiable and is deductively true. Nonlocality cannot possibly be false. OK, I get it, and readers around the world get it too. It is one for the ages. Yes, Bell's work is not challenged; that is the formal mathematical part that our interlocutors appeal to. What is challenged is that QM predicts a violation and that experiments show it. That is physics, not mathematics. If all of our knowledge could be discovered through mathematics, there would be no reason to perform experiments. The monumental experimental physicists of the past, people like Galileo, Boyle, Newton, Coulomb, etc., would be deeply saddened to learn that their work was superfluous.

Jan-Åke Larsson:

( January 9th, 2015 9:12pm UTC )

Donald, you could have made better use of this opportunity. You have been discussing this with experts in the field. These attacks are unneeded, disrespectful, and counter-productive. You are not the king, you are the player that lost.

To other contributors I would suggest we leave this thread as is, there will be no further progress here, only re-iterated unsupported claims by Graft. And some attacks on scientists here and there. At least for me, the issue is closed.

There is the fact that Graft has put his paper on arXiv. Richard, I suggest that we write a response that clarifies the situation. Would you agree to that?

Donald Graft:

( January 9th, 2015 10:02pm UTC )

Going on record on arXiv saying it is mathematically impossible may be premature. Didn't you run my model sketch in your head? Engineers and rational physicists are good at such things, thanks to their closeness to raw nature.

My claims are not unsupported. I refer again to two simple MATLAB files. I'm sure that together with the original data and my event histogram they are being carefully inspected around the globe, whether or not anyone considers the discussion closed.

"You have been discussing this with experts in the field."

While I typically do not indulge in your brand of elitism, I too am an expert in the field.

Richard Gill:

( January 10th, 2015 5:00am UTC )

Yes Jan-Åke I agree we should write a response.

Unregistered Submission:

( January 10th, 2015 12:10am UTC )

I sense a steamroll attempt here, but it is wishful thinking to suggest that the issue is resolved. We have an excellent experiment, with data available. We have the CH inequality and we have two methods to analyze the data.

Method 1, from the original authors, involves ignoring a large portion of the raw data (long after the fact) and shows a violation of the CH inequality.

Method 2, from Don, involves using all the raw data and shows no violation, and no agreement with QM.

The mysterians appear to be arguing that Method 1 is "also" correct, when instead they should be explaining why Method 2 is wrong. By so doing they've admitted that it is fine to introduce "non-locality" well after the experiment by special data analysis. If nature is operating mystically as the mysterians believe, how come nature appears to "need" the special data analysis of ignoring data? If nature is operating mystically, and Method 2 is "also" correct, then both methods should show a violation.

My question to the mysterians therefore is the following:

As far as the CH inequality is concerned, what precisely, in your opinion, is wrong with Method 2 that caused it to disagree with QM and Method 1?

Donald Graft:

( January 10th, 2015 2:23am UTC )

Great questions. Thank you. I hope you will get answers.

"and no agreement with QM"

Your point is clear enough, but I like to be a bit more careful here, otherwise you get accused of saying QM is wrong and dismissed as a crackpot. There is no agreement with the quantum *joint* prediction, and it is irrational to think that a joint prediction can be recovered with marginal measurements. The QM prediction using marginals works fine, and accounts for the experimental results confirming locality. There may be some circumstances or experiments where the joint prediction can be sampled and recovered, but EPRB is not one of them, as I have clearly shown.

Just always say "the QM joint prediction" instead of "QM".

Richard Gill:

( January 10th, 2015 5:38am UTC )

What is wrong with Method 2 is that it creates more than one pair of measurement outcomes per time-slot. The experimental unit is "time-slot", not "particle-pair". The point of the experiment is to play devil's advocate for LR and avoid use of words like "particle", "entangled", "quantum". See Bertlmann. We think LR, and make a prediction about the result of the experiment.

Of course, the experimenter wants to show that nature contradicts the LR point of view.

Why Method 2 gets different results: probably because it is less effective in combining pairs of events, one in each wing of the experiment, which actually (thinking QM) belong together.

It has already been explained why Method 1 is fine. If LR is true, Method 1 can't lead to violation of CH inequality. Christensen et al used method 1 and observed violation of CH. Hence LR is in trouble. Hence local realists are up in arms.

Reference: Bell (Bertlmann's socks is the best), Jan-Ake Larsson's survey paper, and no doubt hundreds of others.

I don't see why the logical reasoning I have just tried to explain implies that the person making it has to be called a "quantum-mysterian". That seems to me an ad hominem remark, illustrating the poverty of the arguments of whoever made it. Apparently they don't like the logical conclusion of the argument, can't fault the premises or the logic, and hence start shouting and swearing. That is not the scientific method. It is not adult. This is behaving like a spoilt child. (It reminds me of #JeSuisCharlie: you don't like their scientific paper, so you shoot the author. Next thing we know, the editorial office of PRL is going to need police protection.)

Jan-Åke Larsson:

( January 10th, 2015 9:48am UTC )

No steamrolling, just disappointment. I am not a quantum mysterian. I meet people in the business that are annoyed with me for repeating that local realism has not yet been properly violated in experiment. I am arguing against these people when they claim local realism has already been rejected. I am on Donald's side here. It is a pity he does not realize that.

I set out to help Donald have his paper published. This involves making sure that the reasoning holds together, otherwise the chance of publication is very low. Like Richard, I thought that there was something interesting in the paper. However, I found a place where the reasoning was weak. This happens in science, so not so strange. The standard course of action is to point this out to the author, allowing him to improve his argument.

I did exactly this, in this thread. Again: Donald has shown that the violation disappears if all clicks correspond to events (it is actually unclear to me if he also includes coincidences for all events or just singles for some of them, but perhaps beside the point). The weak point is that this does not imply that the violation is false if instead time-slots correspond to events.

That the violation is true if time-slots correspond to events (that the CH inequality holds, that Method 1 is correct) was then explained at least four times in this thread, with references where this is proven in print (Bertlmann's socks; Larsson et al PRA 90:032107 (2014) p3, second column, last paragraph). There are more references, but two would suffice.

This shows that the weakness is large. Again a standard scientific argument.

Also, I thought back to the coincidence counting used in Giustina et al; Don's argument does not even apply there. All clicks correspond to events.

What is the result? Reasonable scientific arguments are met by nothing more than repeated claims and abusive remarks. If you have a scientific argument, please forward that. And stop with the abusive remarks.

We will be following scientific practice, collecting our thoughts on this in a paper.

Donald Graft:

( January 10th, 2015 2:34pm UTC )

Method 1 throws away data. Method 2 does not. That is the reality. All the rest is smoke. A trick is needed to show violation. I have revealed it. Please do go on record saying that in a proper data analysis, you must discard data.

Unregistered Submission:

( January 10th, 2015 3:26pm UTC )

Richard Gill: Is the quantum prediction about particle pairs or "experiment units"?

Jan-Åke Larsson:

( January 10th, 2015 3:36pm UTC )

Already answered earlier in the thread. This is not productive.

Unregistered Submission:

( January 10th, 2015 3:38pm UTC )

"Local-realist" and "quantum mysterian" are descriptive, non-derogatory terms. If you believe QM is nonlocal or non-real then you are in the mysterian camp. If you believe local realism is true then you are in the local-realist camp. What do you believe? Funny that you would find an accurate description of your beliefs derogatory.

Unregistered Submission:

( January 14th, 2015 3:35pm UTC )

""Local-realist" and "quantum mysterian" are descriptive, non-derogatory terms."

They describe people, not physical theories. Therefore, they are not useful in this discussion or other discussions here.

Richard Gill:

( January 10th, 2015 11:36am UTC )

Another beautiful local realistic model, also being discussed on PubPeer right now: Justin Lee's random precession model.

Take a look at:

http://vixra.org/abs/1408.0063

http://rpubs.com/gill1109/JustinLee

https://pubpeer.com/publications/2C868EDB0FB35A23005FF34F50ED90

Jan-Åke Larsson:

( January 10th, 2015 12:07pm UTC )

I will. Perhaps I will not make the same effort there as here, but I would encourage others to do so. Proper scientific discussion on this subject is very important. It is good that you make the effort, Richard.

Unregistered Submission:

( January 10th, 2015 6:01pm UTC )

Question for Richard and Jan:

Is the QM prediction for entangled photons or for "experimental units" as Richard calls it?

Heine Rasmussen :

( January 10th, 2015 9:10pm UTC )

Actually, these experiments have nothing to do with the QM prediction per se. They are not designed to confirm QM. They simply try to answer the question "Is there something in the physical universe that cannot be explained by a local realistic model?" Which means that you are allowed to try anything: Ions, photons, electrons, experimental units, brownies, frogs... you name it. Imagination is the only limit.

But of course, with what we know after 90 years with QM, the obvious first place to look is in systems that exhibit quantum behaviour.

Donald Graft:

( January 10th, 2015 11:05pm UTC )

I would only caution people to realize that "quantum behavior" does not necessarily include nonlocality. As I have argued, the elimination of the irrational idea of nonlocality leaves QM intact. We need only to be careful in our application of QM and probability theory.

Quite right. Exploding balls, frogs, it doesn't matter. But you have to use all the experimental data, and not throw out the data that is inconvenient to your hypothesis.

Unregistered Submission:

( January 11th, 2015 12:49am UTC )

Heine Rasmussen wrote: "Actually, these experiments have nothing to do with the QM prediction per se."

Christensen article:

1) The title says : "Detection-Loophole-Free Test of **Quantum** Nonlocality, and Applications", maybe they should have consulted you in coming up with an appropriate title.

2) The second paragraph on page 2 of the article says: "The inequality can be violated using maximally entangled states (e.g., (|HH> + |VV>)/√2, where H and V represent polarization of the photons..."; it does not say "polarization of *experimental units*". It proceeds to say "For the background levels in our experiment, a value of r = 0.26 allows us to maximally violate the CH inequality." Again, this refers to the quantum description of an experiment involving photons, not "units".

3) Abstract says: "This violation is the first reported experiment with **photons** to close the detection loophole"

It appears the argument from mysterians that it is not about photons but "experimental units" is just grasping at straws.

In any case, how is it possible to say which photon or "experimental unit" or frog on one arm corresponds to which other photon or "frog" on the other side, without transferring data from one side to the other? And if data from the other end is needed to calculate the joint probability, why does any reasonable person continue to think that failure of a separable joint-probability assumption has anything to do with non-locality or any other mystery, rather than the simple fact that post-processing is involved?

Richard Gill:

( January 11th, 2015 5:24am UTC )

The quantum physicist will of course use quantum physics to make predictions for "the experimental unit", whatever that is taken to be.

To test a particular hypothesis one is not obliged to use *all* the experimental data. One should of course try to use the experimental data in an optimal way. In a situation where one wants to compare two hypotheses, optimality in a narrow sense means "with maximal statistical power" but one also wants the statistical analysis to be reasonably simple. It also has to be correct and it has to be appropriate (fit for purpose).

Donald Graft:

( January 11th, 2015 2:12pm UTC )

"To test a particular hypothesis one is not obliged to use *all* the experimental data. One should of course try to use the experimental data in an optimal way."

Translation for rational physicists: "You may discard data until your hypothesis is proven."

"fit for purpose"

Translation for rational physicists: "Show nonlocality at all costs (because mathematics shows that it cannot possibly be falsified)".

The Christensen et al source emits photon pairs, not "experimental units", time windows, or any other constructs heuristically created to facilitate a data analysis. If the source emits photons, then we talk about the statistics of observed photon detections. If the source emits balls or frogs, then we talk about the statistics of detected balls or frogs.

Unregistered Submission:

( January 11th, 2015 4:57pm UTC )

Richard Gill: "To test a particular hypothesis one is not obliged to use *all* the experimental data."

Purrleeese! If using *all* the experimental data falsifies the hypothesis, but ignoring some of the data allows the hypothesis to be proven, would you consider that valid science? It is pretty simple:

ALL DATA: no violation

SUBSET OF DATA: violation

Therefore the violation is *introduced* by the data selection procedure, not the source, or the photons, or entanglement, or quantum mysteries.

Richard Gill:

( January 11th, 2015 6:00pm UTC )

No Don, those "experimental units", time windows, and any other constructs, are not heuristically created to facilitate a data analysis. They are deliberately put in place in order that an appropriate data analysis could actually prove something about the question we are interested in.

I have noticed that this is a subtle point which is hard for many practically minded physicists to grasp.

But it seems we now agree what is the fundamental issue: we have a clear point on which we disagree, so we can hope to make progress!

Richard Gill:

( January 11th, 2015 6:55pm UTC )

Sorry, Unregistered Submitter,

(1) there is no absolute proof, there is only statistical evidence.

(2) Donald's analysis is wrong because the experimental unit on the basis of which statistics can be done is time slot, not pair of particles.

(3) the analysis in which only the first events in each time-slot are paired probably gives stronger correlations because the first events are closer together in time and more often do indeed belong to an originally emitted pair, while pairing events anywhere in a long time-slot also pairs events which belong to separate emissions, so we observe lower correlations. So Donald's analysis does not contradict the other analysis at all. Together they give a coherent picture.

(4) Donald already told us that he *knows* that quantum entanglement decoheres, he *knows* a priori that quantum non-locality does not exist. He is not prepared to entertain any alternative hypotheses at all. He is the one who picks and chooses the analysis which suits him best.

(5) These experiments do not (statistically) disprove local realism because they are both subject to the locality loophole. And one of them doesn't even have rapidly changing measurement settings so it is subject to the memory loophole.
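Gill's point (3) can be illustrated with a toy model (invented here purely for illustration, not a model of the actual experiment): if each time-slot contains several independent emissions, pairing the first click on each side tends to match clicks from the same emission, whereas pairing every click with every click mixes emissions and dilutes the observed correlation.

```python
import random

random.seed(1)

def agreement(pairing, n_slots=20000, emissions_per_slot=3):
    """Fraction of paired outcomes that agree, in a toy local model where
    clicks from the same emission always agree and clicks from different
    emissions are independent coin flips."""
    agree = total = 0
    for _ in range(n_slots):
        values = [random.choice((0, 1)) for _ in range(emissions_per_slot)]
        side_a = list(values)        # Alice's clicks, in arrival order
        side_b = list(values)        # Bob's clicks, same hidden values
        if pairing == "first":
            pairs = [(side_a[0], side_b[0])]   # Method-1-style: one pair/slot
        else:
            # Method-2-style: pair every A click with every B click
            pairs = [(x, y) for x in side_a for y in side_b]
        for x, y in pairs:
            agree += (x == y)
            total += 1
    return agree / total

print(agreement("first"))   # 1.0: first clicks share an emission
print(agreement("all"))     # about 2/3: cross-emission pairs wash it out
```

This does not by itself settle which analysis is appropriate for the CH test; it only shows that the two pairing conventions can measure genuinely different quantities on the same raw data, which is consistent with Gill's "coherent picture" remark.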

Unregistered Submission:

( January 11th, 2015 10:51pm UTC )

Sorry Richard,

(1) It is very easy to lie with statistics.

(2) Donald's analysis is correct. I haven't seen any critique other than hand-waving. I'll wait for your paper with Larsson, with a more substantive criticism because none has been presented here so far.

(3) The paper and both you and Larsson claim that the experiment is immune to the coincidence-time loophole, but here you admit you need to assume the "first events" are closer in time (aka the coincidence-time loophole). Why do you need any time-based assumptions at all? How do you know that two particles from the same pair must arrive at the same time even when analyzed at different settings? How do you know that a violation does not mean your assumption of same-time arrival is false? An entirely non-mystical explanation. Not convincing at all. Surely you must be kidding that Don's analysis does not contradict the analysis of the authors. Really, one shows a violation, the other shows no violation. But maybe quantum mysterians believe in cats being simultaneously dead and alive, in which case it may be second nature for them to claim that the data show both violation and non-violation simultaneously.

(4) That is not accurate, I haven't seen where Donald said anything of the sort. Rather, you haven't given a valid reason why only the first counts should be used. First you said it wasn't about photons, but "experimental units". Then you say it is really about making sure the right photons are matched with each other. But you continue to claim that the experiment is immune to the coincidence loophole. The authors did not attempt to match any of the other photons, they were simply thrown away. Sorry, your arguments really do not make any sense.

(5) As Don has already responded: why would an experiment that does not show a violation, with all the opportunity for signaling, be expected to show a violation when the stations are moved far away to exclude signaling? You can't argue that the reason the experiment does not show a violation is because the stations were too close to each other.

I would really like to see in your response paper (if one is ever written), a detailed discussion of why including all the data does not show a violation, but using just the first event in each cell, a violation is obtained, in an experiment that is supposed to be immune to both the detection efficiency loophole and the coincidence time loophole.

*My red lights go blink-blink when it is shown that an experiment claiming to be immune to data-selection, shows different results when all the data is used.*

Richard Gill:

( January 12th, 2015 12:44pm UTC )

Unregistered submitter: I do not "assume" that the first events are closer in time. I simply offer that as a possible explanation for the higher correlations when we take those events to define the outcome of each time slot. Which we are allowed to do. Any local processing of the data recorded in each time slot is legitimate. It is not post-selection. The term post-selection is used, in this context, to refer to rejection of data from a complete time-slot because there was no event on one or the other side of the experiment.

At the risk of being accused of arrogant lecturing, I suggest that Unregistered Submitter and Donald Graft study some good survey papers on the topic we are discussing here (loopholes in Bell-type experiments). They are both reporting very common and quite instinctive (practical physicists') reactions, rather than pointing out errors in logic or mathematics. Which they can't do, because there are none. They are not really familiar with the logic of Bell's theorem.

Loopholes in Bell Inequality Tests of Local Realism

Jan-Åke Larsson

http://arxiv.org/abs/1407.0363

Statistics, Causality and Bell's Theorem

Richard D. Gill

http://arxiv.org/abs/1207.5103

and please carefully re-read "Bertlmann's socks" especially the text surrounding Figure 7.

Richard Gill:

( January 12th, 2015 3:05pm UTC )

"You're right on one thing, Richard, physicists cannot make sense of what you say."

I have gotten used to that a long time ago. But one keeps on trying. And some do understand! That's good enough.

A lot of physicists did not (and still do not) make sense of what Bell said either. I'm only saying what he said, but in different words, and connecting them to standard ideas in statistics: the use of randomisation, blinding, and the "intention to treat" principle, in clinical trials.

"Evidence based medicine" has tools for evidence based physics. But of course physicists tend to think like Rutherford: "if you need statistics, you did the wrong experiment".

Unregistered Submission:

( January 12th, 2015 7:33pm UTC )

Richard Gill: You say you do not make that assumption, but you just pointed to a paper that does. In fact, that assumption is the basis of the claim that the coincidence loophole can be avoided by picking only the event closest to the beginning of the Pockels cell. It seems to me Christensen et al have been misled by Larsson. It is easy to see why Larsson is wrong: he continues to assume that both particles should be detected at the same time (on what basis?). See the first paragraph of page 30 of the first article you cited.

Unfortunately both of you have severely misunderstood the problem with Coincidence. You should have listened more carefully to what Karl Hess was trying to tell you. By assuming that, you have restricted your "theorems" to models in which no time delay exists between the arrival times of the particles. That is, violation could simply mean the "same time of arrival" assumption is wrong as Hans De Raedt and others have shown convincingly.

So again, let me ask both you Richard Gill, and Jan-Ake Larsson. On what basis do you assume both particles should arrive at the same time? And why shouldn't you interpret a violation as a failure of the "same arrival time" assumption?

Thank you for drawing my attention to the Larsson paper. The whole of Section 3.6 now appears suspect, and probably reveals where the Christensen group got the idea to measure just the first particle pair in each cell.
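The setting-dependence the commenter worries about can at least be exhibited in a toy local model (the delay rule below is invented purely for illustration): if each side's detection delay is a local function of its own setting and the shared hidden variable, a fixed coincidence window keeps a subensemble whose size depends on the *joint* settings, even though every delay was computed locally.

```python
import math
import random

random.seed(2)

def kept_fraction(a_set, b_set, window=0.5, n=50000):
    """Fraction of emissions surviving a coincidence-window cut, in a toy
    local model where each side's delay depends only on its own setting
    and the shared hidden variable."""
    kept = 0
    for _ in range(n):
        lam = random.uniform(0.0, 2.0 * math.pi)   # shared hidden variable
        delay_a = abs(math.sin(lam - a_set))       # computed locally at Alice
        delay_b = abs(math.sin(lam - b_set))       # computed locally at Bob
        if abs(delay_a - delay_b) < window:        # coincidence criterion
            kept += 1
    return kept / n

print(kept_fraction(0.0, 0.0))          # 1.0: equal settings, equal delays
print(kept_fraction(0.0, math.pi / 2))  # noticeably less than 1
```

This sketch only shows that a coincidence cut can select a setting-dependent subensemble in a local model; whether that mechanism can account for the reported violation is exactly what the two sides of this thread dispute.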

Unregistered Submission:

( January 14th, 2015 3:50pm UTC )

Donald Graft: ( January 10th, 2015 11:05pm UTC ): "But you have to use all the experimental data, and not throw out the data that is inconvenient to your hypothesis. I imagine that you would agree with that."

No reason to agree with that. In science there is no "have to". An experiment is what it is and an analysis is what it is (although another analysis of the same experiment is possible and sometimes desirable). Even if the analysis is not (logically or statistically) sound, it produces some result. If a theory predicts a different result for that experiment and that analysis, then either the theory is refuted or the description of the experiment and analysis (as used for the prediction) is wrong.

Donald Graft:

( January 11th, 2015 7:11pm UTC )

re 2. We will see what the referees think. I'm sorry for being so confused as to suggest that physicists like to shoot photons and then try to detect them. It is all just experimental units, statistics, and data discarding. Entangled photon pairs don't even enter the picture at all.

re 3. I can't make any sense of this. Sorry.

re 4. I never said any such thing. Gill is misrepresenting me. Locality is an empirical matter to be resolved by experiment. That is surely the point of analyzing an experiment, as my paper does! Gill is the only one appealing to a prioris. This is classic psychological projection. I ask only that data not be discarded (http://en.wikipedia.org/wiki/Cherry_picking_%28fallacy%29 and http://en.wikipedia.org/wiki/Selection_bias). Gill should write a book: "Data discarding for fun and profit". As I have repeated several times, I mentioned decoherence only as an analogy, to show that we do not always blindly expect the quantum joint prediction to apply. How many times do I have to say these things? Gill went silent after I showed the irrationality of applying the joint prediction to EPRB experiments (which deliver only marginal samples) and schooled him on copulas. One might expect more from a trained statistician. Now he is expostulating on "experimental units" and other distractions when physicists know perfectly well that we are talking about photons.

re 5. The locality and memory loopholes come into consideration only if the experiment shows a violation. I have said it before but Gill apparently prefers to lecture rather than to listen. The only loophole here is the post-selection loophole, invoked by the mysterians, to justify irrationality. The whole game changes when we turn the tables and assert that QM does not predict a violation and that experiments agree. The old mysterian ploys ("found another loophole?") no longer function. Gill is slow to catch on.

Richard Gill:

( January 12th, 2015 12:49pm UTC )

There is *no* selection bias involved in defining the outcome for each time-slot to be the first observed measurement result (+1 or -1) and "no outcome" if there is none. The experimental unit is "time slot". A new setting is chosen at random for each time slot in each wing of the experiment. Treatment = setting, outcome = what we choose to define to be outcome, any function of what is observed in the relevant time-slot.

Yes, statistical thinking is hard for physicists, and statisticians do indeed know lots of ways to lie with statistics; but they also know (regarding estimation) smart and legitimate ways to eliminate bias, to compute standard errors, and to minimise the size of standard errors; and (regarding testing of hypotheses) smart and legitimate ways to create unbiased statistical tests (actual significance level = nominal significance level) with maximal power.

"the only loophole here is the post-selection loophole, invoked by the mysterians, to justify irrationality". Notice the abuse: "mysterians", "irrationality". If you don't understand something, being rude to someone who could possibly teach you something which you don't know yet is not a smart way to learn. Of course, if you are certain you are a genius and the other is a fool, then you may as well be rude to them, if it boosts your ego. But I still don't think it is wise.

Donald Graft:

( January 12th, 2015 1:39pm UTC )

There is no bias in systematically discarding data so that your hypothesis is proven when it would have been disproven if all the data were included? This is standard operating procedure for the mysterians: show nonlocality at all costs; after all, it cannot be falsified and the gravy train depends on it. Physicists and engineers can't understand probability and Bell's work; that is reserved for trained statisticians. My extensive library on the subject is totally wasted on me. Why did I waste all that money?

"Of course if you are certain you are a genius and the other is a fool then you may as well be rude to them, if it boosts your ego."

That is another display of psychological projection. My ego does not require any boosts (in Buddhism, we try very hard to eliminate the ego, not boost it [http://the-wanderling.com/ego.html]), nor is such rhetoric considered to be scientific discourse by rational people.

Regarding the term "mysterian", I agree with another poster here, it is an accurate and neutral term that denotes the belief that there is some kind of mysterious (didn't Einstein call it "spooky") physics that allows information transfer in violation of Lorentz invariance. If the shoe fits, wear it with pride. Regarding the term "irrational", again, if the shoe fits, wear it with pride. The belief that a correlated joint distribution can be recovered with marginal measurements is irrational. I certainly won't be dissuaded from using precise and meaningful terminology. One might say it is "fit for purpose".

Jan-Åke Larsson:

( January 12th, 2015 2:46pm UTC )

Donald, it seems that you have cooled down somewhat. I think that is good.

It is dishonest to so heavily edit and delete remarks that you have made earlier in the thread. It gives the thread a weird look when some of our responses are referring to comments by you that are not there anymore. I know you believe this is acceptable tactics, but the general impression is dishonesty on your part.

You have removed the more caustic remarks, which I suppose is good. I still have them, having saved the page several times in the process. But I will not re-post them, I will just add: There is no psychological projection by Richard, only an observation of your behaviour.

Richard Gill:

( January 12th, 2015 2:54pm UTC )

I'm sorry Don if I misrepresented you, but I was not aware of doing so. On the other hand I am very aware of the same thing happening in the other direction. It certainly causes a lot of irritation and that degrades the quality of the discussion.

Seems to me that we could all do with a bit more Buddhism.

Donald Graft:

( January 12th, 2015 3:35pm UTC )

Calling me dishonest in public requires a response. I have explained my editing policies once already (convergence toward a crystallized rendering of the divine creative spark of thought). I regret that you find it dishonest and object to my methods. You are going to be very busy if you want to collect all my posts and edits. I freely confess that I do a lot of editing, having worked as a technical writer and editor for Bell Laboratories early in my life. It is a great auxiliary skill for a scientist to have. And you'd better grab this one before it disappears. You can use it later to rebut my science.

The mysterians are now reduced to attacking me for my hypothesized personal flaws, their scientific arguments having failed miserably.

Richard Gill:

( January 12th, 2015 4:55pm UTC )

Donald Graft divides the world into "realists" and "mysterians". He says he is not biased and berates me for saying that he is. Every penetrating scientific analysis which he doesn't understand means, to him, that the writer thereof is a fool or is evil or both. Anyone who doesn't understand him is accused of misrepresenting him, when they try to re-express what he seems to be saying in other words. Anyone who tries to explain another point of view is pedantic and arrogant. He and he alone has the moral high ground of being a "physicist". His ego doesn't require boosting. He boasts of invading the mind of his enemies. Of destroying Zeilinger's reputation. He and his allies sustain a brilliant spark of creative energy. Hmm... time for some calm insight meditation, I think.

Donald Graft:

( January 12th, 2015 5:10pm UTC )

Ohhhh! As I said: The mysterians are now reduced to attacking me for my imagined personal flaws, their scientific arguments having failed miserably. Now they have added ridiculing my spiritual beliefs. Note also how it comes immediately after they take me to task for what they perceive as abuse! Monty Python would be green with envy.

Do not take ravings as accurate representations of what I have said and done. I leave it to rational readers to judge that.

I am a (retired) computer engineer, notwithstanding having also studied both Natural Sciences and Philosophy. Engineering keeps one close to raw nature, and makes one sensitive to nature's unmerciful crushing of deluded thinking. Physicists, too, are close to raw nature. I do not denigrate physicists; I admire them.

Richard Gill:

( January 13th, 2015 8:57am UTC )

"Engineering keeps one close to raw nature, and makes one sensitive to nature's unmerciful crushing of deluded thinking. Physicists, too, are close to raw nature. I do not denigrate physicists, as Gill does; I admire them."

I have deep admiration of physicists. And engineers for that matter.

Yet sometimes physicists are so close to raw nature that they become blind to logic. Remember, for them, mathematics is merely the language in which they describe nature, whereas for a mathematician, there is an independent mathematical universe.

For instance a respected colleague published a paper on a CHSH experiment exhibiting experimental violation of Tsirelson's inequality hence disproof of quantum theory. Neither he nor the referees or editors of PRL were aware of the momentous nature of this experiment ... nor of the loopholes in his experiment. Another respected colleague claimed that a single run of his GHZ experiment (I mean one single set of measurements of three entangled particles) generated an outcome logically incompatible with local realism. In fact, a single run of his experiment also generated outcomes incompatible with the GHZ state ...

Long ago I was very struck by a remark either by Patrick Suppes or by Arthur Fine that the general level of discourse concerning (possibly non-deterministic) causality in the social sciences and in economics is much much higher than in physics. It is actually *harder* to investigate causality in those fields, and hence people have been *forced* to think more deeply and carefully about it. Does anyone know which paper this was said in? I can't locate it right now.

It's amusing that Donald Graft seems to believe that I am ridiculing his spiritual beliefs. He earlier made some comments which suggested to me that he and I actually had similar ones. Not that that has any place in the present discussion.

Heine Rasmussen :

( January 12th, 2015 6:35pm UTC )

Sigh, I really think it is time to close this thread. The arguments that the selection Christensen et al. employed is innocent, and essentially amounts to noise reduction (as opposed to "strategic post-selection"), have been clearly presented. Nothing more can come out of this discussion.

Donald Graft:

( January 12th, 2015 6:37pm UTC )

When I suggest returning to the science, the mysterians decide that further discussion should be prohibited.

Heine, I respect your right to express your opinions, sighs and all. I don't respect your desire to shut down discussion of matters that you find inconvenient.

Note that the mysterians now admit that the post-selection actually occurs, after having claimed that it is impossible ("checkmate", "he has no move", "formal proof", etc.), and now seek to discard the inconvenient data as noise. I apparently overlooked the demonstration that the Christensen et al experiment is too noisy, or that only the second and subsequent events in a timeslot are noise.

Do I get any credit for discovering and communicating this important discovery (as well as my rational interpretation of EPR), never before advanced in the history of the EPR paradox? Of course not, I am just a rude crackpot who refuses to bow down to the "experts" of the field. One who must be silenced.

Heine Rasmussen :

( January 12th, 2015 7:06pm UTC )

Donald wrote:

"That is your opinion and I respect your right to it, sighs and all. I don't respect your desire to shut down discussion of matters that you find inconvenient."

My desire was not to shut down discussion about matters I find inconvenient, but I do think any discussion that deteriorates into mud slinging and psychological characterizations, devoid of any physical or mathematical arguments, should succumb to Darwinian selection and go extinct.

"Do I get any credit for discovering and communicating this important discovery? "

Of course you do. It was an interesting discovery, but after pondering over it for a while, I concluded that this was a completely legal selection procedure that could not make an otherwise local realist model violate the CH-inequality. So the discovery was interesting, the consequences not so much.

Donald Graft:

( January 12th, 2015 7:26pm UTC )

Regarding the mud-slinging: after an egregiously scurrilous attack on my person, I asked to return to the science. Your immediate response was to request that the thread be shut down, with a condescending sigh.

Thank you for pondering my work, and you have the right to your opinion. I ask only for fair consideration. The consequences of the finding are uncertain. I ask only for all the data to be included on an equal basis. It must be possible for trained statisticians to find a way to do this. I remain open to further discussion of the science, but responding to personal attacks is tiresome.

Unregistered Submission:

( January 12th, 2015 7:55pm UTC )

Heine, what in your opinion is the real problem with the coincidence loophole, that allows it to defeat Bell's inequalities?

Heine Rasmussen :

( January 12th, 2015 7:59pm UTC )

Donald wrote:

"Regarding the mud-slinging, after an egregiously scurrilous attack on my person, I asked to return to science. Your immediate response was to request that the thread be shut down, with condescending sighs and all."

My post was not directed to anyone in particular, and I certainly did not mean that you were the only one to contribute to the degrading of this thread. Point being, this thread no longer serves any rational purpose.

Unregistered Submission:

( January 12th, 2015 10:19pm UTC )

Before you go, Heine, I'd appreciate it if you answer my question and possibly a follow-up based on your answer. Thanks.

**What in your opinion is the problem with the coincidence loophole that allows it to violate the inequality?**

Donald Graft:

( January 12th, 2015 10:22pm UTC )

Thank you for clarifying, Heine. You are right, there is no reason why we cannot return to the science. Unregistered has just asked a scientific question. It looks interesting.

What better way to celebrate the International Year of Light!

Richard Gill:

( January 13th, 2015 7:39am UTC )

Unregistered submitter: you asked Heine "what is the real problem with the coincidence loophole, that allows it to defeat Bell's inequalities?".

Do you mean: "how or why it allows ... " or do you mean "why is it a problem?". And do you really mean the coincidence loophole, or are you talking of the detection efficiency loophole?

Jan-Ake Larsson's survey paper is very clear, I think, and well worth reading rather carefully. http://arxiv.org/abs/1407.0363

Section 3.5 is on the coincidence loophole.

Unregistered Submission:

( January 13th, 2015 1:32pm UTC )

Richard Gill, I think my question is clear enough. Do you have an answer for it? I'm trying to get one of the mysterians to state it succinctly. For example, the problem with the detection loophole is that it is possible for a perfectly normal local realistic model to have local angle-based losses. Now, to show just how flawed the arguments in this thread have been, I would appreciate it if one of you would state precisely what the problem for the coincidence-time loophole is.

Note, I have read Larsson's paper, as well as your joint paper with him so I don't appreciate the school-teacher attitude.

Richard Gill:

( January 13th, 2015 3:11pm UTC )

Sorry, I don't think your question was clear at all. Hence it provoked a school-teacher approach.

And why do you call me (or anyone else, for that matter) a mysterian? I do not reject local realism. I don't have mystical beliefs concerning quantum non-locality one way or the other. We are talking about science here, not about personal religion, and also I hope not about personal psychology.

But anyway, now I know what problem you are talking about, I can answer that the problem with the coincidence loophole is similar to that with the detection loophole: it is possible for a perfectly normal local realistic model to have local timing delays or speed ups (depending both on hidden variables and on the local setting), just as a perfectly normal local realistic model can have local angle based losses. With the coincidence loophole, some events become counted as "singles" because the time interval between them is "too long". With the detection loophole, some events are counted as singletons (or even lost altogether) because of local losses (depending both on hidden variables and on the local setting).

The problem with the coincidence loophole is *worse* than with the detection loophole, since it is now *easier* for local realistic processes to reproduce the quantum correlations, as Jan-Ake Larsson explains in his survey paper.
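
The mechanism described here can be sketched in a toy simulation (an invented illustrative model, not the model of any of the cited papers): each wing computes a delay from the shared hidden variable and its local setting alone, yet the coincidence-window test compares the two delays across the wings, so which pairs survive depends jointly on both settings.

```python
import math
import random

def coincidence_rate(pairs, a, b, window=0.1, seed=0):
    """Toy local model: each delay is a function of the shared hidden
    variable and the LOCAL setting only, but the coincidence test
    compares BOTH delays, so acceptance depends jointly on (a, b)."""
    rng = random.Random(seed)
    accepted = 0
    for _ in range(pairs):
        lam = rng.uniform(0.0, 2.0 * math.pi)  # shared hidden variable
        delay_a = abs(math.sin(lam - a))       # local in wing A
        delay_b = abs(math.sin(lam - b))       # local in wing B
        if abs(delay_a - delay_b) < window:    # cross-wing comparison
            accepted += 1
    return accepted / pairs

aligned = coincidence_rate(100_000, a=0.0, b=0.0)          # equal settings
crossed = coincidence_rate(100_000, a=0.0, b=math.pi / 4)  # rotated setting
print(aligned, crossed)  # all pairs survive vs. only a small fraction
```

The surviving subensemble of hidden variables is thus different for each setting pair, which is exactly the setting-and-hidden-variable-dependent discarding of experimental units that the CH/CHSH derivations forbid.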

Unregistered Submission:

( January 13th, 2015 4:41pm UTC )

My question was very clear. So let me follow up: if you agree that the problem is that it is possible for a local realistic model to have delays between detection events, why would picking just the two events that are closest to the opening of the Pockels cell avoid the detection loophole, as Larsson claims and you agree? Besides, Larsson erroneously claims that the detection and coincidence loopholes are due to experimental imperfections.

Unregistered Submission:

( January 13th, 2015 6:03pm UTC )

I meant "coincidence loophole" in the last sentence of my previous post. Why would picking only the events at the beginning of the Pockels cell avoid the coincidence loophole?

Richard Gill:

( January 13th, 2015 7:19pm UTC )

Unregistered submitter: why it avoids the coincidence loophole: because there is no selective rejection of experimental units (paired time slots). The processing of data in each wing of the experiment, in each time slot, is local. If however you decide on the basis of comparing results in the two wings of the experiment to reject altogether some of the experimental units, you may very well introduce bias since the pair not being rejected depends on the measurement settings applied to each time slot, and the local variables.

The coincidence loophole, and the detection loophole, both work through selectively discarding some of the experimental units, in a way which might depend on hidden variables and on the measurement setting applied to the experimental unit.

See "Bertlmann's socks", especially the discussion around Figure 7. Picking and reporting the first event in the time-slot is part of what happens (or may be thought to happen) entirely *within* the time limits and the space limits of the big box drawn in the figure. It's just a little computer chip built into the measurement device. The measurement device generates a binary outcome at a certain place and within a certain time limit.

Unregistered Submission:

( January 14th, 2015 2:01pm UTC )

As I have explained on the discussion page for the Larsson and Gill coincidence-loophole paper, https://pubpeer.com/publications/E0F8384FC19A6034E86D516D03BB38, Richard Gill's arguments above are mistaken. The experimental unit can't possibly be "time-slots". It must be the particle pairs, since the integrand of the paired expectation value is the shared lambda between particle pairs, and not the shared time of the time-slots. By selecting just the first particle in a time-slot, you allow for the possibility that the space of effective lambdas for the outcomes is no longer the same for each probability term in the inequality, which makes violation likely. I repeat: by selecting the first particles in paired time-slots, you force the times to be the same, but allow the lambdas to be different, which violates the core assumption for deriving the inequality in the first place.

One way to mitigate it would be to use *all* particle pairs within each time-slot, so that you effectively assume the time is included in lambda, but without *systematically* discarding lambdas which introduce longer time delays. This would give you a much better match of the experimental analysis to the assumptions used to derive the inequality. This is exactly what Donald has done. His method is superior to that suggested by Larsson and used by Christensen et al. That his results show no violation indeed shows that this experiment firmly lands on the local realism side of the debate, and not the quantum mystery side.

One other thing. You say "selectively discarding". I would say it is more accurate to say "systematically discarding". Selective has a motive connotation which is unnecessary. The same can be accomplished innocently by naive treatment of the data, such as considering only the first particle pair in paired time slots. It does not appear "selective" but it is definitely "systematic".
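
To make the contrast concrete, here is a hypothetical sketch of the two counting rules (the timestamps, slot layout, and function names are invented for illustration and taken from neither paper):

```python
# Hypothetical detection times (seconds within the slot) for one paired
# time slot in each wing; the values are invented for illustration.
alice_slot = [0.2, 0.7]        # two detections on Alice's side
bob_slot = [0.1, 0.5, 0.9]     # three detections on Bob's side

def first_event_pairs(a_times, b_times):
    """First-event rule: keep only the earliest detection in each wing."""
    if a_times and b_times:
        return [(min(a_times), min(b_times))]
    return []

def all_event_pairs(a_times, b_times):
    """All-events rule: every Alice detection paired with every Bob
    detection in the same slot (time treated as part of lambda)."""
    return [(ta, tb) for ta in a_times for tb in b_times]

print(len(first_event_pairs(alice_slot, bob_slot)))  # 1 pair kept
print(len(all_event_pairs(alice_slot, bob_slot)))    # 6 pairs kept
```

The two rules keep different subsets of the same raw data, which is exactly why they can support different conclusions.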

Donald Graft:

( January 14th, 2015 2:13pm UTC )

The correct link to the new thread is:

https://pubpeer.com/publications/B087561066AD645C5348ADC2E4CF1C

Richard Gill:

( January 14th, 2015 5:15pm UTC )

Thanks, Donald. The discussion is continued there.

Unregistered Submitter: if you think there are some points you have raised which I did not yet react to, please raise them again, on the other thread.

Richard Gill:

( January 16th, 2015 6:44am UTC )

Actually, I would suggest that one picks the event in each time-slot closest to the midpoint of the time-slot. Least chance of getting an event which should really have been paired with an event in another time-slot.
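
A minimal sketch of that selection rule, assuming timestamped detections and known slot boundaries (the function and variable names are illustrative):

```python
def pick_nearest_midpoint(times, slot_start, slot_end):
    """Return the detection time closest to the slot midpoint,
    or None if the slot contains no detections."""
    if not times:
        return None
    mid = 0.5 * (slot_start + slot_end)
    return min(times, key=lambda t: abs(t - mid))

# Illustrative slot [0, 1): the event at 0.48 is nearest the midpoint 0.5,
# while the events near the slot edges are the ones most likely to belong
# to a neighbouring slot.
print(pick_nearest_midpoint([0.05, 0.48, 0.97], 0.0, 1.0))  # 0.48
```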

Unregistered Submission:

( January 18th, 2015 5:59pm UTC )

Richard Gill: ( January 16th, 2015 6:44am UTC ): "Actually, I would suggest that one picks the event in each time-slot closest to the midpoint of the time-slot. Least chance of getting an event which should really have been paired with an event in another time-slot."

Good idea, if not too hard for the electronics. Time slot interference could be avoided with a blind time (when no detection is indicated) at the beginning of each time slot.

Donald Graft:

( January 18th, 2015 7:01pm UTC )

Can we please keep the coincidence window stuff in the other thread? I do not address this specifically in my paper, and it will be useful to keep all the relevant arguments in one thread. Thank you.

https://pubpeer.com/publications/B087561066AD645C5348ADC2E4CF1C

Unregistered Submission:

( January 20th, 2015 12:10am UTC )

This excellent published paper by Graft, which demolishes Bell's Theorem, is relevant to the present discussion.

http://rationalqm.us/papers/bell.pdf

The quantum mysterians are in total despair. Ours is a Universe without spooky action at a distance. Herr Einstein was right.

Donald Graft:

( January 20th, 2015 12:58pm UTC )

Thanks for the kind words, but I have to point out that it was my first paper, naive in some ways, and applies only to Bell's original 3-term inequality. As I and others have shown, that inequality can be violated by simple contextuality. But CH cannot be so violated. So while interesting, it's not highly relevant here.

Enter your reply below (Please read the **How To**)

Unregistered Submission:

( January 22nd, 2015 4:42pm UTC )

Are you saying that the results in the paper are not correct? Because there you gave the pseudo-code of a Monte Carlo simulation of a local-realistic hidden variables model which violates Bell's inequality, and Quantum Mysterians like Gill say that this is impossible to do.

Donald Graft:

( January 22nd, 2015 5:41pm UTC )

The paper is correct in that a model is demonstrated that violates Bell's original 3-term inequality. However, the same approach cannot be used to violate the CH inequality. Technically, the derivations of the 3-term inequality and the CH inequality have completely different assumptions. Nobody cares that much about the original 3-term inequality anymore (as it is so easy to violate). You'll find the mysterians interested only in CHSH and CH, and variants thereof.

What is wrong/naive in the paper is the suggestion that all inequality violations (even for CH, etc.) are due only to lack of joint measurability of incompatible arrangements. I no longer believe this, and I have shown in a later paper that the CH inequality is derived (by Clauser and Horne) without any requirement for joint measurability of incompatible arrangements, and that, correctly applied, the CH inequality can validly test aspects of locality.

I definitely agree with you that Einstein was right about the universality of Lorentz invariance in nature.

Richard Gill:

( January 24th, 2015 2:18pm UTC )

It is very good that we agree on this very fundamental issue. So the only problem is what we mean by "correctly applied".

The following remarks concern a side issue, but I'd like to mention the connections between the various Bell-type inequalities we are talking about here. My apologies to those who already know all this (it is very standard).

Bell's three-term inequality is derived (under local realism) under the condition of a further "one-term equality". So it does actually involve four correlations. Seen this way, it is a special case of the CHSH inequality, obtained from CHSH by additionally demanding that one of the four correlations equals -1.
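
That reduction is easy to check numerically. The sketch below is illustrative: it samples deterministic local strategies with perfect anticorrelation built in (Bob's outcome is minus Alice's), and confirms both that Bell's three-term expression stays within its bound and that it equals, up to a constant shift, the CHSH sign variant in which E(b,b) is pinned to -1.

```python
import random

random.seed(1)
settings = ("a", "b", "c")

# A mixture of deterministic local strategies; Bob's outcome is always
# minus Alice's, so perfect anticorrelation E(x, x) = -1 is built in.
lambdas = [{s: random.choice((-1, +1)) for s in settings} for _ in range(50)]

def E(x, y):
    """Correlation E(x, y) averaged over the hidden variable; B = -A."""
    return sum(f[x] * (-f[y]) for f in lambdas) / len(lambdas)

bell = E("a", "b") - E("a", "c") - E("b", "c")                # Bell's 3-term form
chsh = E("a", "b") - E("a", "c") - E("b", "b") - E("b", "c")  # a CHSH sign variant

print(E("b", "b"))        # -1.0 exactly
print(bell <= 1 + 1e-12)  # True: Bell's bound holds for every such model
print(abs(chsh - (bell + 1)) < 1e-12)  # True: same inequality, shifted by 1
```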

Moreover, as was mentioned before, under the no-signalling assumption, CHSH and CH are equivalent. (No-signalling = Alice can't see at her site what Bob is doing, and vice versa.) There really is only one inequality for the two-party, two-setting, two-outcome situation: CHSH. As was proven by A. Fine (1982), Phys. Rev. Lett. 48, 291 (and actually also by somebody else before, but their paper was forgotten, and I have forgotten who they were, too). The Clauser-Horne trick was to reduce a three-outcome situation (+1, -1, no detection) to two by coarse-graining: merge -1 and no detection. Then use no-signalling (a set of *equalities*) to rewrite the inequality into something which looks quite different but is actually equivalent, if all the equalities are true. Of course with experimental data those equalities are not exact but only true up to statistical variation.
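
Under no-signalling the two forms are tied together by an exact algebraic identity, S = 4*CH + 2, where S is the CHSH value and CH is the Clauser-Horne expression. A numerical sketch (random mixtures of deterministic local strategies; outcome "1" stands for the detected-and-plus coarse-grained outcome, and the setup is illustrative):

```python
import random

random.seed(2)

# Random mixture of deterministic local strategies. Outcome 1 means
# "detected and +1" (the Clauser-Horne coarse-graining); 0 is the rest.
strategies = [({x: random.choice((0, 1)) for x in ("a", "a'")},
               {y: random.choice((0, 1)) for y in ("b", "b'")})
              for _ in range(200)]
n = len(strategies)

def P11(x, y):  # coincidence probability P(1, 1 | x, y)
    return sum(A[x] * B[y] for A, B in strategies) / n

def PA(x):      # Alice's singles probability P_A(1 | x)
    return sum(A[x] for A, B in strategies) / n

def PB(y):      # Bob's singles probability P_B(1 | y)
    return sum(B[y] for A, B in strategies) / n

def E(x, y):    # the same data re-expressed with +/-1 outcomes
    return sum((2 * A[x] - 1) * (2 * B[y] - 1) for A, B in strategies) / n

CH = (P11("a", "b") + P11("a", "b'") + P11("a'", "b")
      - P11("a'", "b'") - PA("a") - PB("b"))
S = E("a", "b") + E("a", "b'") + E("a'", "b") - E("a'", "b'")

print(abs(S - (4 * CH + 2)) < 1e-9)  # True: S = 4*CH + 2
print(CH <= 1e-9 and S <= 2 + 1e-9)  # True: CH <= 0 exactly when S <= 2
```

Because the identity is exact, CH <= 0 and S <= 2 are the same statement whenever the marginals obey no-signalling.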

Unregistered Submission:

( January 25th, 2015 7:15pm UTC )

Question for Gill and Graft: Are you suggesting that the terms in the CH inequality can be expectations over different probability measures? If not, what do you mean by "without any requirement for joint measurability"?

Donald Graft:

( January 25th, 2015 11:37pm UTC )

Well, first, there are no expectations in CH, only probabilities.

It's all described clearly in this paper: arXiv 1404.4329, "On Bell-Like Inequalities for Testing Local Realism", section 2.3.

Executive summary: inserting numbers into a tautologous inequality is not the same as sampling from a joint distribution of incompatible measurements.

Donald Graft:

( January 25th, 2015 11:47pm UTC )

"under the no-signalling assumption, CHSH and CH are equivalent"

That is what I understood, but Larsson suggested that they were equivalent, period. That is not true.

No-signalling is an additional unnecessary assumption with unclear significance. CH alone is fine with one clear assumption (factorizability) related to nonlocality.

You can see that "no-signalling" complicates things unnecessarily, because, for example, Bierhorst's "new loophole" affects only the CH+no-signalling inequality, and not the bare CH inequality, as I have described. The (irrationally applied) quantum joint prediction predicts a violation of bare CH, so why would it be necessary to apply additional assumptions whose significance is not clear?

Richard Gill:

( January 26th, 2015 9:31am UTC )

"No-signalling" is implied by "no action at a distance". I think that that is an assumption which does have physical significance. Especially since it can easily be experimentally verified (up to statistical error).

To be precise, "no-signalling" is the statement that the probability that Alice sees (say) "+1" when she uses setting "a" and Bob uses setting "b" does not depend on the setting "b" which Bob is using in the other wing of the experiment.
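
For the singlet state this is easy to verify from the standard textbook joint probabilities (the specific angles below are arbitrary):

```python
from math import cos

def p_joint(A, B, a, b):
    """Textbook joint probability for the polarization singlet state:
    P(A, B | a, b) = (1 - A*B*cos(2*(a - b))) / 4, with A, B in {-1, +1}."""
    return (1 - A * B * cos(2 * (a - b))) / 4

def p_alice_plus(a, b):
    """Alice's marginal P(A = +1 | a, b): sum the joint over Bob's outcomes."""
    return sum(p_joint(+1, B, a, b) for B in (-1, +1))

# No-signalling: Alice's marginal must not depend on Bob's setting b.
a = 0.3
marginals = [p_alice_plus(a, b) for b in (0.0, 0.4, 1.1, 2.0)]
print(marginals)  # 0.5 (up to rounding) for every value of b
```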

It's the most basic form of "information causality" ("*m* = 0") which is described in the 2009 paper doi:10.1038/nature08400 "Information causality as a physical principle" by Pawłowski, Paterek, Kaszlikowski, Scarani, Winter & Zukowski

Alice learns nothing from Bob when Bob sends her 0 bits. The general information causality principle is that Alice learns no more than *m* bits of information when Bob sends her *m* bits. The Tsirelson bound 2√2 in a CHSH-type experiment is exactly the bound set by information causality. Anything bigger than 2√2 would violate information causality.

Unregistered Submission:

( January 26th, 2015 9:44am UTC )

And "no signalling" is a part of local realism mentioned in the title of the article, so unavoidable in this discussion even if unnecessary in a wider context.

Donald Graft:

( January 26th, 2015 1:05pm UTC )

No signalling is an extra, unnecessary assumption above and beyond the bare CH conditions. There is a large literature on the significance and meaning of no signalling; it is not so clear as one may think. The Christensen et al experiment concerns the bare CH inequality appealing to a single assumption related to locality (factorizability), and therefore so does my paper. No signalling can be discussed elsewhere if anybody finds it interesting.

Unregistered Submission:

( January 27th, 2015 10:36am UTC )

Donald Graft: ( January 26th, 2015 1:05pm UTC ): "No signalling is an extra, unnecessary assumption above and beyond the bare CH conditions."

Other than "no signalling", what there remains that should be experimentally tested? And how would the CH inequality be relevant to those remaining experiments?

Other than "no signalling", what there remains that should be experimentally tested? And how would the CH inequality be relevant to those remaining experiments?

Donald Graft:

( January 27th, 2015 2:51pm UTC )

"Other than "no signalling", what there remains that should be experimentally tested? And how would the CH inequality be relevant to those remaining experiments?"

I've already said it here several times, including in the very post that you cite, and my paper "On Bell-Like Inequalities ..." covers it in detail, including the validity and relevance of the bare CH inequality for testing locality. Locality is embodied in the factorizability assumption (Bell locality). That is all that is needed to test locality. No further additional assumptions are needed or desirable.

Unregistered Submission:

( January 27th, 2015 3:07pm UTC )

I still don't understand what difference it makes whether the thing to be tested is called "locality" or "no-signalling", or why testing one would be more important than testing the other.

Unregistered Submission:

( January 27th, 2015 3:34pm UTC )

Graft: "Well, first, there are no expectations in CH, only probabilities."

So it is OK to add probabilities from different measures?

Unregistered Submission:

( January 27th, 2015 3:44pm UTC )

Graft: "It's all described clearly in this paper: arXiv 1404.4329, "On Bell-Like Inequalities for Testing Local Realism", section 2.3.

Executive summary: inserting numbers into a tautologous inequality is not the same as sampling from a joint distribution of incompatible measurements."

In arXiv 1404.4329, aren't you integrating over the same lambda (equations 10 and 11)? So there is an assumption there that all the measurements are compatible; otherwise you can't integrate.

Donald Graft:

( January 27th, 2015 4:21pm UTC )

"So it is OK to add probabilities from different measures?"

Sure, as long as we don't try to sample the measures simultaneously.

Donald Graft:

( January 27th, 2015 4:24pm UTC )

"in arxiv 1404.4329, aren't you integrating over the same lambda? (equations 10 and 11), so there is an assumption there that all the measurements are compatible, otherwise you can't integrate."

I don't see any problem there (it doesn't have to be the same lambda) but if you'd like to discuss it further, let's do it in a thread for that paper. Thank you.

I do thank you for making this point, as the paper was indeed unclear. I have modified the version to be published to make it clear that the lambdas are different for each experimental arrangement: lambda(alpha1, beta1), lambda(alpha1, beta2), etc.

Donald Graft:

( January 27th, 2015 4:35pm UTC )

"I still don't understand what difference it makes whether to be tested is called "locality" or "no-signalling", or why testing one would be more important than testing the other."

We are talking about different assumptions that can be formulated mathematically, not general philosophical concepts and how they are labeled. If for you it *doesn't* make a difference which assumption is used (factorizability versus constraints on marginals), then you can be happy with my choice to follow Clauser and Horne, and Christensen et al.

Unregistered Submission:

( January 27th, 2015 9:47pm UTC )

Donald Graft said: "Well, first, there are no expectations in CH, only probabilities."

Lemma. P(A) = E[I_A]

Richard Gill:

( January 28th, 2015 11:39am UTC )

"No-signalling" is locality at the level of actually observed variables. Violation of no-signalling would enable action at a distance - action at a distance which could be exploited by us in the macroscopic world. "Quantum non-locality" or "violation of local realism" is non-locality involving physical variables which only have a mathematical existence, under an assumption called "realism". Which is actually the *idealistic* standpoint that also outcomes of non-performed measurements exist in space-time, or can be placed in space-time, alongside of the outcomes of the actually performed measurements. If quantum-non-locality is proven, *and* if you assume that quantum phenomena have an underlying classical-like explanation, *then* there is action at a distance at this underlying (hidden) level. But not action at a distance which we can use for any purpose. Since quantum mechanics (Copenhagen interpretation) is entirely compatible with relativistic causality.

It is only when you attempt to add to quantum mechanics (which is what Einstein wanted to do) that you appear to get into difficulties (Bell).

Donald Graft:

( February 1st, 2015 3:19pm UTC )

"Lemma. P(A) = E[I_A]"

It's not clear what this is saying. Expectations range over -1 to 1. Therefore, a probability cannot be equated to an expectation. An inhomogeneous inequality cannot be converted to a homogeneous one by application of a lemma.

Richard Gill:

( February 1st, 2015 4:00pm UTC )

The equation P(A) = E[I_A] is saying that probabilities *are* expectations. A probability *can* be equated to an expectation value.
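
On a finite sample space the lemma can be checked mechanically. A minimal illustration, using a fair die (my example, chosen only for concreteness):

```python
from fractions import Fraction

# Sample space: one roll of a fair die; event A = "the roll is even".
omega = (1, 2, 3, 4, 5, 6)
prob = {w: Fraction(1, 6) for w in omega}         # uniform probability measure
I_A = {w: 1 if w % 2 == 0 else 0 for w in omega}  # indicator of the event A

P_A = sum(prob[w] for w in omega if I_A[w] == 1)  # P(A) computed directly
E_IA = sum(prob[w] * I_A[w] for w in omega)       # E[I_A]

print(P_A == E_IA == Fraction(1, 2))  # True: the probability *is* this expectation
```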

An inhomogeneous inequality (CHSH) can be converted to a homogeneous one (CH) by application of two lemmas. We also need "no-signalling".

Donald Graft:

( February 1st, 2015 4:13pm UTC )

"probabilities are expectations"

As shown, that cannot be true when negative expectations are present, such as in CHSH. It's a distraction because CH contains only probabilities.

"An inhomogeneous inequality (CHSH) can be converted to a homogeneous one (CH)..."

This incorrectly reverses the meanings of inhomogeneous and homogeneous in the context of EPRB.

'We also need "no-signalling".'

Exactly my point.

Richard Gill:

( February 2nd, 2015 12:28pm UTC )

http://www.amazon.com/Probability-Expectation-Springer-Texts-Statistics/dp/0387977643

My meaning of inhomogeneous and homogeneous is as is usual in mathematics. I'm not familiar with how the legendary Emilio Santos used the terms.

Please keep your language civil. I don't see the need for phrases like "condescending lecturing", "schoolboy howlers", and "LOL".

Donald Graft:

( February 2nd, 2015 2:31pm UTC )

The cited text does not assert that a probability is the *same thing* as an expectation; probabilities are not expectations. An expectation is the expected value of a variable. Depending on the range of the variable, the expectation may not be limited to [0,1] and so cannot *in general* be a probability.

Your use of inhomogeneous versus homogeneous is simply wrong in the context of EPRB discussed here, as all competent foundations researchers instantly recognize. It is revealing that you cannot allow yourself to acknowledge and correct a simple mistake.

Richard Gill:

( February 2nd, 2015 3:52pm UTC )

A probability is an expectation value of a zero-one valued random variable. Probabilities are expectations. Nobody said that expectations are probabilities.

Huygens built his axiomatic "probability theory" on the concept of expectation value. Probabilities were a derived concept. Various modern authors have done the same.

Donald Graft:

( February 2nd, 2015 4:04pm UTC )

"probabilities are expectations [P = E]"

That is incorrect in general, as I have explained. Once again, probabilities must be [0,1] while the expectations in the CHSH inequality are [-1,1]. Therefore, they cannot be equated.

"Probabilities were a derived concept."

Exactly, they are not the same thing, which is precisely what I first pointed out. Now we see equivocation, to hide the error, by pretending that we are not talking about expectations in the CHSH inequality.

Regardless, it is all irrelevant to my paper, which applies the bare CH inequality, formulated solely with real probabilities of coincidences and singles (and thus CH is an *inhomogeneous* inequality). It's just galloping that contributes nothing useful, and serves only to distract from the demonstration that Larsson-Gill counting creates artifactual violations of CH through illegitimate post-selection, and that the Christensen et al. experiment confirms local realism and disconfirms the quantum joint prediction.

Donald Graft:

( January 26th, 2015 4:50pm UTC )

Status update: Today I received notification of acceptance of this paper for publication pending revisions, and I will need to make my final revisions within a few days. So, I would welcome any further points that peers wish to raise before I have to draw the line. I greatly appreciate all the penetrating feedback from peers and will incorporate revisions to clarify the issues peers have raised. I believe PubPeer is a fantastic resource offering not only post-publication peer review but also a mechanism to obtain high quality pre-publication peer review that complements the journal reviews. Thanks to PubPeer and the generous peers, my paper will be significantly better than the preprint version.

Donald Graft:

( February 2nd, 2015 3:50pm UTC )

The final version was accepted for J. Adv. Phys.:

http://www.aspbs.com/jap/Special%20issue%20of%20JAP.pdf

Thank you once again to PubPeer and the peers and other commenters for helping me to greatly improve the paper. The anonymous journal referees and editor also helped me with penetrating analysis, and brought to my attention several new ideas. I regret that the anonymity precludes me from citing them directly.

Unregistered Submission:

( February 3rd, 2015 11:47pm UTC )

Congrats, Donald. You should consider adjusting that piece of the paper that says that one of the problems with EPR experiments is that we are trying to determine a joint distribution sampling only the two marginals. It's correct and well known that the two marginals do not determine the joint distribution if we do not assume independence, and we can't assume that, of course. But in an EPR experiment when you pair the detections in your favorite way you establish the corresponding empirical joint distribution, from which you compute correlations, etc. Please, think about it, or find another way to explain what you were really thinking about this point. Best wishes.

Donald Graft:

( February 4th, 2015 4:09am UTC )

Thank you for your profound observation and stay well. The joint distribution is fixed by the physics of the photon-pair source generating the singlet stream. The question is whether that distribution can be recovered (sampled) from separated (marginal) measurements. Nobody is just making up arbitrary joint distributions. We simply acknowledge that the source distribution cannot be recovered (sampled). You might think of post-selection (data discarding) as a sort of perverse copula, inserted prior to the correlation of the outcome streams, but then the result is arbitrary and artificial; the desired result is simply engineered through the perverse ad hoc copula. Our conclusions about locality should not be dependent on whose "favorite way" we choose. The quantum joint prediction predicts a violation even for direct correlation without any copula (which was the rational way experiments were analyzed before the mysterians realized that they could not violate CH that way), so why would we think of inserting an arbitrary, unjustified copula? Best wishes for your projects and may the divine creative spark nourish and empower you, as you embrace and sustain it.

Richard Gill:

( February 4th, 2015 8:40am UTC )

From the quantum mechanics point of view, we should distinguish between the joint distribution of ideal simultaneous measurement outcomes at the source, and the joint distribution actually observed of imperfect simultaneous measurement outcomes at the measurement stations. Obviously, the joint quantum state of the two particles may have changed in the meantime. And all kinds of other stuff goes on, for instance, in the detectors.

Bell-CHSH-CH and all that is about the joint distribution of actual measurement outcomes at two distant measurement stations. I think that Bell's Bertlmann's socks paper made that abundantly clear about 25 years ago. I would not call it a profound observation, but it is a very important one.

Unregistered Submission:

( February 5th, 2015 3:32am UTC )

Gill: "From the quantum mechanics point of view, we should distinguish between the joint distribution of *ideal simultaneous* measurement outcomes at the source and the joint distribution actually observed of *imperfect simultaneous* measurement outcomes at the measurement stations. "

And how do you know that the ideal measurement should be simultaneous? Quantum mechanics does not make that assumption.

Unregistered Submission:

( February 5th, 2015 4:57am UTC )

Yes, QM does make that assumption. But it is true. The measurements must be simultaneous.

Unregistered Submission:

( February 5th, 2015 3:36pm UTC )

Unregistered Submission: ( February 5th, 2015 3:32am UTC ): "And how do you know that the ideal measurement should be simultaneous? Quantum mechanics does not make that assumption."

Quantum mechanics makes no assumption or conclusion about what anything should be. It only tells what happens if you do this way or that way.

Richard Gill:

( February 5th, 2015 7:03am UTC )

I do not insist on simultaneity. But quantum states evolve in time, so the measurements should be close together in time.

Unregistered Submission:

( February 5th, 2015 12:15pm UTC )

This is quite naive. The fact that particles evolve in time does not mean they will be observed simultaneously at a specific space-time coordinate.

If one travels a 2 m fiber-optic path and the other a 2 km fiber-optic path before reaching their detectors, you will not detect them simultaneously; this is well understood in physics 101. Even if the path lengths are the same, if one passes through a device that rotates its polarization or shifts its phase relative to the other, the detection times will not be simultaneous. Please study the Faraday effect and the corresponding experimental results.

Richard Gill:

( February 10th, 2015 6:10pm UTC )

Unregistered submitter: I don't know relativistic quantum field theory so I don't know how to do this properly. Does it matter? Local realism makes certain predictions and experimenters try to show that those predictions are experimentally violated. I don't know how rigorous and thorough the QM calculations done by Christensen et al. are.

Donald Graft:

( February 5th, 2015 3:00pm UTC )

This discussion belongs in the other thread about coincidence windows, etc. My paper does not address any of this and my coincidence counting method does not involve comparison of detection times. Thank you.

https://pubpeer.com/publications/B087561066AD645C5348ADC2E4CF1C

Unregistered Submission:

( February 5th, 2015 3:29pm UTC )

Donald Graft: ( February 5th, 2015 3:00pm UTC ): "my coincidence counting method does not involve comparison of detection times"

Then why is the paragraph Time-of-flight delay corrections there in the Methodology section?

Donald Graft:

( February 5th, 2015 8:08pm UTC )

"Then why is the paragraph Time-of-flight delay corrections there in the Methodology section?"

It is to align all the events to the Pockels cell openings. I do not compare individual arrival times to determine coincidences. By all means, go ahead and discuss it here if you like, although it seems more suited for the other thread. Remember, I am not claiming any coincidence window effects.

I am skeptical of appeals to the coincidence window effect in real experiments, because the differential delay through, for example, an electro-optic modulator at different settings is very small compared to the analytical window size, and arrival-time histograms support that view (Hnilo, Aguero, and others).

Unregistered Submission:

( February 5th, 2015 7:38pm UTC )

Richard Gill: "distinguish between the joint distribution of ideal simultaneous measurement".

Since Einstein we know that simultaneity is a relative concept which depends on the motion of the observer.

Regarding "Bell's fifth position", mentioned by Gill, one avenue of investigation is some sort of limitation imposed on EPR-B experiments by the time-energy uncertainty relation ΔE·Δt ~ h (notice that time is not an operator in QM; also, although this uncertainty relation is presented in elementary books, its interpretation is controversial and covers a large body of literature). Maybe the tiny time window needed in the detections to establish the pairings of the photons prohibits "something". Maybe Graft's gut feeling that the sample units cannot be defined arbitrarily is correct.

Donald Graft:

( February 5th, 2015 8:15pm UTC )

"Maybe Graft's gut feeling that the sample units cannot be defined arbitrarily is correct."

Your point is surely germane and crucial. The idea that sample blocks cannot become too small is supported by both QM and local realism, but how large can they be? And what block size should we use to analyze our experiments? That's a subject of active research. Let's share what we find here.

Richard Gill:

( February 7th, 2015 12:29pm UTC )

"Since Einstein we know that simultaneity is a relative concept". Of course. That's why Bell insisted that the measurement process at Bob's side B is finished before the random setting choice on Alice's side A could be communicated from A to B, and vice versa.

It's up to the QM specialists to figure out how to model joint measurements on a bipartite quantum system, when the two components are far apart in space. Obviously the notion of "simultaneous measurement" is problematic if not meaningless. It seems to me that this has to be tackled using the framework of relativistic quantum field theory.

According to Donald, the joint state has already changed because of the separation. An idea which goes back to Furry (Physical Review, 1936). It has become separable (= probability mixture of product states). I don't know what physical process Graft proposes to make this happen [typo corrected - thanks for alert!].

Donald Graft:

( February 7th, 2015 12:55pm UTC )

"According to Donald, ..."

This misrepresents me; I never made any such claim and specifically disavowed it several times.

Unregistered Submission:

( February 7th, 2015 1:09pm UTC )

Richard Gill: ( February 7th, 2015 12:29pm UTC ): "It's up to the QM specialists to figure out how to model joint measurements on a bipartite quantum system, when the two components are far apart in space."

As long as all measured quantities are discrete like spins or polarisations, a sum of products is fine.

"Obviously the notion of "simultaneous measurement" is problematic if not meaningless." In QM one measurement is always before or after the other. However, the order does not matter when distinct particles are measured, as the result will be the same anyway.

The situation is different when we want to say something about all local realistic theories, mainly because "all" includes so diverse alternatives.

Richard Gill:

( February 7th, 2015 3:28pm UTC )

It's not a *lie*, Donald. It's apparently a misunderstanding on my part.

A lie is telling a falsehood with intention to deceive. I tried to understand what you wrote ("*Just as we would not blindly expect the joint prediction to apply in the presence of heavy decoherence, we should not expect it to apply in a case of separated measurement*"), and I tried to re-express it in my own words.

So you don't think that what is going on in these experiments is decoherence leading to joint states which are separable?

Peer 2:

( February 7th, 2015 5:02pm UTC )

I think the problem here is that "Joint measurement" is an ill defined concept. Maybe that is the starting point. What does "Joint measurement" mean in terms of laboratory experiments?

Richard Gill:

( February 7th, 2015 5:38pm UTC )

Peer 2, perhaps you should read "Bertlmann's socks". Alice and Bob are kilometers apart. They have synchronised their clocks, as well as they can. They each, very rapidly, toss a coin and do a measurement in a way determined by the outcome of their coin toss. Alice and Bob are both finished before they could possibly know what the other's coin toss was. Isn't that clear?

Richard Gill:

( February 8th, 2015 7:27pm UTC )

I am finding Donald Graft's description of the Christensen et al. data very illuminating and very useful. Also, his reworking of the data-sets which he has made available on his website (links in his paper) is really valuable.

I find that I can quite easily read the plain text files data1.txt, data2.txt etc into R; even the largest (by Mb) is not too big to process in the usual way: it is "just" 5039636 observations of three variables.

Here is a fragment from one of the files:

.

.

.

4320915017980 21 15

4320915274005 21 15

4320915530005 21 15

4320915786031 21 15

4320916042031 21 15

4320916298056 21 15

4320916317948 21 2

4320916317995 21 2

4320916318822 21 1

4320916554056 21 15

4320916810081 21 15

4320917066081 21 15

4320917322106 21 15

4320917578106 21 15

.

.

.

Graft explains "The first field of each line is the extracted timetag of the event in the original time unit of 156.25 picoseconds, as distributed in the raw data. The second field represents the angle settings for the event (11, 12, 21, or 22, as described in the authors' data notes). The third field is the detection event type: value 15 denotes an opening of the Pockels cell; value 1 denotes a photon detection at side 1; and value 2 denotes a photon detection at side 2."

If I think of each opening of the Pockels cell as marking the boundary of a "trial", then in this fragment we see 9 trials with no events in them at all, and 1 trial with a double detection on Bob's side and a single detection on Alice's side. I would propose to analyse this data using the CHSH inequality, whereby each "trial" supplies exactly two binary outcomes, one from each wing of the experiment. The two possible outcomes would stand for "no detection events" vs. "one or more detection events".
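In Python terms, the trial counting on this fragment can be sketched as follows (a minimal sketch; the variable names and the convention of assigning each event to the most recent Pockels-cell opening are mine):

```python
# Group events into "trials" delimited by Pockels-cell openings (type 15)
# and count detections per trial. Event types follow Graft's description:
# 15 = Pockels cell opening, 1 = detection side 1, 2 = detection side 2.
OPEN, SIDE1, SIDE2 = 15, 1, 2

fragment = """\
4320915017980 21 15
4320915274005 21 15
4320915530005 21 15
4320915786031 21 15
4320916042031 21 15
4320916298056 21 15
4320916317948 21 2
4320916317995 21 2
4320916318822 21 1
4320916554056 21 15
4320916810081 21 15
4320917066081 21 15
4320917322106 21 15
4320917578106 21 15
"""

trials = []  # one dict per trial: setting plus per-side detection counts
for line in fragment.splitlines():
    timetag, setting, etype = (int(x) for x in line.split())
    if etype == OPEN:
        trials.append({"setting": setting, SIDE1: 0, SIDE2: 0})
    elif trials:  # ignore any events before the first opening
        trials[-1][etype] += 1

complete = trials[:-1]  # the last window is cut off by the fragment's end
empty = sum(1 for t in complete if t[SIDE1] == 0 and t[SIDE2] == 0)
print(len(complete), empty)  # 10 complete trials, 9 of them empty
```

Running this on the fragment reproduces the count stated above: ten complete trials, nine with no events, and one with a single detection on side 1 and a double detection on side 2.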

I would like to hear from other peers, and of course especially from Donald, whether they agree that this is a meaningful and legitimate way to analyse the data.

PS Here are the lengths of the 20 datasets:

1259503, 1259609, 1260342, 12598292, 5054125

5041134, 10098427, 5043176, 5064834, 5041534

5042601, 5039636, 5041341, 5041192, 7576395

10082869, 7563232, 5042704, 5043425, 5042238

Knowing that in advance, one can read them in faster.

Donald Graft:

( February 8th, 2015 8:24pm UTC )

Larsson-Gill counting has already been discredited, because it illegitimately discards valid data. Galloping in circles is not productive.

"Knowing that in advance, one can read them in faster."

Just read until EOF. Anyway, if you are interested in loading the data quickly, then use the compiled binary dataset.

"Knowing that in advance, one can read them in faster."

Just read until EOF. Anyway, if you are interested in loading the data quickly, then use the compiled binary dataset.

Heine Rasmussen :

( February 8th, 2015 8:38pm UTC )

This is obviously a valid way to analyze the data. No data is being discarded here.

Richard Gill:

( February 8th, 2015 9:23pm UTC )

R reads till the EOF. It reads faster (because of memory allocation issues) if it is told in advance how far it is going to have to read.

I don't know how to read the binary file in R and I don't want to know. I like to know what I'm reading.

Graft thinks Larsson-Gill counting has been discredited but he has no arguments for this, only his a priori instincts (just as he knows that local realism is correct).

Unregistered Submission:

( February 9th, 2015 5:10am UTC )

The files don't contain the observed polarization for each detection?

Richard Gill:

( February 9th, 2015 8:23am UTC )

Unregistered submitter: the procedure is not "detect a photon; then measure its polarization". The procedure is "listen for photons with given polarization".

You can imagine this as a filter followed by a detector.

Unregistered Submission:

( February 9th, 2015 11:27am UTC )

Richard Gill: ( February 8th, 2015 7:27pm UTC ): "Graft explains "[...] The second field represents the angle settings for the event (11, 12, 21, or 22, as described in the authors' data notes). [...]""

Do 1 and 2 represent opposite or turned settings?

Donald Graft:

( February 9th, 2015 8:17pm UTC )

"Do 1 and 2 represent opposite or turned settings?"

Yes.

Unregistered Submission:

( February 10th, 2015 11:01am UTC )

Donald Graft: ( February 9th, 2015 8:17pm UTC )

"Do 1 and 2 represent opposite or turned settings?"

Yes.

Which?

Donald Graft:

( February 10th, 2015 5:52pm UTC )

I guess I didn't understand the original question. Can you clarify it? What is turned? What is opposite?

11 means settings a1 and b1, etc.

Unregistered Submission:

( February 10th, 2015 6:33pm UTC )

The question was: what exactly do the settings set? What does 1 mean, and what does 2?

Donald Graft:

( February 10th, 2015 8:46pm UTC )

They are the possible angle settings of the polarizers. CH/CHSH uses two possible angle settings at side A and two at side B. Refer to Christensen et al if you need to know the actual angle values.

Richard Gill:

( February 9th, 2015 8:35am UTC )

Donald, here is a short question about your data files. If several events are simultaneous (I mean: have exactly the same time stamp), do you always list them in a fixed order, e.g. Pockels cell openings always before detections?

Unregistered Submission:

( February 9th, 2015 11:05am UTC )

You should check the distribution of detection times relative to the previous Pockels cell opening, and also the distribution of times from one opening to the next. The distribution of times between successive events (irrespective of type) might also help.

Donald Graft:

( February 9th, 2015 1:17pm UTC )

"If several events are simultaneous (I mean: have exactly the same time stamp), do you always list them in a fixed order e.g. pockels cell openings always before detections? "

Yes, Pockels cell openings come before detections in that opening (otherwise my compilation process would not be correct). Be aware that there are no cases in the data where a detection has the same timestamp as a Pockels cell opening, so one need not worry that I may have wrongly excluded events from their openings by ordering incorrectly. I wrote a program to check for such cases.

Note that the format of the binary data file is clearly documented in the paper, so there is no reason why anybody should not know what they are reading when loading the binary data file. You always have the option to read the text files instead, or even read the original MATLAB data files, but if you want to run the program hundreds or even thousands of times with different degrees of freedom of the analysis, the start-up overhead becomes very annoying. Yes, you can also load the text files once and keep the data in memory, but then why not also make a binary file image of the memory contents, so you can restart the program with low overhead? Typically, program restarts will be required, e.g., to revise and re-compile the analysis code, after recovery from computer reboots and crashes, etc. These are everyday matters for computer scientists.

Richard Gill:

( February 9th, 2015 2:43pm UTC )

Donald, I don't program very often in C. I very often program in R. I read your text files into R and then save them on disk in R's binary format. So as to minimize start-up overhead in subsequent analyses ...

You mention two delay times: so you subtract one "delay time" off all of Alice's detection times, and a different "delay time" off all of Bob's detection times? Leaving the pockel cell opening times unaltered?

Donald Graft:

( February 9th, 2015 2:48pm UTC )

Yes, per instructions from Christensen. The pockels cell time is the objective stable time reference and the detection times are corrected at both sides by their respective delays. One sees the same thing in the Christensen et al analysis code. It's all clearly described in my paper.

In the Giustina et al experiment, there is no objective stable reference and it is sufficient to shift only one side's detections (relative to the other), to allow for comparing the detection times to determine coincidences. I'm a bit worried about the low granularity (one bit) of the shift in Giustina et al, but completion of my independent analysis will take some time.

Richard Gill:

( February 10th, 2015 6:13pm UTC )

A discussion has been started up on the Clauser-Horne inequality (not by me). In case anyone is interested: https://pubpeer.com/publications/D040649FD6120333A5525C7C045DE3

Richard Gill:

( February 12th, 2015 9:51am UTC )

In the meantime I have been able to reproduce some of the Christensen et al. analyses, starting from Graft's pre-processed data sets (the text files) and working in the R language.

http://rpubs.com/gill1109/christensen0

http://rpubs.com/gill1109/christensen1

http://rpubs.com/gill1109/christensen2

http://rpubs.com/gill1109/christensen3

These four passes could easily be reduced to two (or even everything done in just one go) but I prefer to keep them separate.

Heine Rasmussen :

( February 12th, 2015 4:26pm UTC )

Does this mean that there might be something wrong with the data in three of the files? (Nr 3, 5, and 16)

Richard Gill:

( February 12th, 2015 5:32pm UTC )

Heine, I don't know. More likely there is a bug in my programs ... or possibly in the pre-processing of the data by Donald. I've asked Brad Christensen for comments.

It can also be statistical variation. I have updated the last script so that it also computes a standard error and hence also a t-statistic.

I also tried computing the CHSH statistic for the 20 data-sets. Not one of them exceeded 2! This suggests to me that the rate of detections is slowly varying in time, so that there is an apparent violation of "no-signalling". Making the different versions of CHSH (such as CH and Eberhard) not equivalent, after all. Alice's detection rate should not depend on Bob's setting. But if settings are not being changed at random very rapidly and if there are slow drifts in detection rates, then these different statistics can give rather different answers. We already saw this in the Giustina et al. data. Different versions of Eberhard gave rather different answers.

This all underlines the fact that we need a loophole free experiment! In particular, we need new random settings for every "trial".

Or it could underline the fact that I'm not that good at programming ...

Correction: there was a bug in my CHSH calculation. There are a lot of violations, but it seems less effective than CH. http://rpubs.com/gill1109/christensen4

Heine Rasmussen :

( February 12th, 2015 6:19pm UTC )

Interesting! I agree that we need some technical developments in these experiments to increase sensitivity. I would like to see a solid violation; after all, the theoretical QM value for CHSH is 2*sqrt(2). Ideally an experiment with a sigma-confidence at absurd levels so that this matter is put to rest, at least experimentally. There will always be someone who will claim the *theorems* are wrong, of course.

Donald Graft:

( February 12th, 2015 7:07pm UTC )

The theorems are not wrong, but they do not apply if convergence is not reached, i.e., the experiments are not close enough to infinite in length.

Heine Rasmussen :

( February 12th, 2015 7:15pm UTC )

But that's why we have statistics and probability theory and confidence intervals. They can all be made part of the theorems. If I flip a coin 1 000 times, and get 900 heads, I can tell you the exact probability of this (900 or more heads) happening with a fair coin. Now it's up to you to believe whether the coin is fair or not. I think you would reach your conclusion without requesting that we should flip the coin for eternity.
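For the record, the exact tail probability in this example is easy to compute (a sketch; the function name is mine):

```python
# Exact P(X >= k) for X ~ Binomial(n, 1/2): the chance of 900 or more
# heads in 1000 fair flips, with exact integer arithmetic before dividing.
from math import comb

def binom_tail_half(n, k):
    return sum(comb(n, j) for j in range(k, n + 1)) / 2**n

p = binom_tail_half(1000, 900)
print(p)   # astronomically small (far below 1e-100)
```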

Unregistered Submission:

( February 12th, 2015 7:23pm UTC )

The theorems are not wrong mathematically. The problem is that they do not apply to the physics of QM and the physics of the EPRB experiments.

Donald Graft:

( February 12th, 2015 8:11pm UTC )

"I also tried computing the CHSH statistic for the 20 data-sets. Not one of them exceeded 2!"

A local realist would not find that surprising. It's great to see a replication of the analysis confirming local realism.

Unregistered Submission:

( February 13th, 2015 2:22am UTC )

Interesting, Richard. Even with your clever choice of sample units to allow CHSH to be used in this context, the fact that the detector angles are not selected independently on each "trial" seems to play a considerable role. One suggestion: "thin" the sequence of events to break this dependence between trials, and do it in a deterministic way. Using the terminology of your Statistical Science paper, you would be applying your Theorem 1 to a sub-spreadsheet of the data.
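A deterministic thinning of this kind takes only a line or two. A minimal sketch (in Python rather than R; the list of trial records and the value of JUMP are stand-ins):

```python
# Deterministic thinning: keep every JUMP-th trial so that retained trials
# are separated by several setting periods, weakening serial dependence.
# JUMP must be fixed in advance, not tuned on the data.
JUMP = 7
trials = list(range(100))      # stand-in for the real per-trial records
thinned = trials[::JUMP]       # the deterministic "sub-spreadsheet"
print(len(thinned))            # 15 of the 100 trials survive
```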

Richard Gill:

( February 13th, 2015 5:12am UTC )

CHSH was OK after all: http://rpubs.com/gill1109/christensen4

Richard Gill:

( February 13th, 2015 8:43am UTC )

Exactly. The Christensen et al. experiment shouldn't be vulnerable to the so-called "production-rate loophole", since we know the numbers of windows with "no detection event, no detection event". But despite this, it seems that even after correcting for the production rate, the probability of detection varies in time. Hence there *is* an apparent violation of no-signalling, and hence the different versions of CH, and CHSH itself, can all give rather different results on the same experiment.

Christensen et al have 20 data-sets, of varying time durations (from around 200 to 2000 seconds worth of data), but in each data-set, settings are only switched once a second. So one should combine all experiments and just pick the data from one trial per one-second setting combination. This is not going to be enough to get a statistically significant violation of CHSH ...

Since production and detection rates vary slowly in time, it means that everything has changed from one second interval to the next, in other words, the detection rate at Bob's place can effectively depend on the setting at Alice's.

Some mathematical statisticians should figure out a good statistical model and try to do an optimal statistical analysis under clearly understood extra assumptions. The Christensen et al. experiment *does* need some version of the fair-sampling assumption, to take care of the fact that all probabilities are slowly varying in time and settings remain constant for very many trials one after another.

There may of course just be some serious bugs in my R programs or serious misunderstanding on my part of the data-pre-processing!

Update: there were some small bugs. I have updated the R html notebooks on RPubs.

The experiment with the worst results (experiment 3) has the smallest number of coincidences, and one of the shortest durations.

PS: I aggregated the data over the 20 sub-experiments and got

## count
## setting      SA      C      SB         N
##      11   46070  29173   46202  27153018
##      12   48077  34146  146243  28352345
##      21  150837  34473   47448  27827311
##      22  150713   1869  144371  27926988

very close indeed to Christensen et al's Table 1! The differences are presumably due to rounding errors and/or the distinction between "<" and "<=".

A very nice check that both my code and Graft's data (text files) are OK.

Actually, the fact of slowly changing parameters means that we *should* analyse the data by cutting the data-stream into small pieces and estimating the CHSH quantity S or the Clauser-Horne quantity B many times, with of necessity large statistical error, and only aggregating afterwards.
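One way to carry out that aggregation, sketched in Python with hypothetical per-block numbers (not the Christensen et al. estimates): estimate S on each short block together with its variance, then pool the blocks by inverse-variance weighting.

```python
import numpy as np

# Hypothetical per-block CHSH estimates and their estimated variances;
# with slowly drifting rates, each block is short enough to be treated
# as approximately stationary.
block_S = np.array([2.05, 1.98, 2.11, 2.03])
block_var = np.array([0.04, 0.05, 0.03, 0.04])

w = 1.0 / block_var                       # inverse-variance weights
S_pooled = np.sum(w * block_S) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))      # standard error of the pooled S
print(S_pooled, se_pooled)
```

The pooled estimate carries a large standard error per block, but the blocks combine into a single well-defined statistic afterwards, which is the point.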

Unregistered Submission:

( February 13th, 2015 3:54pm UTC )

The detector settings are randomly changed every second? My suggestion: working with the longest data stream, fix some integer JUMP and compute the correlations only for the sub-stream defined by data[seq(from = 1, to = nrow(data), by = JUMP), ]. Varying JUMP, you will probably find a range in which CHSH is violated. Best. Zen.

Heine Rasmussen :

( February 13th, 2015 4:31pm UTC )

But that would amount to cherry picking (searching for a range so that the inequality is violated), so the violation would no longer be statistically significant, or at least it would take a much more complicated calculation of significance levels.

Basically, if one starts trying different things with the data, comparing CHSH values, and picking the method that gives the greatest violation, one has introduced a subtle non-locality into the data processing.

Richard Gill:

( February 13th, 2015 5:50pm UTC )

In fact, Zen and Heine, it seems that Christensen already does one cherry-picking-like optimisation: namely, he chooses the delay times for the two sides of the experiment which give the best results. All of Alice's events are shifted to the left by some amount, and all of Bob's by another amount, both relative to a third fixed time series of "Pockels cell openings", which basically marks bursts from the laser. The Pockels cell openings are used to locate the windows (we don't want to include the incidental detections which are not close to these "PC openings"). I don't know if he does this once for a pilot experiment and then applies those delays to all subsequent experiments, or if he only figures out the best delays after all experiments are done.

Also in the Giustina et al experiment there is some post-experiment optimising by choice of delays.

Then there is also an optimization over the window length!

Donald Graft:

( February 13th, 2015 6:10pm UTC )

Welcome to the messy world of EPRB experiment analysis! You can choose different ways to determine the delays from the data, e.g., maximize singles in the window, maximize coincidences, maximize the CH metric, trust the authors' determination, etc. I wouldn't call the delay alignment cherry-picking, though, because it does have an objective motivation, and I don't think you can create a violation out of whole cloth by adjusting the delays (but you could destroy one). Regarding the window size, it should at least be bigger than the TES/detector jitter.
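The delay determination can be pictured as a simple scan: shift one side's timestamps by a candidate delay and keep the delay that maximises the number of coincidences within a fixed window. A Python sketch with synthetic timestamps (the real analysis optimises both sides' delays jointly, and the numbers here are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
alice = np.sort(rng.uniform(0.0, 1e6, 5000))                 # ns, synthetic
true_delay = 35.0
bob = alice + true_delay + rng.normal(0.0, 1.0, alice.size)  # jittered partners

def coincidences(a, b, delay, window=3.0):
    """Count Alice events with a Bob event within +/- window after shifting."""
    b_shifted = np.sort(b - delay)
    idx = np.searchsorted(b_shifted, a)
    below = (idx > 0) & (a - b_shifted[np.maximum(idx - 1, 0)] <= window)
    above = (idx < b_shifted.size) & \
            (b_shifted[np.minimum(idx, b_shifted.size - 1)] - a <= window)
    return int(np.count_nonzero(below | above))

delays = np.arange(0.0, 80.0, 1.0)
counts = [coincidences(alice, bob, d) for d in delays]
best_delay = float(delays[int(np.argmax(counts))])
print(best_delay)
```

With clean data the scan recovers the true offset; the worry discussed above is only about re-running such a scan to optimise the final violation rather than the physically motivated alignment.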

Christensen et al use one delay set for all the experiments, and so I follow that. For my Weihs et al analysis, I needed per-experiment delays to recreate the results. Weihs et al never published the details of their data analysis.

Determining the delay set is a multidimensional optimization in Christensen and so you may find that R is unsuitable due to its slowness. The delay set is one of the things I refer to as degrees of freedom of the analysis.

"we should analyse the data by cutting the data-stream into [small] pieces"

One of the points of my paper is that the positivity analysis (50% rule) is the way to go. All of this underlines how important the analysis phase is, and how important it is that the full analysis be published.

Unregistered Submission:

( February 13th, 2015 4:53pm UTC )

Hi, Heine. It's not exactly cherry picking, because you can explicitly build a statistical model in which directions change randomly every second. Under this model it's possible to construct a suitable significance test. Since Gill's style of inequality distinguishes clearly between model/process parameters and observables, we have enough room to do this kind of modelling. The "jumping analysis" proposed above is just one way to check (to get a first feeling for) whether this direction is fruitful, at least for this particular data set. It's a heuristic.

Heine Rasmussen :

( February 13th, 2015 7:49pm UTC )

Hi, Zen. I agree that correcting the significance levels is theoretically possible, but once one starts postprocessing data in order to optimize violations, this soon leads one into an intractable mathematical mess, for all practical purposes. Best not to try.

Richard Gill:

( February 13th, 2015 8:14pm UTC )

R is not dreadfully slow, if you are smart with it! Use vectorised code (avoid explicit loops). Occasionally use Rcpp (the interface with C++) if you need to write explicit loops. And bear in mind the time it takes to program and debug, as well as the time it takes to run...

About post-processing to optimise statistical significance: it seems to me this should be replaced by data-splitting: use an initial pilot experiment to find good parameters, and then apply them to subsequent "definitive" runs.

But as Donald says there is also a lot of sound physics sense in determining at least the rough size of these "degrees of freedom".

Richard Gill:

( February 14th, 2015 5:49pm UTC )

Using the Christensen et al. data I compared CHSH and CH

It turns out that in the situation of that experiment, the relative statistical accuracy of CH (at least - of the version used by Christensen) is more than twice that of CHSH. You tend to get twice as large a violation of local realism measured in numbers of standard deviations from the bound.

http://rpubs.com/gill1109/compare

The two inequalities are algebraically equivalent under no-signalling.

To be precise: B = (S - 2) / 4.

But even under no signalling, they are not statistically equivalent.
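The algebraic identity is easy to verify numerically. A small Python check using the ideal quantum predictions for polarisation-entangled photons at the standard CHSH angles (the angle choice and the coding E = 4*p11 - 2*pA - 2*pB + 1 with pA = pB = 1/2 are my assumptions for the illustration):

```python
import numpy as np

# Angles giving the maximal quantum CHSH value S = 2*sqrt(2).
a = [0.0, np.pi / 4]           # Alice's two settings
b = [np.pi / 8, -np.pi / 8]    # Bob's two settings

def p11(x, y):
    # P(both "+") for a maximally entangled polarisation state
    return 0.5 * np.cos(x - y) ** 2

def E(x, y):
    # correlation with +/-1 coding; the marginals are pA = pB = 1/2
    return np.cos(2.0 * (x - y))

S = E(a[0], b[0]) + E(a[0], b[1]) + E(a[1], b[0]) - E(a[1], b[1])
B = (p11(a[0], b[0]) + p11(a[0], b[1]) + p11(a[1], b[0])
     - p11(a[1], b[1]) - 0.5 - 0.5)
print(S, B, (S - 2.0) / 4.0)   # B agrees with (S - 2)/4
```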

Richard Gill:

( February 15th, 2015 9:44am UTC )

PS: on the other hand, if we do a standard CHSH-type experiment (everything optimised, S = 2 sqrt 2), it turns out that CHSH has a 33% smaller standard error than CH, i.e. with CH you get two thirds of the number of standard deviations departure from the local realism bound that CHSH would give you.

http://rpubs.com/gill1109/compare1

There are, of course, infinitely many inequalities equivalent under no-signalling, but there is only one "best", and I can tell you which it is.

Let's look at B. We want to test E(B) = 0 versus E(B) > 0. B is a linear combination of 6 relative frequencies.

No-signalling tells us four equalities; let me write them as E(D) = 0, where "D" is a vector of four differences between relative frequencies.

These things have variances and covariances, denote them

Sigma_{BB} (a number)

Sigma_{BD} (a 1 x 4 matrix)

Sigma_{DD} (a 4x4 matrix)

The best test of local realism is based on

B - Sigma_{BD} Sigma_{DD}^{-1} D

and its variance is

Sigma_{BB} - Sigma_{BD} Sigma_{DD}^{-1} Sigma_{DB}

So maybe a reanalysis of the Christensen et al data can show an even stronger violation of local realism.
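The correction takes only a few lines to code. A numpy sketch, where B, D and the covariance blocks are hypothetical placeholder values (not the Christensen et al. estimates):

```python
import numpy as np

# Hypothetical placeholder estimates:
B = 5.0e-4                                        # estimated CH quantity
D = np.array([1.2e-4, -0.8e-4, 0.5e-4, 0.3e-4])   # no-signalling differences
Sigma_BB = 4.0e-9                                 # var(B)
Sigma_BD = np.array([1.0e-9, -0.5e-9, 0.4e-9, 0.2e-9])   # cov(B, D)
Sigma_DD = np.diag([2.0e-9, 2.5e-9, 1.8e-9, 2.2e-9])     # cov(D)

coef = Sigma_BD @ np.linalg.inv(Sigma_DD)         # optimal coefficients
B_adj = B - coef @ D                              # variance-reduced statistic
var_adj = Sigma_BB - coef @ Sigma_BD              # its (smaller) variance
z = B_adj / np.sqrt(var_adj)                      # departure in std deviations
print(B_adj, var_adj, z)
```

By construction var_adj is never larger than Sigma_BB, which is why subtracting the fitted no-signalling differences can only sharpen the test.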

PS: I now added such an analysis. I compute the *optimal* test of local realism assuming that the true probabilities satisfy no-signalling and exhibit the same statistical dependence as in the Christensen et al data. http://rpubs.com/gill1109/compare3 .

PPS Sorry, and then I found some bugs.

I have done the computations again

http://rpubs.com/gill1109/compare3

I have to clean up this code and comment it properly so you can see what is going on.

Today, the optimal test for the Christensen et al. experiment is

CHSH + 0.5 * (Alice setting 1 constraint) + 1.8 * (Alice setting 2 constraint) + 0.5 * (Bob setting 1 constraint) + 1.8 * (Bob setting 2 constraint), and the result is an 8.5 standard deviation departure from local realism.

Compare to Clauser-Horne (7.6 standard deviations) and CHSH (3.7 standard deviations)

Donald Graft:

( May 26th, 2015 4:19pm UTC )

I have updated my analysis of Christensen et al on arXiv to include a discussion of accidentals as well as an expanded conclusion. This did not trigger a notification, because arXiv does not send notifications beyond revision 4, which seems silly to me.

http://arxiv.org/abs/1409.5158

- - Bell violation using entangled photons without the fair-sampling assumption
- - Macroscopic Observability of Spinorial Sign Changes under 2π Rotations
- - Bell's inequality and the coincidence-time loophole
- - Statistics, Causality and Bell's Theorem
- - Polishing the diamond of PubPeer
- - Experimental consequences of objective local theories
- - Experimental loophole-free violation of a Bell inequality using entangled electron spins separated by 1.3 km

a. The quantum joint prediction cannot be recovered in an experiment with separated (marginal) measurements, just as for classical probability. Quantum mechanics does not predict a violation of CH!

b. Valid experiments properly interpreted do not violate the CH inequality and therefore confirm local realism.

c. That does not mean quantum mechanics is wrong. The correct quantum mechanics prediction for an EPRB experiment must use the marginals and not the joint distribution. The essence of quantum mechanics is just fine; we need only to be careful about separated measurement situations, just as we are in classical probability theory. Just as we would not blindly expect the joint prediction to apply in the presence of heavy decoherence, we should not expect it to apply in a case of separated measurement.

Thank you for your consideration and best wishes to all for a wonderful 2015.

It might be that the Christensen et al. experiment has been dealt a blow. But there are many legitimate ways to analyse the results.

I would like to see an analysis of the Christensen et al. data using a *fixed grid* of time windows, and some convention to determine which (single) outcome, if any, “counts” for each interval: e.g. if there are several events, we pick the first. [Note added later: better still, probably, the one nearest the middle of the window; and anyway, better to avoid the boundary of the window.] We would like the time windows chosen so that the setting is constant during the window, and switched at random from one window to the next.
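That fixed-grid convention is simple to implement. A Python sketch with hypothetical timestamps, keeping the event nearest the middle of each window when several fall in it:

```python
import numpy as np

window = 10.0                                  # window length (same units as t)
t = np.array([1.0, 2.5, 14.0, 16.0, 17.0, 31.0])  # hypothetical event times

bins = (t // window).astype(int)               # fixed-grid window index
centres = (bins + 0.5) * window                # centre of each event's window
picked = {}                                    # one surviving event per window
for w, ti, c in zip(bins, t, centres):
    if w not in picked or abs(ti - c) < abs(picked[w] - c):
        picked[w] = ti
print(sorted(picked.items()))                  # window index -> kept timestamp
```

Windows with no event simply never appear in the dictionary; they count as "no detection", which is exactly what a fixed grid gives you for free.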

There is a complete set of Bell inequalities for any number of outcomes per measurement in a Bell-type experiment. The “correct analysis” of a three-outcome experiment (e.g. with the three possible outcomes "+", "–", and "no detection") just investigates all of the three-outcome generalised Bell-type inequalities. It's a finite list, not terribly long. In fact, for a 2-party, 2-measurements-per-party, 3-outcomes-per-measurement experiment, there are only the CHSH inequalities obtained by the various groupings of 3 outcomes into 2 (= CH), and the CGLMP inequalities.

No need to pick one particular inequality in advance, or to go for ad-hoc modifications of CHSH (for lower detector efficiency) like Larsson’s. All there is, are the boundary hyperplanes of the “local polytope”. For a small number of possible outcomes there is a short list of generalised Bell-type inequalities, and that's all one needs to look at.

Richard

PS the author mentions that Matlab is expensive, but aren’t Matlab data files also readable with the open source Octave?

It is definitely a blow to quantum nonlocality, but Christensen et al is an experimental result. They performed the first decisive EPRB experiment and as experimentalists remain unbiased and uncommitted about theoretical analyses of the experimental results. The successful execution of useful experiments is a key feature of science. All experimental laboratories, regardless of the theoretical fashions of the day, or their particular guiding philosophies, must be supported and embraced as the essence of science.

"if there are several events we pick the first".

No. That is post-selection. One simply cannot do that. All of the experimental events must be included on an equal basis.
