No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.
In the early 1930s, the philosophical implications of the current interpretations of quantum theory troubled many prominent physicists of the day, including Albert Einstein. In a well-known 1935 paper, Einstein and co-authors Boris Podolsky and Nathan Rosen (collectively "EPR") sought to demonstrate by a paradox that QM was incomplete. This provided hope that a more-complete (and less-troubling) theory might one day be discovered. But that conclusion rested on the seemingly reasonable assumptions of locality and realism (together called "local realism" or "local hidden variables", often interchangeably). In the vernacular of Einstein: locality meant no instantaneous ("spooky") action at a distance; realism meant the moon is there even when not being observed. These assumptions were hotly debated within the physics community, notably between Nobel laureates Einstein and Niels Bohr.
In his groundbreaking 1964 paper, "On the Einstein Podolsky Rosen paradox", physicist John Stewart Bell presented an analogy (based on spin measurements on pairs of entangled electrons) to EPR's hypothetical paradox. Using their reasoning, he said, a choice of measurement setting here should not affect the outcome of a measurement there (and vice versa). After providing a mathematical formulation of locality and realism based on this, he showed specific cases where this would be inconsistent with the predictions of QM theory.
In experimental tests following Bell's example, now using quantum entanglement of photons instead of electrons, John Clauser and Stuart Freedman (1972) and Alain Aspect et al. (1981) convincingly demonstrated that the predictions of QM are correct in this regard. While this does not demonstrate QM is complete, one is forced to reject at least one of the principles of locality, realism, or freedom (the last leads to alternative superdeterministic theories). Two of these logical possibilities, non-locality and non-realism, correspond to well-developed interpretations of quantum mechanics, and have many supporters; this is not the case for the third logical possibility, non-freedom. Conclusive experimental evidence of the violation of Bell's inequality would drastically reduce the class of acceptable deterministic theories but would not falsify absolute determinism, which was described by Bell himself as '...not just inanimate nature running on behind-the-scenes clockwork, but with our behaviour, including our belief that we are free to choose to do one experiment rather than another, absolutely predetermined'.
These three key concepts – locality, realism, freedom – are highly technical and much debated. In particular, the concept of realism is now somewhat different from what it was in discussions in the 1930s. It is more precisely called counterfactual definiteness; it means that we may think of outcomes of measurements that were not actually performed as being just as much part of reality as those that were made. Locality is short for local relativistic causality. Freedom refers to the physical possibility to determine settings on measurement devices independently of the internal state of the physical system being measured.
Cornell solid-state physicist David Mermin has described the various appraisals of the importance of Bell's theorem within the physics community as ranging from "indifference" to "wild extravagance". Lawrence Berkeley particle physicist Henry Stapp declared: "Bell's theorem is the most profound discovery of science."
Bell's theorem states that any physical theory that incorporates local realism, favoured by Einstein, cannot reproduce all the predictions of quantum mechanical theory. Because numerous experiments agree with the predictions of quantum mechanical theory, and show differences between correlations that could not be explained by local hidden variables, the experimental results have been taken by many as refuting the concept of local realism as an explanation of the physical phenomena under test. For a hidden variable theory, if Bell's conditions are correct, the results that agree with quantum mechanical theory appear to indicate superluminal effects, in contradiction to the principle of locality.
The theorem is usually proved by consideration of a quantum system of two entangled qubits. The most common examples concern systems of particles that are entangled in spin or polarization. Quantum mechanics allows predictions of correlations that would be observed if these two particles have their spin or polarization measured in different directions. Bell showed that if a local hidden variable theory would hold, then these correlations would have to satisfy certain constraints, called Bell inequalities. However, for the quantum correlations arising in the specific example considered, those constraints are not satisfied, hence the phenomenon being studied cannot be explained by a local hidden variables theory.
Following the argument in the Einstein–Podolsky–Rosen (EPR) paradox paper (but using the example of spin, as in David Bohm's version of the EPR argument), Bell considered an experiment in which there are "a pair of spin one-half particles formed somehow in the singlet spin state and moving freely in opposite directions." The two particles travel away from each other to two distant locations, at which measurements of spin are performed, along axes that are independently chosen. Each measurement yields a result of either spin-up (+) or spin-down (−); it means, spin in the positive or negative direction of the chosen axis.
The probability of the same result being obtained at the two locations varies, depending on the relative angles at which the two spin measurements are made, and is strictly between zero and one for all relative angles other than perfectly parallel or anti-parallel alignments (0° or 180°). Bell's theorem is concerned with correlations defined in terms of averages taken over very many trials of the experiment. The correlation of two binary variables is usually defined in quantum physics as the average of the product of the two outcomes of the pairs of measurements. (Note that this is different from the usual definition of correlation in statistics: the quantum physicist's "correlation" is the statistician's "raw (uncentered, unnormalized) product moment".) Also with the physicist's definition, if the pairs of outcomes are always the same, the correlation is +1, whatever common value each pair takes; if the pairs of outcomes are always opposite, the correlation is −1. Finally, if the pairs of outcomes are perfectly balanced, agreeing 50% of the time and disagreeing 50% of the time, the correlation, being an average, is 0. The correlation is related in a simple way to the probability of equal outcomes: it is equal to twice this probability, minus one.
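As a quick numerical illustration (with made-up ±1 data, not a physical model), the physicist's "correlation" and its relation to the probability of equal outcomes can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated pairs of +/-1 outcomes (illustrative data, not a physical model)
a = rng.choice([-1, 1], size=10_000)
b = np.where(rng.random(10_000) < 0.8, a, -a)  # agree about 80% of the time

correlation = np.mean(a * b)   # physicist's "correlation": mean of the product
p_equal = np.mean(a == b)      # probability of equal outcomes
print(np.isclose(correlation, 2 * p_equal - 1))  # True
```

The identity holds exactly trial by trial, since the product is +1 when the outcomes agree and −1 when they disagree.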
Measuring the spin of these entangled particles along anti-parallel directions—i.e., along the same axis but in opposite directions—the set of all results is perfectly correlated. On the other hand, if measurements are performed along parallel directions they always yield opposite results, and the set of measurements shows perfect anti-correlation. Finally, measurement at perpendicular directions has a 50% chance of matching, and the total set of measurements is uncorrelated. These basic cases are illustrated in the table below.
| Anti-parallel | Pair 1 | Pair 2 | Pair 3 | Pair 4 | … | Pair n |
|---|---|---|---|---|---|---|
| Alice, 0° | + | − | − | + | … | + |
| Bob, 180° | + | − | − | + | … | + |

Correlation = ( +1 +1 +1 +1 … +1 ) / n = +1

| Parallel | Pair 1 | Pair 2 | Pair 3 | Pair 4 | … | Pair n |
|---|---|---|---|---|---|---|
| Alice, 0° | + | − | − | + | … | + |
| Bob, 0° or 360° | − | + | + | − | … | − |

Correlation = ( −1 −1 −1 −1 … −1 ) / n = −1

| Orthogonal | Pair 1 | Pair 2 | Pair 3 | Pair 4 | … | Pair n |
|---|---|---|---|---|---|---|
| Alice, 0° | + | − | + | − | … | − |
| Bob, 90° or 270° | − | − | + | + | … | − |

Correlation = ( −1 +1 +1 −1 … +1 ) / n = 0
(50% identical, 50% opposite)
With the measurements oriented at intermediate angles between these basic cases, the existence of local hidden variables could agree with a linear dependence of the correlation on the angle but, according to Bell's inequality (see below), could not agree with the dependence predicted by quantum mechanical theory, namely, that the correlation is the negative cosine of the angle. Experimental results match the curve predicted by quantum mechanics.
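The contrast can be tabulated with a short sketch: the quantum prediction −cos θ versus the straight line through the same endpoints that a local hidden-variable account could reproduce:

```python
import numpy as np

theta = np.linspace(0, np.pi, 7)   # relative angle between the two settings

qm = -np.cos(theta)                # quantum prediction (singlet state)
linear = -1 + 2 * theta / np.pi    # straight line through the same endpoints

for t, q, l in zip(theta, qm, linear):
    print(f"{np.degrees(t):6.1f} deg   QM: {q:+.3f}   linear: {l:+.3f}")
```

At the endpoints (0° and 180°) the two agree; at intermediate angles such as 30° the cosine curve lies below the line, which is where experiments can discriminate between the two.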
Bell's theorem rules out local hidden variables as a viable explanation of quantum mechanics (though it still leaves the door open for non-local hidden variables). Bell concluded:
In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote. Moreover, the signal involved must propagate instantaneously, so that a theory could not be Lorentz invariant.
Over the years, Bell's theorem has undergone a wide variety of experimental tests. However, various common deficiencies in the testing of the theorem have been identified, including the detection loophole and the communication loophole. Over the years experiments have gradually been improved to better address these loopholes, but no experiment to date has simultaneously fully addressed all of them. However, scientists generally expect that such an experiment will be conducted within a few years, and that it will confirm quantum predictions yet again. For example, Anthony Leggett has commented:
[While] no single existing experiment has simultaneously blocked all of the so-called loopholes, each one of those loopholes has been blocked in at least one experiment. Thus, to maintain a local hidden variable theory in the face of the existing experiments would appear to require belief in a very peculiar conspiracy of nature.
To date, Bell's theorem is generally regarded as supported by a substantial body of evidence and there are few supporters of local hidden variables, though the theorem is continually a subject of study, criticism, and refinement.
Bell's theorem, derived in his seminal 1964 paper titled On the Einstein Podolsky Rosen paradox, has been called, on the assumption that the theory is correct, "the most profound in science". Perhaps of equal importance is Bell's deliberate effort to encourage and bring legitimacy to work on the completeness issues, which had fallen into disrepute. Later in his life, Bell expressed his hope that such work would "continue to inspire those who suspect that what is proved by the impossibility proofs is lack of imagination."
The title of Bell's seminal article refers to the 1935 paper by Einstein, Podolsky and Rosen that challenged the completeness of quantum mechanics. In his paper, Bell started from the same two assumptions as did EPR, namely (i) reality (that microscopic objects have real properties determining the outcomes of quantum mechanical measurements), and (ii) locality (that reality in one location is not influenced by measurements performed simultaneously at a distant location). Bell was able to derive from those two assumptions an important result, namely Bell's inequality, implying that at least one of the assumptions must be false.
In two respects Bell's 1964 paper was a step forward compared to the EPR paper: firstly, it considered more hidden variables than merely the element of physical reality in the EPR paper; secondly, Bell's inequality was, in part, liable to be experimentally tested, thus raising the possibility of testing the local realism hypothesis. Limitations on such tests to date are noted below. Whereas Bell's paper deals only with deterministic hidden variable theories, Bell's theorem was later generalized to stochastic theories as well, and it was also realised that the theorem is not so much about hidden variables, as about the outcomes of measurements that could have been taken instead of the one actually taken. Existence of these variables is called the assumption of realism, or the assumption of counterfactual definiteness.
After the EPR paper, quantum mechanics was in an unsatisfactory position: either it was incomplete, in the sense that it failed to account for some elements of physical reality, or it violated the principle of a finite propagation speed of physical effects. In a modified version of the EPR thought experiment, two hypothetical observers, now commonly referred to as Alice and Bob, perform independent measurements of spin on a pair of electrons, prepared at a source in a special state called a spin singlet state. It is the conclusion of EPR that once Alice measures spin in one direction (e.g. on the x axis), Bob's measurement in that direction is determined with certainty, as being the opposite outcome to that of Alice, whereas immediately before Alice's measurement Bob's outcome was only statistically determined (i.e., was only a probability, not a certainty); thus, either the spin in each direction is an element of physical reality, or the effects travel from Alice to Bob instantly.
In QM, predictions are formulated in terms of probabilities — for example, the probability that an electron will be detected in a particular place, or the probability that its spin is up or down. The idea persisted, however, that the electron in fact has a definite position and spin, and that QM's weakness is its inability to predict those values precisely. The possibility existed that some unknown theory, such as a hidden variables theory, might be able to predict those quantities exactly, while at the same time also being in complete agreement with the probabilities predicted by QM. If such a hidden variables theory exists, then because the hidden variables are not described by QM the latter would be an incomplete theory.
Bell inequalities concern measurements made by observers on pairs of particles that have interacted and then separated. Assuming local realism, certain constraints must hold on the relationships between the correlations between subsequent measurements of the particles under various possible measurement settings.
The inequality that Bell derived can be written as:

    |ρ(a, b) − ρ(a, c)| ≤ 1 + ρ(b, c)

where ρ is the correlation between measurements of the spins of the pair of particles and a, b and c refer to three arbitrary settings of the two analysers. This inequality is however restricted in its application to the rather special case in which the outcomes on both sides of the experiment are always exactly anticorrelated whenever the analysers are parallel. The advantage of restricting attention to this special case is the resulting simplicity of the derivation. In experimental work the inequality is not very useful because it is hard, if not impossible, to create perfect anti-correlation.
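Since the singlet correlation predicted by quantum mechanics is ρ(a, b) = −cos(a − b), a quick numerical check (with settings chosen for illustration) shows the inequality |ρ(a,b) − ρ(a,c)| ≤ 1 + ρ(b,c) failing:

```python
import numpy as np

def rho(a, b):
    # Singlet-state correlation predicted by quantum mechanics
    return -np.cos(a - b)

a, b, c = 0.0, np.pi / 4, np.pi / 2   # analyser settings 0, 45 and 90 degrees

lhs = abs(rho(a, b) - rho(a, c))      # |rho(a,b) - rho(a,c)| ~ 0.707
rhs = 1 + rho(b, c)                   # 1 + rho(b,c) ~ 0.293
print(lhs > rhs)                      # True: Bell's inequality is violated
```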
This simple form does have the virtue of being quite intuitive. It is easily seen to be equivalent to the following elementary result from probability theory. Consider three (highly correlated, and possibly biased) coin-flips X, Y, and Z, with the property that:

1. X and Y give the same outcome (both heads or both tails) at least 99% of the time;
2. Y and Z also give the same outcome at least 99% of the time;

then X and Z must also yield the same outcome at least 98% of the time. The number of mismatches between X and Y (at most 1/100) plus the number of mismatches between Y and Z (at most 1/100) together bound from above the possible number of mismatches between X and Z (a simple Boole–Fréchet inequality).
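The coin-flip bound can be verified by simulation; the flip probabilities below are illustrative. The inequality holds trial by trial (if X ≠ Z then X ≠ Y or Y ≠ Z), so the sample means satisfy it deterministically:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Three highly correlated, possibly biased +/-1 "coin flips"
x = rng.choice([-1, 1], size=n, p=[0.3, 0.7])
y = np.where(rng.random(n) < 0.99, x, -x)   # Y matches X 99% of the time
z = np.where(rng.random(n) < 0.99, y, -y)   # Z matches Y 99% of the time

mismatch = lambda u, v: np.mean(u != v)
print(mismatch(x, z) <= mismatch(x, y) + mismatch(y, z))  # True
```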
Imagine a pair of particles that can be measured at distant locations. Suppose that the measurement devices have settings, which are angles—e.g., the devices measure something called spin in some direction. The experimenter chooses the directions, one for each particle, separately. Suppose the measurement outcome is binary (e.g., spin up, spin down). Suppose the two particles are perfectly anti-correlated—in the sense that whenever both are measured in the same direction, the outcomes are always opposite, and whenever they are measured in opposite directions, they always give the same outcome. The only way to imagine how this works is that both particles leave their common source with, somehow, the outcomes they will deliver when measured in any possible direction. (How else could particle 1 know how to deliver the same answer as particle 2 when measured in opposite directions? They don't know in advance how they are going to be measured...) The measurement on particle 2 (after switching its sign) can be thought of as telling us what the same measurement on particle 1 would have given.
Start with one setting exactly opposite to the other. All the pairs of particles give the same outcome (each pair is either both spin up or both spin down). Now shift Alice's setting by one degree relative to Bob's. They are now one degree off being exactly opposite to one another. A small fraction of the pairs, say f, now give different outcomes. If instead we had left Alice's setting unchanged but shifted Bob's by one degree (in the opposite direction), then again a fraction f of the pairs of particles turns out to give different outcomes. Finally consider what happens when both shifts are implemented at the same time: the two settings are now exactly two degrees away from being opposite to one another. By the mismatch argument, the chance of a mismatch at two degrees can't be more than twice the chance of a mismatch at one degree: it cannot be more than 2f.
Compare this with the predictions from quantum mechanics for the singlet state. For a small angle θ, measured in radians, the chance of a different outcome is approximately θ²/4. At two times this small angle, the chance of a mismatch is therefore about 4 times larger, since (2θ)²/4 = 4 · θ²/4. But we just argued that it cannot be more than 2 times as large.
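Numerically, using the exact quantum mismatch probability (1 − cos θ)/2 for the singlet state:

```python
import numpy as np

def p_mismatch(theta):
    # QM probability of differing outcomes when the settings are theta
    # away from perfect anti-parallel alignment (singlet state)
    return (1 - np.cos(theta)) / 2

f = p_mismatch(np.radians(1))    # one-degree shift
f2 = p_mismatch(np.radians(2))   # two-degree shift

print(f2 / f)        # ~ 4: quadratic, not linear, growth
print(f2 <= 2 * f)   # False: violates the local bound of 2f
```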
This intuitive formulation is due to David Mermin. The small-angle limit is discussed in Bell's original article, and therefore goes right back to the origin of the Bell inequalities.
Generalizing Bell's original inequality, John Clauser, Michael Horne, Abner Shimony and R. A. Holt introduced the CHSH inequality, which puts classical limits on the set of four correlations in Alice and Bob's experiment, without any assumption of perfect correlations (or anti-correlations) at equal settings:

    ρ(a, b) − ρ(a, b′) + ρ(a′, b) + ρ(a′, b′) ≤ 2     (1)

where ρ denotes correlation in the quantum physicist's sense: the expected value of the product of the two binary (±1-valued) outcomes.
Making the special choice a′ = b, denoting b′ = c, and assuming perfect anti-correlation at equal settings, so that ρ(b, b) = −1, the CHSH inequality (which holds with the minus sign placed on any one of the four terms) reduces to the original Bell inequality. Nowadays, (1) is also often simply called "the Bell inequality", but sometimes more completely "the Bell-CHSH inequality".
To prove Bell's theorem via a derivation of the Bell-CHSH inequality, we first have to formalize local realism. A common approach is the following:

1. There is a probability space Λ, and the observed outcomes of both Alice and Bob result from random sampling of a (hidden) parameter λ ∈ Λ.
2. The outcome observed by each party depends only on that party's local setting and on λ: Alice's outcome with setting a is A(a, λ), and Bob's outcome with setting b is B(b, λ), each taking the values ±1.

The correlation at settings a and b is then

    ρ(a, b) = ∫_Λ A(a, λ) B(b, λ) p(λ) dλ

where, for accessibility of notation, we assume that the probability measure has a density p, which is therefore nonnegative and integrates to 1. The hidden parameter λ is often thought of as being associated with the source, but it can just as well contain components associated with the two measurement devices.
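As an illustration, here is a minimal Monte Carlo sketch of such a local hidden-variable model (the outcome functions and settings are invented for illustration): λ is a random angle carried by both particles, each outcome depends only on the local setting and λ, and the resulting correlations respect the CHSH bound.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Hypothetical hidden variable: a random angle shared by the pair
lam = rng.uniform(0, 2 * np.pi, n)

def A(setting, lam):
    # Alice's +/-1 outcome: a function of her own setting and lambda only
    return np.sign(np.cos(lam - setting))

def B(setting, lam):
    # Bob's +/-1 outcome: likewise purely local
    return -np.sign(np.cos(lam - setting))

def rho(a, b):
    # Correlation as an average over the hidden parameter
    return np.mean(A(a, lam) * B(b, lam))

a, ap, b, bp = 0.0, 2.0, 1.0, 0.5   # arbitrary illustrative settings
S = rho(a, b) - rho(a, bp) + rho(ap, b) + rho(ap, bp)
print(abs(S) <= 2)  # True: a local model cannot exceed the CHSH bound
```

No matter which outcome functions or settings are chosen here, |S| ≤ 2 (up to sampling noise), which is exactly what the derivation below proves in general.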
Given this formalization of what is meant by local realism, or by a hidden variables theory, the CHSH inequality can be derived as follows.
The subsequent derivation is clearer if we use the following abbreviated notation: A = A(a, λ), A′ = A(a′, λ), B = B(b, λ), B′ = B(b′, λ). Thus each of these four quantities is ±1 and each depends on λ. It follows that for any λ ∈ Λ, one of B + B′ and B − B′ is zero, and the other is ±2. From this it follows that

    A B − A B′ + A′ B + A′ B′ = A(B − B′) + A′(B + B′) ≤ 2

and, by averaging over λ,

    ρ(a, b) − ρ(a, b′) + ρ(a′, b) + ρ(a′, b′) ≤ 2.
At the heart of this derivation is a simple algebraic inequality concerning four variables, A, A′, B, B′, which take the values ±1 only:

    −2 ≤ A B − A B′ + A′ B + A′ B′ ≤ 2.
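Because the four quantities take only the values ±1, the inequality can be verified by brute force over all 16 assignments:

```python
from itertools import product

# Exhaustive check: for every assignment of +/-1 values to A, A', B, B',
# the CHSH combination never exceeds 2 in absolute value
values = [-1, 1]
chsh = [A * B - A * Bp + Ap * B + Ap * Bp
        for A, Ap, B, Bp in product(values, repeat=4)]

print(all(abs(s) <= 2 for s in chsh))  # True
print(sorted(set(chsh)))               # [-2, 2]
```

In fact the combination only ever takes the extreme values ±2, which is why averaging over λ cannot push the correlation combination beyond 2.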
The CHSH inequality is seen to depend only on the following three key features of a local hidden variables theory: (1) realism: alongside of the outcomes of actually performed measurements, the outcomes of potentially performed measurements also exist at the same time; (2) locality, the outcomes of measurements on Alice's particle don't depend on which measurement Bob chooses to perform on the other particle; (3) freedom: Alice and Bob can indeed choose freely which measurements to perform.
The realism assumption is actually somewhat idealistic, and Bell's theorem only proves non-locality with respect to variables that only exist for metaphysical reasons. However, before the discovery of quantum mechanics, both realism and locality were completely uncontroversial features of physical theories.
The measurements performed by Alice and Bob are spin measurements on electrons. Alice can choose between two detector settings labeled a and a′; these settings correspond to measurement of spin along the z or the x axis. Bob can choose between two detector settings labeled b and b′; these correspond to measurement of spin along the z′ or x′ axis, where the x′ − z′ coordinate system is rotated 135° relative to the x − z coordinate system. The spin observables are represented by the 2 × 2 self-adjoint matrices:

    A = σ_z = ( 1  0 ; 0  −1 ),    A′ = σ_x = ( 0  1 ; 1  0 )

for Alice, and

    B = −(σ_z + σ_x)/√2,    B′ = (σ_z − σ_x)/√2

for Bob.
Let φ be the spin singlet state for a pair of electrons discussed in the EPR paradox. This is a specially constructed state described by the following vector in the tensor product C² ⊗ C²:

    |φ⟩ = (|↑↓⟩ − |↓↑⟩)/√2.
Now let us apply the CHSH formalism to the measurements that can be performed by Alice and Bob.
The operators B and B′ correspond to Bob's spin measurements along z′ and x′. Note that the A operators commute with the B operators (they act on different factors of the tensor product), so we can apply our calculation for the correlation. In this case, we can show that the CHSH inequality fails. In fact, a straightforward calculation shows that

    ρ(A, B) = ρ(A′, B) = ρ(A′, B′) = 1/√2   and   ρ(A, B′) = −1/√2,

so that

    ρ(A, B) − ρ(A, B′) + ρ(A′, B) + ρ(A′, B′) = 2√2 > 2.
Bell's Theorem: If the quantum mechanical formalism is correct, then the system consisting of a pair of entangled electrons cannot satisfy the principle of local realism. Note that 2√2 is indeed the upper bound for quantum mechanics, called Tsirelson's bound. The operators giving this maximal value are always unitarily equivalent to the Pauli matrices.
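This calculation can be reproduced numerically. The sketch below uses the standard CHSH-optimal observables (A, A′ along z and x for Alice; B, B′ along directions rotated by 135° for Bob) and the singlet state:

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=float)   # Pauli sigma_z
sx = np.array([[0, 1], [1, 0]], dtype=float)    # Pauli sigma_x

# Singlet state (|up,down> - |down,up>)/sqrt(2) in the basis
# |00>, |01>, |10>, |11>
phi = np.array([0, 1, -1, 0], dtype=float) / np.sqrt(2)

A, Ap = sz, sx                                  # Alice: z and x axes
B = -(sz + sx) / np.sqrt(2)                     # Bob: rotated axes
Bp = (sz - sx) / np.sqrt(2)

def corr(X, Y):
    # Quantum correlation <phi| X (x) Y |phi>
    return phi @ np.kron(X, Y) @ phi

S = corr(A, B) - corr(A, Bp) + corr(Ap, B) + corr(Ap, Bp)
print(S)  # ~ 2.828, i.e. 2*sqrt(2), exceeding the classical bound of 2
```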
Experimental tests can determine whether the Bell inequalities required by local realism hold up to the empirical evidence.
Actually, most experiments have been performed using polarization of photons rather than spin of electrons (or other spin-half particles). The quantum state of the pair of entangled photons is not the singlet state, and the correspondence between angles and outcomes is different from that in the spin-half set-up. The polarization of a photon is measured in a pair of perpendicular directions. Relative to a given orientation, polarization is either vertical (denoted by V or by +) or horizontal (denoted by H or by −). The photon pairs are generated in the quantum state

    |ψ⟩ = (|V⟩ ⊗ |V⟩ + |H⟩ ⊗ |H⟩)/√2

where |V⟩ and |H⟩ denote the state of a single vertically or horizontally polarized photon, respectively (relative to a fixed and common reference direction for both particles).
When the polarization of both photons is measured in the same direction, both give the same outcome: perfect correlation. When measured at directions making an angle 45 degrees with one another, the outcomes are completely random (uncorrelated). Measuring at directions at 90 degrees to one another, the two are perfectly anti-correlated. In general, when the polarizers are at an angle θ to one another, the correlation is cos(2θ). So relative to the correlation function for the singlet state of spin half particles, we have a positive rather than a negative cosine function, and angles are halved: the correlation is periodic with period π instead of 2π.
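A short sketch confirms the cos(2θ) correlation for this photon state; the ±1 polarization observable at angle α is represented by cos(2α)σz + sin(2α)σx in the H/V basis:

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=float)
sx = np.array([[0, 1], [1, 0]], dtype=float)

# Photon pair state (|HH> + |VV>)/sqrt(2), with |H> = (1,0), |V> = (0,1)
psi = np.array([1, 0, 0, 1], dtype=float) / np.sqrt(2)

def polarizer(alpha):
    # +1/-1 observable for a polarization measurement at angle alpha
    return np.cos(2 * alpha) * sz + np.sin(2 * alpha) * sx

def corr(alpha, beta):
    return psi @ np.kron(polarizer(alpha), polarizer(beta)) @ psi

for theta in [0, np.pi / 8, np.pi / 4, np.pi / 2]:
    print(np.isclose(corr(0.0, theta), np.cos(2 * theta)))  # True
```

Note the halved angles relative to the spin-half case: 45° gives zero correlation and 90° gives perfect anti-correlation, just as described above.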
Bell's inequalities are tested by "coincidence counts" from a Bell test experiment such as the optical one shown in the diagram. Pairs of particles are emitted as a result of a quantum process, analysed with respect to some key property such as polarisation direction, then detected. The setting (orientations) of the analysers are selected by the experimenter.
Bell test experiments to date overwhelmingly violate Bell's inequality.
The fair sampling problem was faced openly in the 1970s. In early designs of their 1973 experiment, Freedman and Clauser used fair sampling in the form of the Clauser–Horne–Shimony–Holt (CHSH) hypothesis. However, shortly afterwards Clauser and Horne made the important distinction between inhomogeneous (IBI) and homogeneous (HBI) Bell inequalities. Testing an IBI requires comparing certain coincidence rates in two separated detectors with the singles rates of the two detectors. Nobody needed to perform the experiment, because singles rates with all detectors available in the 1970s were at least ten times all the coincidence rates: given such low detector efficiency, the QM prediction actually satisfied the IBI. To arrive at an experimental design in which the QM prediction violates the IBI, we require detectors whose efficiency exceeds 82.8% for singlet states, but which also have very low dark rates and short dead and resolving times. This is now within reach.
Because, at that time, even the best detectors didn't detect a large fraction of all photons, Clauser and Horne recognized that testing Bell's inequality required some extra assumptions. They introduced the No Enhancement Hypothesis (NEH):
A light signal, originating in an atomic cascade for example, has a certain probability of activating a detector. Then, if a polarizer is interposed between the cascade and the detector, the detection probability cannot increase.
Given this assumption, there is a Bell inequality between the coincidence rates with polarizers and coincidence rates without polarizers.
The experiment was performed by Freedman and Clauser, who found that the Bell's inequality was violated. So the no-enhancement hypothesis cannot be true in a local hidden variables model.
Recent experiments with photons no longer suffer from the detection loophole (see Bell test experiments). This makes the photon the first experimental system for which all main experimental loopholes have been surmounted, albeit presently only in separate experiments (Giustina et al. (2013), Bell violation using entangled photons without the fair-sampling assumption, Nature 497, 227–230; B.G. Christensen et al. (2013), Detection-Loophole-Free Test of Quantum Nonlocality, and Applications, arXiv:1306.5772).
Most advocates of the hidden-variables idea believe that experiments have ruled out local hidden variables. They are ready to give up locality, explaining the violation of Bell's inequality by means of a non-local hidden variable theory, in which the particles exchange information about their states. This is the basis of the Bohm interpretation of quantum mechanics, which requires that all particles in the universe be able to instantaneously exchange information with all others. A 2007 experiment ruled out a large class of non-Bohmian non-local hidden variable theories.
If the hidden variables can communicate with each other faster than light, Bell's inequality can easily be violated. Once one particle is measured, it can communicate the necessary correlations to the other particle. Since in relativity the notion of simultaneity is not absolute, this is unattractive. One idea is to replace instantaneous communication with a process that travels backwards in time along the past light cone. This is the idea behind a transactional interpretation of quantum mechanics, which interprets the statistical emergence of a quantum history as a gradual coming to agreement between histories that go both forward and backward in time.
A radical solution is offered by the many worlds theory of quantum mechanics. According to this view, not only is collapse of the wave function illusory; the apparent random branching of possible futures when quantum systems interact with the macroscopic world is an illusion too. Measurement does not lead to a random choice of possible outcome: the only ingredient of quantum mechanics is the unitary evolution of the wave function. All possibilities co-exist forever and the only reality is the quantum mechanical wave function. According to this view, two distant observers both split into superpositions when measuring a spin. The Bell inequality violations are no longer counterintuitive, because it is not clear which copy of observer B observer A will meet when they compare notes. If reality includes all the different outcomes, locality in physical space (not outcome space) places no restrictions on how the split observers can meet up.
This point underlines the fact that the argument that realism is incompatible with quantum mechanics and locality depends on a particular formalization of the concept of realism. The assumption, in its weakest form, is called counterfactual definiteness. This is the assumption that outcomes of measurements not performed are just as real as those of measurements that were performed. Counterfactual definiteness is an uncontroversial property of all classical physical theories prior to quantum theory, due to their determinism. Many worlds interpretations are not only counterfactually indefinite, they are factually indefinite. The results of all experiments, even ones that have been performed, are not uniquely determined.
If one chooses to reject counterfactual definiteness, reality has been made smaller, and there is no non-locality problem. On the other hand, one is thereby introducing irreducible or intrinsic randomness into our picture of the world: randomness that cannot be "explained" as merely the reflection of our ignorance of underlying, variable, physical quantities. Non-determinism becomes a fundamental property of nature.
Assuming counterfactual definiteness, reality has been enlarged, and there is a non-locality problem. In the many-worlds interpretation of quantum mechanics, however, reality consists only of a deterministically evolving wave function, and non-locality is a non-issue.
There have also been repeated claims that Bell's arguments are irrelevant because they depend on hidden assumptions that, in fact, are questionable—though none of these claims have ever achieved much support. For example, E. T. Jaynes claimed in 1989 that there are two hidden assumptions in Bell's theorem that could limit its generality: according to him, Bell interpreted conditional probabilities as expressing physical causal influences, whereas they need only express logical inference, and Bell's factorized form for the hidden-variable probabilities is not the most general possible local model.
However, Richard D. Gill has argued that Jaynes misunderstood Bell's analysis. Gill points out that in the same conference volume in which Jaynes argues against Bell, Jaynes confesses to being extremely impressed by a short proof by Steve Gull presented at the same conference, that the singlet correlations could not be reproduced by a computer simulation of a local hidden variables theory. According to Jaynes (writing nearly 30 years after Bell's landmark contributions), it would probably take us another 30 years to fully appreciate Gull's stunning result.
The violations of Bell's inequalities, due to quantum entanglement, just provide the definite demonstration of something that was already strongly suspected, that quantum physics cannot be represented by any version of the classical picture of physics. Some earlier elements that had seemed incompatible with classical pictures included complementarity and wavefunction collapse. The Bell violations show that no resolution of such issues can avoid the ultimate strangeness of quantum behavior.
The EPR paper "pinpointed" the unusual properties of the entangled states, e.g. the above-mentioned singlet state, which is the foundation for present-day applications of quantum physics, such as quantum cryptography; one application involves the measurement of quantum entanglement as a physical source of bits for Rabin's oblivious transfer protocol. This non-locality was originally supposed to be illusory, because the standard interpretation could easily do away with action-at-a-distance by simply assigning to each particle definite spin-states for all possible spin directions. The EPR argument was: therefore these definite states exist, therefore quantum theory is incomplete, since they do not appear in the theory. Bell's theorem showed that the "entangledness" prediction of quantum mechanics has a degree of non-locality that cannot be explained away by any local theory.
In well-defined Bell experiments (see the paragraph on "test experiments") one can now falsify either quantum mechanics or Einstein's quasi-classical assumptions. Many experiments of this kind have been performed, and the experimental results support quantum mechanics, though some point out that it is theoretically possible that detectors give a biased sample of photons, so that until the relative number of "unpaired" photons is small enough, the final word has not yet been spoken.

According to Marek Zukowski, quoted in Science magazine (2011), experimenters expect the first loophole-free experiment to be done within five years. According to Anton Zeilinger (2013), one of the foremost experimenters in this field, the goal of a loophole-free experiment is very close and will be a major achievement. According to Gregor Weihs (University of Innsbruck), who conducted a 1998 Bell test experiment, at least four major experimental groups around the world are in the race to be first.

In 2014, Jason Gallicchio, Andrew Friedman, and David Kaiser published a paper in Physical Review Letters proposing an experiment to close the free-will loophole by using light from quasars in opposite directions in the sky (which therefore have not had any contact or communication since the Big Bang) to decide on the settings of the particle detectors. As Kaiser explains it, the experiment would go something like this: a laboratory setup would consist of a particle generator, such as a radioactive atom, that emits pairs of entangled particles. One detector measures a property of particle A, while another detector does the same for particle B. A split second after the particles are generated, but just before the detectors are set, scientists would use telescopic observations of the distant quasars to determine which properties each detector will measure of its respective particle.
In other words, quasar A determines the settings to detect particle A, and quasar B sets the detector for particle B.
What is powerful about Bell's theorem is that it doesn't refer to any particular physical theory. It shows that nature violates the most general assumptions behind classical pictures, not just details of some particular models. No combination of local deterministic and local random variables can reproduce the phenomena predicted by quantum mechanics and repeatedly observed in experiments.