
Sunday, May 4, 2014

Spekkens' 'Toy Theory' Part 2: Epistemic vs. Ontic

I don't like the dichotomy of 'epistemic vs. ontic'. It's helpful in certain ways, but it ultimately is a false dichotomy.

'Ontic' here loosely refers to interpretations that treat the quantum state as a real physical entity. This is essentially the same as 'realism', but since that term is ambiguous, the redundancy isn't necessarily a bad thing. The opposite of an ontic interpretation is a non-realist interpretation--one that takes the quantum state not to be a real entity itself. A non-realist interpretation is a position about explanation. It holds that there is a different explanation of events that is better than the realist explanation, and that these explanations cannot both be true.

'Epistemic' here refers to interpretations that take the quantum state to be 'states of belief about the system', which is horribly confusing. I don't like this definition. Belief doesn't actually factor into physics like this, so something is not right here. 

Let's consider another term instead: probabilistic. The term 'epistemic' pretty much refers to an explanation that is probabilistic. With a phase space probability distribution in classical thermodynamics, there is some collection of outcomes that are not known deterministically in advance, but we can write down a function that holds true of the distribution of outcomes we see. Regardless of whether each individual outcome can be found deterministically, we can use a probability distribution to understand the whole collection of outcomes. This is a probabilistic physical prediction, and Spekkens is simply arguing that a quantum state is just the same kind of thing as this phase space distribution.
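To make that concrete, here's a toy sketch in Python (my own illustration, not Spekkens'): a fair die plays the role of the 'chance set-up'. No single roll is predictable, but the distribution of rolls is.

```python
import random

# Hypothetical "chance set-up": a fair six-sided die. Each individual
# roll is not known in advance, but the distribution of rolls is.
random.seed(0)
rolls = [random.randint(1, 6) for _ in range(60_000)]

# The probabilistic prediction: each face appears with frequency ~1/6.
for face in range(1, 7):
    freq = rolls.count(face) / len(rolls)
    print(face, round(freq, 3))  # each frequency is close to 0.167
```

The prediction is true of the whole collection of outcomes, not of any single roll, and it is true only because the set-up (a fair die, fairly thrown) makes it true.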

The tricky thing about all of this is that there's an assumption that these probabilistic (epistemic) theories are non-real theories, but that simply isn't the case. What makes these probability distributions true are "chance set-up" situations. There has to be something that sets up the outcomes to be predictable by a probability distribution. If there is no chance set-up, there's no reason to believe that a probabilistic prediction will be true. Nancy Cartwright writes much better than I do about this aspect of probabilistic laws (1999). These chance set-ups, then, are the description of the world that physics brings. Probability distributions are not lacking in ontological commitments, since they require that the world be set up in this certain way. Maybe this is a less satisfying ontological claim than one about the existence or non-existence of something, but it undoubtedly is an ontological claim that physics makes.

So probabilistic (epistemic) theories are not necessarily non-real about quantum states. They are only non-real if they provide an alternate explanation for quantum effects compared to the interpretation of quantum mechanics which takes quantum states to be real entities. Spekkens argues that there is an alternate explanation that is better; we just don't know it yet. But he has no reason to argue that this is actually true. Remember, the 'Toy Theory' does not actually replace quantum mechanics in explanatory power. At best, it gives an account of what the new explanation would look like, but this is not enough to claim that interpretations that take quantum states to be real are incorrect.

In fact, Spekkens seems to overlook the possibility that quantum mechanical theorizing is in some ways epistemic and that the wave function is also a real, physically existing thing. This is the view I like best. In a sense, it's not quantum mechanics that has failed to understand some phenomena; really it is classical mechanics that has failed. Classical mechanics asks where a particle is, and the only answer available is a probability distribution, which quantum mechanics supplies. What is obviously a matter of partial belief (and probabilistic) is our knowledge of particle positions, so this is in a sense incontrovertibly epistemic. But, at the same time, what provides those probability distributions accurately? Quantum mechanics does.

Spekkens asks us to look for a third theory which can explain why our knowledge of particle positions always seems to be a matter of partial belief, and therefore probabilistic, but the answer is something more obvious than what he proposes. In fact, quantum mechanics is a theory which correctly describes the real world, and provides correct probability distributions for our lack of knowledge of the positions of particles.

Update: Upon reflection, I realize that this treatment of the dichotomy is not entirely fair to it. I think the point I made here is still valid; I just don't think the case is closed on this dichotomy. It's absolutely a false dichotomy; there is just more to say about it that is less dismissive. I'll write up another post soon...

Sunday, April 20, 2014

Spekkens' 'Toy Theory'

I just read the paper about Spekkens' 'Toy Theory' of quantum mechanical phenomena. I had heard about this paper from the interview with Fuchs, where the paper is cited as support for the quantum Bayesianism interpretation. I was not disappointed. This essay is awesome, and I think it is going to be seen as one of the most influential papers in recent history.

You should go read it here.

That's not to say that I think it is right in what it argues. Far from it: I think that the essay is fundamentally mistaken in many ways, though it is a smart and insightful fresh take on the subject. I think that there is much to learn from it, despite its being at heart incorrect.

The paper's main goal is to argue that quantum states are epistemic and not ontic. Essentially, this is a claim against the realist interpretation of quantum mechanics. 'Ontic' here means that the theory describes real objects in the world in a reliable way. The opposite of this is to claim that, though it enjoys predictive success, quantum mechanics at its base does not exactly describe the world. It is only a phenomenological theory, in a sense. It gives a scientist the means to predict how the world will behave, but it does so in a kind of incidental way. Phenomenological theories do not give causal accounts of why things behave the way they do, just a way to know what they will do. These types of theories are missing the causal mechanism that explains what happens.

Get it? Toy theory?
The alternative is epistemic, which is a version of non-realism that is plausible for quantum mechanics. Essentially, it claims that quantum states are not physical states, but are rather states of lack of information. Instead of encoding something about the world, the superposition of Schrödinger's cat just describes the information that we are lacking: the life or death state of the cat. Epistemic interpretations of quantum mechanics argue that quantum mechanics does not describe the real world, just a kind of lack of information about the world. Quantum Bayesianism is one of the more well known epistemic interpretations of quantum mechanics. 

This distinction between epistemic and ontic interpretations is a false dichotomy: it's not necessary that an interpretation fall into either of these categories. Furthermore, the dichotomy has a latent presupposition, but that is the subject of a later post.

Quantum states come in two different varieties: 'pure' and 'mixed' quantum states. A 'pure' q-state is essentially a quantum system described by a single state vector. 'Mixed' states are those in which a quantum system is described by a probability distribution over multiple state vectors. A common view among philosophers of science is that pure q-states are ontic, while mixed q-states are epistemic, since they are just states of incomplete knowledge about which pure state a system is in. Spekkens argues, however, that both pure and mixed q-states are epistemic.
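One standard way to see the pure/mixed difference is the purity Tr(ρ²) of the density matrix, which equals 1 for pure states and is less than 1 for mixed ones. Here's a quick numpy sketch (the particular states are my own illustrative choices, not from Spekkens' paper):

```python
import numpy as np

# A pure state: the single state vector |+> = (|0> + |1>)/sqrt(2),
# written as the density matrix rho = |+><+|.
plus = np.array([1, 1]) / np.sqrt(2)
rho_pure = np.outer(plus, plus.conj())

# A mixed state: a 50/50 classical mixture of |0> and |1>, i.e. a
# state of incomplete knowledge about which pure state the system is in.
rho_mixed = 0.5 * np.outer([1, 0], [1, 0]) + 0.5 * np.outer([0, 1], [0, 1])

# The purity Tr(rho^2) distinguishes them: 1 for pure, < 1 for mixed.
print(np.trace(rho_pure @ rho_pure).real)   # 1.0
print(np.trace(rho_mixed @ rho_mixed).real) # 0.5
```

The mixed state's purity of 0.5 is what makes the "incomplete knowledge" reading so natural for mixed states; Spekkens' move is to extend that reading to pure states too.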

He claims that the epistemic view of q-states is superior to the ontic view because certain phenomena (interference, noncommutativity, entanglement, no-cloning, teleportation, etc.) that are mysterious in the ontic view seem natural in the epistemic view.

The argument is only a mixed success. On the one hand, this toy theory is useful for understanding some aspects of quantum phenomena. I think it is useful precisely because there is something true to it.

But I don't think Spekkens' claim is, strictly speaking, a fully true one. While it can be used for insight, the toy theory does not replace quantum mechanics in predictive power. This is why his argument is misleading: the toy theory cannot account for the list of phenomena above. For example, you cannot apply the toy theory alone to the situations which give rise to quantum interference phenomena. Quantum mechanics is capable of making empirical predictions, but the toy theory simply piggy-backs on those predictions.

This means that the toy theory is not a replacement of quantum mechanics in any way. This should be a red flag: scientific endeavors aim for predictive power. The 'toy theory' may be a strong philosophical aside, but it clearly requires another step backwards in understanding before we can go forwards.

Monday, April 14, 2014

What is realism anyway?

I know that when I first started studying quantum mechanics and the topic of realism in it, I had to spend some time trying to figure out what the heck realism actually meant. It is a topic that is in some ways quite natural, but figuring out how anyone could genuinely disagree about it was confusing for a while. Here is what I learned about the distinction between realism and anti-realism, scientific or otherwise.

Of course, if the distinction between realism and anti-realism is to be worth anything, it has to actually make sense on a basic level and be a view that people actually could conceivably hold. Understanding the view of anti-realism can give insight into realism.

-------

The disagreement between these two views is about ontology; it's a disagreement about what exists. Sometimes people use realism to signify the view that certain things exist and that this constitutes an explanation for certain phenomena. This is wrong: realism does not by itself provide an explanation. Much more is needed for an actual explanation than just what exists in an experimental situation. Especially in physics: one could express this by saying that the math has to work out as well. This is a blunder committed by some advocates of the Many-worlds interpretation. The interpretation is presented as a fix for quantum weirdness simply by supplying the right ontology, but this is misguided. For the many-worlds interpretation to be true, it must work out in the details, not just its ontological commitments. 

Ontology, by itself, is not an explanation; theory is.

-------

That being said, ontology is a very robust way to divide and categorize different theories. Every theory admits of a certain "cast of characters" that it uses in theorizing. For example, classical mechanics has equations like "F=ma", which use the terms "force", "mass", and "acceleration". These terms can be used across different equations, and so long as the circumstances are the same, they can be used interchangeably in different theoretical equations. The "m" in "F=ma" and "p=mv" both stand for mass, and can refer to the same thing in the right circumstances.
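A trivial sketch of that shared vocabulary in code, with made-up numbers purely for illustration:

```python
# The same "character", mass m, appears in two different classical
# equations and refers to the same thing in both.
m = 2.0   # mass (kg)
a = 3.0   # acceleration (m/s^2)
v = 5.0   # velocity (m/s)

F = m * a  # Newton's second law, F = ma
p = m * v  # momentum, p = mv
print(F, p)  # 6.0 10.0
```

The point is just that "force", "mass", and "momentum" form a single interlocking vocabulary: the m in one equation is interchangeable with the m in the other.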


Since this "cast of characters" of abstract terms is what the theory uses to describe a given experimental setup, the terms form the vocabulary that we use to describe the world in that theory. Different theories may have some abstract terms in common (for example, Newtonian and quantum mechanics both use Hamiltonians), but they will still have abstract terms that do not overlap. This is a good way to show that two different forms of theorizing truly are parts of separate theories: their "casts of characters" are not the same.

Since different theories use different vocabularies to describe the world, this implies that they can admit different ontologies. That is to say, different theories describe different ontologies because they describe the world in different ways. The same can be said of philosophical interpretations of a physical theory. For this reason, ontology is a good topic to use to distinguish different interpretations of quantum mechanics.

This should be understood with two grains of salt, however: 1. ontology is not always the best way to distinguish interpretations, and 2. not all interpretations disagree on ontologies, yet I would be inclined to say they are different theories (for example, some interpretations disagree on empirically-verifiable grounds).

-------

So what would an opponent of realism have to say that would make sense? They'd say that while of course we have the impression of real persisting objects in the world around us, these impressions (or certain ones) are best explained in terms of some mechanism other than real existing objects.

For example, it's possible all of the impressions of physically existing things we perceive come from some extra-physical being called God, and no physical thing actually exists. (This is roughly a view that the famous philosopher Berkeley argued for).

Or maybe we are simply brains in vats, hooked up to some elaborate machinery that gives us the impressions that we are in the physical world? (This is probably a much more palatable version to contemporary readers.)

In both of these examples, our perceptions are not caused by what we would naturally think they are caused by. Instead, there is another explanation, and this explanation pictures reality differently. Anti-realism is a claim that what we perceive is best explained by an idea which does not admit of the existence of some thing.

-------

There's something else we can say about this alternate explanation. Not only is it asserted to be a better explanation, it is purported to be a better explanation of our impressions. If the anti-realist explanation is true, then it will bridge the gap between the impression as experienced and the explanation of that impression. This undermines, in a special way, the meaningfulness of the words we use to describe the reality being replaced. If the anti-realist explanation is true, we would be able to exactly define the meaning of the words we used to describe the now-replaced ontology, and we'd be able to define them exactly in terms of how we came to know them. The meanings of the words we used would be definable through their epistemology.

That is far from clear, so imagine this example: there is someone permanently plugged into a realistic simulation of the outside world who was aware that this was the case. This person wanted to study how this simulation worked (and let's assume that he/she could genuinely understand it). Let's call this interested mind the 'matrix scientist'. Presumably, this matrix scientist would be able to figure out, in ideal circumstances, what sorts of mechanisms in the simulation were in action when he/she observed something, even though he/she was never taken out of the realistic simulation.

So let's say that this matrix scientist was trying to understand his/her own sense of smell, and the matrix produces this sensation through two distinct programs. You could even imagine this is a hardware difference, where these two aspects of smell run on two physically separate machines. The important point is that there are two distinct causal processes for producing the scientist's sense of smell. Let's call them process A and process B.

The matrix scientist could study these different processes and learn their differences, their similarities, how they work together, etc. Even after learning all about them, however, that textbook knowledge he/she had would still not count, strictly speaking, as knowledge of the unreality of his/her sense of smell. After all, that knowledge has not yet been connected to his/her perception of smell. In order to do that, the matrix scientist would have to be able to describe his/her sense experiences of smell in terms of these processes. Suppose process A controlled the smell of crayons, and everything else was controlled by process B. The matrix scientist might at some point smell crayons, and because he/she knows that this is a result of process A, he/she could say "process A is occurring" just as easily as he/she might say "I smell crayons". The matrix scientist could go on to do this with all aspects of his/her senses, given enough information about the causal forces that give rise to his/her sensations. The end result of this is a language which can be used to describe all aspects of the reality of perception without using any sensory concepts like "smell".

What this elaborate example is meant to illustrate is that, in order for some natural notion of reality (like sensory reality) to be replaced with another notion of what is real, we have to have ways of describing our old reality in terms of the new.

-------

So at the end of the day, anti-realism is a commitment to the following:

1. There may be some realist explanation of our impressions, but the anti-realist thinks that there is a better explanation that does not use the disliked ontological concept.
2. Our sense impressions can be completely described in terms of this alternate causal process. 

Monday, January 13, 2014

Quantum-Intentional Interpretation

In my post on quantum mechanics and consciousness I mentioned a category of quantum interpretations that I call "quantum-intentional interpretations". I defined these as any interpretation that holds that consciousness can be a cause of physical quantum effects.

I want to revise this definition slightly, however. I was conflating two types of ideas. Exactly what aspect of consciousness is acting as a cause? There are two obvious ways of thinking of it:
  1. Passive: conscious perception causes physical change.
  2. Active: conscious decision of the way in which a quantum system is measured brings about a physical change.
(There may be more ways of thinking about this, but this seems exhaustive to me.)

Obviously, the second is more appropriately called the quantum-intentional interpretation, since it deals with the intentions of the quantum physicist. I want to use this as the new definition of the term. Interpretations that fall into the first of the above categories should be called quantum-observational interpretations.

There are a couple of reasons I accidentally conflated these two categories. The first is that my understanding of perception is that it needn't be understood as passive. That is to say, any passive description of perception could be phrased so as to be an active description. This is not an immediately obvious point, however. In addition, it's not clear whether or not this is actually true, much less whether or not I am justified in assuming it. Arguing this point would not be important for the discussion, so it is best to avoid implicitly assuming it.

There is a very good reason to make this distinction: one of these two interpretations is very easily refuted. It is easy to show that the quantum-intentional interpretation is plainly ridiculous.

The second reason is that the quantum-intentional interpretation seemed just so plainly false. But first, let's see what the most compelling argument in favor of it looks like.

Arguing in Favor of the Quantum-Intentional Interpretation

The way a quantum state is measured is specified by what is technically called the basis. When one measures a quantum state, one must measure it with respect to a basis.

A loose analogy would be to think of the coordinate grid you overlay on top of some two dimensional plane you want to measure. If you want to mathematically describe a coordinate direction, it must be described in terms of a coordinate system. This coordinate system can be changed at will, and there is established mathematics to describe the way in which the math will change as a result. 

Similar to how a coordinate overlay defines what coordinate directions one can use to describe a direction, the basis defines the types of measurements we can make on a system. Changing the basis one measures in changes the way a quantum state will be measured and what the outcome will be.

In fact, for every quantum superposition, there is a basis in which to measure it such that it does not behave like a superposition and there will be no probabilistic outcome.
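Here's a minimal numpy sketch of that claim for a single qubit (my own illustration, not from any particular paper): the same superposition gives a 50/50 outcome in one basis and a certain outcome in another.

```python
import numpy as np

# The superposition |+> = (|0> + |1>)/sqrt(2), and its partner |->.
plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

# Born rule: the probability of outcome |b> is |<b|psi>|^2.
z_probs = [abs(b @ plus) ** 2 for b in (np.array([1, 0]), np.array([0, 1]))]
x_probs = [abs(b @ plus) ** 2 for b in (plus, minus)]

# In the computational (Z) basis the outcome is probabilistic (50/50);
# in the X basis it is certain: the superposition "disappears".
print(z_probs)  # approximately [0.5, 0.5]
print(x_probs)  # approximately [1.0, 0.0]
```

The state itself is unchanged; only the choice of basis differs, and that choice alone decides whether the measurement looks probabilistic.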

So the question is: if we are able to affect the superposition simply by the way in which we choose to measure it, does that intentional decision change reality? It seems compelling to say yes. After all, a scientist could choose to measure something in one basis rather than another, and this decision has made a real change in the stuff itself.

The Quantum-Intentional Interpretation is False

So why is this so ridiculous? That's because it's a category error (you know, that same distinction that has made dualism so passé in contemporary philosophy). There are different kinds of causes, and intentions are the wrong kind of cause to truly act as the kind of explanations we want them to be.

First, a joke. You are touring a physics lab when you see an elaborate device with pieces all around the room. You are intrigued, so you ask what it does. The physicist giving your tour begins to explain each part of the device. Each step it goes through is more elaborate than the last, and it all seems to rely on mechanisms that are being studied elsewhere in the lab, but you can't figure out which part is specifically being studied. At the end of the physicist's explanation, you vaguely understand the way every part functions and how the parts all interact with each other, but you still cannot figure out what is being studied with this device. You ask: "what do you study with this device?" In reply, the physicist just laughs and says: "we don't study anything with this device, it's just an elaborate Rube Goldberg machine for making coffee."

The physicist has, in a sense, answered your question. He explained what the device did in terms of all of the physical workings involved in its functioning. The explanation that would have satisfied you, however, was an explanation about the intentions of those who built the device: the device is for making coffee. The joke, obviously, is that the physicist offered the physical explanation instead of the one that was most helpful. Anyone who has dealt with physicists, or similar creatures, can tell you that this is not an unrealistic situation.

When we give an explanation, we can give different kinds of causal explanations. Sometimes a certain kind of causal explanation is, strictly speaking, not an adequate explanation for a question, even if it is not a, strictly speaking, incorrect causal explanation.

When a scientist designs a device to measure some quantum state, of course the intentions of the scientist play a factor in how the device ends up behaving. But when a scientist is trying to understand some physical system, they are looking for a physical cause. If someone were to ask why a quantum state shows the behavior it does upon measurement, saying that a scientist intentionally designed the apparatus to measure in a certain basis would not be an adequate explanation, regardless of whether it is a true explanation in some sense. The physicist is looking for a physical cause, and intentions are not a physical cause.

So yes, the intentional choice of measurement basis does, in a sense, cause reality to change. However, that cause is not a physical cause, and would be an inadequate kind of explanation for a physicist studying it.

Friday, October 25, 2013

Is Length Meaningless on the Planck-Scale?

Growing up, I had always heard that the Planck length was physically interpreted as being something like "the length at which our concept of length breaks down".

Here I'll talk only about the Planck length and not the Planck time, though these ideas apply to it just as much.

The Common Interpretation

Here is what Wikipedia currently says about the physical interpretation of the Planck length:

[Snapshot of Wikipedia taken November 2nd]
The smallest possible length? How does that make any sense? It baffled me when I was younger. How could a length define when the notion of length no longer worked? When you measured something just larger than the Planck length, your measurements would work, but try and measure something smaller than that and it would no longer work? This doesn't make much sense as a hard-and-fast line. Supposing it was a blurry effect didn't help: how could the effect be blurry, but the number be so well-defined? Couldn't we measure the "blurriness", and wouldn't that be the most interesting part? Was length "pixelated"?

Something like this train of thought bothered me, and I think it highlights just what is so absurd about the typical physical interpretation of this concept. How could one define the point at which a particular concept no longer made sense to use, when the definition of that point used that concept? It's as if someone temporarily decreed that no one was allowed to use calendar measures of time anymore. When asked when the decree would no longer be in effect, they would look silly if they said "two weeks from now".

What I think

So that's why the common interpretation is rubbish. Utterly unhelpful. The better interpretation of the Planck length is that it refers to the scale at which many of our physical theories begin to break down. This is very different. We can, and do, continue to use the concept of length well beyond the limit of the Planck length. The notions of space, and of dividing that space into points, lines, and distances, are the mathematical basis for our physical theories. But while they are fundamental to the physical theory, they are not exactly "part of it", since they can be used meaningfully in the exact same way in many other contexts.

And that's exactly what we do. We use these concepts in quantum mechanics for different purposes, but they are nonetheless the same concepts. The concept of distance is unaltered at that scale, it's our classical theories that no longer work. And part of our classical theory is the notion that physical matter is made up of particles and those particles have locations. At the Planck scale, it doesn't really make sense anymore to ask where a particle is, since that is the scale at which their location begins to be indeterminate.

And that indeterminacy is definitely part of physics. When it's said that a particle has an indeterminate location, it doesn't mean that we simply cannot determine its location. It means that the location is not determinate. Its location is blurry. This is what is significant about the Planck scale: the concept of length is still just as meaningful; it's just that physical particles no longer have well-defined locations. That indeterminacy is well-defined, actually. It's worth pointing out that physical quantum waves are not indeterminate at that scale; they are perfectly well defined there.

Thursday, August 29, 2013

What I Think: Gleiser on the Theory of Everything

I read an interesting NPR Opinion Piece written by Marcelo Gleiser. Go read it. In it he argues against pursuing a so-called "Theory of Everything", or TOE for short. It makes several good points.

First off, he's right that a TOE would not actually be a theory of everything. Rather, it would be a theory of everything that is considered the most fundamental building blocks of the physical universe. This leaves out a lot of stuff, like nearly everything in day to day life.

Second, he's also right to link the desire for unity to Judeo-Christian monotheism. This may seem like an odd idea nowadays since religion and science are seen as opposing sides of a dichotomy (they aren't). Though it may seem counterintuitive, historically these ideas come from the same source. In addition, monotheism and the search for scientific unity get support in similar ways. Science is often thought of as becoming unified in the "mind of God".

Ceteris Paribus

But there is one point that he makes that I think is not quite right. He argues that the TOE doesn't make sense, but I would disagree with this. This idea comes from his argument that theories will always be incomplete.
"A physical theory can only be proved wrong, never right, at least in any permanent sense. This is because every theory is necessarily incomplete, always ready for updates as we learn more about the physical world. What we can say about Nature depends on how we measure it, the precision and reach of our instruments dictating how 'far' we can see. As a consequence, no theory that attempts to unify current knowledge can be seriously considered a 'final' theory or a TOE given that we can't ever be sure that we aren't missing a huge piece of evidence."
Essentially, he's arguing that a TOE is impossible because all theories will always be incomplete. That is to say, there will always be some relevant data missing if we try and use it to explain everything.

But this is just an uninteresting feature of induction itself. No inductive inference can ever be proven true in the exact sense of a deductive statement. Inferences can only be stronger or weaker, never "proven".

This is not what prevents us from having a TOE, however. If a theory is shown to be sufficiently strong, then we say it is true. This is what we mean when we say that classical mechanics or relativity is true. It's certainly not true in the deductive and exact sense, but that doesn't mean it isn't true in a different sense.

Is Gleiser committed to the idea that our greatest scientific theories are not true? That is to say, not true in any sense of the word? I would hope not, because the argument showing that a TOE cannot be true also proves that no theory whatsoever can be true. Regardless of whether or not this is the correct way to think of scientific theories (it's not), it clearly is not an idea in favor of his argument, since it would admit that a "true TOE" is no more or less sensible than the truth of any other scientific theory.

Outside influences may not integrate into theory

Gleiser's argument still allows the possibility of formulating a theory that could explain any sort of circumstance, had we enough data about that circumstance. And that is exactly what a TOE would supposedly do. It would allow us to explain anything we wanted, so long as we had enough information about it.

It is this idea about a TOE that is actually wrong. For a theory to work in the world there must not be any relevant outside influences acting on the system that is being studied. For physics, these are often things like thick shielding so as to prevent radiation or other similar influences. Making the theory work means making sure no influences like this could affect the system. Without these shielding conditions, our best theories could only come up with 'educated guesses' at best.

So how to make sense of these outside influences? Most who give it some thought might think that these outside influences could be understood within the theory if we knew enough about them.

Here's a mental picture: some scientists are studying fluid dynamics of a substance that is not well understood. If someone breaks the shielding conditions by reaching into an apparatus and waving their hand around, sufficiently disrupting the experiment, we might say that we could in principle factor their influence into the theory if we only knew the exact motions of their hand, etc.

This may be true, but it misidentifies the problem. The problem is not the fact that there is not enough information, but that the information we have is not the right kind. Presumably, we could learn many things about how this person waved their hand in the apparatus, like why they did it or the sensations they felt on their hand when they did it. None of these facts would be helpful, however. We would need to have the right kind of info: we would need to know the exact motion of their hand in the apparatus and the material properties of their hand, etc.

This is why a TOE is such an absurd notion: outside influences on a system may simply not be expressible in the theory's own terms.

This is not to say that a TOE is impossible. It may be possible, but it seems highly unlikely we will find one. More importantly, however: searching for such a vain and unlikely scientific theory is very likely to hurt our efforts to understand less "symmetric" or "beautiful" phenomena. Not every scientific discovery is "beautiful", but this does not make them insignificant by any stretch of the imagination. We should not spend our time seeking out these ideal theories when our attention is best put into doing what science does best: helping us to understand the world around us.


(It should be noted that this discussion is based heavily on the ideas found in Nancy Cartwright's book, The Dappled World.)

Wednesday, August 28, 2013

Survey of Quantum Interpretations

When I was writing my senior thesis about quantum mechanics I began to wonder which interpretations were most and least popular. My professor found this illuminating survey taken in July 2011 at a conference named Quantum Physics and the Nature of Reality. In attendance were 33 individuals, most of whom were affiliated with physics, while a small handful were associated with math or philosophy. The study is by Maximilian Schlosshauer, Johannes Kofler, and Anton Zeilinger.

The poll is interesting to sort through. It is by no means a completely representative sample of all thinkers in the whole world, but it is sizable and professional enough to be taken seriously. It has good-looking, informative graphics as well as a solid statistical analysis.

Here is the link. I highly recommend checking it out yourself.

  • Interestingly, they were highly divided over the nature and solution of the measurement problem, with support split nearly equally among the different answers.
  • A majority of experts optimistically predicted we would have a "working and useful" quantum computer in 10 to 25 years, and no one thought it would take longer than 50 years. Comparatively few thought it would happen within the next ten years.
  • Copenhagen is the most popular interpretation. I think that this is still an unclear result, though, since it is not always clear what the Copenhagen interpretation is. The second most popular were information-based interpretations, and I'm not entirely sure what those are. I would have guessed many-worlds would be second, but it was in a relatively close third.
  • Apparently most of them thought that personal philosophical prejudice played a large part in choice of interpretation. This isn't too surprising: when one is heavily informed on a topic and debates it with others, one is very likely to become frustrated with their views, no matter how informed they are. While some might take this to suggest that viewpoints are inherently nothing more than our preferences, I don't think that's the right conclusion. While that could be true, I think it suggests instead that smart people with strong opinions just have more opportunities to come up with excuses as to why others don't accept (their obviously true) views on some matter.
  • Very few people thought we would stop having conferences devoted to quantum foundations in the future; a great many thought we would keep having them. My guess is that we will, but they will be of a completely different nature. Instead of debating which interpretation is the "right answer", conferences in the future will be more of a "marketplace of ideas". My reasoning: the "right answer" is not obvious to the experts now, so it would definitely not be obvious to most people in the near future. Even if we discover the "right answer" in 50 years, that doesn't mean it will be commonly accepted or taken seriously. Future conferences will probably not continue to focus on sorting things out and will instead be about exploring cool ideas. I like the authors' take on this:
"Among the different interpretive camps, adherents of objective (physical) collapse theories were the only group to believe, in significant numbers, that in fifty years from now, there will likely be still conferences devoted to quantum foundations. So perhaps this reflects the fact that those who pursue collapse theories tend to view quantum theory as an essentially unsatisfactory and unfinished edifice requiring long-term modification and construction efforts. Vice versa, it may be a sign that those who regard such efforts as unnecessary or even misguided are optimistic that the remaining foundational problems, whatever they may be, will soon be resolved."
  • I find it interesting that most of the correlations found in the data simply reflect logical conclusions. Not too surprising, since the survey-takers are all thinkers by profession.
  • The largest consensus on a question was about quantum information, where a relatively large majority agreed that it "is a breath of fresh air for quantum foundations". I'm not entirely sure what quantum information is, so I'm surprised it has been such a big deal and I haven't heard much about it.