Quantum Computation Complexity: BQP

What started me on this whole complexity theory series was a question in
the comments about the difference between quantum computers and classical
computers. Taking the broadest possible view, in theory, a quantum computer
is a kind of non-deterministic machine - so in the best possible case,
a quantum machine can solve NP-complete problems in polynomial time. The set
of things computable on a quantum machine is not different from the set of
things computable on a classical machine - but the things that are tractable (solvable in a reasonable amount of time) on a quantum
computer may be different.

The current proposed designs for quantum computers fall well short of being general non-deterministic machines - so the set of problems that are
tractable on a quantum automaton (QA) may not be as large as the set of problems tractable on a general non-deterministic automaton (NDA).

To try to understand what we can do with a QA, there's a complexity class
BQP - bounded-error quantum polynomial time - to represent the things
computable in polynomial time using a quantum computer. BQP is basically the quantum equivalent of P for conventional computers - except that, as is the nature of the quantum machine, its programs are always probabilistic.

To really understand what BQP means, we need to take a moment to look at what a quantum computer is, in contrast to a conventional computer. The canonical example of a conventional computer for the purposes of computation theory is the Turing machine. To make the contrast between the quantum automaton and the Turing machine clearer, we'll use a restricted definition of the Turing machine called a binary Turing machine. (The binary Turing machine is equivalent in power to the general Turing machine; in fact, some people use this restricted definition as the canonical definition of a TM.)

A binary Turing machine consists of:

  1. A tape, which is an unbounded sequence of numbered cells,
    each of which may contain either a one or a zero.
  2. A tape head, which is positioned over some tape cell at any
    point in time, and in any computation step, can read the value of a cell, write a new value into the cell, and move one step in either direction.
  3. A state consisting of a finite set of binary registers r1, ..., rn.
  4. A state transition function λ : {0,1}^n × {0,1} → {0,1}^n × {0,1} × {left, right}, which takes the current register values and the symbol under the tape head as input, and produces new register values, a symbol to be written to the tape cell, and a direction to move the head.

A computation starts with some input written onto the tape, with the tape head positioned at cell zero. Each step, it invokes the transition function,
and updates the state, tape cell value, and head position based on the result. It continues to do this until it enters the state where all registers contain the value 0, at which point it halts. When the machine halts, the result of the computation is the value on the tape.
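To make the model concrete, here's a minimal sketch of a binary Turing machine in Python. The representation and the example transition table are my own invented illustrations of the definition above, not anything from a standard library:

```python
from collections import defaultdict

def run_binary_tm(delta, tape_input, max_steps=10_000):
    """Run a binary Turing machine.

    delta maps (registers, symbol) -> (new_registers, new_symbol, move),
    where registers is a tuple of 0/1 values, symbol is 0 or 1, and move
    is -1 (left) or +1 (right). The machine halts when every register is 0.
    """
    tape = defaultdict(int, enumerate(tape_input))  # unbounded tape of 0/1 cells
    head = 0
    regs = (1,)                                     # start in a non-halting state
    for _ in range(max_steps):
        if all(r == 0 for r in regs):               # all-zero state: halt
            break
        regs, tape[head], move = delta[(regs, tape[head])]
        head += move
    return [tape[i] for i in range(min(tape), max(tape) + 1)]

# Invented example program: zero out the leading run of 1s, halting at
# the first 0 past it.
delta = {
    ((1,), 1): ((1,), 0, +1),   # rewrite 1 -> 0, keep scanning right
    ((1,), 0): ((0,), 0, +1),   # hit a 0: enter the all-zero halt state
}
print(run_binary_tm(delta, [1, 1, 0, 1]))   # -> [0, 0, 0, 1]
```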

A basic quantum machine is very similar - it also has a tape, a register
set, and a transition function - but they're all probabilistic. Instead of containing a specific value of one or zero, a quantum register or tape cell contains a probability function describing its quantum state - effectively specifying the probability of seeing a one when the value of that register or cell is observed. And instead of specifying a transition from one fixed state to another, the transition function specifies how the probability functions for the registers and tape cells vary.

The result is something similar to BPP, but not quite the same. In BPP, you get to flip a coin to choose a path - but once you've chosen a path, you're committed to it: if it turns out to be a dead end, there's no way to get back the time you spent on it. In contrast, on a quantum machine, everything happens in a sort of probability haze: until the end of the computation, when you observe the result, the machine isn't committed to a single path. It doesn't have to set the value of a register to zero; it can effectively just change the probability function associated with that register, and if a computation path is a dead end, its contribution will simply fade out of the quantum state of the register.
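To make that contrast concrete, here's a small sketch in Python with numpy - my own illustration, not real quantum-programming API code. A classical random bit that goes through a 50/50 mixing step twice is still 50/50, but a qubit pushed through the analogous quantum mixing step (a Hadamard gate) twice interferes back to a definite zero - the contributions of the paths leading to one cancel out:

```python
import numpy as np

# Classical randomized step: a stochastic matrix that mixes a bit 50/50.
C = np.array([[0.5, 0.5],
              [0.5, 0.5]])

# Quantum step: the Hadamard gate, a unitary acting on amplitudes.
H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)

p = np.array([1.0, 0.0])   # classical probability distribution: definitely 0
a = np.array([1.0, 0.0])   # quantum amplitudes: definitely |0>

p = C @ C @ p              # two classical mixing steps
a = H @ H @ a              # two quantum mixing steps

print("classical P(1):", p[1])           # 0.5 - the randomness accumulates
print("quantum   P(1):", abs(a[1])**2)   # 0.0 - the paths to |1> cancel
```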

But it's not all good news for the quantum machine. Unfortunately, here's where the direct correspondence between the quantum machine and the classical Turing machine breaks down. In general, a quantum machine is effectively linear: you don't get to do something like a conditional in a quantum computation. At least in the technologies we're looking at so far, a quantum machine executes a linear sequence of instructions, so you can't branch; if you want to explore multiple paths, you need to encode the paths together in the probability functions. So effectively, while each quantum transition does get to alter the state and the tape cell, the machine does not have a single fixed transition function: each transition has a specific successor, and no matter what happens to the tape and the registers, the computation has to go through a specific sequence of transitions. So instead of a single transition function λ, the quantum machine has a sequence of functions {λ1,...,λn}, which must all be used in sequence during the computation. So, effectively, you can make small local branches - that is, choices within the duration of a single transition function - but you can't make choices the way you could with a non-quantum machine.
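Here's that restriction as a numpy sketch, again my own illustration with arbitrarily chosen gates rather than the post's formal model: the program is a fixed list of transitions {λ1, λ2, λ3}, and every one of them is applied, in order, no matter what the intermediate state looks like - there is no conditional that could pick a different next transition:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # mixing step
X = np.array([[0, 1], [1, 0]])                 # bit flip

# The whole program is fixed in advance: λ1, λ2, λ3.
program = [H, X, H]

state = np.array([1.0, 0.0])   # start in |0>
for step in program:           # every λi runs, unconditionally and in order;
    state = step @ state       # no branch can skip or reorder a transition

print("P(0), P(1):", np.abs(state) ** 2)   # ≈ [1, 0]
```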

So what's the difference between BPP and BQP? BQP allows the limited exploration of multiple paths, instead of requiring an
a priori commitment to one of them.

In terms of the complexity hierarchy, P ⊆ BPP ⊆ BQP ⊆ PP. So BQP is at least as powerful as BPP, and possibly better - but it's bounded from above by PP. And in fact, PP is sufficiently different from BQP that being able to get the solution to a BQP problem in constant time doesn't grant any additional capability to PP. (In fact, according to Wikipedia, being able to solve BQP problems in zero time doesn't give any additional capability to PP; but I haven't seen the proof of that.) So from what we know, BQP is very close to BPP; if P ≠ NP (as many of us expect), then BQP may be slightly larger than BPP, but no larger than PP.

Comments

"n the best possible case, a quantum machine can solve NP-complete problems in polynomial time." => At best, it could actually do anything in NP in polynomial time, right? There's no need for the complete part, that just makes things easier.

Fun reading on the same topic: Computational Complexity and the Anthropic Principle by Scott Aaronson.

So the question remains: can quantum computers solve the NP-complete problems in polynomial time? Well, today we have strong evidence that the answer is no. Of course we can't prove that there's no fast quantum algorithm — we can't even prove that there's no fast classical algorithm! But what we can say is this. Suppose you threw away the structure of an NP-complete problem, and just considered a space (or landscape) of 2^n possible solutions, together with a "black box" that tells you whether a given solution is correct or not. Then how many steps would it take to find a solution, supposing you could query the black box in quantum superposition? The answer turns out to be about 2^(n/2). So, you can do quadratically better than a classical algorithm — but only quadratically better. If n is (say) 100,000, then instead of 2^100,000 steps, you now need "merely" 2^50,000 steps. Whup-de-doo!
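The 2^(n/2) query count Aaronson quotes is Grover's algorithm, and for a tiny search space you can watch it happen. Here's a minimal numpy sketch - my own illustration, with an arbitrarily chosen marked item - where roughly (π/4)·√N queries of the black box make the single correct solution come out with near certainty:

```python
import numpy as np

n = 4                    # "qubits": a search space of N = 2^n candidates
N = 2 ** n
marked = 11              # index of the one correct solution (arbitrary)

state = np.full(N, 1 / np.sqrt(N))         # uniform superposition to start

iterations = int(np.pi / 4 * np.sqrt(N))   # ~ 2^(n/2) black-box queries
for _ in range(iterations):
    state[marked] *= -1                    # oracle: flip the marked amplitude
    state = 2 * state.mean() - state       # diffusion: invert about the mean

print(f"{iterations} queries, P(marked) = {state[marked]**2:.3f}")
# -> 3 queries, P(marked) = 0.961
```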

Even if we do take the structure of NP-complete problems into account when designing quantum algorithms, the outlook is still pretty bleak. We do have a quantum analog of local search, called the quantum adiabatic algorithm. And in some situations where a classical local search algorithm would get trapped at a local minimum, the adiabatic algorithm is able to "tunnel" through potential barriers and thereby reach a global minimum. But only in some situations! In other situations, the adiabatic algorithm does almost as poorly as classical local search.

These findings suggest a more general question: is there any physical means to solve NP-complete problems in polynomial time? Maybe by going beyond quantum computers and exploiting closed timelike curves, or hypothetical nonlinearities in the Schrödinger equation, or quantum gravity effects that only become important at the Planck scale or near a black hole singularity? Well, it's good to keep an open mind about these things. But my own opinion is that eventually, the hardness of NP-complete problems will be seen the same way as (say) the Second Law of Thermodynamics or the impossibility of superluminal signalling are seen today. In other words: these are overarching meta-principles that are satisfied by every physical theory we know. Any of them might be falsified tomorrow, but it seems manifestly more useful to assume they're true, and then see what the consequences are for other issues!

Also this:

A beautiful result from ten years ago, due to Adleman, DeMarrais, and Huang, says that BQP, the class of problems solvable efficiently on a quantum computer, is contained in PP. The reason why this is so can be explained to a physicist in three words: "Feynman path integral." The probability that a quantum computer outputs "yes" can be expressed as an exponentially-large sum over all possible paths — some of which contribute positive amplitude to the final answer, others of which contribute negative amplitude. But an exponentially-large sum is exactly the sort of thing that can be calculated in PP. From my perspective, Richard Feynman won the Nobel Prize in physics essentially for showing that BQP is contained in PP.
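Here's a toy version of that path-integral picture as a numpy sketch (mine, not Aaronson's): the amplitude of a one-qubit circuit of k Hadamard gates, computed as an explicit sum over all 2^(k-1) intermediate paths, some contributing positive terms and some negative:

```python
import numpy as np
from itertools import product

# One-qubit Hadamard gate; entry H[b, a] is the amplitude of the step a -> b.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def amplitude_by_paths(k, start, end):
    """The amplitude <end| H^k |start>, summed explicitly over every
    assignment of intermediate 0/1 values -- a toy Feynman path sum."""
    total = 0.0
    for mids in product([0, 1], repeat=k - 1):
        path = (start, *mids, end)
        amp = 1.0
        for a, b in zip(path, path[1:]):
            amp *= H[b, a]    # multiply the step amplitudes along this path
        total += amp          # some paths add, others subtract
    return total

# Two Hadamards take |0> back to |0>: the two paths to |1> cancel exactly.
print(amplitude_by_paths(2, 0, 0))   # ≈ 1.0  (+1/2 and +1/2)
print(amplitude_by_paths(2, 0, 1))   # ≈ 0.0  (+1/2 and -1/2)
```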

Nice!

My mentor and coauthor Richard P. Feynman was not trying to analyze quantum computing when he invented Feynman integrals. Chronologically, he FIRST invented QED, and then, years later, became the great-grandfather of quantum computing.

Dr. George Hockney, formerly at Fermilab, and then researching quantum computing at JPL, contends that Feynman's specific approach to quantum computing has proved to be wrong, but that doesn't matter, as Deutsch and other grandfathers of quantum computing began their research because of Feynman's pioneering.

In the same way, Feynman is the great-grandfather of nanotechnology. It doesn't matter if the approach he describes in "There's Plenty of Room at the Bottom" works; what matters is that about half of the grandfathers of nanotechnology began their research because of Feynman's pioneering.

Dan:

The reason I said NP-complete was because I thought it was clearer than something like "problems in NP for which we don't know polynomial time solutions." NP-complete is a set of hard problems for which we don't know P-time solutions, and which most people who read this article are familiar with. So it was just an easy way of being a tad more concise.

Isn't binary Turing machine usually called "Post Machine"? Also, the phrase "Unfortunately, here's where the direct correspondence between the quantum machine and the classical turing machine." seems to miss a verb at the end.

Jonathan Vos Post said, in part:

Feynman was not trying to analyze quantum computing when he invented Feynman integrals. Chronologically, he FIRST invented QED, and then, years later, became the great-grandfather of quantum computing.

I think we're all tolerably familiar with the chronology of QED (or at least those of us who know enough about it to even parse Aaronson's statement). Hence the joke Aaronson makes: "From my perspective, ..."

But wait, there's more!

This sentence needs to be corrected:

So instead of a single transition function &lamba;, the quantum machine has a sequence of functions {&lamba;1,...,&lamba;n}, which must all be used in sequence during the computation.

Somebody needs to multiply all those &lamba;s through by d.

Somebody needs to multiply all those &lamba;s through by d.

You're assuming that d commutes with a;. Actually, you need to multiply by (a;)^(-1) d a;.

(Did I mention I was a math nerd?)

nikita:

The Post machine is slightly different. The binary machine is sometimes called a Post-Turing machine, but I prefer binary Turing, because to me, "Post-Turing" sounds like we're talking about some kind of post-modern model of computation. :-)

Thanks for the corrections, I've fixed the ones you pointed out.

Blake:

You know what I really hate? When people tell the kind of jokes that make me laugh, but when I tell them to anyone else, I just get a confused blank stare. :-)

a quantum computer is a kind of non-deterministic machine - so in the best possible case, a quantum machine can solve NP-complete problems in polynomial time. The set of things computable on a quantum machine is not different from the set of things computable on a classical machine

Doesn't this assume C-T? I still don't see any proof that C-T really holds beyond that we haven't yet found anything that goes beyond it, and definitely no proof that all quantum algorithms are implementable on a classical Turing machine.

John:

Yes, this assumes that the Church-Turing thesis is true. But pretty much all of the work in computational complexity, and most of the work in theoretical computer science is predicated on Church-Turing, so we don't tend to qualify every statement with "Assuming C-T is true"; instead, we specifically state when we aren't assuming C-T.

Also, simply assuming that CT isn't true in and of itself doesn't tell us all that much.

We still have to come up with models to describe what we're doing, and most of them are, well ... uninteresting, IMO.

By Michael Ralston on 16 Jan 2007

MarkCC, thanks so much! This is incredibly helpful and clears up a bunch of stuff I've been trying to work out for a long time.

I do have a couple unrelated random questions/comments:

1. You actually might want to cut&paste some chunk of this article into the Wikipedia article on Quantum Turing Machine, as that article is like five lines long and gives no specifics.

2. I'm trying to understand clearly this idea of the quantum machine you describe being "linear". While it seems clear what this would mean in the context of the real-world qubit-based systems we know how to build, I'm wondering what it means in the context of the quantum Turing machine model you outline here. As I understand the model you've described, the tape cells each have a probability of being 1 or 0, and the operations on cells change the probability distributions of those cells. But what about the tape head? That is, is there a tape head, and is it allowed to be "in more than one place at once" probabilistically? Say, if one entry in the state transition function says "go left 50% of the time, right 50% of the time", then when this executes wouldn't the hypothetical path where the head moves left and the hypothetical path where the head moves right be roughly analogous to different code branches? Or are the symbols addressed in some other way than a tape head, or is the head not allowed to "smear" the way the cells can, or what? Or is the idea that when you say the execution is "linear", you just mean that branching effectively ceases to have any effect, since whenever you branch all the branches have to be taken at once?

What exactly is the type of the quantum Turing machine's analogue of the state transition function?

3. On a totally different note, in relation to the tangent about Feynman: When I was in the bookstore the other day I ran across a book called "The Feynman Lectures on Computation". Does anyone know about this book? If anybody's heard of it, is it worth reading, and would it still be informative or useful to someone who already has a working grasp on the theory of computation? Is the information on quantum computing still kind of relevant despite being from 1984?

4. Post Machine would be a great name for a band.

Coin:

The tape head in the quantum Turing machine has a fixed position. Binary values - like the value in a tape cell, or a register bit - are in quantum flux, and described by probability functions. The tape head is not - that should probably have been made clearer in the post.
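One hypothetical way to write down the type of a single transition in this model (my formulation, not Mark's): the head position and its move stay classical, while the joint state of the n registers plus the current cell is a vector of amplitudes over all of their 2^(n+1) classical configurations:

```python
from typing import Callable, Tuple
import numpy as np

# Amplitudes over the 2^(n+1) classical configurations of n registers
# plus the cell under the head; shape (2**(n+1),), complex entries.
Amplitudes = np.ndarray

# One transition λi: takes the current amplitudes and the (classical) head
# position, and returns updated amplitudes plus the new head position.
QuantumStep = Callable[[Amplitudes, int], Tuple[Amplitudes, int]]
```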

Okay, CT is a default assumption, but it was formulated well before quantum computation ever came on the scene. I don't like unproven assumptions (the perils of being a mathematician), and CT has never really sat well with me. I'm actually hoping that CT turns out to be true for classical computation, and that quantum computation will go beyond it.

Re: Quantum Computing beyond CT Thesis.

The Church-Turing Thesis is about computability, not efficiency. There is no quantum model that we have that does not fall under the same conditions as classical computing in regard to Church-Turing. That is, if we can compute something on any quantum computer we know about, we could also compute it on a classical computer, given enough resources. If you can come up with a NEW quantum computing model that cannot be emulated (even in beyond-exponential time) on a regular Turing machine, then you've got it made and people will remember your name for a long time. So far, no one has found one.

What this means is that whether the Church-Turing Thesis is true or not doesn't affect quantum computing any more than classical computing - since we can simulate quantum computers (as we know them now) on classical computers. If CT is false, then there must be a totally new model of computing that we cannot map to either current-model quantum computers or current-model classical computers.

The Feynman Lectures on Computation are an interesting read, despite hailing from 1984. He lays out finite state machines, introduces Turing machines, builds a few of them and then constructs a Universal Turing Machine before covering the halting problem. If you're familiar with all of that already, the more valuable part of the book would be the stuff relating information to thermodynamics. He has a very good section on Maxwell's Demon, for example.

His take on quantum computing was, roughly speaking, "Can we still do computations when our switches are the size of atoms?" Feynman develops a theory which indicates that we could do the same sort of computations on the quantum scale that we do on the macro-scale. Being pre-Shor's Algorithm and all that jazz, he doesn't take particular advantage of the new capabilities quantum computing has to offer.

All in all, I'd say that whether the book is worth buying depends on what you already have in your library.