Commenter Michael J. Biercuk asks about D-Wave's machine:
What is the fundamental experimental test which would demonstrate the system is not simply undergoing a classical, incoherent process?
Of course there are answers to this question that involve technically fairly challenging experiments (proving that a quantum computer is actually quantum computing is something many experimentalists have struggled with, for far smaller systems than D-Wave's). But there is a much simpler experiment which I haven't seen addressed in any of the press on D-Wave, and which, for the life of me, I don't understand why it hasn't been done and publicized.
This question is simply: does your system show any speedup over a classically equivalent procedure? The experimental running of D-Wave's system goes something like the following: set the couplings so that the ground state is easy to prepare. Cool the system down. Drag the couplings from this easily prepared ground state to the setup whose ground state encodes the instance of the computational problem one wishes to solve. This procedure takes a certain amount of time and succeeds with a certain probability. Actually there are probably three times here: cooling time, adiabatic evolution time, and measurement time. Let's throw out that last one and just talk about the cooling time and the adiabatic evolution time. One should easily be able to produce a plot of the probability of successfully solving the problem as a function of those times.
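To make the quantum half of this concrete, here is a toy numerical sketch — nothing here is D-Wave's actual hardware or schedule; the 3-spin Ising instance and the linear interpolation are made-up illustrative choices. It anneals from a transverse-field driver to a diagonal Ising Hamiltonian and records the probability of landing in the problem's ground state as a function of the total evolution time:

```python
import numpy as np
from scipy.linalg import expm

# Made-up 3-spin Ising instance (illustration only; values chosen to have
# a unique classical ground state)
n = 3
h = np.array([0.9, -0.4, 0.7])
J = {(0, 1): 0.8, (1, 2): -0.6, (0, 2): 0.5}

def spin(z, i):
    """Spin of qubit i in computational basis state z (+1 for bit 0)."""
    return 1 - 2 * ((z >> i) & 1)

# Diagonal problem Hamiltonian: classical Ising energies on the diagonal
dim = 2 ** n
energies = np.array([
    sum(h[i] * spin(z, i) for i in range(n))
    + sum(Jij * spin(z, i) * spin(z, j) for (i, j), Jij in J.items())
    for z in range(dim)
])
Hp = np.diag(energies)
ground = int(np.argmin(energies))

# Transverse-field driver H_d = -sum_i X_i, whose ground state (the uniform
# superposition) is easy to prepare
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)
Hd = np.zeros((dim, dim))
for i in range(n):
    term = np.array([[1.0]])
    for k in range(n):            # qubit 0 is the least significant bit
        term = np.kron(X if k == i else I2, term)
    Hd -= term

def success_probability(T, dt=0.05):
    """Linear-schedule anneal H(s) = (1-s) H_d + s H_p over total time T;
    returns the probability of reading out the problem ground state."""
    psi = np.ones(dim, dtype=complex) / np.sqrt(dim)  # ground state of Hd
    steps = int(round(T / dt))
    for k in range(steps):
        s = (k + 0.5) * dt / T    # schedule value at the step midpoint
        H = (1 - s) * Hd + s * Hp
        psi = expm(-1j * dt * H) @ psi
    return abs(psi[ground]) ** 2

p_fast = success_probability(0.5)   # near-sudden quench
p_slow = success_probability(50.0)  # slow, hopefully adiabatic, anneal
print(f"p(T=0.5) = {p_fast:.3f}, p(T=50) = {p_slow:.3f}")
```

Sweeping the total time T traces out exactly the kind of success-probability-versus-time curve described above: a near-sudden quench leaves you close to the uniform 1/8 guess, while a slow anneal pushes the success probability toward one.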
Now if you want to say that the above procedure has any advantage over classical systems, you might think you should compare it to the best classical algorithms running on our best classical computers. But what I'm interested in is something even more basic. Suppose you carry out a procedure where you start the system hot, with the couplings set so that the ground state is the solution to the instance of the computational problem you are trying to solve, and then you cool the system down. What does the probability of successfully solving the problem look like as a function of this cooling time? And, of course, the most interesting question is whether there is any sum of times for the first experiment versus the second, "classical," experiment for which the probability of succeeding is greater for the possibly quantum setup.
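This "classical" protocol — problem couplings in place from the start, spins beginning hot and then cooled — can also be mocked up numerically. Again only a sketch, with a made-up 3-spin instance and single-spin-flip Metropolis dynamics standing in for the physical thermalization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up 3-spin Ising instance (illustration only; values chosen to have
# a unique classical ground state)
h = np.array([0.9, -0.4, 0.7])
J = {(0, 1): 0.8, (1, 2): -0.6, (0, 2): 0.5}
n = len(h)

def energy(s):
    """Classical Ising energy E(s) = sum_i h_i s_i + sum_ij J_ij s_i s_j."""
    return h @ s + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

# Exact ground state by brute-force enumeration (fine for tiny n)
configs = [np.array([1 - 2 * ((z >> i) & 1) for i in range(n)])
           for z in range(2 ** n)]
ground = min(configs, key=energy)

def anneal_once(sweeps, T_hot=5.0, T_cold=0.05):
    """One run of the 'classical' protocol: couplings fixed from the start,
    spins begin hot (random) and are cooled over `sweeps` Metropolis sweeps
    on a geometric temperature schedule."""
    s = rng.choice([-1, 1], size=n)
    for T in np.geomspace(T_hot, T_cold, sweeps):
        for i in range(n):
            flip = s.copy()
            flip[i] *= -1
            dE = energy(flip) - energy(s)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s = flip
    return np.array_equal(s, ground)

def success_probability(sweeps, trials=300):
    """Empirical probability of ending in the ground state."""
    return sum(anneal_once(sweeps) for _ in range(trials)) / trials

for sweeps in (1, 5, 50):
    print(f"sweeps={sweeps:3d}: P(ground) ≈ {success_probability(sweeps):.2f}")
```

Plotting success probability against the cooling time (number of sweeps) gives the classical curve to lay alongside the quantum anneal-time curve; the question above is whether any quantum total time beats this classical one at equal success probability.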
I mean, this seems like a rather simple setup, and I can't see how much information about their system could leak out to us poor academics (who would, of course, steal it and use it to build our own quantum computer) by performing this experiment, so why hasn't it been done? Or maybe it has been done, but the results aren't so flattering...
I imagine the D-Wave folks will answer to the effect that their machine hasn't been able to scale to the point where a speedup would be noticeable. Not being a QC theorist I'm uncertain of this, but I'd imagine that you'd have to reach a certain qubit count before something like Shor's factoring algorithm would have an advantage over, say, sieve-based factoring.
I'm sorry, I believe I've completely misapprehended your post. That is what you get with three hours of sleep.
"I'm sorry, I believe I've completely misapprehended your post."
That doesn't mean it's not still a good guess as to what the D-Wave folks will say...
Doesn't this suffer from the usual difficulty in doing algorithm comparison experiments, namely that you can't tell whether the experimenter has "efficiently" implemented the comparison classical algorithm? (I'm taking these as actual experiments you'd do now, so I don't imagine you'd get enough data points at large problem sizes to empirically estimate asymptotic leading coefficients and distinguish them that way.) I suppose you could require them to completely specify their classical experiment without giving away details about the "quantum" setup, and see if you can beat their classical experiment's efficiency.
I would guess that the first post is likely the correct explanation. They've only built 16- and 28-qubit machines so far, so if the Hamiltonians in both can be morphed faster than is experimentally resolvable, then they wouldn't be able to tell what the maximum morphing speed (or minimum annealing time that still gets the right answer, or whatever) is.
I think that a question I read on Scott's blog suggests a lovely little experiment that would show this. All you've got to do is monitor the performance as you heat the computer up. If there's a transition to a slower regime, then that's fairly strong evidence that something quantum was occurring.
That could be useful so long as said temperature is below T_c for the superconducting SQUIDs. If you drive them normal, then the system really shouldn't work.
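As a purely classical baseline for the heating experiment suggested above, here is a hedged sketch — again a made-up 3-spin instance, with fixed-temperature Metropolis dynamics standing in for the device — of success probability as a function of operating temperature. In this classical toy model the curve degrades smoothly as the machine heats up; a kink or sharp transition in the real device's curve would be the interesting signal:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up 3-spin Ising instance (illustration only)
h = np.array([0.9, -0.4, 0.7])
J = {(0, 1): 0.8, (1, 2): -0.6, (0, 2): 0.5}
n = len(h)

def energy(s):
    return h @ s + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

configs = [np.array([1 - 2 * ((z >> i) & 1) for i in range(n)])
           for z in range(2 ** n)]
ground = min(configs, key=energy)

def success_at_temperature(T, sweeps=100, trials=200):
    """Run the whole 'computation' at one fixed operating temperature T
    (Metropolis dynamics) and report how often the read-out is the
    ground state."""
    hits = 0
    for _ in range(trials):
        s = rng.choice([-1, 1], size=n)
        for _ in range(sweeps):
            for i in range(n):
                flip = s.copy()
                flip[i] *= -1
                dE = energy(flip) - energy(s)
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    s = flip
        hits += np.array_equal(s, ground)
    return hits / trials

results = {T: success_at_temperature(T) for T in (0.2, 0.5, 1.0, 2.0, 4.0)}
for T, p in results.items():
    print(f"T={T:.1f}: P(ground) ≈ {p:.2f}")
```

At low temperature the read-out is nearly always the ground state; at high temperature it approaches the Boltzmann-weighted guess. Nothing in this classical stand-in says anything about where a real quantum-to-classical transition would sit relative to the SQUIDs' T_c.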
These suggestions all make sense, but seem a bit generous to me. Could variable classical coupling mechanisms, as in QCA, give results different from pure thermal annealing? What about engineering a "control" system in which one knows that all couplings and interactions are purely classical? SQUID devices with tunable couplings but vanishingly small avoided crossings, perhaps, such that the diabatic states as a function of flux frustration approximate the ground states to 10^-6 or something?