In Defense of D-Wave

The Optimizer has gotten tired of everyone asking him about D-Wave and gone and written a tirade about the subject. Like all of the Optimizer's stuff it's a fun read. But, and of course I'm about to get tomatoes thrown on me for saying this, I have to say that I disagree with Scott's assessment of the situation. (**Ducks** Mmm, tomato goo.) Further, while I agree that people should stop bothering Scott about D-Wave (I mean the dude's an assistant professor at an institution known for devouring these beasts for breakfast), I personally think the question of whether or not D-Wave will succeed is one of the most important and interesting questions in quantum computing. The fact that we interface with this black box of a company via press releases, an occasional paper, and blog posts at rose blog, for me, makes it all the funner! Plus my father was a lawyer, so if you can't argue the other side of the argument, well, you're not having any fun! So, in defense of D-Wave...

The Optimizer begins with a list of questions from the skeptic:

Skeptic: Let me see if I understand correctly. After three years, you still haven't demonstrated two-qubit entanglement in a superconducting device (as the group at Yale appears to have done recently)?

Um, well, actually, Optimizer, entanglement was demonstrated before the Yale group in a superconducting qubit device. In phase qubits, I believe the Martinis group created entanglement between two qubits in 2006 (see their Science paper, or if you want Bell inequality violations, this Nature paper.) As far as I know, no one has conclusively demonstrated entangled quantum states in flux qubits, which is what D-Wave is using (the transmon qubits at Yale are charge superconducting qubits, right?) Okay, so your facts are a little off, Optimizer! But of course the real reason you bring this up is because you know that (for pure states) without entanglement there will be no quantum speedup. Actually I think one has to be very careful here as well. For example, given Richard Jozsa's wonderful article on simulating non-entangled systems, it's not clear to me that these results can be used to rule out polynomial speedups for non-entangled states (update after Scott's comment below: damnit, here I meant to say slightly entangled states. The relevance being that for these states it may be difficult to detect their entanglement even though they are useful for quantum computing.) And of course the question for mixed states (a.k.a. the real world) is still open. So I would say that the "entanglement" question is not settled. And I might even argue that the reason you need to worry about this is in quantum computing's very history: it was well known that linear optics could not be used to quantum compute, but then, WHAM, KLM showed that if you had single photons and could detect single photons, you could build a quantum computer. Are we really that confident that quantum systems living somewhere just on the other side of entangled are not a useful resource? Of course my intuition is that for exponential speedups, yes, entanglement is necessary. But polynomial speedups?


You still haven't explained how your "quantum computer" demos actually exploit any quantum effects?

Please define "quantum effects." Also please read arXiv:0909.4321. That's an interesting paper, and while I agree that it doesn't demonstrate what I would call "quantum effects," it shows pretty clearly that the quantum description of what is going on in their flux qubits seems correct. And if you're going to build an adiabatic quantum computer, what you really care about is that you have well characterized your Hamiltonian and understand the physics of that system.


While some of your employees are authoring or coauthoring perfectly-reasonable papers on various QC topics, those papers still bear essentially zero relation to your marketing hype?

I hate the term "reasonable papers." Sorry. It sounds like the quantum computing gestapo to me. But beyond that, what hype are you talking about in press releases? Their news section has absolutely zero about their latest NIPS demo (which is apparently what set you off, Dr. Optimizer.) If anything, I think your beef has to be with the science journalists who are producing articles on the recent paper, or with Hartmut Neven, whose blog post on the Google Research blog has more meat to argue about (the last lines are classic.)


The academic physicists working on superconducting QC--who have no interest in being scooped--still pay almost no attention to you?

Argument by authority? Really?

So, what exactly has changed since the last ten iterations?

Actually, if you read the NIPS demo paper you would see that there is some interesting new stuff. In particular you would note that they believe they have 52 of their 128 "qubits" functioning. Independent of whether this thing quantum computes or represents a viable technology, getting 52 such flux qubits to operate in a controllable manner such that they can read out the ground state of the combinatorial problem at all is, in my opinion, an impressive feat. The fact that they thought they would be at 128 qubits about a year ago is also a warning to me that this shit is hard. Also the paper gives a nice list of the "problems" they are encountering. In particular they acknowledge here the difficulties arising due to finite temperature and to parameter variability. You'd also read that their classifier doesn't outperform the one they compare against on false positives (and the real issue with that paper is that comparison, plus the lack of any comparison of running times!) So yes, there is something new here, and yes, it is interesting, and yes, it still makes me skeptical of D-Wave's chances!

Why are we still talking?

Good question! I hope you will forgive me, Optimizer, for I have sinned.

Okay, well now that I've got that out of my system. Whew. The rest of the Optimizer's discussion of D-Wave rings partially true. Though his detour into criticizing their AQUA@home seems silly to me (who cares, really? Have you really met someone who makes the argument presented? Was he or she surrounded by Dorothy and the Tin Man?) Certainly the fact that they are working with Google doesn't convince me of much (sorry, Google.) But I will stand by my criticism that just naming "quantum coherence" and "entanglement witnesses" as the things that must be demonstrated for D-Wave to make an interesting device is wrong. Indeed, I'd probably argue that the reason quantum computing folks have made slow progress is that they themselves are hung up on this approach to building a quantum computer. It sure appeals to the scientist in everyone to validate every stage of everything you do, but technology development is different from science. For D-Wave, validating that their final and initial Hamiltonians are working as they think is important, but beyond that do they really care about whether they create entanglement? Of course, it's my own opinion that their system will fail (finite temperature and problems with parameter controls in the middle of the computation), but holding them to the quantum gate standards doesn't do it for me (though every time I see their slogan I choke... quantum computing company? How about quantum technology company, guys?) And the fact that we get our information second hand makes this whole argument rather academic: we don't really know what is going on behind the walls of their B.C. offices (cue conspiracy theories.)

Like my pa said: if you can't defend both sides, you're not having fun :) (Hey, I said tomato throwing, not watermelons or knives! **Ducks**)

Oh, and peoples, stop pestering Scott about D-Wave; he's just shown a major result in quantum computing and should be out celebrating (the jalapeño burger and beer are good. I think I owe him one next time I'm in Boston.)



Thanks for the review and links! I didn't realize that the first author (Hartmut Neven) was the person whose start-up had been acquired by Google in 2006 and who worked in connection with D-Wave after that. This makes the whole thing way more suspicious...

I agree with you that not enough work has been done on understanding the role of entanglement in mixed-state computation. However, I see your Richard Jozsa paper and raise you another, which indicates that entanglement may well still be important in mixed-state algorithms. The jury is still out.

Nevertheless, it seems to me that this issue is beside the point. As far as I am aware, D-Wave claim to be implementing the adiabatic algorithm for solving NP-complete problems. The theoretical version of this algorithm requires pure states. As far as I am aware, a mixed state version of this algorithm has not been analyzed so I have to admit that I am completely stumped as to what D-Wave are really up to.

Regarding adiabatic algorithms in general, I am a bit skeptical that they offer any real advantages over conventional approaches to NP-complete problems. There is a bit of a debate going on at the arXiv at the moment about this. To summarize, it seems that the algorithm requires exponential time on random instances, but the Farhi group hit back with a modified algorithm that might remove this difficulty. Again, the jury seems to be out, but personally I would be extremely surprised if you could get more than a polynomial (probably quadratic) improvement over classical methods in the general case.
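Since the adiabatic theorem ties the required runtime to the inverse square of the minimum spectral gap along the interpolation, the debate above largely comes down to how that gap closes with problem size. Here is a toy sketch (my own illustrative instance, not anything from the papers under discussion) of computing the gap along the standard interpolation H(s) = (1-s) H_B + s H_P:

```python
import numpy as np
from functools import reduce

# Toy adiabatic interpolation H(s) = (1 - s) * H_B + s * H_P on n = 3
# qubits. H_B is the usual transverse-field driver; H_P is a made-up
# diagonal cost function (Hamming distance to the string 101), chosen
# only so the ground state is unique and easy to check.

n = 3
I2, X = np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]])

def x_on(i):
    """Pauli X acting on qubit i of the n-qubit register."""
    return reduce(np.kron, [X if j == i else I2 for j in range(n)])

H_B = -sum(x_on(i) for i in range(n))
H_P = np.diag([float(bin(b ^ 0b101).count("1")) for b in range(2 ** n)])

# Track the gap between the two lowest levels along the path.
gaps = []
for s in np.linspace(0.0, 1.0, 201):
    evals = np.linalg.eigvalsh((1 - s) * H_B + s * H_P)
    gaps.append(evals[1] - evals[0])

min_gap = min(gaps)  # adiabatic runtime scales roughly as 1 / min_gap**2
```

For hard random instances the worry is precisely that `min_gap` shrinks exponentially with n; for this unstructured toy instance it stays order one.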

With all this uncertainty it seems hard to understand what the claim in the Google blog post to have achieved better results than classically possible means.

Addendum: I am recommending that we all immediately switch to Bing as our search engine of choice, since I think we should base our preference on which company has the better quantum computing research group. If we manage to make a small dip in Google usage then perhaps we can persuade them to pour some cash into quantum computing research. They would probably have to hire a Nobel laureate to counter Microsoft's Fields medalist. A small price to pay for all our quantumy eyeballs.

@Stas: not sure why Neven's company having been acquired by Google has much relevance. Far as I can tell the dude does some pretty amazing image recognition work (but that's not my field.)

Hi Dave,

I enjoy playing devil's-advocate too sometimes, but your devil's-arguments could be stronger! Reading your post, one finds you essentially agree with me about D-Wave's prospects---and while your grounds for skepticism are different, the biggest difference I can see is simply that you feel less rage.

To respond to some of your points:

1. A few months ago I attended a talk by Steve Girvin, which gave the strong impression that demonstrating 2-qubit entanglement in superconducting qubits was a new result -- I apologize if that impression was wrong. As you well know, I'm no expert on implementations, and don't have a horse in this race.

2. "Argument by authority? Really?" Of course D-Wave has been arguing by authority from the beginning ("but we're working with Harvard and Google!") -- and I'm tired of holding ourselves to a higher standard. :-) Seriously, I do think the nerd public deserves to know that most people with relevant expertise remain extremely skeptical of D-Wave's basic claims, and say in private most of what I say in public. Given how much joy blogging about this topic has brought into my life, I can certainly understand why more prudent QC researchers have chosen to remain silent. I still find their behavior cowardly, of course, and hope it changes.

3. Unlike (it seems) most people in both the pro-QC and anti-QC camps, I don't see the progress in experimental QC as particularly slow. But maybe that's because I never expected it to be "fast"! It took more than a hundred years from Babbage to the transistor---and the progress in quantum computing between 1995 and today compares very favorably to (say) the progress in classical computing between 1825 and 1840.

4. "it's not clear to me that these results can be used to rule out polynomial speedups for non-entangled states." For separable pure states, isn't it clear that you can simulate classically in linear time, by keeping track of the state of each qubit? What am I missing?
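For concreteness, here is a minimal sketch (mine, purely illustrative) of the linear-time bookkeeping being described: if the state stays a product state, you never store more than one 2-component vector per qubit, so 50 qubits cost 100 complex numbers rather than 2^50 amplitudes.

```python
import numpy as np

# If an n-qubit computation never entangles anything, the state is a
# product state and can be tracked as n independent 2-vectors: O(n)
# memory and O(n) time per gate layer, versus 2**n amplitudes in general.

n = 50  # storing a full 2**50-amplitude state vector would be hopeless
state = [np.array([1.0, 0.0], dtype=complex) for _ in range(n)]  # |00...0>

H = np.array([[1.0, 1.0], [1.0, -1.0]], dtype=complex) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

for q in range(n):            # a layer of single-qubit gates: O(n) work
    state[q] = X @ (H @ state[q])

# The amplitude of any computational basis string is a product of n scalars.
amp = np.prod([state[q][1] for q in range(n)])  # amplitude of |11...1>
```

The hard open question in the surrounding discussion is exactly what happens when the state is separable but *mixed*, where no such obvious per-qubit bookkeeping argument applies.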

For separable mixed states, of course, there's the longstanding open problem of whether one can get a speedup---I worked on that problem my first semester at Berkeley nine years ago! I personally see that as saying more about the limitations of our proof techniques than anything else (for all we know, the question could even have a different answer for qubits vs. qutrits, or for real amplitudes vs. complex ones!).

In any case, what seems relevant to the D-Wave discussion is that all known quantum algorithms require entanglement---I'm not aware of any evidence that one can get a speedup with separable mixed states. That casts some doubt on their idea of using separable mixed states to solve their customers' industrial optimization problems.

5. "It sure appeals to the scientist in everyone to validate every stage of everything you do, but technology development is different than science." I've heard this argument many times: it's not D-Wave's job to satisfy skeptics like you; their job is just to build a working device. (Which might be paraphrased as: "as long as the car we're building will work, who cares if it has no engine?") I can only respond that they has their job and I has mine. And part of my job, as I see it, is to speak truth to cringeworthy claims, in those situations where ignoring them would seem tantamount to assent.

6. Dave, I envy your sunny disposition, which is able to see D-Wave's "black box" nature as all part of the fun. I hope you can see my crankiness as part of the fun as well.

"'it's not clear to me that these results can be used to rule out polynomial speedups for non-entangled states.' For separable pure states, isn't it clear that you can simulate classically in linear time, by keeping track of the state of each qubit? What am I missing?"

Ah, crap, I meant to write about the "nearly separable states" results, which, to me, might be more closely related to the separable mixed state problem.

"In any case, what seems relevant to the D-Wave discussion is that all known quantum algorithms require entanglement---I'm not aware of any evidence that one can get a speedup with separable mixed states. That casts some doubt on their idea of using separable mixed states to solve their customers' industrial optimization problems."

I guess the point I'm trying to make is that neither side has any real case to stand on. Theorists can't say "mixed-state separable, so no quantum speedup," just as they can't say "we have a quantum speedup even though our system is mixed-state separable." Of course we both suspect the answer...

"I can only respond that they has their job and I has mine. And part of my job, as I see it, is to speak truth to cringeworthy claims, in those situations where ignoring them would seem tantamount to assent."

Indeed, it is a good job, a worthy job. But from my view I'd say D-Wave has gotten a lot better about this recently than in the past. I mean, their old CEO said some whoppers that would make the Burger King blush.

"6. Dave, I envy your sunny disposition, which is able to see D-Wave's "black box" nature as all part of the fun. I hope you can see my crankiness as part of the fun as well."

Life would be boring without something fun to stomp our feet about.

Oh, and about Babbage. Damn it, we shouldn't accept mediocre progress. We should be pushing ahead as fast as possible! We don't have to be in 1840. This is independent of the intellectual merit of quantum computing (which I also think is high.) This is a question of building a brand new technology that we know can be made, but to which we dedicate seriously too few resources. We need to dream big, and go big, damnit. If we constantly say "quantum computers will never be built in my lifetime," then of course quantum computers will never be built in our lifetime. If instead we say, damnit, we need to build one of these things quickly, then the worst that can happen is that we fail.

For me the correct time metaphor is that 1994 is equivalent to 1936 (Turing). It took about a dozen years from there to the invention of the transistor. We should be at the invention of the transistor now. We are behind.

"not sure why Neven's company having been acquired by Google has much relevance"

I guess his group still operates in "Neven Vision" mode without much interference/control from Google's management, but you didn't give any weight to the "fact that they are working with Google" anyway, so never mind...

Dave, I actually agree that D-Wave seems to have improved over the last couple years in certain respects. In particular, it's no longer that hard to imagine them stumbling into something profitable---presumably, something classical-optimization-related that had nothing to do with quantum---in spite of the multiple gaping holes in their stated business plan (which of course is my main concern). I was alluding to those improvements when I made the comment---which you unfortunately found too Gestapo-like---about many of the scientists they've hired doing perfectly reasonable things.

Thanks for the great post! There are many important questions here (and hopefully enough tomato goo to go around this holiday season). I agree that the question of what constitutes quantum evidence and/or evidence of entanglement is the big one here, and the path taken by D-Wave is interesting (and unsettling) precisely because this question is not so simple.

Note that spectroscopic evidence for entanglement in superconducting circuits was, I believe, first seen in Saclay in the 80s. While more recent experiments, as done at Santa Barbara and Yale, meet the gold standard for entanglement through quantum state tomography, it is possible that indirect quantum evidence such as spectroscopy and/or phase diagrams could be the only detectable signal of a system such as D-Wave's (and such experiments have been performed and published). Unfortunately, indirect evidence is, well, not direct.

Does this mean we should simply ignore such systems? While sticking with the gold standard is certainly safe (and Glenn Beck approved), it is not surprising that a semi-quantum-device-without-directly-verifiable-entanglement gets VC attention. And providing a theoretical framework to analyze the (indirectly inferred) entanglement of such a system seems like an interesting problem (e.g. the work of Vedral), independent of quantum computation. Without invoking such a framework, the hypothetical skeptic adds little to the noise.

So, ignoring the noise and the hype, I think there is plenty to enjoy in the D-Wave saga (personally, I think the testing of the coupling network alone is worth the effort, although I would want to know more about their properties at microwave frequencies).

@Scott: The two-qubit entanglement that we demonstrated at Yale was not the first entanglement result in superconducting qubits. UCSB and ETH both published earlier works. However, we did show substantially higher concurrence than anything done before, which combined with long coherence times allowed us run simple two-qubit algorithms.

@Geordie: You are right that there are earlier two-qubit experiments, such as the early demonstration of a CNOT gate by Nakamura with charge qubits in 2003. However, I am referring to experiments which performed full two-qubit state tomography, such as Steffen et al., allowing for a quantitative measure of entanglement.

The 2003 experiment is, yes, evidence of entanglement, but it's not the kind of evidence that would convince Scott (who I use as representative of the evil QC community) of anything :) (I don't want to disparage the result, as the experiment I'm sure was a tour de force, but by the standard of just revealing energy levels consistent with a certain structure, aren't most spectroscopy experiments "evidence of entanglement?")

Dave: No, not at all. But let's say the following hold: (a) you make the best QM model of a multi-qubit circuit you can; (b) you use this model to predict the allowed energies of the multi-qubit system; (c) you measure the energy levels (using, e.g., spectroscopy) and you find quantitative agreement with the predictions of the QM model; (d) the eigenstates of the QM model are entangled for some realized experimental parameters; (e) the temperature is less than the gap between ground and first excited states. This type of spectroscopy most definitely is evidence of entanglement.
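Criteria (a)-(d) can be made concrete with a toy model (the Hamiltonian form and parameter values here are my own illustration, not D-Wave's): write down a two-coupled-qubit model, predict the four energy levels spectroscopy would measure, and check whether the model's ground state is entangled.

```python
import numpy as np

# Two coupled qubits with transverse-field (tunneling) terms and a ZZ
# coupling. The eigenvalues are what spectroscopy would measure; the
# eigenvectors are what the model then lets you infer. Parameter values
# are illustrative only.

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

Delta1, Delta2, J = 1.0, 1.3, 0.8  # illustrative tunnelings and coupling
H = (-Delta1 * np.kron(X, I2) - Delta2 * np.kron(I2, X)
     + J * np.kron(Z, Z))

evals, evecs = np.linalg.eigh(H)   # evals: the predicted spectral lines

def entanglement_entropy(psi):
    """von Neumann entropy of one qubit's reduced state, in bits."""
    schmidt = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
    p = schmidt ** 2
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

ground = evecs[:, 0]
S = entanglement_entropy(ground)   # S > 0: the model's ground state is entangled
```

If the measured lines match `evals` quantitatively and `S > 0` in the fitted model (with the temperature below the first gap), you have exactly the indirect, model-mediated evidence of entanglement being discussed: you never measured the eigenvectors, only the energies.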

Hi Dave,

Great post! My earlier comment seems to have gotten lost, but I think this question of what constitutes evidence for entanglement (and/or quantum-ness) is very interesting and not at all clearcut. Spectroscopic evidence (or other indirect measures) may not meet the gold standard of state tomography, but for certain systems (such as the D-Wave design) it may be the best available. Note also that the first such evidence (in any superconducting circuit) I am aware of actually dates back to the 80s, in experiments performed at Saclay.

My question, though, is the following: is there really a skeptic-proof framework to analyze a possibly-quantum-device-without-directly-verifiable-entanglement? I think the answer is no, but perhaps I should be less skeptical.

It's evidence that your model is correct (and damn straight I would update my priors about the system exhibiting entanglement), but because you don't know anything about the eigenvectors from this sort of experiment, it doesn't "prove" that the system has entanglement. Evidence, yes. Demonstration, no. (Of course the experiment performed in 2003 in superconducting qubits is exactly the kind of step that needs to be made to progress at all in coupling the qubits, so for me it's more important for that than for demonstrating anything about entanglement. As you can see from above, I frankly care less about demonstrations of entanglement and more about whether the physics of the device is what you think it is... a.k.a. I trust physics.) But I also believe that the spectrum of helium gives evidence of entanglement, no? I mean, parahelium is a singlet, right? We build the model, look at the spectrum, confirm it is correct, cool the thing down, and wham, we've got "evidence" of entanglement. In this sense lots of evidence of entanglement has been demonstrated down through the ages.

Of course, I'll probably say that tomographic proof is slippery: it requires validation of measurements and because of that it's subject to loopholes. I guess I'd come down on the side that says to "prove" entanglement you've got to do a Bell experiment (and even then, all you've proven is that the system has no local hidden variable theory, which does not really show that you've got quantum entanglement.)
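As a reminder of what that gold standard buys you, here is a quick sketch (a standard textbook computation, nothing D-Wave-specific) of the singlet state violating the CHSH form of Bell's inequality: any local hidden variable theory is bounded by 2, while the singlet reaches 2*sqrt(2) at the optimal angles.

```python
import numpy as np

# CHSH with the two-qubit singlet. Measurement axes are taken in the
# X-Z plane, parameterized by an angle theta.

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def meas(theta):
    """Spin observable along the axis at angle theta in the X-Z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

def correlator(psi, a, b):
    """E(a, b) = <psi| meas(a) (x) meas(b) |psi>."""
    return float(psi @ np.kron(meas(a), meas(b)) @ psi)

singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)  # (|01> - |10>)/sqrt(2)

a, ap = 0.0, np.pi / 2            # Alice's two settings
b, bp = np.pi / 4, -np.pi / 4     # Bob's two settings

chsh = (correlator(singlet, a, b) + correlator(singlet, a, bp)
        + correlator(singlet, ap, b) - correlator(singlet, ap, bp))
# |chsh| = 2*sqrt(2) > 2: no local hidden variable model reproduces this
```

Even this, as noted above, only rules out local hidden variables; it is the strongest standard available, not a direct window onto "quantum entanglement" as such.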

One could waste a lot of time worrying about showing entanglement in a system. It seems better to validate the physics of the device on a coarser level and then stand back and see what happens when you try complex "quantum" experiments.

Apologies all I just noticed a few comments stuck in my spam filter. If you ever see your stuff stuck in the filter, please email me. Matt, I don't know why the heck it keeps flagging you. Scienceblogs apparently thinks you are a spambot :(

Dave: I can attest to significant time spent worrying about that slippery slope, and I think we agree on every step along the way (with your work on the communication cost of Bell inequalities being an important piece of the puzzle, in my opinion). However, helium was definitely a big deal (i.e. Born considered it a crucial test of quantum mechanics altogether). So, as far as evidence for entanglement goes, I think helium ranks above both aluminum and niobium (ignoring indistinguishability, control of subsystems, and all that).

"in spite of the multiple gaping holes in their stated business plan" -- speaking as someone who's taught PostDoc students with Doctorates in Business Administration, and who gets paid $110.00/hour to write business plans:

(1) The real value of creating a business plan is not in having the finished product in hand; rather, the value lies in the process of researching and thinking about our business in a systematic way. The act of planning helps us to think things through thoroughly, study and research if we are not sure of the facts, and look at our ideas critically. It takes time now, but avoids costly, perhaps disastrous, mistakes later. As I often say, it has an external use (to show investors), and an internal use (to keep us honest within ourselves).

(2) Given a choice between good managers with a poor plan, and poor managers with a good plan, the investors will pick the good managers every time---one can always hire MBA students to re-write the plan. But MBA students who know QM, those may be in short supply.

Over on Shtetl Optimized I have just posted a defense of D-Wave.

I began writing it as a humorous post ... but finished it as a serious post ... and in consequence, I find myself agreeing with Dave (Bacon) that D-Wave may well be onto something important.

This does *not* imply that Dave and I agree on all the technical details, but rather, that we both see merit in this work.

Congratulations, Geordie ... and happy holidays to all! :)

Thanks John... although any congratulations for anything we're doing rightfully belongs to the whole team. This is a unique group of people all of whom are playing a critical role in bringing this new technology into the world.

Hmmmm ... Rod, are there really only *three* options?

I'm old enough to remember a decade of enthusiasm for fluidic computers ... micro-fluidic devices whose computational state-space was bistable Navier-Stokes fluid flow. As computers, micro-fluidic devices found only niche applications, yet they played a vital role in catalyzing the development of what today are economically key disciplines like computational fluid dynamics and micro-fluidics.

Mightn't the science and engineering of quantum computers plausibly develop along the same lines as the science and engineering of fluidic computers?

The point being, that quantum factoring engines (for example) may conceivably arrive someday, but such engines require so many technological advances (relative to our present capabilities) that it is implausible (IMHO) that factoring engines will be the *first* transformational applications of QIT/QIP technology.

In which case, a viable "Option D" for AQUA and D-Wave would be to form a strategic alliance, focusing not only on long-term computational goals, but also on shorter-term (and easier) sensing and simulation goals.

There is an excellent 1989 essay by Alan C. Kay titled Inventing the Future that discusses these points.

"S V N Vishwanathan, Purdue University, Training Binary Classifiers using the Quantum Adiabatic Algorithm. The goal of this project is to harness the power of quantum algorithms in machine learning. The advantage of the new quantum methods will materialize even more once new adiabatic quantum processors become available."

Andreas, thank you for that link.

Does anyone know of a reference that discusses what happens to the adiabatic theorem when Hamiltonian dynamics is pulled-back from (exponentially large-dimension) Hilbert space onto (polynomially large-dimension) Kählerian state-spaces?

AFAICT (after a quick search) there is a mathematical literature relating to this question (Gromov-Witten invariants?)---for the physical reason that in both cases a symplectic structure is present that ensures the answer is interesting---however that literature is not (at present) developed along lines that obviously relate to the D-Wave/Google framework.

If there is anyone in the blogosphere who is fluent in all three languages (informatics, geometry, and practical engineering), perhaps they will provide a Rosetta Stone!

These considerations motivate what I said (at first in jest, then seriously) on Scott's Shtetl Optimized blog "No-one shall expel us from the mathematical paradise that D-Wave/Google has demonstrated for us!" :)

John, I read Kay's essay long ago...

I don't agree with many things about D-Wave, but in some ways they are doing the right things -- trying to figure out what it takes to build a real system, building a team and the supporting technologies.

As to how QC will develop, I'm fond of saying that, as a systems architect, I want to build one and put it in the hands of Torvalds, Knuth, and Lampson, and see what comes out. But saying that alone would be a copout; it's not possible to build a computer without *some* idea of how it will be used. And the most prominent -- and challenging -- application at the moment is factoring, though in practice it might not be the first economically viable application.

Rod says: The most prominent -- and challenging -- application [of quantum computing] at the moment is factoring, though in practice it might not be the first economically viable application

That is a true statement, but perhaps it is like saying (back in the 1900s) "The most prominent -- and challenging -- problem in aeronautics at the moment is more powerful dirigible lifting gas and more broadly, antigravity substances" ... which was true ... but missed the point.

The point being, that part of the significance of D-Wave/Google work, is that it reminds us that other narratives are possible. I am working on a lecture (tentatively) titled "The Mumpsimus and Sumpsimus of Quantum Systems Engineering", which takes this point of view.

The purpose of this lecture mainly is to use the wonderful words "mumpsimus" and "sumpsimus", which I encountered via Raymond Ayoub's (enjoyably scathing) AMS review (MR1540074) of Morris Kline's Mathematics: the Loss of Certainty.

@article{Ayoub:1982fk, Author = {Ayoub, Raymond G.}, Journal = {Amer. Math. Monthly}, Number = {9}, Pages = {715--717}, Title = {Reviews: {M}athematics: {T}he {L}oss of {C}ertainty}, Volume = {89}, Year = {1982}}

Does "decoherence" get involved in the concept of this D-wave system? (Almost sounds like that's where they got the name - ?) As some of you know (perhaps even too much!), I have a big gripe about "decoherence" and consider it a fallacious argument for avoiding collapse worries. (In a nutshell - you would still need to feed wavefunctions into a "collapse box" or they would still be superpositions, whatever their phase relations. I mean really, a "mixture" is about more than one instance. Can e.g. one photon really *be* a mixture and not a coherent superposition, once we get over not being sure of what the details are?) Soon I'm rolling out a counterexample that could actually be tested, not just arguments.
PS: search "quantum measurement paradox" and see who comes up in top five, albeit shifts around.

To my ultra-orthodox way of thinking... the D-Wave/Google experiment would be plenty exciting even if it were stipulated that the orthodox quantum mechanical postulates of (say) Nielsen and Chuang (ch. 2) are exactly true... and it were also stipulated that D-Wave's SQUIDs were so noisy as to be provably incapable of quantum computation by *any* error correction method (whether known or unknown). Because it would still be the case that the D-Wave/Google experiment is conducting an annealing process on a state-space that is larger-than-classical, yet smaller-than-Hilbert.

AFAICT, very little is known about the "landscape" of these state-spaces, and it is at least conceivable that annealing algorithms might work well upon them.

This would be exciting! Because when you think about it, this plausibly is the happiest of all possible outcomes for D-Wave, for Google, and for the QIT/QIP community. Not least because these intermediate-dimension state-spaces seem to offer the QIT/QIP community revitalizing frontiers for mathematical exploration and unification that are broadly comparable to---and pretty substantially overlap with---those of algebraic geometry.

Hi John, just FYI our SQUIDs are not that noisy. In terms of 1/f noise they are among the quietest superconducting devices ever built. The integrated spectral noise densities we measure when we do MRT experiments are also promisingly low. The projected T2 and T1 times for these devices are quite long. For numbers you can look in the experimental MRT PRL.

I think there are two main reasons for these positive results. One is that the fabrication process we brought up and use is unique in that it is modeled after the way semiconductor processes are run, and is in fact run within existing semiconductor fabs. This allows measurable quantities (like 1/f noise) to be characterized, tracked down, and removed or reduced systematically. Having a stable, high-quality, high-throughput fab allows us to hunt down and eliminate most sources of noise. (It would be great if this could also be applied to sources of noise on the internet... wishful thinking :) )

The second reason is that the environment in which the qubits sit is exquisitely engineered to be free of unwanted signals, magnetic, thermal or otherwise. Teams of exceptional professional engineers and scientists do nothing but design, build and test these environments. The resultant systems are extremely good and eliminate most external sources of noise from the chips. Of course this is the way all experimental quantum computation should be. It's just that we are fortunate enough to be able to afford these types of (necessary) resources.


Without responding to other issues here, I'm curious about your comments on how quiet your system is. Essentially every other group working in supercon qubits (IBM, NIST, MIT-LL) has shown that devices coming out of well-characterized "industrial" process flows show some of the worst performance in terms of coherence. The best devices are almost uniformly produced via the unscalable shadow-mask Al technique.

Why is your SFQ-derived JPL process flow different?

And seriously, do you think other groups don't have professionals who are good at shielding their devices from ambient field fluctuations? Come on.

Hi Mike, there are no other industrial processes for niobium; as far as I know, we're the only one. The process lives in a set of semiconductor foundries in the Bay Area and inherits all of its goodness from following best practices from the semiconductor fab industry. While JPL was involved in the beginning, our process is completely separate from the JPL process now. There are 1/f noise figures from about 6 months ago, published in PRB Rapid Communications by Trevor Lanting; you can compare them to the Al results of other groups (links are here). Noise is much lower now. There is a process overview here.

Re. magnetic shielding, our ambient field is below 1 nanotesla in all 3 dimensions over the entire processor (roughly 1 square cm). It took an extremely good team several years and a lot of resources to get to this level. I don't think anyone else has come close to this yet. If you know of any groups who can reach these levels I'd be very interested to speak with them. Do you?


I believe you are quite incorrect about Nb.

The best example is:

HYPRES has been around for decades, working in Nb. They develop SFQ/RSFQ, voltage standards, etc.

Nb is the material of choice for RSFQ and SFQ in part because it has a high Tc. It is widely used in superconducting sensors, circuits, etc. MIT-LL runs an industrial Nb process and has for years. In fact, they were among the earliest groups to show that simply transferring SFQ/RSFQ foundry processes into the QC realm would not work. There was even a government-funded foundry effort around 2000 that was a complete flop for this reason.

As far as magnetic shielding goes, 1nT is great but not likely unique. I'll let others respond about noise in their particular systems.

I'm sure you know Nb fab and many other technical topics quite well. However, when you state "there are no other industrial processes for niobium" you are incorrect as HYPRES has been around for quite some time.


I'm honestly confused by your replies. I know nothing about the quality of HYPRES devices vs those coming from DWave. That is immaterial.

You said, "there are no other industrial processes for niobium--as far as I know, we're the only one." That's simply incorrect. Are you at least willing to admit your mistake?

Given, then, that there are other industrial Nb foundries, the fact that you use a robust process flow is not particularly unique.

So would DWave be willing to fabricate a standard flux or phase qubit in your process, based on someone else's design, and allow the outside group to test the device, measuring T1, T2, etc?

If you define "industrial process" to be any process run by a company then yes there are other industrial superconducting fabs, including Hypres.

If you define "industrial process" to be one following best practices from the semiconductor industry then there is only one--ours. This is the definition I'm using.

Re. extracting T1 & T2--we're doing this already. You can see some preliminary work in this direction in arXiv:0909.4321 . Note however that these timescales are not the correct metric for qubit goodness in AQC (it's the integrated spectral noise density that matters, not decay times at some specific frequency).
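Since the thread keeps returning to "integrated spectral noise density" as the relevant metric for AQC, here is a minimal numerical sketch of what integrating a 1/f power spectral density over a measurement band means. All numbers here (the 1/f amplitude, the band edges) are invented for illustration and are not D-Wave's figures.

```python
import numpy as np

# Sketch: integrate a toy 1/f power spectral density S(f) = A / f over a
# band, rather than quoting T1/T2 at one specific frequency.
# A and the band edges below are hypothetical illustration values.

A = (1e-6) ** 2                 # hypothetical 1/f amplitude at 1 Hz
f = np.logspace(0, 9, 10000)    # 1 Hz .. 1 GHz, log-spaced grid
S = A / f                       # pure 1/f spectrum

# Trapezoid-rule integral of S over the band: the total noise power.
integrated = np.sum((S[1:] + S[:-1]) / 2 * np.diff(f))
rms = np.sqrt(integrated)       # RMS noise over the band
print(integrated, rms)

# For pure 1/f noise the integral is A * ln(f_max / f_min), so widening the
# band only grows the total noise logarithmically:
print(A * np.log(f[-1] / f[0]))
```

The closed-form check at the end is why 1/f noise is characterized by its integrated density over a stated band: the answer depends (logarithmically) on both band edges, not on any single frequency.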


Your comment is a bit silly. Hypres, NGC and other superconducting foundries use best practices largely derived from the semicon industry. The fact that they may or may not be the same as yours hardly disqualifies them as competing Nb process flows.

Information about the integrated noise spectral density can be obtained in a coherent experiment by performing multipulse CPMG and UDD (dynamical decoupling) measurements and fitting the resulting phase error as a function of total free-precession time. This gives a direct probe of the noise power spectral density using a technique that is widely accepted by the research community. See e.g. Nature 458, 996 (2009).

Do the community a favor - Perform a phase coherent spin-echo-style measurement of coherence, extract the noise power spectral density, show that it matches with your alternative measurement styles and put the issue to rest.

Prove everyone wrong - I wish you great success in this endeavor!
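The filter-function idea behind CPMG noise spectroscopy can be illustrated numerically. The sketch below (toy 1/f spectrum, invented timescales, nothing from anyone's actual analysis code) computes the squared Fourier transform |Y(ω)|² of the toggling function that flips sign at each π-pulse; the dephasing exponent is an integral of the noise PSD weighted by this filter, so measuring decay versus pulse number and total time probes S(ω).

```python
import numpy as np

# CPMG filter-function sketch: the toggling function y(t) flips sign at
# each pi-pulse; |Y(w)|^2 = |integral y(t) exp(i w t) dt|^2 weights the
# noise PSD in the dephasing exponent. More pulses push the filter's
# passband to higher frequency, suppressing low-frequency (1/f) noise.

def cpmg_filter(omega, T, n_pulses):
    """|Y(omega)|^2 for n equally spaced pi-pulses over [0, T] (CPMG timing)."""
    # Standard CPMG pulse times: (k - 1/2) * T / n for k = 1..n.
    edges = np.concatenate(([0.0], (np.arange(n_pulses) + 0.5) * T / n_pulses, [T]))
    signs = (-1.0) ** np.arange(n_pulses + 1)
    Y = np.zeros_like(omega, dtype=complex)
    for s, t0, t1 in zip(signs, edges[:-1], edges[1:]):
        # closed-form integral of s * exp(i w t) over the segment [t0, t1]
        Y += s * (np.exp(1j * omega * t1) - np.exp(1j * omega * t0)) / (1j * omega)
    return np.abs(Y) ** 2

w = np.logspace(-2, 3, 2000)   # angular frequencies (avoid w = 0)
T = 1.0                        # total free-precession time (arbitrary units)
f1 = cpmg_filter(w, T, 1)      # spin echo
f16 = cpmg_filter(w, T, 16)    # 16-pulse CPMG

# Weight a toy 1/f PSD by each filter and compare the dephasing integrals.
S = 1.0 / w
chi1 = np.sum((S * f1)[:-1] * np.diff(w))
chi16 = np.sum((S * f16)[:-1] * np.diff(w))
print(chi1 > chi16)  # True: decoupling suppresses low-frequency noise
```

Repeating such measurements for a range of pulse numbers and fitting the decays is exactly how the Nature 458, 996 (2009) style of noise spectroscopy reconstructs S(ω) from coherence data.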

"In fact, they were among the earliest groups to show that simply transferring SFQ/RSFQ foundry processes into the QC realm would not work. There was even a government-funded foundry effort around 2000 that was a complete flop for this reason."

It flopped for cultural, not technological, reasons, and the champion of that particular effort in NGST was instrumental in shaping D-Wave into what it is now (and bringing up our foundry capabilities along the lines of what should have happened there)!

"Hypres, NGC and other superconducting foundries use best practices largely derived from the semicon industry."

"Use" and "try to use" are two different things! NGC no longer has SC fab capability, but at least 4 people affiliated with the former NGST SCEO effort now work for D-Wave.


"We do a better job" is a poor counterargument to the notion that competitors do exist and do employ best practices from the semicon industry. Maybe DWave's fab is better than these others (not supported by evidence, but possible), but that's a very different argument than your original, which was that DWave has the only industrial Nb process.

Further, given that HYPRES builds Josephson voltage standards used by NIST, I'd argue that their process is at least reasonably well controlled.

I'd just like to add one thing - my previous comments were not to suggest that Nb is fundamentally flawed. MIT-LL has reported a coherence time in a Nb flux qubit 10x longer than other reported results.

That took major modifications from a standard SFQish process flow. Maybe DWave has made similar advances. A nice test using community-standard measurement techniques could finally settle this issue...