A Quantum Bogosity

Okay, well apparently the paper arXiv:0804.3076 which I mentioned in the last post is being picked up by other bloggers (see here and here as well as here) as a legitimate criticism of quantum computing. So before any more people jump on this bandwagon, let me explain exactly what is wrong with this paper.

THE PAPER DOESN'T USE FAULT TOLERANT CIRCUITS.

Hm, did you get that? Yeah, a paper which claims that

We will show, however, that if even a small amount of imprecision is present in every gate, then all qubits in every code block will be affected, and more importantly the error in any given qubit propagates to all other qubits.

does this by working with error correcting code circuits which are not fault-tolerant. This sort of paper would have been fine in, oh, say, 1995, when quantum error correction was just being invented, but it completely and totally misses all the further development which was pretty much squarely aimed at exactly this problem. D'oh. Great big d'oh.

For those of you who don't know what I'm talking about, let me explain. Quantum error correction is a procedure for protecting information encoded across multiple subsystems from independent errors on those subsystems. Or at least, this is how it works in its most basic form. A good way to think about it is in a communication setting. If you want to send some quantum information from Seattle to New York, but there is someone in the middle (evil flyover staters) who corrupts this information, then you won't get good fidelity in transmitting your quantum information. But if the corruption in this communication isn't totally debilitating, a few tricks let you use such a setup to send quantum information robustly. You do this by encoding the quantum information into a quantum error correcting code. Then you send each qubit of the code across the country, and after an appropriate error correction and decoding procedure the original quantum state can be reconstructed with higher fidelity (it's less destroyed). Indeed, one can make the quantum information basically as protected as one wants by using more and more qubits, with a scaling that is actually really good.
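
To make this concrete, here is a minimal sketch (mine, not from the post or the paper) of the simplest example: the three-qubit bit-flip repetition code under independent bit-flip errors, decoded by majority vote. The error rates in it are just illustrative numbers.

# A toy simulation of the 3-qubit bit-flip repetition code under independent
# X errors, decoded by majority vote. The rates p are illustrative.
import random

def logical_error_rate(p, trials=100_000):
    """Estimate the probability that majority-vote decoding fails."""
    failures = 0
    for _ in range(trials):
        flips = [random.random() < p for _ in range(3)]  # independent bit flips
        if sum(flips) >= 2:                              # majority vote decodes wrongly
            failures += 1
    return failures / trials

if __name__ == "__main__":
    for p in (0.01, 0.05, 0.10):
        est = logical_error_rate(p)
        exact = 3 * p**2 * (1 - p) + p**3                # two or three flips
        print(f"p = {p:.2f}: encoded ~{est:.4f} (exact {exact:.4f}) vs bare {p:.2f}")

The encoded failure rate scales like 3p^2 for small p, so for any p below one half the encoding already helps; genuine quantum codes like the seven-qubit Steane code do the analogous thing for phase errors as well.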

Now what does this have to do with quantum computing? Well, in quantum computing we need to use quantum error correction to protect our quantum information from environmental disturbances, but we also need to make sure that our procedures themselves don't introduce disturbances that destroy the quantum information. In other words, you need to develop protocols which are fault-tolerant for all the things you need to do quantum computation: state preparation, quantum gates, quantum error recovery routines, and measurement. And of course in doing this you need to worry about all of the components you use in these protocols failing. Like, for instance, the gates in your error recovery routine. Doing this was the work of a lot of bright people, who came up with a series of fault-tolerant protocols which ensure that quantum computing can be done even when there are errors not just induced by an environment, but also errors in all the things you use to construct your quantum computer. If these error rates are small enough, then using these protocols allows one to perform quantum computation to a desired accuracy with an overhead which is polylogarithmic in one over the error in the computation. This result is known as the threshold theorem for quantum computation and is one of the biggest results in quantum computing.
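
To get a feel for where that polylogarithmic overhead comes from, here is a rough bookkeeping sketch assuming the standard recursion e_j <= C e_{j-1}^2 for a concatenated seven-qubit code. The constant C and the physical error rate below are illustrative assumptions, not the actual threshold values.

# Back-of-the-envelope threshold bookkeeping: concatenating a distance-3 code
# gives e_j <= (C*e_0)**(2**j) / C, so the number of levels needed for a target
# logical error rate grows very slowly, and the qubit overhead 7**levels
# (ignoring ancillas) is polylogarithmic in 1/target. C and e_0 are assumptions.
C = 1.0e2     # assumed constant, i.e. an assumed threshold of 1/C = 1e-2 per gate
e0 = 1.0e-3   # assumed physical error rate per gate, below the assumed threshold

def levels_needed(target):
    """Smallest concatenation level j with (C*e0)**(2**j)/C <= target."""
    j = 0
    while (C * e0) ** (2 ** j) / C > target:
        j += 1
    return j

for gates in (1e6, 1e9, 1e12):
    target = 0.01 / gates    # want the whole circuit to fail less than 1% of the time
    j = levels_needed(target)
    print(f"{gates:.0e} gates: {j} levels, ~{7**j} physical qubits per logical qubit")

The doubling of the exponent at each level is what keeps the overhead so mild: in this toy model, going from a million-gate to a trillion-gate computation costs only one extra level of concatenation.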

So what the above paper does is ignore absolutely everything about fault-tolerant quantum computation (note that the only references after 1996 are to papers on efficiently simulating small classes of quantum computations). Indeed, they basically rediscover why all that hard work was done in the first place. I suppose this could be forgiven, except for the fact that the basic ideas of fault-tolerant quantum computation are well known enough and important enough to appear in the standard text on the subject, "Quantum Computation and Quantum Information" by Nielsen and Chuang.

On the other hand the fact that more and more cryptographers latch on to every single paper that claims that quantum computers are bunk makes me think we've finally gotten their attention. Now if only we could teach them some quantum information science...

I posted my comment on the D-Wave blog, but D-Wave removed it, so I think it is constructive to post it here (because Opera remembered it):

“Whah? No entanglement in any of those other experiments in, say, ion traps or ions entangled with photons, or superconducting circuits (of a non-D-Wave kind)?”
Information about ion traps, quantum dots, and superconducting circuits is so mixed and unclear that I don't take it as evidence (I tend more to think that the computation there is probabilistic and that is why it works…). Anyway, the precision is not enough and the entanglement is too weak/small, and thus any quantum algorithm (with, say, 2 qubits) works without any convincing speedup….

“Oh, and I wonder if debunker is “possible”: http://scienceblogs.com/pontiff/2008/03/shor_calculations.php
Somebody is saying that factoring an n = 4096 bit number needs c*n qubits of space and n^3 time http://blog.jfitzsimons.org/?p=165 ; with quantum error correction you need millions of qubits, and with a 1 MHz frequency the QC would need at minimum about a month (I don't know what the constant on the n^3 time is).

“But for mixed states, it is not known whether entanglement is necessary, and certainly not all parties agree.”
Current experiments (with NMR etc.) show that “quantum computers” with mixed states and without entanglement aren't faster than a probabilistic computer. I don't know where this optimistic expectation about mixed-state “QC” comes from.

I have not kept up with the field for the last few years, but quantum error correction, when I last saw it, added orders of magnitude more complexity (in the number of qubits required) to the circuits.

debunker: The slow part of Shor's algorithm is also the slow part of the RSA cryptosystem. As it's not unreasonable to expect quantum computer architectures to have gates of a similar speed to classical computers (NMR aside), you would be able to factor at roughly the same speed as you can encrypt a message with the key.

"Information about ion traps, quantum dots, superconducting circuits is so mixed and unclear, that I dont taking it like evidence"

Dude, to mangle a Radiohead song: just because you don't understand it doesn't mean it's not true. And really, it is not mixed or unclear at all. A basic physics background is all you need to understand the ion trap experiments. Look at me, I understand them, and I'm in a computer science department.

debunker: there are only a limited number of cases where efficient classical simulation of quantum algorithms is known: Gottesman-Knill circuits, matchgates (neither of which is universal for classical computation), and pure-state systems with no entanglement. But no one knows if there are efficient simulations of mixed states which have no entanglement. This is a longstanding open problem.

Dave: Not knowing much about QEC I was puzzled by this paper when it came out, for basically the same reasons you cite (i.e. I thought the whole point of quantum error correction was to correct this type of error). You, being someone who does know something about QEC, might offer some more substantive criticisms. You say that the authors do not use a fault-tolerant error correction circuit, but they claim to use Calderbank-Shor-Steane encoding. So you have contradicted them and claimed that the developers of QEC are correct, but you haven't offered anything that would help us, who are less endowed with QEC knowledge, understand the pitfall the authors fell into. Bummer, this means I will have to actually learn something about QEC, rather than being blessed with the answer from the Pontiff.

By schwagpad on 30 Apr 2008

Not sure if this is off topic or way too on topic:

I once convinced myself that error correcting codes will work in the non-Markovian regime after reading the same papers enough times, but I am not yet convinced that the concatenated structures should work for computation as well.

Concatenated codes are based on a renormalization of error rates. What happens when you want to get the computation part right? The usual recipe tells you that [in the circuit model] you need to use a concatenated version of the gates, but if you want to do it in the Hamiltonian picture [as in arXiv:quant-ph/0510231], where I suppose it will involve going to some interaction picture that involves the gates, you run into trouble. I am thinking that the argument that took you from e to e^2 for error rates will now take you from e to eH, where H has to do with the gate: no improvement through concatenation. However, even if my guess is right, that doesn't mean a dead end, just that one must be more careful in designing computation circuits as opposed to the memory part.

Hey schwagpad

The problem is that they don't use fault-tolerant circuits. In order to get the fault-tolerant quantum computing construction to work, you have to use circuits of a particular form. If you look at the figures in their paper, for example, you can see that they are explicitly not fault-tolerant constructions.

Basically this is a category error. Quantum error correction is a subset of fault-tolerant quantum computing. You need one to carry out the other. What the paper says is that if you just use quantum error correction then you don't handle gate errors. This is certainly true. And it is the reason that you use fault-tolerant circuits to achieve fault-tolerant quantum computing.
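
To see concretely why the structure of the circuit matters, here is a toy Pauli-tracking sketch (mine, not a circuit from the paper). Measuring an X-type check like X1 X2 X3 X4 with a single bare ancilla uses CNOTs with the ancilla as control, and then a single X fault on the ancilla partway through gets copied onto every later data qubit.

# Toy Pauli tracking: a CNOT copies an X error on its control onto its target.
def propagate_x(error, cnots):
    """Track which qubits carry an X error through a list of (control, target) CNOTs."""
    err = set(error)
    for c, t in cnots:
        if c in err:     # X on the control propagates to the target
            err ^= {t}
    return err

data = [1, 2, 3, 4]
ancilla = 0
circuit = [(ancilla, q) for q in data]   # bare, non-fault-tolerant extraction of X1 X2 X3 X4

# Suppose a single X fault hits the ancilla right after the first CNOT:
remaining = circuit[1:]
print(propagate_x({ancilla}, remaining))   # {0, 2, 3, 4}: one fault became three data errors

Fault-tolerant gadgets (Shor-style cat-state ancillas, Steane-style encoded ancillas, and related constructions) are designed precisely so that a single fault cannot blow up into a multi-qubit data error like this.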

There is this ion trap quantum computer paper (NSA or Army) http://iontrap.physics.lsa.umich.edu/publications/archive/PRA_72_050306… , in which I don't get the logic of the Grover algorithm:
"There are several approaches to gauging the performance
of the algorithm implementation. One method is to compare
the algorithm’s success at recovering the marked state with
the best that can be achieved classically. The classical counterpart is a simple shell game: suppose a marble is hidden under one of four shells, and after a single query the location of the marble is guessed. Under these conditions, the best classical approach gives an average probability of success P=1/4+(3/4)*(1/3)=0.50, because 1/4 of the time the query will give the correct location of the marble while 3/4 of the time a guess must be made among the three remaining choices each with 1/3 probability of choosing the correct location. If Grover’s algorithm is used, the answer to the single query would result in a 100% success rate at “guessing” the marble’s location.
As can be seen in Fig. 3a the marked state is recovered
with an averaged probability over the four markings of
60.2 %, surpassing the classical limit of 50%.".
I don't understand that shell probability... To me it only seems clear that you need two guessing tries to determine it with 50% probability (probably I understand the combinatorics badly). For the Grover algorithm, I believe this task is not of practical interest...
What gives 60% instead of 50%? Entanglement? A "quantum advantage" with a non-pure state (and without entanglement), like in NMR?
Anyway, even if the 60.2% is obtained thanks to entanglement, it still loses on gate count, because a probabilistic computer needs far fewer gates. And so no advantage at all! I think the NMR "advantages" can be similarly debunked...

"But no one knows if there are efficient simulations of mixed states which have no entanglement. This is a longstanding open problem."
Nobody can simulate the brain efficiently either, but that doesn't mean the brain is much more computationally powerful than a computer...

Quantum dots have very big problems with precision, measuring a state with probability of only about 60%. About superconductors I have only heard that somebody made a CNOT gate, but I was unable to find such a paper; I also haven't seen a quantum dot CNOT gate paper, unlike for optical quantum computers (with entanglement).

Dave, this is a complete mis-characterization of the paper. Before I start the rebuttal, I'll add the disclaimer that I am the second author of the paper, and would be more than happy to clarify this work to anyone.

We are absolutely well aware of the threshold theorem and we even cite one of Preskill's post-1996 papers that directly discusses the threshold result of Knill and Laflamme. The problem with the threshold theorem for QECCs is that it only applies to errors that are *orthogonal* to the code basis. This is the whole point of all of the QECCs I've seen based on codewords, and it is explained quite clearly in Preskill's paper.

It seems that you didn't read the first three formulas at the top of page 4 of our paper or the description of the error model we use on pages 11 and 12. We model gate imprecision, which adds a random epsilon of rotation error to each gate (in lay terms you add a small random epsilon error value to the sin/cos parameters of the matrix representation of the operators). Forgetting simulations for the moment, what happens with the gate imprecision model is that two types of error are created, and only one of these is corrected by a QECC. Indeed, some non-zero amplitudes that are associated with invalid codeword states will manifest when applying an imprecise gate. These errors will be caught by a QECC since this portion of the state is orthogonal to the codeword basis. However, another error occurs which over- or under-rotates the part of the state associated with valid codewords. This is exactly what is shown in the first three formulas on page 4 of our paper. These errors will not be detected by a QECC, and these are precisely the errors we are talking about.

For example, suppose we say that |L0> and |L1> are our logical codewords for 0 and 1 (let's say they are 7-qubit CSS code registers initialized appropriately). Further suppose we start off with a valid error free state a|L0> + b|L1>. Now suppose we transversally apply a logical X gate to this state, well then we'd expect to get a|L1> + b|L0>. However, if there is gate imprecision error in each of the physical X operators applied transversally (exact precision is physically impossible), then you will get something like c|L1> + d|L0> + f|E>, where |E> is an invalid codeword basis vector. Certainly f|E> will be eliminated by a QECC if the magnitude of that error is large enough. However, even after correcting this error, the amplitudes for the |L1> and |L0> portions of the state will be slightly off from a and b because part of the action of an imprecise gate is to under- or over-rotate those basis vectors.

In other words, in the case of a logical X operator, this is a lot like applying "too much" or "too little" legitimate X rotation, which essentially amounts to a circuit that is fault-free according to a QECC but functionally slightly different from the real fault-free circuit with no under- or over-rotation. So yes, part of the error is eliminated, but not the part that affects the valid codeword basis vectors.
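
Concretely, gate imprecision of this kind on a single physical qubit can be written out in a few lines of numpy. This is a sketch I am adding for illustration; the value of eps is not a parameter from the paper.

# An "X gate" implemented as the rotation R_x(pi + eps), i.e. slightly over-rotated.
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def rx(theta):
    """Rotation about the x axis: exp(-i * theta/2 * X)."""
    return np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * X

eps = 0.05
noisy_x = rx(np.pi + eps)

# Up to a global phase, R_x(pi + eps) = cos(eps/2) * X - i * sin(eps/2) * I:
# mostly the intended X, plus a small admixture of "no gate at all".
overlap_with_X = abs(np.trace(X.conj().T @ noisy_x)) / 2   # equals cos(eps/2)
overlap_with_I = abs(np.trace(I.conj().T @ noisy_x)) / 2   # equals sin(eps/2)
print(overlap_with_X, np.cos(eps / 2))
print(overlap_with_I, np.sin(eps / 2))

Up to a global phase, the imprecise gate is cos(eps/2) times the ideal X plus a sin(eps/2) admixture of the identity, i.e. the perfect gate followed by a small residual rotation R_x(eps); that residual is the piece of the error being argued about here.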

To address your point about our simulations, the diagrams you are referring to are merely there to show how the codewords are functionally created and how error correction is done. As noted in the text, we merely point out that regardless of how you want to implement the codeword initialization or syndrome measurement, all the physical operators involved will be imprecise. More importantly, at the bottom of page 12, as we try to explain in the text, it's really the transversal X gates we apply that carry the gate imprecision error. As X gates are transversal, these are being applied fault-tolerantly, albeit with operator imprecision on each X operator. We've run the simulation both ways: assuming a clean initialization of a logical 0 and even clean error correction machinery, and assuming faulty versions of each using the simple functional circuit which indeed does not incorporate extra fault tolerance. I think you are assuming we only did the latter, but both versions exhibit the same linear trend of increasing error probability, where in both cases the probability of error is calculated by directly comparing to the amplitudes of fault-free operation.

Indeed, we can be more clear about the fact that even with fault-free state initialization and fault-free syndrome measurement/recovery, the trend is the same. This is just a first draft posted on the arXiv after all, not a camera-ready copy. The results of merely using faulty transversal X gates and assuming everything else is error free are particularly telling because they seem to show that the under- and over-rotations on the valid codeword states do accumulate, and they are spread across all the physical qubits since they are not orthogonal to the codewords. This, coupled with the fact that in *all* cases we provide pristine ancillary 0 qubits for each syndrome measurement, shows that this experiment clearly gives the benefit of the doubt to the QECC being simulated.

This work is also merely a first step in that we intend to simulate other circuits and other QECCs with many different parameters. The problem is that this subset of the errors not caught by a QECC seems difficult to quantify, which spurred us to propose simulation to analyze this problem. In every case we expect the non-orthogonal errors to accumulate just as they do in this simulation, even when assuming error-free QECC machinery. We are definitely open to constructive feedback about this topic, and we posted this paper with the intention of getting such feedback. However, the claim that this work is bogus in any way is clearly ridiculous, and, no offense intended, but it seems to have stemmed from not reading through the draft very closely. We certainly should clarify those diagrams and the fact that we don't introduce errors into the QECC machinery in the actual simulations, since that makes it clear that we are not hiding errors in some way to sabotage the CSS scheme. We are working on a subsequent version to clarify this part of the discussion, but the ultimate results are the same.

George: It is clear that at each level of concatenation of a code, errors will creep into the code space. If we assume the probability of a single qubit gate failing is e_0, then for a distance 3 code across N qubits, the state will become:

(1-e_0)^N(Correct state) + N(1-e_0)^(N-1) e_0 (Correctable error) + O(e_0^2)(Uncorrected error)

Dividing by (1-e_0)^N we get

(Correct state) + N e_0(Correctable error) + O(e_0^2)(Uncorrected error)

The uncorrected errors can lead to errors in the codespace, as you suggest. However, notice that the total error rate with one level of concatenation satisfies

e_1 < C e_0^2 for some constant C

The same analysis holds at each level of concatenation, so we get

e_j < C e_{j-1}^2

Which yields e_j < C^L e_0^{2^L}. If e_0 < (1/C) then we get a super exponential suppression of _all_ errors in the code space. The only way this breaks down is if you have correlated errors (say, if you had a poorly characterised interaction between qubits), but these are outside the standard error model.

Obviously fault-tolerant circuits are necessary to prevent propagation of errors before error correction can be performed, but these circuits are already well established.

Oh, I've just noticed a subtlety in your paper. The error model for a CNOT allows for correlated errors. In this case the first level of concatenation does not suppress all errors which occur with probability e_0. The result is that you lose the first layer of concatenation.

A simple way to avoid this is to use a distance 5 code. Problem solved!

George, I have a couple of questions about your paper and what you have said here. Your results show that the logical failure rate of a sequence of logical X gates increases linearly with the number of gates in the sequence. This doesn't seem surprising to me. Do you expect to see anything different if error correction is properly suppressing systematic errors? Also, do your simulations show that for a single X gate the logical failure rate is less than the physical failure rate? This seems to me to be the important metric - if you construct an algorithm out of logical gates will it fail less often than if it is constructed from physical gates?

Joe, fault tolerant circuits account for two-qubit correlated errors. There is no need for a distance-5 code.

By astephens on 01 May 2008

Joe, fault tolerant circuits account for two-qubit correlated errors. There is no need for a distance-5 code.

Oops. Of course you're right. When I was writing my comment I had been planning on saying something about two-qubit correlated errors (i.e. for an unknown coupling), which obviously do cause uncorrectable errors, but thought the better of it. I just scanned the paper before posting and noticed the noisy CNOT. Obviously this is identical to a mixture of evolving a single-qubit error through the CNOT and one occurring afterwards, but I wasn't thinking so clearly.

I'm not a morning person.

Maybe somebody can answer my question: why is it that "Classically, a single query of a four-element search space followed by a guess can only result in a successful outcome with 50% probability"?
Why is the classical guess success 50% and not 25%?
In fact the Grover algorithm with a 4-element database gives a 4x speedup over a classical probabilistic machine. So why does the mentioned paper compare the 60.2% of the quantum computer with the 50% of a classical one, instead of 60.2% vs 25%?
I think 50% can be obtained probabilistically only if you query two times and not one time (in a 4-element database).
Maybe two guesses (50%) instead of one (25%) were chosen for the classical computer because the classical computer uses fewer gates than the quantum computer?
Or because "quantum qubit rotations" are in some sense classical (not probabilistic) computation?
Shortly: what is the role of entanglement, 60.2% vs 50% or 60.2% vs 25%?

Well, first of all, George Viamontes, I find it completely unacceptable that you make a claim about the whole edifice of fault-tolerant quantum computing while citing exactly one paper, which, while a great paper, is part of a library of some tens of relevant papers that construct the threshold theorem.

But on to your main conclusion: "We will show, however, that if even a small amount of imprecision is present in every gate, then all qubits in every code block will be affected, and more importantly the error in any given qubit propagates to all other qubits."

Of course errors increase linearly as you apply gates. The whole POINT of the threshold result is that using fault-tolerant error correction beyond just one level of a seven-qubit code (concatenating, for example, or using a better code) will suppress the error rate exponentially in the number of qubits, provided the over-rotation is low enough.

Maybe somebody can answer my question: why is it that "Classically, a single query of a four-element search space followed by a guess can only result in a successful outcome with 50% probability"?

You have a four-element search space. You make one query. If you find the element, you win. If not, you obviously guess one of the remaining three. If the element is placed at random, then your probability of succeeding using this method is one half.
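
For anyone who prefers to see it numerically, here is a quick Monte Carlo sketch of exactly that strategy (the trial count is arbitrary):

# Classical "shell game": query one of four locations; if the query misses,
# guess blindly among the remaining three.
import random

def classical_shell_game(trials=200_000):
    wins = 0
    for _ in range(trials):
        marble = random.randrange(4)
        query = random.randrange(4)
        if query == marble:
            wins += 1                                        # the single query found it
        else:
            remaining = [x for x in range(4) if x != query]
            wins += (random.choice(remaining) == marble)     # blind guess afterwards
    return wins / trials

print(classical_shell_game())   # converges to about 0.50

It converges to 1/4 + (3/4)(1/3) = 1/2, the 50% classical baseline quoted in the paper, whereas an ideal single Grover query on four elements succeeds with probability 1.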

But why does this paper compare two classical guesses with one quantum guess (query)? Instead of one classical guess and one quantum guess?
And then the sentence I quoted is absolutely wrong, because it needs two queries and not a single query.
It is clear even to a child that one coin giving, say, "1" has a 50% chance, and two coins giving, say, "01" have 0.5*0.5 = 0.25, a 25% probability.

(after a little bit of thinking)
Oh, a guess followed by a guess... But still, why compare two probabilistic guesses with a single quantum computer guess? To establish the real role of quantum entanglement in the speedup?

Maybe it is even possible that if entanglement doesn't work, this "60.2%" (5% or 10%(?)...) advantage was obtained from 'correctly' rotating the qubits to the needed position of "01"?

But the most logical answer, I think, is probably that even without entanglement, if you rotate those qubits according to the Grover algorithm you will still get the right answer 50% of the time with a single query.