Self-Correcting Quantum Computers, Part III

The physics of classical information storage. Why is it that your hard drive works? A modern miracle, I tell you! Part III of my attempt to explain one of my main research interests in quantum computing: "self-correcting quantum computers." Prior parts: Part I, Part II

The Physics of Classical Information Storage

Shannon and von Neumann showed that, at least in theory, a reliable, fault-tolerant computer could be built out of faulty, probabilistic components. Yet if we look at our classical computing devices, it is not obvious that these ideas matter much. I mean really, once you see how reliably modern transistors and magnetic media work, it is easy to forget that deep down they are highly analog and probabilistic devices. So how do we reconcile the observations of Shannon and von Neumann with our everyday computing devices? The answer to this seeming conundrum is that systems like transistors and magnetic media do enact the ideas of the classical theory of error correction, but they do so via the physics of the devices out of which the computer is built.

[Figure: A toy model for how information is stored in your hard drive]

Let's examine a concrete example of how this works. It will be a cartoon model, but it captures a lot of why your hard drive is able to robustly store information. Our model is made up of a large number of "spins" arranged in a regular lattice; for our purposes, let's make this a two dimensional lattice. These "spins" are simple binary systems with two configurations, pointing up or pointing down. Now associate with each configuration of the spins (i.e. an assignment of pointing up or pointing down to every spin) an energy. This energy is calculated in a straightforward way: for each pair of neighbors on the lattice, add +J to the energy if the two spins point in different directions, and add -J if they point in the same direction. From this description it is easy to see that the lowest energy configurations are the one where all the spins point up and the one where all the spins point down, since in these configurations every link in the lattice contributes an energy of -J. Now for any particular configuration of the spins, we can count the number of spins pointing up and the number pointing down. This is roughly how "information" can be encoded into such a device. If the majority of the spins are pointing up, we call the configuration "0". If a majority of the spins are pointing down, we call the configuration "1".
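Here is a minimal sketch of this toy model in Python (the lattice size, the coupling J, and the function names are just illustrative choices of mine, not anything canonical): it computes the energy of a spin configuration using the rule above, and decodes the stored bit by majority vote.

```python
import numpy as np

def ising_energy(spins, J=1.0):
    """Energy of a 2D configuration of +1/-1 spins.

    Each nearest-neighbor pair contributes +J if the two spins differ
    and -J if they agree (periodic boundaries, purely for convenience).
    """
    # Sum s_i * s_j over the right and down neighbor of every site.
    pair_sum = np.sum(spins * np.roll(spins, 1, axis=0)) \
             + np.sum(spins * np.roll(spins, 1, axis=1))
    return -J * pair_sum

def decode_bit(spins):
    """Majority vote: mostly-up encodes 0, mostly-down encodes 1."""
    return 0 if np.sum(spins) > 0 else 1

L = 32
all_up = np.ones((L, L), dtype=int)        # the encoded "0" state
noisy = all_up.copy()
noisy[np.random.rand(L, L) < 0.1] *= -1    # flip roughly 10% of the spins

print(ising_energy(all_up), decode_bit(all_up))   # lowest energy, decodes to 0
print(ising_energy(noisy), decode_bit(noisy))     # higher energy, still decodes to 0
```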


[Figure: Equilibrium distribution for the two dimensional model of a hard drive]

Okay, so how can we view magnetic media in terms of classical error correction? Well, first of all, note that we have encoded "0" and "1" by copying their values across a large number of spins. This is what is called a redundancy code in classical error correction. But, now, here is the important point: the system also performs error correction. In particular, consider starting the system with all the spins pointing up. If you now flip a single spin, this will cost you some energy, because that spin will now be unaligned with its four neighboring spins. Further, if you want to flip another spin, this will cost you even more energy. In general, flipping all of the spins in a domain requires an amount of energy proportional to the perimeter of that domain. Now at a given temperature, the environment of the hard drive is exchanging energy with the system and is constantly fouling up the information stored in the spins. Sometimes energy goes from the environment to the system, and the system is driven away from one of the two encoded states of all up or all down. Sometimes energy goes the other way, from the system to its environment. This latter process "fixes" the information by driving the system back towards the encoded state it came from. The ratio of the rates of these two processes is set by the temperature of the system/environment. At low enough temperature, if you store information in the all up or all down configuration, then this information will remain there essentially forever, as the process of cooling beats out the process of heating. At high enough temperature this fails, and the information you try to store in such a system is rapidly destroyed.

The figure on the right is the way a physicist would describe these effects. At low temperature, most of the spins point along one of two different directions. As one raises the temperature, the spins remain mostly up or mostly down, until one nears a critical temperature. At this critical temperature, the two configurations merge into one, and information can no longer be reliably stored in the equilibrium configurations of the system. Thus we see that one can encode information into the majority vote of these spins, and, at low enough temperature, the process of errors occurring on the system is dominated by the process of those errors being fixed. Physics does the error correction for you in this setup.
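To see the competition between heating and cooling in action, here is a rough Monte Carlo sketch using standard single-spin-flip Metropolis dynamics (the lattice size, sweep count, and temperatures are arbitrary choices for illustration): starting from the all-up "0" state, the magnetization stays near +1 below the critical temperature but washes out above it, mirroring the figure.

```python
import numpy as np

def metropolis_sweep(spins, T, J=1.0, rng=np.random):
    """One sweep of single-spin-flip Metropolis dynamics at temperature T
    (in units where k_B = 1), for the 2D model described in the post."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.randint(0, L, size=2)
        # Energy cost of flipping spin (i, j) is 2*J*s_ij*(sum of its neighbors).
        nn = spins[(i + 1) % L, j] + spins[(i - 1) % L, j] \
           + spins[i, (j + 1) % L] + spins[i, (j - 1) % L]
        dE = 2.0 * J * spins[i, j] * nn
        if dE <= 0 or rng.rand() < np.exp(-dE / T):
            spins[i, j] *= -1
    return spins

def magnetization_after(T, L=32, sweeps=200):
    spins = np.ones((L, L), dtype=int)     # start in the encoded "0" state
    for _ in range(sweeps):
        metropolis_sweep(spins, T)
    return spins.mean()

# The 2D Ising critical temperature is about 2.27 J.
for T in (1.5, 2.0, 3.5):
    print(T, magnetization_after(T))
# Typically: the magnetization stays near +1 at T = 1.5 and 2.0,
# but drifts toward 0 at T = 3.5 -- the stored bit is destroyed.
```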


[Figure: Incorrect preparation is fixed below the critical temperature]

Another important property of the model just described is that it is also robust to imperfect manipulations. For example, suppose that you attempt to prepare the system in the mostly up state, but you don't do such a good job and a large (but not majority) fraction of the spins are accidentally prepared pointing down. Then the dynamics described above will "fix" this problem and drive the imperfectly prepared system toward a mostly up state. Similarly, if you try to flip the value of the encoded bit but don't correctly flip all of the spins, the system will naturally relax back to an equilibrium configuration with a higher proportion of spins pointing in the correct direction. Such a system is "self-correcting" in the sense that the error correction is done not by an external agent, but by the natural dynamics of the system itself.
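Continuing the sketch above (this snippet assumes the `metropolis_sweep` helper defined there, and the 20% preparation error is an arbitrary choice), a sloppily prepared state relaxes back toward the intended all-up state at low temperature:

```python
import numpy as np
# Assumes the metropolis_sweep helper from the previous sketch.

L, T = 32, 1.5                            # well below the critical temperature
spins = np.ones((L, L), dtype=int)
spins[np.random.rand(L, L) < 0.2] *= -1   # sloppy preparation: ~20% of spins wrong

print("before:", spins.mean())            # roughly 0.6
for _ in range(100):
    metropolis_sweep(spins, T)
print("after:", spins.mean())             # back near +1: the dynamics "fixed" the preparation
```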

So, in this simple example, we have seen that physical systems which are currently used to store classical information actually do embody the ideas of Shannon and error correction: it's just that the physics of these devices enacts these ideas behind the scenes. So, a natural question to ask for quantum computers is whether you can build similar systems for quantum information. Of course, before answering this question you'd better know whether it is possible to mimic the classical ideas of error correction and fault-tolerance for a quantum computer.

Next Time...

If classical computation is possible because of the physics of the devices out of which classical computers are built, can quantum computers take a similar route?


This all makes sense to me, Dave ... but is temperature really the right variable?

The point is that from a practical point of view, qubit memories work perfectly well even if they are in contact with an infinite-temperature reservoir, provided they are feedback controlled.

Suppose, for example, we have a two-state qubit system in an ion trap that is subject to two decoherent processes. The first decoherent process is an infinite-temperature thermal reservoir, having a time constant of (say) one millisecond (such as might originate from thermal magnetic noise from the walls of the trap). The second decoherent process is a continuous observation process having a much faster time constant of (say) one microsecond (such as might originate from interrogating lasers).

Via bistable feedback control, the fast observation process can precisely emulate a fast thermal reservoir having either positive-zero temperature (binary 0) or negative-zero temperature (binary 1). For either binary bit-value, the slow thermal reservoir transitions (errors) are Zeno-suppressed, and yet simultaneously, we have continuous classical knowledge of the (bistable) qubit state.

From a quantum information theory point of view, isn't that pretty much the way that bipolar computer memory really works? And this picture explains why bipolar memory cells always have multiple transistors ... so that each transistor can continuously observe, and feedback-control, the quantum state of the other transistors ... with the net dynamical effect being classical bistability.

As they say in Brooklyn: "Poifect! That's what we want!"
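A crude classical caricature of the slow-noise-plus-fast-feedback scenario sketched above, not the quantum Zeno mechanism itself (the time step, the rates, and the Monte Carlo scheme are my own illustrative stand-ins): a bit randomizes on a 1 ms time scale, a watchdog measures and resets it every 1 microsecond, and the fraction of time spent in the wrong state drops by roughly the ratio of the two time scales.

```python
import random

def simulate(total_time, tau_flip=1e-3, tau_check=1e-6, dt=1e-7, seed=0):
    """Toy model: a bit flips at rate 1/tau_flip (infinite-temperature noise);
    every tau_check seconds a watchdog measures it and resets it to the
    intended value.  Returns the fraction of time the bit is wrong."""
    rng = random.Random(seed)
    intended, bit = 0, 0
    wrong_time, t, next_check = 0.0, 0.0, tau_check
    while t < total_time:
        if rng.random() < dt / tau_flip:   # thermal flip during this time step
            bit ^= 1
        if t >= next_check:                # fast measurement plus feedback reset
            bit = intended
            next_check += tau_check
        if bit != intended:
            wrong_time += dt
        t += dt
    return wrong_time / total_time

print("no feedback:  ", simulate(0.05, tau_check=float("inf")))  # roughly 0.5
print("with feedback:", simulate(0.05))                          # roughly 5e-4
```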

As a followup to the above, I checked AnandTech to see if Intel might be adopting some of Dave's suggested techniques for "self-correcting computers".

On physical grounds, this would be expected, since shrinking sizes, diminishing power consumption, and faster speeds are all acting to make Intel's memory cells look more "quantum-ish".

It turns out that Intel's cache memory has moved from a six-transistor architecture to an eight-transistor architecture -- this means an increase in complexity and area (bad), but it allows individual cells to retain their memory state at lower voltages and power consumption (good).

However, each individual memory cell still operates by the traditional "flip-flop" feedback principle, in which half of each memory cell continuously measures, and if need be, error-corrects, the quantum state of the other half.

A big change for Intel, however, is the advanced error-correction architecture of its next-generation chips --- triple-error detect, double-error correct.

The point of the above is that the operation of Intel's memory and logic devices is now getting to the point that it can be understood, and reliably simulated, only in the context of quantum information theory.

Which is fun!

What if, say, proteins fold according to entanglement processes, so that molecular simulation really needs exponential computational power, and if molecules could be simulated without entanglement, quantum mechanics, and superposition, then our world would, so to speak, be impossible. But where is the guarantee that with our big devices it is possible to control the processes going on in atoms, molecules and so on? For example, the flight of electrons in atoms controls our lives, but we can't control an electron's direction of flight in a similarly advanced way, so maybe quantum computation is also impossible there, no matter that molecular/quantum simulation really does run on quantum mechanics and superposition and computes exponentially...
And another thing which scares me is that entanglement seems to have been well observed only between 2 photons, and not between 3 or 5 or 6 photons. Entanglement between other quantum bits, like spins or excited states of atoms, seems like it could be explained with local hidden variables, if such entanglement was observed at all... So maybe entanglement between 3 or more photons can also be explained with local hidden variables? Of course, only if such entanglement was observed at all. So maybe scientists are in too much of a hurry to create a theory of quantum computing, if entanglement has only been observed between two photons...

Sorry, "pr", but regarding "entanglement between 3 or more photons": not only HAS this been observed, but a more subtle case has been seen. Photons A, B, and C can be (as a set of 3) entangled, even though no TWO of them are entangled.

My Quantum Computing friends don't like my metaphor for this. But look at a picture on the web of "Borromean Rings."

For 5 photons, look at the Olympic 5-ring picture...

By the way, "entanglement" is inconsistently defined between different authors. Some thus claim that whether or not two particles are entangled depends upon the observer. Other authors say otherwise. Perhaps our blogmeister can clarify...

To point toward a definite, upbeat, and technically well-posed end-point, Sarovar, Ahn, Jacobs, and Milburn have written a really nice article on this topic titled Efficient feedback controllers for continuous-time quantum error correction.

For their results to apply to classical computer memory, as contrasted with quantum computer memory, only one minor modification is needed ... their feedback control really ought to be modified to be stateless (i.e., Markovian) instead of low-pass. The reason is that a low-pass filter has a (classical) internal state, which is "cheating" if our goal is to simulate a one-bit classical computer memory wholly from quantum parts.

It would be a fun exercise to simulate a bit-flip corrected computer memory using only quantum parts and Markovian stochastic unravelling ... so that all parts of the device are stateless except for the quantum parts ... certainly Sarovar et al. have supplied all the needed ingredients.

On physical grounds, it seems likely that three qubits are both necessary and sufficient to emulate classical computer memories -- I wonder whether this might be proved rigorously? And from an engineering point of view, would this imply that a three-transistor memory cell is feasible?

There are plenty of fun questions at the boundary between classical and quantum systems! :)
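In the spirit of that "fun exercise," here is a purely classical, Markovian caricature, not the Sarovar et al. scheme (the rates and the majority-vote reset rule are my own stand-ins): three bits each flip at rate gamma, a memoryless corrector fires at rate kappa, and the lifetime of the encoded bit stretches from order 1/gamma to roughly kappa/(6*gamma^2).

```python
import random

def logical_lifetime(gamma=1.0, kappa=200.0, trials=200, seed=1):
    """Mean time until a 3-bit repetition memory suffers a logical error.

    Each bit flips independently at rate gamma; a memoryless (Markovian)
    corrector fires at rate kappa and resets all bits to their majority
    vote.  A logical error is a corrector firing that locks in the wrong
    majority, or (if kappa == 0) the moment the majority first goes wrong."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        bits, t = [0, 0, 0], 0.0
        while True:
            rate = 3 * gamma + kappa
            t += rng.expovariate(rate)            # time to the next event
            if kappa > 0 and rng.random() < kappa / rate:
                if sum(bits) >= 2:                # corrector locks in a logical error
                    break
                bits = [0, 0, 0]                  # otherwise it repairs any single flip
            else:
                bits[rng.randrange(3)] ^= 1       # one of the bits flips
                if kappa == 0 and sum(bits) >= 2:
                    break                         # unprotected majority has gone wrong
        total += t
    return total / trials

print("no corrector:  ", logical_lifetime(kappa=0.0))    # about 1/gamma
print("with corrector:", logical_lifetime(kappa=200.0))  # about kappa/(6*gamma**2)
```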

I found a good discussion about 3 or 4 or more photons, here:
www.physicsforums.com/archive/index.php/t-195394.html
or google copy:
http://209.85.135.104/search?q=cache:2kE0nBFitBEJ:www.physicsforums.com…

So it is discussed there, as I thought: no GHZ state of 3 or more photons has been observed. The GHZ state is the analogue of the Bell state but for more particles; it looks like |000>+|111>. So the two-photon state (|00>+|11>)/sqrt(2) has been observed very well, but (|000>+|111>)/sqrt(2) wasn't observed, supposedly because of some impossibility: a BBO crystal is like a beam splitter, I think, and thus can produce only two entangled photons and not more! So they work around this with tricks and create states like 0.3|000>+0.3|111>+0.3|101>... or something (see the linked discussion). These other, non-GHZ states seem useful for quantum communication or something, but I think it may be that without manipulating such states a quantum computer is impossible in principle... So it seems that scientists don't have an idea of how to create the states needed for a quantum computer at the physical level... They are looking to spins or to entanglement of atoms, which either wasn't observed at all or was observed so poorly that it can be explained with local hidden variables. So it seems that at least a photonic quantum computer is impossible in principle!?

I would guess that temperature is exactly the right variable. It describes a thing's susceptibility to errors from fluctuations in the ambient environment.

David says: I would guess that temperature is exactly the right variable. It describes a thing's susceptibility to errors from fluctuations in the ambient environment.

The problem with temperature is that in real-world devices the local environment is (typically) far from thermal equilibrium. A good example is the thermal magnetic noise from the (room-temperature) walls of ion traps. This noise has a temperature of about 1/30 of an eV, which is about 10^11 times hotter than the excitations of a Bose condensate within the trap. Yet this ultra-high-temperature noise is not an insurmountable problem ... provided its relaxation time is long compared to the time it takes to pump heat out of the condensate.

That is why environmental noise is commonly modeled as a Markovian process having (effectively) infinite temperature, and why sensing and computation devices are commonly cooled by (non-equilibrium) feedback-and-control processes.
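For concreteness, here is a tiny back-of-the-envelope check of the temperatures quoted above (the ~3 nK condensate temperature is my own illustrative assumption, chosen only to show where a ratio of ~10^11 can come from):

```python
# Back-of-the-envelope check of the temperatures quoted above.
k_B_eV = 8.617e-5            # Boltzmann constant, in eV per kelvin
T_walls = 300.0              # room-temperature trap walls, in kelvin
T_condensate = 3e-9          # assumed ~3 nK condensate, purely for illustration

print(f"kT of the walls: {k_B_eV * T_walls:.3f} eV")        # a few hundredths of an eV
print(f"temperature ratio: {T_walls / T_condensate:.1e}")   # about 1e11
```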

I learn something new every day. I saw your blurb on the ion traps earlier, but I am way-a-layman, so most of my understanding is ad hoc [and anything I suppose starts life as a wild-a$$-guess].

Your description makes me have questions [mostly for lack of precise understanding]. I would guess energy exchanges at 1/30 eV temperatures would be mediated by phonons, so dispersed through the material's lattice structure [no interaction with the condensate; I'm assuming that the ion trap is made out of something, if it's magnetic or something like that, well, I'll have to think about that some more]. Photons of this energy would have VERY long wavelengths [longer than the entire extent of a condensate?] I have to ruminate on this some more.

Thanks Dave!

One familiar fact you might reflect on (which took physicists several centuries to understand) is that as you make things hotter, the color of their radiation does *not* ascend through the rainbow. Meaning, the color of heated iron does not progress through red, orange, yellow, green, blue, and violet, but rather progresses through dull red, red, yellow-red, yellow-white, and white.

This is a clue that objects at room temperature radiate photons at *every* energy, up to the thermal energy cut-off at (about) 1/30 eV. In particular, room-temperature objects do emit plenty of radio waves, which have energies far below 1/30 eV.

By the way, tracking down this thermal noise source famously led Wilson and Penzias to a Nobel Prize in Physics ... at the same Bell Labs that just this week closed its fundamental physics division. :(
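To put a number on "photons at *every* energy," here is a quick sketch evaluating the Planck spectral energy density of a 300 K blackbody at a few photon energies (the particular energies are my own picks for illustration): the spectrum is nonzero all the way down into the radio, and only dies off exponentially above roughly k_B*T, about 0.026 eV.

```python
import math

# Planck spectral energy density u(nu) = (8*pi*h*nu^3/c^3) / (exp(h*nu/(k*T)) - 1)
h = 6.626e-34      # Planck constant, J s
c = 2.998e8        # speed of light, m/s
k = 1.381e-23      # Boltzmann constant, J/K
eV = 1.602e-19     # one electron volt, in joules

def planck_u(E_eV, T=300.0):
    """Spectral energy density (J s / m^3) at photon energy E_eV for temperature T."""
    nu = E_eV * eV / h
    return (8 * math.pi * h * nu**3 / c**3) / math.expm1(h * nu / (k * T))

for E in (1e-9, 1e-6, 1e-3, 0.026, 0.5):   # from radio up through the thermal scale into the near-infrared
    print(f"E = {E:.3g} eV   u = {planck_u(E):.3e}")
# Nonzero at every energy; it falls off exponentially once E exceeds ~k*T.
```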

Objects at room temperature radiate photons at *every* energy

This really hits right at my lack of understanding. I have always thought of that as an artifact of classical thermodynamics. I've never really had a good understanding of the relationship between low-N [where N is big enough that N! makes you cringe, but not so big that you say, "δx ~= Δx"] quantum system dynamics and thermodynamics. This is why we pay you guys the big bucks, though. To figure this stuff out. :)