QIP 2010 Speakers

The list of talks accepted at QIP 2010 is now online. As a member of the PC I can tell you that there were way more good papers than available speaking slots, which made some of the final decisions hard.

One talk that I think will be a highlight is the invited talk by the optimizer: "Efficient simulation of quantum mechanics collapses the polynomial hierarchy." Quantum computing skeptics of the "BQP=BPP" kind may have just found their island significantly smaller and lonelier. The QIP=PSPACE result will also be given a talk slot. Quite a year for quantum complexity theory, I think.

Hi Dave, is it true that efficient quantum simulation implies P=NP? Eg if you can efficiently compute the ground state of an Ising model which is NP-hard then it seems the answer is yes. Is this weaker than collapsing the polynomial hierarchy or stronger?

No, it doesn't. It only implies (BP)P=BQP, depending on what you mean by efficient simulation. The general assumption is that BQP < NP, but just as for P=NP, there's no proof.

By Aaron Denney on 27 Nov 2009

Efficient quantum simulation would mean that classical computers can do anything quantum computers can do, while only incurring a polynomial sized (efficient) overhead.

Efficient quantum simulation does not directly imply P = NP, since quantum computers are not known to be able to solve NP-hard problems in polynomial time. However, we know that NP is in PH, so if P = PH then P = NP, and so this recent result by Aaronson seems to prove that efficient quantum simulation would be as absurd as P = NP.

I don't have time to correct all the errors in the above comments. Collapsing the polynomial hierarchy, though, is not the same as P = NP; I think the collapse is only to the second level.

OK, so as far as I can tell this business of "collapsing the polynomial hierarchy" is a far stronger result than P=NP, i.e., if this collapse occurred it would imply P=NP and a whole lot of other things as well. Is this correct?

On the "efficient simulation of quantum systems" point. What definition of what this means is relevant here? Does it mean that any possible quantum computation can be simulated on a classical computer with at most polynomial overhead in time, space, energy, etc. over the resource cost of doing it on an actual quantum computer?

Collapsing the polynomial hierarchy is no stronger than P=NP. It is implied by P=NP but does not imply P=NP. The polynomial hierarchy is defined as a sequence of complexity classes, P ⊆ NP ⊆ Σ2 ⊆ Σ3 ⊆ ... If you show P=NP, then all the classes are equal to P. If you show NP=Σ2, which is the kind of conditional conclusion of this latest result, then all the larger classes (Σ3, Σ4, ...) equal NP. This is a "collapse" of the hierarchy. But P may still not equal NP.
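
For readers who want the textbook definitions spelled out (nothing here is specific to the new result):

```latex
\[
  \Sigma_1^{\mathrm p} = \mathrm{NP}, \qquad
  \Sigma_{k+1}^{\mathrm p} = \mathrm{NP}^{\Sigma_k^{\mathrm p}}, \qquad
  \mathrm{PH} = \bigcup_{k \ge 1} \Sigma_k^{\mathrm p}.
\]
% "Collapse to level k" means PH = \Sigma_k^p.  P = NP would force PH = P,
% whereas a collapse to the second or third level is a weaker conclusion:
% no implication back to P = NP is known.
```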

By "efficient simulation of quantum systems," what is meant, precisely, is the ability to efficiently classically sample from the same probability distribution that you get by measuring the output qubits of a polynomial-time quantum computer.

I don't see an arxiv paper by Aaronson about this result, but it sounds interesting. The BQP and the Polynomial Hierarchy paper seems to show something else. Did Scott literally prove that if you can efficiently sample the output of any polytime quantum computation (presumably, sampling up to error 1/poly?), then PH collapses? And was the proof using an oracle or no?

I think it is up to the authors to state their result.

I'd like to thank Dave for posting these abstracts, and thank Martin Schwarz for posting the link to Scott Aaronson's slides.

It seems to me that Scott and his coauthor Alex Arkhipov have made a very serious and potentially very significant contribution to our understanding of the fundamental question, "What physical systems can be simulated with classical resources?"

Here I emphasize the words "serious" and "significant" because the Aaronson/Arkhipov work is so serious and so significant (IMHO) as to create tensions that have wonderful dramatic and comedic potential.

To help create the proper frame of mind, I suggest that before reading the rest of this post, you read Terry Tao's arXiv manuscript, Perelman's proof of the Poincare conjecture: a nonlinear PDE perspective (arXiv:math/0610903v1), in particular Tao's footnote 3 (p. 2).

The footnote discusses the tension between combinatorial methods and PDE methods ... a tension that foreseeably will be crucial to the experimental testing of the Aaronson/Arkhipov ideas, as will now be explained.

Is everyone now finished reading Tao's footnote? Good. Now let's fast-forward to the year 2015, where in the basement of the UW physics building we find three research groups hard at work.

Angela's group is purely experimental. Precisely as foreseen by Aaronson/Arkhipov (slide 22), Angela's group has constructed a high-quality 1600x1600 fiber optic coupler (1600 input fibers, 1600 output fibers), whose inputs are 40 single-photon sources and 1560 vacuum ports. Angela fires her single-photon sources over and over again, and every few seconds precisely 40 output ports light up (indicating that all 40 photons propagated through the network without loss). Angela records the result (a 1600-bit number having precisely 40 ones).

After accumulating (say) one million such numbers, Angela publishes them, along with a superoperator description of the optical coupler. Of course, Angela can never measure her coupler's superoperator directly (it's got too many dimensions), but she can describe it in an approximate factored representation, by making reasonable assumptions of linearity and independence among components.

And oh yeah ... Angela's group doesn't just work with photons ... they also do pretty much the same experiment with spins in an ion trap (initialized to "up" or "down" instead of "photon" or "vacuum", then subsequently coupled with dipole interactions) and also with micromechanical cantilevers (initialized to number states, then subsequently coupled parametrically). The Aaronson/Arkhipov work has created a new industry in experimental physics: "trapdoor experiments" that are simple to describe and hard to simulate!

There are two theory groups working on this problem: Carl's combinatorial group and Tim's tensor network group. Angela challenges these two theory groups by asking: "Are our group's experimental data consistent with quantum theory + our experiment's superoperator?"

Carl's group proceeds combinatorially. His group ignores loss-and-noise in Angela's superoperator, and by a heroic direct calculation of a permanent (per Aaronson/Arkhipov) Carl's group estimates an a priori probability P_i for each of Angela's million data points.

It's fortunate for Carl's group that Angela is using only (say) 40 photons, because it would be infeasible to directly calculate the permanent for any larger number of photons.
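
To make the scaling concrete, here is a minimal (and deliberately naive) Python sketch of the permanent via Ryser's formula; the function name is mine, and this is only an illustration of why 40 photons sits near the edge of feasibility, not anyone's production code.

```python
from itertools import combinations
import numpy as np

def permanent_ryser(A):
    """Permanent of an n x n matrix via Ryser's formula.

    Naive subset enumeration costs O(n^2 * 2^n) arithmetic operations
    (Gray-code tricks bring this down to O(n * 2^n)); at n = 40 the outer
    sum already runs over about 10^12 column subsets.
    """
    A = np.asarray(A)
    n = A.shape[0]
    total = 0.0 + 0.0j
    for r in range(1, n + 1):                  # non-empty column subsets S
        for cols in combinations(range(n), r):
            row_sums = A[:, cols].sum(axis=1)  # sum_{j in S} a_{ij} for each row i
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

# Sanity check: perm([[a, b], [c, d]]) = a*d + b*c
print(permanent_ryser(np.array([[1, 2], [3, 4]])))  # -> (10+0j)
```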

And the good news is (assuming that orthodox QM is true, and Angela's apparatus is sufficiently low-noise), Carl's group's predicted distribution of P-values is a very good match to the observed distribution of Angela's P_i values.

Tim's group proceeds via concentration-and-pullback onto tensor network manifolds. Tim's group's calculation is not as heroic as Carl's ... they keep all the loss terms in the superoperator, convert them to concentration potentials in a synoptic Lindblad gauge, and pull back the resulting trajectories onto low-dimension tensor network states (see also Philippe Corboz's invited talk on tensor network states). But low-dimension is relative ... Tim's group readily handles tensor network states of algebraic rank (say) 100.

Tim's group checks (by numerical experiment) that higher algebraic ranks are not needed to simulate Angela's quantum trajectories with good fidelity (until such time as Angela upgrades the quality of her photon sources and her network). If Angela were to use more photons, without upgrading her optical components, this would not greatly inconvenience Tim's group, because the required concentration-and-pullback computational resources scale only polynomially for fixed tensor network algebraic rank (this is the main difference between Tim's group's pullback approach and Carl's group's combinatorial approach).
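
To make the "pullback" step less abstract: its simplest possible caricature is a single-cut Schmidt (SVD) truncation onto a fixed rank, sketched below. The function name and toy dimensions are mine; real DMRG/TEBD/PEPS codes do far more than this, but this is the kernel they repeat.

```python
import numpy as np

def truncate_bond(psi, dims, chi):
    """Compress a bipartite pure state psi in C^(dA*dB) to Schmidt rank chi
    by keeping the chi largest singular values of its coefficient matrix."""
    dA, dB = dims
    M = psi.reshape(dA, dB)
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    kept = s[:chi]
    norm = np.linalg.norm(kept)
    A = U[:, :chi] * (kept / norm)              # left tensor, shape (dA, chi)
    B = Vh[:chi, :]                             # right tensor, shape (chi, dB)
    fidelity = norm**2 / np.linalg.norm(s)**2   # weight retained by the truncation
    return A, B, fidelity

# Example: a random 6-qubit state cut 3|3, kept at rank 4 (the exact rank is 8)
rng = np.random.default_rng(0)
psi = rng.normal(size=64) + 1j * rng.normal(size=64)
psi /= np.linalg.norm(psi)
A, B, F = truncate_bond(psi, (8, 8), chi=4)
print("truncation fidelity:", F)   # (A @ B).reshape(-1) is the compressed state
```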

And the good news is (assuming that orthodox QM is true, and Angela's apparatus is sufficiently high-noise), Tim's group's predicted distribution of P-values is a very good match to the observed distribution of Angela's P_i values.

Now it is time for all three groups to upgrade their respective experimental, combinatorial, and tensor network pullback methods.

What (if anything) have Angela, Carl, and Tim learned about quantum mechanics? How should they coordinate their efforts? What should they propose to do next? And what lessons might Angela, Carl, and Tim draw from the long and wonderful history of combinatorial and (essentially IMHO) pullback approaches to the Poincare Conjecture?

It seems to me that Aaronson/Arkhipov deserve great credit for inspiring these wonderful questions!

John: As someone coming from a theoretical condensed matter physics background who now tries to provide theory support to condensed matter physics experimentalists, I think it is safe to say that MOST experiments are "simple to describe and hard to simulate".

I would hazard that neither of the approaches you mention would be appropriate for simulating real experiments. Real multiple qubit systems are open quantum systems coupled to complex noise spectral densities at finite temperature.

Let me propose a different experiment. Let's say you have an initial Hamiltonian H_i = \sum_{j=1}^{128} X_j and a final Hamiltonian H_f = \sum_{j=1}^{128} h_j Z_j + \sum_{i,j} J_{ij} Z_i Z_j, where X_j and Z_j are the Pauli matrices corresponding to rf-SQUID superconducting flux qubits. You can program the qubit biases h_j and the coupling strengths J_{ij}. You know that the spectral noise density is 1/f at low frequencies and ohmic at high frequencies up to a cut-off at about 100 GHz. You know that the temperature of the chip is 10 mK. Now let's say you fix the qubit biases and couplings to some particular values, and perform the evolution H(s) = (1-s) H_i + s H_f, taking s from 0 to 1. Read out all the qubits. Now repeat 1 million times, giving 1 million 128-bit states, and a resultant output probability distribution over the 2^128 possible output bit states.

What I would want of a quantum simulation is to predict the output probability distribution I see in the experiment. In order to do this I need an open quantum systems model with 128 qubits as the central system, coupled to a non-Markovian environment at finite temperature, where the temperature, energy gaps, spectral broadenings, etc. are all of the same magnitude.

I know how to do this for a closed quantum system -- quantum Monte Carlo a la AQUA@home. How would you do it with realistic noise?
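
Just to fix ideas on the closed-system baseline Geordie refers to (and only that: no 1/f noise, no bath, no finite temperature), a toy exact-state-vector sweep of H(s) = (1-s)H_i + sH_f with Z-basis readout might look like the sketch below. Everything here (function names, schedule, the 4-qubit instance) is illustrative, and of course it does not scale to 128 qubits.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(op, j, n):
    """Embed a single-qubit operator on site j of an n-qubit register."""
    return reduce(np.kron, [op if k == j else I2 for k in range(n)])

def anneal_and_sample(n, h, J, T=50.0, steps=500, shots=10, seed=0):
    """Sweep H(s) = (1-s) H_i + s H_f with a linear schedule s: 0 -> 1,
    starting in the ground state of H_i, then sample Z-basis readouts.
    Closed system only: exact 2^n state vector, no noise, no bath."""
    H_i = sum(op_on(X, j, n) for j in range(n))
    H_f = sum(h[j] * op_on(Z, j, n) for j in range(n))
    H_f = H_f + sum(J[i, j] * op_on(Z, i, n) @ op_on(Z, j, n)
                    for i in range(n) for j in range(i + 1, n))
    _, V0 = np.linalg.eigh(H_i)
    psi = V0[:, 0].astype(complex)              # ground state of H_i
    dt = T / steps
    for k in range(steps):
        s = (k + 0.5) / steps
        w, V = np.linalg.eigh((1 - s) * H_i + s * H_f)
        psi = V @ (np.exp(-1j * w * dt) * (V.conj().T @ psi))  # exact step propagator
    probs = np.abs(psi) ** 2
    probs /= probs.sum()
    rng = np.random.default_rng(seed)
    outcomes = rng.choice(2 ** n, size=shots, p=probs)
    return [format(int(b), f"0{n}b") for b in outcomes]

# Toy instance: 4 qubits with random biases and couplings
rng = np.random.default_rng(1)
n = 4
h = rng.uniform(-1, 1, size=n)
J = rng.uniform(-1, 1, size=(n, n))
print(anneal_and_sample(n, h, J))
```

The open-system question Geordie actually asks (non-Markovian bath, 1/f plus ohmic spectral density, finite temperature, 128 qubits) is exactly what this sketch leaves out.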

"What I would want of a quantum simulation is to predict the output probability distribution I see in the experiment. "

Just to clarify the problem what you probably really want is to be able to sample from this distribution, right? Or do you want an estimate of a given amplitude, or estimate of an expectation value?

Geordie, that word "MOST" is a tricky one ... in medicine and biology we think that "most" quantum systems are molecular-scale, and are immersed in an aqueous environment at 300 Kelvin.

In this class of systems we find that symplectic flows (i.e., classical and quantum dynamics) and metric flows (i.e., measurement and noise processes) have roughly equal importance, and it seems likely (in principle anyway) that pretty much any biological process can be simulated *and* observed at the atomic scale.

The dynamics of 10 mK SQUIDs is of course very different (by design!) and it is quite a bit harder even to decide what simulation strategies would work best.

One rule of thumb is that it is generally a good idea to model Markovian noise as a covert measurement process; this quenches high-order quantum correlations that otherwise are costly to store. Another rule of thumb is that when simulating a process that is robustly insensitive to noise (like most biological processes), it is OK to add *extra* noise-equivalent measurement processes to the simulation; this has a beneficial effect similar to adding artificial viscosity in simulating the Navier-Stokes equations.
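
As a single-qubit cartoon of that first rule of thumb (Markovian dephasing unravelled as an unread, i.e. "covert", Z measurement record), a stochastic-trajectory sketch might look like the following; the rates and names are made up for illustration, and the storage payoff -- pure-state trajectories instead of a full density matrix -- only shows up in larger systems.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def dephasing_trajectory(psi0, U_step, gamma, dt, n_steps, rng):
    """One quantum trajectory: unitary steps interrupted by phase flips at
    rate gamma/2, i.e. Markovian dephasing treated as an unread Z measurement."""
    psi = psi0.copy()
    for _ in range(n_steps):
        psi = U_step @ psi
        if rng.random() < 0.5 * gamma * dt:   # a "covert measurement" event
            psi = Z @ psi
    return psi

# Averaging trajectories recovers the Lindblad dephasing of a weakly driven qubit
rng = np.random.default_rng(42)
dt, n_steps, gamma = 0.01, 500, 0.5
U_step = expm(-1j * (0.5 * X) * dt)           # one-step propagator for a weak X drive
psi0 = np.array([1, 0], dtype=complex)
rho = np.zeros((2, 2), dtype=complex)
n_traj = 2000
for _ in range(n_traj):
    psi = dephasing_trajectory(psi0, U_step, gamma, dt, n_steps, rng)
    rho += np.outer(psi, psi.conj()) / n_traj
print("residual coherence |rho_01|:", abs(rho[0, 1]))
```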

But neither of these strategies applies to the case you describe, namely non-white (1/f) noise acting on a low-temperature system whose dynamics are exquisitely sensitive to noise (coupled SQUIDs).

Perhaps one way to summarize the situation is this: low-temperature quantum systems---like RF SQUIDs and ion traps---are at the cutting edge of research because they are hard to simulate and yet are (relatively) easy to observe; conversely, high-temperature quantum systems---like biological molecules and spin microscopes---are at the cutting edge of research because they are hard to observe and yet are (relatively) easy to simulate.

In the (recent) past, biologists have been trained to conceive of dynamics (both classical and quantum) in terms of geometric flow and PDEs, whereas physicists have been trained to conceive of dynamics in terms of combinatorics and linear vector spaces. But nowadays these traditional geometric/combinatoric divisions are evaporating --- Terry Tao's current blog topic From Bose-Einstein condensates to the nonlinear Schrödinger equation is a good example.

One thing seems pretty certain IMHO ... we'll all be thinking and learning plenty about these new ways of conceiving quantum dynamics in coming years.

Dave: Practically what you'd want is the ability to sample from a predicted probability distribution given some open quantum systems description.

John, I think Geordie is totally right. Or to put it differently, if you've got it all figured out, could you maybe tell me what the phase diagram of, say, 20 electrons in a 30 site Hubbard model is? Nothing complicated. Do that in a convincing way and everyone will be very impressed.

Actually, I do think DMRG, PEPS, MERA, TEBD, and others are some of the most amazing ideas in modern science. But, currently they have their limitations. And people do somehow manage to work on them without telling stories about unrelated mathematics, but rather by writing code and getting results.

Matt, your Hubbard model example points to one of the areas that simulationists *don't* (presently) understand well, namely, the geometry of fermionic state-spaces.

Partly this is a matter of training ... my generation of physicists was never taught that Slater determinants were (geometrically speaking) simply Grassmannians via the Plücker embedding ... the consequence is that the vast mathematical literature on Grassmannians has never made good contact with the vast physics literature on fermionic condensed matter.

To see why this is unfortunate, let's think about concentration-and-pullback simulation methods as proceeding in two stages: (1) put the system in contact with a thermal reservoir that concentrates quantum trajectories onto a low-dimension state-space, then (2) pick a coordinate system that implements the musical isomorphism efficiently on the geometry of that pulled-back state-space.

From this point of view, designing efficient simulation algorithms is all about knowing the algebraic geometry of concentrative flows onto Kählerian state-spaces, because good choice of pullback geometry does much of the hard work of simulation for us.

It is step (2) that we don't (presently) know how to do very efficiently on the Grassmannian join manifolds that characterize fermionic state-spaces. Fortunately this is changing pretty rapidly ... progress in recent years has certainly been amazing ... and there is every reason to foresee that this progress will continue (or even accelerate) ... because there is so much still to learn!

Issues of scientific style enter too. It's been very illuminating for me, this fall, to regularly attend the David Baker Group's molecular simulation seminars. Mathematically speaking, there's no point in attending these seminars, because they're trying to solve an NP-complete (NP-hard?) problem, namely polymer folding. And the speakers cheerfully admit that they can't simulate generic polymer folding.

But they run their simulations anyway, because heck, natural proteins aren't generic polymers -- they're designed by evolution to exhibit enthalpy-entropy compensation (meaning, as a natural protein unfolds, its enthalpy increases faster than its entropy).

The complexity-theory consequence is that biological simulationists live in a world in which many (most?) simulation problems are formally NP-hard, but nonetheless these problems are in practice solvable with classical resources ... and this is exactly the world that Dick Lipton likes to talk about.

The holy grail of simulation biology is a world in which all structures larger than (say) 0.5--1.0 nm are directly observable, and all structures smaller than 0.5--1.0 nm are dynamically simulatable ... and there is good reason to think that we may be pretty close (meaning, less than 10 years) from achieving this link-up, both classically and quantum-mechanically.

This accounts for our QSE Group's focus on spin simulations. There are about 33 water molecules in one cubic nanometer; this means that 66 protons are transmitting information to us ... and so we are very interested to simulate spin trajectories on this 2^66 dimension state-space, so as to build spin microscopes that will acquire this information as rapidly as feasible.

There is a loose parallel with the 1869 link-up of the first American transcontinental railroad, between the crews laying the tracks west (the quantum spin microscopists) and the crews laying tracks east (the biological simulationists); it presently appears that our 21st century observation-simulation linkup will occur at a dimensional scale of about (0.7 nm)^3.

John, if you can't do the Hubbard model, perhaps instead you could tell me what phase the ground state of the spin-1/2 Heisenberg antiferromagnet on the kagome lattice is? Or tell me whether the square lattice Heisenberg antiferromagnet with additional diagonal couplings has a first order, direct second order, or two second order transitions between AF and VB phases? No fermions in either of those. But, otherwise they are simple, only two states per site, so should be no problem. Those are just a couple models from the enormous selection of strongly interacting non-one-dimensional systems where we really can't do things yet.

Matt, is the physics of Heisenberg antiferromagnets really so hard to simulate numerically? When I do a full-text search for arxiv preprints containing "Heisenberg antiferromagnet numerical" there are more than two thousand hits ... and many of these articles describe highly ambitious numerical simulations.

It is interesting that in biological research nowadays, simulations have definitely *not* replaced experiments. Quite the opposite: if a group is contemplating (say) one hundred candidate experiments, then nowadays they simulate all one hundred of them, and then run the ten (or five, or one) experiments that are most likely to yield interesting results.

A typical pipeline is to design (by simulation) one hundred synthetic enzymes, express (expensively) the ten best candidates, and be happy if one good-performing enzyme results.

In practice, simulation-based planning of experiments improves the odds of success sufficiently that research groups that don't incorporate numerical simulations into their overall research strategy are no longer competitive; they are incapable of even planning, much less completing, experiments of the same sophistication as the simulation-based groups.

To again use the analogy of the 1869 transcontinental railroad, today's observational biologists (microscopists, crystallographers, spectroscopists) are the westbound track-layers, and today's synthetic biologists are the eastbound track-layers ... and today's simulationists are the surveyors who determine where the track-layers are destined to meet up!

As for the "Golden Spike" (i.e., the joining of the tracks that enabled high-throughput transcontinental travel), this represents a capability of reliably observing all biological structures larger than (say) 0.7 nm, and simulating the dynamics of all biological structures smaller than (say) 0.7 nm ... I append a BibTeX reference to a well-written review article by Subramaniam and Milne that fills in some of the (many) technical details.

The point is, today's simulationists (both classical and quantum) are playing in biology roughly the same role as the surveyors of 1869 played in determining where, when, and how the Golden Spike got pounded ... in which respect, the role of today's simulationists is one of many vital roles, in a vast science-and-engineering enterprise that (IMHO) will largely shape the twenty first century.

Recognizing that it is a sensitive question, and that opinions differ, is the situation nowadays in condensed matter physics really all that different? And how does quantum computing fit into this analogy?

To my mind, quantum computers today are rather like airplanes in 1869 ... far-sighted folks could see them coming, but neither the technology of 1869 nor the understanding of fundamental physics was mature enough to build them ... and in the meantime exciting technologies like railroads, steamboats, and telegraphs offered plenty of action.

Mark Twain being a far-sighted guy, he wrote vividly about both the practical realities of nineteenth century surveying (in his 1871 travelogue Roughing It) and about nineteenth century visions of air travel (in his 1894 novel, Tom Sawyer Abroad).

The bottom line (for me) is that our twenty first century has many striking parallels to Twain's nineteenth century ... and definitely the opportunities of our century are equally exciting!

@article{*,
  author  = {Subramaniam, S. and Milne, J. L.},
  title   = {{T}hree-dimensional electron microscopy at molecular resolution},
  journal = {Annual Rev. Biophys. Biomol. Struct.},
  volume  = {33},
  pages   = {141--155},
  year    = {2004}}

John, almost all those simulations are either quantum Monte Carlo, which is limited to systems with no sign problem, or exact diagonalization on small systems (36-40 spins typically), or DMRG. So, the first case (QMC) does not include the example problems I mentioned since there is a sign problem, the second case is too small to answer the questions I mentioned, and the third case is mostly limited to 1d, but also some small 2d systems. There are some PEPS, MERA, etc. simulations also, but yes, this problem really is that hard to simulate, and no, we really do not know the answers to those questions, despite a lot of numerical effort. And really, this is simply because these are strongly interacting quantum systems---I could name many other examples where numerics fail.

I don't know that we're disagreeing, matt, because selection effects so strongly condition which physical systems a given research community considers "interesting" or "typical".

For pragmatic reasons, engineers, biologists, chemists, and astronomers often focus on systems whose dynamics are (relatively) easy to simulate. And for equally pragmatic but quite different reasons, physicists, cryptographers, and computer scientists often focus on systems that are (even in principle) hard to simulate.

As for mathematicians, in my experience they will happily work in any field in which there is good mathematics to be found! :)

It's not just that physicists care about problems that are hard. Mostly they care about real problems with fermions. But if you make the excuse that you can't do these problems because you can't do fermions, then I had to ask if you can do any strongly interacting system. It seems to me like the answer is no. But if you can do something strongly interacting in a novel way, then please show it! People would listen.

On the other hand, other people, using methods like DMRG, PEPS and MERA are making progress on these problems in a much more quiet way.

Matt, what is it that defines a strongly interacting system? E.g., classical hard spheres interact strongly ... and folks have been simulating them for ages.

As for techniques like DMRG, PEPS and MERA ... aren't these good examples of concentration-and-pullback simulation methods? With the concentration arising from contact with a low-temperature reservoir? And the pullback being onto (multilinear) tensor network states that have been sliced and chopped in various ways, so that the natural geometry of the state-space increases the algorithmic efficiency of the computation?

In a mathematical sense, perhaps there is nothing really new here ... it's rather that nowadays more researchers are choosing to conceive both classical and quantum simulation techniques along lines that integrate geometric and PDE toolsets with algebraic and combinatoric toolsets.

So in consequence, it seems (to me) that there is abroad in the land a more widespread appreciation that classical and quantum mechanics (when viewed as symplectic flows) are mathematically similar, and that this similarity increases when the dynamical trajectories are pulled-back onto low-dimension, non-linear state-spaces ... as happens ubiquitously in both classical and quantum simulations.

All of these ideas have been around since the 1970s (or even earlier) ... what is happening nowadays is that these ideas are being more tightly integrated into larger-scale research enterprises ... and there is (of course) a much larger math-and-physics literature from which to draw ideas.

It seems (to me) that these trends are very good news, for everyone who wants to see more jobs created in mathematics, science, and engineering ... and this is most especially true for younger researchers.

Well, since this thread is winding down, please allow me to thank Matt for his to-the-point comments on the difficulties of simulating 2D fermionic systems, which led me to a short (2-page) review by Matthias Troyer of ETH Zurich, titled Some recent advances in numerical algorithms for quantum many body problems.

Troyer's review is largely consistent with Matt's comments, and it concludes (optimistically, and yet with a wink too!): "Perhaps the holy grail, reliable, accurate, and unbiased simulations of large 2D fermion clusters, is coming within reach!"

My main impression, after a first read-through, is that the community of (mostly classical) molecular simulationists thinks about (what seem to be) similar issues relating to simulatability, using their own idiomatic language of "enthalpy-entropy compensation."

Thanks, everybody! Wonderful topic, Dave!

Hi everyone! I don't much follow these "blog" things that all the young people use nowadays. But someone told me there was a discussion here about my new result with Alex Arkhipov, so I thought I should take a look.

Firstly, I apologize that we don't yet have the preprint ready! But I'm grateful for all the interest. I think most questions have already been answered above, but for the record:

(1) Our result says the following. Suppose there exists a polynomial-time classical algorithm that approximately samples from the output distribution of any quantum circuit -- even a quantum circuit that involves only non-interacting bosons, e.g., linear optics with single-photon sources and a single nonadaptive measurement. Then P^#P = BPP^NP, so in particular the polynomial hierarchy collapses. This is a less dramatic consequence than P=NP (though formally speaking P^#P = PH is incomparable with P=NP, as far as we know). From a complexity standpoint, though, it's arguably a more dramatic consequence than factoring being in BPP.

As an extension, even if the sampling problem is solvable in probabilistic polynomial time with a PH oracle, we still get that P^#P = PH and hence PH collapses.
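
For the record, one standard way to see why an equality like P^#P = BPP^NP collapses PH (Toda's theorem gives the first inclusion; the relativizing Sipser-Gacs-Lautemann bound gives the last):

```latex
\[
  \mathrm{PH} \;\subseteq\; \mathrm{P}^{\#\mathrm{P}}
  \;=\; \mathrm{BPP}^{\mathrm{NP}}
  \;\subseteq\; \Sigma_3^{\mathrm p}
  \quad\Longrightarrow\quad
  \mathrm{PH} = \Sigma_3^{\mathrm p}.
\]
```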

(2) This is not the same as the result in my paper BQP and the Polynomial Hierarchy. There I showed that there exists an oracle A relative to which FBQP^A ⊄ FBPP^{PH^A}; that is, there exist black-box relational problems that are solvable efficiently in BQP but not in PH.

By contrast, the new result with Arkhipov is not relative to an oracle. Indeed, it uses a non-relativizing technique (namely the random self-reducibility of the permanent), and I conjecture that the result itself doesn't relativize (though I haven't actually constructed an oracle where it fails). In that sense, the new result gives stronger and more convincing evidence for the power of BQP than my earlier one.

On the other hand, I can't resist mentioning that the relativized result was the direct inspiration for the unrelativized one. It seemed so striking to me that one could use the hardness of the #P-complete MAJORITY problem to prove the hardness of a quantum sampling problem, that I wondered if the same phenomenon couldn't be exhibited in the "real," unrelativized world, which for some reason people seem to like better.

(Ironically, those who dismiss oracles as irrelevant fictions create their own fictional world, where things like the above don't happen... :) )

(3) Yes, it's certainly been our experience that many interesting fermionic/bosonic systems are easy to describe but hard to simulate classically! The question we address is to what extent one can base that intuition on "standard" complexity assumptions.

Best,
Scott

Thank you for that illuminating post, Scott ... many people (including me) are looking forward to your and Alex Arkhipov's preprint ... and congratulations on this very interesting new approach to thinking about quantum complexity.

Over on the math-centric Secret Blogging Seminar (SBS), Scott Morrison has launched a thread on the (IMHO) related topic of Quantum Mechanics and Geometry; I have mentioned the Aaronson/Arkhipov work there, and it is clear that the SBS cabal of mathematicians will take great interest in the preprint when it appears.

So far, it is mainly mathematicians posting on SBS ... perhaps some of the Quantum Pontiff's subjects would care to visit the SBS' exotic blogosphere?

Oh yeah ... as a followup ... in a post on the SBS, I introduced the eponymous nomenclature "Aaronson/Arkhipov apparatus."

This was partly an impish impulse; a maxim that we teach medical students is "never let your name become an eponym" ... and so introducing the eponymous phrase "Aaronson/Arkhipov apparatus" may perhaps have been a disservice to Scott and Alex.

But *mainly* it is a gesture of respect ... it seems to me that Scott and Alex have achieved that uncommon hat-trick -- an exciting new idea that can be embodied in a new class of theorems *and* in a new class of experiments *and* (perhaps) in new classes of simulation and/or cryptographic algorithms.

To me, this seems wholly admirable. Congratulations again!