“It's always seemed like a big mystery how nature, seemingly so effortlessly, manages to produce so much that seems to us so complex. Well, I think we found its secret. It's just sampling what's out there in the computational universe.” -Stephen Wolfram
In the mid-20th century, computers allowed us to explore a brand new idea: that a discrete space, with a simple set of rules and straightforward initial conditions, could evolve in steps to create a rich, life-like environment. While many of us have played or seen simulations of Conway’s Game of Life, a deeper idea is at the core of such a simulation: that at a fundamental level, the Universe itself may be nothing more than a similar cellular automaton.
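The rules themselves are almost trivially simple; for readers who have never seen them in code, here is a minimal sketch of one Game of Life update (written with NumPy on a wrap-around grid; the grid size and starting pattern are arbitrary):

```python
import numpy as np

def gol_step(grid):
    """One generation of Conway's Game of Life on a wrap-around grid."""
    # Count each cell's eight neighbors by summing shifted copies of the grid.
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # A dead cell with exactly 3 neighbors is born;
    # a live cell with 2 or 3 neighbors survives; everything else dies.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

# A glider: five live cells that crawl diagonally, repeating every 4 steps.
g = np.zeros((8, 8), dtype=np.uint8)
g[1, 2] = g[2, 3] = g[3, 1] = g[3, 2] = g[3, 3] = 1
for _ in range(4):
    g = gol_step(g)
print(int(g.sum()))  # still 5 live cells, now shifted one cell down-right
```

From three lines of rules, structures emerge that move, collide, and reproduce; that emergence is the whole point of the analogy.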
Ed Fredkin started with a simple idea in the 1960s: that digital information could represent reality, and that bits of that information in different states and configurations could correspond to what we perceive as different particles in our physical Universe. Developed further by John Wheeler and Jacob Bekenstein, and later taken to a quantum level to incorporate the full nature of the Universe, it’s conceivable that both matter and energy could be illusions. If the “It from Bit” hypothesis is true, only digital information would truly be real.
Is it possible that this is how our Universe actually works at a fundamental level? That the whole shebang is nothing more than a cellular automaton?
IMHO the universe/reality must be a cellular automaton quantum computer (CA QC) operating at Planck scale. I have written a lot about this in comments here and in my blog:
Conway's Game Of Life is probably the most famous CA but I think CA used for fluid simulation (like FHP or LBM) would be a much more relevant example for this topic.
Like I said here some time ago, I think Black Holes must be made of Planck particles. I think that idea is compatible with their entropy being the (event horizon) surface area divided by the Planck area.
(But I think that is the entropy of the BH as seen from the rest of the universe outside/around it. The full entropy would be the BH volume (inside the event horizon) divided by the Planck volume.)
The Universe being a CA would also explain the seeming incompatibility between QM and Relativity.
In CA used for fluid simulation, the future is unpredictable at micro (particle) scale but predictable at macro scale (where it shows Navier-Stokes behavior); just like the future being unpredictable at quantum scale but predictable at relativity scale in the real universe.
And just like what happens in CA used for fluid simulation, future events in the real universe become more and more predictable as we look at the world at higher and higher scales.
For example, consider how certain future events are at our daily-life (Newtonian physics) scale vs. how certain they are at macro scale, like the motion of Earth in its orbit tomorrow.
A good example of uncertainty at the Newtonian scale is weather: how well we can predict temperature, air pressure, humidity, wind, etc. for any location on Earth 24 hours in the future. Compare that to how well we can predict the location, velocity, acceleration, etc. of Earth in its orbit 24 hours in the future.
As for why Black Holes must be made of Planck particles: because those are theoretically the smallest possible particles.
(Also, the Wikipedia article on them says they already show up naturally in some theoretical QM calculations (which I think is similar to how complex numbers showed up in solutions of some polynomial equations long before mathematicians actually discovered them).)
Imagine the gravity of a BH compressing incoming quantum particles until they cannot get any smaller.
Since there's Wolfram's quote at the beginning, it seems fitting to give a link to his lecture on the topic of this post. I saw it years ago and it really intrigued me.
Like it or not ("not" prevails here!) it's time for a little primer from the philosophy of science on this subject.
Ethan: " it’s conceivable that both matter and energy could be illusions. If the “It from Bit” hypothesis is true, only digital information would truly be real."
http://fitelson.org/164/realism.html (Edited for brevity and emphasis, my caps.)
Opposed to scientific realism... are a variety of antirealisms... Recently two others, INSTRUMENTALISM and constructivism, have posed special challenges to realism. Instrumentalism regards the objects of knowledge pragmatically, as tools for various human purposes, and so takes RELIABILITY (or empirical adequacy) RATHER THAN TRUTH AS SCIENTIFICALLY CENTRAL. A version of this, FICTIONALISM, contests the existence of many of the objects favoured by the realist and regards them as merely expedient means to useful ends. Constructivism maintains that scientific knowledge is socially constituted, that 'FACTS' ARE MADE BY US. Thus it CHALLENGES THE OBJECTIVITY OF KNOWLEDGE, as the realist understands objectivity, and the independent existence that realism is after. Conventionalism, holding that THE TRUTHS OF SCIENCE ULTIMATELY REST ON MAN-MADE CONVENTIONS, is allied to constructivism."
From "People Who Ask":
"Realism, at its simplest and most general, is the view that entities of a certain type have an objective reality, A REALITY THAT IS COMPLETELY ONTOLOGICALLY INDEPENDENT OF OUR CONCEPTUAL SCHEMES, linguistic practices, beliefs, etc."
"In that sense, INSTRUMENTALISM IS DIRECTLY OPPOSED TO SCIENTIFIC REALISM, which is the view that the point of scientific theories is not merely to generate reliable predictions but to DESCRIBE THE WORLD ACCURATELY. Instrumentalism is a form of philosophical pragmatism as it applies to the philosophy of science."
Note for those not familiar: Ethan is an instrumentalist, and this philosophical bias (opinion) colors everything he presents in this blog as "facts."
"Realism" can be thought of as a philosophical theory answering the old question which we called the "Problem of Authority": how can we justify the claim that it is rational to believe scientific explanations? The realist answers by saying the ultimate authority which justifies the rationality of scientific beliefs is simply that they are true in the sense of "truth" as a relation of **correspondence between what we believe to be the case and what in reality is the case.**" (My **)
It sounds like the theorists have come full circle, from deriding the idea of a created universe, because it was so quaint, culturally gauche and beneath them, to embracing the idea of the universe being someone else's simulation.
...so much for scientific progress.
Anyone who has worked seriously with computer programming and modeling knows that what computers do internally to process data isn't what the universe does, much less what thinking humans do. Representation, much like allegory, is a powerful tool, but when it is abused it can mislead more than inform.
...embracing the idea of the universe being someone else’s simulation.
Where was it ever stated that someone out there is running such a simulation, or even that it is a simulation at all? What we are speaking of here is the underlying nature of reality itself, and if reality itself behaves as a cellular automaton as hypothesised, then by definition it is not a simulation at all!
@Anonymous #10: Exactly so. What I find more interesting about this is that cellular automata are (not by definition, but certainly by usual construction) strictly _local_ in construction. That is, the state transitions of a given cell are determined by the states of their near neighbors. Since quantum mechanics has been demonstrated (by violation of Bell's inequalities and such equivalents as CHSH) to be fully non-local, it's not obvious to me how a CA formulation can give rise to the quantum mechanical features of reality which we actually observe.
"While many of us have played or seen simulations of Conway’s Game of Life, a deeper idea is at the core of such a simulation: that at a fundamental level, the Universe itself may be nothing more than a similar cellular automaton."
You don't get it.
Don't look at the CA but what the CA does.
That is actually covered in the article:
"Feynman showed that a quantum system could not be fully simulated using a classical computer and classical algorithms, rather one needed what came to be known as a quantum computer. Instead of bits, it would be based on qubits"
Earlier thinking assumed reality was a CA on a classical computer system, which is clearly impossible.
But what if we are talking about a CA on a quantum computer, where each cell (at Planck scale) is a qubit or a qubit register (multiple qubits)?
Then compatibility with everything in QM is really possible.
Of course there are many important details to work out, like how many qubits would be needed for each cell; how many other cells each cell needs to be connected to; whether each cell is connected just to its adjacent neighbors, or to M levels (distance) of neighbors, or whether all cells need to be connected to all others; etc.
Also, I think earlier thinking assumed a CA at QM scale (particle size), which I don't think could work. It needs to be at Planck scale (the smallest possible size known in physics).
(Which would mean each quantum particle is actually a cluster of information (similar to a quasiparticle).)
If mankind occupies what it is here called a cellular automaton, what difference does it make? It's our habitat and we have to live in it whether we like it or not. Sooner or later we will know for sure and when that happens, it will all go away and we will be free to create another automaton.
Technically it is possible to simulate a quantum-mechanical system using a classical computer and classical algorithms: it is just computationally expensive to do so. All known algorithms for doing this take space and/or time exponential in the number of particles, which means that simulating even a relatively small number of particles would be intractable.
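To put a rough number on that blowup (a back-of-the-envelope sketch; the 16 bytes assumes double-precision complex amplitudes): a classical simulator that stores the full quantum state needs one amplitude per basis state, so memory doubles with every qubit added.

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory to hold the full state vector of n qubits:
    2**n complex amplitudes at 16 bytes each (complex128)."""
    return (2 ** n_qubits) * bytes_per_amplitude

print(statevector_bytes(30) // 2**30)  # 16 GiB for just 30 qubits
print(statevector_bytes(50) // 2**40)  # 16384 TiB (16 PiB) for 50 qubits
```

Fifty particles is nothing physically, yet already beyond any realistic machine; that is the exponential wall.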
Can you give an example?
@Frank -- multiple great comments!
#16: Using CA for fluid dynamics (and even MHD) is a perfect example of what I had in mind. The interactions are necessarily local (shear, pressure and friction between adjacent fluid elements), and the CA formalism is ideal for that. Thanks for the lovely videos!
#13: But that was my point. If you implement a CA while baking in existing quantum mechanics as a precondition, then you really aren't learning anything new. You're just making a computationally convenient implementation of QM (cf. lattice QCD).
Wolfram's arm-waving, as I understand it, is that a _classical_ CA (local, deterministic rules) can "somehow" reproduce the full physics we observe in the universe. By definition, that includes non-local, non-classical correlations (a.k.a. Bell's inequalities). But Bell's theorem, which is a proper maths theorem, not just physics, says you can't do that with any local formulation.
Now, I am not a _theoretical_ physicist, so there is probably something deep here that I am missing, but I just don't see the magic step where you get from a local, deterministic CA to non-local, non-classical QM.
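Just to make the numerical gap concrete (a sketch using the standard singlet-state prediction E(a,b) = -cos(a-b) and one conventional choice of analyzer angles): Bell's theorem bounds any local deterministic model at |S| <= 2, while QM reaches 2*sqrt(2).

```python
import math

def E(a, b):
    """QM correlation for spin measurements on a singlet pair at angles a, b."""
    return -math.cos(a - b)

# One standard set of CHSH analyzer angles.
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(round(S, 3))  # 2.828 = 2*sqrt(2), above the local-realist bound of 2
```

Any local CA has to somehow bridge that 2 vs. 2.828 gap, which is exactly the step I don't see.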
@Elle H.C. 18:
This was in the news recently:
"Finding exact solutions to such problems numerically has a computational cost that scales exponentially with the size of the system, and Monte Carlo methods are unsuitable owing to the fermionic sign problem. These limitations of classical computational methods have made solving even few-atom electronic-structure problems interesting for implementation using medium-sized quantum computers."
They managed to simulate the chemistry of the BeH2 molecule using a primitive quantum computer, a problem previously intractable using classical algorithms.
"It will have quite significant memory requirements. The local version will offer up to 32 qubits, but to do this will require 32GB of RAM. Each additional qubit doubles the amount of memory required. The Azure version will scale up to 40 qubits."
I am no CA expert, but I think if we have a local CA quantum computer, and all currently known particles are quasiparticles (clusters of information) created by that CA, then explaining non-local particle interactions like entanglement is no problem.
Because those quasiparticles would still be the same quantum particles we know, which includes the ability for non-local entanglement. Think about how any two particles get entangled in the first place. Are those interactions really impossible in a local CA QC?
@ Anonymous Coward
I'm not an expert, but I don't think you can simulate quantum systems with classical computers and algorithms, even in principle. Sure, you can solve the equations and get statistical predictions (like the Monte Carlo method), but you won't get an answer for what is definitely going to happen, only probabilities of what might happen.
IMO if one is able to simulate a quantum system, i.e. a double slit experiment, or quantum eraser, on a classical computer (not counting the CPU cost), that would mean that there is no "actual" quantum uncertainty, and that nature is fully deterministic.
@ Michael Kelsey #19
"Wolfram’s arm-waving, as I understand it, is that a _classical_ CA (local, deterministic rules) can “somehow” reproduce the full physics we observe in the universe"
That's not how I understood him. But I might be wrong. The way I see it, it's more along the lines of Mandelbrot and fractals. In a sense, there are emergent properties in nature; some of them behave like fractals, and some behave like CAs. I don't think anyone is saying or thinking that, i.e., plant growth is determined only by some fractal formula (disregarding biology, physics, etc.), or mountain formation, or riverbeds, or a host of other things. But it is self-evident that fractal patterns emerge in those examples. In the same way, it seems that certain processes in nature follow the paths of CAs (a set of initial conditions that then grows into something which isn't repetitive in a naive sense; some of his examples do nothing until they reach some trillionth iteration, then they almost inexplicably create patterns).
So I think he's saying that for whatever reason, there seem to be emergent properties in nature that can be very nicely described with CA systems. Not that they can explain everything.
p.s. IMO his arm-waving is more about taking a deeper look into CAs, into why and how those properties emerge, and maybe finding some deeper rules/laws about them. Not just considering them interesting oddities.
IMHO there is no doubt a CA QC (running at Planck scale) could recreate QM at its own scale (average particle size) as an emergent property, but even then it would still be a big question whether it would also (automatically) recreate Newtonian physics and relativity physics at their own scales (also as emergent properties).
Maybe there are many possible CAs that can recreate QM but not NP or RP. Maybe there are a few CAs that can recreate QM and NP but not RP. And maybe there is only one CA that can recreate QM, NP, and RP altogether (at their own scales), who knows :-)
@Sinisa & Frank: Thank you! I'm not sure I agree with everything, but it's good stuff for me to think about.
Mh, this is more about using a quantum computer to simulate molecules and QM interactions.
It's not the same as what CA does, which simulates a medium as a whole. What you're referring to isn't exceptional in itself, because you can simulate the double slit with QM.
What's special is that CA is about emergent behavior, and that's also the problem with Frank bringing up LBM: for those kinds of simulations there is a power input. Take for example a heat source. In GoL, on the other hand, there is no real input; it's a zero-player game and things keep fluctuating on their own.
"IMO if one is able to simulate a quantum system, i.e. a double slit experiment, or quantum eraser, on a classical computer …, that would mean that there is no “actual” quantum uncertainty, and that nature is fully deterministic."
That was also what the argument by Einstein vs. Bohr was about: that there is an aether and nature is deterministic.
My opinion is that a CA with fixed cells won't work, but one with flexible cells might work, and that eventually there is a logical explanation for 'spooky' QM effects.
Einstein's aether and determinism are in no way related.
"that’s also the problem with Frank bringing up LBM: for those kinds of simulations there is a power input. Take for example a heat source. In GoL, on the other hand, there is no real input; it’s a zero-player game and things keep fluctuating on their own."
Not sure what the problem is. In the first video I gave as an example, there is continuous input because the goal was obviously to get a continuous fluid flow. In the second video there is no such thing. The liquid starts moving and later slows down, just like what would happen in the real world if someone did the same experiment with real water.
In our world we have continuous liquid/gas motion because Earth keeps getting energy input from the sun.
Also, like I said before, if someday we create a CA that runs on a quantum computer and recreates all known quantum particles (as its own quasiparticles), it could recreate everything in QM, including atoms, gases, liquids, solid materials, and all kinds of chemical and nuclear reactions.
It can be done with fixed cells because what really moves are clusters of information, just like the (moving) quasiparticles created in solids(!) today in real physics experiments.
Lol, you are where I was 8 years ago … it's all very naive … I have done the things you are talking about, but they don't work because of randomness … LBM is random, GoL isn't, because it has a set of organizational rules …
What is the problem with randomness?
Hidden variable theory (which Einstein hoped for) has already been disproved, which means any kind of CA trying to reproduce QM would need perfect randomness. There is no way around that.
@Sinisa Lazarek #22: No, it is perfectly possible to simulate quantum mechanical systems using a classical computer. The only problem is that as the number of particles you are trying to simulate increases, the amount of memory and processing power required explodes exponentially, such that even a small system will rapidly become intractable even if you had the entire world's computing power and storage space at your disposal. This was something that Richard Feynman noted in 1982, and it was the original motivation for a quantum computer.
You can simulate a double slit experiment using a classical computer, but you'll have to be calculating the wave function of the particle if you're going to do it right, and that is by definition a probability amplitude. That's all you can really do to simulate any quantum mechanical system. Perfectly feasible for one or two particles, but if you have many more, well, the problem will rapidly become intractable even if you had the entire world's computing power at your disposal.
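To make that concrete, here is a toy single-particle two-slit calculation: add the complex amplitude for each path and square the sum. (All geometry and units below are made up purely for illustration.)

```python
import numpy as np

wavelength, d, L = 1.0, 5.0, 1000.0   # slit spacing d, screen distance L
k = 2 * np.pi / wavelength
x = np.linspace(-200.0, 200.0, 2001)  # positions on the screen

r1 = np.hypot(L, x - d / 2)           # path length from slit 1 to each point
r2 = np.hypot(L, x + d / 2)           # path length from slit 2
psi = np.exp(1j * k * r1) + np.exp(1j * k * r2)  # superpose the two paths
intensity = np.abs(psi) ** 2          # the probability pattern, |psi|^2

# Bright fringe at the center (amplitudes add), dark fringes where they cancel.
print(intensity.max(), intensity.min())  # about 4.0 and about 0.0
```

Note that all this gives you is the probability pattern; it says nothing about where any single particle will land.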
Nevertheless, at the current state of our knowledge, it is possible, however very unlikely, that there is some hitherto undiscovered classical computer algorithm that can simulate quantum mechanical systems without exponential blowup. That's because there is no real mathematical proof that such an algorithm doesn't exist. In formal terms, computer scientists speak of complexity classes, e.g. P, the set of all problems solvable by a classical computer in time proportional to a polynomial in the size of the input (polynomial time). The set of all problems solvable by a quantum computer with bounded error in polynomial time is called BQP. The simulation of arbitrary quantum-mechanical systems is in BQP. It is known that P is a subset of BQP, but there is no proof that P is a proper subset of BQP. The two sets could actually be equal, and we are just not smart enough to figure out that efficient algorithm yet.
If anyone is interested:
If our universe/reality is created by a CA QC at Planck scale, and QM, NP (Newtonian physics), and RP (relativity physics) are its emergent properties at different size scales, then I would think there may be yet another level of emergent properties at even higher size scales, where Dark Matter/Energy physics operates (with its own rules).
"What is the problem with randomness?"
Let's say you have a flat 'medium' to begin with, and once you start your simulation, patterns (particles) should emerge. Now, when your medium is random, no particles will emerge. In practice your LBM will stay flat, and at most chaotic when you add some vortex confinement; in contrast, in GoL you have rules out of which patterns can emerge, such as gliders. The former stays 'flat' while in the latter a 'structure' emerges.
Note, it is this context in which I use 'randomness'.
You are using a lot of terms but aren't actually saying anything. Let's look at the basics: what would cell A pass on to its neighbor cell B?
@ AC #33
Maybe we differ in opinion on emulation vs. simulation. I was under the impression that it can't be simulated even if you have a classical supercomputer the size of a solar system. So, for the sake of explanation, let's leave processing cost out and just focus on "how". I'm trying to understand how, in terms of theory. A simple thought experiment.
Consider a simple double slit experiment. And let's say we are firing just a single photon for now. QM theory tells us how to calculate the outcome of the experiment in terms of QM statistics (many photons): X photons here, Y photons there. QM doesn't tell us through which slit every single photon will go, or in fact why. There is an inherent "unknown" or randomness in it (the modern argument being there isn't a hidden (classical) variable which is unknown, but that it's a fundamental property of nature).
Now let's say we are writing a (classical) computer program to simulate this. How would you go about this? How would you simulate that quantum unknown that happens when a photon encounters the two slits, when even QM doesn't give an answer to this?
Yes, you could have the program solve the Schrödinger equation beforehand to get a statistical prediction of what will happen on the screen (not the slits) and then employ some classical random number generator to decide if the photon will go left or right when it encounters the slits. But this isn't a simulation, it's an emulation. You are not modeling the underlying properties of the system; you are emulating the result of the experiment.
This is what I don't understand, and this is why I think simulation is not possible even in principle on a classical system. Unless you have that quantum (not classical) randomness built in, it's not doing what it should be doing. This isn't about just any randomness, but the QM kind of randomness. AFAIK the statistical predictions of QM randomness and classical randomness are different; that's one of the things that are "weird" in QM.
If it can be done classically, in principle, regardless of computing cost, and it gives the exact same solution as QM statistics does, then the implication IS that the universe might very well be fully deterministic and that there is no "uncertainty" fundamentally present. That's a huge paradigm shift.
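Just to spell out what I mean by "emulation" (the amplitudes below are a made-up 4-outcome toy state, not a real double-slit solution):

```python
import numpy as np

rng = np.random.default_rng(0)  # a classical pseudo-random generator

# Step 1: 'solve' for the amplitudes (here just a made-up normalized state).
psi = np.array([0.5, 0.5j, -0.5, 0.5])
probs = np.abs(psi) ** 2        # Born rule: outcome probabilities

# Step 2: emulate individual measurement outcomes with classical randomness.
outcomes = rng.choice(len(psi), size=100_000, p=probs)
print(np.bincount(outcomes) / len(outcomes))  # each frequency ≈ 0.25
```

The statistics come out right, but every individual "choice" comes from a deterministic pseudo-random generator, which is exactly the philosophical sticking point.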
All kinds of CA need an initial state to be set up.
GoL is never started in a completely empty state.
If it were, nothing would happen. It would stay empty.
Check the rules carefully:
I never said that if our universe/reality is a CA QC, the Big Bang would happen by itself and our current universe would be reached (after it ran for (current age of the universe divided by Planck time) steps).
It would need an initial state setup (like any other CA), like creating a ball of (maximally dense) energy(ies?) as the seed.
I'm glad that you realized that LBM is not a CA and stopped talking about it.
LBM is a 2D/3D CA. That is a well-known fact.
But it seems resources on the internet are somewhat unclear about it.
I have written many little 1D/2D CA programs myself over the years. I wanted to but never wrote an LBM program myself, but I wrote FHP. I have seen other people's source code for LBM and have also read a lot about the LBM algorithm.
So I say it is definitely a CA, if you can trust me on it :-)
Where is the 'automaton' in LBM?
I had watched Stephen Wolfram's video above years ago.
Thanks to Sinisa I was rewatching it and noticed that at some point he explains the relationship between the number of dimensions and the number of neighbors in a CA.
If that is true, and since our universe has 3+1 dimensions united as spacetime, then maybe it means the CA QC that creates our reality has 4 neighbors for each cell, maybe arranged like in:
You obviously don't know much about CA.
I suggest you start from here and later research the LBM algorithm (and, not to mention, be respectful to everyone commenting here):
I see messages saying my comments awaiting moderation :-)
Or maybe the neighborhood of our universe/reality CA QC is arranged like a tetrahedron?
Multiple of my comments awaiting moderation.
CA QC? What's the point of this?
You do realize that a QC uses qubits, which are something such as the polarization of a photon. Now, to simulate a photon itself (particle/wave) with a CA or LBM-ish simulator, you probably need a grid with thousands or millions of cells.
Probably you could use a QC to do the calculations, but that's it; you could do the same calculations with a PC, only slower.
And you still need to answer my question: what would the interaction between 2 cells be? I'm guessing in your case pressure, momentum, perhaps viscosity, because you're a fan of LBM. What else?
A QC also only uses bits, you know, zeros and ones, like a processor/transistor does.
I gave LBM as a good example of the power of CA to simulate physics. I never claimed our universe/reality actually runs the LBM algorithm. LBM obviously cannot create all the known particles in the Standard Model and calculate their interactions, for example.
Also, you seem to think LBM cannot be a CA because, if it were, the words "Cellular Automaton" would be included in its name.
"Also, you seem to think LBM cannot be a CA because, if it were, the words “Cellular Automaton” would be included in its name."
LOL. No, that's not it. CA uses (boolean) rules:
The cellular automata paradigm presents some weaknesses inherent to its discrete nature. Lattice Boltzmann (LB) models have been proposed to remedy some of these problems, using real-valued states instead of Boolean variables. - http://ergodic.ugr.es/jmarro/fisico/pages/Automatas&LattBoltzm.pdf
The CA and LBM are not the same.
"Elementary cellular automaton" uses binary cells.
Not all CAs. There are many different types of CA.
In general, each CA cell can have N states if it is a discrete CA. There are also continuous-state CA types.
Each cell can even have multiple discrete/continuous state variables.
LBM is a CA, in both its discrete and continuous forms.
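To show a continuous-state CA is not a contradiction in terms, here is a one-rule example where each cell holds a real number (this is just a discretized diffusion rule I made up for illustration, not LBM itself):

```python
import numpy as np

def step(cells, alpha=0.25):
    """One update of a 1D continuous-state CA: every cell moves toward
    the mean of its two neighbors by the same fixed local rule."""
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    return cells + alpha * (left + right - 2 * cells)

c = np.zeros(64)
c[32] = 1.0              # start with a single 'hot' cell
for _ in range(100):
    c = step(c)
print(f"{c.sum():.6f}")  # 1.000000: the local rule conserves the total
```

Same structure as GoL: a lattice of cells, a local update rule applied everywhere at once. Only the cell state type differs.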
It's like multiplication and addition: they look similar but they are not the same. The math is different. It's also explained in the paper I linked to. But if you personally feel they are the same, then sure, why not, you can consider them both CFD models. I'm not going to waste any more time on semantics. Perhaps you should post a link where it's clearly stated that LBM is still a CA.
It seems my comments with links go to moderation, but I will try.
For example search Google for:
"The cellular automata approach is based on an advanced lattice Boltzmann technique for a discrete microscopic description of the fluid flow."
How about this:
LBM is a more advanced form of LGCA (Lattice Gas Cellular Automata), but it is still a CA itself.
I think what is going on is that most resources on the internet are written in a way to prevent people from mixing them up. So they are not clearly saying LBM itself is also a CA.
"I think what is going on is that most resources on the internet are written in a way to prevent people from mixing them up. So they are not clearly saying LBM itself is also a CA."
They write it that way because there is a distinction that you are not willing to accept or grasp, where LBM dissipates energy and dies down vs. a CA that keeps on 'automatically' going:
"The HPP model is a fundamental lattice gas automaton for the simulation of gases and liquids. It was a precursor to the lattice Boltzmann methods.
The model is badly flawed, as momentum is always conserved in both the horizontal and vertical lanes. No energy is ever removed from the model, either by collisions or movement, so it will continue indefinitely." - https://en.wikipedia.org/wiki/HPP_model
… therefore LBM is based on a more 'natural' process.
Think of a computer using bits and boolean operations: the program can go on automatically and won't die down, because it doesn't dissipate energy.
p.s. Post your email address on your blog and I'll email you, how you might simulate all the laws of the Universe with one basic CA-like model.
Energy dissipation does not decide what is a CA and what is not.
I was honestly just trying to help you learn and fix your misconceptions, but you have many and are adamant about keeping them. There is no point arguing anymore.
If you really think you have great ideas/thoughts but you cannot share them here, then why do you want to send me a private email and share them just with me?
I started my own blog on Blogger because I thought I could better explain some of my ideas/thoughts that way.
I have posted links to some of my blog entries here many times, whenever I felt their content was relevant.
Why don't you also start your own blog and post the links here whenever you need to? It is free.
"I was honestly just trying to help you learn and fix your misconceptions, but you have many and are adamant about keeping them."
Mh, I posted two quotes that specify how and why the schism happened between CA and LBM.
And you wrote yourself, "most resources on the internet are written in a way to prevent people from mixing them up".
Don't you think that if LBM were a CA, those 'resources' would just state so, instead of 'preventing' this from happening?
"If you really think you have great ideas/thoughts but you cannot share them here, then why do you want to send me a private email and share them just with me?"
LOL, I don't want to share them 'just' with you; it looks like you're paranoid about getting an email.
This is Ethan's blog, and it seems like we're the only ones left interested in this topic. I do have my own blog, but it is in different fragments; in an email I can have it all explained in a more organized fashion, from the ground up, based on GoL. That's all. I could turn it into a new blog post though … Maybe Ethan could also invite me to do a guest post here?
Anyway, I'm more focused on finishing my simulator to prove my idea, until then it's just talk.
Come to think of it, I don't think I have ever seen anywhere a full, precise definition of what a cellular automaton is.
(By precise I mean a definition including all theoretical possibilities for cell states and update rules etc.)
What that article is really saying applies to a classical computer, not a quantum computer.
Also, I think it may still be unclear to you that what we were talking about is how reality itself may be WORKING IN A SIMILAR WAY to a (CA) quantum computer.
But even if that turns out to be true someday, it does not mean there is an actual computer (like the ones we build) that runs reality/the universe. Nobody is claiming such a thing here.
Think of how water molecules act out mathematical (quantum) rules at micro scale (just like a computer) to create the world of fluid mechanics (which works with different laws compared to water molecules).
A fish in water would probably think the idea that water is made of extremely tiny molecules that follow the rules of QM (not the rules of fluid mechanics!) is ridiculous :-)
Sorry, it looks like it was actually CFT who talked as if we were claiming reality is someone's computer simulation. :-)
Ultimately the question is what is a computer?
Philosophically you could say that our brain is a computer: a circuit of 'transistors' that runs our thoughts and draws conclusions. Look at the advancements in 'neural' networks.
Now the argument is that you can 'never' simulate a brain with a 'computer', be it classical or quantum, because of the enormous number of cells and connections. But what if you grow your 'own' brain (computer) out of a kind of molecular, cellular DNA you constructed? Have you then, in that case, 'simulated' a brain in which zillions of different interactions happen?
And finally, let's say you create in the same way a DNA that emerges into a brain-like cellular model, but one in which proton, electron, and photon cells emerge that have quantum mechanical properties. Aren't you then able to simulate life?