The LHC is coming, and it's time to place your bets. What do you do? (Fun though it may be, shooting the hostage doesn't really help here.) We're committed Bayesians (for the sake of this post, at least), and we want to assign a probability that the LHC will see supersymmetry. More generally, we have a set of possibilities for our observable physics, and we would like to assign probabilities to each. This is called the problem of finding a measure. Since the theory of eternal inflation with its "bubbling universes" is the context where the multiverse often comes up, this is often referred to as "measures for eternal inflation".
To approach this problem, let's return to the drug side effects situation of the previous post on this subject. To come up with the probability of 5%, we tested the drugs on a number of people and saw how many experienced side effects. Can we do this with our multiverse? Well, the obvious thing to do is to look at each possible universe (or "valley") in the multiverse and assume that they are all equally likely. This is called a "counting measure", and one can use it to derive a probability distribution for physical parameters. There is, however, one major problem with it. If there are an infinite number of possible universes, a counting measure does not exist. Since the total probability must be one, the probability for any given universe is zero. (There's no continuum here, so probability densities can't save you.) Oops.
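To see concretely why no uniform assignment can work over infinitely many universes, here is a minimal sketch (my own illustration, not part of the original argument):

```python
from fractions import Fraction

def total_probability(p, k):
    """Total probability when each of the first k universes gets weight p."""
    return p * k

# If each of infinitely many universes gets probability p = 0,
# no partial sum ever reaches 1:
assert total_probability(Fraction(0), 10**12) == 0

# If each gets any fixed p > 0, the partial sums eventually blow past 1:
assert total_probability(Fraction(1, 10**6), 10**7) > 1

# No single value of p makes the infinite total equal 1, which is why
# a counting (uniform) measure on infinitely many universes cannot exist.
```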
Now, you might argue that assigning an equal probability to each universe isn't the right thing to do. After all, we're not universes; we're observers. Thus, we should weight each universe by the number of observers. This is hard to do, but one can guess various proxies for the number of observers. Popular examples are the amount of free energy or the amount of cold matter. People have tried to implement this sort of weighting, but I don't see how it escapes the above problem. Instead of using a counting measure on universes, you're using a counting measure on observers. But it's still a counting measure, and if there are an infinite number of observers, you're still SOL. It seems to me that to proceed along these lines, one must either choose something other than a counting measure (and somehow justify that choice) or postulate a finite number of possible universes or observers.
There's another aspect of the story that we have neglected, however. As discussed previously, in many theories, the multiverse is populated by a sort of reproductive process. It seems reasonable that this would affect your choice of measure. In Lee Smolin's hypothesis of cosmological natural selection, for example, black holes produce new universes and, consequently, one might imagine that universes with a large number of black holes are more plentiful. But, as with so many things in this story, this is hard to make precise. If the multiverse keeps reproducing forever, one has an infinite number of universes. In order to obtain a measure from this (and solve the above problem with counting measures), we need a limiting process that we can compute with. An intuitive idea would be to look at the (possibly finite) distribution of universes at a given time and then take the large time limit. The problem with this is that, in general relativity, there is no global notion of time. Depending on how we decide to choose the sequence of time slices in our limit, we can obtain completely different answers.
I make no claims to originality with any of these arguments. They are well-known to most of the experts on this subject, and they have a variety of responses. I think some people believe that there is a right answer -- that if we understand the theory and the philosophy well enough, we will be able to say that this is the correct measure and begin to place our bets. Most people I talk to, however, take a much more pragmatic approach. Given some abstract principle that leads to a measure, some will say that we should consider the choice of principle as part of the theory. Thus, if the experiments turn out other than how we made our bets, the combination of physical theory and measure principle is falsified. (Lee Smolin's principle wherein one postulates that we live at a local maximum of the measure/fitness function is a variation on this, avoiding questions of probability by sheer force of axiomatics.) Others say in a similar vein that we should plug away, and if we find something that works, we get to be happy. It's possible, I suppose, that one of these measure papers will lead to a series of predictions that get vindicated at each opportunity. Who would want to argue with success? Finally, some have tried to look for quantities that end up being independent of the choice of measure, thus sidestepping much (but perhaps not all) of this philosophical morass.
I want to end with an odd consequence of such reasoning called the Doomsday argument. As I argued above, there's nothing particularly special about a given point in time, so if we are to consider ourselves as generic humans, we should consider all humans throughout time. The human population has been undergoing exponential growth recently, and if this were to continue, there would be far more people in our future than in our past. Since the average person should have roughly half of humanity born before them and half of humanity born after them, it would seem that either we're not very average, or the exponential growth of humanity will come to an end. Soon. In other words, the end is nigh. Doomsday approacheth. Repent.
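The arithmetic behind the Doomsday argument is easy to simulate. This toy model (my own, with invented numbers) just assumes the population doubles every generation:

```python
import random

random.seed(0)

# Toy model: the population doubles each generation for G generations.
G = 40
births_per_gen = [2**g for g in range(G)]
total = sum(births_per_gen)  # 2**G - 1 humans ever born

# Over half of all humans ever born live in the final generation alone:
assert births_per_gen[-1] / total > 0.5

# Sample random birth ranks: a "typical" human overwhelmingly finds
# themselves within the last two doublings before the end.
samples = [random.randrange(total) for _ in range(10_000)]
late = sum(1 for r in samples if r >= total // 4)
assert late / len(samples) > 0.7
```

Turned around: if your birth rank is typical, the total number of humans is probably only of order twice your rank -- which, under continued exponential growth, puts the end uncomfortably close.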
Or not. I just can't make myself take this very seriously, although many people do. Take Ken Olum's version of this. Consider all observers in the multiverse. Surely a fair fraction of them would live in an interstellar civilization that will have many more people than our measly six billion. Well, then, where are my spaceships? I find the lack of spaceships very disappointing. Or take the "simulation argument" mentioned in John Tierney's latest column and discussed subsequently around the internet. If our hyperadvanced descendants are performing lots of simulations, who'd be simulating boring stuff like us? Wouldn't there be lots of simulations a whole lot cooler? You know, magic, spaceships, lots of kick-ass martial arts maybe. Fun stuff.
Why stop with simulations, anyways? The philosopher David Lewis has put forth the idea of modal realism, that all possible worlds exist. The mother of all multiverses. Greg Egan's Permutation City on some seriously strong steroids. Counterfactuals become a piece of cake, but oy vey. It all seems like it could be a lot of fun, I suppose, but it's not for me. As for where you stop, well that's up to you. If not in this universe, then in the next.
I want to end with an odd consequence of such reasoning called the Doomsday argument.
The Doomsday Argument makes Baby Jesus cry, and drops the apparent IQ of otherwise intelligent people like Robin Hanson by something like 20 points.
And I think that Aaron is confusing the Goldilocks Enigma with mediocrity.
Well, I stated earlier that the really important thing in probability is not the chance you'd get to stage one, but what to expect once you're already there. As they say, someone had to be the Queen of England, so it's silly for her to be surprised that "Hey, I'm the Queen of England! What was the chance it would be me?!" The interesting questions come with: if you find yourself already the Queen, then what else is most likely to be the case? There are various applications to the questions at hand; I will leave them for now. The example I like, pretty much as I said earlier (and so glad to see a physicist mentioning modal realism by name):
Modal Realism is an acknowledgment of there being, serious as a heart attack, no genuinely logical way to define "existence" above and beyond logical description. IOW, no way to define "matter" aside from the structural descriptions of it, other than appeal (ironically) to metaphysical issues like the realness of our experience, etc. Remember, "exist" is not a predicate like round etc, but that's only the start of the problem. I defy anyone to define "exist" in a non-circular, strictly logical way. Talk of kicking chairs etc. is cheating and ineffectual since that is either an experiential distinction, or can be part of the description of a model platonic world anyway.
Now you might say, no big deal, since you can imagine just thinking of "the universe" as being pure mathematics/structure (which is evasive since it leaves out experiential qualities, but I digress.) The trouble then is, you have to admit all the other "descriptions" (not just "simulations") as being equally pseudo-real as well, like it or not. That's what Max Tegmark says he believes in, roughly. Then, you've got a mess on your hands.
Here's the problem: All possible worlds really means all possible descriptions. If so, one has a vanishing Bayesian probability of finding oneself in a world that continues to be lawful instead of one of the infinitely more that were like this up to this point and then begin to diverge. Why? Because of all the changes from then on to different laws and variations and distortions of laws that can be described, and indeed the entirety of what behavior can be described after that point which certainly includes a gigantic set of chaotic futures, etc. It's like once you're in the world of "already came up heads 100 times in a row" or similar, then even so it's not likely the subsequent flips will continue to be orderly.
Hence, I think there really needs to be a "manager" of some sort, to ensure placement in effect of observers like us in a world that really has laws, since logical possibility is just too inclusive. Think of that as you wish. (Not to mention, our having experiences etc., but that gets into consciousness issues and I am just making the argument relating to physical conditions and our being here.)
If not that, or if not full modal realism but the "Landscape" of string theory, then perhaps talk about multiple universes is relevant in a less grandiose and still "scientific" way. If the underpinnings of our world promote variation in kind, then Bayesian-type reasoning about what to expect, given what we already see, seems unavoidable. I still have to ask, without an a priori concept of what "ought" to "exist" (which concept still doesn't have clear meaning, as seen...), what will you hang that on? It's OK to use deduction on down from what we see as a working hypothesis, but the workings of "the" world can't really be built logically from the ground up.
An interesting series of posts. I particularly liked the clear handling of the often muddled together different anthropic principles by keeping to the tautological AP and adding the principle of mediocrity.
Oh, here and there are some arguable details. Like complaining about the sociological effect of deciding for one theory that can always be replaced by a better one. Or first mentioning a falsifiable approach and then calling the line of attempts a philosophical morass. (And then traipsing into some real such morasses. :-P)
But why quibble about details? I have a specific question instead:
It seems to me that a similar challenge to what is going on here is seen in cladistics. Maximum likelihood methods, including Bayesian ones, are among those used to establish the tree diagram of phylogeny. There is a unique common descent phenomenon that we can access, but it still permits tests of the method, the evolutionary theory, and the specific observed model.
The tests would consist of two similar procedures. Either by sequencing more genomes or finding more fossils of already found populations and see if the tree (or rather, the small subsets of likely trees) changes much. Or by adding more populations and see if the resulting set of trees changes much.
Here we have a unique inflationary universe that we can access, but using the same principle (say, the entropic one) throughout, it seems to permit tests of the method, the inflationary theory, and the specific observed model.
Maybe any tests would be analogous to the biological case, adding data or finding more parameters to be roughly explained by the chosen measure.
The difference, and the challenge, seems to me that here there would be some parameters that may be decided by unknown constraints. (Hopefully given by a better, more complete, theory.) But perhaps the physicists concerned could call this "the current best model" if they wish?
Or are the two cases too dissimilar?
An interesting series of posts. I particularly liked the clear handling of the often muddled together different anthropic principles by keeping to the tautological AP and adding the principle of mediocrity.
Thereby willfully ignoring the *fact* that there is no "tautological" anthropic principle without assuming the mediocrity that the multiverse provides.
Unless of course, you have the dynamical principle that explains the structuring of the universe from first principles, while proving that we are just a consequence of this natural physics, rather than the specially relevant reason for it that the evidence **most apparently** provides.
"Keeping to the [ad hoc theoretical assertion] for [a] plausible tautological answer to the guy standing over the dead body holding a smoking gun"... is more like it.
Prove definitively that your multiverse is necessary to the ToE, or quantum gravity... or you're just tossing off in the face of reality.
So, you said this was a three part series right? I had some questions I was holding until the end, so I guess I'll just start tossing them out now:
So you've given us here the general overview of the multiverse/anthropic selection concept. Fine. The thing, though, is that by themselves these ideas simply aren't usable -- postulating just that there is a multiverse doesn't do much by itself; you have to postulate some specific multiverse, or there really isn't anywhere you can go from there except to discuss purely philosophical generalities. For example, you suggest in this post that we could try to calculate the probability or distribution of different kinds of universes within the multiverse in hopes of figuring out what an "average" universe within the multiverse looks like. Okay, but we obviously can't even begin to ask that question unless we've first defined in some way the set of universes which the multiverse contains!
Given this, I'd consider the generic idea of a multiverse landscape much less interesting than considering some particular example of a multiverse landscape theory, say the string theory landscape. You've been going to great pains here to keep your statements about multiverses general and avoid talking about the string theory landscape in specific, and I understand why you're doing this, but the thing is the string theory landscape is the only one anyone really cares about. Meanwhile, although the multiverse concept doesn't seem particularly useful in the absence of the string theory landscape (or the evolutionary universe, or the modal reality "everything" set, or whatever), it's quite possible the string theory landscape is useful even if one rejects the multiverse concept utterly: After all, maybe there's a universe selection principle after all and we just haven't thought of it yet.
So, here's my "question for a string theorist", which this thread may or may not be the right place to ask in:
--- --- ---
Getting away a bit from the idea of a multiverse in general, how exactly is the string theory landscape in specific defined? What does a single "point" in the landscape correspond to? What, so to speak, are the landscape's boundaries, and what kind of variation should we expect to see over the breadth of the landscape?
--- --- ---
Popular descriptions of the landscape seem to imply that the landscape exists because string theory gives different results depending on what geometry you choose to use for the "extra" dimensions, and that the landscape is basically supposed to be a collection of every conceivable way of folding up those extra dimensions. (I somehow got the impression, which may be wrong, that each of these different ways of folding dimensions can be described as a configuration of our universe's "brane" within a larger-dimensional space.) Wikipedia's page on the string theory landscape meanwhile says:
The large number of possibilities arise from different choices of Calabi-Yau manifolds and different values of generalized magnetic fluxes over different homology cycles.
As I understand this, the Calabi-Yau manifold part is just the colloquial thing about the way the universe's extra dimensions are folded. So Wikipedia seems to be saying that each "universe" in the landscape is uniquely defined by some way of configuring the dimensions, paired with some collection of values describing something I don't understand about "homology cycles" (???).
Is this correct? And if this is correct, then another question: Why is it that each of these points in the landscape is commonly referred to as a "vacuum"? Wikipedia explicitly describes the landscape's points as "false vacua", and you often hear people talk about a "vacuum selection principle". I've seen this same terminology used in the context of something like the eternal inflation theory, where (as I understand things!) there's something like a multiverse where each inflated bubble of universe winds up settling into a different quantum ground state for the vacuum. I think I get what we mean when we identify each of these universe bubbles by its vacuum state. But I don't understand why we'd use the same terminology for the Calabi-Yau configurations that make up the string theory landscape. Is there a connection I'm missing between the "vacua" in the string theory landscape and the vacuum states in other cosmologies?
Finally, given however it is that these "vacua" are defined, what kinds of variation should we expect to see between the different vacua? For example, is the variation limited to different values for fundamental constants, or should we expect to see entirely different kinds of sets of particles, or entirely different laws of physics, coming forward as emergent behavior from the vacuum picked? In this blog post you gave the example of using some kind of landscape calculation to determine the "probability" that we live in a universe with supersymmetry; is this something that's actually a measurable feature of the string landscape, in the sense that the landscape is allowed to contain universes both with and without supersymmetry? In general which laws of physics, or kinds of particles, could potentially wind up being allowed to vary between different points in the landscape? Do we know enough about the string landscape yet to answer that question?
This all said, if these questions are kind of too deep to really cover here in this blog, is there some other more appropriate source I should be looking at to get answers to these questions?
I'd like to expand on the "real science" insight here. Indeed, if the underpinnings of our world promote variation in kind (different kinds of laws, etc.), then Bayesian type reasoning (about what more to expect given what we already see) seems unavoidable. For example, if there's a "Landscape" of possible ways for the universe to turn out, given "strings" as the fundamental building block, etc., then we have to ask: if we are in a certain region that's possible from that matrix, what likelihood for other features? I mean, if 10% of Landscape-possible universes (total including their chance of existing, not just as portion of description space) with our currently known properties should also have property X, then "at random" there's a 10% chance our world has property X - do you agree? That isn't too far off the usual practice of doing an experiment and talking of likely errors etc. It can even be tested to some extent: See how many of the predictions come true, of course.
PS: It would be nice to have a check for "remember my data," to keep from typing in N,EA,URL all the time.
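The conditional reasoning in the comment above can be sketched in a few lines (the census and the 10% figure are entirely invented, for illustration only):

```python
from fractions import Fraction

# Hypothetical census of landscape universes that already share our
# known properties, weighted by their chance of being realized.
universes = [
    (Fraction(1, 10), True),   # (weight, has additional property X)
    (Fraction(9, 10), False),
]

total = sum(w for w, _ in universes)
p_x = sum(w for w, has_x in universes if has_x) / total

# Conditional on what we already observe, P(X) is just the weighted
# fraction of matching universes that also carry X -- here, 10%.
assert p_x == Fraction(1, 10)
```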
I've only got a few minutes right now, Coin, so I won't be able to give your questions the full attention they deserve quite yet.
The term "vacuum" is really a misnomer that's fairly pervasive in this field. There isn't a notion of a vacuum in general relativity. You should think instead of a solution to the classical (or quantum-corrected) equations of motion. String theory seems to present a large number of these solutions (although there are still some gaps in the constructions), which is what people mean by the landscape. You can then look at the physics of small perturbations around these solutions, and these perturbations describe the physics we observe. This can be very, very different from our present physics. There can be different forces, different particles, different symmetries, etc. There are some conjectures (called the "swampland") about what effective theories are not possible to obtain through string theory, but there is not yet anything approaching even a physicist's notion of rigor.
In string theory, one can construct broad classes of solutions, but by no means can we construct all of them. Nonetheless, you can try to do statistics of the solutions we know about presently. What conclusions one wishes to draw from such statistics very much depends on the physicist.
Having discussed this a bit after dinner today, I believe I have come up with a foolproof argument against the simulation argument.
Would not any simulation you would want to do involve giant robots, I ask you? I think the question answers itself.
It was later suggested that it would be even better if the giant robots did kick-ass martial arts. Maybe they could turn into spaceships, too.
"if we are to consider ourselves as generic human" is the problematic part. There is no reason to assume that we are typical humans. Similarly, there is no reason to assume that humans are typical observers. James Hartle and Mark Srednicki recently published a paper in Physical Review A arguing that typicality assumptions are not justified. I find their argument convincing. If they are right, it would also mean that counting measures of observers are not a 'natural' measure.
As I was saying above, I think what matters is this, not the observers inside the universe: the chance that a universe having a collection of properties also has other properties as well (for example, a 10% chance of additional property X, based on how many universe "sproutings" that already have the first properties would also carry X). That leads to real predictions and possible tests, albeit probabilistic in nature.
My question became wordy, since I copied and pasted an earlier comment. Shorter, and hopefully answerable by knowledgeable people:
Assume the question of measure could be solved. Other sciences (biology, for one) successfully study unique objects by statistical methods, both frequentist and Bayesian, and test by adding more data and "parameters" (populations and phylogeny traits).
The difference is that they know that constraints are subsumed by their measure. (I.e. maximum mass of animals et cetera is incorporated in observing phylogeny.) Is it still a valid analogy, or would a less than complete characterization of parameters by the chosen measure invalidate the AP "project"?
In other words, is the lack of a measure really the only problem here?
it would be even better if the giant robots did kick-ass martial arts.
:-) Proof by anime.
Thereby willfully ignoring the *fact* that there is no "tautological" anthropic principle without assuming the mediocrity that the multiverse provides.
I'm referring to Aaron's earlier post:
The key to this is the principle of mediocrity, sometimes called the Copernican principle. This goes beyond the anthropic principle (which is essentially tautological) and deep into the philosophical swamp.
As Aaron defines it in his review of Peter Woit's book:
the anthropic principle, the statement that we definitely exist,
Depending on your definitions, you could or could not include mediocrity here. I prefer Aaron's treatment which adheres to more useful praxis of local definitions where possible.
The rest of your comment seems to be an example of what I described as "muddled together" principles - there are more than the one narrowly discussed, and they address likelihoods and outcomes as Aaron described in his post.
This discussion brings up something a little unreal about probabilistic theories. The predictions made by quantum mechanics are probabilistic, and those predictions are borne out by experiment. But there is a certain sense in which probabilistic theories don't make any predictions. A probabilistic theory of coin flips doesn't say that if you flip a coin a number of times, half the time it will end up "heads" and half the time it will end up "tails". It says that the set of all infinite histories for which the fraction of flips that end up "heads" is different from 1/2 has measure zero.
This fails to be a testable prediction for two reasons: First, we never make an infinite number of coin flips, so it's not clear that the asymptotic ratio is ever relevant. Second, the theory only predicts what happens in the case of "typical" histories; how do we know our history is typical? (Yes, the "atypical" histories have measure 0, but that doesn't make them impossible.)
So, in practice, to make a probabilistic theory testable, we have to augment its predictions with the assumption that we can safely ignore sufficiently tiny probabilities. But in a multiverse in which everything possible happens, somebody is going to experience results contrary to the probabilistic predictions. How do we know it's not us?
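The tension in the comment above shows up in a quick binomial computation (a sketch of my own, not from the comment):

```python
from math import comb

n = 100  # fair coin flips

# Every specific sequence of n flips has the same probability,
# all-heads included -- tiny, but not zero:
p_sequence = 0.5**n
assert p_sequence > 0

# The probability that the observed fraction of heads lands far from
# 1/2 (here, at most 30 heads out of 100) is small only in aggregate:
p_atypical = sum(comb(n, k) for k in range(31)) * 0.5**n
assert p_atypical < 1e-4

# The theory never forbids the atypical histories; ruling them out
# requires the extra assumption that sufficiently small probabilities
# can be safely ignored.
```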
Aaron writes: There isn't a notion of a vacuum in general relativity.
Can't a vacuum be defined in GR as any region for which the Einstein Tensor G_mu,nu vanishes?
That's something of a mismatch between General Relativity and every other kind of physics, it seems to me. In Newtonian physics or quantum physics, the zero of energy seems arbitrary, but not in GR.
As I've been saying at conference talks, and in science fiction: WE may see things through an Anthropic Principle, but giant intelligent worms see things through a Vermic Principle, and superconducting computer life at liquid helium temperatures (which became prevalent in our cosmos fairly recently as the cosmos cooled by expansion) sees things through the principle that the universe seems finely tuned to allow for the right combination of liquid helium and easily isolated metals.
Stephen Baxter is another science fiction author who points out that competing forms of life can try to change the overall structure of the cosmos to benefit their forms of life.
Hence the physical constants that we measure may be some sort of Nash Equilibrium between us, the worms, the cryocomputers, his "Photino birds" and the like, in the great cosmic political battle for generalized Darwinian fitness.
Daryl, you make good points. As you imply, probability claims are not "falsifiable" since there cannot be any specific outcome that counts as "disproof" of such a claim. Even worse: probability becomes almost undefinable if it can change with time, for obvious reasons of comparing results of trials to expectations.
I actually make those precise points about QM in the second post of the series.
For the GR question, this is a situation where a word gets used for multiple things. A vacuum solution in GR is as you say, but that's different than the notion of vacuum in QFT which is the lowest energy state. Since there's no global notion of energy in GR, that doesn't make much sense.
What is the minimum apparatus needed to measure WHICH of 10^500 universes we live in?
NIST (what used to be the National Bureau of Standards) measuring every physical constant as accurately as possible?
NIST plus Google and newspapers of the world on microfiche to see which terrestrial History we have (i.e. is Elvis still alive or not)?
Although I publish Mathematical Physics, and my wife is a Physics professor, this question is on the fuzzy boundary between Cosmology and Science Fiction.
So, how do we know where we are, or are we Lost in the Multiverse?
"The philosopher David Lewis has put forth the idea of modal realism, that all possible worlds exist."
Others have gone further, and asserted some ontological reality to "imaginary worlds" that violate the laws of logic which apply in our universe. That is, not only all possible worlds exist, but all impossible ones too.
Someplace to start in this strange branch of literature is:
Vladimir I. Markin and Dimitry V. Zaitsev, "Imaginary Logic-2: Reconstruction of a Version of Outstanding Nikolai Vasiliev's Logical System". The aim of the paper is to present a formal reconstruction of a recondite version of Nikolai Vasiliev's imaginary logic. Vasiliev regarded this version to be an 'interpretation' of imaginary logic in terms of 'our real' logic. The core idea was to associate a concept, considered as a set of characters, with each term and to treat syllogistic constants as denoting intensional relations between concepts. This variant of the imaginary logic differs essentially from its main version; it has another class of laws. A semantics of a 'logic of concepts' based on Vasiliev's preformal intuitions is formulated, and a syllogistic-type calculus axiomatizing the set of valid formulae is constructed.
Vladimir I. Markin, "Embedding of Nikolai Vasiliev's Imaginary Logic into Quantified Three-Valued Logic". The aim of the paper is to provide a formalization of Nikolai Vasiliev's imaginary logic and to establish its metatheoretical relation to quantified many-valued logic. A syllogistic-type calculus IL, which axiomatizes the set of imaginary logic laws, is formulated. A natural translation of the affirmative, negative, and "indifferent" (contradictory) statements of the imaginary logic into the language of quantified three-valued logic is offered. It is proved that the IL system embeds into three-valued quantified Lukasiewicz logic.
I thought this could have some relevance to your posts :)
I would love to post one on 'the many worlds theory' too
But this one gives one some perspective. Mathematics may be the 'Universal language' :) But that doesn't necessarily make it graspable. :) So keep on thinking..
This multi-dimensional place we are living in can be very confusing at times. On Earth they had found it so confusing that they finally decided to create laws explaining not only how to live, but also the very laws of nature. They started with the macroscopic laws. That was quite easy, as all could see the results of, for example, gravity, and then work backwards, with the help of the properties inherent in the phenomena, to calculate acceleration and create definitions relating to weight, etc. Most of the phenomena one could see on an everyday basis were quantifiable. When Earth discovered electricity and how light might work, things got complicated again. Suddenly it seemed that the laws controlling our everyday reality were, at best, only statistical.
They were mostly true but not really, so to say. Take the duality of light: it was as right to see light as a particle as to see it as a wave, and you could prove both concepts in repeatable experiments. When you went down to the quantum level, nothing of what we knew from our 'normal' observations was seen to be true any more. Everything at that level just seemed to be about probabilities.
They called it the fuzzy level, just because probability ruled. One could expect that in a place where everything was possible, microscopically hanging on threads of probability, it should also show itself in our everyday life. But it didn't and doesn't, which we all should be grateful for, because if it had, there probably wouldn't even have existed 'tailless ones'. With the help of equations describing how electromagnetic forces interacted and flowed, they tried to create diagrams and define new simplified forces that mathematically described those various couplings. A new type of quantum field theory came onto the scene that explained the weak nuclear force by uniting it with electromagnetism into electroweak theory, and it was shown to be renormalizable -- that word meaning that at last they could create a norm where all deviations were explainable.
That is, the infinities could be absorbed into a redefinition of a small number of parameters contained in the theory. Then similar wisdom was applied to the strong nuclear force to yield quantum chromodynamics, or QCD. The magicians, sorry, physicists, smiled as this theory was also shown to be renormalizable. Which left only one force to be accounted for: gravity. And gravity couldn't be turned into a renormalizable field theory, no matter how hard, or for that matter what spells, one tried.
At the same time as the physicists tried to understand the universe in terms of particles and waves combining and interchanging, the mathematicians, who worked inside their own fields of theoretical frameworks, started to look at topology as described mathematically. There had been a discarded theory of something called a dual resonance model, an attempt to describe the strong nuclear force. The dual models were never that successful at describing particles, but they were actually quantum theories of relativistic vibrating strings, and they displayed very intriguing mathematical behavior. As a result, dual models came to be called string theory.
The question Earth's mathematicians finally asked themselves was: could string theory be a theory of quantum gravity? String theory worked on a smooth theoretical plane of two dimensions, a two-dimensional space-time of its own, where the division between space and time depended upon the observer. The strings themselves were one-dimensional. And all that was okay if you happened to be a mathematician, but for those tailless ones that weren't, it just didn't make any sense.
Try to imagine a square-shaped string, five hundred feet wide, with a height of another five hundred feet, and a length of, ooh, a thousand feet. Easy, huh? Good. What you have is an object that is three-dimensional: height, width, and length. Now let's play! We take one dimension away, let's take width, and away it goes! We now have a 'string' that you can see clearly if you stand to the side of it, because from there it will still have its height and its length in space, you agree? Good. Now jump into my flying machine and take a good look at it from up above. Hey! It's gone!
Why? Well, we took away its width, didn't we? There is no width to it, so seen from above it doesn't exist, considering that we are looking at it from our normal three-dimensional space with time as the fourth dimension. The same would happen if you saw that string straight from its front or end: it just wouldn't exist. Which in a way seems to turn this two-dimensional string into a zero-dimensional one, depending on where you look at it from. The headache of it all, ahh.
It could, of course, represent a solution to all those problems of storage limitations. We just take one dimension away and presto! We can fold the whole universe into our wallet. Nice. But one-dimensional? Strings? If you can envision them, tell me how. Yet those same scientists who frowned at paranormal phenomena were quite happy to accept string theory. Don't ask me why. Without knowing it, the tailless ones were defining their own magic.
And Earth's scientists and the common magicians had one thing in common: what worked, worked! Even though terminology and explanations differed, they were all result-oriented. Which is a good thing if you want something to happen. What was really changing was that, without thinking about it, the tailless ones at last started to feel, if not overjoyed, then at least ready to accept the universe, now that they had defined it.
For as they would prove to you, by relating to mathematical concepts and standardized, repeatable experiments, this is how it works. By the way, those one-dimensional strings that Earth's magicians invented: they had not only placed them under enormous tension, they also expected them to move, and those movements through time swept out a two-dimensional surface of their own that those magicians named a 'worldsheet'. And magic it was, of a seldom-seen complexity. Most worlds used to magic never created such interwoven 'logical' laws for how their magic should work; only the tailless ones had found it so obsessively important.
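For the curious: that sweeping-out of a surface can be written down as one of those interwoven 'logical' laws. This is a standard textbook sketch of the notation (not something from this comment itself): the worldsheet is described by embedding functions $X^\mu(\tau,\sigma)$, and the simplest spell for the string's motion, the Nambu-Goto action, just says the string moves so as to minimize the area of the surface it sweeps out:

$$S_{\mathrm{NG}} = -T \int d\tau \, d\sigma \, \sqrt{-\det\left(\partial_a X^\mu \, \partial_b X_\mu\right)},$$

where $\tau$ and $\sigma$ are the two worldsheet coordinates (roughly, time along the motion and position along the string) and $T$ is that enormous tension mentioned above.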
Remember that the properties of magic lie in your intent; your choice of creation and use decides what that magic will do, and be. Because of the framework and concepts resting behind their magic, supporting and grounding it, the tailless ones could now, without understanding, create frightfully strong new spells.
It's mine :)