One of the interestingly odd things about how people understand math is numbers. It's
astonishing to see how many people don't really understand what numbers are, or what different kinds of numbers there are. It's particularly amazing to listen to people arguing
vehemently about whether certain kinds of numbers are really "real" or not.
Today I'm going to talk about two of the most basic kinds of numbers: the naturals and the integers. This is sort of an advanced basics article; to explain things like natural numbers and integers, you can either write two boring sentences, or you can go a bit more formal. The formal stuff is more fun. If you don't want to bother with that, here are the two boring sentences:
- The natural numbers (written N) are zero and the numbers greater than zero that can be written without fractions.
- The integers (written Z) are all of the numbers, both larger and smaller than zero, that can be written without fractions.
Natural Numbers
The most basic kind of number is what's called a natural number. Intuitively, natural
numbers are whole numbers - no fractions - starting at zero, and going onwards towards infinity: 0, 1, 2, 3, 4, ... Computer scientists are particularly fond of natural numbers, because everything computable ultimately comes from the natural numbers.
The natural numbers are actually formally defined by something called Peano arithmetic. Peano arithmetic specifies a list of 5 rules that define the natural numbers:
- Initial value rule: 0 is a natural number.
- Successor rule: For every natural number n, there is exactly one other natural number called its successor, s(n).
- Predecessor rule: 0 is not the successor of any natural number. Every natural number except zero is the successor of some other natural number, called its predecessor.
- Uniqueness rule: No two natural numbers have the same successor.
- Induction rule: For some statement P, P is true for all natural numbers if:
  - P is true about 0 (that is, P(0) is true).
  - If you assume P is true for a natural number n (P(n) is true), then you can prove that P is true for the successor s(n) of n (P(s(n)) is true).
And all of that is just a fancy way of saying: the natural numbers are numbers with no fractional part, starting at 0. We usually write N for the set of natural numbers.
Most people, on first encountering the Peano rules, find them pretty easy to understand, except for the last one. Induction is a tricky idea; I know that when I first saw an inductive proof, I certainly didn't get it; it had a feeling of circularity that I had trouble wrapping my head around. But induction is essential: the natural numbers are an infinite set - so if we want to be able to say anything about the entire set, we need to be able to use that kind of reasoning to extend from the finite to the infinite.
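If you think in code, the Peano rules translate almost directly into a recursive data type. Here's a minimal sketch in Haskell (the type and the names are my own illustration, not anything mandated by the axioms): a natural number is either zero or the successor of another natural number, and induction shows up as structural recursion over that type.

```haskell
-- Peano-style naturals: a Nat is either Zero, or the successor of another Nat.
-- This mirrors the initial value rule and the successor rule directly.
data Nat = Zero | Succ Nat
  deriving (Show, Eq)

-- A few examples: 0, 1, and 3 written in successor form.
zero, one, three :: Nat
zero  = Zero
one   = Succ Zero
three = Succ (Succ (Succ Zero))

-- Induction corresponds to structural recursion: to define a function on all
-- naturals, say what it does at Zero and how to get the answer at (Succ n)
-- from the answer at n. For example, converting back to an ordinary Integer:
toInt :: Nat -> Integer
toInt Zero     = 0
toInt (Succ n) = 1 + toInt n
```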
To give an example of why we need induction, let's look at addition. We can define addition on the natural numbers quite easily. Addition is a function "+" from a pair of natural numbers to another natural number, called their sum. Basically, we define addition using the successor rule of Peano arithmetic: informally, m + n = 1 + (m + (n - 1)) - that is, the sum of m and n is one more than the sum of m and n's predecessor. So formally, addition is defined by the following rules:
- Commutativity: For any pair of natural numbers n and m, n + m = m + n.
- Identity: For any natural number n, n + 0 = 0 + n = n.
- Recursion: For any natural numbers m and n, m + s(n) = s(m + n).
The last rule is the tricky one. Just remember that this is a definition, not a procedure. So it's describing what addition means, not how to do it. The last rule works because of the Peano induction rule. Without it, how could we define what it means to add two numbers? Induction gives us a way of saying what addition means for any two natural numbers.
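In code, the identity and recursion rules become the two clauses of a recursive definition. Continuing the hypothetical Haskell sketch from above (again, the names are mine, not part of the axioms):

```haskell
-- Addition on Peano naturals, defined by recursion on the second argument.
-- The clauses correspond to the identity rule and the recursion rule.
add :: Nat -> Nat -> Nat
add m Zero     = m                -- identity rule:  m + 0 = m
add m (Succ n) = Succ (add m n)   -- recursion rule: m + s(n) = s(m + n)
```

Notice that commutativity (and the 0 + n = n half of the identity rule) isn't a clause here; as a commenter points out below, those facts can be proved from the recursion by induction.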
Integers
The integers are what you get when you extend the naturals by adding an inverse rule. Take the set of natural numbers N. In addition to the 5 Peano rules, we just need to add a definition of an additive inverse. An additive inverse of a non-zero natural number is
just a negative number. So, to get the integers, we just add these new rules:
- Additive Inverse: For any natural number n other than zero, there is exactly one number -n which is not a natural number, and which is called the additive inverse of n, where n + -n = 0. We call the set of natural numbers and their additive inverses the integers.
- Inverse Uniqueness: For any two integers i and j, i is the additive inverse of j if and only if j is the additive inverse of i.
And that's just a fancy way of saying that the integers are all of the whole numbers - zero, the positives, and the negatives. What's pretty neat is that if you define addition for the natural numbers, adding the inverse rule is enough to make addition work on the integers. And since multiplication on natural numbers is just repeated addition, that means that multiplication works for the integers too.
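To make that concrete, here's one way to extend the hypothetical Haskell sketch above to the integers. This particular encoding - a non-negative natural, or the negation of a successor, so that zero has exactly one representation - is my own illustration of the inverse rule, not the only possible construction; the comments below describe another one built from pairs of naturals.

```haskell
-- Integers built on top of the Nat type and the add function defined earlier.
-- NegSucc n represents -(n + 1), so there is no separate "-0" value.
data MyInt = NonNeg Nat      --  0, 1, 2, ...
           | NegSucc Nat     -- -1, -2, -3, ...
  deriving (Show, Eq)

-- The additive inverse rule: negation swaps the two sides, and 0 maps to itself.
neg :: MyInt -> MyInt
neg (NonNeg Zero)     = NonNeg Zero        -- 0 is its own additive inverse
neg (NonNeg (Succ n)) = NegSucc n          -- -(n + 1)
neg (NegSucc n)       = NonNeg (Succ n)    -- -(-(n + 1)) = n + 1

-- A helper: diff m n computes m - n as an integer, peeling successors off both.
diff :: Nat -> Nat -> MyInt
diff m        Zero     = NonNeg m
diff Zero     (Succ n) = NegSucc n
diff (Succ m) (Succ n) = diff m n

-- Integer addition, reduced to Nat addition plus the helper above.
addInt :: MyInt -> MyInt -> MyInt
addInt (NonNeg m)  (NonNeg n)  = NonNeg (add m n)
addInt (NonNeg m)  (NegSucc n) = diff m (Succ n)
addInt (NegSucc m) (NonNeg n)  = diff n (Succ m)
addInt (NegSucc m) (NegSucc n) = NegSucc (Succ (add m n))  -- -(m+1) + -(n+1)
```

For example, addInt (NonNeg three) (neg (NonNeg one)) evaluates to NonNeg (Succ (Succ Zero)), i.e. 3 + (-1) = 2. Multiplication could then be layered on as repeated addition, as the paragraph above suggests.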
Illuminating as always!
I look forward to the reals, complexes and perhaps a reprise of the surreals.
By the way, there's an <em> tag stuck just after "integers too."
I'm at work now, so I can't check my notes, but I am reasonably sure that when I had algebra at university we usually defined N (natural numbers) without zero.
Whenever we needed 0, we would use the notation of N with a small zero to signify that we assumed 0 to be present in the natural numbers.
I can't remember if this was the norm in both algebra and analysis, or if we had different definitions depending on the professor, and or, the book.
Couldn't you just relax the special case of "0" in rule three to get the integers?
Soren:
I think you're confused with the whole numbers. Whole numbers are the positive naturals; I've generally seen N0 as a notation for the natural numbers without zero - that is, the whole numbers.
I was taught that natural numbers are the positive integers (i.e. 0 is not a natural number). What you've called natural numbers I was taught are the whole numbers. Looking at several references, the great majority agree with me, though it's by no means unanimous.
Stephen Wolfram recommends using the terms "positive integers" and "non-negative integers" to avoid confusion.
KeithB:
No, you can't just relax the zero rule - the induction principle will fall apart without it. Because we define the integers using the additive inverse, which is supported by the induction principle, we can use the induction principle for reasoning about the integers as well.
I, too, had been under the impression that zero was not a natural, or counting, number but, rather, the first in the set of whole numbers. I recall that the mnemonic for distinguishing them was that zero has a "(w)hole" in it. However, it is clear that, since Peano arithmetic is the guide post, zero must be a natural number.
Good stuff. Thanks!
What Mark is using is a variant of the Peano Axioms to define the natural numbers, although he has done it with the whole numbers, starting at zero rather than one as is the norm for the natural numbers. The initial number in the Peano Axioms is irrelevant; they work as well starting with either zero or with one. And yes, as a math teacher I have always taught that it is the whole numbers that start with zero and the natural numbers that start with one.
What exactly constitutes the natural numbers is, unfortunately, something that has not managed to become an established convention. The use of N for both "non-negative integers" and "positive integers" is fairly widespread, and the particular choice often depends on what kind of math you happen to be doing at any given moment. The notation N_0 (where the "_" denotes subscript) is totally unhelpful because it's used to mean "the opposite of whatever N means" both by people who include 0 in N and by people who don't include 0 in N.
Of course, it doesn't actually matter at all: the choice of the letter N is entirely conventional, and Mark happens to have chosen as his convention that N includes 0, which is more convenient when you're using the Peano axioms.
Mark didn't give the notation for the integers: usually, they're denoted Z. (I think it's from the German.) This allows us to generate non-ambiguous notations easily. For instance, Z_(>0) pretty clearly means positive integers, and we could replace the "greater than" sign with a "greater-than-or-equal-to" sign if we wanted to denote the non-negative integers. Z_+ is also a reasonably common notation.
Separately, it's worth noting that commutativity is actually a provable fact about addition, given rules 2 and 3 and the Peano axioms. In fact, it's an excellent example of why induction is so important: it's possible to prove commutativity from induction, but if you take an axiomatic system with a weaker axiom in place of induction, you might find that it's impossible to prove that addition commutes. (This happens, for example, in Robinson arithmetic.) Since, in most of our experience, addition *does* commute, this suggests that we really do need the axiom of induction.
Here's the problem... Quoting from wikipedia:
I come from the CS/logic background, so I was taught the zero-based definition of naturals. People from more number-theory type backgrounds were taught the one-based definition.
I'm particularly hoping you will find time to address the above point, as it seems the contrast between "numbers are for counting" and "numbers are members of a set" is often quite confusing for laypeople.
Now if you really want to blow some people's minds, you should talk about the incompleteness of the Peano axioms and the existence of nonstandard models of arithmetic.
Though the question of whether we can really know what a nonstandard model looks like is a great game played by mathematical philosophers.
Just a small note:
You write: "The integers (written Z) are all of the numbers, both larger and smaller than zero, that can be written without fractions."
That sentence could be interpreted to suggest that the number 0 is not in Z ("both larger and smaller", but not "equal to 0") even though 0 is obviously in Z.
If I'm doing addition in a set, I'd really like my set to include the additive identity. It seems the "natural" thing to do.
Besides, I think that terms like "whole numbers", "counting numbers" and all that lot were invented by grade-school teachers who needed something they could quiz their students upon, since testing knowledge in any meaningful way is just too hard.
"Susie, is 0 a counting number?"
"Well, I know I use counting numbers to count things. If I have no candy in my box, I count zero pieces of candy, so 0 must be a counting number."
"Wrong! Class, look on page 7 of your textbooks. Tommy, can you read what it says at the top of the page?"
"The counting numbers are the set 1, 2, 3, . . . ."
"Excellent, Tommy. Can everyone tell why Susie was wrong?"
I just made this up — I don't know whether a first-grade textbook will typically include 0 in the "counting numbers" (or any other set), but whichever way they choose, the problem is the same.
I have a question that might seem slightly OT but I think it fits here. I'm a sighted person working on software to interconvert print math and braille math. There are various systems for braille math -- basically they are simply linear systems like LaTeX except terser since they are meant to be read by humans.
Here's the question. Some braille math systems use the same braille characters (cells) for the letters a-j and also to represent the decimal digits 1-9 and 0. The digit use is distinguished by prefacing a digit sequence with an additional braille character, called a number sign, that indicates the change of semantics.
Other braille math systems, including the one currently used in the US, use different braille cells for letters and for digits.
Now it may seem odd to readers of this blog, but the braille authorities in Canada are seriously considering switching from the US braille math system, which they currently use, to a new one which uses the same characters for the letters a-j and for the decimal digits.
I would appreciate insights about the effect of the use of a letters-as-numbers notation on the understanding of what numbers are.
(There has already been more than enough discussion in braille circles of the obvious awkwardness of the notation in practice but this hasn't been persuasive. However, I think the much bigger issue is any possible effect on understanding.)
It would be nice to have these basics articles as you're learning them. With that in mind, I hope the next basics article is on Riemann-Stieltjes integration. :)
This "debate" goes back quite a long way. The ancient greeks distinguished "numbers" (0,1,2,3 , etc, etc) from "lengths" of lines. In modern terms, they made a sharp distinction between N and R, or if you like, between the concepts of "how many" and "how much".
Their major problem was, in essence, that they did not consider N to be a subset of R! Though they knew that you could have lengths corressponding to natural numbers, and that you could even add those lengths like natural numbers, they still considered N and R to be disjoint. This is because their entire understanding of R was through its interpretation as the lengths of line segments.
If you consider as they did, that a real number means a length of a line segment, then you can do things with numbers that don't make much sense with "lengths". You can multiply two numbers to get a number. But you can't really multiply two lengths to get a length. You can also square, cube, and put numbers to even higher powers, and still obtain a number. But you cannot do this for line segments and still obtain a line segment. Even addition and subtraction are a bit strange unless the lines are parallel.
For this reason, the Greeks never really studied true "numbers" and their properties. That's why they're remembered only for their contributions to geometry, as they made very little progress in pure algebra or arithmetic.
So the concept of a number, or different types of number, is not always so straightforward. And when you eventually meet numbers like complex numbers and quaternions, it becomes a little clearer that a "number" can be an elusive concept.
Susan:
Well, it might underscore the difference between numbers and numerals, but I think actively switching to a such a system is asking for trouble. How are the poor students ever going to handle hexadecimal numbers?
Just a nitpick here: I don't think the commutativity of addition is usually included in the definition of addition. (And neither is the rule 0+n=n; only n+0=n is true by definition.) Rather, it is proved - by induction of course:
First, prove 0+n=n by induction on n. It is true by definition for n=0, and if it is true for some fixed n then 0+s(n)=s(0+n)=s(n), so it holds for s(n) as well.
Then, prove by induction on n that s(m)+n=s(m+n) for all m.
It is certainly true for n=0, and if it is true for some given n then also s(m)+s(n)=s(s(m)+n)=s(s(m+n))=s(m+s(n)), where I used first the (recursive) definition of addition, then the induction hypothesis, then the definition of addition once more.
Finally, prove by induction on m that m+n=n+m: Clearly true for m=0, and if it is true for some given m then also s(m)+n=s(m+n)=s(n+m)=n+s(m), and we're done.
As you can see, proving stuff with Peano arithmetic can take some effort.
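For anyone who'd like to see that bookkeeping checked by a machine, here is a rough formalization in Lean 4. The Peano type, the add function, and the lemma names are my own self-contained sketch (not the standard library's definitions); the lemmas mirror the steps of the argument above, with the inductions done on the second argument throughout.

```lean
namespace PeanoSketch

-- Peano naturals from scratch: zero, and successor.
inductive Peano where
  | zero : Peano
  | succ : Peano → Peano

open Peano

-- Addition by recursion on the second argument: the identity and recursion rules.
def add : Peano → Peano → Peano
  | m, zero   => m                 -- m + 0 = m
  | m, succ n => succ (add m n)    -- m + s(n) = s(m + n)

-- Step 1: 0 + n = n is not part of the definition; it needs induction.
theorem zero_add (n : Peano) : add zero n = n := by
  induction n with
  | zero => simp [add]
  | succ n ih => simp [add, ih]

-- Step 2: s(m) + n = s(m + n), again by induction.
theorem succ_add (m n : Peano) : add (succ m) n = succ (add m n) := by
  induction n with
  | zero => simp [add]
  | succ n ih => simp [add, ih]

-- Step 3: commutativity, by induction, using the two lemmas above.
theorem add_comm (m n : Peano) : add m n = add n m := by
  induction n with
  | zero => simp [add, zero_add]
  | succ n ih => simp [add, succ_add, ih]

end PeanoSketch
```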
Oh, and to define the integers in terms of the natural numbers, some might find the Grothendieck construction more natural: The basic idea is to work with pairs of natural numbers, letting the pair (m,n) stand for the number m-n. But since many different pairs will stand for the same numbers, we must state when equality should happen. Introduce an equivalence relation ≡, saying (m,n) is equivalent to (p,q), and writing (m,n)≡(p,q), if and only if m+q=n+p. Then let the equivalence class (m−n) consist of all pairs (p,q) equivalent to (m,n). The set of all equivalence classes is the set of integers. Addition is defined as (m−n)+(p−q)=((m+p)−(n+q)), and subtraction as (m−n)−(p−q)=((m+q)−(n+p)). Identifying m with (m−0), the negative of m will be −m=(0−m), and we see that the chosen notation is consistent, since (m−0)−(n−0)=(m−n).
There are lots of details to be filled in in the above. The elegance of the procedure may not be fully realized until you discover that the exact same procedure can be used to define the rationals (with the obvious modifications to avoid division by zero).
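If it helps to see that construction executed, here is a small Haskell sketch of it (the type and the names are mine; it uses base's Numeric.Natural for the naturals, and represents an integer as a pair rather than as a literal equivalence class):

```haskell
import Numeric.Natural (Natural)

-- An integer represented as a pair (m, n) of naturals, standing for m - n.
-- Many pairs represent the same integer; equiv says when two pairs do.
newtype PairInt = PairInt (Natural, Natural)
  deriving Show

-- (m, n) is equivalent to (p, q) iff m + q == n + p (no subtraction needed).
equiv :: PairInt -> PairInt -> Bool
equiv (PairInt (m, n)) (PairInt (p, q)) = m + q == n + p

-- Addition and subtraction, exactly as in the construction described above.
plus, minus :: PairInt -> PairInt -> PairInt
plus  (PairInt (m, n)) (PairInt (p, q)) = PairInt (m + p, n + q)
minus (PairInt (m, n)) (PairInt (p, q)) = PairInt (m + q, n + p)

-- Embed a natural m as (m, 0); its negative is (0, m).
fromNat :: Natural -> PairInt
fromNat m = PairInt (m, 0)

negatePair :: PairInt -> PairInt
negatePair (PairInt (m, n)) = PairInt (n, m)
```

For example, plus (fromNat 2) (negatePair (fromNat 5)) gives PairInt (2,5), which is equivalent to PairInt (0,3), i.e. the integer -3.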
Hey guys, Unicode is a really cool thing. We can write ∀x∈ℕ: ∃y∈ℕ: y = successor(x) without LaΤεΧ. ☺
And for copy&paste use, ≤ and ≥ are also there.
Mark writes: "It's particularly amazing to listen to people arguing vehemently about whether certain kinds of numbers are really "real" or not."
The American Mathematical Society apparently feels that the ontology of numbers and other mathematical objects is nontrivial enough to include many articles on the topic in their Mathematical Reviews. A study of the history of mathematics is likely to give one some perspective on the matter and to thus make one more tolerant of the sort of arguments Mark describes.
The exception in the wording reminds me; why isn't 0 defined to be its own additive inverse and 1 its own multiplicative inverse, for simplicity?
So if "numbers are for counting" when it follows that "sets are for collections", right? :-)
Susan:
What Drekab notes on number systems used in programming et cetera may be decisive.
But you could also consider neuroscience. IIRC the other day some blog discussed results on number representation and use in the brain, as observed with fMRI; you could google it. Different representations (numbers, letters, figures) and uses (counting, estimates, speech, writing) of the same numbers corresponded to different areas in the brain, IIRC.
So, perhaps, more number representations may be good (different perspectives) but replacing number representations may be bad (need to relearn, not quite the same).
ObsessiveMathsFreak wrote:
"For this reason, the Greeks never really studied true 'numbers' and their properties. That's why they're remembered only for their contributions to geometry, as they made very little progress in pure algebra or arithmetic."
Has Diophantus been stripped of his nationality? If it's up for grabs, we'll claim him here in Missouri.
Torbjörn Larsson wrote:
"The exception in the wording reminds me; why isn't 0 defined to be its own additive inverse and 1 its own multiplicative inverse, for simplicity?"
The premise of this question is false. An additive inverse to x is a number which when added to x gives 0. Thus, by definition, 0 is its own additive inverse. (Likewise for 1 and multiplication.)
When you construct the integers from the non-negative integers, what you're doing is adjoining the additive inverses of all the integers. The exception for 0 is because 0 *already* has an additive inverse, so you don't want to add another symbol to represent it.
Thanks to Drekab and Torbjörn for very helpful feedback.
The articles on whether the brain processes numbers or numerals or both look to be fascinating from their abstracts and from online summaries. The original articles plus a Comment appear in the January 18, 2007 issue of Neuron. (http://www.neuron.org/ Subscription required.)
I still remember the day when the integers were described to me this way.
An integer can be represented as a pair of natural numbers (a,b) such that:
You get the idea. You go on to prove that the third relation is an equivalence relation, and show that addition is commutative and has an identity and so on.
This was a key idea in my mathematical upbringing: that you didn't need to understand new objects in terms of what they are, but in terms of what you already have.
Pet peeve: That construction is _not_ due to Grothendieck. For the integers it's due to Dedekind; in general it's due to someone else (Oystein Ore defined the ring case, someone else pointed out it works for semigroups -- I forget who). Grothendieck applied it to specific semigroups, but the construction is older.
Ah, Dedekind already? Thanks for pointing that out. Yeah, I knew the "Grothendieck construction" is much more general, and I suspected it did not originate with him. Good to know the real scoop.
Another way to think of the integers is as vectors, which is not so weird as it first seems. That is, the integers are got from the natural numbers by adding in the notion of a pair of directions. (Similarly the reals are directed magnitudes, much as the complex numbers are.) Then, however, the positive integers are not the natural numbers, they are positively directed natural numbers. (Similarly, Pseudonym's positive integers above would differ from natural numbers.) The vectorial approach is less algebraic, perhaps, but also perhaps more in tune with applications?
Nat Whilk:
can you provide any online links or book recommendations? I've been raising (what I think is) the ontology question to which you allude in several fora with no response and concluded that it must be too stupid to warrant one. I'd be happy to confirm that it isn't (or is, for that matter, in which case I can forget it).
tnx - charles
ctw:
I don't know what your background is, so I'm not sure what to recommend. Ontological considerations are discussed in most introductory/survey works on the philosophy of mathematics. Stewart Shapiro's _Thinking about Mathematics_ seems to do about as good a job as any at explaining the issues and how the major schools stand relative to them. Shapiro (like myself) subscribes to ontological realism, and his _Philosophy of Mathematics: Structure and Ontology_ is a defense of that position. One of the more prominent and interesting examples on the opposite side would be Hartry Field's _Science without Numbers: A Defence of Nominalism_.
nat -
thanks for the pointers. as it happens, I just tried wikipedia and resolved my question, viz:
is 1+1=2 on a par with gravity in "existing"?
the motivator being that many people (including some worldclass philosophers) say things like "that's as real/true/indisputable as 1+1=2". I don't really care what the "true" answer is, just that "no" isn't a completely off-the-wall answer. turns out my view (simplistic version, of course) actually has a name - embodied mind theory - and is summarized as:
Humans construct, but do not discover, mathematics.
essentially the words I used in a recent comment in another thread on this blog:
-c
JBL:
Yes, that is what I was getting at.
Well, just because I can write - 0 = (+) 0, I frankly don't see that there is any extra symbol made. And precisely because I can write - 0 it must still be handled as I understand it, even if it is to define it as an undefined number.
Analogous to roots; a root operation on x, say x positive real for illustration, has two solutions, + root(x) and - root(x), so I allow for - root(0) = + root(0) too.
So I guess for me my question still stands. But it is mostly a matter of taste how this is formulated.
Susan:
While I don't know how it is to be blind, I know how it is to have a blind family member. My late grandmother was blind the last years of her life. She didn't use braille much (wasn't into computers) and preferred audio books, but it sure helped her make or read notes and handle medicine dispensers. So I'm happy to see people work with this, as well as that I could help!
It's time to speak in parables. This one comes from the first volume of Isaac Asimov's autobiography, In Memory Yet Green (1979), page 214.
Torbjörn: Unrelated to anything else, is there a quick way to produce the second vowel in your name on a standard U.S. keyboard, without copy-pasting your name from your comments?
So, some actual content: "putting a minus sign in front" is not an operation that exists in Peano arithmetic, and we can define it only after we have defined the objects which we arrive at by putting a minus sign in front of the objects we already have, namely the non-negative integers. In Peano arithmetic, one can not in fact write "-0"; that's a meaningless collection of symbols. So, at the moment that we jump from the non-negative integers to the integers by defining a whole host of new symbols, we need to decide whether or not to add a symbol "-0". I suppose that there is no theoretical reason not to add such a symbol: we would just prove, as our first theorem, that "-0 = 0", and then we'd never have to use it again (or, we'd use it where it was more convenient). Now, after we have all these symbols, we can define on them the operation "taking the additive inverse", which we (for convenience, but not for any deeper reason) denote also with a minus sign, which has the property that when we apply it to 0, we get back 0 (or -0, which equals 0, if we happen to have defined the symbol -0).
I guess this is my point, re-phrased: you wrote, "because I can write - 0 it must still be handled as I understand it, even if it is to define it as an undefined number." But I think this is not really the case. The symbol "-" serves at least 3 distinct purposes: to denote subtraction of one number from another, to denote the function of "taking the negative," and as part of a set of symbols which we use for the additive inverses of the positive integers (where it has no independent meaning). I believe that when you say, "I can write - 0," you are conflating these latter two uses for the symbol "-". If we don't define the symbol "-0," you can write it to mean "the result of applying the operation of negation to 0," but you can't write it to mean "the symbol which represents the additive inverse of 0" unless we specifically define such a symbol.
"Taking the square root" is a special case of "solving a quadratic equation." Every quadratic equation has 2 roots, but for some of these equations the roots are the same. We don't use any symbol to differentiate between the two roots of x^2 - 2x + 1 even though they are the same; why should we use a special symbol in order to distinguish the roots of x^2 from each other?
I hope I'm being clear, sensible, and actually addressing your thoughts -- if I seem pedantic, it's merely because I'm trying to make sure I understand what I'm writing, and if I seem to have missed the point (or to be wrong!), please let me know!
Regarding the use of the minus sign, the following seems to be state of the art: http://www.vub.ac.be/CLWF/WON2006/Abstract_Vlassis.pdf
JBL:
A fair comment, indeed, and a considerate thought. At the moment I don't have access to such a keyboard, so I googled.
If you have a US keyboard:
"The US keyboard layout does not use AltGr or any dead keys, and thus offers no way of inputting any sort of diacritic or accent;" ( http://en.wikipedia.org/wiki/Keyboard_layout#US ).
If you have a US-International keyboard (which is basically what Sweden uses, with keys added and remapped):
"The US keyboard layout can be configured to type accents efficiently. This is known as the US-International layout.
...
Accented characters can be typed with the following combinations:
...
" then letter (ë)
...
"
I.e. ¨ then o: ö. (And I can do that too, besides using the remapped main key!)
The easiest way if you have a US only keyboard is to spell it "Torbjorn".
If you want to be phonetically true, it is "Torbjoern", though the "ö" ("oe") sound isn't really used in most English dialects. (But English speakers can learn it really well, and the basic phonemes are close.)
If you want to make an effort, the HTML code is "& # 246 ;" (remove quotation marks and spaces). [My test: Torbjörn.]
Not intentionally, but I can see how you make sense out of the combination without disallowing it.
So I concede that one doesn't have a need to point out "-0" as an exception. And then parsimony guides us to the chosen convention.
"We don't use any symbol to differentiate between the two roots of x^2 - 2x + 1 even though they are the same;"
Right. I was conflating the algebraic problem with my own practice to make it clear I have considered both roots when solving numerically.
Thank you for addressing my points! Apparently I have grown so used to my own practice so I have forgotten the underlying principles in both cases. Which is presumably why I needed someone to nudge my thoughts in the right direction.
JBL:
On second thought, I have to retract that. Parsimony in symbols is one thing, parsimony in definitions another. And I would actually go for the latter, considering the analogy with parsimony in physics. (Minimize the number of parameters, not the number of objects.)
Perhaps it is mostly a matter of taste, after all.
"We don't use any symbol to differentiate between the two roots of x^2 - 2x + 1 even though they are the same; why should we use a special symbol in order to distinguish the roots of x^2 from each other?"
I'm not sure what you mean by the roots of x^2. For positive real numbers, there is only one square root function. Namely, if a>0, sqrt(a) is the positive solution of x^2=a. So sqrt(4)=2, not 2 or -2. Otherwise, sqrt(x) would not be a well-defined function.
Once we start talking about complex numbers, then things start to get more complicated and we get "multivalued functions", which technically are not functions.
If you have a Mac, you can set your keyboard layout to US Extended (under International->Input Menu). Then hitting option-u gives the ¨ diacritic over the next vowel you type. You can get some other diacritics this way as well.
I was forced to figure this out because my advisor was Hungarian, and had the ´ diacritic on every 'a' in his name; the Hungarians I've met seem to be a little picky about that.
Davis:
I can imagine that some diacritics may make a huge difference in pronunciation or meaning.
In Swedish it doesn't, because while the sounds are different, they are close enough. We have å (phonetically 'aa', roughly), ä (phonetically 'ae', roughly), ö (phonetically 'oe', roughly). 'Torbjorn' (or 'Torbjoern') is recognizable, both by spelling and sound.
OTOH, conflating the spelling can be another problem, for example for people with the last names Häger and Hager. Here phonetic spelling helps more. ('Haeger' vs 'Hager'.)
Actually the Peano Axioms don't really provide a definition of the naturals. In fact no set of first-order axioms can define any particular infinite model, because any consistent (first-order) theory that has a model of (infinite) cardinality \kappa has a model of every cardinality greater than \kappa. This means that there are actually uncountable models of Peano's Axioms. (There are also countable models that aren't the standard one as well!)
As pointed out by Bertrand Russell in 'Introduction to Mathematical Philosophy' (and perhaps others), '0', 'number' and 'successor' are primitive terms in Peano's axioms and so aren't really defined. To obtain the natural numbers it is necessary to assume we know what 0 is, otherwise it could be identified with 100 and the axioms would still be satisfied. In fact, according to Russell, any sequence x_0, x_1, x_2, ... satisfies Peano's axioms.
Additive Inverse: For any natural number n other than zero, there is exactly one number -n which is not a natural number, and which is called the additive inverse of n, where n + -n = 0.
What is "+"? If "-n" is not a natural number then "+" is not the addition operation between naturals. It cannot be the addition operation between integers since we're just defining what integers are.
In the laboratory, a number and a length are a pair (5, meters). A pair is the fundamental unit in the laboratory. We need "meters" to specify our instrument, a meter stick. We do not measure (0, meters) because our meter stick cannot be used to measure zero. This may become a problem when an algebraic expression such as 1/t is used. As an example, the laboratory measurement requires using (1, meter)/(5, sec). Introductory physics is often named "point" physics, an incorrect name. It is "physical point" or "measurable point" physics. Point refers to zero in algebra and to a nondimensional point in Euclidean geometry.
Effectively this means that "pure mathematics" does not apply to physics, with very few exceptions. Newton's equations are an extension of analytic geometry. His gravitational equation is geometrical, specifically the surface of a sphere, 4(pi)r^2. The expression is 1/[4(pi)r^2], which clearly does not apply at zero. Neither quantum mechanics, relativistic quantum electrodynamics, nor even the standard model of elementary particles addresses this problem in a "clean" manner. My opinion is that physics will have to be reformulated mathematically starting with Euclidean geometry.