From Surreal Numbers to Games

Today we're going to take our first baby-step into the land of surreal games.

A surreal number is a pair of sets {L|R} where every value in L is less than every value in R. If we follow the rules of surreal construction, so that the members of L sets are always strictly less than members of R sets, we end up with a totally ordered field (almost) - it gives us something essentially equivalent to a superset of the real numbers. (The reason for the almost is that technically, the surreals form a class not a set, and a field must be based on a set. But for our purposes, we can treat them as a field without much trouble.)

But what happens if we take away the restriction about the < relationship between the L and R sets? What we get is a set of things called games. A game is a pair of sets L and R, where each member of L and R is also a game. It should be obvious that every surreal number is also a game - but there are many more games than there are surreal numbers, and most games are not surreal numbers.
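
To make the structure concrete, here is a minimal sketch in Python of a game as a pair of collections of simpler games. The class name and the constants ZERO, ONE, and NEG_ONE are just illustrative choices for this post, not part of Conway's notation beyond the obvious.

    # A game is just a pair of sets of games: the Left options and the
    # Right options. This is an illustrative sketch, not a full CGT library.
    class Game:
        def __init__(self, left=(), right=()):
            self.left = tuple(left)    # games Left can move to
            self.right = tuple(right)  # games Right can move to

        def __repr__(self):
            return "{%s|%s}" % (",".join(map(repr, self.left)),
                                ",".join(map(repr, self.right)))

    ZERO = Game()               # { | }  - no moves for anyone
    ONE = Game([ZERO], [])      # { 0 | }
    NEG_ONE = Game([], [ZERO])  # { | 0 }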

Games lose some of the nice properties of the surreal numbers. They are not a field. They are not totally ordered. In fact, a game doesn't even have to be positive, negative, or zero. They're very strange things.

So why would we want to break the restriction on the surreals that gives us games? Naturally, because games have useful applications in modeling many things - in particular, games (in the non-mathematical sense - games like Go, Chess, Checkers, Poker, etc).

Let's take a bit more of a detailed look at games, and how they interact.

Game arithmetic is exactly the same as surreal arithmetic: addition, subtraction, multiplication, negation - even division (which we haven't looked at yet) - are all defined for games in exactly the same way as they are for surreal numbers.
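
As a sketch of what "the same definitions" means, here are negation and addition written out in the same illustrative Python as the Game class above; the recursions are exactly the surreal ones.

    def neg(g):
        # -g = { -gR | -gL }: swap the two sides and negate every option.
        return Game([neg(r) for r in g.right], [neg(l) for l in g.left])

    def add(g, h):
        # g + h = { gL+h, g+hL | gR+h, g+hR }: move in one component,
        # leave the other alone.
        left = [add(gl, h) for gl in g.left] + [add(g, hl) for hl in h.left]
        right = [add(gr, h) for gr in g.right] + [add(g, hr) for hr in h.right]
        return Game(left, right)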

But: while surreal numbers are always either positive, negative, or zero, games can also be fuzzy. Remember, games are not totally ordered. That means that there are pairs of games (a,b) where ¬(a≤b) and ¬(b≤a) - that is, where the two games cannot meaningfully be compared. Fuzzy games are games that can't be compared to zero in this way.

What does a fuzzy game look like? One of the simplest examples is {1|-1}. Try to use the definition of "≤" to compare that game with zero - neither {1|-1} ≤ 0 nor 0 ≤ {1|-1} holds.
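
Here's the same failure, spelled out as a sketch of the "≤" test (again assuming the illustrative Game class above): g ≤ h unless some Left option of g is ≥ h, or some Right option of h is ≤ g.

    def leq(g, h):
        # g <= h iff no left option of g satisfies h <= that option,
        # and no right option of h satisfies that option <= g.
        return (not any(leq(h, gl) for gl in g.left) and
                not any(leq(hr, g) for hr in h.right))

    def fuzzy_with_zero(g):
        # Fuzzy: confused with zero - neither g <= 0 nor 0 <= g.
        return not leq(g, ZERO) and not leq(ZERO, g)

    print(fuzzy_with_zero(Game([ONE], [NEG_ONE])))  # True: {1|-1} is fuzzy
    print(fuzzy_with_zero(ONE))                     # False: 1 > 0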

Games also have some strange behaviors with respect to multiplication. If x, y, and z are numbers (with z nonzero), then x×z=y×z exactly when x=y. But with games, x=y doesn't even guarantee that x×z=y×z. Nasty, that, eh?

So what are these beasts useful for? Part of Conway's motivation was trying to analyze the game of Go (aka Wei-Chi). Go is one of the oldest strategic games in the world; it's been played for thousands of years in China, Japan, and Korea. Go is the Japanese name, which is generally used here in the US; Wei-Chi is the Chinese name for it. It's a thoroughly fascinating game.

[Image: a Go board.]

Go is a two-player game where the players have a 17x17 grid. Each move, a player puts a piece of their own color on one of the intersections on the grid. The goal of the game is to surround territory using your pieces. Whoever has the most territory at the end wins. Mechanically, it's about as simple as a game can get. Strategically, it's unbelievably deep and complex. It's frequently compared to Chess in terms of depth and strategy. It's a wonderful game.

One particularly odd thing about Go - particularly when you compare it to Chess - is how hard it is to analyze mathematically. Computer Go players are still absolutely lousy in comparison to humans, because it's so hard to get a handle on the strategy in a way that gives a computer player a reasonable chance.

Like Chess, but even more so, Go has three strategic stages, and the way you play in those stages is very different. There's the opening; the mid-game; and the end-game. Conway's games are useful for modeling Go. They're not particularly helpful in terms of strategy in the opening - but the opening doesn't need that much help. It's typically quite stylized: there are particular board positions that are extremely valuable, and in their first moves, each player typically stakes a claim to one of them. And then they start to sketch out territories very loosely according to a mostly predictable pattern. Then, at some point, someone makes a hostile move, and bingo, you're into the mid-game, and that's when strategy gets incredibly hard.

You can describe a Go game using Conway's games. Each board position is a game {L|R}, where L is the set of positions reachable by the Left player's possible moves, and R is the set of positions reachable by the Right player's possible moves. Each turn, a player takes the game corresponding to the current board position, and picks one of the games from the appropriate set. That game becomes the new board position, and is passed to the other player.

One of the things that makes this work so well for Go is that in the end-game, a Go board is mostly "filled in" - that is, most of the territory is controlled by one player or the other, and there are just a few pockets of free space left. Using Conway's games, you can take each of those free territories, and model them as games of their own. Then the game representing the full board position is the sum of the games representing the remaining free territories.
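
As a toy illustration (using the sketch code from above, with made-up territory values): if one leftover territory is worth {0|} - a free move for L - and another is worth {|0} - a free move for R - then the whole endgame position is their sum, which works out to be equal to zero, i.e. a second-player win.

    # Two hypothetical leftover territories, summed into one position.
    whole_board = add(ONE, NEG_ONE)          # {0|} + {|0}
    equals_zero = leq(whole_board, ZERO) and leq(ZERO, whole_board)
    print(equals_zero)  # True: the sum is equal to the zero game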

The observation that the Go endgame could be analyzed as a set of subgames was a prime motivator for Conway in the creation of the surreals. A representation of the games that allowed you to analyze the subgames was very desirable - but that representation also needed to be able to view the subgames as parts of the combined game. Conway was brilliant enough to see that the {L|R} pseudo-numeric structure with addition gave him a representation where the subgames could be combined using addition! From there, he just followed a fairly typical mathematician's analysis of the structure, and pushed it to see what it could do - and discovered the restriction that gave him an alternate construction of the real numbers. But the games actually came first!

Anyway - once you've got the construction of surreal numbers and games, you can use them to analyze the kinds of recreational games, like Sprouts, that Conway was always playing. In fact, you can prove most of the well-known results about those (recreational) games using properties of the (numeric) games.

Suppose you want to analyze a simple game. To keep the analysis simple, we'll assume the game has the following properties. (Note that this is a simple example; by changing these restrictions, we can analyze more complicated games, at the price of a more complicated analysis - but also notice that this framework is almost enough to allow Chess to be analyzed this way!)

  • There are two players. To make things easy, we'll call them L and R.
  • The game contains no randomizer, like a die.
  • There is no hidden information: both players are in possession of all
    of the information about the current state of the game, and the possible future
    moves.
  • Players alternate turns.
  • The game ends when there are no legal moves left for one player; when this happens, the other player wins.

Let g_x be the game representing the position after x moves. (That is, each player has moved roughly x/2 times.) If g_x > 0, then L will win: all possible lines of play in which R wins have been eliminated. If g_x < 0, then R will win: all possible lines of play in which L wins have been eliminated.
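
In terms of the illustrative sketch from earlier in the post, this sign test is just two "≤" comparisons against zero; the g = 0 and fuzzy cases shown here are the standard ones from combinatorial game theory.

    def outcome(g):
        # Classify a position by comparing it with the zero game.
        ge0 = leq(ZERO, g)  # 0 <= g
        le0 = leq(g, ZERO)  # g <= 0
        if ge0 and not le0:
            return "g > 0: L wins, no matter who moves next"
        if le0 and not ge0:
            return "g < 0: R wins, no matter who moves next"
        if ge0 and le0:
            return "g = 0: whoever moves next loses"
        return "g fuzzy: whoever moves next wins"

    print(outcome(ONE))                   # L wins
    print(outcome(NEG_ONE))               # R wins
    print(outcome(ZERO))                  # second player wins
    print(outcome(Game([ZERO], [ZERO])))  # {0|0}: first player wins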

Sorry, but I can't help but point out that Go is typically played on a 19x19 grid (not 17x17, although 13x13 and 9x9 are also popular).

By James Cash (not verified) on 01 Apr 2007 #permalink

Had I as much time as a Turing machine has tape, I'd love to work out the "more complicated analysis" to handle games like chess in this combinatorial way. And there was something in Virasoro algebras which reminded me of constructing the surreals, but I learned about Virasoro primaries and descendants when I was proofreading Barton Zwiebach's textbook and had to forget them to make room for other assorted knowledge the following year. (Most days I feel like I've forgotten more than I've ever known.)

With luck, I'll make it through the category-theoretic surreals without having to forget something important, like my mother's birthday or the place where I work.

Your game framework needs one more condition: That the game is guaranteed to end eventually. This is necessary to get well-founded games that have birthdays and allow the transfinite induction from your previous post.

By Ørjan Johansen (not verified) on 01 Apr 2007 #permalink

Most days I feel like I've forgotten more than I've ever known.

OT, but a fun result - you can take comfort in that too much memory may be a bad thing:

"Our research indicates that those with better working memory may have fewer new neurons being developed in their hippocampus, which helps them forget old and useless information sooner and enable them to take in new information faster."

( http://www.brightsurf.com/news/headlines/29631/New_Research_Shows_Why_T… )

Low neurogenesis may be bad for long-term memory: "Previous research by Dr. Malleret with co-first author Michael D. Saxe, Ph.D., who was at Columbia when the research took place and is now at the Salk Institute in San Diego, Calif., found that reducing neurogenesis causes long-term memory deficits."

But high neurogenesis may be bad for parts of short-term memory.

And of course it is now known that every time we recall a memory, it is degraded. IIRC, that is. ;-)

By Torbjörn Larsson (not verified) on 01 Apr 2007 #permalink

I think a condition for the game being analyzable is that the game's state can never be repeated. This plus the game ending when there are no legal moves guarantees the game will end(?)

Go has a rule similar to this (either exactly, or only so that a player can't change the game back to the previous state on his next turn), but in Chess it is possible to move the game back into a previous state, which is why it isn't analyzable with that condition.

I think a condition for the game being analyzable is that the game's state can never be repeated. This plus the game ending when there are no legal moves guarantees the game will end(?)

This is not enough if the game has an infinite number of states.

Go has a rule similar to this (either exactly, or only so that a player can't change the game back to the previous state on his next turn), but in Chess it is possible to move the game back into a previous state, which is why it isn't analyzable with that condition.

Actually, chess does have anti-repetition rules; they are just more complicated than simply the position of the pieces, and require keeping track of some game history as part of the state.

By Ørjan Johansen (not verified) on 02 Apr 2007 #permalink

Yes, you guys are right. The proper condition for games to be tractable in the framework I discussed is that they be strictly finite - which in turn implies that they must be acyclic. I think that the acyclic part doesn't really need to be mentioned in the context of games: I don't think games can be self-referential - i.e., I do not think that games are permitted to contain themselves in any form. But I'm not sure about that; I'll try to check it in Conway's book this evening on the train.

Wow, all this interest in a relatively abstruse mathematical topic. I love it!! Good heavens, my brain hurts though.

By Francesco Franco (not verified) on 02 Apr 2007 #permalink

If memory serves, the potentially cyclic part of Go is the "ko" situation, where I play piece A capturing piece B, and my opponent could, by replacing piece B, capture piece A. Since that immediate recapture in a "ko" is forbidden, you have to play elsewhere first, and so the game must eventually end. I can't think of any larger-scale cyclicity.

Stephen:

WRT go, you're exactly right: cyclicity is prevented by the "Ko" rule - so while in the local subgames, you can have repetitions, they're countered by changes in other subgames.

What makes Go more complicated than the rules that I stated is the "winning" condition: a game of Go is not won by whoever makes the last move; it's won by whoever controls more territory - and defining who controls how much territory is very complex.

actually, i think the winning condition for go can almost be stated in the way you want. it's not exact, and i'm not sure if it can be overcome, but once the territories are well-defined, if the players were to keep playing, they would eventually start filling up their own territory, and the player with fewer points of territory would fill his/her own territory first, potentially leaving him/her with no more legal moves (sort of... it's still a little more complicated, but not as far off from the "last-move" condition for winning as you might think).

Cyclicity isn't prevented by the ko rule, since when large groups are captured people can still play there. Of course this would never happen in a real game though. Also, some rulesets use superko instead of ko, meaning that no board position can be repeated. I will be using this rule on my server for both go and chess.

By Andrew Briscoe (not verified) on 02 Apr 2007 #permalink

Andrew:

The way I was taught the Ko rule was that it forbade the exact same board position from ever repeating. On a short-term scale, that means that in a potential capture/recapture cycle, a move needs to occur elsewhere in the board before a recapture can occur (thus altering the board - so even if the recapture happens, it's a new board position); and in the longer term, if a large capture happens, you're not allowed to re-create a ko that formerly existed before the large capture.

Is it necessary that a player with no moves loses? It seems to me that chess could be represented except that having no moves can mean either a loss or a draw, depending on whether the player is in check.

By Steven Wallace (not verified) on 02 Apr 2007 #permalink

If it is your turn to move in a Chess game, you are not in check, and your only move is an illegal move into check, then the game is a Stalemate, which is a draw.

Revised tournament rules of Chess do prevent infinitely long games. The last loophole that they closed was that of three different moves A, B, and C in a square-free infinite word. No move subsequence would ever repeat, which would have failed to end the game by the previous set of rules. Actual CS/Math!

"They're not particularly helpful in terms of strategy in the opening - but the opening doesn't need that much help. It's typically quite stylized: there are particular board positions that are extremely valuable, and in their first moves, each player typically stakes a claim to one of them."

I'd argue with this statement to a certain extent. It's like saying that the chess opening doesn't "need that much help" strategically, that it is "typically quite stylized". If that's so, why do we feel it necessary to write literally thousands upon thousands of books about specific chess openings?

No, the Go opening is also quite hard, and more importantly, computers also fail at that (not just the midgame).

By Craig Helfgott (not verified) on 04 Apr 2007 #permalink

Craig:

You're right that I'm oversimplifying - but the opening strategy in Go does tend to be the most tractable of the three phases.

Almost every game starts with players each putting stones on a point either on or adjacent to one of the outer handicap points, and then placing extension stones to roughly sketch out territory.

Figuring out what those opening moves mean is, in general, easier than figuring out things like setting up a ko-fight in order to get the opportunity to secure a territory in the mid-game.

And in my (admittedly limited) experience playing Go on the computer, the computer seems to do fewer really stupid things during the opening than it does when we get to the middle or end-game. My experience is that computer go players, compared to a human beginner like me, play a mediocre opening, a horrible mid-game, and a mediocre-leaning-towards-bad end-game.

I love this word. Wikipedia's article on it begins:

Zugzwang (German for "compulsion to move", IPA: [ˈtsuːk.tsvaŋ]) is a term used in combinatorial game theory and in other types of games (particularly in chess). Zugzwang means that one player is put at a disadvantage because he has to make a move -- the player would like to pass and make no move. The fact that the player must make a move means that his position will be significantly weaker than the hypothetical one in which it is his opponent's turn to move. In combinatorial game theory, it means that it directly changes the outcome of the game from a win to a loss. The term is used less precisely in other games.

The term is frequently used in chess, to mean that one player (having the move) has no move that does not worsen their position (Soltis 2003:78). Game theory does not apply directly to chess (Berlekamp, et al. 1982:16) (Elkies 1996:136). Sometimes different chess authors use the term zugzwang in different ways (Flear 2004:11-12). In some literature a reciprocal zugzwang (see below) is called zugzwang and a one-sided zugzwang is called a squeeze (Hooper and Whyld 1992).

In a chess endgame, being in zugzwang usually means going from a drawn position to a loss or a won position to a draw, but it can be from a win to a loss, or a substantial loss of material which probably affects the outcome of the game. A chess position of reciprocal zugzwang or mutual zugzwang is equivalent to the more precise definition of zugzwang in game theory. Opposition is a special kind of zugzwang (Flear 2000:36). Trébuchet is a special type of zugzwang that is discussed below.