The Universe Algorithm


Belief in the world as a virtual reality goes back a long way. Buddhism sees the physical world as an illusion, the allegorical tale of Plato’s cave suggested the reality we know is a mere shadow hinting at, rather than being, the world as it truly is, and Pythagoras thought Number was the essence from which the physical world is created. More recently, the likes of Ed Fredkin and Stephen Wolfram have argued that reality might arise from something like a simple computer program known as a cellular automaton (CA), a concept invented by John von Neumann in the early 1950s. You don’t necessarily need a computer to run a CA; a piece of paper will suffice. The most basic CA consists of a long line of squares, or ‘cells’, drawn across the page, each of which can be either black or white. This first line represents one half of the initial conditions from which a CA ‘universe’ will evolve. A second line of cells is then drawn immediately above the first, and whether a cell in this line is black or white depends on a rule applied to its nearest neighbours in the first line. The rules make up the other half of the initial conditions.

What I have just described is an example of an algorithm, which is a fixed procedure for taking one body of information and turning it into another. In the case of a CA, the pattern of black and white cells on the first line represents the ‘input’ and the ‘output’ it produces is the pattern of cells in the next line. That in itself is not terribly exciting, but much more interesting things can happen if we run the CA as a recursive algorithm, one where the output is fed back in as input, which produces another output in the form of a third row of black and white cells, which then becomes the input for a fourth line and so on, in principle, for evermore.
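For readers who like to see such things spelled out, here is a minimal sketch of that procedure in Python. The particular rule (Wolfram’s rule 90), the single black starting cell and the wrap-around at the edges are all illustrative assumptions made to keep the example short, not anything dictated by the idea itself.

```python
# A minimal sketch of the recursive procedure described above. Each new row
# is computed from the previous one by applying a rule to every cell and its
# two immediate neighbours, and the output is then fed back in as the next
# input. Edges wrap around purely to keep the code short; on paper the line
# would simply be finite.

# One of the possible rules (Wolfram's rule 90), written out explicitly as a
# lookup table: (left, centre, right) -> new state of the centre cell.
RULE_90 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 0, (1, 0, 0): 1,
    (0, 1, 1): 1, (0, 1, 0): 0, (0, 0, 1): 1, (0, 0, 0): 0,
}

def next_row(row, rule):
    """Produce the next line of cells from the current one."""
    n = len(row)
    return [rule[(row[(i - 1) % n], row[i], row[(i + 1) % n])] for i in range(n)]

row = [0] * 31
row[15] = 1                       # initial condition: a single black cell
for _ in range(16):
    print("".join("#" if cell else "." for cell in row))
    row = next_row(row, RULE_90)  # the output becomes the next input
```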

Whether or not something interesting happens depends upon the rule used to govern the behaviour of each line of cells. In the case of the simplest possible CAs, those that consist of a one-dimensional line of cells, two possible colours and rules based only on the two immediately adjacent cells, there are 256 possible rules. All cells use the same rule to determine future behaviour by reference to the past behaviour of their neighbours, and all cells obey that rule simultaneously. Many rules produce CAs of little interest. In Wolfram’s terminology, ‘Class 1’ CAs settle into a uniform state, while ‘Class 2’ CAs produce simple stable or repetitive structures, such as a chessboard pattern or arbitrarily spaced streaks that persist as the program is run. What makes these uninteresting is that you can accurately predict what you will get if you carry on running the program (more of the same). ‘Class 3’ CAs produce patterns that look essentially random, although recognizable features (geometric shapes, for example) crop up within them. Some CAs also produce patterns known as ‘gliders’: shapes that appear to move along a trajectory (what is actually happening is that the pattern is continually destroyed and rebuilt in an adjacent location). Using a computer to run a CA makes it much easier to watch gliders, because the recursive algorithm is then calculated millions of times faster and the illusion of movement is totally persuasive.
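As for where the figure of 256 comes from: a rule must assign a black or white output to each of the eight possible neighbourhood patterns, and 2^8 = 256. The Python sketch below, again purely illustrative, decodes a Wolfram rule number into such a lookup table; rule 110, which turns out to be far from dull, is used as the example.

```python
# A sketch of why there are exactly 256 elementary rules: a rule assigns
# 0 or 1 to each of the eight possible (left, centre, right) neighbourhoods,
# so a rule is nothing more than an 8-bit number.

def rule_table(rule_number):
    """Decode a Wolfram rule number (0-255) into a neighbourhood lookup table."""
    table = {}
    for i in range(8):
        neighbourhood = ((i >> 2) & 1, (i >> 1) & 1, i & 1)  # e.g. 6 -> (1, 1, 0)
        table[neighbourhood] = (rule_number >> i) & 1         # the i-th bit of the rule
    return table

# Rule 110, discussed below, is generated in exactly the same way as the dullest rules.
print(rule_table(110))
```

A table built this way can be dropped straight into the next_row function from the earlier sketch; at this level, nothing about the ‘interesting’ rules looks any different from the boring ones.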

The most interesting CAs of all are Class 4. These produce patterns of enormous complexity, novelty and surprise. What is most intriguing about them is that their initial conditions seem no more complex than those which go on to produce the dull Class 1 types of CA. To people like Wolfram, this is evidence that our attitudes towards complexity are not a true reflection of how reality works: ‘Whenever a phenomenon is encountered that seems complex it is taken almost for granted that it is the result of some underlying mechanism that is itself complex. A simple program that can produce great complexity makes it clear that this is in fact not correct’. We also see this phenomenon in fractals. Consider the famous ‘Mandelbrot Set’. How much storage space would be required to save a copy of every pattern it contains? The answer is more storage space than you would have even if you used every particle in the visible universe to store a bit. That is because the Mandelbrot Set contains an infinite number of patterns, and so it would exceed any finite storage capacity. And yet, underlying all that complexity there is a simple formula (z → z² + c, applied over and over) that can be described in a few lines of code.
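In fact it can. Here is a minimal Python sketch of the iteration behind the set; the escape radius of 2 is standard, while the cap of 100 iterations and the coarse character grid are arbitrary choices made only so that the sketch prints something recognisable.

```python
# A minimal sketch of the iteration that generates the Mandelbrot set's
# infinite detail: repeatedly apply z -> z*z + c and ask whether z escapes.

def in_mandelbrot(c, max_iter=100):
    """Return True if c appears to stay bounded under z -> z*z + c."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:            # once |z| exceeds 2 it is guaranteed to escape
            return False
    return True

# Print a rough picture of the set: '#' marks points that do not escape.
for im in range(12, -13, -2):
    print("".join("#" if in_mandelbrot(complex(re / 20, im / 10)) else " "
                  for re in range(-40, 21)))
```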

Faced with evidence that something complex could have a simple cause, Wolfram asked ‘when nature creates a rose or a galaxy or a human brain, is it merely applying simple rules – over and over again?’ Cellular automata and fractals produce interesting patterns, but there can be more to their complexity than that. In 1985, Wolfram conjectured that a CA following ‘rule 110’ might be a universal Turing machine and therefore capable of carrying out any imaginable computation and simulating any machine. This conjecture was verified in 2000 by Matthew Cook. Then, in 2007, Alex Smith proved that an even simpler device, a Turing machine with just two states and three symbols (known as the 2,3 machine), was also capable of universal computation.

Wolfram commented, ‘from our everyday experience with computers, this seems pretty surprising. After all, we’re used to computers whose CPUs have been carefully engineered, with millions of gates. It seems bizarre that we should achieve universal computation with a machine as simple as the 2,3 machine’. The lesson seems to be that computation is a simple and ubiquitous phenomenon, and that it is possible to build up any level of complexity from a foundation of the simplest possible manipulations of information. Given that simple programs (defined as those that can be implemented in a computer language using just a few lines of code) have been proven to be universal computers, and have been shown to exhibit properties such as thermodynamic behaviour and biological growth, you can begin to see why it might make sense to think of information as more fundamental than matter/energy.

Working independently of Wolfram, Ed Fredkin believes that the fabric of reality, the very stuff of which matter/energy is made, emerges from the information produced by a 3D CA whose logic units confine their activity to being ‘on’ or ‘off’ at each point in time. ‘I don’t believe that there are objects like electrons and photons and things which are themselves and nothing else. What I believe is that there’s an information process, and the bits, when they’re in certain configurations, behave like the thing we call the electron, or whatever’. The phenomenon of ‘gliders’ demonstrates the ability of a CA to organize itself into localized structures that appear to move through space. If, fundamentally, something like a CA is computing the fabric of reality, particles like electrons may simply be stubbornly persistent tangles of connections. Fredkin calls this the theory of ‘digital physics’, the core principle of which is the belief that the Universe ultimately consists of bits governed by a programming rule. The complexity we see around us results from recursive algorithms tirelessly taking information they have transformed and transforming it further. ‘What I’m saying is that at the most basic level of complexity an information process runs what we think of as the law of physics’.

Because we’re so used to information being stored and processed on a physical system, when we encounter the hypothesis that matter/energy is made of information, our natural inclination is to ask what the information is made of. Fredkin insists that asking such a question demonstrates a misunderstanding of the very point of the digital physics philosophy, which is that the structure of the world depends upon pattern rather than substrate; a certain CONFIGURATION, rather than a certain KIND, of bits. Furthermore, it’s worth remembering that (according to digital physics) EVERYTHING depends entirely on the programming rules and initial input, including the ability of bodies of information as complex as people to formulate bodies of information as complex as metaphysical hypotheses. According to Fredkin, this makes it all but impossible for us to figure out what kind of computer we owe our existence to. The problem is further compounded by the proven fact that a CA can be a universal computer. Reporting on Fredkin’s philosophy, Robert Wright commented, ‘any universal computer can simulate another universal computer, and the simulated can, because it is universal, do the same thing. So it’s possible to conceive of a theoretically endless series of computers contained, like Russian dolls, in larger versions of themselves and yet oblivious to those containers’.

Because it adopts the position that our very thought processes are just one of the things to emerge from the calculations performed by the CA running the Universe, digital physics has ready explanations for the apparent contradiction between reality ‘as is’ (assuming digital physics is correct) and reality as it is perceived. When a CA is run on a computer or a piece of paper, ‘space’ pre-exists. But Wolfram believes the Universe-generating program would be unnecessarily complex if space were built into it. Instead, he supposes the CA running our universe is so pared down that space is NOT fundamental, but rather just one more thing that emerges as the program runs. Space, as perceived by us, is an illusion created by the smooth transition of phenomena through a network of ‘nodes’, or discrete points that become connected as the CA runs. According to Wolfram, not only the matter we are aware of but also the space we live in can be created with a constantly updated network of nodes.

This implies that space is not really continuous. Why, then, does it seem that way to us? Part of the reason is that the nodes are so very tiny. Computers can build up photorealistic scenes from millions of tiny pixels and smooth shades from finely mottled textures. You might see an avatar walk from one point in space to another, but down at the pixel level nothing moves at all; each point confines its activity to changing colour or turning on and off. The other reason is that, while it might be possible in principle to magnify your vision of an image on a monitor until the pixels become apparent, in the case of the space network it would be impossible to do likewise, because our very perception arises from that network and so can never be more fine-grained than it is.

That line of reasoning also fixes the problem of ‘time’. Remember that in a CA run on a computer, every cell is updated simultaneously. This would be impossible in the case of the CA running the Universe, because the speed of light imposes limits on how fast information can travel. Coordinated behaviour such as that observed in CAs requires a built-in clock, but wherever the clock happens to be located, the signals it transmits are going to take a while to reach cells that are located far away from it. One might think that having many clocks distributed throughout the network would solve the problem, but it would not, because the speed of light would not allow signals to travel between all the clocks fast enough to keep them synchronised.

Wolfram came up with a simple solution, which was to do away with the notion that every cell updates at the same time as every other. Instead, at each step, only one cell is updated. Time, just like space, is divided up into discrete ‘cubes’, and at any given moment it is in only one of these ‘cubes’ that time moves a step forward. At the next moment, time in that cube is frozen and another point in the space network is updated. Again, this reality ‘as is’ seems totally unlike reality as perceived. When was the last time you noticed all activity was frozen in place, save for one point in space here… and now here… and now here… that moved forward a fraction? No, in RL (real life) we have no lag, no waiting for our reality to update. Our RL never goes offline. But, of course, in the case of SL (the virtual world Second Life), the reason we notice when it has gone offline or is being updated is that it is (not yet) running the software of our consciousness. That, for now, is still largely confined within our brains. But when it comes to the CA running the Universe, your very awareness would be frozen – be offline – when any cell other than your own is being updated. Only when your own cell is updated are you in a position to notice the world about you, and when this happens, all that you see is that everything else has moved a fraction. In an absolute sense, each tick of the universal clock might be very slow, and RL might actually suffer lag and periods when it is offline that are far longer than anything we endure in SL. But because perception itself proceeds in the same ticks, time seems continuous to us. Again, our perception can be no more fine-grained than the processes computing that perception.
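The contrast between the two updating schemes is easy to state in code. The Python sketch below is purely illustrative: the rule (rule 90 again) and the orderly left-to-right sweep are simplifying assumptions, not a claim about how a universe-running CA would actually choose which cell to update next.

```python
# Two updating schemes for the same one-dimensional CA: everything at once
# versus one cell per tick, with all other cells frozen during that tick.

RULE_90 = {(l, c, r): l ^ r for l in (0, 1) for c in (0, 1) for r in (0, 1)}

def synchronous_step(row, rule):
    """Every cell updates at once, each reading a frozen copy of the old row."""
    n = len(row)
    return [rule[(row[(i - 1) % n], row[i], row[(i + 1) % n])] for i in range(n)]

def asynchronous_step(row, rule, i):
    """Only cell i moves forward a step; every other cell stays frozen."""
    n = len(row)
    new_row = list(row)
    new_row[i] = rule[(row[(i - 1) % n], row[i], row[(i + 1) % n])]
    return new_row

row = [0, 0, 0, 1, 0, 0, 0]
print("all at once:", synchronous_step(row, RULE_90))
for tick in range(len(row)):      # one sweep: a single cell updates per tick
    row = asynchronous_step(row, RULE_90, tick)
    print("tick", tick, ":", row)
```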

Perhaps the reason why the speed of light is limited in the first place is because photons (just like all other particles) are comparable to gliders, which can only advance one cell per computation. In ‘Is the Universe a Virtual Reality?’, Brian Whitworth reasoned, ‘if both space and time arise from a fixed information-processing allocation, that the sum total of space and time processing adds up to the local processing available is reasonable… events in a VR world must have a maximum rate, limited by a finite processor’. The physical outcome of this supposition would be that something would impose a fixed maximum for the speed at which information could travel. And, of course, that is precisely what light does.

As well as this example, Whitworth cites several other ways in which the observed laws of nature are consistent with the idea that ours is a virtual reality. To be clear, Whitworth does not present this as proof that reality IS a simulation, only as evidence that supposing it is does not contradict what we know about the laws of physics. Whitworth asks what the consequences would be if reality arose from finite information processing. If that were the case, we ought to expect algorithmic simplicity: ‘Calculations repeated at every point of a huge VR Universe must be simple and easily calculated’. And, as it happens, the core mathematical laws that describe our world do seem remarkably simple. Whitworth also points out that if everything derives from information, we should expect to find digitization when we closely examine the world around us: ‘All events/objects that arise from digital processing must have a minimum quantity’. Modern physics does seem to show that matter, energy, space and time come in quanta.

If you look at the letters in this body of text, each particular letter is identical to every one of its kind. This ‘a’ looks like that ‘a’, this ‘b’ is identical to that ‘b’ and so on. That is because of ‘digital equivalence’. Each letter arises from the same code so obviously they are identical. Similarly, if each photon, each electron and every other particle arises from the same underlying code, they too would be identical to each other. Again, this is what we observe.

What other ways might reality seek to minimise waste in its information processing? In the virtual worlds that run on our computers, the world is typically not calculated all at once. Rather, the computer only renders the part of reality that the observer is looking at. If that were also true of RL – if reality is only calculated when an interaction occurs – then measuring reality ‘here’ would necessarily cause uncertainty with regards to what happens ‘there’. Or, as Whitworth put it, ‘if complementary objects use the same memory location, the object can appear as having either position or momentum, but not both’.

If the network running our VR were to become overloaded in certain regions, what would the result be? Well, SL residents know all too well what to expect if too many objects are rezzed or too many people gather in one sim: you get slowdown. Suppose that a high concentration of matter similarly constitutes a high processing demand. That being the case, wherever there is a high concentration of mass there ought to be a slowdown in the information processing of spacetime. This is in agreement with general relativity, which predicts that time runs noticeably slower in the presence of the strong gravitational fields caused by a high concentration of mass.

Summing up, Whitworth asked the reader, ‘given the big bang, what is simpler, that an objective universe was created out of nothing, or that a virtual reality was booted up? Given the speed of light is a universal maximum, what is simpler, that it depends on the properties of featureless space, or that it represents a maximum processing rate?… Modern physics increasingly suggests… that Occam’s razor now favours a virtual reality over objective reality’.


Great article!

I've always been interested in Cellular Automata and Conway's Game of Life where gliders were first seen. I read Wolfram's huge book, A New Kind Of Science, where a lot of this is discussed. Mind blowing stuff!

It appears to me that SL = Second Life, although not explicitly stated here. I've been in there for 12 years, on and off these days. I believe I read in your intro that you spend time there.

Very interesting article, great to see things like this discussed here. I'll check back and see if others chime in, and maybe reply in here again.

Thanks Kenny! Yes, SL does stand for Second Life (I should have made that clear in the main text: My bad!) I have not read Wolfram's book yet, but I would like to one day.

BTW if you are ever in SL send me an IM and maybe we can get together. My name in SL is Extropia DaSilva.
