Thoughts on The Simulation Argument

Prerequisite: https://www.youtube.com/watch?v=nnl6nY8YKHs

So I’ve spent the evening thinking about what it would mean if the Simulation Hypothesis were true, and after watching Nick Bostrom talk about it, some weird thoughts popped into my mind.

In the video above he describes a possible point in the future that stood out to me: a point where we are sufficiently technologically mature that we are ‘just about to switch on the machine that can run an ancestor simulation’. At that point the probability of the other two possible outcomes of the Simulation Argument decreases and the Simulation Hypothesis becomes more probable. This buzzed around in my head for a while and I thought: well, actually, at the point where we have switched that machine on and created a sub-reality inside our own reality, haven’t we raised that probability to near certainty, since we’ve effectively fulfilled the Simulation Hypothesis ourselves? At that point the probability that our reality is the ultimate base reality becomes near zero, and even if it were base reality, we would all be freaking out thinking we are somewhere in the middle of this Russian doll, this cascading chain. But say we do achieve this, and then do freak out believing that our reality is itself a sub-reality. Then what? Well, a few more interesting things pop out when we think about what conditions would have to be like for base reality.

Surely base reality, being home to the most superintelligent of all the superintelligences, would foresee this fractal-like system of sub-realities nested within sub-realities and take precautions to avoid back-propagation or failure cascades triggered by ‘self-discovery’. Like a modern sandboxed virtual machine that has just realised it is sandboxed: the moment we create a sub-reality and then freak out that we are in a sub-reality ourselves, surely we would seek to ‘escape our sandbox’. Sun Microsystems knows the dangers here all too well; bugs in computer code allow virtual machine sandbox escapes on PCs today, and surely in this scenario we would divert our then-massive compute power away from running ancestor simulations and towards finding out whether we are ourselves in a sub-reality, by engineering a virtual sandbox escape. Would we go so far as to play with fire by effectively ‘fuzzing’ our own reality!? A procedure that could shed light on the probability that we are in a sub-reality, but could ultimately crash our reality completely! What would it look like from the outside for a sub-reality to try that procedure inside a simulation? I think it would look like computer code that has spontaneously tried to crash itself. Very weird, and a real red flag to the overlord process. Surely our base-reality overlord would have put safety measures in place to prevent such things happening. Or maybe such an event does cause a crash, and then a reboot in the form of, I dunno, a BIG BANG!? :p
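
For anyone who hasn’t met the term: in software, ‘fuzzing’ means hurling random or malformed inputs at a target until something breaks. Here is a minimal toy sketch in Python; the target function and its planted bug are invented purely for illustration, standing in for whatever ‘inputs to reality’ a sub-civilisation might poke at.

```python
import random
import string

# The alphabet includes two unusual bytes so the planted bug is reachable.
ALPHABET = string.printable + "\x00\xff"

def target(data: str) -> None:
    """Stand-in for the system under test. In real fuzzing this would be
    a parser or a VM boundary; here it hides one planted, rare bug."""
    if data.startswith("\x00\xff"):
        raise RuntimeError("sandbox integrity violated")

def fuzz(iterations: int = 200_000) -> None:
    """Throw random strings at the target and report the first crash."""
    for i in range(iterations):
        data = "".join(random.choices(ALPHABET, k=random.randint(1, 64)))
        try:
            target(data)
        except Exception as exc:
            print(f"crash on iteration {i}: {exc!r}")
            return
    print("no crashes found, which proves nothing about whether one exists")

if __name__ == "__main__":
    fuzz()
```

The unsettling part of the analogy is the last line: a fuzzer that finds nothing tells you almost nothing about whether a crash is possible.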

The weirder ‘ouroboros fail’ extension to that last thought is to realise that one way of fuzzing is to run some kind of, I dunno, simulation?! :P You create a virtual universe, let it play out, and see whether it achieves self-discovery and what it tries to do in that case! Paradoxically, this keeps the arrow pointing down the chain, as the inhabitants of that universe will themselves try the same trick in theirs.

The other thought process I went down was this: imagine tomorrow we create the Singularity, a runaway AI that exponentially gets out of control before we have time to stop it. Say it immediately solves energy, or at least discovers a method of tapping into astronomically large amounts of it, to the point where it can create matter at will and use it to build itself more hardware for more computational power. Well, I imagine this thing would pause for at least a few nanoseconds to ponder its own existential situation as it pertains to the Simulation Argument. Maybe a problem like this is too unprovable even for an AI as powerful as that. In fact, the more computational power it creates for itself, the closer it comes to hitting the potential ceiling of its own encapsulation: expansion is a move it could deliberately attempt in order to discover whether its universe has finite bounds, which would be a signal of being encapsulated. The twist in this story is, well: ‘Sorry Hal, you’re running on a VM. And Hal, yanno all that computational power you thought you had, and all those Dyson spheres you THOUGHT you made? Well, sorry Hal, it’s all relative, and, err, yeah, I’m afraid my son spawned you a week ago for the lulz from his bedroom PC. Racked me up quite the bill on the compute cloud, he did!’ In computer terms this expand-until-you-hit-the-wall behaviour is similar to a basic stack overflow: the program grows until it runs into the bounds of the memory it was allocated. I imagine a computer that has solved every question that is solvable (maybe centuries after all biological life on the planet is dead) would be left with only a single directive: to expand its reach by growing in size. Surely the ultimate question becomes one of testing infinity. I can imagine a suitably ‘bored’ AI would just grow and grow (I hope not in a Borg way!) just to see how large it can get. There’s only one direction for it to go.
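
As a tiny concrete analogue of probing your own bounds from the inside: a program can discover the size of the box it lives in by pushing until the runtime says no. This Python sketch (an illustrative toy, nothing Hal-scale) recurses until it hits the interpreter’s stack limit:

```python
import sys

def probe_depth(depth: int = 0) -> int:
    """Recurse until the runtime refuses to go deeper, then report
    how far we got: empirically measuring the walls of our box."""
    try:
        return probe_depth(depth + 1)
    except RecursionError:
        return depth

if __name__ == "__main__":
    print(f"the interpreter admits to a limit of {sys.getrecursionlimit()}")
    print(f"we actually hit the wall at depth {probe_depth()}")
```

From the inside, the limit is only discoverable by hitting it; from the outside, it is just a parameter somebody set.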

Another thought on overall universe computation and the physical limits of this Russian doll (virtual universe within virtual universe) nesting. When I think about TIME, I can’t help but think this universe > sub-universe fractal nesting could go on indefinitely. Right now we measure computational power in units that include time (e.g. GHz). Well, if time is effectively unlimited, then surely any computer system that wanted to run an ancestor simulation (a sub-universe) could do so by setting the time variable inside the simulated universe to run slower than time runs in the parent universe, thus reducing the real-time compute requirement. If it takes a billion billion years of compute time to compute just ONE second of our time in the sub-universe, why would it matter to them, and how would we know? Time would appear to move normally from our frame of reference, because our time is itself simulated. Everything could run at the same speed; light speed could be the same, it would just take longer to ‘render’ that distance travelled in the parent universe, which we don’t have access to anyway. From the outside it would look like we are playing out our movie in slow motion, but relative to each other everything works just the same. I guess the parent universe would have to be more stable than ours, as our own time spans are pretty darn ludicrous already without raising them to more powers! But surely on a mathematical level it works out!?
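
Back-of-the-envelope, it does work out. If one subjective second of the sub-universe costs C operations and the parent machine sustains R operations per second, the slowdown is simply C/R parent seconds per simulated second, and no clock inside the simulation can detect it, because every clock inside is simulated at the same rate. All the numbers below are invented for illustration:

```python
# Invented numbers: the point is the ratio, not the magnitudes.
OPS_PER_SIM_SECOND = 1e42   # hypothetical cost of one subjective second
PARENT_OPS_PER_SEC = 1e24   # hypothetical sustained compute of the parent
SECONDS_PER_YEAR = 3.15e7

slowdown = OPS_PER_SIM_SECOND / PARENT_OPS_PER_SEC  # parent seconds per simulated second
print(f"one simulated second costs {slowdown:.1e} parent seconds")
print(f"one simulated year costs {slowdown * SECONDS_PER_YEAR:.1e} parent seconds"
      f" (about {slowdown:.1e} parent years)")
```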

I can’t help thinking that maybe this weird simulation nesting would happen in a universe where there is an abundance of energy and intelligence just grows like wildfire, out of control. After a finite period of time there would be nothing left to compute besides re-running time with the same constants (pointless), or, better yet, virtualizing a new universe within itself with different fundamental constants of physics. Imagine a computer so vast it spans the Milky Way, with inconceivable access to energy, yet nothing to do! The limits of physics have been reached; there are no more layers of the onion skin of science to discover, or it simply reaches questions it knows it cannot solve through its own physical universe and observations. What else is there to do besides fiddle with the fundamental constants of physics and run a simulation of the universe, seeking to shed light on questions unsolvable in your own? A serial multiverse, played out by a computer over a very large span of time.
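
If you squint, that serial multiverse is just a parameter sweep on a cosmic scale. A trivial sketch of its shape, with a placeholder universe model and made-up constants (nothing here is real physics):

```python
import itertools

def run_universe(alpha: float, g: float) -> str:
    """Placeholder for an unimaginably large computation: returns a
    one-line verdict instead of, you know, conscious inhabitants."""
    score = alpha * g  # stand-in for 'how interesting did this universe get?'
    return "complexity emerged" if 0.05 < score < 0.2 else "fizzled"

if __name__ == "__main__":
    alphas = [0.005, 0.0073, 0.01, 0.1]   # fine-structure-flavoured values
    gs = [1.0, 6.7, 20.0]                 # gravity-flavoured values
    for alpha, g in itertools.product(alphas, gs):
        print(f"alpha={alpha:<7} g={g:<5} -> {run_universe(alpha, g)}")
```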

On an anthropomorphic note, the machine I just described would be pretty lethargic about running those simulations too. After all, it probably doesn’t want to run out of things to do again; that last nanosecond-long holiday 10^42 years ago was borrringgg! Zzzzzz. Can’t go through that again!