The theory, which I probably misunderstand because I have a similar level of education to a macaque, states that because a simulated world would eventually develop to the point where it creates its own simulations, it’s then just a matter of probability that we are in a simulation. That is, if there’s one real world, and a zillion simulated ones, it’s more likely that we’re in a simulated world. That’s probably an oversimplification, but it’s the gist I got from listening to people talk about the theory.
But if the real world sets up a simulated world which more or less perfectly simulates itself, wouldn’t the processing required to create a mirror sim-within-a-sim need at least twice that much power/resources? How could the infinitely recursive simulations even begin to be set up unless the real meat people keep adding more and more hardware to their initial simulation? It would be like that cartoon (or was it a silent movie?) of a guy laying down train track while sitting on the cowcatcher of a moving train. Except in this case the train would be moving at close to the speed of light.
Doesn’t this fact alone disprove the entire hypothesis? If I set up a 1:1 simulation of our universe, then just sit back and watch, any attempts by my simulant people to create something that would exhaust all of my hardware would just… not work? Blue screen? Crash the system? Crunching the numbers of a 1:1 sim within a 1:1 sim would not be physically possible for a processor that can just about handle the first simulation. The simulation’s own simulated processors would still need to have their processing done by Meat World; you’re essentially just passing the CPU-buck backwards like a rugby ball until it lands in the lap of the real world.
And this is just if the simulated people create ONE simulation. If 10 people in that one world decide to set up similar simulations simultaneously, the hardware for the entire sim reality would be toast overnight.
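The accounting behind this worry can be put in a toy model. Everything below uses made-up numbers purely for illustration: since a simulated computer’s work is ultimately done by the real hardware, nested 1:1 sims stack their full cost onto the host.

```python
# Toy model of the argument above: every nested 1:1 simulation must be
# computed by the real world's hardware, so each extra simulated
# universe adds the full cost of another universe to the host.

def total_host_cost(universes, cost_per_universe=1.0):
    """Cost to the real ('meat world') hardware of running `universes`
    simulated universes. Each one's physics, including its own
    simulated processors, is ultimately crunched by the host."""
    return universes * cost_per_universe

# One sim: the host is already at full capacity.
print(total_host_cost(1))   # 1.0
# A 1:1 sim inside that sim doubles the demand on the same hardware...
print(total_host_cost(2))   # 2.0
# ...and ten more sims spun up inside layer 1 make it 11 universes' worth.
print(total_host_cost(11))  # 11.0
```

The counterarguments further down the thread mostly attack the hidden assumptions here: that the host runs in real time, at full fidelity, and under physics like ours.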
What am I not getting about this?
Cheers!
It’s light-based computing.
So you make a framework, compound it into a big bang ball and then let it run. Afterwards, you analyze the imagery from start to finish or at whatever point you need to.
Can’t interact with it though, only observe.
You’re thinking in terms of how we do simulations within our universe. If the universe is a simulation then the machine that is simulating it is necessarily outside of the known universe. We can’t know for sure that it has to play by the same rules of physics or even of logic and reasoning as a machine within our universe. Maybe in the upper echelon universe computers don’t need power, or they have infinite time for calculations for reasons beyond our understanding.
Or what if the entity that simulates can just “dream” the simulation to make it happen?
Like Azathoth?
But that’s just a guess. It’s not necessarily true. You’re just saying “simulations might be possible, therefore they are definitely possible, therefore we are likely in a simulation”.
That’s not logically sound. You can replace “simulation” with “God” and prove the existence of God similarly. It’s just a guess.
Yes, that’s why no one says it’s a fact, it’s a theory
Some might say a game theory
I’d just like to interject for a moment. What you’re referring to as “theory”, is in fact, a “hypothesis”…
It’s definitely an issue Rick ran into.
My understanding is that a civilisation capable of running a simulation like that would have access to enormous, possibly near-infinite amounts of power (like tapping black holes for energy).
Time doesn’t have to be 1:1 between a host and a simulation. The host can take as long as it wants to render the next step in a simulation, and any observers within the simulated universes would not be able to discern the choppiness of their flow of time.
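A quick back-of-the-envelope (numbers invented) of why that choppiness is invisible from inside:

```python
# Host time vs. simulated time, with made-up numbers: suppose the host
# takes 10 of its years to render 1 simulated second. Observers inside
# still experience exactly 1 second; the slowdown factor never appears
# on any clock they can read.

SECONDS_PER_YEAR = 365 * 24 * 3600

host_seconds_per_sim_second = 10 * SECONDS_PER_YEAR  # render cost per sim-second
sim_seconds_experienced = 1

# What the simulated observer's clock reads:
print(sim_seconds_experienced)      # 1
# The slowdown factor, measurable only from the host side:
print(host_seconds_per_sim_second)  # 315360000
```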
Fellow macaque here. Not only that, but time does not even run 1:1 between 2 places in our own universe. Plus, there are all kinds of quantum fuckery, where we can’t really detect all the properties of a certain particle, or the particles act like waves as long as they do not interact with anything, because… who knows?
Particles and waves aren’t actually separate as we were taught in school. They are in reality a third thing with properties of both.
As for detecting properties, that’s a limit of our technology, not the universe. In order to observe something we currently have to interact with it (e.g. bounce some light off it). It’s possible in the future we develop techniques that don’t require interaction, like reading the Higgs field directly, for example.
If our technology is limited so we can never see beyond something, why even propose it exists? Bell’s theorem also demonstrates that if you do add hidden parameters, they would have to violate Lorentz invariance, meaning they would have to contradict the predictions of our current best theories of the universe, like GR and QFT. Even as pure speculation it’s rather dubious, as there’s no evidence that Lorentz invariance is ever violated.
Very simple. I’m the only one being simulated, all of you people are AIs. /s
Each simulation level would be more intensive to run. There is a minimum level where a simulation can no longer be sustained. So there is definitely a finite number of sim layers that are possible.
To more directly answer your question, each level of simulation would run worse to compensate for the resource decrease.
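A sketch of that “finite stack” idea, with entirely assumed numbers (the budget, the shrink factor, and the minimum are all invented for illustration):

```python
# Each nested layer gets only a fraction of its parent's compute budget.
# Below some minimum budget, no further simulation can be sustained,
# so the stack of layers is finite.

def max_layers(base_budget=1_000_000, shrink=10, minimum=1):
    """Count how many nested layers fit before the budget for the
    next layer drops below the minimum needed to run one at all."""
    layers, budget = 0, base_budget
    while budget // shrink >= minimum:
        budget //= shrink
        layers += 1
    return layers

print(max_layers())  # 6: with these made-up numbers the stack bottoms out fast
```

The geometric decay is the point: even a huge starting budget only buys logarithmically many layers.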
I understand the point you’re making, but what if the simulation was actually not shared at all?
Perhaps in this scenario the human brain is the only required hardware? Then there would only be one “base simulation” that is in fact just a basic set of prompts, rules, and initial visual stimulus that is then sent to each person in essence creating a whole separate simulation within each individual. Everything that happens after that is created based on how each individual reacts to the initial prompts. The main system would not have to create any new data to keep the simulation growing because the human mind would create and store all new information within itself. Each new person born would have all the additional hard drive and processing power needed to keep the simulation going for the rest of their lives.
Just consider that if the world as we know it is just a simulation, and that simulation is all we have ever known since birth, how would you ever know if the other people are real or not? Would it even matter?
As others have said, our reference of time comes from our own universe’s rules.
Ergo if rendering 1 second of our time took 10 years of their time, we wouldn’t measure 10 years, we’d measure 1 second, so we’d have no way of knowing. It’s worth remembering that simulation theory is, at least for now, unfalsifiable. By its nature there’s always a counterargument to any evidence against it, therefore it always remains a non-zero possibility, just like how most religions operate.
If someone has the resources to simulate a universe, they probably have the resources to simulate an arbitrarily large number of universes. This also assumes that any civilisation within the simulated universe reaches the level of technological advancement required to make a universe-level simulation. We’re talking, probably, whole networks of Matrioshka Brains, that sort of thing.
I dunno. But I feel like perhaps I should go rewatch The 13th Floor.
Loved that movie. Glad to see other people remember it 🙂
Came into the thread specifically for it and am appalled at how far down I had to scroll to find mention of it.
Such a great film that got sadly overshadowed by being released the same year as The Matrix.
My issue with it is similar: each “layer” of simulation would necessarily be far simpler than the layer in which the simulation is built, and so complexity would drop off exponentially, such that even an incredibly complex universe would not be able to support conscious beings in simulations within only a few layers. You could imagine that maybe the initial universe is so much more complex than our own that it could support millions of layers, but at that point you’re just guessing, as we have no reason to believe there is even a single layer above our own, and the whole notion that “we’re more likely to be in a simulation than not” just ceases to be true. You can’t actually put a number on it, or even a vague description like “more likely.” It’s ultimately a guess.
Who says there’s resource requirements in the physics of the upper levels?
But if the real world sets up a simulated world which more or less perfectly simulates itself
This is the crux of the logical error you made. It’s a common error, but it’s important to recognize here.
If we’re in a simulation, we have no idea the available resources in the simulation “above” us. Suppose energy density up there is 100x as high as ours? Suppose the subjective experience of the passage of time up there is 100x faster than ours?
Another thing is that we have no idea how long it takes to render each frame of our simulation. Could take a million years. As long as it keeps running though, and as long as the simulation above us is patient, we keep ticking. This is also where the subjective experience of time matters. If it takes a million years, but their subjective “day” is a trillion years long, it becomes feasible to run us for a while.
And, finally, there’s no reason to assume we’re a complete simulation of anything. Perhaps the simulation was instantiated beginning with this morning–but including all memories and documentation of our “historical” past. All that past, all that experience is also fake, but we’d never know that because it’s real to us. In this scenario, the simulation above us only has to simulate one day. Or maybe even just the experiences of one PERSON for one day. Or one minute. Who knows?
The main point is we don’t know what’s happening in the simulation above ours, if it exists, but there’s no reason to assume it’s similar to ours in any way.
Quantum is weird. If we are in a simulation, that would explain a lot of that, because the quantum effects we see would actually just be lightweight simulations of much deeper mechanics.
As such, if we were simulating a universe, there’s every chance that we may decide to only simulate down to individual atoms. So the people in the simulation would probably discover atoms, but then they would have to come up with their own version of quantum mechanics to describe the effects that we know come from quarks.
The point is that each layer may choose to simulate things slightly lighter to save on resources, and you would have no way of knowing.
Indeed, and, as an interesting corollary, if we accept the concept of reduced-accuracy simulations as axiomatic, then it might be possible to figure out how close we are to the “bottom” of the simulation stack that’s theoretically possible. There’s only so many orders of magnitude, after all; at some point you’re only simulating one pixel wiggling around, and that’s not interesting enough to keep going down.
There is not, as far as I know, any way to estimate the length of the stack in the other direction, though.
I have never understood the argument that QM is evidence for a simulation because the universe is using fewer resources or something like that by not “rendering” things at that low a level. The problem is that, yes, it’s probabilistic, but it is not merely probabilistic. We have probability in classical mechanics already, like when dealing with gases in statistical mechanics, and we can model that just fine. Modeling wave functions is far more computationally expensive because they do not even exist in traditional spacetime but in an abstract Hilbert space whose complexity grows exponentially faster than that of classical systems. That’s the whole reason for building quantum computers: it’s so much more computationally expensive to simulate this that it is more efficient just to have a machine that can do it. The laws of physics at a fundamental level get far more complex and far more computationally expensive, not the reverse.
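The exponential blow-up being described can be shown with a couple of lines (the qubit counts are just illustrative):

```python
# Why simulating quantum systems is expensive, not cheap: the state
# vector of n entangled qubits has 2**n complex amplitudes, versus
# only n values for n classical bits.

def classical_state_size(n_bits):
    return n_bits            # one value per bit

def quantum_state_size(n_qubits):
    return 2 ** n_qubits     # one amplitude per basis state

for n in (10, 50, 300):
    print(n, classical_state_size(n), quantum_state_size(n))
# Around 300 qubits, the amplitude count already exceeds the rough
# number of atoms in the observable universe (~10**80), which is
# hardly what a resource-saving shortcut would look like.
```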
To be clear, I’m not arguing that it is evidence; I’m merely arguing that it could be a result of how they chose to render our simulation. And just because it’s more computationally expensive on our side does not necessarily mean it’s more expensive on their side, because we don’t know what the mechanics of the deeper layer may be.
For example, it would be a lot less computationally expensive to render a simulation for us accurately down to the cellular level than down to the atomic scale. From there, we could simply replicate the rules of how molecules work without actually rendering them, such as: “cells seem to have a finite amount of energy based on the food you consume, and we can model the mathematics of how that works, but we can’t seem to find a physical structure that allows that to function.”
It would take vast quantities of energy and resources if you were to do it real time, full time.
As in: 1 minute in the simulation could be 1 year outside the simulation. Assuming we can continue to tap more energy sources and develop the technology to fully simulate a single reality, it wouldn’t necessarily have to run in real time.
Inside the simulation, it wouldn’t make a difference