A team of physicists now argues that the long-standing science-fiction idea of a fully simulated universe clashes directly with fundamental mathematical limits. Their conclusion draws a clear boundary between what algorithms can compute and what physical reality actually contains.

The Allure of a Programmed Cosmos
The suggestion that we might be living inside a vast digital simulation has moved far beyond late-night debates and online forums. Films like The Matrix, philosophical arguments from Nick Bostrom, and the rapid rise of artificial intelligence have given the simulation hypothesis an unexpected sense of legitimacy.
The core idea appears straightforward. Imagine a civilisation so advanced that its computers can calculate an entire universe, complete with conscious beings who are unaware they are digital constructs. If computing power keeps expanding, why would such simulations not exist—and why could our universe not be one of them?
This narrative fits neatly with modern language about data, code, information, virtual spaces, and ever-present algorithms. It also sidesteps direct experimental testing. A flawless simulation, by definition, would show no obvious errors or glitches that could give it away.
For years, the simulation hypothesis seemed almost impossible to test: too sweeping, too abstract, and too cleverly framed to pin down.
When Physicists Turn to Logic Itself
In research published in the Journal of Holography Applications in Physics, Mir Faizal, Lawrence M. Krauss, Arshid Shabir, and Francesco Marino take a very different approach. Instead of searching for clues in the cosmic microwave background or quantum fluctuations, they examine the logical structure of computation itself.
Their central claim is blunt: no finite computer program, regardless of its power, can fully reproduce the universe. This is not a matter of engineering difficulty but of logical impossibility.
To support this, they rely on three foundational results from logic and information theory:
- Kurt Gödel’s incompleteness theorems
- Alfred Tarski’s undefinability of truth
- Gregory Chaitin’s limits on algorithmic information
These ideas are usually confined to pure mathematics and theoretical computing. Here, they are applied directly to the foundations of physics.
Gödel Versus the Idea of Cosmic Code
Gödel demonstrated that any consistent mathematical system rich enough to include basic arithmetic is necessarily incomplete. Within such a system, there will always be true statements that cannot be proven using the system’s own rules.
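To get a feel for the self-undermining flavour of such arguments, consider Turing's closely related halting problem. The Python sketch below is an illustration, not anything from the paper: the oracle `halts` is hypothetical, and the diagonal construction shows why no real implementation of it can exist.

```python
def halts(program, data):
    """Hypothetical oracle: return True iff program(data) eventually stops.
    Turing showed no actual algorithm can implement this for all inputs."""
    raise NotImplementedError("undecidable in general")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about program(program).
    if halts(program, program):
        while True:        # predicted to halt, so loop forever
            pass
    return "halted"        # predicted to loop, so halt immediately

# Asking whether diagonal(diagonal) halts leads to a contradiction either
# way, so the assumed oracle `halts` cannot exist.
```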
The researchers assume that a future “ultimate” physical theory—such as a complete theory of quantum gravity—would be expressible as a formal system with a finite set of axioms and rules. In principle, this would make it something a computer could manipulate.
If physics is captured by such a formal theory, Gödel’s result implies that there will always be true facts about the universe that the theory, and any program built from it, cannot derive.
From the standpoint of a total simulation, this is fatal. A genuinely complete simulation would need to encode every physical truth. Gödel guarantees that some truths will always escape.
Tarski and Chaitin Close the Loop
Tarski’s theorem adds another constraint. It shows that no sufficiently powerful formal system can define, within its own language, a complete test of truth for its own statements. There is no internal algorithm that can correctly label every statement as true or false.
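In standard notation, Tarski's result says that no truth predicate can satisfy the following schema for every sentence of the system. This is the textbook formulation, not a quotation from the paper:

```latex
% Tarski's undefinability of truth: in a consistent formal system S that
% includes arithmetic, no formula True(x) expressible in S can satisfy,
% for every sentence \varphi of S, the biconditional below
% (\ulcorner\varphi\urcorner is the Gödel number of \varphi).
S \vdash \mathrm{True}(\ulcorner \varphi \urcorner) \leftrightarrow \varphi
```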
Chaitin extends this limitation through algorithmic information theory. He proves that there is a strict upper bound on how much mathematical truth can be compressed into a finite program. Some facts are algorithmically random, meaning they cannot be shortened or generated by any simpler rule.
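A loose everyday analogue of incompressibility, offered here as an illustration rather than as the authors' argument, can be seen with an off-the-shelf compressor from Python's standard library:

```python
import os
import zlib

# Data generated by a short rule compresses well; data with no generating
# rule does not. Chaitin's theorem says some mathematical facts are of the
# second kind: no program shorter than the facts themselves produces them.

patterned = b"ab" * 500          # 1000 bytes generated by a tiny rule
random_ish = os.urandom(1000)    # 1000 bytes with no shorter description

print(len(zlib.compress(patterned)))    # typically a few dozen bytes
print(len(zlib.compress(random_ish)))   # typically ~1000 bytes or more
```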
Applied to physics, this leads to a striking picture. Even a hypothetical cosmic supercomputer would have unavoidable blind spots. Certain features of reality could not be compressed into a computable model or predicted by any finite rule set.
When physics is viewed through the lens of logic, it appears to contain more truth than any algorithm can ever capture.
Why a Complete Simulation Breaks Down
The issue is not whether simulations are useful. We already model weather systems, galaxy formation, protein folding, and even artificial agents that imitate thinking. These are all partial simulations.
The simulation hypothesis makes a far stronger claim: that the entire universe, including every physical process and conscious experience, could be generated from a single underlying code.
According to Faizal and his colleagues, this ambition runs into an unavoidable logical barrier. A total simulation would require:
- One formal theory capturing all physical laws — blocked by Gödel, since some truths will remain unprovable
- An internal method to identify every true statement — ruled out by Tarski’s theorem
- A finite program compressing all physical information — limited by Chaitin’s incompressibility results
Taken together, these results imply that any attempt to encode the full universe into software must leave something out. A simulation cannot be finite, computable, and fully complete at the same time.
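For readers who want the formal version of the third barrier, Chaitin's incompleteness theorem is usually stated roughly as follows, where K(x) denotes the algorithmic complexity of x. This rendering is the standard one, not drawn from the paper:

```latex
% Chaitin's incompleteness theorem: for any consistent, finitely
% axiomatised system S there is a constant c_S, fixed by the size of S,
% such that S cannot prove any specific string to be more complex than c_S.
\exists\, c_S \;\; \forall x:\quad S \nvdash \; K(x) > c_S
```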
Reality Beyond Algorithms
The authors extend their argument into the philosophy of science. If the universe contains truths that no algorithm can generate, then a purely computational view of reality is fundamentally incomplete.
They argue for a non-algorithmic understanding of the universe. This does not invoke mysticism, but follows directly from formal logic. It refers to structures and truths that cannot be reduced to fixed rules or executable code.
On this account, the universe is not software running somewhere else. It is a structure that outstrips every possible program.
To address this, they propose a Meta-Theory of Everything (MToE). Rather than being just another formal system, it would combine:
- Standard, calculable physical laws that simulations can access
- Principles that point to truths beyond formal computation
This perspective echoes ideas from Roger Penrose, who has long argued that human understanding—and possibly consciousness—relies on non-computable elements. Using Gödel’s work, Penrose suggested that the mind can grasp truths no fixed algorithm can fully encompass.
Implications for AI, Physics, and Popular Culture
These arguments are unlikely to dampen enthusiasm for science-fiction worlds or virtual reality speculation. They do, however, reshape the scientific discussion.
For artificial intelligence, the work implies a ceiling on what purely algorithmic systems can achieve. Even advanced AI may simulate reasoning and creativity, yet remain confined to what is computable. If reality includes non-computable aspects, some features will always lie beyond such systems.
For fundamental physics, the paper functions as a kind of meta-level no-go theorem. Even a future theory of quantum gravity should not be expected to generate every truth about the cosmos the way code generates a video game. Logical gaps will remain unavoidable.
Key Concepts Behind the Debate
Several technical ideas underpin the argument:
- Formal system: a collection of axioms and rules used to derive statements, such as arithmetic or symbolic physical laws
- Computable: something an algorithm can produce in a finite number of steps, at least in principle
- Algorithmic randomness: information that cannot be compressed into a shorter description or generated by a simple rule
When physicists say the universe contains non-computable truths, they mean that certain aspects of reality cannot be fully captured by finite algorithms without loss. Parts can be approximated or modeled, but a perfect digital duplicate remains impossible.
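In symbols, the Kolmogorov–Chaitin definition behind the last two entries reads as follows. This is a textbook formulation, not the paper's own notation:

```latex
% Kolmogorov–Chaitin complexity: the length of the shortest program p that
% makes a fixed universal machine U output x. A string is algorithmically
% random when no program much shorter than the string itself prints it.
K(x) \;=\; \min \{\, |p| \;:\; U(p) = x \,\}
```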
From Thought Experiments to Real-World Models
Weather forecasting offers a helpful analogy. Modern models are remarkably accurate, yet they will never be perfect because tiny uncertainties grow over time. In that case, the limit is largely practical.
The claim about the universe is stronger. Even with flawless data and unlimited hardware, logic itself guarantees irreducible gaps. Some patterns of reality cannot be captured, not because of ignorance, but because they do not fit into finite code.
The same applies to large-scale cosmological simulations. Supercomputers already model galaxy formation, dark matter behavior, and black hole mergers. These tools are powerful and informative, but no one confuses them with reality itself. According to this work, even an infinitely refined simulation could never be identical to the universe it represents.
