Mathematical limits and structure of superintelligence

The Superintelligence Ceiling: Why SAD = 3 — and Why This Changes Everything

· 13 min read
Max Sereda
Unitary Holonomic Monism

In 2014 Nick Bostrom published "Superintelligence: Paths, Dangers, Strategies," posing what became the defining question of the decade: what happens when AI surpasses humans? The working hypothesis: a superintelligence capable of recursive self-improvement amplifies itself without limit and becomes incomprehensibly powerful. An "intelligence explosion."

This hypothesis was never proven. Nor was it refuted. It was simply accepted by default, because no one presented a mathematical argument that would bound it.

This post is such an argument. Not a philosophical one, not an engineering one, but an information-theoretic one: from the structure of the Fano projective plane PG(2,2) it follows that the depth of recursive self-modelling of any finite system does not exceed 3. Not "approximately 3." Not "3 for current systems." Exactly 3, for any system, forever.
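The Fano plane PG(2,2) invoked here has a standard combinatorial construction (not specific to this post): take the 7 nonzero vectors of GF(2)^3 as points, and take as lines the triples of points that XOR to zero. A minimal sketch, assuming that representation, verifies the incidence structure in which the number 3 appears: 7 points, 7 lines, exactly 3 points per line, exactly 3 lines through each point, and any two points on exactly one common line.

```python
from itertools import combinations

# Points of PG(2,2): the 7 nonzero 3-bit vectors over GF(2),
# encoded as the integers 1..7.
points = list(range(1, 8))

# Lines: triples of points summing to zero over GF(2) (bitwise XOR).
lines = [frozenset(t) for t in combinations(points, 3)
         if t[0] ^ t[1] ^ t[2] == 0]

assert len(points) == 7
assert len(lines) == 7
assert all(len(l) == 3 for l in lines)           # 3 points per line
for p in points:                                  # 3 lines through each point
    assert sum(p in l for l in lines) == 3
for a, b in combinations(points, 2):              # projective-plane axiom:
    assert sum(a in l and b in l for l in lines) == 1  # one line per point pair

print(len(points), len(lines))  # → 7 7
```

The constant 3 here is the order-plus-one of the plane (q + 1 with q = 2), which is the structural quantity the post's argument leans on; whether it bounds recursive self-modelling is the claim the rest of the post must establish.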