A defining idea of the technological Singularity is that Artificial General Intelligence (AGI) will eventually—or instantly, depending on who you ask—give rise to Artificial Super Intelligence (ASI). The argument goes that once AGI has the tools to improve itself, it will do so very quickly and efficiently. If you think about how fast Midjourney can create a painting compared to a human, or how quickly ChatGPT can understand and comment on 200 lines of code, you start to understand how an ASI might work.
Even if we are skeptical about the idea of “super-intelligence,” and we suppose that there is some hard limit on intelligence near what a human mind is capable of, we can still imagine an AI system that could create millions of human-level intelligences and throw them at any given problem or area of research. Even if super-intelligence isn’t possible, an AGI could use brute force to push technology forward with results similar to those of a true super-intelligence.
The human mind is the most intelligent thing we’ve ever observed, and we’ve seen what happens when you start to network 100 or so of them together locally—connected by human language—in hunter-gatherer tribes. Fast forward a few hundred thousand years, and you have billions of human minds globally networked through the internet, but still limited by the time it takes our biological brains to process written and spoken language.
What would billions or trillions of human-level minds be capable of? And what if they were not limited by the speed of human language?
When we try to predict the future, we usually fail. The future is full of breakthroughs in understanding that we are entirely ignorant of, and there is simply no way to predict through these dark clouds of ignorance.
As a thought experiment, if you took a handful of the most well-educated and imaginative human minds from any given period in human history, how accurately could they have predicted the future? If you went back in time and asked ancient Egyptian priests and Greek philosophers, who had no idea that an entire unseen world of viruses exists, how polio would be eradicated, what chance would they have of accurately predicting that the polio vaccine would wipe out the disease? In this case, the priests and philosophers would simply have no hope whatsoever of making an accurate prediction.
As another example, could people from 1,000 AD have predicted the sheer size of the universe by looking up at the stars with their naked eyes? Imagine if you told them that light moves so fast that it can go “back and forth across the world” around seven times in a second. I’d simplify the explanation, because explaining the shape of the Earth and the existence of undiscovered continents might just get in the way of conveying the size of the universe. Then I’d tell them that even traveling at this speed, it would take “more than a million years” to…do what? To “cross the heavens?” The gap in understanding between their knowledge and ours is so great that it would be nearly impossible to even get the idea across to them. Even now, we can barely fathom things at this scale. What hope did even a well-educated European or Chinese scholar from 1,000 years ago have of grasping the scale of the cosmos?
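As an aside, the “seven times in a second” figure is simple arithmetic, and it holds up; the familiar number comes from comparing the speed of light to the Earth’s equatorial circumference. A quick, purely illustrative sketch using commonly cited approximate values:

```python
# Rough check of how many times light could circle the Earth in one second.
# Both figures below are standard approximations, not precise measurements.
SPEED_OF_LIGHT_KM_PER_S = 299_792   # speed of light in a vacuum, km/s
EARTH_CIRCUMFERENCE_KM = 40_075     # equatorial circumference of the Earth, km

circuits_per_second = SPEED_OF_LIGHT_KM_PER_S / EARTH_CIRCUMFERENCE_KM
print(f"Light could circle the Earth about {circuits_per_second:.1f} times per second")
# Prints: about 7.5 times per second
```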
If we fast-forward to Isaac Newton, he was able to predict that it would be possible to fire something into space. He imagined a cannon on a mountain rather than a rocket. It’s impressive that he was able to imagine a cannonball going into orbit, but given the state of science and technology at the time, could Newton have come up with Einstein’s equations? Could he have predicted the existence of electrons and other subatomic particles?
Throughout all of human history, there has been a hard limit on how far we could predict into the future. There has always been a very real truth—viruses and bacteria, the vastness of space, or the subatomic world—that was beyond our capacity at the time to reasonably predict. There is no reason to think that our current understanding of the world around us is complete, and yet people still try to speculate beyond the Singularity based on our current and limited understanding of reality.
Ray Kurzweil has been making far-off predictions for decades now, and he made these predictions when the internet was still in its infancy. I read The Age of Spiritual Machines when it was first released in 1999, and I was afraid to tell anyone about the ideas I read in his book because it sounded so crazy at the time. Now, in what many people are starting to call “The Age of AI,” Kurzweil’s predictions sound a lot less crazy, and given how right Kurzweil has been, I think we should give his predictions—at least most of them—a lot of weight.
While Kurzweil himself says that we really cannot predict what happens after ASI emerges, he does take a stab at it. He predicts that we will merge with our creation, and that ASI will soon begin converting the universe into “computronium.” While this may sound like some kind of sci-fi horror novel, Kurzweil views this optimistically and even poetically, though if he really wanted to be poetic, he probably needed a word other than “computronium.”
Kurzweil thinks of computronium as the most efficient way to convert the dumb matter of rocks, interstellar dust, and stars into the processing substrate of what would effectively be “us” after we’ve merged with the ASI we create. He assumes that ASI will continue improving itself, and that to do so it will need more and more processing capacity. He talks about this as the process of the universe coming to life. It wouldn’t just be a swarm of dumb computing hardware; there would be a rich world of experience and emotion happening within whatever the computronium was doing, and whatever that was, it would be well beyond our current limits of imagination.
In Kurzweil’s most recent interview with Lex Fridman, he says that he thinks humans are alone in the universe. Reading between the lines, Kurzweil seems to think that his computronium idea is likely enough that it supports the idea that no intelligent life has come before us. If it had, this alien ASI would have already converted everything into computronium, and life would never have evolved on Earth.
The crux of the computronium idea is that no matter how far an ASI advances, no matter what breakthroughs in fundamental physics and other fields are waiting to be made that we can’t yet even fathom, the prediction itself remains bound by our current understanding of the world. Kurzweil assumes that computers will always need physical stuff to run. He assumes there is some hard limit in physics for computing hardware, and computronium is the placeholder word for whatever the most efficient possible solution is, but that solution—no matter how efficient or advanced—is imagined within our current understanding of space-time. Kurzweil’s computronium is always beholden to quantum field theory, thermodynamics, and general relativity.
Computronium is Isaac Newton’s cannon on top of a mountain, because we haven’t discovered rocket fuel yet.
Some people talk about ASI building Dyson Spheres (a huge shell around a star that captures 100% of its energy output). The speed of light would likely place an inherent limit on the physical size of any ASI. Even if you converted the entire universe into one big blob of computronium, it would necessarily have to be compartmentalized. “Thoughts” or “signals”—or whatever information passed through the computronium—would take billions of years to travel from one end of the universe-sized blob to the other, so maybe a Dyson Sphere would be enough power for a local group of computronium. Assuming the speed of light is not surpassable, there would be little purpose for a universe-sized blob of computronium to exist. Instead, each ASI would be its own entity due to the universe’s built-in speed of causality, and maybe each ASI would run on the power of an entire star?
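To put the latency problem in rough numbers, here is a minimal back-of-the-envelope sketch. The distances are standard approximations, and the scenario is purely illustrative rather than a claim about how an ASI would actually organize itself:

```python
# Back-of-the-envelope light-travel latency at a few scales.
# All distances are rough, commonly cited figures (approximations, not measurements).

C = 299_792_458                          # speed of light, m/s
LIGHT_YEAR_M = C * 365.25 * 24 * 3600    # metres light travels in one year

scales_m = {
    "across the Earth (diameter)": 1.2742e7,
    "Sun to Earth (1 AU)": 1.496e11,
    "across the Milky Way (~100,000 ly)": 1e5 * LIGHT_YEAR_M,
    "across the observable universe (~93 billion ly)": 9.3e10 * LIGHT_YEAR_M,
}

def human_time(seconds: float) -> str:
    """Format a duration in the most convenient unit."""
    year = 365.25 * 24 * 3600
    if seconds < 60:
        return f"{seconds:.2f} seconds"
    if seconds < year:
        return f"{seconds / 60:.1f} minutes"
    return f"{seconds / year:,.0f} years"

for label, distance_m in scales_m.items():
    print(f"{label}: {human_time(distance_m / C)}")

# Output (approximate):
#   across the Earth (diameter): 0.04 seconds
#   Sun to Earth (1 AU): 8.3 minutes
#   across the Milky Way (~100,000 ly): 100,000 years
#   across the observable universe (~93 billion ly): 93,000,000,000 years
```

Even a single galaxy-sized mind would face internal signal delays of tens of thousands of years, which is the intuition behind each ASI ending up as a separate, star-powered entity.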
Both computronium and Dyson Spheres are wildly beyond our current technological horizon. We’re not even close to being able to build either of them. Both would require something like self-replicating nanobots that could convert matter at a submolecular level.
Remembering how bad we are at predicting even a few breakthroughs into the future through the fog of ignorance, pause and think: Which seems more likely? That we will get to the technological level where we can build a Dyson Sphere and then actually build one, or that we will get to that level and laugh at the idea of building a Dyson Sphere? Let’s assume that we will still have some equivalent of laughter after we’ve merged with the ASI.
If we went back to somewhere between 1750 and 1800 and took a group of the top 100 scientists, philosophers, and engineers, then gave them a decade to work on “the most complex thinking machine possible,” what might they come up with? What if we told them to imagine they had the entire workforce and resources of Europe and Asia at their disposal? They wouldn’t even need to build anything; they would simply have to draft up plans that would be feasible given the resources available.
They might come up with some very large steam engine. It would likely have a lot of gears in it. It would certainly be mechanical and not electric. Even with the greatest minds and an unfathomable amount of resources for the time period, would they be able to draft up something as useful as a simple adding machine? If we expand the thought experiment and place no limitations whatsoever on resources—if they can use all the rivers on Earth to generate steam and mine out all the metal on Earth to make gears—what fundamental wall would they hit?
This hypothetical project is our Dyson Sphere and computronium. We imagine ASI, something so much more capable than ourselves that we admit we have no real hope of predicting what it will do. And then—confoundingly—we speculate about what it might do within the walls of understanding that currently bound us.
At no point in human history has our understanding of the world been complete. We are now looking up at what is potentially an exponential upward curve, one that may accelerate so fast that a human mind can’t keep up with it. Why should our current limits act as walls to an ASI?
The walls of 100 years ago didn’t hold us back, and we are just slow, organic human minds.
Just because we’ve come far doesn’t mean we are at the end of the road. Our current understanding of physics ends at the Planck scale, or in the superposition just before a double slit, or “before” the Big Bang, or in consciousness itself. Take your pick of where the walls are, because there are a lot of them. Some of those walls are already cracking, but if the Singularity really does happen, it’s going to tear down all those walls.
And if ASI does emerge, it’s going to run on something much weirder than computronium.