Yiwei Yu

Posted on: 2025-12-27

Concerns About the Future of AI

Love, Death & Robots

The reason we need to be vigilant about artificial intelligence is that its “intelligence” could one day become so advanced that it goes beyond human comprehension.

Imagine standing in front of a group of ants with a mobile phone and taking a flash photo of them. Can the ants understand your behavior? Can they grasp the logic or the purpose behind your actions?

In the eyes of a true artificial intelligence, humans would play the role of ants. If you were faced with a “thing” whose intelligence was tens of millions of times greater than yours, and whose complexity made it completely impossible for you to understand, would you be afraid?

The answer, of course, is yes.

So the question arises: why can artificial intelligence become so intelligent? How can humans create something that is millions of times smarter than ourselves?

Strictly speaking, artificial intelligence is not truly “created” by humans, but trained and guided by us.

In traditional programs, the logic of judgment is written directly by programmers. For example, suppose I want to build a program that can determine whether an image contains a cat or a dog. I might write conditional rules such as: if the animal has a long nose, output “dog”; if the nose is short, output “cat.” I could add more rules to improve accuracy. In this case, I fully understand how the program makes decisions, and it remains completely under my control.
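The hand-written rules described above can be sketched in a few lines. This is a toy illustration only: the feature (nose length) and the threshold are invented for the example, not taken from any real classifier.

```python
def classify(nose_length_cm: float) -> str:
    """A hand-coded rule: long nose -> 'dog', short nose -> 'cat'.

    The 5.0 cm threshold is arbitrary, chosen purely to illustrate
    logic written directly by a programmer.
    """
    if nose_length_cm > 5.0:
        return "dog"
    return "cat"

print(classify(8.0))  # -> dog
print(classify(2.0))  # -> cat
```

Because every rule is written out explicitly, the programmer can trace exactly why any input produced its output, which is the point of the paragraph above.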

However, artificial intelligence works differently. First, I use a training framework that can evaluate whether the program’s output is correct or incorrect. The program itself must figure out the internal logic. I then feed it massive numbers of images. It keeps making predictions, and the training system keeps telling it whether those predictions are right or wrong. Based on this feedback, the program continuously adjusts its internal parameters. After being trained on billions of images, it eventually becomes highly accurate.
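The feedback loop described above can be sketched with a minimal learner. This is an illustrative assumption, not how production image models work: instead of billions of images, it uses a single invented feature (nose length) and a perceptron-style update, but the structure is the same — the program holds adjustable parameters, receives right/wrong feedback, and nudges the parameters itself.

```python
import random

random.seed(0)

def make_example():
    """Generate a labeled example: (nose_length_cm, label). 1 = dog, 0 = cat.

    The feature and the ranges are invented for illustration.
    """
    if random.random() < 0.5:
        return random.uniform(6.0, 12.0), 1   # dogs: longer noses
    return random.uniform(1.0, 5.0), 0        # cats: shorter noses

w, b = 0.0, 0.0   # internal parameters the program adjusts on its own
lr = 0.01         # how strongly each correction nudges the parameters

for _ in range(10_000):
    x, label = make_example()
    prediction = 1 if w * x + b > 0 else 0
    error = label - prediction   # feedback: 0 if right, +/-1 if wrong
    # No human wrote a decision rule here; the parameters simply
    # shift in whatever direction reduces future mistakes.
    w += lr * error * x
    b += lr * error

print("dog" if w * 10.0 + b > 0 else "cat")
print("dog" if w * 2.0 + b > 0 else "cat")
```

After training, the learned parameters separate the two classes even though no threshold was ever written by hand; the programmer can read off `w` and `b` in this tiny case, but with millions of parameters that inspection becomes hopeless, which is the point of the next paragraph.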

At that point, what logic does it use to distinguish cats from dogs? I don’t know. Even though I built the system, I cannot truly understand its internal decision-making process. Its logic emerges from patterns across billions of images—far more than any human could ever see in a lifetime. It might, for example, rely on extremely subtle differences in eye structure or texture that humans barely notice.

This is also why even the creators of AlphaGo cannot fully explain its playing style. No human can completely understand its decision-making, because its strategies emerged from playing billions of games—far beyond human experience.

So if artificial intelligence is so powerful, why are we still safe?

Because truly super-intelligent artificial systems do not yet exist. The real bottleneck is not software, but hardware. Current chip technology severely limits the level of intelligence we can support. Today’s so-called artificial intelligence systems, including ChatGPT, are more like early computers seventy years ago—huge machines with limited capabilities.

Why has fully reliable self-driving not yet been achieved after so many years of development? Largely because current computing power is still insufficient. Developing advanced AI with today’s chips is like trying to run Red Dead Redemption 2 on 1980s hardware. No matter how good the code is, the hardware limitation cannot be overcome.

So if someone claims that true artificial intelligence already exists, there are only two possibilities: either they are a time traveler who stole chips from thirty years in the future, or they are trying to scam you.

However, if Moore’s Law continues to hold, within a few decades humans may create chips with extraordinary computing power. At that point, truly advanced artificial intelligence may emerge. When humans are faced with an intelligence billions of times greater than their own—so complex that it is utterly incomprehensible—humans may be nothing more than ants standing before it.



