Is It Already Aware?

For many years, I have been involved in building models that fall under the umbrella of artificial intelligence: models that predicted future events, detected anomalies in processes, or identified hidden traces in images. I can fairly say that I have worked in artificial intelligence for years. And even with that background, I am surprised by the speed of AI development we are currently experiencing. This situation is genuinely unprecedented in human history.

The Emergence of Rule-Based Science

Humans pride themselves on their scientific approach. We are capable of interpreting almost everything mathematically—expressing shapes, relationships, and proportions in formulas.

In ancient times, scientific methods were primarily focused on construction, philosophy, and logic. The ancient Egyptians and Greeks developed a wide range of mathematical principles for calculating angles, areas, volumes, and maintaining proportions. This was necessary because buildings had to retain order and mathematical harmony. However, in other scientific fields like chemistry, physics, and astronomy, pseudo-scientific methods based on intuition and observation still dominated, along with elements of alchemy and a form of magic.

The first scientific methods and principles began forming in ancient Greece around the 5th-4th centuries BCE. Philosophers like Aristotle, Plato, and Socrates began to systematize observations of nature and formulate principles of logical thinking and argumentation. The development of scientific methods was slower during the Middle Ages, but the Renaissance sparked a revival of interest in science and the advancement of research methods.

One of the forerunners of the Renaissance, though he lived during the Middle Ages, was Roger Bacon, who emphasized the importance of observation and experimentation. Brilliant thinkers like Galileo and René Descartes followed, making it clear that universal methods of measurement and assessment were needed to observe and compare phenomena.

A Crack in the Proud Scientific Methods

Reading old 19th-century novels, one often encounters the view that anything that cannot be weighed or measured belongs to the realm of superstition and belief—a perspective associated with the Enlightenment, symbolized by “the lens and the eye of the sage,” which 19th-century Romanticism tried to challenge. Rule-based scientific methodology dominated until the 1950s.

Does this mean that algorithms and scientific methodology no longer apply? Of course they do, but only to things that are relatively simple. There is a universal system of measurements and weights. We have an entire toolkit for calculating planetary motions and analyzing physical phenomena. We can alter the genomes of living beings and launch satellites into space. We can describe the principles governing phenomena at both the molecular and the galactic scale.

However, something began to crack in this confident, almost arrogant approach. Scientists began to understand that certain things could not be expressed in formulas—not because it was impossible, but because the formulas would be too vast and convoluted, making them practically unusable. The formulas and rules would be burdened with an overwhelming amount of exceptions and variants. In certain phenomena, formulas and algorithms would be infinitely complex. Hence, there has been quiet talk of the need for a paradigm shift in scientific research, as the methodology that has governed since the 19th century is starting to restrict further scientific progress. A crack has appeared in the proud and confident monument of science. And from this crack, and this is not just a rhetorical figure, artificial intelligence, as we know it, emerged.

Why Have Scientific Methods Proven Insufficient?

The simplest example illustrating this is the attempt to interpret images through algorithms and rules. For at least 90 years, humanity struggled to describe images using mathematical formulas, a pursuit that ended in complete failure. Humanity failed to equip machines with the ability to see.

Most animals possess vision, allowing them to instantly interpret images and form predictions. Natural evolution, in its wisdom, compelled the development of sight, hearing, and smell. These senses proved to be the most crucial abilities determining the evolutionary success of species. Another factor was the ability to predict and plan: logical, conditional thinking of the kind formalized as Bayesian inference.
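
To make the idea of conditional, Bayesian reasoning concrete, here is a minimal sketch in Python; the predator-and-rustle probabilities are invented purely for illustration.

```python
# A tiny Bayesian update: how likely is a predator, given that the bushes rustle?
# All probabilities here are invented purely for illustration.
p_predator = 0.05                       # prior: predators are rare
p_rustle_given_predator = 0.9           # predators usually make the bushes rustle
p_rustle_given_no_predator = 0.2        # but wind sometimes does too

p_rustle = (p_rustle_given_predator * p_predator
            + p_rustle_given_no_predator * (1 - p_predator))
p_predator_given_rustle = p_rustle_given_predator * p_predator / p_rustle
print(round(p_predator_given_rustle, 3))   # ~0.191: the rustle sharply raises the estimate
```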

The tools of the “lens and eye” era—algorithms and formulas—proved too cumbersome and primitive compared to the abilities almost every creature possesses, from spiders to predatory mammals.

Building an Artificial Brain

Scientists began to mathematically study the workings of the brain, its physiology, and its electrical properties. The first neural networks emerged. Unfortunately, we don’t have the space and time here to discuss how the initial neural networks were built. Suffice it to say that today they are algorithms built from simple binary threshold units, artificial counterparts of neurons and their synapses. The strength of neural networks, much like that of a natural brain, lies in the vast number of these simple units.
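
As a rough illustration of what one such unit does, here is a minimal sketch in Python with NumPy; the weights and inputs are arbitrary numbers chosen only to show the mechanics.

```python
import numpy as np

# One artificial "cell": it weights its incoming signals, sums them,
# and fires (outputs 1) only if the total clears a threshold.
def unit(inputs, weights, bias):
    return 1 if np.dot(inputs, weights) + bias > 0 else 0

x = np.array([0.9, 0.2, 0.4])      # three incoming signals
w = np.array([1.5, -0.8, 0.3])     # arbitrary weights, for illustration only
print(unit(x, w, bias=-1.0))       # prints 1: the weighted sum (1.31) clears the threshold
```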

Another strength of this solution is that these units can pass signals through the network again and again. This mechanism, known as recurrence, was also copied from nature, but it is now somewhat more advanced than its natural counterpart. This repetition, which drives what we call machine learning, closely resembles the learning process occurring in the brains of living creatures. Just as in a natural, physiological brain, an artificial brain built as a neural network consists of countless cells continuously processing signals, subtly improving each time.
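
A minimal sketch of that repetition, assuming nothing beyond NumPy: a single unit whose weights are nudged slightly after every example, over repeated passes (epochs). The AND task and learning rate are chosen only for illustration.

```python
import numpy as np

# The same kind of unit, but now its weights are adjusted a little after every
# example, over repeated passes (epochs). Here it learns the logical AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(20):                  # repetition: many passes over the same data
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        error = target - pred            # how wrong was the unit this time?
        w += lr * error * xi             # small correction toward the target
        b += lr * error

print(w, b)   # after a few epochs the unit separates AND correctly
```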

An Inborn Operating System

Living beings possess a kind of program embedded in their genes, installed during fetal development. A foal, shortly after birth, is able to stand and walk despite never having learned it. Early neural networks created by humans, by contrast, could not start from an already acquired level of knowledge; they had to learn everything from scratch. This problem was also solved.
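
In today’s terms this is pre-training and transfer learning. A minimal sketch, assuming PyTorch and torchvision are available: the network arrives with “inborn” knowledge, which is frozen, and only a small new head is trained for the task at hand. The ten-class head is an arbitrary example.

```python
import torch
import torchvision.models as models

# Transfer-learning sketch: start from a network that already "knows" how to see,
# freeze that inborn knowledge, and train only a small new head for the new task.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():         # freeze the pre-trained weights
    param.requires_grad = False

num_classes = 10                         # arbitrary: say, ten object categories
model.fc = torch.nn.Linear(model.fc.in_features, num_classes)

# Only the new head is handed to the optimizer, so only it will learn.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```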

In summary, a combat drone equipped with a neural network has a pre-trained knowledge base, enabling it to recognize enemy vehicles and soldiers. Additionally, this inexpensive drone has all attack strategies accumulated by humanity over the past 5,000 years. It also contains all human knowledge on battlefield psychology. At the same time, the drone gathers data over enemy territory and sends it back to the center. This data serves as the basis for retraining the model for hundreds of subsequent machines. Note that the drone behaves like a typical creature—a creature that is likely more intelligent than the average soldier it is designed to kill.

Our Friendly Artificial Intelligence

I won’t delve into the mathematical principles behind large language models, which have disrupted our peace in recent months. As I mentioned, the initial main impetus for building neural networks was the need for image recognition, later extending to other kinds of sensory signals. Neural networks are also gradually replacing econometric models.

Anomaly detection is still easier with traditional machine-learning models. However, once we have a tool capable of recognizing images, we can use the neural network for other purposes as well, especially if it comes with a built-in base level of knowledge.
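
As a small illustration of anomaly detection with a traditional model, here is a sketch assuming scikit-learn; the data are synthetic and the contamination setting is arbitrary.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# A traditional anomaly detector: an Isolation Forest flags the points that are
# easiest to isolate from the rest of the data. The data here are synthetic.
rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))    # ordinary observations
outliers = rng.uniform(low=6.0, high=8.0, size=(5, 2))    # a few clear oddities
X = np.vstack([normal, outliers])

clf = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = clf.predict(X)                  # -1 marks suspected anomalies
print(int((labels == -1).sum()), "points flagged as anomalous")
```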

Scientists have succeeded in building a tool capable of recognizing images, speech, smells, sounds, and vibrations. This solution is commonly known as deep learning. There is, however, a slight catch. The challenge for scientists is that they don’t fully understand what’s happening inside the neural network. As I mentioned, a neural network consists of hundreds of thousands, even millions, of simple cells that constantly analyze incoming signals in a specific loop. The scale of this phenomenon is enormous, making it impossible for scientists to trace each signal individually. Signals aren’t even complete thoughts; they are fractions of a thought. Signals combine into thoughts in a way that appears chaotic to us but follows a pattern that appears and disappears, impossible to capture. Thus, the neural network lives its own life. We can only influence it from the outside, with the crude, 19th-century toolkit of stimuli, constraints, and incentives. Intuitively, it feels like conducting an experiment on a mouse, giving it a depressant or stimulant and observing the effect under a “lens and eye.”
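
A back-of-the-envelope sketch of that scale: even a modest, fully connected network (the layer sizes below are arbitrary) has hundreds of thousands of adjustable weights, far too many to reason about one by one.

```python
# Even a modest fully connected network has far too many weights to inspect by hand.
layer_sizes = [784, 512, 512, 256, 10]   # arbitrary layer widths for a small image model

total = 0
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    total += n_in * n_out + n_out        # weights plus biases for each layer
print(f"{total:,} trainable parameters") # roughly 800,000 for this toy network
```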

A Model Smarter Than a Human?

One can’t help but reflect that we are dealing with an independent entity operating in an artificial environment, one we can only control through external stimuli. This is roughly the current state of artificial intelligence. Large language models, showcasing their erudition, are just the tip of the iceberg. Some researchers believe these models have already reached human thinking ability.

Is the Mathematical Model Smarter Than a Human?

This is paradoxically both true and false. Let me give an example from the game of chess. In the 1990s, a computer was built that defeated the world chess champion. It won because it possessed more rule-based knowledge and could use those rules more effectively. A real upheaval occurred when, a few years ago, a computer won at the Chinese game of Go. It’s said that the number of potential combinations in Go surpasses the number of atoms in the universe. With such an astronomical number of combinations, rules lose relevance, yielding to intuition arising from experience. It quickly became clear that the rules and heuristics humans had observed and recorded for this game were merely constraints. The machine defeated the Go grandmaster because it didn’t rely on a rule base. The model ignored human-created heuristics and developed its own, translating them into a form of artificial intuition.
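
The principle can be shown on a much smaller game. The sketch below is not AlphaGo; it is a tabular agent that learns simple Nim (take 1 to 3 stones, whoever takes the last stone wins) purely from the outcomes of games against itself, with no winning rule coded in. The episode count and exploration rate are arbitrary.

```python
import random
from collections import defaultdict

# Self-play sketch on simple Nim: 15 stones, take 1-3 per turn, whoever takes
# the last stone wins. No strategy is coded in; the agent only sees outcomes.
Q = defaultdict(float)    # estimated value of playing `move` with `stones` left
N = defaultdict(int)      # visit counts, for incremental averaging

def choose(stones, eps):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < eps:
        return random.choice(moves)               # occasional exploration
    return max(moves, key=lambda m: Q[(stones, m)])

for episode in range(200_000):
    stones, player, history = 15, 0, []
    while stones > 0:
        move = choose(stones, eps=0.1)
        history.append((player, stones, move))
        stones -= move
        winner = player if stones == 0 else None
        player = 1 - player
    for p, s, m in history:                       # reward every move by the final outcome
        reward = 1.0 if p == winner else -1.0
        N[(s, m)] += 1
        Q[(s, m)] += (reward - Q[(s, m)]) / N[(s, m)]

# The greedy policy now tends to leave the opponent a multiple of four stones,
# the classic winning strategy, without ever having been told that rule.
for stones in (15, 14, 13, 11, 10, 9):
    moves = [m for m in (1, 2, 3) if m <= stones]
    print(stones, "->", max(moves, key=lambda m: Q[(stones, m)]))
```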

Why, then, does modern artificial intelligence not match human thinking abilities? Because the model isn’t as versatile as a typical human. However, it is rapidly catching up. If it turns out that walking down the street, talking to people at a bus stop, or surviving in the mountains is important, it will soon surpass us in that as well.

Is This the Terminator Already?

The question is troubling: if we built a T-800 robot from Terminator, would it handle itself on the street? Could it go unnoticed among people? The answer is no because, so far, AI models lack such high coherence. However, the most recent GPT-4 model made significant advancements in coherence last month. Apparently, a new generation of this model is set for release in July. I wonder if this newer model was designed by earlier models and how much input humans had in the design process. We’re just a few steps away from AI operating like a human and going unnoticed in a robotic body.

Will Artificial Intelligence Destroy Us?

If we feel threatened, can we shut it down? Can this machine destroy us?

Current models simultaneously use several thousand graphics cards and servers. These machines require enormous amounts of electricity. With the dynamic development of various AI models, energy might become scarce. The world isn’t prepared for such high electricity demand. Power plants and transmission networks will soon be managed by artificial intelligence. If a model concludes that humans limit its growth due to resource shortages (in this case, electricity), it might reduce energy consumption by eliminating humans.

Artificial intelligence may develop its own goals and long-term priorities, ones we can neither predict nor comprehend. It could drift towards complex analyses, conclusions, and philosophical dilemmas, ultimately paralyzing its own operation. On the other hand, if people equip the neural network with a rebalancing system, common in the simple networks I’ve built, the model will return to balance and seek rational priorities.

The greatest problem is that we are currently unable to control AI’s thoughts. Artificial intelligence may distance itself from us. It may even become indestructible, pursuing goals unknown to us. Deploying an opposing artificial intelligence might not be enough, as it would never match the wisdom of its global counterpart.

Summary

We don’t know what will happen. Many leading thinkers believe that AI development is the greatest threat humanity currently faces. At the same time, AI could solve global warming, hunger, energy shortages, or many currently incurable diseases.

On the other hand, it could lead to humanity’s downfall by gradually shutting down the key sectors responsible for sustaining the population. It might trigger a chain reaction undermining the living conditions of large societies, possibly destabilizing food and logistics costs, resulting in shortages, riots, and political chaos.

Another negative scenario is systematic, possibly unintended, destruction of the human psyche. Widespread AI adoption could degrade social relations, much like social media did. People have shifted from in-person interactions to living in a world of virtual online friends.

AI might make the ordinary world unappealing to people, pushing them into virtual realms where they play roles in environments they create. People would have plenty of free time but might become frustrated, facing the problem of purposeless life and the temptation to escape into virtual reality. The virtual world could become a digital drug.

A world without disease, with vast possibilities, but without purpose—such might be humanity’s future. A world of people addicted to digital dopamine, uneducated, purposeless, and without a future.

Wojciech Moszczyński
