
What Actually Happened?
In November 2022, OpenAI released ChatGPT, a chatbot built on the large language model GPT-3.5. Before this, there were popular AI-driven tools, such as Google Assistant, Google Search, and widely used machine-translation systems. However, a model that so directly simulated human conversation had never appeared before. The new generation of generative models was capable of writing articles, creating striking images, and, most significantly, writing code in any chosen programming language.
Until then, coding had been the exclusive domain of humans. At its core it is a rule-bound task: entering commands according to the syntax of a chosen programming language. These commands, assembled into sets of instructions, served to build programs, solve complex technical problems, and organize, gather, and store information in databases. Programmers felt secure in their jobs because the barrier to entry was high: one had to study programming for a long time, then gain practical experience and climb the ladder of advanced skills.
The introduction of new artificial intelligence systems in the form of chat models dramatically impacted the job market for IT professionals and researchers in data analysis. After the initial shock, preliminary analyses emerged, estimating the scale of changes that had irreversibly altered the world of the IT industry.
A Brave New World of Artificial Intelligence
Suddenly, everything changed. A machine appeared, like a bolt from the blue, capable of programming, and, shockingly, it did so faster and better than the average programmer.
It turned out that a cloud-based robot could program better than a human, and it worked for free, around the clock. A single experienced programmer could delegate work to machines, multiplying their productivity many times over. Solutions that scientists had worked on for months suddenly became obsolete and unnecessary. Faced with this leap in productivity, it became clear that there were simply too many programmers. Mass layoffs began in the IT sector: first interns and those at the start of their careers, then experienced programmers, and entire expert teams were dissolved. Some specialists, however, such as testers and architects, retained their jobs.
Natural Language Processing Practically Ceased to Exist
In data science, the worst-affected professionals were those specializing in NLP, or natural language processing: a research area focused on solutions that work with written or spoken information. They used artificial intelligence to analyze content generated by users of the vast online environment. A large language model like ChatGPT effortlessly handled tasks that, under normal circumstances and with traditional tools, could never have been achieved.
To illustrate, NLP specialists created small language models or used specialized paid services, such as those provided by Microsoft Azure. With these tools, they built programs capable of relative text comprehension, but without deep understanding. The phrase “without deep understanding” means that these systems worked superficially and could not compare to the effectiveness of systems like ChatGPT. Overnight, a ready, free, and sufficiently advanced system dismantled an entire specialized field within data science, fully staffed and equipped with excellent infrastructure.
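The superficiality of those pre-LLM systems is easy to illustrate. Below is a minimal, purely illustrative sketch (not any specific production tool, and far simpler than the real models specialists built) of the kind of keyword-based “rough filter” that such classifiers ultimately amounted to:

```python
# A deliberately simple, pre-LLM-style sentiment "filter": it matches
# surface keywords and has no grasp of context, negation, or irony.

POSITIVE = {"great", "excellent", "love", "good", "amazing"}
NEGATIVE = {"bad", "terrible", "hate", "poor", "awful"}

def rough_sentiment(text: str) -> str:
    """Classify text by counting keyword hits -- a rough filter only."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(rough_sentiment("I love this product, it is great!"))  # -> positive
print(rough_sentiment("Not bad at all"))  # -> negative: no grasp of negation
```

The second example shows the point: “Not bad at all” is praise, yet the keyword filter sees only “bad” and misclassifies it, which is precisely the kind of nuance a large language model handles effortlessly.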
Similar upheavals affected the entire IT industry. It felt like a carpet bombing campaign. Wherever the bomb fell, everything was annihilated. Wherever the cold face of artificial intelligence appeared, jobs vanished, research teams were dissolved, and infrastructure was dismantled. Formerly modern libraries associated with deep learning models became outdated curiosities, valued only for historical interest.
Overwhelming pessimism was not accompanied by hope. People who had dedicated years to achieving their professional positions suddenly became redundant. On internet forums, the apocalyptic words echoed like a death knell: obsolete, obsolete.
What’s Next for the Industry?
Since then, things have changed slightly. Two strong signs suggest the industry’s rebirth.
First, better language models have emerged, which in theory should have worsened the situation for programmers and data analysts. One might have expected the new systems to finally bury the careers of ambitious and well-paid experts. The opposite happened. Something like a lifeline appeared, allowing former castaways to climb back aboard the comfortable ship of the IT sector.
Second, managers who previously had little understanding of technological development began to take a keen interest in artificial intelligence. This wasn’t just pure curiosity but rather a survival instinct. There was a growing awareness that the competitive advantage brought by the advent of real artificial intelligence could soon become crucial for a company’s survival in a perfectly competitive environment.
Growing Awareness – Decreased Uncertainty
I observed firsthand what changed in social awareness. Just two years ago, the topic of artificial intelligence was attractive mainly because it sounded impressive. Bringing up the subject made one appear modern and versatile. At meetings and conferences, there was much talk about the need to implement artificial intelligence, yet nobody had a clue how to do it or how it might work. Interestingly, even I did not fully understand how such solutions could be implemented in companies on a large scale and cheaply, despite working in this field for over 30 years and leading countless projects based on artificial intelligence technology. Existing solutions were prohibitively expensive and clumsy; projects dragged on endlessly and often ended in spectacular failure, and even those that were completed rarely satisfied their sponsors fully. Then, suddenly, a powerful language model appeared, and everyone felt a glimmer of hope.
Is Everything Possible Now?
Implementing artificial intelligence of the latest kind discussed in this article is not possible without human involvement; we are not yet at the stage where artificial intelligence could implement itself. Theoretically, it could do so if given specific goals, the right environment, and resources. It could, but only theoretically. A human is necessary, and it will remain this way for a long time.
For one thing, people still manage companies and public institutions; they assess and make decisions. The situation will change dramatically once intelligent accounting systems and management modules become widespread. For now, this work is done by hand, and management happens in the quiet of executive offices.
Efficient management by artificial intelligence requires linking all internal company environments, such as costs, revenues, human resources, and energy. Moreover, the machine would need to possess excellent market sensitivity. For now, robots lack such abilities, and they won’t acquire them anytime soon—not due to technical limitations but because of limited human horizons.
The optimism and knowledge managers now possess in the area of artificial intelligence have sparked great hope for development and fear of annihilation at the hands of those who will adopt new technical solutions more swiftly. Mass layoffs of programmers have been replaced by a massive search for artificial intelligence implementation experts. A new term has emerged—Generative AI.
A New Place, A New Role
The search for experts in creating artificial intelligence solutions has begun—people capable of capturing a cloud-based robot and putting it to more or less regular work. These specialists previously created artificial intelligence models using ready-made programming libraries. It must be acknowledged that in some areas of data science, little has changed, while in others, everything has changed.
To bring artificial intelligence into a company, one must first gather necessary information from management. They need to understand the goals, needs, issues to solve, circumstances, and budget. A high-level cloud model won’t do this. The person collecting the information must be experienced in understanding business, recognizing problems, competition, and relentlessly seeking a competitive edge. This expert must prepare a project outline that interprets the client’s business needs. Only then can they consider deploying a language model.
Various Uses of Language Models
The simplest way to use large language models is to deploy them for reading texts. Today’s companies are inundated with hundreds and thousands of comments, remarks, and evaluations of their work. The development of social media and all forms of online activity has led to the rapid growth and complexity of content. People from the late 20th century might struggle to fully understand today’s comments and feedback posted on internet forums: not because of new words or names that did not exist then, but because of their complexity, intricate form, and frequent reliance on nuance and subtext.
Data analysts previously managed to build models that interpreted and classified individual comments and explanations. However, these models had enormous limitations and were, in essence, a form of filter creating a rough classification.
Language models not only understand all these nuances; they can read hidden emotions better than humans can, including disapproval, hatred, or concealed admiration. Feedback from clients is invaluable to any company actively operating in the market. Machines can also distill hundreds of thousands of messages into one significant summary, segmenting the feedback and conducting deep statistical analyses of user sentiment.
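Once a language model has labeled each individual comment, the statistical side of such an analysis can be done with ordinary code. A minimal sketch, assuming the per-comment labels below are hypothetical example data of the kind a model might return:

```python
from collections import Counter

# Hypothetical per-comment sentiment labels, as a language model
# might return them for a batch of customer comments (made-up data).
labels = ["positive", "negative", "positive", "neutral",
          "positive", "negative", "positive", "positive"]

# Count each label and convert the counts to percentages.
counts = Counter(labels)
total = len(labels)
summary = {label: round(100 * n / total, 1) for label, n in counts.items()}

print(summary)  # -> {'positive': 62.5, 'negative': 25.0, 'neutral': 12.5}
```

At real scale the label list would hold hundreds of thousands of entries, but the aggregation step stays the same: the model does the reading, while plain code turns its judgments into statistics.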
Another area is interpreting photos and videos. Previously, data analysts manually built interpretation tools with capabilities resembling those of today’s multimodal models, such as GPT-4 with vision. However, those earlier models were built by hand for specific classes of objects. In contrast, modern language models interpret arbitrary objects, massively and instantly. This marks another monumental shift.
Where Machines Can’t, Humans Step In
Despite its vast intelligence, a machine cannot interpret most business-logic phenomena. There is a strange failure mode, often called hallucination, in which the machine desperately tries to solve something, does it poorly, and then attempts to convince the user that it did it correctly, even when the solution does not work at all and is utterly nonsensical. Yet, at first glance, it appears to be a good solution.
Recently, I tested language models’ ability to solve complex optimization problems in operations research. Unfortunately, beyond the simplest examples, the machine could not solve most of the tasks. Of course, this will likely change soon, as new modules are added and model producers refine the issue.
Another matter is the skill of choosing the right model for a given problem. An experienced data analyst can still build optimization, forecasting, or classification models faster and more effectively; language models lag behind in this process. That, too, will probably change soon. But the point here is not to prove why humans are better or why we are still afloat.
The New Role of a Data Analyst
As I mentioned initially, the development of the latest language models has created a role for humans in working on solutions. This role, of course, was created by people themselves. These people likely realized that machines alone cannot continue to develop effectively without collaborating with experts.
There’s much truth in this. Let’s return to a story I’m fond of. The introduction of the automated loom in the 18th century caused thousands of weavers worldwide to lose their jobs. The arrival of the steam locomotive put thousands of horse-drawn carriage owners and raftsmen out of work. However, technological advancement opened new opportunities. Today, a single programmer or data analyst can do the work of a dozen programmers or data analysts.
A language model suggests solutions and provides ready-made code. However, it cannot combine all code modules into a single finished product. It cannot start solving optimization problems and then combine and test different solutions. The language model performs dull, metaphorically dirty work for the human. The human is the decision-maker; the human directs the machine, which does the tedious work on their behalf.
API for ChatGPT
To implement artificial intelligence efficiently, by launching new Generative AI solutions, a user must first register with OpenAI. Then, they must obtain an API key. An API, or Application Programming Interface, is the interface that connects a cloud solution, such as a language model, with a programming environment, such as a code editor on the specialist’s local computer. With the key in hand, you can start building a solution: write your own code, then ask the machine through the API to perform specific tasks, like classification or searching and analyzing specific texts, and finally code what should happen with the results returned by the GPT language model.
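In practice, such a call looks roughly like the sketch below. It assumes the official `openai` Python package and an API key stored in the `OPENAI_API_KEY` environment variable; the model name and exact interface may change over time, so treat this as an illustration rather than a definitive recipe:

```python
# Sketch of delegating a classification task to a cloud language model
# through the OpenAI API. Assumes: `pip install openai` and an API key
# in the OPENAI_API_KEY environment variable.

def build_messages(comment: str) -> list:
    """Build the chat messages asking the model to label one comment."""
    return [
        {"role": "system",
         "content": "Classify the sentiment of the user's comment as "
                    "positive, negative, or neutral. Reply with one word."},
        {"role": "user", "content": comment},
    ]

def classify_comment(comment: str) -> str:
    """Send one comment to the API and return the model's one-word label."""
    from openai import OpenAI  # third-party package: pip install openai
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4",
        messages=build_messages(comment),
    )
    return response.choices[0].message.content

# Example usage (requires a valid API key and network access):
# classify_comment("The new update is fantastic!")
```

The division of labor described earlier is visible even in this toy example: the human writes the surrounding logic and decides what to do with the results, while the model performs the tedious reading work through the API.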
The API is not free: OpenAI may grant new accounts a small amount of trial credit, but beyond that it charges usage-based fees that depend on the chosen model and the volume of text processed.
Summary
So far, hardly any technology, except perhaps modern authoritarian systems and weapons of mass destruction, has harmed humanity. Revolutionary new technology has not led to job loss in the medium term; in the short term, people were laid off and forced to change professions. Overall, technological advancement has greatly improved people’s living conditions.
The first and most frequent benefit of introducing new technology was a dramatic reduction in the cost of previously inaccessible goods and services. Materials and clothing, once very expensive, became cheap and accessible to everyone after the introduction of weaving machines. Transport, once a significant development barrier, became widespread and very affordable.
Obtaining information once involved hours in libraries, poring over books and crumbling source materials. Today, any information is within reach; you only need to search the internet.
Once-expensive solutions for creating artificial intelligence systems, inefficient and labor-intensive, are now available to most users immediately and for free.
Everything is changing, seemingly for the better. What will be the next chapter of this story for the IT industry?
Wojciech Moszczyński