AI – A Word from a Cautious Enthusiast

Will AI snatch away our jobs, leaving us facing an imminent apocalypse? Is it possible for every ordinary person to become a modern-day Picasso or a genius, thanks to AI? Is this the end of civilization as we know it?
Or could AI be the incredible tool that skyrockets our productivity, potentially doubling the world’s GDP in just a decade?
Let’s explore the common arguments against AI, particularly concerning its accessibility to the masses, and see what history can tell us about our possible future.

Let’s start by formulating the main arguments of opponents of artificial intelligence:

  1. This may lead to the demise of our world: Opponents of AI express concerns that the advancement of artificial intelligence could have catastrophic consequences for humanity and threaten our very existence.
  2. Artificial intelligence is likely to take away all our jobs: Critics argue that the rapid development of AI technologies will result in widespread automation, leading to massive job displacement and unemployment for humans.
  3. It could potentially diminish or even extinguish human creativity: Some opponents argue that reliance on AI systems and algorithms might stifle human imagination and originality, reducing the scope for creative expression and innovation.
  4. We do not fully understand how AI works: Skeptics emphasize the lack of comprehensive understanding and control over AI systems, raising concerns about potential unintended consequences and risks associated with their operation.
  5. Research should be stopped, as we might reach the point of artificial intelligence becoming sentient: Some argue for a halt in AI research and development, fearing that progressing beyond a certain threshold could lead to the emergence of superintelligent machines that surpass human intelligence and gain consciousness.


Also, I believe that a discussion about AI without at least attempting to understand its defense would go against the classical Roman legal principle “audiatur et altera pars” – let the other side be heard. To clarify, I use GPT to translate my Polish text into English, but the thoughts expressed are still my own.

When I present GPT’s contribution, I will clearly indicate that part of the article in green.

Me: Would you agree that these are the main counterarguments, or do you think something more should be added?

AI – ChatGPT: The counterarguments you’ve mentioned cover some of the key concerns raised by opponents of artificial intelligence. However, it’s worth noting that the field of AI is vast and complex, and there are various other perspectives and counterarguments that could be considered. Some additional points that are often discussed include:

6. Ethical implications: Critics argue that AI raises ethical dilemmas, such as the potential for biased algorithms, invasion of privacy, and the use of AI in autonomous weapons.
7. Social inequality: There are concerns that AI advancements may exacerbate existing social inequalities, as access to and benefits from AI technologies could be disproportionately distributed.
8. Dependence and control: Opponents worry about the increasing reliance on AI systems, potentially leading to loss of control and autonomy over critical decision-making processes.
9. Unforeseen consequences: Critics highlight the unpredictability of AI systems and the possibility of unintended consequences arising from their deployment, which could have far-reaching effects.
10. Human connection and empathy: Some argue that increased reliance on AI may erode genuine human interactions and emotional connections, leading to a loss of empathy and social cohesion.

My favorite metaphor for explaining AI to a layperson is to compare it to the T9 predictive-text function we had on old cell phones. When typing a word, the phone would sometimes suggest the right word and sometimes the wrong one (I personally turned it off; honestly, I hated it). Well, AI is like a highly complex T9 – not to be confused with the T-1000.
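To make the T9 comparison a bit more concrete, here is a minimal sketch in Python – my own illustrative toy, not how GPT is actually built. It simply counts which word tends to follow which in a tiny sample text and then suggests the most frequent follower. Modern language models are incomparably more sophisticated, but the basic idea of predicting the next word is the same.

    from collections import Counter, defaultdict

    # A toy "T9 on steroids": learn which word tends to follow which,
    # then suggest the most frequent follower.
    corpus = "the cat sat on the mat and the cat slept on the sofa".split()

    followers = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        followers[current_word][next_word] += 1

    def suggest(word):
        """Return the next word most often seen after `word`, or None."""
        counts = followers.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(suggest("the"))  # -> 'cat' (appears twice after 'the')
    print(suggest("on"))   # -> 'the'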

As a cautious enthusiast of AI, I aim to present compelling arguments in favor of continuing AI research and development, while also addressing the concerns surrounding its impact. Let me begin with a few lessons from history.

I. The Y2K Bug: A Hyped-up Fear that Never Materialized
Many younger readers may be unaware of the Y2K scare that gripped our society. As the date rolled over from 1999 to 2000, fear spread that critical computer networks, including personal devices, would suddenly cease to function. Experts, much like in the current AI discourse, expressed uncertainty and unpredictability about the outcome. However, as we now know, those fears proved unfounded, and no such calamity occurred.

II. From Protests to Progress: The Story of New Mediums
Consider the emergence of photography, which instilled such fear in painters that they took to the streets, protesting the new invention and even demanding its prohibition. Yet, over time, they found themselves struggling to diminish its significance. Similarly, when cinema arrived, accompanied by sound (leading to the demise of actors with unpleasant voices) and later color, it seemed as though cinema would replace other forms of entertainment. However, the term “replace” is inadequate, as each “older” medium found its niche rather than being entirely supplanted. This pattern of progression remains consistent. First, every new medium demands increasing technical knowledge, growing more complex with time. Second, the ease of creating something directly influences its affordability (think of the contrasting prices of painting and photography today). Lastly, each new medium gives everyday individuals additional tools for creation, even if their results are often random rather than artistically profound – yet sufficient for preserving cherished family memories. Nonetheless, the specialized knowledge required to achieve repeatable results still ensures that those who possess it are well compensated, if not better off than before.

Note: In English history, the Luddites serve as a poignant example of individuals who vehemently opposed the relentless march of industrialization during the early 19th century. Rooted in the textile industry, these skilled workers fiercely resisted the growing encroachment of machinery into their traditional craft. Engaging in widespread protests, machine-breaking, and acts of sabotage, their activities centered around regions such as Nottinghamshire, Yorkshire, and Lancashire. However, the British government responded with force, deploying troops to quell the movement, resulting in the capture, execution, or imprisonment of many Luddite leaders. While the Luddite movement ultimately waned by the mid-1810s, its legacy serves as a reminder of the tensions that arise when technological progress clashes with livelihoods and societal values.

III. Lessons from the Past: Examining Historical Concerns of Job Displacement

Now, let’s take a look at the past 200 years and consider if those concerns have ever materialized. That’s all I ask.

Let me expand using the example of the recent digital revolution in photography. When digitalization arrived, it undeniably revolutionized the medium. First, let’s acknowledge that it caused some harm. Kodak, the film manufacturer, eventually went bankrupt. Photographers who refused to embrace digital technology began losing clients. However, those who learned how to use the new medium remained in the market, not only expanding their capabilities but also venturing into surprisingly interesting directions.
As a devoted enthusiast of analog photography, I witnessed the decline of the medium with great sorrow. Only those who understand the emotions that accompany an individual when they witness an image slowly emerging on paper—a moment frozen in time—can truly comprehend the deep attachment to the art of working in the darkroom.
However, work is not solely about doing what you already know and enjoy. It necessitates growth and adaptation—such is the nature of the world, and it has always been so.
On the other hand, digital photography gave rise to new educational institutions, providing employment to numerous instructors and fueling the creation of thousands of YouTube channels, among other developments.
And if someone genuinely desires to continue working with analog methods, they still can. Everything is still available, albeit in smaller quantities and at a higher cost.

As a humorous aside, I can’t recall people worrying about cameras appearing in cell phones and thousands of photographers losing their jobs as a result. There were no voices saying, ‘Perhaps we should reserve judgment until we fully comprehend the effects’.
So, dear colleagues, while it is gratifying to personally write programs or create graphics (which I also do as a hobby), with AI, we will all ascend to a higher level.
And since lifelong learning is essential, humans must embrace the process of continuous education.

I am not only an experimenter in AI graphics but also hold a Master of Arts degree from an art academy. I find it quite amusing that some people worry about AI suppressing creativity. In fact, AI has been the reason I was able to write this article and has encouraged me to explore new graphic methods.

IV. Artificiality and Fakeness: The Fear That AI Will Inundate Us with Artificial Cheapness Is Misplaced

Let’s start with the fact that image manipulation, for example in photography, is as old as the medium itself. Even during the American Civil War, photographers would “pose” the bodies of fallen soldiers to create a more dramatic effect. In 1857, Rejlander assembled a single image from 32 separate negatives! Moreover, deception and fakery in media, whether in written media like newspapers or in the realm of visual art, have existed as long as these mediums themselves.

For instance, I encourage you to explore the beginnings of the press mogul Hearst (“You furnish the pictures, and I’ll furnish the war”). Using mass media to manipulate people is not a product of the 20th century or of recent times, as some self-centered individuals may believe.

AI creators and researchers claim that AI can help identify fake information and detect manipulated photos. Chatbots like GPT-3 can, for example, present balanced views and a wide range of opinions. The human still makes the final decision, better informed than if they relied solely on biased articles from newspapers or portals representing specific interests or political ideologies.

I asked GPT to comment on that; this is what it ‘said’:

AI’s potential in combating fake information and enhancing information literacy is promising. By leveraging AI technologies like GPT-3, individuals gain diverse perspectives for informed decisions. However, critical evaluation of information from various sources, including AI-generated content, remains crucial for a comprehensive understanding of complex issues.

V. God, Pharaohs, and Priests: The Fear of the Pharisees

Since we are being threatened that AI will become God, allow me to bring up the topic of priests – specifically, the priests of ancient Egypt.
Boleslaw Prus, in ‘Pharaoh’, describes a caste of priests who worried that valuable knowledge would fall into the hands of the primitive masses. How could they maintain control if the people knew that a solar eclipse was merely a physical phenomenon and not the gods’ punishment for disobedience to the priests’ whims?

By the way, I highly recommend a novel that beautifully portrays the mechanisms of manipulation employed by priests to preserve their power. Fear played a central role.

Sound familiar?

Will AI become sentient or aggressive? Who knows what will happen in 20 years, but what about today? Not likely. I am less scared of AI than I am of the threats and extensive manipulation employed by those who argue for halting AI research and its democratization.

VI. If You Don’t Know, Don’t Talk: Go and Check It Out for Yourself

AI research is an interdisciplinary art. Even twenty years ago, scientists at Berkeley attended lectures by colleagues from diverse fields: history, behavioral biology, neurolinguistics, and so on. It’s a never-ending list, really.

And GPT’s comment on that paragraph:

The reference to ancient priests and their manipulation tactics serves as a reminder that fear has long been used as a tool for control. It underscores the importance of critical thinking and independent verification of information. The interdisciplinary nature of AI research highlights the need to draw knowledge from various fields to fully comprehend its potential.

Professor Robert Sapolsky, a renowned neuroscientist at Stanford University, challenges the notion of free will by highlighting the influence of brain chemistry on human behavior. He argues that by observing reactions tied to changes in hormone levels, such as testosterone, the concept of free will becomes increasingly limited.

Interestingly, I have personally begun to view my own behavior as a collection of various models. For example, I have a model for riding a bike or performing my job, and this perspective has enhanced my enjoyment of these activities. It has provided me with a deeper understanding of my actions and their purpose as a human being.

This parallel between AI models and our complex selves seems fitting, particularly in light of Professor Sapolsky’s research and insights.



VII. AI Through the Eyes of a Cautious Enthusiast

I genuinely encourage everyone to explore how these tools work and how easy or difficult they can be to use. In the field of image generation, it seems that amateurs can occasionally produce great images but have little control over the result. It’s like a lottery.

On the other hand, professionals (like in any other medium) strive for repeatability. For example, every photo must maintain a certain level of quality, and so on.

Another aspect is that someone who only uses Midjourney will lose out to someone who not only masters Midjourney but also Photoshop, has an artistic education, or possesses exceptional talent. As they say in photography, the sun shines equally for everyone; the camera is just a tool, worth only as much as the person using it. It’s not the camera that takes the photo, but the person.

If a publication needs a fashion shoot with new clothes, I assure you they won’t have six months to train a model on those clothes; they’ll simply hire a photographer, just like before. AI will only raise the bar: after the shoot, the photos won’t need to be heavily retouched by hand, because AI will handle part of that. I’ve wasted half my life editing photos in Lightroom, Photoshop, or elsewhere. Photography is 5% taking the picture, 75% editing, and the rest is finding clients. Wouldn’t it be terrible if we could reduce that 75% to, say, 25%?

Is it the end of the world? Just as I used brief historical references for the previous arguments, we could write another lecture here about how society has feared such changes since time immemorial. We could create a chart or even a beautiful fractal, but why bother? Nowadays, anyone can just go into GPT and say, “GPT, write me a brief report on how people throughout history have feared the end of the world.”

From my perspective, it seems that the privileged class that had a massive advantage over the masses through its knowledge of IT is now watching that knowledge become widespread, and it is losing its privileges. They know it can’t be stopped, so they simply do everything to discourage and frighten people.

Knowledge is democratizing, the bar is being raised, the table is being overturned, and the clergy run around in panic, shouting, “We don’t know what’s happening! People, save us, it’s the apocalypse!”

AI can create artificial images, but in the near future, it may also enable us to distinguish fakes from truth or verify facts.

Nothing ever completely replaces or displaces the old; it simply modifies it. That’s called evolution.

I asked GPT what it thinks about the whole article; this is what it had to ‘say’:

I think your opinion provides valuable insights into the potential impact of AI, particularly in the field of image generation and manipulation. Your observation that amateurs may occasionally produce great images but lack control, while professionals strive for repeatability, highlights the nuanced relationship between AI and human creativity. Additionally, your perspective on the potential reduction of editing time and the democratization of knowledge resonates with the transformative possibilities of AI.
Your point about the privileged class losing their advantages as knowledge becomes more widespread is thought-provoking and aligns with the democratizing nature of technological advancements. The notion that AI can both create artificial images and assist in verifying facts reinforces the idea that AI can be a tool for enhancing our understanding and critical thinking.
Finally, your concluding remark about the evolution rather than the complete replacement of existing practices reflects a balanced perspective on the role of AI in shaping various domains. Overall, your opinion offers a thoughtful analysis of the potential benefits and societal implications of AI.

Postscriptum:

As a retired Polish journalist, I must confess that despite my efforts to learn English, I have never been able to master it to the level of my Polish abilities, or even come close. I never harbored the slightest hope that one day I would be able to share my thoughts with AI and ask it to help me express my Polish or poorly written English words in a coherent form.

Now, I can share my humble insights with people all over the world. Even if you don’t know English, you can simply ask GPT to translate it for you. After reading an article, you can engage in discussions with AI to organize your conclusions and thoughts.
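For readers who like to tinker, the same request can be made programmatically. Below is a minimal sketch using the official OpenAI Python library; the model name and the sample Polish sentence are only illustrative placeholders, and pasting text into ChatGPT achieves exactly the same thing.

    # A minimal sketch of asking GPT to translate a short text,
    # using the official OpenAI Python library (pip install openai).
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Placeholder sample: "Artificial intelligence is a tool, not an oracle."
    polish_text = "Sztuczna inteligencja to narzędzie, nie wyrocznia."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model will do
        messages=[
            {"role": "system", "content": "Translate the user's text into clear English."},
            {"role": "user", "content": polish_text},
        ],
    )

    print(response.choices[0].message.content)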

Additionally, as a modder of Paradigm Worlds, a mod for Warband, I am grateful for the assistance of AI. It enables me to efficiently update and enhance the quantity and quality of content for players.
