
What is the future of creative expression with the advancement of Artificial Intelligence?



Photo by XU

Not long ago, I found myself second-guessing my decision to publish my manuscript. After learning where AI large language models were headed, I became convinced that publishing a book would be futile. I predicted that, in the near future, these models would be capable of writing entire books indistinguishable from those written by humans. I questioned, then, why I should even bother to publish my work.


I felt even more disheartened as I prepared to write a poem, inspired by an audiobook I had listened to about metaphors and their essential role in how we perceive the world and extract meaning from it. One of the most popular metaphors in literature, dating back to ancient times, is the imagery of climbing to the summit of a mountain.


It symbolizes the drive to reach a higher place, closer to the heavens, and the challenges one must overcome along the way. While this metaphor isn’t new, I believed I had come up with a fresh perspective on it.


I was reflecting on the concept of the “switchback,” a hiking term for trails that zig-zag up the side of a mountain. This design reduces the steepness of the trail, making the climb easier than going straight up. I wondered if there were already established metaphors related to switchbacks, so I searched online for “metaphors for switchback trails.” Fortunately, I found only technical information about switchbacks in hiking contexts. That encouraged me to write a poem that could offer some originality.


While experimenting with the potential of large language models, I decided to use ChatGPT, which, at the time of this article, was the leader in what is known as “artificial narrow intelligence (ANI).” This refers to machines that imitate human behavior by specializing in a specific area to solve a problem. I first asked the model to “describe how a mountain trail of switchbacks can be used as a metaphor for life.” It provided a list of six metaphors relating switchbacks to a person’s journey through life’s challenges, which I found impressive. Next, I wanted to put its creative abilities to the test, so I commanded it to “write a short poem based on the metaphors.” Here is what it produced:


The Switchback Path

Life’s a trail of twists and bends,

A winding road that never ends.

The summit calls, yet stays unseen,

Behind the turns, so sharp and keen.

Each step I take, I climb, I rise,

Though paths loop back beneath the skies.

Not straight nor swift, but step by step,

Through valleys deep, where doubts are kept.

The road feels long, my breath runs thin,

Yet resting shows how far I’ve been.

Each pause, a moment to reflect,

To see the heights I once thought wrecked.

And though at times I seem to stray,

Each backward bend still finds a way.

For every twist reveals anew,

A broader sight, a clearer view.

No path is lost, no step in vain,

Through sunlit peaks and hills of rain.

For with each turn, though slow the climb,

The mountaintop is reached in time.


It created a relatively simple poem that was still impressive, considering it was based on one brief command I gave ChatGPT. The AI was able to generate metaphors using very limited input from me. I decided to challenge it further with a new command: “Using the same metaphors, write a poem that has deeper hidden meanings.” Here is what it produced:


The Winding Ascent

The path begins where shadows sleep,

A silent trail, a promise steep.

I step ahead, though doubt still clings,

A weight of whispers, clipped-winged things.

The road does not obey my will,

It turns away, retreats, stands still.

I walk, yet find my back is turned,

Toward echoes of the past I’ve burned.

Each bend, a lesson half untold,

A secret laced in dust and gold.

I climb, I fall, I rise again,

The mountain knows, but won’t explain.

And still, the summit hides from view,

Like dreams that shift when chased anew.

But from each height, I glimpse below,

The path once dark now faintly glows.

Perhaps the way was never lost,

Nor failure weighed the climb with cost.

For even when the trail rewinds,

It leads me where I’m meant to find.


It was impressive yet discouraging: a poem like that would take me a fair amount of time to write, while ChatGPT produced it in a matter of seconds.
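Out of curiosity about how that exchange might look outside the chat window, here is a minimal sketch of the same two-step prompting using the OpenAI Python client. It is purely illustrative: I worked in the ordinary ChatGPT interface, and the model name ("gpt-4o-mini") and exact prompt handling below are assumptions, not a record of what actually ran.

```python
# Minimal sketch of the two-step prompting described above.
# Assumptions: the `openai` package is installed, an API key is set in the
# OPENAI_API_KEY environment variable, and "gpt-4o-mini" stands in for
# whatever model the chat interface was using at the time.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: ask for the metaphors.
metaphor_prompt = ("Describe how a mountain trail of switchbacks "
                   "can be used as a metaphor for life.")
metaphors = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": metaphor_prompt}],
)
metaphor_text = metaphors.choices[0].message.content

# Step 2: feed the metaphors back and ask for a poem built on them.
poem = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": metaphor_prompt},
        {"role": "assistant", "content": metaphor_text},
        {"role": "user", "content": "Write a short poem based on the metaphors."},
    ],
)
print(poem.choices[0].message.content)
```

The point is less the code than the speed: a short script like this would return both the metaphors and the poem in seconds.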


In a moment of panic, knowing that poets might be overshadowed by AI — an outcome that wouldn’t impact many bank accounts but could deflate some egos — I turned to the online tools that countless teachers and professors rely on to keep their students honest: AI-content detectors. Academia is just one of many fields that feel threatened by the rise of AI. As large language models (LLMs) continue to advance at a rapid pace, detection platforms have also been evolving, though likely a few steps behind the LLMs.


I ran the poem through three different detectors: Grammarly, QuillBot, and Undetectable AI. Grammarly and Undetectable AI found no AI in the text I submitted. I felt a slight sense of relief when QuillBot identified it as 100% AI-generated.


My reaction to this technology is surprisingly mixed, especially since I have always prided myself on being an early adopter. I’m the person at a party who eagerly showcases the latest phone features, almost like an unpaid endorser of any new and interesting tech. So, why do I now feel threatened and hopeless about the future of artistic expression with the advancement of AI? This is the paradox of technology: it is created to help our species advance, but it also has the potential to threaten our existence.


I am protective of artistic expression because it should remain authentically human. It enhances our understanding of the world and our role within it — and should be defined by humans, not by artificial means. When human expression is presented as an art form, particularly in an abstract manner, it serves as a narrative that drives the evolution of human consciousness, and we should never relinquish that power entirely to AI.


You might think this is hyperbole, but let’s look at where AI is rapidly heading. The next advancement is artificial general intelligence (AGI), where machines continuously learn and reach intelligence levels comparable to humans. Eventually, we’ll see the emergence of artificial superintelligence (ASI), where machines surpass human intelligence in all domains. This progress is happening alongside advancements in quantum computing, which is making significant strides at the moment and is poised to be the next revolution in computing. Where does that put us in the competence hierarchy, which has supported our existence since the beginning of our species?


One potential problem to consider with the advancement of this technology is that it removes much, if not all, of the sacrifice that was once necessary for generating creative works. I am not expressing this view as a conservative attempt to slow down societal progress or cling to old ways; rather, I am concerned about losing what we call passion. The term derives etymologically from Late Latin *passionem* (nominative *passio*), meaning “suffering” or “enduring,” which comes from the past participle stem of Latin *pati*, meaning “to endure, undergo, or experience.”


Through the struggles of the creative process, we often find meaning and purpose in life; this experience is subsequently interpreted by those who observe the art, contributing to the advance in collective consciousness. As Friedrich Nietzsche famously said, “He who has a why to live for can bear almost any how.” I would also add that he can show others how to live, particularly through the abstract learning found in artistic expression.


The other issue at hand is misrepresentation, along with the now-clichéd concepts of disinformation and misinformation. These are all very real concerns when AI is used for malevolent purposes. Since humans heavily rely on their senses — especially vision, which is our most significant sensory asset — we cannot constantly question the authenticity of what we see. This programming to accept visual information as true poses a serious threat when AI falls into the wrong hands, particularly those motivated by fraud and deception.


Deepfakes, which involve AI-generated video and audio that can depict real people engaging in socially unacceptable behavior, have the potential to ruin careers, destroy marriages, and lead to social ostracism.


Many deepfakes exist that, while not necessarily harmful, feature well-known public figures, particularly social media influencers, in videos and/or audio clips that they did not create themselves but were instead generated by AI. These situations can pose a threat to the reputation of those being portrayed without their consent.


What legal options are available at this point? As AI technology is still relatively new, governments are working to catch up; however, they are trying to keep pace with something that can evolve at a speed beyond human capability and that may, in the near future, be faster and smarter than any human who has ever lived.


There are advocates for slowing the advancement of AI, one of whom is Dr. Jordan B. Peterson, a Canadian psychologist, professor, and well-known social influencer. Regarding deepfakes, he has stated, “Theft of digital identity looks to me very much like kidnapping.” Dr. Peterson has many deepfakes of himself circulating on social media, which makes him a victim in that sense. However, he possesses a remarkable ability to foresee future implications, and we should heed his warning about the dangers of digital identity theft and deepfakes.


Elon Musk has expressed concerns about the rapid progress of AI. He shared a story about a conversation he had with Larry Page, co-founder of Google. According to Musk, Page did not believe there was any reason to worry about the direction AI was taking. During their discussion, Page referred to Musk as a “speciesist” because of his heightened concerns about the potential dangers of AI. Musk later acknowledged that Page was correct in calling him a “speciesist,” as he is genuinely worried about the future of humanity.


Governments are often slow to respond to emerging events, meaning that regulations may only be implemented after significant damage has already occurred. The key issue at hand is not that people are using technology to enhance or accelerate their creative processes; rather, it is the lack of transparency in how that technology is utilized.


Disclosure is essential to uphold the integrity of genuine human effort and creativity. Without it, we risk letting genuine work lose its significance and, at worst, allowing those who have contributed little to overshadow those who have truly sacrificed for their work, capturing the majority of public attention and influence in the process.


AI should not be used to deceive the receiver; its outputs should always be disclosed as artificial, non-human. AI-generated content should carry a universal signature that is easily detectable, even as the technology advances. That is how we uphold integrity; anything else should be considered deception. Disclosure would allow songwriters, authors, musicians, and the like to continue collectively creating what is authentically human, rather than being ruled and directed by an AI demigod.
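No such universal signature exists today, and provenance standards are still emerging. Purely to make the idea concrete, here is a toy sketch of what a disclosure tag could look like: the generator attaches a signed label to its output, and anyone holding the key can verify that the label has not been altered. The field names and the shared-key scheme are assumptions invented for illustration; they are not an existing standard.

```python
# Toy illustration of a machine-verifiable "this text was AI-generated" tag.
# This is NOT an existing standard; the field names and the shared HMAC key
# are assumptions made purely to show the shape of the idea.
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-held-by-generator-and-verifier"  # illustrative only


def attach_disclosure(text: str, generator: str) -> dict:
    """Package AI-generated text with a disclosure label and a signature."""
    label = {"generator": generator, "ai_generated": True, "text": text}
    payload = json.dumps(label, sort_keys=True).encode()
    label["signature"] = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return label


def verify_disclosure(label: dict) -> bool:
    """Check that the disclosure label has not been stripped or altered."""
    claimed = label.get("signature", "")
    payload = json.dumps(
        {k: v for k, v in label.items() if k != "signature"}, sort_keys=True
    ).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)


tagged = attach_disclosure("Life's a trail of twists and bends...", "example-model")
print(verify_disclosure(tagged))  # True; any edit to the label breaks verification
```

Of course, a label like this can simply be stripped away, which is why a robust signature would have to be woven into the generated content itself, something watermarking research is still working toward. The sketch only shows how simple verification could be if such a standard existed.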


But here’s the problem: social media and internet corporations are among the largest investors in AI. Facebook (now Meta) owns Meta AI; X (formerly Twitter) owns Grok; and Microsoft is a major investor in OpenAI, the maker of ChatGPT. They all face a potential conflict of interest, since they would be responsible for regulating the disclosure of AI-generated content while simultaneously trying to grow their AI businesses.


To summarize, the technology paradox lies in the fact that we create technology to simplify our lives and reduce the burden of mundane tasks, all in the pursuit of happiness. However, this very technology has accelerated the pace of our lives, leading us to do more — not less.


We are witnessing the negative consequences in societies that depend heavily on technology. The symptoms are obvious: mental health issues are on the rise, and physical health is declining. If we want to maintain our dignity and uniqueness as human beings, we must exercise restraint when it comes to the advancement of technology — not just with AI, but in all areas. Our superior role on this planet comes with the highest responsibility, a duty that may require sacrifice but ultimately gives us a sense of purpose.


The question has yet to be answered, and the answer is on us: with the acceleration of AI, what will it mean for the future of our artistic expression, and more importantly, what will it mean for the future of humanity?






