Playing Pointless Games: How Trying to Belittle AI Paradoxically Empowers It
In today’s technologically advanced landscape, artificial intelligence (AI) has woven itself into the fabric of daily life, performing tasks from organizing schedules to generating art. Yet, there is a curious trend where individuals engage in seemingly futile attempts to belittle AI. Ironically, these actions can end up enhancing AI’s perceived power, especially when people treat AI as more than just a sophisticated tool. A common human inclination to anthropomorphize — attributing human-like qualities to non-human entities — plays a significant role in this paradox.
The Futile Games We Play
People often engage in what can be described as ‘pointless games’ to undermine AI. Whether it’s by attempting to trick AI systems, mocking their limitations, or setting them up to fail, these endeavours are not without consequence. The core irony lies in the fact that these actions can unintentionally reinforce AI’s influence. By focusing so intensely on AI, we subconsciously assign it a level of importance and capability that it might not inherently possess.
Psychological Projection
This paradox is deeply rooted in psychological projection. Users often interact with AI systems by projecting emotions and intentions onto them, much as they would with another human being. For instance, when an AI fails to recognize context or produces an unexpected result, the frustration or amusement that follows often mirrors human interactions. This psychological interplay inadvertently boosts the perceived power and intelligence of AI, framing it as an entity that warrants such intense emotional reactions.
It's important to remember that these AI systems are just that: systems. They are trained to predict the next word, summarize text, and classify it, among a host of other foundational abilities. Those abilities let them produce human-sounding responses, but they are, of course, not human, nor even alive.
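To make that concrete, here is a minimal sketch of next-word prediction using Hugging Face's transformers library and the small GPT-2 model; both are chosen here purely for illustration and are not specific to anything discussed in this article. The model simply continues a prompt with statistically likely tokens, nothing more.

```python
# Illustrative sketch: next-word prediction with an off-the-shelf language model.
# Requires the Hugging Face "transformers" package (pip install transformers).
from transformers import pipeline

# Load a small, publicly available model (GPT-2) as a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# Ask the model to continue a prompt by predicting the most likely next tokens.
prompt = "A flamingo standing on one leg is"
result = generator(prompt, max_new_tokens=15, num_return_sequences=1)

print(result[0]["generated_text"])
```

However fluent the continuation looks, it is pattern completion over training data, not understanding.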
The Flamingo Head Example
One illustrative example of this phenomenon is the famous “flamingo head” incident from an art contest. Participants were tasked with creating artwork using AI, and one entry featured a flamingo head in an unexpectedly artistic and creative composition. The human-like qualities ascribed to the flamingo head (elegance, fluid grace, and aesthetic value) were subtly celebrated precisely because of its AI-generated nature. As an aside, I still have no idea how the flamingo won the contest; there were arguably stronger artworks in the running.
This example underscores the anthropomorphization of AI, where a simple digital creation was assigned attributes typically reserved for human creativity and artistic expression. By attributing these qualities to the AI-generated piece, participants and observers alike inadvertently enhanced the AI’s perceived capability and creative prowess.
Would the solution be to have diverse contests where participants can use whatever tools they would like?
Conclusion
Ultimately, the key to a balanced relationship with AI lies in perceiving it accurately. AI is a powerful tool designed to perform specific tasks with efficiency and precision, but it lacks consciousness, emotions, and intent. Treating AI as an omnipotent entity only serves to cloud our understanding and inadvertently empower it in our minds.
As we move forward in an increasingly AI-integrated world, it’s important to recognize and appreciate AI for what it truly is: a remarkable piece of technology, but still just a tool. Approaching AI interactions with this balanced perspective ensures we harness its benefits effectively without unintentionally granting it undue influence. By maintaining this clarity, we can foster a more rational and productive coexistence with the intelligent systems that continue to evolve and redefine our world.