
How to Hate AI

By Christopher Meerdo


Chris Meerdo facing forward wearing glasses and a black T-shirt. He has brown hair and a short beard.
Christopher Meerdo, M.F.A., assistant professor, Studio Art: New Media Art and Photography
One of my favorite quotes about art comes from the late Nam June Paik, the “godfather” of video art. His aphorism goes: “I use technology to hate it more properly.”
 
Nothing is quite as prescient as this quip, a historical remnant from the era of subverting the conventional televisual for performance, installation, and the readymade. Paik’s practice railed against the arrival of the home TV, an emergent technology that endlessly reproduced a hegemonic space excluding folks like himself: an immigrant to the U.S. from Korea. He strapped magnets to TVs, disassembled them, and installed them in perpetual, screen-burning states of staring at figures of the Buddha.
 
I like to start my course, New Media Art Topics: Art and AI, now approaching its third year, with this story of Nam June Paik. Although he was active in the 1960s, the lessons for those of us interfacing with new technology remain relevant. I implore my students (and you, the reader), through the wisdom of Paik, not to categorically reject the emerging medium of our moment, artificial intelligence models, but to use it to learn how to hate it more properly.
 
Large magnet atop an old television.
Nam June Paik, Magnet TV, 1965. Whitney Museum of American Art, New York.
In the pedagogy I have developed for the course, we sit with the uncomfortable space of generative AI models that can replicate the precision of hand-drawn and animated character design, a gateway through which many of our students enter the larger discourse of contemporary art. If we hate it only broadly, how can we possibly hate AI more precisely?
 
In my course, students learn that all AI systems are simply models: files that can be downloaded and tweaked to one’s liking. I advocate for this “local” approach to working with models over paywalled, opaque, subscription-based websites. Much like Nam June opening the back of the TV, or John Cage climbing into the piano for his prepared piano pieces, or Sonia Sheridan strategically pulling apart Xerox machines as part of her “Generative Systems” rationale, we are tasked with the same responsibility toward the medium of generative AI. Today, students must learn how to train, fine-tune, and manipulate models locally, using command-line interfaces and node-based coding processes. This is how we climb inside the machine, to use it not as intended, to create poetry and art.
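To make “local” concrete, here is a minimal sketch of downloading and running an open image model on one’s own machine, assuming the Hugging Face diffusers library and the openly licensed Stable Diffusion 2.1 weights; the model ID, prompt, and output file name are illustrative choices, not the course’s prescribed workflow:

    # A minimal sketch of running a generative model locally: the weights
    # are ordinary files on disk that can be inspected, fine-tuned, or
    # pulled apart. Assumes: pip install torch diffusers transformers
    import torch
    from diffusers import StableDiffusionPipeline

    # Downloads the open model weights once; afterward, this runs offline.
    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1",
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")  # or "mps" on Apple Silicon (drop float16 on CPU)

    image = pipe("a television staring at a figure of the Buddha").images[0]
    image.save("output.png")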
 
Alongside the deep-dive, under-the-hood technical skill-building involved in working with these models, we spend equal time reading, discussing, and writing about the ethical implications of this burgeoning new media form. You may have noticed the general existential lament creatives have voiced toward AI through court filings and online outrage. It is imperative to address these concerns from the outset, through an art-historical and art-theoretical lens. I’d like to share two points that often give students a moment of pause in our conversations concerning plagiarism and environmental harm.
 
1. Generative AI models are trained on billions of images. But what does this mean? This incomprehensibly large pool of images becomes what’s known as a dataset, composed of both images and language. For example, both a photo of an apple and the word apple help describe the concept to the system. Images and words are translated into numbers. The model learns statistical relationships across all of those numbers and can approximate new images from the patterns it has learned. Most importantly, it is generally not possible to summon any original training image back out of the model. The final model file is surprisingly small, a few gigabytes against the hundreds of terabytes of images it was trained on, and no longer contains the original training data, only mathematical impressions. An AI model file is similar to our own brains: a series of pathways and relationships that do not contain the original, only figments, mirages, and specters of reality.
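To see what “translated into numbers” looks like in practice, here is a minimal sketch, assuming the openly available CLIP model (the same family of text-image encoders used to steer Stable Diffusion) via Hugging Face’s transformers library; the file name apple.jpg stands in for any local photo:

    # A minimal sketch: the word "apple" and a photo of an apple both
    # become vectors of numbers, and the model measures how closely they
    # align. Assumes: pip install torch transformers pillow
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("apple.jpg")
    inputs = processor(text=["an apple", "a television"], images=image,
                       return_tensors="pt", padding=True)

    with torch.no_grad():
        outputs = model(**inputs)

    # Each caption and the image are now just 512 numbers apiece.
    print(outputs.text_embeds.shape)   # torch.Size([2, 512])
    print(outputs.image_embeds.shape)  # torch.Size([1, 512])

    # Cosine similarity: whose numbers sit closest to the image's numbers?
    print(torch.cosine_similarity(outputs.image_embeds, outputs.text_embeds))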
 
2. Training and using generative AI models requires electricity (kWh) to run computers and, at times, water to cool them. The resulting atmospheric impact (CO₂e) depends on the type of power station a data center is connected to. The impact of using AI is comparable to that of many other human activities: commuting to work, taking a flight, eating a hamburger, watching short-form videos, or gaming. As of this writing, all data centers worldwide account for 1–2% of global electricity consumption. These data centers run the entire internet (websites, cloud computing, and software), of which AI is still only a small portion (less than one-fifth). This means that at most one-fifth of that 1–2%, roughly 0.2–0.4% of global electricity consumption, is attributable to AI. For comparison:

• An average four-ounce beef hamburger emits 9.73 kg CO₂e.
Generating one Stable Diffusion image ≈ 0.3 g CO₂e.
→ One hamburger = roughly 32,400 images.

• An average U.S. golf course uses ≈ 21.5 million gallons of water per year.
→ At a widely cited estimate of ≈ 0.000085 gallons per prompt, that is equivalent to roughly 253 billion ChatGPT prompts per year.
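The arithmetic behind those two equivalences can be checked in a few lines of Python; the 0.3 g per image and 0.000085 gallons per prompt figures are the per-unit estimates implied by the comparisons above, not measurements of any particular model or data center:

    # Back-of-the-envelope check of the two comparisons above.
    HAMBURGER_KG_CO2E = 9.73            # one four-ounce beef hamburger
    IMAGE_G_CO2E = 0.3                  # one Stable Diffusion image (estimate)
    images_per_burger = HAMBURGER_KG_CO2E * 1000 / IMAGE_G_CO2E
    print(round(images_per_burger))     # 32433, i.e., roughly 32,400 images

    GOLF_GALLONS_PER_YEAR = 21_500_000  # average U.S. golf course
    GALLONS_PER_PROMPT = 0.000085       # widely cited per-prompt estimate
    prompts_per_course = GOLF_GALLONS_PER_YEAR / GALLONS_PER_PROMPT
    print(f"{prompts_per_course:,.0f}") # ~253 billion prompts per year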
 
Helping students understand the impact of AI is an essential component of university education today. It is urgent that we provide in-depth technical knowledge of AI systems to prepare our students for a job market increasingly saturated with these workflows. 
 
Abstract art in blue and gray.
Artwork by B.F.A. student Cynthia Clyde, Studio Art: New Media Art, from the New Media Art AI course.
We can remain critical of AI systems while considering alternative economic models for their development and deployment. For example:
  1. Open, democratic technical infrastructures rather than corporate black boxes: community-run models, transparent datasets, and public research that anyone can build on and participate in; cooperative AI tools that augment labor rather than displacing it; and artists, educators, and cultural workers shaping how systems function, rather than having that shape dictated by those outside the creative professions.
  2. AI used for solidarity, not surveillance: models designed to support social movements, environmental justice, translation, accessibility, and community archiving.
  3. And lastly, artists, poets, and musicians, in the spirit of the Dadaists, must learn these systems deeply so they can question, poison, undermine, glitch, and reimagine how they might serve communities rather than perpetuate systems of inequity. 
The nuance of critiquing AI comes from understanding its economic and structural implications, not from evaluating the surface aesthetics of what it produces. We must keep Paik’s words close: let’s learn about technology to criticize it more adequately and to help develop the world we want to live in together.
 
This essay was completed without the assistance of large language models.
 
Editor's Note: Email your proposal to contribute an article to CVAD News and Views, cvad.Information@unt.edu.