Artificial Intelligence. Unless you've been living under a rock, you've heard of it. From the intrusive "assistants" on practically every app and website, to the chatbots, filters, and AI "learning" or "enhancement" systems hidden deep in your settings, feeding off your posts, images, and words. Like a parasitic infection, or perhaps a fungus, generative AI has fed from, killed, and used the carcass of creativity to persist. But is it all bad? Perhaps this fungus, too, could be used for good, maybe in a soup? To find out, I first researched a topic closer to home-- Artificial Intelligence in education. I interviewed five teachers with a variety of backgrounds and levels of knowledge on the subject. The one thing they all had in common was a connection to the humanities; their specialties were mostly in the English and History fields. From there, I researched the extent of the issues with AI, particularly where most people seem to be unaware.
I would like to make two clarifications regarding the rest of the article. I will use "generative AI" to mean a machine learning program that creates, or generates, text or images. I recognize the obvious beneficial uses of more specialized AI programs, such as finding cancerous cells, predicting early-onset dementia, or accelerating scientific research. I will also use "Artificial Intelligence," or "AI," even though true intelligence is yet to be found in these machines (and this is more than just an insult). These programs work by consuming media from across the internet and spitting out a sort of "average" or "best guess" gleaned from their sources; almost all are actually "machine learning," not true artificial intelligence. When referencing generative AI, I am typically referring to large-scale models owned by companies like Meta, OpenAI, Google, and Microsoft.
I wanted to establish a baseline for those about to be interviewed. Would they be more or less aware of the controversies? Would they have endeavored to learn beyond what they are told, or researched the topic privately? Would they know of the variety of controversies, or just one? Here's what I learned. Teachers generally feel that they are somewhat, but not very, informed (though this could be humility talking). Many, if not all, admit to using AI for menial tasks like worksheets or lesson plans, as well as designing posters or shirts. Some assured me that they look these materials over before giving them to students. Two teachers knew of the environmental impact of artificial intelligence, with others suggesting they had "heard about it" but not researched it. When asked about AI use by students, however, all of the teachers explained that they had rules against it. Some allowed it in specific circumstances, letting AI act "as a tool, but not a replacement" for students' work. The protective measures teachers take against prohibited AI work from students vary, but they mostly rely on their years of education and experience to "eyeball it," or take a reliable guess as to the authenticity of the work. Ms. Lynam, an English teacher here at MHS, says that "the best way to get students not to do something wrong is to let them know you're well versed in it." She also says, aptly, that "the solution they're looking for in AI is not a long-term solution." Mrs. Troxell says that a large issue with AI in education is the same as with plagiarism: "It's presenting something as yours when it's not."
Most of the teachers interviewed agreed that AI damages students' ability to learn, think, and understand, though none seemed to think this extended to themselves. Mrs. Troxell adds, "...quick answers damage our ability for good thinking-- and creating good work doesn't come in a minute." Mr. Cordts has a valid criticism: "Knowledge takes a lot of work; you can't have AI do everything. I want a real doctor or a real lawyer when I want advice, right?" He also says, "I think it'll make people lazy. When I worked… I had to read a lot, work a lot; people aren't reading the way they used to." The best solution, which I heard from a few different teachers, is to contain assignments in an environment in which it's impossible to use AI. Mrs. Gonyo gives "most [of her] assignments and tests on paper in front of a student." She also asks students to complete an essay-planning outline so she can better assess their real capabilities. Ms. Lynam says, "It matters that people learn to say and write, and trust what they say and write."
When asked about the largest problem with generative AI today, Mr. Ludemann suggests, "It can be damaging to people's creativity, to their work ethic, and it can be damaging to their general knowledge." Dr. Blessinger says, "The human intention behind art is very different from AI's intention. The aim is to please people, not to express an opinion."
There are two major categories of issues with generative AI-- practical and creative. On the practical side there are major problems, none of which seems to be addressed in any noticeable way. The first is the aforementioned water usage. To the best of our knowledge, ChatGPT handles about one billion queries per day. One fifteenth of a teaspoon (OpenAI head Sam Altman's estimate of a single query's requirement) is roughly 0.000085 gallons. Multiply that by a billion, and ChatGPT alone is using about 85,000 gallons of water daily. That's equivalent to the water usage of about 1,000 American households (Readers Club at Medium).
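That multiplication can be sanity-checked in a few lines of Python. The inputs are the estimates cited above (one billion queries per day, one fifteenth of a teaspoon per query), not measurements of my own:

```python
# Back-of-the-envelope check of the water figures cited above.
# Assumed inputs (from the article's sources): ~1 billion ChatGPT
# queries per day, ~1/15 teaspoon of water per query (Altman's estimate).

TEASPOONS_PER_GALLON = 768                # 128 fl oz per gallon * 6 tsp per fl oz
gallons_per_query = (1 / 15) / TEASPOONS_PER_GALLON   # ~0.0000868 gallons
queries_per_day = 1_000_000_000

daily_gallons = gallons_per_query * queries_per_day
print(f"{daily_gallons:,.0f} gallons per day")
```

The result comes out slightly above the rounded 85,000-gallon figure (one fifteenth of a teaspoon is closer to 0.000087 gallons than 0.000085), but the order of magnitude holds either way.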
This number doesn't even factor in the hundreds of other AIs that leech water from their environments. One might think such a figure impossible, since most large-scale servers recycle the water they use by trapping steam and cooling it back into water. Why wouldn't AI companies do the same thing? The answer: lack of regulation, and more money. Cindy Gordon at Forbes states, "AI server cooling consumes significant water, with data centers using cooling towers and air mechanisms to dissipate heat, causing up to 9 liters of water to evaporate per kWh of energy used. The U.S. relies on water-intensive thermo-electric plants for electricity, indirectly increasing data centers' water footprint, with an average of 43.8L/kWh withdrawn for power generation." If the lack of humanity somehow doesn't concern the general public, water consumption that will only grow should. "Already AI's projected water usage could hit 6.6 billion m³ by 2027," says Gordon.
The next issue is possibly the most infamous: theft of intellectual (or physical, in the case of published photos, art, and literature) property. Generative AI, being at its core artificial and synthetic, is unable to truly create, imagine, or draw inspiration. This means that any work it generates is simply a compilation of other artists' (real artists') work. Only a few years after its creation, by 2025, AI had already begun recycling its own output-- a sort of Frankenstein's monster made of Frankenstein's monsters. One argument against the theft claims is that, just like a human artist, AI simply combines and copies the artists that came before it. David Carson, JSK journalism fellow at Stanford University, explains, "training use is also not 'transformative' because its purpose is to enable the creation of works that compete with the copied works in the same markets — a purpose that, when pursued by a for-profit company like Meta, also makes the use undeniably 'commercial.'" The Authors Guild understands perfectly the damaging effects this technology will inevitably have not just on authors, but on all art created in our future, especially when it is treated so flippantly by almost all US adults.
“Generative technologies built illegally on vast amounts of copyrighted works without licenses—without giving authors any compensation or control over the use of their works—are used to cheaply and easily produce works that compete with and displace human-authored books, journalism, and other works. [...] the market dilution caused by AI-generated works will ultimately result in a shrinking of the profession as fewer human authors will be able to sustain a living practicing their craft and inevitably shut out important, diverse voices. This is not just a problem for authors and their ability to make a living from their work—of particular concern to the Authors Guild—but a problem for all. Our literature and arts, the creativity which Americans excel at and we export around the world, will decline. Unless we can limit AI’s use of books and journalism to permissioned use that gives writers compensation and, most importantly, control over how AI models can be used, we will find ourselves in a world of homogenized mashups of what has come before, and we will lose the written works that incite the public discourse fundamental to democratic culture."
David Carson compares a Pulitzer Prize-winning photo by Robert Cohen (top) to a sourceless AI-"generated" image (bottom)
Many AI users seem to think that their use of these systems doesn't matter-- if they don't claim the work as their own, or simply use it for "inspiration," what's the harm? The harm is clear and has been since these systems were created. AI "intelligence" is theft, plain and simple. Using it encourages this theft and a flippant disregard for the environment. Also from David Carson, in "Theft is not fair use": "Dozens of lawsuits have been filed by news publishers, the entertainment industry, authors, photographers and other creatives who objected to tech companies pickpocketing their copyrights under the guise of fair use. [...] AI's copyright liability does not end with its use of scraped training data. Generative AI is also liable for providing copyright-infringing outputs in response to the prompts given by its users. A lawsuit filed by The New York Times cites examples of problematic ChatGPT outputs that copy large parts of text from Times articles." One last practical issue adds insult to injury: these massive AI servers and platforms cost more than they are worth. Edward Zitron of Where's Your Ed At? explains, "Large Language Models are too expensive, to the point that anybody funding an 'AI startup' is effectively sending that money to Anthropic or OpenAI, who then immediately send that money to Amazon, Google or Microsoft, who are yet to show that they make any profit on selling it."
The creative, ethical, or long-lasting damages of AI on art and humanity have been addressed by those interviewed at MHS, but they extend beyond damaging youth's ability to learn. From the Authors Guild: "Isn't this just another disruptive technology that does what humans can do, only more efficiently? Using AI to produce literary and artistic works differs from other uses of AI in one stark aspect. Human communities are bound together by culture—literature, art, music, and other forms of expression. [...] We need to ensure that human creators are compensated, not just for the sake of the creators, but so our books and arts continue to reflect both our real and imagined experiences."
Additionally, some telling statistics and developments take the gross indecency of generative AI to a new low. For instance, Eline Van der Velden, founder of an AI "acting agency," says "the AI talent studio has set its sights on creating digital movie stars with Tilly Norwood [an artificially generated "actor"] as its first prospect." Speaking on a panel at the Zurich Summit, Van der Velden said that studios are quietly embracing AI: "...By May, people were like, 'We need to do something with you guys.' When we first launched Tilly, people were like, 'What's that?', and now we're going to be announcing which agency is going to be representing her in the next few months," said Van der Velden. If an AI actor isn't depressing enough to convince anyone to take action, I think humanity ought to re-examine its values. Real actors, real talent, and real love for the craft are being left without pay and without jobs because companies would rather pay no one at all than make something worth watching. On top of this, AI will only worsen a global literacy crisis. From the National Literacy Institute: "54% of U.S. adults read below the equivalent of a sixth-grade level, and 64% of our country's fourth graders do not read proficiently." And if one utterly lacks all moral bases except for peer pressure, a study reported by Ars Technica found that "...these fears were justified. When evaluating descriptions of employees, participants consistently rated those receiving AI help as lazier, less competent, less diligent, less independent, and less self-assured than those receiving similar help from non-AI sources or no help at all" (emphasis added). Not only is generative AI damaging literacy by producing (typically inaccurate) piles of information at the drop of a hat with no effort required from the human user, it is also eliminating and graverobbing the "fingerprints of humanity" from all beloved art forms.
Authorsguild.org says, "The Authors Guild sees this as an existential battle for books and authorship. We cannot let big tech's cynical vision define the value of books, works of art, and human culture. This is as much a fight for accountability as it is for defending quintessential human values."
The excuses need to stop. Generative AI takes, and takes, and takes. What it gives is inaccurate, pieced together, stolen fragments from those doing real work for society. Every time one uses it for an outline, a layout, a paper, a solution, an idea-- it takes more. Stop supporting this blunt knife because it is easy, and start taking the difficult stand against generative AI. If not for humanity, for the environment. If not for the environment, for protected property. If not for that, or any of the other countless reasons, do it because you really hate the way it can’t get the fingers right.
Image Sources:
https://jskfellows.stanford.edu/theft-is-not-fair-use-474e11f0d063
https://www.history.com/articles/prehistoric-cave-art-children
Other Sources:
https://www.thenationalliteracyinstitute.com/2024-2025-literacy-statistics
https://arstechnica.com/ai/2025/05/ai-use-damages-professional-reputation-study-suggests/
https://medium.com/readers-club/chatgpt-water-usage-1a1167244a5a
https://jskfellows.stanford.edu/theft-is-not-fair-use-474e11f0d063
https://authorsguild.org/advocacy/artificial-intelligence/
https://jsk.stanford.edu/news/theft-not-fair-use
https://www.wheresyoured.at/why-everybody-is-losing-money-on-ai/