AI Shaming and That Little Hint of Elitism
There are those who simply don’t like AI, and those who engage in AI shaming. There’s nothing wrong, it goes without saying, with personal preferences. It’s okay if you’re not a fan of AI-generated content, and it’s okay if you are. Personally, I don’t make a big deal of it when I come across AI-generated content online. I read it if it catches my attention and is informative, and skip it if it doesn’t, just as I do with human-written content.
What I do find disturbing, though, is the current trend of AI shaming. Any shaming, for the record, because I firmly believe it’s a form of violence, no matter what. Unfortunately, the Internet and the anonymity it grants make shaming particularly easy to resort to, and every corner of the web is increasingly filled with people spewing toxic comments left and right. We’ve long had body shaming, and now we have AI shaming as well. And no, AI shaming doesn’t make you superior. It might just show you care about status.
Nobody is safe from it: The threat of AI shaming
AI shaming can target AI users as well as non-AI users. Even the mere suspicion that you used ChatGPT is enough to set some people off, even if you didn’t. Openly admitting that you used AI for simple assistance doesn’t change much, as many people will raise an eyebrow anyway. Sometimes the backlash targets the content itself, sometimes the person behind it.
The reasons given for such extremely negative reactions also vary. It might be because the content is deemed low-quality, or because people perceive the very act of using AI tools as cheating or as spreading inauthentic content. But in broad terms, as put forward by Filipino researcher Louie Giray, who first identified it as a social phenomenon:
AI shaming refers to the practice of criticizing or looking down on individuals or organizations for using AI to generate content or perform tasks.
Recently, Cambridge researcher Advait Sarkar has reached the same conclusion in his study titled AI Could Have Written This: Birth of a Classist Slur in Knowledge Work. He argues:
AI shaming is a social phenomenon in which negative judgements are associated with the use of Artificial Intelligence (AI).
Shamers often resort to belittling phrases like “You’re not a real artist!” or “You write like ChatGPT” to induce shame in anyone using AI tools and pressure them to correct their course.
Metal band Pestilence had to change their openly AI-generated cover for the album Levels of Perception following their fans’ outrage. The backlash didn’t spare songwriter Kesha either, who had to replace the cover for her single Delusional after being accused of using AI-generated artwork. And the list goes on. The web positively oozes with comments implying that if you rely on AI tools to write, paint, or otherwise create, then you shouldn’t be doing any of those things at all. Just make room for the truly capable ones.
Except that not even Nobel winners are immune to the stigma. When Chinese Nobel laureate Mo Yan revealed he had used ChatGPT to write a speech praising fellow author Yu Hua, the crowd recoiled. What an affront! Scandalous!
Again, I see why someone might not like AI, but why the shaming? Why the stigma? You’re free to choose what music to listen to, what books to read, and so forth. If you don’t like a product, you just don’t buy it. No one is forcing you to. Why the need to psychologically attack people for resorting to AI? The thing is, AI shaming might have less to do with forcefully expressing personal preferences or defending human craftsmanship and much more to do with elitism.
I shame you, therefore I am
That there’s more to AI shaming than just concerns for authenticity, quality, and creativity is an argument brought up by both Giray and Sarkar, but the latter’s study digs deeper into it.
It suggests that AI shaming reflects an underlying class anxiety: for the first time in a long while (we have, in fact, been here before), middle-class knowledge workers, the privileged ones, fear a loss of status. They grew up with access to knowledge, and precisely because of it, they could draw a boundary between themselves and the less cultured. There’s nothing more annoying for the privileged, the shamers, than to see outsiders, the shamed, making their way into “their territory,” so to speak.
And it really hits a nerve when a technology can help with that. Advait Sarkar, the author of the Cambridge study, argues that this is a recurring pattern in history: when a new technology arrives that can potentially break down barriers to social mobility, there’s always a group of privileged individuals who vehemently oppose it. From the invention of the printing press to TV, to photography… all of these technologies have been met with disdain in their time, even though we now consider them valuable media of expression. Now it’s AI’s turn, and the fans of the status quo, driven by class solidarity, gather around to protect their field.
So more than anything else, the phrase “AI could have written this” is a classist slur, according to Sarkar, because it’s meant not merely to criticize others’ work but to keep out potential competitors, as traditional barriers to knowledge work are being smoothed away.
There’s the practical side of AI shaming — maintaining high barriers between classes means a lower chance of having to compete and, obviously, a higher chance of preserving income — but there’s also a psychological dimension.
People are just eager for something that sets them apart. It’s pretty much what sociologist Thorstein Veblen described in The Theory of the Leisure Class: human beings are socially antagonistic by nature; they want to distinguish themselves from others for the sake of it, not just because it brings material benefits. Lower classes emulate, higher classes seek distinction. That’s the core of elitism.
AI shamers seem to perceive AI as a tool capable of reducing the “us and them” logic, and that’s why it feels so threatening to them. Heaven forbid society stops seeing the shamers as the only skilled ones, because that would somewhat hurt their self-esteem. So they feel the need to engage in exclusionary behaviors to keep the barriers separating them from the rest high — it’s a social control mechanism.
Who really pays the price of AI shaming?
Does AI shaming make you any better? No. Does it educate people? No again. As Sarkar himself writes, struggling with his newfound, shame-culture-induced phobia of the word delve:
The proliferation of formal and informal shaming practices induces a cavalcade of societal harms, including psychological disorders, racial discrimination, and chilling effects.
Sometimes, when I ask ChatGPT to suggest synonyms for words I would otherwise end up overusing (as a non-native English speaker, my vocabulary isn’t as rich as a native speaker’s), the shame culture manages to worm its way into my mind as well. You’re cheating, it says. I freeze for a moment, and then mentally reply, Nonsense! It’s just like looking something up in a dictionary, only faster!
And then I think of all those people who start at a disadvantage. Medium writer Jim the AI Whisperer, for example, openly uses AI to compensate for his aphasia (you know when you have a word on the tip of your tongue? That’s it). And I challenge anyone to say that he’s not a real writer just because AI assists him in his creative process. That’s an ethical use of AI, worthy of respect. It would actually be unethical to deny him the possibility of expressing himself that way.
I also think of neurodivergent people with dyslexia and dysgraphia, or even those with physical limitations, like motor disabilities and blindness, and of how empowering AI tools can be for them. They can now put their thoughts into words, those thoughts that society had never been able, or willing, to read. AI (ethically used) can give a voice to those who’ve never been given the opportunity.
And this is where it gets particularly insidious, because beyond classism and elitism, AI shaming also manifests ableism, reinforcing, even if indirectly, the idea that only the abled and neurotypical have “authentic” skills — a dangerous move in a democratic society, and counterproductive in the long run for AI shamers themselves. Instead of looking at the big picture and recognizing that AI assistance could enrich all of us with fresh perspectives, they fixate on who’s better than whom.
Ultimately, it’s not just the shamed who pay the price of AI shaming. We all do.
Nobody gains anything from AI shaming. Not the shamers, who reveal more about themselves than about the shamed in the process — it’s their class anxiety talking — and certainly not ethical AI users, on whom AI shaming takes a real toll. From whatever angle you look at it, shaming both expresses and causes social strain.
We actually have much to lose from it: the voices of skilled people who have remained silent for too long, their diverse views never before explored, their ideas unheard. We might, in the end, miss the chance to democratize access to self-expression, to learn something from it ourselves, and to focus our attention on the real ethical dilemmas AI poses.
Don’t like the idea? That’s fine. Simply ignore AI content, whether confirmed or suspected. But please, stop shaming.
Read the full article here: https://ai.gopubby.com/ai-shaming-and-that-little-hint-of-elitism-27aed6510acb