Let’s Talk About GenAI, Ethics, and the Pearl-Clutching
Or, why I'm still not afraid to use AI in my writing, though I tread carefully.
When the topic of generative AI (GenAI) comes up in academic circles, it’s often met with concern—and understandably so. Academia rests on values like originality, deep thinking, and intellectual integrity, and new technologies can feel disruptive to those foundations. On top of that, there are ethical concerns about how GenAI was developed and is impacting society more broadly.
But I’ve noticed that conversations about GenAI in academic writing sometimes carry an implicit assumption: that using GenAI is essentially the same as outsourcing one’s work to a ghostwriter. I have seen judgmental pearl-clutching that assumes using these tools means taking shortcuts, bypassing critical thinking, or ignoring the craft of writing.
And, while I completely agree that allowing GenAI to do all of the work is a terrible idea (not to mention wildly ineffective for anything remotely complex), I believe we need a much more nuanced discussion about the role of generative AI in academic work.1
Generative AI in Academia: What’s Actually at Stake?
I have previously written about the importance of setting your own ethical boundaries when it comes to GenAI:
It is also worth noting that academia differs from other industries in a few ways. Academic writing emphasizes originality and authorship in ways that other professions do not. When a celebrity hires a ghostwriter or an unattributed coauthor for their memoir, no one bats an eye. But in academia? That is a serious ethical breach. Ghostwriting, whether done by a human or an AI, simply does not fit with the way knowledge is produced in our field or attached to individual reputations.
What strikes me about these discussions is the assumption that any AI-assisted writing is the same as hiring a ghostwriter—a lazy shortcut that demonstrates a lack of integrity. The issue is not just GenAI's limitations (which, believe me, are numerous); it is the notion that using GenAI at all implies that you don’t value originality, creativity, or deep thinking.
And that’s where I think the conversation needs more nuance.
AI as an Assistant, Not a Replacement
From the start, I treated GenAI as a very eager, slightly unreliable assistant—the kind that can summarize an article in seconds while also confidently making things up out of thin air. I use GenAI tools like QuillBot for copyediting and proofreading, ChatGPT for making feedback to students clearer, and NotebookLM for summarizing academic articles. These tools don’t replace my work—just like hiring a human editor or research assistant wouldn’t. Instead, they make certain parts of writing faster, smoother, and sometimes even enjoyable.
The ethical question I ask myself is whether the task is one my profession already considers acceptable to pay or trade another human to do. If the answer is yes, then using GenAI in that role feels aligned with the ways scholars have always sought support in writing. Hiring a developmental editor, copyeditor, or proofreader? Perfectly acceptable. Having a research assistant read the literature to create annotated bibliographies? Also common practice. So why would using GenAI for these same tasks suggest a lack of integrity?
The Ethics of Access
But here's something we need to talk about more: access. Not every scholar has the resources to hire human help for their research and writing. Yet no one blinks when an endowed professor at a well-funded research institution assembles an entire team to assist with their research writing, paid for out of professorship funds.
This double standard becomes even more apparent when we look at emerging policies around GenAI use. Many academic presses now require authors to disclose the use of GenAI tools in their writing, yet they don’t demand the same transparency when human editors or research assistants are involved. How is that fair? And how do we address biased reviewers who may dismiss work simply because it acknowledges GenAI assistance, while accepting work that never discloses human help?
Most importantly, if GenAI can help level the playing field by making certain types of support more accessible, shouldn’t that be part of the ethical conversation? Scholars at elite institutions often have funding to pay for professional editing or graduate assistants who help with literature reviews and manuscript preparation. Meanwhile, independent researchers, adjunct faculty, and scholars in underfunded programs are expected to manage these tasks alone.
If we question the ethics of using GenAI for these roles, we should also be questioning the existing inequities in who gets access to human research support in the first place. The ability to delegate parts of the writing and research process has always been a privilege afforded to a select few, and GenAI is merely making that privilege more visible.
The Broader Ethical Picture
GenAI also raises ethical concerns that extend well beyond academia. My enthusiasm for GenAI's potential hasn’t waned since I first began using it around two years ago, but my understanding of its broader implications has grown.
For starters, GenAI's environmental impact is significant: these tools require massive amounts of energy and water to operate, a cost we cannot ignore.
Then there is the issue of disinformation—GenAI can create misleading or outright false content on a scale never seen before. That is a serious concern for research, journalism, and the entire information ecosystem.
GenAI also raises concerns about data privacy. Many tools retain user inputs to train their models, meaning unpublished research or confidential feedback may not remain private. Most GenAI platforms do not yet offer clear protections, making it essential for scholars to understand how their data is stored and used.
Another major concern with GenAI is its use of copyrighted material. Developers have trained LLMs like ChatGPT and Gemini on vast amounts of internet content, including copyrighted material, without obtaining consent. This raises significant issues regarding intellectual property rights and fair compensation for content creators. (Thanks to the commenters below for advocating for this inclusion.)
For a more robust discussion of GenAI ethics, see ’s excellent piece here:
These aren’t questions with easy answers, and they deserve ongoing, thoughtful conversation. Pandora's box has been opened, and we must now collaborate to make GenAI more accessible, equitable, secure, and sustainable, just like any other technology.
What’s Next?
I am still working through all of this myself, but one thing is clear: AI is more than just a shiny new tool; it is a technology that is fundamentally changing how we write, research, and communicate. That means we must be both curious and cautious about how we use it, and open-minded about how other students and scholars use it.
Over the next month, I’ll be sharing a series of newsletters exploring different ways scholars can use generative AI in their work—including where it helps, where it fails, and where we need to tread carefully.
In April, I’m also hosting a workshop on using GenAI to streamline administrative writing, where we’ll look at practical, ethical ways to make tedious writing tasks less painful (because let’s be honest, no one dreams of spending hours writing meeting summaries).
I’d love to hear your thoughts—are you experimenting with GenAI? Steering well clear of it because of your own ethical compass? Secretly intrigued but afraid to admit it at faculty meetings? Let’s talk.
1. I’ve rewritten my introduction from the original because I inadvertently offended some of my dear readers who oppose the use of generative AI because of the ethical and material concerns raised later in the piece. That was not my intention, so I rewrote it to better reflect my intent.
Actually, pearls are my favorite. I have the real thing, inherited, and freshwater pearls too. I make jewelry and love mixing freshwater pearls with gemstones like lapis or turquoise. So I guess I clutch my pearls ;-)!
I'm still unhappy about books "licensed" without my permission (and without compensation). But the situation with AI has gotten more complicated. Do we want to patronize, and add to the fortunes of, those who are currently tearing down the US government, including research, libraries, and higher education? Who are literally conducting a digital book burning by erasing the histories and stories of women and of African, Hispanic, and Native Americans? I am not going to outsource my mind to these tech bros, whose values are becoming more apparent every day. I do not trust them with anything, certainly not my work.
I'd point you to a provocative article in The Atlantic by Kara Swisher.
"For tech leaders at this moment, the digital world they rule has become not enough. Leaders, in fact, is the wrong word to use now. Titans is more like it, as many have cozied up to Trump in order to dominate this world as we enter the next Cambrian explosion in technology, with the development of advanced AI." https://www.theatlantic.com/technology/archive/2025/03/the-elon-musk-way-move-fast-and-destroy-democracy/681937/
So, everyone has to make their own decisions about what they do with their one wild and precious life (as Mary Oliver reminds us). I think about the women in the not-so-distant past who were prevented from pursuing the life of the mind. I'm not surrendering mine.
See Guest Post: Finding Your Voice in a Ventriloquist’s World – AI and Writing
https://scholarlykitchen.sspnet.org/2025/01/28/guest-post-finding-your-voice-in-a-ventriloquists-world-ai-and-writing/
I'll be happier with AI when it doesn't include theft of copyrighted works and doesn't require that we burn the world down to run, but I guess I'm just clutching my pearls, yes?