12 Comments
Janet Salmons:

Actually, pearls are my favorite. I have the real thing, inherited, and freshwater pearls too. I make jewelry and love mixing freshwater pearls with gemstones like lapis or turquoise. So I guess I clutch my pearls ;-)!

I'm still unhappy about books "licensed" without my permission (and without compensation). But the situation with AI has gotten more complicated. Do we want to patronize and add to the fortunes of those who are currently tearing down the US government, including research, libraries, and higher education? Who are literally conducting a digital book burning by erasing the histories and stories of women and of African, Hispanic, and Native Americans? I am not going to outsource my mind to these tech bros, whose values are becoming more apparent every day. I do not trust them with anything, certainly not my work.

I'd point you to a provocative article in The Atlantic by Kara Swisher.

"For tech leaders at this moment, the digital world they rule has become not enough. Leaders, in fact, is the wrong word to use now. Titans is more like it, as many have cozied up to Trump in order to dominate this world as we enter the next Cambrian explosion in technology, with the development of advanced AI." https://www.theatlantic.com/technology/archive/2025/03/the-elon-musk-way-move-fast-and-destroy-democracy/681937/

So, everyone has to make their own decisions about what they do with their one wild and precious life (as Mary Oliver reminds us). I think about the women in the not-so-distant past who were prevented from pursuing the life of the mind. I'm not surrendering mine.

See Guest Post: Finding Your Voice in a Ventriloquist’s World – AI and Writing

https://scholarlykitchen.sspnet.org/2025/01/28/guest-post-finding-your-voice-in-a-ventriloquists-world-ai-and-writing/

Jenn McClearen, PhD:

Janet, I always appreciate your thoughtful engagement—thank you! I really enjoyed the way you subverted my pearl-clutching metaphor here—well done!

Lately, I’ve been thinking a lot about my own participation in tech oligarchies and what that means, especially as many of the big players align with the new U.S. administration. I completely agree that it’s crucial to examine how we choose to engage (or not) with services like Google, Amazon, Meta, and, of course, generative AI.

While I don’t think you’re necessarily doing this, your points made me reflect on something else I often notice. The term "tech bros" tends to evoke figures like Musk, Zuckerberg, or Bezos, but I think it’s also important to avoid painting everyone in tech as solely driven by profit or lacking ethical considerations. My wife, for example, works in learning and development at a tech company (not one of those major players), and she’s currently involved in AI projects. Her company has dedicated AI ethics and sustainability teams actively working to address many of the concerns raised here. We need to be mindful that good work is happening everywhere to mitigate some of the dangers posed by the rapid development of these tools. It is, of course, a power struggle between doing good for society and chasing profit, but the struggle is happening.

I also appreciate your wisdom around deciding for ourselves how much participation we are ethically comfortable with, given the stakes. Sure, there are things around privacy, sustainability, and intellectual property that society must decide together, but there's also an important element of individual awareness and decision-making that has to happen.

Janet Salmons:

Jenn, good points re: “tech bros” who have gone to the dark side. My family is full of techies who are working for social justice - but they have no power to make policies (and no millions to give to politicians). I’ll use Kara Swisher’s “tech titans” to distinguish.

As an early adopter of online learning and online research, and an advocate for technology that allows us to connect with and learn from one another, I am distressed that those with the power are moving in the opposite direction, towards authoritarianism. The titans have recently agreed, for example, to use data they scrape from all of us to help identify people to deport.

I just can’t use a tool from those people that a) expects me to upload documents or b) does any “thinking” or writing for me. Sorry, no.

I think(!) students need to know the back story, the ethical implications, the risk of surveillance and the social control that ties in with the shortcuts they hope will help them complete an assignment quickly.

Of course your Substack work is so useful to people who really want to learn the skills so they can speak their own minds! Keep up the good work.

N. M. Scuri:

I'll be happier with AI when it doesn't include theft of copyrighted works and doesn't require that we burn the world down to run, but I guess I'm just clutching my pearls, yes?

Jenn McClearen, PhD:

I appreciate your comment, and I apologize if the pearl-clutching metaphor came across as dismissing the environment or copyright—that wasn’t my intention. I can see how it might have had that effect, though.

What I was really trying to get at is the specific concern around academic integrity—particularly how generative AI is often criticized as a tool for the lazy or disingenuous, while the use of a copyeditor seems to go unquestioned. I think that contrast is worth exploring in the ethical conversations.

N. M. Scuri:

Are you equating feedback on content and grammar with popping a few prompts into an IP scraping program, or does your point require more clarification?

kat:

I'm surprised you don't bring up another key (& ethically complex) component of the majority of generative text and large language model systems: copyrighted or nonvoluntary training data. OpenAI and its ilk draw from huge sources of text with murky intellectual property, consent, and data privacy implications for their inner workings (not just for users). It's not that someone might be using predictive text tools to check their grammar, but that generative text comes FROM somewhere, and it certainly wasn't Sam Altman's brain. There's been a huge uproar about LLMs trained on Wikipedia or Reddit posts, for instance, which draw from text that people didn't consent to contribute. Even worse are those (like Meta's LLM, Llama) that torrented (read: stole) books to train their models.

I think it pays to *be specific* about what we MEAN when we say "AI." Letting it live as this floating signifier (Lucy Suchman's term, here) in the murky term "AI" helps "AI" companies obfuscate what their tools actually DO. Predictive text, image generation, large language model text generation, machine learning -- they each do something different, and it is vital to specify. It's one thing to use something like autocorrect or argue that one is "only" using "AI" to organise a schedule. But it's entirely another to fuel the "AI" structures that want to scrape all the books and articles and the internet as a whole into a woodchipper that generates profit for someone else.

Jenn McClearen, PhD:

Hi Kat! Thanks so much for your thoughtful comments! I appreciate you taking the time to add these needed perspectives as a tech scholar. Yes, one of the major concerns for many knowledge workers and content creators is how LLMs were trained on their content and they aren't being compensated for it. I've added another bullet point to the web version of this newsletter post that adds this ethical concern.

Alix:

In addition to the critical points you mention and the theft of data conducted to train these models (and continued through agreements with academic publishers, who are once again more interested in money than in the creation of knowledge), I would like to bring up another critical point about the unethical nature of AI: it needs the work of many exploited people in the Global South to function, workers who are exposed to traumatising material specifically to train the models. Time did an exposé on it in 2023, so it's not new information. Me getting help with my grammar is honestly not worth the environmental and human cost; the ethical and moral issues are too large. I refuse to take a plane if a trip takes less than 30h by train, if I can help it, because of climate issues. I study social justice and want my work to help people, so I'm not going to use a tool that actively makes both of these things worse. This is once again an example of tools that use the energy and labour of some (always the same, may I add) to help others.

You discuss the fact that it's not required to disclose when human workers and helpers were involved in the same way that we have to disclose AI. I had never considered that, but honestly: 1) it should be required, because it does involve others, and they should be mentioned just like any other participants who have been integral to the research; 2) most of my gripes with AI are not about people using it for grammar or to help write grant applications and the like. They're about the human costs outside of the Global North, about the environmental destruction it's creating (from its training and use, from the frantic building of new data centres that diverts money away from necessary climate mitigation and adaptation measures, etc.), and about giving more money to tech bros who already decide too much of what we think and how we think. Let's not give them even more of a tool.

It's funny you mention that you use AI to summarise papers, because research has started to come out showing that AI does a terrible job at summarising. It does not actually summarise, because that requires thinking, which as we know AI does not and cannot do; what it does is reduce the number of words. That might still be useful, the main issues being 1) whether you're aware of that fact, and 2) that it is often wrong on some points and hallucinates, even when "summarising". The paper I was looking at (which I don't have the reference to) tested this "summarising" across several genAI tools on news articles (as opposed to academic articles), and as many as 40% of the summaries partially or fully misrepresented the articles, sometimes saying the complete opposite of what was in the article. Of course, not all AI is created equal, but by encouraging the use of genAI, there is also the big issue of letting something else decide for you what is important. Abstracts were created for a reason; that's a good enough summary for me, and hopefully one created by a human who did actually THINK about what was important.

I work with housing issues and the inaccessibility of the housing market, and I have had to explain to people that no, I wouldn't be using AirBnB when travelling, including for work, because that would be completely antithetical to my research and I could not morally do it. GenAI in this case goes in the same box. I have friends who do research in climate modelling and use AI models; I have absolutely no issue with those, as they're locally and specifically trained for their research purposes, but the kind of genAI that is available to the public does not fit in this category. (My department also uses genAI to create illustrations, and it is driving me up the wall, because that's another can of worms that should not be opened.)

I appreciate that at least these discussions are taking place rather than what happened after it was first released and there was widespread acceptance of genAI with no questions asked.

Jenn McClearen, PhD:

Hi Alix, Thank you so much for your thoughtful reply and your clear concerns about the human and environmental impact of generative AI. Your point that people must make decisions about where they can prevent further harm by boycotting or minimizing consumption--flying less, boycotting certain companies, and not participating in gen AI--is well taken. Fast fashion and mobile phones are others that come to mind for environmental and human rights abuses. But as physical objects, those are also easier to consume secondhand. Thanks for sharing this comment with PNP readers!

Karen Smiley:

You raise some great points about nuance in how AI is used and whether using AI can help to offset lower privilege. As several commenters have noted, many people are far more concerned about AI tools being trained on stolen content than they are about AI tool use per se. Other aspects of ethics, besides theft of content and environmental impact, are concerning as well. I wrote about them here & would love to hear your thoughts: https://open.substack.com/pub/karensmiley/p/top-5-things-to-know-about-ai-ethics?r=3ht54r

Jenn McClearen, PhD:

I love this post, Karen! Thank you so much for sharing it. I think you’ve already seen that I added it to my original post—your thoughtful approach to working through these issues in a reasoned, ethically aware way really stands out. I’ll definitely be bookmarking it for future reference! Thank you!
