12 Comments
Apr 10, 2023 · Liked by Jenn McClearen

I can relate to your childhood. I have had similar experiences with learning how to read and write. My mom also has been the biggest support to me in helping me learn and navigate my challenges. Thank you for sharing this!

author

Thank you so much, Mansa! I'm glad it resonates.


I feel the exact same way!

As a student with both ADHD and Dyslexia who loves to write, this hit home. I had an ethical dilemma when I began writing on Substack: Was using AI to edit my original work or help me plan the sequencing of articles acceptable? So, I put it to an Instagram poll, and the results were amazing! 82% of respondents answered it was ethical to use AI in writing.

Tools that make communication more seamless and fluid should not be discouraged in schools but taught, allowing students to communicate their own thoughts effectively.

author

Thanks for your comment, Ivan! I'm so glad the post resonated. It's always nice to meet a kindred spirit.

I agree that we should be training students how to use it well in writing. I saw someone make this argument on Facebook, contrasting "cheating" with AI, meaning letting it do all the work to mediocre results, with using it as a tool. She said, "If you can cheat with ChatGPT, then it will soon take your job and you won't be employable. On the other hand, if you use ChatGPT to accelerate your own thinking and learning, you can prepare yourself for jobs of the future."

Apr 12, 2023 · Liked by Jenn McClearen

Hi Jenn, thanks for your thoughtful contribution to the AI conversation! I share your optimism, and also feel that many of us in academia (esp TT folks, I recognize that adjuncts and PhD students are in a diff position) can be well-placed to interrogate and integrate AI tools ethically -- what we model for our students will help them learn how to use it, and I tend to believe that if we refuse to address it or simply work to prevent students from using it, we will be fighting a losing fight. Like you, I share concerns about AI ethics and what AI will mean for some jobs being eliminated, but I think that we have to understand how the technology can be used in our own work to participate in those conversations. There is no rolling it back, there is only finding ways to regulate and integrate it thoughtfully.

author

This sentence really sums it up well: "There is no rolling it back, there is only finding ways to regulate and integrate it thoughtfully." Yes! I prefer to proceed optimistically and curiously while maintaining a critical skepticism about what all of this will mean for work, for learning, and beyond. I think I was seeing mostly skepticism in academic circles, which is why I wanted to write from a different perspective. Thanks so much for sharing!


Yes! Totally agreed -- so much skepticism in academic circles and frankly a naivety about the realities of this technology. From what I understand, AI is going to be on par with the iPhone, the laptop. If we don’t engage with it, we’ll get run over. Thanks again for your take!!

Apr 10, 2023 · Liked by Jenn McClearen

Loved this, Jenn! As a fairly new teacher just starting out in writing and researching, I was hesitant about using AI because it felt like cheating, but the way you've described using it has given me the validation I needed to give it a fair try in a similar way!

author

Glad to hear it, Rose! Let me know what you think as you try some things out. As I said, my thoughts around this will constantly be evolving, so I love hearing about both insights and concerns as people engage with these tools.


I always enjoy getting notifications from Substack, as I have found an amazing community of writers, and I always learn something new. I'm especially interested in the latest about AI, as things are moving at breakneck speed with this technology. Like you, Jenn, my background is in education, and as someone who has had a lifelong struggle with learning disabilities, tools like typewriters, word processors, and spell-checking programs are an absolute godsend. With speech-to-text technology, things get even better. Not just for me, but for people with issues that make using a pen or keyboard difficult, if not impossible. AI is being used to save lives in medical diagnostics, and to improve the lives of everyone in a thousand different ways. I am by no means a Luddite.

However, I do not share your enthusiasm for ChatGPT as it is currently being used. I understand your use of ChatGPT as an assistant, like a spell checker and other tools. They are not the same, however. In the case of Word, or Grammarly, you are the creator (and "Two bee oar knot too Bea" will still sail through grammar checks). This new generation is a different thing altogether. Yes, it can help with ideas (and if it can help me write an elevator pitch for my next book, I'd be thrilled), but while you are willing to use it as an assistant, many are not.

As you know, BuzzFeed has already published AI-generated articles, cutting their freelance staff. The San Francisco Ballet used Midjourney-generated art to promote this past winter's production of The Nutcracker. TikTok and YouTube are full of videos on how to use AI to write a novel, generate a cover, and upload it to Amazon. Sci-fi publisher Clarkesworld had to stop accepting short stories after it was flooded with substandard submissions generated by spambots using ChatGPT.

You state that you, as the writer, are ultimately responsible for what is produced under your name. The problem is when others are less careful and have no problem tossing out derivative content. I discussed my concern with another Substack writer, who said, "The teachers can use bluebooks." As an educator, you know as well as I do that the current system of colleges and universities relies heavily on the overworked adjunct who doesn't have the extra time to closely review every paper for signs of chatbot "writing assistance." The problem is, writing is supposed to help the writer learn critical thinking skills. Using a chatbot and calling it done because the student sees writing as "busywork" will not help them acquire those skills.

ChatGPT and all that come after (and there will be more) could very well be a great thing. I just have concerns.

https://nancyscuri.substack.com/p/i-think-you-know-what-the-problem

https://www.wired.com/story/sci-fi-story-submissions-generative-ai-problem/

https://www.reuters.com/technology/chatgpt-launches-boom-ai-written-e-books-amazon-2023-02-21/

https://futurism.com/buzzfeed-publishing-articles-by-ai

https://news.artnet.com/art-world/san-francisco-ballet-catches-heat-for-promoting-its-nutcracker-performances-with-a-i-generated-art-2236291

https://slate.com/technology/2023/02/chat-gpt-cheating-college-ai-detection.html

author

Hi there! Thanks so much for adding to the conversation. I really appreciate your points here because you expanded on several areas I was mulling over, and you have also led me in new directions.

The reasons you have to be skeptical are all well-founded, and I agree that the breakneck speed at which these technologies are being integrated into numerous industries will have a negative impact on workers now performing these jobs manually. I'm not surprised that BuzzFeed has let some freelance staff go, because it is very easy to write a list-style article via some of these AI tools. Technological innovation combined with a workforce that is increasingly expendable, i.e. freelance or gig workers, will mean those workers bear the brunt of these changes, and I am concerned with how expendable workers are in many industries. I would like to see companies build in strategies to redistribute workers into new roles if tech makes their jobs obsolete, but unfortunately, that's unlikely for contract staff, and it remains an area of concern. This is not a process that AI caused, but AI is likely to accelerate it because it will be cheaper than humans.

Teachers are going to have to decide whether we are going to work with these tools or against them, and I agree that most of us, especially the precariously employed, don't have the time or energy to ensure no one ever uses a questionable AI tool for our classes. I think that's why I prefer to think of this ethically: we can grapple with these issues alongside our students rather than taking on more work to prevent misuse. Right now, teachers don't have good strategies to identify students who pay other humans to write papers for them, and there's a whole internet business where people buy and sell college papers. ChatGPT is going to accelerate this phenomenon, and we are going to have to convince our students that the critical thinking and creative skills they are learning in college are more valuable than cheating the system.

In the end, my strategy is going to be digging into these tools, using them, teaching how to use them responsibly, and continuing to have great conversations like this one on how we shall proceed as a society in this brave new world!


Agreed. Pandora has cracked open the box, and we're here now. The key word is "ethical." As you say, all we can do is shine a light and teach people to do things the right way. Thank you!
