Write With Impact Academy

A chat about ChatGPT with the Director of the Harvard Writing Center

Jane Rosenzweig ponders ChatGPT's impact on students' and workers' ability to write (and think).

Since the roll-out of ChatGPT in December—and the recent launch of GPT-4—I’ve been trying to sort out what this new technology means for writers like myself.

One writer who has also been thinking through this question is Jane Rosenzweig. Jane is the Director of the Writing Center at Harvard University, where she’s been teaching undergraduate and graduate students how to become better writers for nearly 23 years.

When ChatGPT was first released to the public in December, she shared her initial impressions on Writing Hacks, her Substack newsletter; published op-eds in the Boston Globe; and was interviewed on CBS Sunday Morning. So, to help me make sense of this phenomenon, I invited Jane for a conversation about ChatGPT on my podcast, Write With Impact.

As an educator, Jane is concerned about the impact this technology will have on students’ ability—and patience—to work through the process of learning how to write, a process she sees as critical to their ability to discover and articulate their own ideas on the subjects they are learning.

After exploring the impact of ChatGPT on the process of learning how to write, Jane shares a few practical writing tips drawing on her two decades of experience teaching thousands of students at Harvard, as well as her work coaching corporate clients.

She talks about cutting repetitive language, removing words that dilute the impact of your message, applying the 80-20 rule to using the passive voice, and eliminating what she calls “fake transitions”.

If, like me, you're a fan of podcasts and prefer to absorb thought-provoking conversations while taking a walk in the park or doing the dishes, listen to our conversation on Apple Podcasts.

If you would like visible proof that our conversation was not generated by AI, watch the complete conversation on my YouTube channel.

And be sure to check out Writing Hacks, Jane’s excellent Substack where she shares practical writing tips.

Here are a few excerpts from our conversation to give you a flavor of what we covered:

Writing as a process to discover what we think

"A useful way I've found to explain and think about my concern is the difference between process and product. So ChatGPT creates a written product. You say to it, write me a college paper, write me some real estate listings, write me an investment report—whatever it is that you're doing in your field—and out it comes, right?

It's a complete product. So that's one way of thinking about it. This machine can do our writing for us. But when you teach writing, you're not really in search of a product. You're in search of a thought process. For my students, I'm much less interested in what the product is. We can get a product from anywhere.

What I'm interested in is what are they interested in? What kinds of questions do they have and how does the process of writing in answer to a question help you figure out what you think? And that's not what GPT is designed to do, right? It's a product.

So I think that for educators, the concern—and it's a very reasonable and real concern—is that if what we're really trying to do is teach critical thinking and help our students become people who understand the world, know what they think, and know what they want to ask questions about, it's hard to imagine that this chatbot is the way to achieve that goal.

And yet it's there, it's easily accessible, and people are using it. So now we have to figure out how we coexist with this technology.

A piece of writing advice that I always give, and that people seem to find very helpful, is that when you are writing a draft of something, you always figure out what you think along the way. So it's helpful to look at that last paragraph, the conclusion, and say, alright, maybe this is actually my main point.

I've written myself to my main idea. So that's a process. And that happens to my students. They'll find the thing that they want to be their thesis in the conclusion of that first draft. But it also happens to all of us when we write at work. You may sit down to think through a memo or to think through some plan.

And as you're writing, you realize, oh no, actually, this is what I want to say, or this is what we need to focus on. So what happens when, instead of going through that process, you prompt the chatbot, "Write me a memo about X" or "Write me a paper about X"? Perhaps, in the optimistic framing of this, out comes the output and you realize it's not what you think.

And you go back and work through the same process you would have gone through without prompting the chatbot first, and you still figure out the thing you think, the thing that matters to you. But do you? Or do you take this output, this product that looks okay in certain ways, and that becomes your idea?

That becomes what you go with. And what are we losing in that process? In my experiments with the chatbot, it can write a paper about a variety of things. Right now it's not the greatest paper, right? But it looks like a paper. Am I going to hand that in, or am I going to do something with that, revise that in some way, instead of ever figuring out what my idea would have been if I had started without the chatbot?

Idealistic as I am as a writing teacher, there's no point in doing any of this if you're not figuring out what you actually think. I tell this to my students all the time: this is not a little hoop to jump through to show me that you can put words on a page. I know you can put words on a page. We are doing this because I think, and I hope you think, it's important that you figure out what you believe, what matters to you in the world about this idea, what side of this issue makes sense to you.

When you go through the process of analyzing evidence, you're actually figuring out whether that evidence makes sense to you, what that evidence points towards. What happens when you just put a passage into the chatbot and say, analyze this evidence for me?

What happens to what you actually think?"

Losing the human connection

"I think that the idea that more and more of what humans do at work will be outsourced to machines is alarming for a number of reasons. And I guess I want us to be asking the question, just because a machine can do something, does that mean a machine should do the thing?

It's sort of like this, Glenn: I like to read your posts on LinkedIn. If you told me your posts were entirely generated by ChatGPT, I don't think I would want to read them anymore. That's not interesting to me. So we could ask, why is that? Well, it's because I want to know what Glenn is thinking about right now.

And we know a lot of content is already generated this way (not yours). But if Glenn types into the machine, "Write a post about how content creators are losing work because of ChatGPT"—I can do that myself, right? I don't need to read you doing that.

So I think we're going to need to find the place where the human connection matters. And as a writer and a writing teacher, I've always kind of resisted the terminology of "content creation" as opposed to writing, because it sounds sort of like, who wants to read content, and who wants to engage with content?

I want to engage with ideas. And ideas come from a human. And I feel like this may be just another step towards this idea that it doesn't matter who created the product, the product is just there. But then it always has mattered, right?

So if you ask people, do you want to read a novel that's been entirely generated by an AI? Some people might say, sure, that's entertaining. Others will say no, because part of their experience of reading is knowing that an author worked hard to create this world—that it came from another human being's mind.

And it's possible that we will adjust to a world in which that human being first generates a draft with AI and then edits it. Right now, that seems depressing to me. I think about this for a living, so I may be an outlier in finding that depressing, but I think we should all stop and think about whether that's the world we actually want to live in.

Now, does that translate to real estate listings? Am I as upset if I find out that the description of a house I might want to buy was written by ChatGPT?

Probably not, so we have to figure out where that line is exactly."

QUESTION: How do you think ChatGPT will change the way we write—and the way we learn to write? Please leave a comment and let's have a conversation.