Shifting Tides with AI in Higher Education

  Reading time 9 minutes

For those who are involved in higher education in any way, there is one blaring question that has been circulating feverishly for the past few months: “How can I be sure if my students are using generative AI (ChatGPT, Bard, Bing, etc.) to write their assignments or not?” Fret not, I have the answer that we’ve all been searching for: you can’t. In fact, and perhaps more distressing for some, it’s actually safer to assume that they are using generative AI to help write and ideate their assignments at this point. Does that mean the learning has left the picture? Or is it time to rethink how we assess student learning?

a block of text sailing to the right and morphing into a boat

Before I get into a long, drawn-out (this is a pun, you’ll see why later) metaphor about higher education’s relationship to artificial intelligence (AI), there are a few things I’d like to raise as points of interest: 

  1. This isn’t a new problem.
  2. It isn’t actually even a problem.
  3. There is a lot to be learned from this moment in time, for all of us.

Now that we’ve set sail on this journey together, I want to present the idea of the Ship of Theseus (pictured above). The Ship of Theseus thought experiment posits that over time, small pieces of a ship are replaced, one by one, until all of the original parts of the ship have been replaced. Once every piece has been replaced, is it the same ship? How about once more than 50% of the original pieces have been replaced? I haven’t waded too deep into that thought experiment, but it did make me think about another ship: the Ship of ChatGPTseus (pictured below).

a boat constructed out of ascii text

Original Writing and AI-Generated Writing

If a student enters an instructor’s essay prompt, verbatim, into a generative AI tool (for ease, we’re going to say ChatGPT) and gets back an essay that they do not alter at all, which they then turn in, is that original writing? I think this one is pretty clear: probably not. However, how realistic is that? As an AI enthusiast who has been playing with chatbots since before it was cool (shoutout to Google DialogFlow), I don’t think it’s very realistic. Of course, it’s possible, but once you’ve had a chance to play around with ChatGPT and its cousins, you begin to recognize that the essay prompt would need to be extraordinarily basic for generative AI to ace it without further manipulation from the student.

This raises the question: what amount of AI use in student work is acceptable? This is where it gets (and stays) murky, but it’s going to be okay.

Rather than looking at this moment in technological advances as a problem or hurdle, or focusing on finding the perfect AI-detection tool, I propose that we look at it as an opportunity to change course and rethink our approach to learning and assessment. As showcased in Alex Joppie’s recent blog post, “A Brief History of Academic Integrity Panics about Disruptive Technology,” education has faced similar hurdles in the past and used them as moments of recalibration. This is our moment to look at our compass (either moral or figurative, you can choose) and start integrating prompt engineering into our teaching.

What is Prompt Engineering?

Simply put, prompt engineering is the skill of crafting and refining the text you feed a generative AI tool until its output meets your needs. For example, if you wanted ChatGPT to write your next term paper, you would need to feed in the instructor’s prompt and then keep refining your input until the response met your needs.

However, the most realistic approach to getting ChatGPT (or its cousins) to do your assignments is to compose alongside ChatGPT via prompt engineering. This means using generative AI to write chunks of text, to reword ideas, to check your argument for errors, to evaluate your tone, or to handle other parts of the writing process. This isn’t the process many of us are used to, but that doesn’t make it wrong, bad, or an abdication of learning. On the contrary, it’s an opportunity to develop a new skill that is likely going to redefine the workflow and process for many fields.
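The iterative workflow described above can be sketched in a few lines of Python. This is a toy illustration, not a real API call: generate() is a hypothetical stand-in for whichever AI tool a student uses, and the assignment text and feedback notes are invented examples. What matters is the loop of folding feedback back into the prompt.

```python
# A minimal sketch of composing alongside a generative AI tool via prompt
# refinement. generate() is a hypothetical stand-in for a real API call
# (e.g. to ChatGPT); the point here is the iterative workflow, not the model.

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a generative AI tool."""
    return f"Draft based on: {prompt}"

def refine(prompt: str, notes: list[str]) -> str:
    """Fold each round of feedback back into the prompt, the way a
    student might iterate until the output meets the assignment's needs."""
    for note in notes:
        prompt += f"\nRevision note: {note}"
    return prompt

# Invented example: an assignment prompt plus two rounds of refinement.
assignment = "Summarize the Ship of Theseus thought experiment in 300 words."
notes = ["use a more formal tone", "end with an open question"]

draft = generate(refine(assignment, notes))
print(draft)
```

Each pass through refine() mirrors a student reading the AI’s output, deciding what still falls short, and adding an instruction, which is exactly the skill worth teaching rather than policing.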

But Jes, I don’t want my students to use ChatGPT! Okay, I get it – but hear me out: talk to students about it before issuing a blanket “don’t use AI” policy. Talk to them about the assignments that you do and do not want them to use it for. Guide them through using it in a sustainable and learning-focused way. 

For example, in Visual Design for Games (ANI 344), I prohibit students from using it at all for assignments, but we use it during in-class activities. While this violates my general ethos around AI use and education, one of the core tenets of that class is learning the fundamentals of concept art production for games. If I let students use DALL-E or MidJourney, they won’t learn the fundamentals or know when the AI is or isn’t doing a good job. I talk to them about this on the first day of class so they understand that I’m not anti-AI, I’m pro them learning the fundamentals necessary to create good concept art.

Alternatively, in other classes I teach, I don’t mind its use for basically anything outside of reflections (which I also discuss with my students on day 1). If they want to use it to help refine some of their thoughts for a game analysis? Sure. They want to use it to help outline ideas for a game? Sounds awesome. If they want to use it to write a reflection about having played the tabletop masterpiece Castle Panic? Absolutely not, because that robs them of the opportunity to reflect on the triumphs of that game’s mechanics and get the benefits of metacognition.

Back to the Ship

a ship sitting on a pile of pages, words, and letters

Okay, but what about this ship metaphor that we’ve been patiently waiting to make sense of? Well, part of bringing in the Ship of Theseus is because I just like boats. Mostly, though, I thought it did a good job of representing that even if something is 100% AI-generated, it’s going to take a lot of work to get it to sail and look like the boat it’s supposed to be. I don’t know if it’s a new ship, or if it should or shouldn’t be an academic integrity violation if someone fully rebuilds the ship, but I know it’s a lot of work to rebuild a ship.

I made you wait all of this time to tell you why this long, drawn-out metaphor was a pun, and you’re going to hate the reason. While I didn’t use AI to write a single word of the text in this post, I did use it for all of the ship images, with varying levels of success. I’m no Van Gogh, but if I wanted to make my own “text turning into a ship” image for this post, I could have done that. I could have done an even better job than any of the images I ended up using, but it would have taken me a long time. So, instead, I let DALL-E and MidJourney create this long, drawn-out (see? Wasn’t it worth it?) visual metaphor for me. I engineered the prompts to get the images close enough to what I needed, but still sunk a lot of time into the prompt and iteration process.

One last boat pun: the ship has sailed on ignoring generative AI’s potential impact on the future of higher ed, so it’s time to sail with the tides and start working it into your course materials.
