Link to published article: https://www.eschoolnews.com/digital-learning/2023/10/13/chatgpt-is-our-new-learning-partner/
You may have heard about ChatGPT. There have been a few articles written on the subject (about 348,000 according to Google), and a significant percentage of them have been related to education. While I do not pretend to be an expert in large language models (because almost no one is), I do understand in general terms how they work and have spent quite a bit of time experimenting with ChatGPT as both a content creator and an evaluator. I also have some experience in the classroom: I have taught at a middle school, several high schools, a community college, and four-year universities. Whether you like it or not, we have a new partner in the classroom.
There are many primers on ChatGPT available, but I want to focus on the concerns that teachers and students have about using it in the classroom. Some schools (such as the entire NYC public school district) have attempted to ban it completely, while others, such as Yale, have taken the opposite approach. In my opinion, attempting to ban anything in a world of ubiquitous cell phones is a waste of time and effort. Students are ingenious, especially when it comes to getting around the rules. From a search of articles, both scholarly and in mainstream media, I believe the approach I am suggesting has not yet been proposed. I came upon it while thinking about the eternal pedagogical problem: how to grade group projects.
It is well documented and often repeated in teachers’ professional development that the right type of co-learning can deepen understanding and long-term knowledge gains. The key question is: what is the right type of co-learning? We can all remember group projects in school, often selected by the teacher, occasionally self-selected. Sometimes it worked really well and we had a great experience building something together. Sometimes we were the one who did all the work, or maybe the one who didn’t show up to any meetings and hoped our partners wouldn’t mind too much. How are teachers supposed to grade these efforts? Give everyone the same grade? Let students grade each other’s contributions? Try to guess how much time each student put in? There is no perfect solution.
And that, in a nutshell, is where we find ourselves with ChatGPT. From now on, every assignment must be graded explicitly as a group project with ChatGPT. Individual essays, science fair partner projects, group programming assignments, digital and physical art pieces: every single assignment now has a silent partner.
Of course, this does not mean that every student will use ChatGPT on every assignment. What it does mean is that we have to assume that they might. We have to transfer the responsibility of evaluating how much of the work is original from the teacher to the student, and we have to explicitly teach students how to take on that responsibility. ChatGPT might be the partner that did everything, the partner that didn’t show up, or somewhere in between. Despite some efforts, there will never be a tool that can evaluate how much of an assignment was influenced by AI, just as we can never tell exactly how much of a science fair project was done by parents. I will even double down: not only will there never be such a tool, there should not be such a tool.
This leads to the most important question: if there is no such tool, how can educators know how much help the students received? How do we evaluate their knowledge? The answer: we ask them. We need to give that responsibility back to the students. We are their partners in learning, not their masters, and it is our job to help them understand what they are learning and how, not to police and punish them for using tools we don’t fully understand or feel comfortable with.
It is time for educators to treat ChatGPT as an unreliable partner in all assignments and to provide a way for students to let us know how much help they received. I specify an unreliable partner because there is no way to know where ChatGPT got its information for any single response. It uses a mathematical model of likely words, not any kind of research method. It’s basically auto-complete on steroids. ChatGPT is like a classmate who has read extensively and is really confident about everything they say, but can’t remember exactly where they got their information from. It could be an academic publication or it could be a conspiracy website. And that is how we should treat it – a partner who sounds like they know what they are talking about, but still needs to be fact-checked.
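(For readers who want a concrete sense of what “auto-complete on steroids” means, here is a minimal sketch in Python. The phrases and probabilities below are invented purely for illustration; a real model like ChatGPT learns its probabilities over tens of thousands of tokens from enormous amounts of text. But the basic move is the same: it picks a likely next word, not a verified fact.)

```python
import random

# Toy illustration only: a hand-made table of "next word" probabilities.
# A real large language model learns these weights from vast text corpora
# and works over tokens rather than whole words, but the core idea is the same.
next_word_probs = {
    "the cat sat on the": {"mat": 0.7, "sofa": 0.2, "moon": 0.1},
    "according to a recent": {"study": 0.6, "article": 0.3, "rumor": 0.1},
}

def predict_next_word(prompt: str) -> str:
    """Pick a plausible next word for the prompt, weighted by probability."""
    probs = next_word_probs[prompt]
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

if __name__ == "__main__":
    # The model does not know whether the "study" it mentions is real;
    # it only knows that "study" is a likely word to come next.
    print(predict_next_word("according to a recent"))
```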
I would like to propose the following sample rubric, based on how partners might rate each other:
| Category | Student-Driven | Moderate ChatGPT Help | ChatGPT-Driven |
| --- | --- | --- | --- |
| Topic Selection and Thesis Formulation | Student independently selected the essay topic and formulated the thesis. ChatGPT input (if any) was limited to guidance, suggestions, and corrections. | ChatGPT assisted in refining the essay topic or thesis statement, but the initial idea was student-generated. | The essay topic and thesis statement were primarily or entirely suggested or formulated by ChatGPT. |
| Research and Data Collection | Student conducted all research and collected supporting evidence independently or with minimal ChatGPT consultation. | ChatGPT assisted in finding sources or evidence but did not do the research for the student. | ChatGPT conducted the majority or all of the research and data collection. |
| Analysis and Argumentation | Student independently analyzed data and evidence to build arguments supporting the thesis. ChatGPT may have provided guidance on analytical methods. | ChatGPT assisted in the analysis and argumentation but did not build the argument for the student. | ChatGPT primarily or completely analyzed the data and constructed the argument. |
| Writing and Structure | The essay’s structure, including the introduction, body paragraphs, and conclusion, was formulated by the student. ChatGPT involvement was limited to feedback and suggestions. | ChatGPT assisted in structuring the essay or improving its readability, but the content and organization were student-generated. | The essay was primarily or entirely structured and written by ChatGPT. |
| Final Draft and Editing | Student independently revised and edited the essay. ChatGPT may have provided minor suggestions for improvement. | Student utilized ChatGPT for more significant revisions and editing but maintained original thought and structure. | ChatGPT conducted the majority or all of the revisions and editing. |
This rubric could easily be modified for any assignment, from a programming challenge to a play. It requires no technical knowledge about ChatGPT. In fact, we could replace the word “ChatGPT” with “Parents”, “Wikipedia”, “Google Search”, “Tutor”, “TA”, or “Essay Factory”. It takes no more than a few seconds to fill out and to read. And it still allows the teacher to specify how much ChatGPT is allowed for any given assignment. Even if the rule is “none at all”, the rubric is still valid: the student still has to write down that they did not use the tool. That shifts the choice from “I’m just tricking the teacher to save some time” to “I am explicitly lying about what I did.”
The value of this rubric is that it places the responsibility for learning back on the student’s shoulders. This is not about making less work for the teacher or taking away their authority. This is about helping students develop their own moral compass. As the saying often attributed to C.S. Lewis goes, “Integrity is doing the right thing, even when no one is looking,” which is especially critical in the world of online learning. This rubric gives students the opportunity to show us what they did when we weren’t looking. It gives them a chance to have their integrity reinforced through practice. And if we treat this opportunity with understanding instead of punishment, it has the possibility of helping the students who need it the most.
You will notice that this rubric has no points attached. What if, instead of using it simply as another entry in the grade book, we took it as an opportunity for discussion with the student? If they are not afraid of getting a 0 for admitting that they used ChatGPT, it opens up a whole world of possible discussions.
“Why did you choose ‘The essay topic and thesis statement were primarily or entirely suggested or formulated by ChatGPT’ when it’s clear you wrote the rest of it yourself?”
“I didn’t really understand the question, but once I did I was fine.”
“Why did you choose ‘The essay was primarily or entirely structured and written by ChatGPT’?”
“Well, I work every day after school and then look after my siblings… I just didn’t have time.”
“Why did you choose ‘Student utilized ChatGPT for more significant revisions and editing but maintained original thought and structure’?”
“I thought my essay was really good and didn’t know what changes to make.”
If we allow students to self-evaluate without grade-based consequences, we can learn what supports they need as well as how we can improve our own curricula. It changes the conversation from which boxes a student needs to check to what we want students to get out of our lessons. We can even use it as a wonderful opportunity to teach students how to support themselves using tools like ChatGPT properly, without resorting to plagiarism. We could boost equity in our classrooms immensely if students could individualize the help they are getting at the time, place, and pace that they need it.
It is no use burying our heads in the sand and banning AI-based tools. The horses are out of the barn and are running wild in all directions. These tools are becoming more and more powerful and are being used in new ways every day. We have a real chance to help students understand their own responsibility, take charge of their own learning, and use this amazing technology to improve their self-efficacy, their knowledge, their outcomes, and ultimately their lives. Let’s use it!