They recently penned an opinion article that first appeared in IEEE Spectrum on 22 February 2023.
OpenAI launched ChatGPT in November 2022, one of the most significant real-world applications of artificial intelligence to date. The tool allows its users to quickly generate sophisticated, uniquely constructed textual content.
Such content can therefore likely evade detection by traditional plagiarism tools, which creates a concern for universities about how to assess students' learning and skill development.
Many types of assessments used to evaluate students require them to demonstrate they have understood new materials by investigating the content and collating their learning in the form of a written essay or report. The role of an academic assessor has been to evaluate individual students’ submissions to gauge the breadth and depth of their understanding of the topic.
The problem is that if students use ChatGPT to write an essay or report, the generated output provides little, if any, indication of the quality of their learning. The AI tool offers well-written, well-researched content without requiring the student to search out detailed sources.
Unfortunately, this type of problem is not new. For many years, students have been able to copy text from essay banks. Plagiarism-detection tools such as iThenticate and Turnitin have deterred the use of such repositories. Although these tools have been successful, they rely on sophisticated pattern matching against existing text, which makes them ill-equipped to detect the original language constructs produced by advanced AI.
Another way students get around writing original content is through essay mills, which provide writing services for a fee. It could be argued that the open nature of ChatGPT has leveled the playing field between those who can and can’t afford to pay for the services.
A different assessment for STEM students
In the fields of science, technology, engineering, and mathematics, a far broader assessment strategy than essays is used. To meet the learning outcomes, STEM students must demonstrate skills such as programming. Because ChatGPT can solve many mathematical problems and generate and debug code, however, the computing field cannot simply ignore the AI evolution. And ChatGPT’s capabilities are certain to improve over time.
The immediate reaction from academia is likely to be a retreat to traditional assessment strategies dominated by closed-book, exam-style assessments.
Before adopting that obvious quick fix, though, it is important to reflect on the reasons why a broad assessment strategy was adopted in the first place.
Engineering and computing students must tackle large, complex problems and adopt collaborative strategies. Those skill sets are not easily or accurately tested by an individual sitting in an examination hall for three hours. To some extent, essays, and perhaps even more controversially doctoral theses, are already poorly aligned to the needs of many employers.
Issues universities need to consider
ChatGPT and similar technologies will continue to shape the future of what we call the World of Work (WoW). As employers increasingly adopt advanced AI, the academic world will need to amend its teaching and assessment practices. ChatGPT and other AI tools are already being adopted in industry as a way to automate mundane tasks. The big question is: Should educators ban such developments or embrace them?
Here are some issues that universities might want to consider.
- Awareness is the best line of defense. Educate students and staff about the strengths and weaknesses of AI-generated content. For example, when does reliance on localized or peer-reviewed content matter, and when is a quick-and-dirty content review sufficient?
- Develop assessment and other educational practices for the WoW that deliberately embrace or reject the use of AI-generated content. Authentic assessment tasks aligned to a local context or problem, for example, foster in students a culture of exploration and curiosity. Project-based assessment tasks are good examples: they let students explore ChatGPT and similar tools, but ultimately require them to demonstrate the learning and skills on their own. Assessment criteria also will need to recognize sophisticated use of AI while giving greater weight to elements that demonstrate higher-order skills such as evaluation and synthesis.
- Reassure staff that new tools are emerging to detect the use of AI. Princeton student Edward Tian has already created one such tool: GPTZero, which flags likely machine-generated text by measuring statistical properties such as perplexity (how predictable the text is to a language model) and burstiness (how much that predictability varies across sentences).
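The intuition behind such detectors can be sketched in a few lines. The function below is a deliberately simplified, hypothetical stand-in for the burstiness signal: it uses sentence-length variance as a crude proxy, since human writing tends to mix short and long sentences while model output is often more uniform. It is not GPTZero's actual algorithm, only an illustration of the kind of statistic these tools compute.

```python
import statistics


def burstiness_score(text: str) -> float:
    """Toy burstiness proxy: variance of sentence lengths (in words).

    Real detectors use model-based perplexity per sentence; here we
    substitute sentence length purely for illustration.
    """
    # Crude sentence splitting on terminal punctuation.
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    return statistics.variance(lengths)


uniform = ("The cat sat on the mat. The dog ran in the yard. "
           "The bird flew to the tree.")
varied = ("Stop. The committee, after months of deliberation and several "
          "contentious votes, finally approved the proposal. Why?")

# The uniform text has identical sentence lengths, so lower "burstiness".
print(burstiness_score(uniform) < burstiness_score(varied))  # True
```

A real detector would replace sentence length with per-sentence perplexity from a language model, but the thresholding idea is the same: uniform statistics suggest machine generation, high variation suggests a human author.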
- Adapt to embrace opportunities brought on by innovation. Consider using ChatGPT and similar tools to advance pedagogy and curricula. Everyone finds unnecessary repetition and menial tasks tiresome. Use the tools to stretch and enable more innovation by students.
- Reinforce principles of professional standards and ethics. Advance a culture of academic integrity, acknowledging that new tools will continue to emerge. If an ethical culture is ingrained in the group ethos, students and scholars will use new AI tools appropriately.
ChatGPT and similar tools should be seen as accelerating necessary change. We recommend an approach where teaching and learning adapt to recognize the opportunities posed by new technologies and continue to foster a culture of exploration and curiosity. Ultimately, our priority is to provide graduates ready to face the ever-changing WoW.
*The views expressed in this article are those of the authors and do not necessarily reflect those of the University of Johannesburg.