As the university sector rolls out new anti-plagiarism technology and trains markers to spot misconduct, hundreds of students have allegedly been caught in a new wave of cheating via ChatGPT and other AI tools. In 2023, when the widespread impact of AI became clear, Sydney University revealed 330 cases of suspected plagiarism involving AI. However, amid widespread concerns that AI cheating is undetectable, most other universities are keeping quiet about the extent of the problem at their institutions.
Professor Phillip Dawson of Deakin University, an authority on cheating detection, has speculated that, given the limitations of current detection technologies, only a small fraction of AI cheats have been identified. He said it should be assumed that students will use AI unless they are supervised during an assessment.
Most research demonstrating high detection rates assumes that users simply copy and paste, without asking ChatGPT to rephrase or paraphrase the text. In other words, detection accuracy scores are calculated on the assumption that the AI user is naive.
The most recent academic misconduct report from the University of New South Wales showed early signs of a “new wave” of suspected cheating involving ChatGPT along with other online tools, with a notable spike in referrals in 2023. However, the institution chose not to provide any data regarding AI cheating.
By June, the Tertiary Education Quality and Standards Agency (TEQSA), which acts as a watchdog over the university sector, will demand that all institutions of higher education outline strategies for dealing with the threat that AI poses to the credibility of their degrees.
A spokesman from TEQSA stated, “While TEQSA does not hold data on student misconduct cases within institutions, informal discussions with several institutions have made us aware of cases where AI was inappropriately used in assessment tasks during 2023.”
According to a representative from Sydney University, suspected AI use was initially flagged by assessment markers who doubted that the submitted work was the student’s own.
“We use the Turnitin AI tool alongside a number of indicators of misconduct as part of our investigation,” she explained, “for example, if it contains different use of language, is irrelevant to the question, has false references, or fails to answer the set question.”
After considering all the evidence, a decision maker reaches a conclusion on the balance of probabilities. She said the 330 allegations were being thoroughly investigated, but that useful data on false positives and substantiated allegations was not yet available.
Dawson stated that the advent of generative AI had prompted some academics to reflect on their own values and that the field was still working to define the appropriate applications of the technology.
Universities that achieved the greatest success with Turnitin’s AI tool, which can distinguish between computer-generated text and human writing, used it as supplementary evidence, according to James Thorley, regional vice president of Turnitin.
He added that the universities he spoke with told him the release of ChatGPT had caused no upheaval because they had already made the necessary changes. “The universities I speak to are encouraging the use of generative AI but making sure it’s in the right framework and right guidelines.”
Huw Griffiths, an associate professor of English at Sydney University, is one of several academics using generative AI tools such as ChatGPT in their classes. Rather than reacting with paranoia, he said, he wanted students to understand how AI works and to critically evaluate what it can and cannot do.
He asks his third-year Shakespeare students to use ChatGPT to explore the meaning of a Shakespearean metaphor, then to evaluate the output against more conventional sources.
Similarly tight-lipped about its data, the University of Technology Sydney said it would rather have faculty talk to students about AI tools “and, where appropriate, invite students to actively engage with these tools and to critically reflect on how they can be used.”
A representative from the University of Wollongong stated that the institution’s policy on academic integrity regards the use of artificial intelligence (AI) in evaluations as plagiarism.
After evaluating Turnitin’s AI capability, the university decided not to use it, although that position remains under review. “We are reviewing our decision not to utilise the Turnitin tool based on the latest evidence,” he explained. “We have been evaluating AI tools.”