A New Headache for Honest Students: Proving They Didn’t Use A.I.

NY Times - Archive
Students are resorting to extreme measures to fend off accusations of cheating, including hours-long screen recordings of their homework sessions.

A few weeks into her sophomore year of college, Leigh Burrell got a notification that made her stomach drop.
She had received a zero on an assignment worth 15 percent of her final grade in a required writing course. In a brief note, her professor explained that he believed she had outsourced the composition of her paper — a mock cover letter — to an A.I. chatbot.
“My heart just freaking stops,” said Ms. Burrell, 23, a computer science major at the University of Houston-Downtown.
But Ms. Burrell’s submission was not, in fact, the instantaneous output of a chatbot. According to Google Docs editing history that was reviewed by The New York Times, she had drafted and revised the assignment over the course of two days. It was flagged anyway by a service offered by the plagiarism-detection company Turnitin that aims to identify text generated by artificial intelligence.

Panicked, Ms. Burrell appealed the decision. Her grade was restored after she sent a 15-page PDF of time-stamped screenshots and notes from her writing process to the chair of her English department.
Still, the episode made her painfully aware of the hazards of being a student — even an honest one — in an academic landscape distorted by A.I. cheating.
Generative A.I. tools including ChatGPT are reshaping education for the students who use them to cut corners. According to a Pew Research survey conducted last year, 26 percent of teenagers said they had used ChatGPT for schoolwork, double the rate of the previous year. Student use of A.I. chatbots to compose essays and solve coding problems has sent teachers scrambling for solutions.
But the specter of A.I. misuse, and the imperfect systems used to root it out, may also be affecting students who are following the rules. In interviews, high school, college and graduate students described persistent anxiety about being accused of using A.I. on work they had completed themselves — and facing potentially devastating academic consequences.
In response, many students have imposed methods of self-surveillance that they say feel more like self-preservation. Some record their screens for hours at a time as they do their schoolwork. Others make a point of composing class papers using only word processors that track their keystrokes closely enough to produce a detailed edit history.

The next time Ms. Burrell had to submit an assignment for the class in which she had been accused of using A.I., she uploaded a 93-minute YouTube video documenting her writing process. It was annoying, she said, but necessary for her peace of mind.
“I was so frustrated and paranoid that my grade was going to suffer because of something I didn’t do,” she said.

These students’ fears are borne out by research reported in The Washington Post and Bloomberg Businessweek indicating that A.I.-detection software, a booming industry in recent years, often misidentifies work as A.I.-generated.
A new study of a dozen A.I.-detection services by researchers at the University of Maryland found that they had erroneously flagged human-written text as A.I.-generated about 6.8 percent of the time, on average.
“At least from our analysis, current detectors are not ready to be used in practice in schools to detect A.I. plagiarism,” said Soheil Feizi, an author of the paper and an associate professor of computer science at Maryland.

Turnitin, which was not included in the analysis, said in 2023 that its software mistakenly flagged human-written sentences about 4 percent of the time. A detection program from OpenAI that had a 9 percent false-positive rate, according to the company, was discontinued after six months.
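Those percentages sound small, but a false-positive rate compounds across every graded submission. A back-of-the-envelope sketch (assuming each submission is flagged independently, and treating Turnitin's self-reported sentence-level rate as a per-document rate purely for illustration) shows how likely an honest student is to be flagged at least once over a semester:

```python
def p_at_least_one_flag(false_positive_rate: float, submissions: int) -> float:
    """Probability an honest student is falsely flagged at least once,
    assuming each submission is checked independently."""
    return 1 - (1 - false_positive_rate) ** submissions

# ~4% per-document rate over 10 graded submissions:
print(round(p_at_least_one_flag(0.04, 10), 2))   # → 0.34

# The ~6.8% average rate from the Maryland study, same 10 submissions:
print(round(p_at_least_one_flag(0.068, 10), 2))  # → 0.51
```

Under these (simplified) assumptions, roughly a third to a half of honest students would face at least one false flag per semester, which is why schools warn against using the score as the sole determinant.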
Turnitin did not respond to requests for comment for this article, but has said that its scores should not be used as the sole determinant of A.I. misuse.
“We cannot mitigate the risk of false positives completely given the nature of A.I. writing and analysis, so, it is important that educators use the A.I. score to start a meaningful and impactful dialogue with their students in such instances,” Annie Chechitelli, Turnitin’s chief product officer, wrote in a 2023 blog post.
Some students are mobilizing against the use of A.I.-detection tools, arguing that the risk of penalizing innocent students is too great. More than 1,000 people have signed an online petition started by Kelsey Auman last month, one of the first of its kind, that calls on the University at Buffalo to disable its A.I.-detection service.
A month before her graduation from the university’s master of public health program, Ms. Auman was told by a professor that three of her assignments had been flagged by Turnitin. She reached out to other members of the 20-person course, and five told her that they had received similar messages, she recalled in a recent interview. Two said that their graduations had been delayed.
Ms. Auman, 29, was terrified she would not graduate. She had finished her undergraduate studies well before ChatGPT arrived on campuses, and it had never occurred to her to stockpile evidence in case she was accused of cheating using generative A.I.

“You just assume that if you do your work, you’re going to be fine — until you aren’t,” she said.

Ms. Auman said she was concerned that A.I.-detection software would punish students whose writing fell outside “algorithmic norms” for reasons that had nothing to do with artificial intelligence. In a 2023 study, Stanford University researchers found that A.I.-detection services were more likely to misclassify the work of students who were not native English speakers. (Turnitin has disputed those findings.)
After Ms. Auman met with her professor and exchanged lengthy emails with the school’s Office of Academic Integrity, she was notified that she would graduate as planned, without any charges of academic dishonesty.
“I’m just really glad I’m graduating,” she said. “I can’t imagine living in this feeling of fear for the rest of my academic career.”
John Della Contrada, a spokesman for the University at Buffalo, said that the school was not considering discontinuing its use of Turnitin’s A.I.-detection service in response to the petition.
“To ensure fairness, the university does not rely solely on A.I.-detection software when adjudicating cases of alleged academic dishonesty,” he wrote in an email, adding that the university guaranteed due process for accused students, a right to appeal and remediation options for first-time offenders. (Ms. Burrell’s school, the University of Houston-Downtown, warns faculty members that plagiarism detectors including Turnitin “are inconsistent and can easily be misused,” but still makes them available.)
Other schools have determined that detection software is more trouble than it is worth: The University of California, Berkeley; Vanderbilt; and Georgetown have all cited reliability concerns in their decisions to disable Turnitin’s A.I.-detection feature.

“While we recognize that A.I. detection may give some instructors peace of mind, we’ve noticed that overreliance on technology can damage a student-and-instructor relationship more than it can help it,” Jenae Cohn, executive director of the Center for Teaching and Learning at U.C. Berkeley, wrote in an email.

Sydney Gill, an 18-year-old high school senior in San Francisco, said she appreciated that teachers were in an extremely difficult position when it came to navigating an academic environment jumbled by A.I. She added that she had second-guessed her writing ever since an essay she entered in a writing competition in late 2023 was wrongly marked as A.I.-generated.
That anxiety persisted as she wrote college application essays this year. “I don’t want to say it’s life-changing, but it definitely altered how I approach all of my writing in the future,” she said.
In 2023, Kathryn Mayo, a professor at Cosumnes River College in Sacramento, started using A.I.-detection tools from Copyleaks and Scribbr on essays from students in her photo-history and photo-theory classes. She was relieved, at first, to find what she hoped would be a straightforward fix in a complex and frustrating moment for teachers.
Then she ran some of her own writing through the service, and was notified that it had been partly generated using A.I. “I was so embarrassed,” she said.

She has since changed some of her assignment prompts to make them more personal, which she hopes will make them harder for students to outsource to ChatGPT. She tries to engage any student whom she seriously suspects of misusing A.I. in a gentle conversation about the writing process.
Sometimes students sheepishly admit to cheating, she said. Other times, they just drop the class.
 
Upon watching that video she submitted and looking at those screenshots, I'm absolutely convinced she did in fact cheat and use an AI tool to write it. And I'm saying that as a programmer.
To be fair, she probably didn't understand shit. Programming is hard for a lot of people. When I was in school we had a statistics class where they taught us how to program in R. I was the only person in my program who understood how to use it. I ended up doing the assignments, putting my draft in the share folder for the rest of the class, then editing my code so it looked nicer than everyone else's, and literally the entire class submitted my work for every single assignment. They never got caught; I'm pretty sure they all passed. Their grades were all a bit lower than mine because the drafts all had something wrong with them, usually formatting, or they'd be less efficient than the final one I'd turn in, but they all met the requirements of the assignment.

The thing is, they were never going to understand it, they were likely barely going to ever need to use R in the real world, the course was kind of fucking stupid all around. But it was mandatory for the program we were all in.
 
Simple solution: return to handwritten papers instead of typed ones. I've heard some zoomers can't even write cursive, so this could also bring back rudimentary writing skills.
Maybe colleges should stop giving pointless busy work assignments and actually teach you useful stuff instead. I learned more over one semester with a lab class than I did the rest of my college career taking lecture classes.
 
They can't make you prove a thing, and they know it. If I were a student and somebody tried to claim I used AI to write a paper, I'd demand they prove it or restore the grade, and if they got pissy about it, make it quite clear I'd refer the matter to a lawyer if they tried to fuck with my education, to say nothing of that kind of baseless slander. That's a pretty serious accusation in academia, one they'd best have more proof of than "our AI program says so" before they start making it. Just the accusation alone can fuck you permanently in some fields, like law or medicine.
They’ll demand you bring the original word processor file for analysis and then determine whether you actually wrote anything using that. I was reading through a business school student’s lawsuit against Yale for this exact type of problem, and that’s what they demanded after his hearing.

Just do in-person testing with internet access restricted by software, or handwritten responses. It's really not that hard to stop this issue at the collegiate level. The problem is that faculty are too lazy and weak-willed to tell students used to taking tests at home to fuck off. The people entering college right now are basically products of the covid years. If you told them they had to handwrite essays in a bluebook, they would faint.
 
There are days, when I've been looking for work, where I've written at least a dozen unique cover letters in a single day, all tailored to whatever specific job I was applying for. I'm pretty sure writing a cover letter is something I was taught in high school, probably when I was 15 or 16. Why is this even a college assignment at all?
In order to rake in as much of that sweet, sweet, student loan money as possible, colleges have had to lower their admission standards to pad out their attendance numbers. Meanwhile, as a consequence of No Child Left Behind and other bad government educational policy, the standards for graduating the various levels of school have also dropped. As a result, college graduates are barely as literate and knowledgeable as high school students once were, high school can barely turn out graduates that would have passed yesterday's middle school, etc.
 
So glad I finished school before Current Year Clown World.

No "social media", nor "social justice", and no "ChatGPT".
Turnitin existed 15 years ago when I was in school and was false-flagging plagiarism even back then. When I was paranoid about this, I wrote everything I did in school in a Dropbox-synced folder with version history enabled, so you could see historical docx or code files in the event some braindead automated system said I cheated.
 
Turnitin sucks. I thankfully graduated before AI took off, but even then, Turnitin would get shit wrong and say you plagiarized stuff even if you hadn't.

I'm glad I graduated in a year when AI wasn't readily available to normies. I feel like every degree holder past a certain year will now always be questioned about whether they had AI help.
 
From today onward, all assignments must have the word NIGGER as a header, and the word NIGGER must also be hidden at least 15 times in the presentation, as AI is not allowed to write the word NIGGER.

As an actual solution, make all the important assignments and tests into a live presentation. No printed shit, all handwritten. The point shouldn't be to forbid AI from being used but to make it so much of a hassle that studying is just easier.
If you can't handwrite paragraph responses to questions you aren't educated in the first place.
 