Everyone Is Cheating Their Way Through College - ChatGPT has unraveled the entire academic project.

Source: https://nymag.com/intelligencer/art...eating-education-college-students-school.html
Archive: https://archive.is/DzOE6

Chungin “Roy” Lee stepped onto Columbia University’s campus this past fall and, by his own admission, proceeded to use generative artificial intelligence to cheat on nearly every assignment. As a computer-science major, he depended on AI for his introductory programming classes: “I’d just dump the prompt into ChatGPT and hand in whatever it spat out.” By his rough math, AI wrote 80 percent of every essay he turned in. “At the end, I’d put on the finishing touches. I’d just insert 20 percent of my humanity, my voice, into it,” Lee told me recently.

Lee was born in South Korea and grew up outside Atlanta, where his parents run a college-prep consulting business. He said he was admitted to Harvard early in his senior year of high school, but the university rescinded its offer after he was suspended for sneaking out during an overnight field trip before graduation. A year later, he applied to 26 schools; he didn’t get into any of them. So he spent the next year at a community college, before transferring to Columbia. (His personal essay, which turned his winding road to higher education into a parable for his ambition to build companies, was written with help from ChatGPT.) When he started at Columbia as a sophomore this past September, he didn’t worry much about academics or his GPA. “Most assignments in college are not relevant,” he told me. “They’re hackable by AI, and I just had no interest in doing them.” While other new students fretted over the university’s rigorous core curriculum, described by the school as “intellectually expansive” and “personally transformative,” Lee used AI to breeze through with minimal effort. When I asked him why he had gone through so much trouble to get to an Ivy League university only to off-load all of the learning to a robot, he said, “It’s the best place to meet your co-founder and your wife.”

By the end of his first semester, Lee checked off one of those boxes. He met a co-founder, Neel Shanmugam, a junior in the school of engineering, and together they developed a series of potential start-ups: a dating app just for Columbia students, a sales tool for liquor distributors, and a note-taking app. None of them took off. Then Lee had an idea. As a coder, he had spent some 600 miserable hours on LeetCode, a training platform that prepares coders to answer the algorithmic riddles tech companies ask job and internship candidates during interviews. Lee, like many young developers, found the riddles tedious and mostly irrelevant to the work coders might actually do on the job. What was the point? What if they built a program that hid AI from browsers during remote job interviews so that interviewees could cheat their way through instead?

In February, Lee and Shanmugam launched a tool that did just that. Interview Coder’s website featured a banner that read F*CK LEETCODE. Lee posted a video of himself on YouTube using it to cheat his way through an internship interview with Amazon. (He actually got the internship, but turned it down.) A month later, Lee was called into Columbia’s academic-integrity office. The school put him on disciplinary probation after a committee found him guilty of “advertising a link to a cheating tool” and “providing students with the knowledge to access this tool and use it how they see fit,” according to the committee’s report.

Lee thought it absurd that Columbia, which had a partnership with ChatGPT’s parent company, OpenAI, would punish him for innovating with AI. Although Columbia’s policy on AI is similar to that of many other universities’ — students are prohibited from using it unless their professor explicitly permits them to do so, either on a class-by-class or case-by-case basis — Lee said he doesn’t know a single student at the school who isn’t using AI to cheat. To be clear, Lee doesn’t think this is a bad thing. “I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating,” he said.

In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments. In its first year of existence, ChatGPT’s total monthly visits steadily increased month-over-month until June, when schools let out for the summer. (That wasn’t an anomaly: Traffic dipped again over the summer in 2024.) Professors and teaching assistants increasingly found themselves staring at essays filled with clunky, robotic phrasing that, though grammatically flawless, didn’t sound quite like a college student — or even a human.

Two and a half years later, students at large state schools, the Ivies, liberal-arts schools in New England, universities abroad, professional schools, and community colleges are relying on AI to ease their way through every facet of their education. Generative-AI chatbots — ChatGPT but also Google’s Gemini, Anthropic’s Claude, Microsoft’s Copilot, and others — take their notes during class, devise their study guides and practice tests, summarize novels and textbooks, and brainstorm, outline, and draft their essays. STEM students are using AI to automate their research and data analyses and to sail through dense coding and debugging assignments. “College is just how well I can use ChatGPT at this point,” a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.

Sarah, a freshman at Wilfrid Laurier University in Ontario, said she first used ChatGPT to cheat during the spring semester of her final year of high school. (Sarah’s name, like those of other current students in this article, has been changed for privacy.) After getting acquainted with the chatbot, Sarah used it for all her classes: Indigenous studies, law, English, and a “hippie farming class” called Green Industries. “My grades were amazing,” she said. “It changed my life.” Sarah continued to use AI when she started college this past fall. Why wouldn’t she? Rarely did she sit in class and not see other students’ laptops open to ChatGPT.

Toward the end of the semester, she began to think she might be dependent on the website. She already considered herself addicted to TikTok, Instagram, Snapchat, and Reddit, where she writes under the username maybeimnotsmart. “I spend so much time on TikTok,” she said. “Hours and hours, until my eyes start hurting, which makes it hard to plan and do my schoolwork. With ChatGPT, I can write an essay in two hours that normally takes 12.”

Teachers have tried AI-proofing assignments, returning to Blue Books or switching to oral exams. Brian Patrick Green, a tech-ethics scholar at Santa Clara University, immediately stopped assigning essays after he tried ChatGPT for the first time. Less than three months later, teaching a course called Ethics and Artificial Intelligence, he figured a low-stakes reading reflection would be safe — surely no one would dare use ChatGPT to write something personal. But one of his students turned in a reflection with robotic language and awkward phrasing that Green knew was AI-generated. A philosophy professor across the country at the University of Arkansas at Little Rock caught students in her Ethics and Technology class using AI to respond to the prompt “Briefly introduce yourself and say what you’re hoping to get out of this class.”

It isn’t as if cheating is new. But now, as one student put it, “the ceiling has been blown off.” Who could resist a tool that makes every assignment easier with seemingly no consequences? After spending the better part of the past two years grading AI-generated papers, Troy Jollimore, a poet, philosopher, and Cal State Chico ethics professor, has concerns. “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,” he said. “Both in the literal sense and in the sense of being historically illiterate and having no knowledge of their own culture, much less anyone else’s.”

That future may arrive sooner than expected when you consider what a short window college really is. Already, roughly half of all undergrads have never experienced college without easy access to generative AI. “We’re talking about an entire generation of learning perhaps significantly undermined here,” said Green, the Santa Clara tech ethicist. “It’s short-circuiting the learning process, and it’s happening fast.”
 
I tried cheating with ChatGPT on a physics assignment and could easily see that what it spewed out was pure trash. Maybe at a high-school level it would be usable, but for college work I'd be horrified to trust it blindly after my first and last experience.
To be fair, this was half a year ago and maybe it has improved since then.
 
Then go back to pencil and paper and exam halls. We managed. Practical skills can be evaluated in person too. De-digitalise the entire thing. Return to parchment if you have to. I've no sympathy for any academics setting essays that they can't tell are written by ChatGPT; if you've automated everything to the point that you can't tell whether a student cheated using it, your course is worth shit.
 
AI is honestly best used as a study partner and it’s a very bad idea to use it for actual schoolwork. You’re going to take your final and eat shit because you just fucked around gooning as the machine spit out whatever, while smart people used the AI to check notes, consolidate information, and ask follow-up questions.
Treat it as Google but more specific, that’s the way you’re supposed to use this tool.
 
Colleges make the entire """learning""" process about regurgitating information and writing essays in order to make it easier to run as many classes as possible, which leaves the whole thing totally vulnerable to knowledge dumps and copy-pasted AI answers, the same way you used to just rewrite the Wikipedia article if you wanted to cheat. Maybe if they had more assessments of each student's ability to reason and explain their work, the quality of grading would go way up. Then again, grades are already a useless metric when you compare the knowledge represented by an 'A' in gender studies vs. an 'A' in mechanical engineering.

Students who do this are kind of retarded though; like okay, you're not actually learning anything, and ChatGPT isn't very smart, so your work is going to sort of suck anyway. If you think the assignments are dumb, first consider whether they're trying to teach you how to problem-solve in a certain way, because that's the entire point of them. I caused myself a lot of trouble in school by skipping homework I felt was redundant, and when test time rolled around I realized I wasn't nearly as practiced at the material as I thought. The point of doing variations of the same problem 100 times is to make it intuitive and work the logic into your subconscious problem-solving.

Korean kid in the article at least has the presence of mind to note that he's there for networking and not to actually learn coding. He sounds like kind of a fag though so I'm of two minds on it.

“I spend so much time on TikTok,” she said. “Hours and hours, until my eyes start hurting, which makes it hard to plan and do my schoolwork. With ChatGPT, I can write an essay in two hours that normally takes 12.”
I know it's clear from this quote that this girl is retarded, but how does it take you 12 hours to write an essay? Unless it's a detailed research paper, in which case ChatGPT is going to give you total dogshit anyway?
 
I tried cheating with ChatGPT on a physics assignment and could easily see that what it spewed out was pure trash. Maybe at a high-school level it would be usable, but for college work I'd be horrified to trust it blindly after my first and last experience.
To be fair, this was half a year ago and maybe it has improved since then.
In my experience it couldn't even get the correct answer if you gave it bearing problems to solve. In other words, it can't do basic trig. Even the essays it gives you are complete trash. I could see people using ChatGPT to write a template to get them started, but any teacher who accepts an unedited essay from ChatGPT has no standards to begin with. Are LLMs the problem, or is it that zoomers are dumb and professors have zero expectations beyond "make an effort"?
 
AI is honestly best used as a study partner and it’s a very bad idea to use it for actual schoolwork. You’re going to take your final and eat shit because you just fucked around gooning as the machine spit out whatever, while smart people used the AI to check notes, consolidate information, and ask follow-up questions.
Treat it as Google but more specific, that’s the way you’re supposed to use this tool.
It's also good for spotting various wording, punctuation, and grammar errors (but obviously rewrite or paraphrase, don't just copy!). The problem is it's only good for around 3-4k words (I think); much more than that and it stops reading, so you have to drip-feed it at some points.

I know it's clear from this quote that this girl is retarded, but how does it take you 12 hours to write an essay? Unless it's a detailed research paper, in which case ChatGPT is going to give you total dogshit anyway?
She did Indigenous studies and "hippie farming," so she might be stupid enough to both pick bullshit classes and be unable to string buzzwords together easily.

Then go back to pencil and paper and exam halls. We managed. Practical skills can be evaluated in person too. De-digitalise the entire thing. Return to parchment if you have to. I've no sympathy for any academics setting essays that they can't tell are written by ChatGPT; if you've automated everything to the point that you can't tell whether a student cheated using it, your course is worth shit.
Depending on the college, some classes use practicals and pen and paper tests still, others use essays along with them to "test your understanding." However, that doesn't count for much when your students see essays as a chance to regurgitate brands and talk about Stanley cups in an essay about overhydration.
 
Haven't these colleges implemented AI-detecting software? Maybe they don't work, I'm not really in the loop.
AI-detection software doesn't work; too many tools give false positives for silly reasons, like the presence of quotations. Professors can accuse students of using AI, but without concrete proof they can only accuse and hope for a confession.
 