Everyone Is Cheating Their Way Through College - ChatGPT has unraveled the entire academic project.

Source: https://nymag.com/intelligencer/art...eating-education-college-students-school.html
Archive: https://archive.is/DzOE6

Chungin “Roy” Lee stepped onto Columbia University’s campus this past fall and, by his own admission, proceeded to use generative artificial intelligence to cheat on nearly every assignment. As a computer-science major, he depended on AI for his introductory programming classes: “I’d just dump the prompt into ChatGPT and hand in whatever it spat out.” By his rough math, AI wrote 80 percent of every essay he turned in. “At the end, I’d put on the finishing touches. I’d just insert 20 percent of my humanity, my voice, into it,” Lee told me recently.

Lee was born in South Korea and grew up outside Atlanta, where his parents run a college-prep consulting business. He said he was admitted to Harvard early in his senior year of high school, but the university rescinded its offer after he was suspended for sneaking out during an overnight field trip before graduation. A year later, he applied to 26 schools; he didn’t get into any of them. So he spent the next year at a community college, before transferring to Columbia. (His personal essay, which turned his winding road to higher education into a parable for his ambition to build companies, was written with help from ChatGPT.) When he started at Columbia as a sophomore this past September, he didn’t worry much about academics or his GPA. “Most assignments in college are not relevant,” he told me. “They’re hackable by AI, and I just had no interest in doing them.” While other new students fretted over the university’s rigorous core curriculum, described by the school as “intellectually expansive” and “personally transformative,” Lee used AI to breeze through with minimal effort. When I asked him why he had gone through so much trouble to get to an Ivy League university only to off-load all of the learning to a robot, he said, “It’s the best place to meet your co-founder and your wife.”

By the end of his first semester, Lee checked off one of those boxes. He met a co-founder, Neel Shanmugam, a junior in the school of engineering, and together they developed a series of potential start-ups: a dating app just for Columbia students, a sales tool for liquor distributors, and a note-taking app. None of them took off. Then Lee had an idea. As a coder, he had spent some 600 miserable hours on LeetCode, a training platform that prepares coders to answer the algorithmic riddles tech companies ask job and internship candidates during interviews. Lee, like many young developers, found the riddles tedious and mostly irrelevant to the work coders might actually do on the job. What was the point? What if they built a program that hid AI from browsers during remote job interviews so that interviewees could cheat their way through instead?

In February, Lee and Shanmugam launched a tool that did just that. Interview Coder’s website featured a banner that read F*CK LEETCODE. Lee posted a video of himself on YouTube using it to cheat his way through an internship interview with Amazon. (He actually got the internship, but turned it down.) A month later, Lee was called into Columbia’s academic-integrity office. The school put him on disciplinary probation after a committee found him guilty of “advertising a link to a cheating tool” and “providing students with the knowledge to access this tool and use it how they see fit,” according to the committee’s report.

Lee thought it absurd that Columbia, which had a partnership with ChatGPT’s parent company, OpenAI, would punish him for innovating with AI. Although Columbia’s policy on AI is similar to that of many other universities’ — students are prohibited from using it unless their professor explicitly permits them to do so, either on a class-by-class or case-by-case basis — Lee said he doesn’t know a single student at the school who isn’t using AI to cheat. To be clear, Lee doesn’t think this is a bad thing. “I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating,” he said.

In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments. In its first year of existence, ChatGPT’s total monthly visits steadily increased month-over-month until June, when schools let out for the summer. (That wasn’t an anomaly: Traffic dipped again over the summer in 2024.) Professors and teaching assistants increasingly found themselves staring at essays filled with clunky, robotic phrasing that, though grammatically flawless, didn’t sound quite like a college student — or even a human.

Two and a half years later, students at large state schools, the Ivies, liberal-arts schools in New England, universities abroad, professional schools, and community colleges are relying on AI to ease their way through every facet of their education. Generative-AI chatbots — ChatGPT but also Google’s Gemini, Anthropic’s Claude, Microsoft’s Copilot, and others — take their notes during class, devise their study guides and practice tests, summarize novels and textbooks, and brainstorm, outline, and draft their essays. STEM students are using AI to automate their research and data analyses and to sail through dense coding and debugging assignments. “College is just how well I can use ChatGPT at this point,” a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.

Sarah, a freshman at Wilfrid Laurier University in Ontario, said she first used ChatGPT to cheat during the spring semester of her final year of high school. (Sarah’s name, like those of other current students in this article, has been changed for privacy.) After getting acquainted with the chatbot, Sarah used it for all her classes: Indigenous studies, law, English, and a “hippie farming class” called Green Industries. “My grades were amazing,” she said. “It changed my life.” Sarah continued to use AI when she started college this past fall. Why wouldn’t she? Rarely did she sit in class and not see other students’ laptops open to ChatGPT.

Toward the end of the semester, she began to think she might be dependent on the website. She already considered herself addicted to TikTok, Instagram, Snapchat, and Reddit, where she writes under the username maybeimnotsmart. “I spend so much time on TikTok,” she said. “Hours and hours, until my eyes start hurting, which makes it hard to plan and do my schoolwork. With ChatGPT, I can write an essay in two hours that normally takes 12.”

Teachers have tried AI-proofing assignments, returning to Blue Books or switching to oral exams. Brian Patrick Green, a tech-ethics scholar at Santa Clara University, immediately stopped assigning essays after he tried ChatGPT for the first time. Less than three months later, teaching a course called Ethics and Artificial Intelligence, he figured a low-stakes reading reflection would be safe — surely no one would dare use ChatGPT to write something personal. But one of his students turned in a reflection with robotic language and awkward phrasing that Green knew was AI-generated. A philosophy professor across the country at the University of Arkansas at Little Rock caught students in her Ethics and Technology class using AI to respond to the prompt “Briefly introduce yourself and say what you’re hoping to get out of this class.”

It isn’t as if cheating is new. But now, as one student put it, “the ceiling has been blown off.” Who could resist a tool that makes every assignment easier with seemingly no consequences? After spending the better part of the past two years grading AI-generated papers, Troy Jollimore, a poet, philosopher, and Cal State Chico ethics professor, has concerns. “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,” he said. “Both in the literal sense and in the sense of being historically illiterate and having no knowledge of their own culture, much less anyone else’s.”

That future may arrive sooner than expected when you consider what a short window college really is. Already, roughly half of all undergrads have never experienced college without easy access to generative AI. “We’re talking about an entire generation of learning perhaps significantly undermined here,” said Green, the Santa Clara tech ethicist. “It’s short-circuiting the learning process, and it’s happening fast.”
 
The problem with this is that AI is garbage at literally anything specialized that deals with the real world. AI gathers and compiles info from various sources, processes it, and spits it out. But if you've ever had a specialized hobby, you know that 90% of the 'info' out there on the topic is flat-out wrong. This is because of how humans learn: first we learn rules which are generally true, even though there are many exceptions to them. Then we learn on a more granular level, zeroing in on details, learning the exceptions to those general rules, and fine-tuning our knowledge not only to a more specific level but to the specific conditions we are working with, which are unique to our situation. Most of the publicly available information on any given topic sits at that surface level of generalization. So that's what the AI will spit out, your essay will jibe with what the textbook says, and you'll pass. But that was supposed to be just the first step in a pedagogical process you were meant to take on your own, so you would internalize those rules and then be able to recognize where they don't apply and refine your knowledge further down the road.

In a way, the colleges have made their own bed. I once read an autobiography set at Oxford in the 19th century, and it was deeply shocking to me: the university was run completely by the faculty. They set the policies, they were in charge, and so the primary goal of the university was education - hence the great results the university system produced during that century. That is no longer the case anywhere; the modern university is run by a sort of funhouse-mirror parody of a corporate C-suite. I know many professors who ran very difficult, engaging classes and were forced to dumb their courses down by these puffed-up bean counters, who had the bright idea of letting students give performance reviews to their professors. I know professors who were told they had to pass students who had demonstrated zero grasp of the course material, because high fail rates are bad PR. If you build a hollow, dumbed-down educational institution, don't be surprised when students start cheating en masse. Still, if you think this is 'good' in any objective sense, you are retarded and have never tried to do anything that involves bending the natural world to your will, never had the exhausting experience of realizing that 80% of the shit published post-1980s about any given subject is either half-true or complete bullshit.

It's so bad that this is what I think would be necessary to build a functional university nowadays:

1. Decapitate 90% of the administrative staff on the quad
2. Winnow faculty down to actual subject matter experts and expunge ideologues. Put them in charge. Make the retained 10% low-level administrative staff their abject servants.
3. Enclose the campus in a Faraday cage
4. The only landline internet connection is to the library and is monitored. No wifi.

Why? Simple: writing academic papers is a virtually useless endeavor if you aren't doing an experiment or study. I'm NOT ALLOWED to write my own ideas or interpretations of a topic and back them up with facts. We were taught that our essays were to contain ZERO of our own thoughts and opinions. Every literal fucking sentence had to have a citation. So if my job is simply to read studies, copy and paste quotes, and throw them into an essay, and do that for 50 pages, why on God's green earth should I not be using AI? What reason is there for me not to briskly glance over the 20 studies I use to write my essay, throw them into AI, and let it spit out whatever we get? If I'm not allowed to put any opinions or personal interpretations into the paper, then what difference does it make whether I wrote it or an AI did? My essays, which I got all As on, were exactly that kind of copy-paste collage.
It's hilarious because in my hard STEM classes I was told that we had to source facts but then we had to put them into context and use our own reasoning to come to our own conclusions about them. It was only the humanities that had this mentality - it's like they know that 90% of the field is bullshit, and so demand this ludicrous fact fetishism to cope and ape after the hard sciences.
 
It's gotten a lot better, and you may not have prompted it properly. If you're using the free models, that's part of the reason you weren't getting good results. Using a better model like o3-mini-high will usually give you better answers.
The "it can't do the most basic problems" line is cope from people who have a bias, don't know how to use it, or are too poor to afford $20/mo for ChatGPT Plus.
Is the paid version that much better?

I have no desire nor need for it, so I never went all that deep into it. The free version is pretty awful. It never gives any indication that the paid version would have a chance of being better.

I always saw it as creating the format for you and little else.
 
It's so bad that this is what I think would be necessary to build a functional university nowadays:

1. Decapitate 90% of the administrative staff on the quad
2. Winnow faculty down to actual subject matter experts and expunge ideologues. Put them in charge. Make the retained 10% low-level administrative staff their abject servants.
3. Enclose the campus in a Faraday cage
4. The only landline internet connection is to the library and is monitored. No wifi.
The most important policy would be to cut down on student numbers. No more than 0.1% of people have what it takes to benefit from higher ed, and no more than 0.1% of the above have what it takes for post-grad study.
 
It's hilarious because in my hard STEM classes I was told that we had to source facts but then we had to put them into context and use our own reasoning to come to our own conclusions about them. It was only the humanities that had this mentality - it's like they know that 90% of the field is bullshit, and so demand this ludicrous fact fetishism to cope and ape after the hard sciences.
I was in STEM and they were still like this. It really was a brain drain. There wasn't a single class that wasn't like this. Sometimes the humanities would ask for opinion pieces, but they were usually only a page or two. The only time my input mattered was during experiments.

Honestly, academia is a flat-out joke.
 
I graduated from college a long time ago, but not so long ago that I don't remember what it was like. The reality is, my essays could 1,000% have been written by AI, and there's no reason I couldn't or shouldn't have done it. If AI had existed when I was in college, I would never have written a single fucking paper, and there's an extremely good reason why.
We don’t disagree. Guessing you went to college in the 00s? I did and had the same experience of being forced to go to a bunch of gay classes I didn’t give a fuck about and regurgitate a bunch of gay drivel to get my grade. This is the academy’s greatest failure. If you’re going to demand students take non-degree relevant coursework, then at least have the offerings and allow the flexibility to take something intellectually stimulating. For example, I am not a musician by trade, but would have loved to take music theory and composition courses because I find it interesting. But no, I had to take a gay communist sociology course to fulfill the social science requirement. Fuck college, they’re a bunch of homosexual communist thieves.
 
I tried cheating with ChatGPT on a physics assignment, and I could easily see that what it spewed out was pure trash. Maybe at a high-school level it would be usable, but at the college level I would be horrified to trust it blindly after my first and last experience.
To be fair, this was half a year ago, and maybe it has improved since then.
The Korean guy they profiled in the article was bragging about making cheating software... that's why Columbia put his dumbass on probation.

He's seething that he got shut out of Harvard and is coping about getting into Columbia (where he's literally doing nothing but cheating with ChatGPT and trying to party).

That said, I'd bet his family is loaded from the college-prep business (LMAO).
 
One other factor not yet discussed here is the rise of ADHD diagnoses from quack doctors, which let students get special accommodations. Now they can get a hack to say they've got ADHD, and suddenly they have eight hours to take an exam instead of two - more time to run ChatGPT to make sure their answers are correct, if they'd like.
Huge swaths of the university are not doing actual job training.
I’ve had opportunities to work with colleges on curriculum development, and the community colleges are eager to work with private industry; the universities are basically “lol fuck off.” They talk a good game about getting students career-ready, but ultimately it comes down to administrative staff deciding whether they feel like it or not. Unsurprisingly, they don’t like anything that might disrupt the various fiefdoms they’ve got going on, so collaboration is often fruitless.

The attitudes of community colleges and universities are night and day. Community colleges are more stigmatized, but I think they are something to keep an eye on.
 
The most important policy would be to cut down on student numbers. No more than 0.1% of people have what it takes to benefit from higher ed, and no more than 0.1% of the above have what it takes for post-grad study.
I think implementing actual standards would do that by itself, starting with entrance examinations. I feel like just going back to Harvard's 1869 entrance exam would winnow out the chaff beyond that point.
 
I wish I had time to weigh in on this topic in a more accessible place. I work with science-teacher candidates and hold a credential myself. The young people coming out of college into these programs are more and more retarded every year, and this trend predates AI. It's only going to get worse.
I use AI for my work all the time. But I've actually had the experience of writing from scratch. I've had the experience of laying out a bunch of physical texts and trying to integrate the information from those texts into my own original thoughts. I've had to read so many, so, so, so many papers and grade them without any help. I think these experiences are essential to making the best use of AI. I don't know how you verify that younger people have those same experiences these days. We're creating people who will be no better than having AI talk to itself.

Best advice I can give is to take control of your kid's learning. Even if you still send them to school, pay attention to their work. Make them do it honestly when it is worthwhile practice. Who gives a fuck if the AI kid got an unearned A and your kid got an honest C? When all this comes to a head, the people with pretty transcripts will be worthless compared to the people with actual skills.

I can't wait for the university bubble to burst. I can't believe I still find myself involved with the system even though it has historically fucked over my family going back generations. I guess I'm just there to be a pain in the ass and collect a check in the meantime.
 
Imagine wasting the best years of your life, when you are most capable of learning, just scrolling TikTok while a chatbot writes your essays. These kids are cooked.

Additionally, there's a lot of bullshit that college forces you to write. You need to do "discussion boards" when taking online classes, for example. It's busywork that people want to push to the side.
I agree on this, but part of the point of that busywork is to make you better at efficient writing. If you habitually take an hour to write a simple paragraph or response in the workplace, you're fired.
 
Is the paid version that much better?

I have no desire nor need for it, so I never went all that deep into it. The free version is pretty awful. It never gives any indication that the paid version would have a chance of being better.

I always saw it as creating the format for you and little else.
Yes, it's a lot better. The model you use matters, like with any LLM. ChatGPT is multimodal and general, so it's not specialized in just math problems and such. You also have to know how to prompt it; it isn't a magic tool that will understand what output you want. It has limitations too, but, again, some people have such a massive bias that they will look for reasons to discount it as a useful tool. I've had it solve most differential-equation problems correctly, and I didn't even add extra prompts aside from "solve this," which isn't best practice.
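For what it's worth, the model gap is easy to see if you hit the API instead of the web UI. A minimal sketch, assuming the official `openai` Python client and an `OPENAI_API_KEY` in your environment; the model names are just examples of a cheap default versus a reasoning model, not a recommendation:

```python
# Minimal sketch: same prompt, two models, eyeball the difference.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

prompt = "Solve y'' + 4y = 0 and show your steps."

for model in ("gpt-4o-mini", "o3-mini"):  # cheap default vs. reasoning model
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {model} ---")
    print(resp.choices[0].message.content)
```

Run it on a problem set you already know the answers to and the difference between model tiers stops being a matter of opinion.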
 
Haven't these colleges implemented AI-detecting software? Maybe it doesn't work; I'm not really in the loop.
They use it, but they can't use it as actual proof that someone cheated. So they use it to get a general picture, and if an assignment looks mostly AI-generated, they try to bury it on some other technicality.
Unfortunately, a lot of naturally smooth, sophisticated writing gets falsely flagged, and people have begun deliberately dumbing down their style.
 
It doesn't help that high schools in the US already teach students to write in a very, very structured way that wouldn't be hard for an AI to emulate. Forget ChatGPT: what if a student knows there's more out there, like Grok? Or that Chinese AI whose name I'm forgetting, the one with the whale logo.

It's really not that difficult to set up a chat, feed it a few sample essays and then bam, it can generate the essay(s).
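The "feed it a few sample essays" step is just few-shot prompting, nothing fancier. A rough sketch of what that looks like, again assuming the `openai` Python client; the essay file names are made-up placeholders:

```python
# Rough sketch of style-matching via few-shot prompting.
# Assumes `pip install openai`, OPENAI_API_KEY set, and two sample
# essays on disk (the file names here are hypothetical placeholders).
from pathlib import Path

from openai import OpenAI

client = OpenAI()

samples = [Path(name).read_text() for name in ("essay1.txt", "essay2.txt")]

messages = [{
    "role": "system",
    "content": "Match the style, vocabulary, and typical quirks of the "
               "sample essays in anything you write.",
}]
for essay in samples:
    messages.append({"role": "user", "content": f"Sample essay:\n{essay}"})
messages.append({
    "role": "user",
    "content": "Now write a five-paragraph essay on Hamlet in that style.",
})

resp = client.chat.completions.create(model="gpt-4o", messages=messages)
print(resp.choices[0].message.content)
```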
The whole education system in the US is fucked. Younger users will hate me for saying this, but there should be no such thing as "easy A" classes. That shit just gets you used to being a slacker, to the point where you might actually miss an obvious ADHD diagnosis because you were never actually challenged in high school. Then you get to college and, oh shit, you don't actually know how to study, how to absorb information from what you're reading, how to write notes you'll actually remember, and so on. Guess you gotta depend on AI even more!

Oh, and God forbid schools actually make changes to their systems. No, no, we gotta just keep telling kids to do after-school shit so that (in theory) you show colleges you're not a "couch potato" - that's a quote I was told when I lived in the US. It's so stupid: the people in charge know there's a problem (schools don't actually challenge a lot of students), but they don't actually want to fix it. Forget college at this point; save the middle and high schools first. Make those little shits READ.
 
It doesn't help that high schools in the US already teach students to write in a very, very structured way that wouldn't be hard for an AI to emulate.
It's probably good that there is a formula for writing. Not everyone needs to do, or is capable of doing, much more than that, but if everyone in the populace can follow the formula, it raises the overall efficacy and quality of communication. For decades, educationalists have been far too obsessed with the top of Bloom's taxonomy and totally forgot that the basic skills need to be built first. It's the whole problem with the now-controversial Whole Word model of teaching reading, and with the current trend in high-school science education.

Anyway, specifically to your point about catching students using AI: the better tool is any program that monitors Chromebooks. You can see what they're doing and what resources they're accessing; you don't need an AI screener. The other solution is pen and paper in a Luddite's classroom.
 
I tried cheating with ChatGPT on a physics assignment, and I could easily see that what it spewed out was pure trash. Maybe at a high-school level it would be usable, but at the college level I would be horrified to trust it blindly after my first and last experience.
To be fair, this was half a year ago, and maybe it has improved since then.
For my high-school Advanced English class, I was supposed to write a ten-page paper on Hamlet. I blew it off till the last minute and wrote two pages; the remaining eight devolved into pure madness about how everyone in the play mutated into strawberry-ice-cream monsters, Hamlet's castle was destroyed by a giant ice-cream monster, and he killed his wife after discovering she'd cheated because his child had waffle cone instead of hair.

My teacher gave me a 100% on it, and I was too afraid to ask her whether she ever read past the first two pages. Ever since, I've assumed that the majority of teachers either just read the first page or skim whatever you turn in, and I'd bet the same applies in college.
 
Does all this AI shit work in languages other than English? My uni allowed exams and papers to be written in languages other than English, so how would that impact things? I just assume Wang Chung and Ghandi Curry are cheating either way...

For content: I ran a quick Grok query about a niche part of my job, and it came back with stuff 30 years out of date; the next query missed about 15 of 30 answers to a question that has a legally codified answer.
I am confident that people who can't get much out of AI are simply poor users of it. Grok, in my experience, is optimal for coding but much worse at the written word than GPT. To answer your question: yes, it can write in all languages and pass the Turing test in each. Reddit, YouTube, and Facebook comments are now fully colonized by bots; of that I am fully convinced.

For fun, I had GPT rewrite this in AAVE, in Español, and as a radical feminist:

1. AAVE / Black SM Lingo POV:

man ion even be surprised no more. if u ain't gettin nun outta AI it prolly jus mean u don’t know how to use it fr. Grok? that one hit diff wit code n sh*t, but GPT? she the one u want when it come to words, period. n don’t get me started on them comments — reddit, YouTube, FB? bots ran thru allat. i’m talkin fully bot’d up. i’m convinced 100%.

2. En Español:

La verdad, si alguien no le saca provecho a la IA, es porque no sabe usarla bien. Grok me ha servido mucho para programar, pero para escribir, GPT es mil veces mejor. Y te digo algo más: Reddit, YouTube y Facebook ya están llenos de bots en los comentarios. Estoy totalmente convencido.

3. Bitter Radical Feminist POV:

Of course people can’t “get much” out of AI — because they never learned how to actually engage with tools unless spoon-fed by some patriarchal tech bro. Grok, predictably, is optimized for coding — because god forbid a machine be good at language, expression, nuance, or anything women have historically dominated. GPT is the only one that even remotely understands the written word. And let’s be honest: the entire internet is a bot wasteland now. Reddit? Bots. YouTube? Bots. Facebook? A digital graveyard of automated misogyny. Fully colonized. No soul left.
This shit is legitimately a before/after event in our timeline and I don't think many people notice. The gender ideology agenda owes much of its success to bots, I think.
 
The real problem underneath all this is that people aren't allowed to just learn on their own and try for jobs, because businesses require credentials over talent or work history.
They would look at IQ or other tests, but those are racist or something.
 