Students using “humanizer” programs to beat accusations of cheating with AI

2026-01-28 18:09 · www.nbcnews.com

Students are taking new measures, such as dumbing down their work, spying on themselves and using AI “humanizer” programs, to beat accusations of cheating with artificial intelligence.

On college campuses across the United States, the introduction of generative artificial intelligence has sparked a sort of arms race.

Rapid adoption of AI by young people set off waves of anxiety that students could cheat their way through college, leading many professors to run papers through online AI detectors that inspect whether students used large language models to write their work for them. Some colleges say they’ve caught hundreds of students cheating this way.

However, since their debut a few years ago, AI detectors have repeatedly been criticized as unreliable and more likely to flag non-native English speakers on suspicion of plagiarism. And a growing number of college students also say their work has been falsely flagged as written by AI — several have filed lawsuits against universities over the emotional distress and punishments they say they faced as a result.

NBC News spoke to ten students and faculty who described being caught in the middle of an escalating war of AI tools.

Amid accusations of AI cheating, some students are turning to a new group of generative AI tools called “humanizers.” The tools scan essays and suggest ways to alter text so they aren’t read as having been created by AI. Some are free, while others cost around $20 a month.

Some users of the humanizer tools rely on them to avoid detection of cheating, while others say they don’t use AI at all in their work, but want to ensure they aren’t falsely accused of AI-use by AI-detector programs.

In response, and as chatbots continue to advance, companies such as Turnitin and GPTZero have upgraded their AI detection software, aiming to catch writing that has gone through a humanizer. They have also launched applications that students can use to track their browser activity or writing history so they can prove they wrote the material, though some humanizers can simulate typing out text a user would otherwise copy and paste, in case a student’s keystrokes are tracked.

“Students now are trying to prove that they’re human, even though they might have never touched AI ever,” said Erin Ramirez, an associate professor of education at California State University, Monterey Bay. “So where are we? We’re just in a spiral that will never end.”

The competition between AI detectors and writing assistance programs has been propelled by a heightened anxiety about cheating on college campuses. It shows how inescapable AI has become at universities, even for students who don’t want to use it and for faculty who wish they didn’t have to police it.

“If we write properly, we get accused of being AI — it’s absolutely ridiculous,” said Aldan Creo, a graduate student from Spain who studies AI detection at University of California San Diego. “Long term, I think it’s going to be a big problem.”

A teaching assistant in a data science course accused Creo of using AI to write a report in November. Creo explained to the TA that he has a habit of explaining step by step how he reasons through a problem, which ChatGPT is known to do, according to a copy of messages he exchanged with the TA.

Eventually, his grade was corrected but, to avoid another battle, Creo said he sometimes “dumbs down” his work by leaving words misspelled or using Spanish sentence structures that aren’t proper in English. And now, Creo runs all of his material through an AI detector pre-emptively.

“I have to do whatever I can to just show I actually write my homework myself,” he said.

‘How could AI make any of that up?’

At its worst, the stress from the accusations has driven some students to drop out of school.

Brittany Carr received failing grades on three assignments she completed as a long-distance student at Liberty University, a private evangelical school in Virginia that has one of the largest online enrollments in the U.S., because they were flagged by an AI detector. She showed her revision history, including how she’d written one first by hand in a notebook, according to screenshots of emails and messages she exchanged with her professors.

“How could AI make any of that up?” Carr wrote in a Dec. 5 email. “I spoke about my cancer diagnosis and being depressed and my journey and you believe that is AI?”

Her evidence wasn’t enough — the social work school still told her she needed to take a “writing with integrity” class and sign a statement apologizing for using AI, emails show.

“It’s a very weird feeling, because the school is using AI to tell us that we’re using AI,” she said.

It stressed her out. Carr worried another cheating accusation could cause the Department of Veterans Affairs to take away her financial aid. In order to avoid more false accusations, she said, she ran all of her material through Grammarly’s AI detector and changed any section that it highlighted until it concluded a human wrote the whole thing.

“But it does feel like my writing isn’t giving insight into anything — I’m writing just so that I don’t flag those AI detectors,” she said.

After the semester ended, Carr decided to leave Liberty. She’s unsure where she’ll transfer.

"I’m writing just so that I don’t flag those AI detectors."

Brittany Carr, Liberty University student

Liberty University, which was co-founded by religious broadcaster Jerry Falwell Sr., said it does not comment on individual students. It said in a statement that all academic integrity concerns are addressed “with care and discretion, providing a process that keeps the student’s best interests at the forefront. Our aim is only to see our students succeed, and every student is afforded an exhaustive process to address any concerns about unfair treatment.”

Eric Wang, vice president of research at Quillbot, which makes a detector and a humanizer, said this kind of fear will remain unless educators move away from automatically deducting points and instead bring students in to discuss how they use AI.

Once that happens, Wang said, “it starts to not matter whether you do or don’t sound like AI and instead moves us toward a world asking how are we using this technology but not losing our sense of humanity, our sense of creativity, and our ability to create great things on our own.”

‘Where’s the line’

At the root of many conflicts about students using chatbots to cheat is a disagreement about what counts as too much AI use on homework.

“When we did our first training program for 3,000 teachers,” said Edward Tian, CEO and co-founder of GPTZero, “every teacher and every student had a different understanding of what’s acceptable — just the understanding was very fragmented, and then it’s getting even more fragmented with the number of tools growing.”

Independent analyses of AI detectors show mixed accuracy. One pre-print study last year found GPTZero is good at finding AI-generated writing, but “its reliability in distinguishing human-authored texts is limited.” However, other research pegged the company’s detector at near-perfect accuracy. Meanwhile, separate studies from 2023 and 2024 have found that Turnitin had a low false positive rate, but failed to identify more than a quarter of AI-generated or AI-rephrased texts.

Both companies emphasized that research showing flaws in their detectors is outdated due to the rapid evolution of large language models and updates to their own detection software.

"It’s almost like the better the writer you are, the more AI thinks you’re AI."

Erin Ramirez, professor of education at Cal State Monterey Bay

AI detection probability scores are also often misread by users who fail to recognize that the detectors flag text that is likely generated by AI, rather than confirmed to have been made by a chatbot, or that text needs to be at least 300 words long for some detectors to evaluate it effectively.

Turnitin tells schools never to use its tools as the sole basis for deciding whether a student cheated, said Annie Chechitelli, the company’s chief product officer. It should instead prompt a conversation with a student about how and why AI was used, she said.

“The most important question is not so much about detection, it’s really about where’s the line,” she said.

GPTZero also has a disclaimer on its platform advising faculty not to use its detector to punish students.

Ramirez, the Cal State Monterey Bay professor, who is studying how AI can be used in K-12 settings, said anyone who relies on a detector has never put their own work through it.

“It’s almost like the better the writer you are, the more AI thinks you’re AI,” she said. “I put my own papers into AI detectors just to check because I don’t like to hold students accountable without knowing how the tool works. And it flags me at like 98% every time, and I didn’t use AI in any capacity.”

Faculty, administrators and AI detection company leaders all agreed that professors should have conversations with students after software flags their work to ensure no one is falsely accused of academic dishonesty. But to do that properly takes time — especially when many instructors have dozens or even hundreds of students each semester, said Morgan Sanchez, an assistant professor of sociology at San José State University.

“So it is creating a slight sense of tension, but more so it’s creating extra labor — uncompensated labor — that we have to do,” Sanchez said.

A booming humanizer industry

Turnitin, which has been around for a quarter-century offering tools to help educators catch plagiarism, is trying to keep up with humanizers. The company views humanizers as a “growing threat to academic integrity,” and issued a software update last August to detect text modified by the tools.

"The most important question is not so much about detection, it’s really about where’s the line."

Annie Chechitelli, Turnitin's chief product officer

The company has a list of 150 tools that charge as much as $50 for a subscription to adjust text so that it’s not flagged by an AI detector. Chechitelli referred to them as companies whose “sole goal is to really help students cheat.”

Demand has surged for the tools. Joseph Thibault, founder of Cursive, an academic integrity software company, tracked 43 humanizers that had a combined 33.9 million website visits in October.

Thibault, who also publishes a newsletter on cheating called This Isn’t Fine, said he thinks students would be better off keeping a record of their revision history in Google Docs or Microsoft Word than using a humanizer. But ultimately, he believes, the coming shift is toward more monitoring of students as they complete assignments.

“I think we have to ask students, what level of surveillance are you willing to subject yourself to so that we can actually know that you’re learning?” he said. “There is a new agreement that needs to be made.”

Students surveilling themselves

Superhuman, the company that makes Grammarly, developed a tool it calls Authorship that’s included with basic accounts. Students can turn it on to surveil themselves in Google Docs or Microsoft Word as they write, then play the session back later. It will show which sections were typed, pasted from another source or generated with AI.

“We’re going to keep track of when you are going to Wikipedia,” said Jenny Maxwell, Superhuman’s head of education. “We’re going to keep track of when Grammarly is making suggestions and you’re taking them, we’re going to keep track of how much time you’ve spent in this paper or how many sessions.”

As many as 5 million Authorship reports were created in the past year alone, she said, though most of the time they aren’t submitted.

Maxwell said the tool was inspired by a viral TikTok video from Marley Stevens, who described the havoc of what she said was a false accusation of AI use that landed her on academic probation in 2023 at the University of North Georgia. In reality, Stevens said, she’d only used Grammarly’s extension to help fix spelling and punctuation.

The university declined to comment on Stevens’ case, citing federal privacy law, but said in a statement that “faculty communicate specific guidelines regarding the use of AI for various classes and those guidelines are included in the class syllabi.”

Stevens, who graduated last month, said it was difficult to keep track of each professor’s policy on AI usage, and it became increasingly hard to write on software that didn’t have AI embedded.

“Google has AI embedded into it, Microsoft has AI embedded into it — like literally everything has AI in it,” she said. “So, in a roundabout way, there’s no way to write a paper without using AI, unless you go to the library and you check books out and use encyclopedias.”

Pressure on colleges

Some students believe universities should stop using AI detectors because of false positives. In upstate New York, an online petition calling on the University at Buffalo to drop the software received more than 1,500 signatures last year.

Kelsey Auman, who graduated last spring, started the petition after she fought to prove she did not use AI on several of her assignments. She knew enough classmates with similar experiences that they had a group chat named “Academic Felons for Life.” Auman said she started to run her papers through multiple AI detectors on her own before turning them in, hoping to avoid another dispute, but it created more anxiety when they also incorrectly flagged things she wrote as generated by a chatbot.

“So it’s like, how far do you want to go down the rabbit hole? I’m making myself crazy,” she said.

"We keep turning on what the academic institutions need to do to fix problems that they didn’t create."

Tricia Bertram Gallant, director of academic integrity at UC San Diego

The University at Buffalo said it does not have an institutionwide rule on AI use, but instructors must have evidence beyond a detector score to report a student for academic dishonesty.

Tricia Bertram Gallant, director of the academic integrity office at UC San Diego, advises faculty to recognize how herculean a task it is to ensure students don’t use AI if they aren’t completing the work in front of them.

“If it’s an unsupervised assessment, don’t bother trying to ban AI,” she said. “And don’t bother trying to prove AI was used because you end up spending more time doing that.”

But Bertram Gallant, who is also president emeritus of the International Center for Academic Integrity, wishes more people would put pressure on the government to regulate AI and the academic cheating industry, and on tech companies to make it harder for students to use their products to cheat.

“We keep turning on what the academic institutions need to do to fix problems that they didn’t create,” she said.


Comments

  • By 2pEXgD0fZ5cF 2026-01-31 20:07 (3 replies)

    I don't see the whole AI topic as a large crisis; as others have mentioned, put more emphasis on in-person tests and exams. Make it clear that homework assignments are for practice, learning, and feedback. If a person thinks that copy/pasting helps them, give them the freedom to do so, but if as a result they fail the exams and similar in-person evaluations, then so be it. Let them fail.

    I would like to hire students who actually have skills and know their material. Or even better, if AI is actually the amazing learning tool many claim then it should enhance their learning and as a result help them succeed in tests without any AI assistance. If they can't, then clearly AI was a detriment to them and their learning and they lack the ability to think critically about their own abilities.

    If everyone is supposed to use AI anyway, why should I ever prefer a candidate who is not able to do anything without AI assistance over someone who can? And if you hold the actual opinion that proper ai-independent knowledge is not required, then why should I hire a student at all instead of buying software solutions from AI companies (and maybe put a random person without a relevant degree in front of it)?

    • By apatheticonion 2026-01-31 20:23 (2 replies)

      It's a huge problem. I have several friends in university who have had assignments flagged as AI. They have had entire units failed and been forced to retake semesters, which is not cheap.

      Even if you fight it, the challenge goes into the next semester and pushes out your study timeline and associated costs.

      > put more emphasis on in-person tests and exams. Make it clear that homework assignments are for practice, learning, and feedback. If a person thinks that copy/pasting helps them

      Works for high school, not so much for university degrees. What's crazy is universities have an incentive to flag your work as AI generated as it forces the student to pay more money and is difficult to challenge.

      One friend now uses a dashcam to record themselves when writing an assignment so they can prove no AI was used when they are eventually flagged.

      • By 2pEXgD0fZ5cF 2026-01-31 20:31 (1 reply)

        Yeah, bad choice of words on my part, I apologize. I can imagine that things are pretty chaotic right now and that there are quite a few problems like the one you describe. When I said I don't see a crisis here, I meant it in a more overarching sense: I see this as solvable.

        > Works for high school, not so much for university degrees.

        I don't know about that. I can't speak for the US, but at the university where I got my degrees (Math & CS) and later worked, prerequisite in-person tests to be allowed to take a given exam were not rare. Most modules had lectures (professor), tutorials (voluntary in-person bonus exercises and tutors to ask questions) and exercise groups where solutions to mandatory exercises were discussed. In the latter, an additional part of the exam requirements was sometimes to present and explain a solution at least once or twice over the course of the semester. And some had small, mandatory bi-weekly tests as part of the requirement too.

        Obviously I can understand that this would not work equally well in each kind of academic programme.

        • By apatheticonion 2026-01-31 20:48

          > Yeah bad choice of words on my part, I apologize.

          All good!

          > I can't speak for the US

          I just had to respond to this as the implication of being American touched a nerve, haha. Australian here.

      • By Swizec 2026-01-31 22:11

        > > put more emphasis on in-person tests and exams. Make it clear that homework assignments are for practice, learning, and feedback. If a person thinks that copy/pasting helps them

        > Works for high school, not so much for university degrees. What's crazy is universities have an incentive to flag your work as AI generated as it forces the student to pay more money and is difficult to challenge.

        When I started uni (Slovenia, 2007) the rules were simple: You are adults. The final exam (written + oral) is 100% of your grade. We don’t have the time or willingness to police what you do. Strongly recommend attending classes and doing homework, but whatever, it’s your life. If you get high enough scores on the optional midterms, you can skip the written portion of the exam.

        It was pretty great. Yes we all tried to cram for exams at the last moment. No it didn’t work very well. Needing 2 or 3 tries to pass was common.

        Then later we got the Bologna system. Professors stopped bragging about fail rates. Students passing became an actual thing they were evaluated on. Homework became graded, midterms were mandatory and part of your grade, attendance was tracked, etc.

        College became like high school. More people passed but I think something was lost about teaching adulthood.

        For the record: I didn’t graduate. My freelance business got too busy and I could not keep up with both.

    • By ckcheng 2026-01-31 21:34

      > more emphasis on in-person tests and exams

      $$$

      There are a lot of interacting parts in how many places arrived where we are, where cheap ghostwriters (AI or not) can so easily undermine education. But it pretty much all comes down to costs.

    • By whattheheckheck 2026-01-31 21:51

      Go ahead and let a random person do it. Degrees were gate keeping anyway

  • By threemux 2026-01-31 19:47 (1 reply)

    In person, proctored blue book exams are back! Sharpen those pencils kids.

    I've been wondering lately if one of the good things to come out of heavy LLM use will be a return to mostly in-person interactions once nothing that happens online is trustworthy anymore.

    • By bambax 2026-01-31 19:55 (4 replies)

      Yes! This "problem" is really easy to fix with in person exams and no computers in class, ever.

      • By armchairhacker 2026-01-31 21:54

        There should be computers, just locked down ones that don’t leave the classroom. With today’s tuitions, colleges can afford a computer for every student.

        Writing code on paper is frustrating to the point where, beyond small algorithms, it’s probably not an effective metric (to test performance on real-world tasks). I think even essays may not be as good a metric for writing quality when written vs typed, although the difference is probably smaller. Because e.g. being able to insert a line in the middle of the text, or find-and-replace, are much harder. Also, some people (like me) are especially bad at handwriting: my hand hurts after writing a couple paragraphs, and my handwriting is illegible to most people. While some people are especially bad at typing, they get accommodations like an alternative keyboard or dictation, whereas the accommodation for bad handwriting is…a computer (I was fortunate to get one for exams in the 2010s).

      • By meroes 2026-01-31 19:57 (2 replies)

        This is the “back to office” of education. It is not a one size fits all solution. There are so many remote and hybrid classes now you guys sound outdated.

        • By analog31 2026-01-31 20:09 (1 reply)

          That’s fair, but at the same time, expecting any learning to occur in remote classes, when fair evaluation is impossible, may also be outdated.

          • By kbelder 2026-01-31 20:14 (2 replies)

            Learning is just as easy remote and with AI, maybe easier. It's testing and evaluation of that learning that's difficult.

            Universities make money not by teaching, but by testing and certifying. That's why AI is so disruptive in that space.

            • By analog31 2026-01-31 20:58

              Universities don’t make money.

              Granted, I’m 62, so I’m from the old world. I attended college, and taught a couple of college classes, before the AI revolution. There was definitely a connection between learning and evaluation for most students. In fact most students preferred more evaluation, not less, such as graded quizzes and homeworks rather than just one great big exam at the end. Among other things, the deadlines and feedback helped them budget their efforts. Also, the exercise of getting something right and hitting a deadline is not an overt purpose of education, but has a certain pragmatic value.

              Again, showing my age, in the pre-AI era, the technology of choice was cheating. But there were vanishingly few students who used cheating to circumvent the evaluations while actually learning anything from their courses.

              If teaching and certifying could be separated, they would be. In fact, it has happened to some extent for computer programming, hence the “coding interview” and so forth. But computer programming is also an unusual occupation in that it’s easy to be self taught, and questionable whether it needs to be taught at the college level.

        • By bambax 2026-01-31 23:58 (1 reply)

          You don't need uni to watch youtube; you can do that on your own, for free. "Remote classes" are obviously a scam.

          • By meroes 2026-02-01 17:10

            I don’t think there’s a way to claim remote classes are a scam without saying college as a whole is a scam with your logic. So why single out remote classes?

      • By seanmcdirmid 2026-01-31 20:14 (1 reply)

        Until they need to start learning how to use them to get a job in the modern world?

        There should be a class that teaches you how to use AI to get things done, especially judging by how many even on HN admit they aren’t good at it.

        • By Ekaros 2026-01-31 21:39 (1 reply)

          Is there even a point until the field properly stabilises? Even with more fundamental stuff there are complaints that material is outdated. And even AI proponents seem to say that things are still evolving and you need to do things in a new way regularly.

          • By seanmcdirmid 2026-01-31 22:35

            If the tech is already good enough to cheat with? Ya, I think the kids are ready to learn it, even if just keeps improving in the coming years. It also helps you reflect on the process of doing something when you instruct someone else to do it for you. Writing a good essay and getting AI to write a good essay for you are both useful things to do as students.

      • By idle_zealot 2026-01-31 19:57

        But is that webscale?

  • By OsamaJaber 2026-01-31 19:07 (2 replies)

    AI detectors punishing non native English speakers for writing too cleanly is the part nobody talks about enough -_-

    • By Rexxar 2026-01-31 20:01 (1 reply)

      For example, native English speakers often make phonetic spelling errors (such as its/it’s, your/you’re) that non-native English speakers usually avoid. It’s probably a sign that someone speaks more fluently when they start making these types of mistakes from time to time.

      • By Tade0 2026-01-31 20:23

        Or picked up English before they learned to read and write properly.

        I'm cursed with this as I was put in an international environment right before turning five, went back to my home country to start grade school and only in fifth grade started having English classes.


HackerNews