This is getting complicated

I had coffee with a college teacher this morning. She had three dissertations to read today. I asked if she was having problems with AI and she rolled her eyes.
She said it was easy to spot the ones out of ChatGPT: everything is bullet lists (you certainly learn to recognize its writing style, too). She asks students if they’ve used AI and if they ‘fess up, she tells them she doesn’t mind – but they have to take the AI text and put it back together, in paragraphs, in their own words, so she can see they comprehend the material.
She said the usual result is, they’ve mixed their words with AI words, resulting in a clunky stylistic mess.
The worst, though, are those who simply will not admit they used AI, when she can see perfectly well they’re lying. That leaves her with a dilemma: she is supposed to judge their character as well as their grasp of the material. Now what?
Posted: April 27th, 2026 under personal.
Comments: 2
Comments
Comment from Mark Matis
Time: April 27, 2026, 7:15 pm
Are those last Preferred Species?
If so, she had better give them top marks if she wants to remain employed and not in jail!!!
Comment from Some Vegetable
Time: April 27, 2026, 7:27 pm
That account highlights a real issue, but it’s less about “AI writing style” and more about how students are using the tool.
If a student copies output verbatim, the problem isn’t just academic integrity—it’s that they’ve skipped the thinking step. When they try to retrofit understanding afterward, the result is exactly what your colleague described: uneven voice, shallow grasp, and structural confusion.
Her approach—requiring students to restate ideas in their own words—is actually sound. It shifts the focus from production to comprehension. AI can generate text, but it cannot demonstrate understanding on behalf of a student.
The harder question is the one she’s facing: what to do when a student denies obvious AI use. That’s no longer just about writing quality; it becomes a question of honesty. In practice, though, proving intent is difficult and often unproductive.
A more durable approach may be to design assignments where understanding is visible and difficult to outsource: in-class writing, iterative drafts, oral defenses, or requiring students to explain and adapt their arguments under questioning. These make it less about catching misuse and more about making genuine engagement unavoidable.
AI isn’t going away. The task isn’t to detect it perfectly—it’s to structure learning so that using it superficially doesn’t work.