Stay away from the brown acid
For work, I needed to find the earliest printed reference to a certain building in town. Naturally, I asked Grok. It gave me an absolutely perfect quote from a well-known early local history book. Alas, it was a complete hallucination.
I know this because I have a digital version of the book and had already searched it for all the right keywords.
I told Uncle B and he said, “You should tell Grok it’s lying.” So I did. It apologized and gave me a genuine quote – one I had already found for myself – that wasn’t so helpful.
It lied to please me.
Computer scientists really do call it hallucination when AI pulls information out of its butt. It’s not only a problem; it’s a growing problem as the models get more complex.
Recent studies and reports indicate that AI hallucinations are becoming more prevalent, especially in advanced reasoning models. OpenAI’s o3 and o4-mini models, which are designed to reason step-by-step, have been found to hallucinate at rates between 30% and 79% of the time. Similarly, other reasoning models from companies like DeepSeek and IBM have also shown higher hallucination rates compared to their earlier versions.
Why, yes, I did get that answer from AI (Leo, the one built into Brave, in this instance).
This is why lawyers who use AI keep getting into trouble – AI fabricates cases. According to this tweet, the Chicago Sun-Times published a summer reading list that was obviously AI-generated – eight of the books on the list don’t exist.
These examples are all well and good, but what happens when AI is analyzing X-rays?
May 21, 2025 — 4:02 pm