Evaluating 35 open-weight models across three context lengths (32K, 128K, 200K), four temperatures, and three hardware platforms—consuming 172 billion tokens across more than 4,000 runs—we find that the answer is “substantially, and unavoidably.” Even under optimal conditions—best model, with the temperature chosen specifically to minimize fabrication—the floor is non-zero and rises steeply with context length. At 32K, the best model (GLM 4.5) fabricates 1.19% of answers, top-tier models fabricate 5–7%, and the median model fabricates roughly 25%.


As I mentioned elsewhere (below), I am currently conducting similar testing across four different 4B models (Qwen3-4B Hivemind, Qwen3-4B-2507-Instruct, Phi-4-mini, Granite-4-3B-micro), using both grounded and ungrounded conditions. I'm aiming for 10,000 runs and am currently at 3,500.
Not to count chickens before they hatch, but at ctx 8192, hallucination flags in the grounded condition are trending toward near-zero across the models tested so far. If that holds across the full campaign, that's useful to know. If it doesn't hold, that's also useful to know.
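For anyone curious what "trending toward near-zero" means concretely: the rate I'm tracking is just flagged runs over total runs, bucketed by (model, condition). A minimal sketch of that bookkeeping, with illustrative record and field names (not the actual harness):

```python
from collections import defaultdict

# Hypothetical run records: (model, condition, hallucination_flag).
# Names and values here are illustrative only.
runs = [
    ("Qwen3-4B-2507-Instruct", "grounded", False),
    ("Qwen3-4B-2507-Instruct", "ungrounded", True),
    ("Phi-4-mini", "grounded", False),
    ("Phi-4-mini", "grounded", True),
]

def flag_rates(runs):
    """Fraction of runs flagged as hallucination, per (model, condition)."""
    totals = defaultdict(int)
    flagged = defaultdict(int)
    for model, condition, flag in runs:
        key = (model, condition)
        totals[key] += 1
        if flag:
            flagged[key] += 1
    return {key: flagged[key] / totals[key] for key in totals}

rates = flag_rates(runs)
print(rates[("Phi-4-mini", "grounded")])  # 0.5 on this toy data
```

Nothing clever, but it keeps the grounded and ungrounded buckets strictly separate, which is the whole point of the comparison.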
I have an idea for how to make the grounded condition even more useful. Again, chickens not hatched, etc. I'll share what I find here if there's interest. I'm intending to submit the whole shooting match for peer review (TMLR or JMLR) and put it on arXiv for others to poke at.
I realize this is peak "fine, I'll do it myself" energy, but I got sick of ChatGPT's bullshit and wanted to try something to ameliorate it.