This week, GovTech published a piece by Roundtable Cofounder and CEO Madeleine Smith explaining why AI for government agencies requires contextual data to be truly useful. Below is an excerpt.
When we read and talk about AI colloquially, we are often referring to “foundational” generative AI models like OpenAI’s ChatGPT, Anthropic’s Claude, Meta’s Llama or Google’s Gemini. These are massive AI models pretrained on enormous data sets — oftentimes, the totality of digital information on the Internet, seen many times over during training — that can be used for a variety of tasks through natural language interactions.
In other words, you can chat with them and the AI executes tasks. But if you’ve spent time on the Internet, you know it’s rife with incorrect information. The problem with using AI models trained on the whole Internet is that sometimes they make mistakes or make up information — what AI providers call “hallucinations.”
Google’s Gemini 2.5 Pro hallucinates 1.1 percent of the time, whereas xAI’s Grok hallucinates 2.1 percent of the time. Other models hallucinate more often: Mistral’s Large 2 has a hallucination rate of 4.1 percent, and DeepSeek’s R1 hallucinates 7.7 percent of the time.
Read the full article on GovTech here: https://www.govtech.com/voices/the-case-for-small-ai-models-in-government-agencies