Boost Chatbot Reliability: New System Eliminates AI Hallucinations for Accurate Business Answers

Invented by Michael Wood, Acurai, Inc.
AI models like ChatGPT are smart, but they sometimes make things up. This is called “hallucination.” The patent application above introduces a new way to stop AI from hallucinating by changing how information is stored, processed, and sent to large language models (LLMs) like GPT-4. In this article, we’ll break down what makes this invention important, how it builds on what came before, and how it works, in plain terms.

Background and Market Context
Ever since computers started “talking back” to humans, there’s been a dream of a chatbot that always gives a correct answer. In the early 1960s, engineers built programs that could answer simple questions about baseball. By the mid-60s, chatbots like ELIZA could hold basic conversations by matching patterns and swapping words. But these systems were not truly smart—they just followed scripts.
Fast forward to today, and we have tools like OpenAI’s GPT models. They can answer almost any question, write essays, summarize articles, and even help with coding. But there’s a catch. These AI models still make mistakes. Studies show that even the best models, like GPT-4.5, make things up about one-third of the time on simple questions. On harder questions, the error rate is even higher.
Why does this matter? AI is now used in places where the right answer really matters—like medicine, law, science, and finance. If a chatbot makes up a law that doesn’t exist, gives the wrong medication advice, or invents a fake news headline, the results can be harmful. Businesses and researchers want AI systems they can trust. But right now, even the most advanced AI can’t always be trusted to tell the truth.
This patent tackles that problem head-on. It’s not just about making AI a little better—it aims to make AI responses 100% accurate, especially where it counts.
Scientific Rationale and Prior Art
To understand what’s new about this invention, let’s look at how AI has been made so far and where the problems come from.
Modern AI models are trained on huge amounts of text from the internet. They try to learn patterns so they can predict the next word, sentence, or answer to a question. This works well for many cases, but there’s a hidden flaw—these models can mix up similar-sounding words or phrases, especially names and facts. For example, they might confuse “Alfonso” with “Afonso,” or mix up “magnesium” and “calcium.” This mixing up is called a “noun-phrase collision.”
Another issue is how AI is trained to split sentences and extract facts. Old training methods use big datasets with lots of examples, but they let the AI pick from many “right” answers. This sounds good, but it means the model can “choose” any of those answers later, even when only one is correct. This leads to errors.

Some researchers have tried to fix hallucinations by adding logic rules or extra checks, like in the LP-LM system. But these systems are either too strict (they can’t handle real-world language) or too loose (they start hallucinating again). Other companies, like Microsoft, have tried “groundedness” APIs to check if AI’s answers match what’s in reliable sources. But studies show these checks miss many mistakes.
In the real world, companies like Apple stopped summarizing news articles with AI because too many mistakes slipped through. OpenAI and others tried fine-tuning models to fix specific errors, but this just caused the AI to “forget” other facts—a problem called catastrophic forgetting.
The big problem is that language is messy. AI models represent words and phrases as lists of numbers (token embeddings), and similar words end up with similar numbers. So when the AI tries to answer a question, it sometimes grabs the wrong fact because the numbers for two similar names or terms sit close together. This is why even a super-smart AI can make silly mistakes.
This patent identifies that noun-phrase collisions are the root cause of most AI hallucinations. It also recognizes that trying to fix every mistake by training the AI more doesn’t work, because it just creates new mistakes elsewhere.
Invention Description and Key Innovations
The invention introduces a whole new way to make AI responses accurate. Here are the main ideas, explained simply:
1. Bounded-Scope Determinism (BSD):
Instead of letting the AI pick from many “right” answers, BSD says there should be only one correct answer for each question or transformation. The system teaches the AI to always do things the same way, using fixed rules. This makes the AI’s output predictable and removes confusion. For example, when splitting a complex sentence, the AI always breaks it the same way, using the full noun phrase if needed.
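To picture what bounded-scope determinism means in practice, here is a minimal Python sketch of one fixed splitting rule. The function name and the rule itself are illustrative, not the patent’s actual implementation; the point is only that the same sentence always splits the same way, with the full noun phrase restored in each clause.

```python
# A minimal sketch of a fixed splitting rule: the same sentence always splits
# the same way, with the full subject noun phrase restored in each clause.
# The function and the rule are illustrative, not the patent's implementation.

def split_compound_sentence(sentence: str, subject: str) -> list[str]:
    clauses = sentence.rstrip(".").split(" and ")
    result = []
    for clause in clauses:
        clause = clause.strip()
        # Deterministic rule: if a clause dropped the subject, restore the
        # full noun phrase so the clause stands on its own.
        if not clause.startswith(subject):
            clause = f"{subject} {clause}"
        result.append(clause + ".")
    return result

print(split_compound_sentence(
    "Marie Curie won the Nobel Prize in 1903 and shared it with her husband.",
    subject="Marie Curie"))
# ['Marie Curie won the Nobel Prize in 1903.',
#  'Marie Curie shared it with her husband.']
```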
2. Formatted Facts (FFs):
AI works best with simple, self-contained statements. The invention uses a pipeline to turn complicated text into simple sentences called “formatted facts.” For example, “He married her” becomes “Tom Cruise married Katie Holmes.” Each formatted fact stands alone and is easy for AI to understand and work with.

This process includes:
- Breaking text into simple sentences (sentence splitting)
- Replacing pronouns with the actual names (coreference resolution)
- Turning relative dates (“in three days”) into absolute dates (“on Feb. 5, 2020”)
- Changing first-person to third-person (“I went” becomes “Michael went”)
These steps make sure every fact is clear and free of ambiguity.
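Here is a rough Python sketch of what such a pipeline could look like. It assumes the pronoun referents and the reference date have already been identified (real coreference resolution requires an NLP model); only the shape of the steps listed above is illustrated.

```python
import re
from datetime import date, timedelta

# Illustrative sketch of a formatted-facts pipeline, assuming pre-identified
# entities and a known reference date. Real systems would use NLP models for
# these resolutions; only the shape of the steps is shown here.

def resolve_pronouns(sentence: str, referents: dict[str, str]) -> str:
    """Replace each pronoun with the name it refers to, e.g. {'He': 'Tom Cruise'}."""
    for pronoun, name in referents.items():
        sentence = re.sub(rf"\b{re.escape(pronoun)}\b", name, sentence)
    return sentence

def resolve_relative_date(sentence: str, reference: date) -> str:
    """Turn the phrase 'in three days' into an absolute date based on `reference`."""
    target = reference + timedelta(days=3)
    absolute = f"on {target.strftime('%b.')} {target.day}, {target.year}"
    return sentence.replace("in three days", absolute)

def to_third_person(sentence: str, speaker: str) -> str:
    """Rewrite first person as third person, e.g. 'I went' -> 'Michael went'."""
    return re.sub(r"\bI\b", speaker, sentence)

print(resolve_pronouns("He married her", {"He": "Tom Cruise", "her": "Katie Holmes"}))
# Tom Cruise married Katie Holmes
print(resolve_relative_date("The launch happens in three days", reference=date(2020, 2, 2)))
# The launch happens on Feb. 5, 2020
print(to_third_person("I went to the store", "Michael"))
# Michael went to the store
```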
3. Model Correction Interfaces (MCIs):
Even with all these steps, some AI processes are still a little random. So the invention introduces Model Correction Interfaces—special tools that check and fix AI’s answers before they go to the user. The most important one is the Formatted Facts MCI (FF-MCI), which compares the AI’s output to the formatted facts and swaps in the correct facts if needed. This means even if the AI makes a mistake, the system catches it and fixes it automatically.
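A minimal sketch of that correction step might look like the following. Matching each drafted sentence to its closest stored fact by string similarity is an assumption made for this example; the patent describes the checking-and-swapping principle rather than this particular scoring.

```python
from difflib import SequenceMatcher

# Minimal sketch of the correction idea behind an FF-MCI: match each sentence
# the model wrote to its closest stored formatted fact, and substitute the
# trusted fact when the two nearly agree but differ in detail. String
# similarity is used only for illustration.

def correct_against_facts(draft_sentences: list[str],
                          formatted_facts: list[str],
                          threshold: float = 0.6) -> list[str]:
    corrected = []
    for sentence in draft_sentences:
        best = max(formatted_facts,
                   key=lambda fact: SequenceMatcher(None, sentence, fact).ratio())
        score = SequenceMatcher(None, sentence, best).ratio()
        # Close paraphrase of a stored fact -> swap in the fact itself,
        # so any altered name, date, or number is corrected.
        corrected.append(best if score >= threshold else sentence)
    return corrected

facts = ["Marie Curie won the Nobel Prize in Chemistry in 1911."]
draft = ["Marie Curie won the Nobel Prize in Chemistry in 1913."]
print(correct_against_facts(draft, facts))
# ['Marie Curie won the Nobel Prize in Chemistry in 1911.']
```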

4. Avoid, Bypass, Correct (ABCs) of Hallucinations:
The patent says there are only three ways to stop AI from hallucinating:
- Avoid noun-phrase collisions by splitting up questions and making sure similar words don’t get mixed up.
- Bypass collisions by turning the question into an “extractive” task—just pulling out facts, not making up new sentences.
- Correct mistakes by checking the AI’s answers against a list of trusted facts and fixing any errors before showing them to the user.
This “ABC” framework is simple but powerful. If you use these three steps, you can get rid of AI hallucinations for good.
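One way the three strategies might fit together is sketched below. Every helper is passed in as a callable because the patent describes the strategies rather than one fixed implementation; all of the parameter names here are illustrative.

```python
from typing import Callable, Optional

# Rough sketch of an Avoid / Bypass / Correct decision flow. The helpers are
# supplied by the caller; nothing here is the patent's actual interface.

def answer_with_abc(
    question: str,
    facts: list[str],
    has_collision: Callable[[str], bool],
    split_question: Callable[[str], list[str]],
    extract_answer: Callable[[str, list[str]], Optional[str]],
    generate_and_correct: Callable[[str, list[str]], str],
) -> str:
    # Avoid: if similar noun phrases could collide, split the question and
    # answer each part separately.
    if has_collision(question):
        parts = split_question(question)
        return " ".join(
            answer_with_abc(part, facts, has_collision, split_question,
                            extract_answer, generate_and_correct)
            for part in parts
        )
    # Bypass: prefer pulling the answer straight out of the stored facts.
    extracted = extract_answer(question, facts)
    if extracted is not None:
        return extracted
    # Correct: fall back to generation, then check the draft against the facts.
    return generate_and_correct(question, facts)
```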
5. Intelligent Storage and Retrieval (ISAR):
The system uses a new way to store and fetch information. Instead of saving big chunks of text, it saves the simple, formatted facts. When a user asks a question, the system uses smart filters (like counting how many people or locations are mentioned) to find exactly the right facts. It can search using keywords, synonyms, and related words, and it can even check whether the answer refers to the right time or place. This is much more precise than old-fashioned keyword or vector searches.
For example, if you ask “Who did Tom Cruise marry in 2007?” the system will only look at facts mentioning Tom Cruise, another person, and the right date range. It won’t send irrelevant text to the AI, so there’s much less chance for mistakes.
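A simplified sketch of that kind of metadata filtering is shown below. The field names and the specific filter rules are assumptions made for this example; the patent describes the filtering idea at a higher level.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Illustrative sketch of storing formatted facts with metadata and filtering
# on that metadata at retrieval time. Field names and rules are assumptions.

@dataclass
class FormattedFact:
    text: str
    people: list[str] = field(default_factory=list)
    locations: list[str] = field(default_factory=list)
    fact_date: Optional[date] = None

def retrieve(facts: list[FormattedFact], required_person: str,
             min_people: int = 2,
             start: Optional[date] = None,
             end: Optional[date] = None) -> list[FormattedFact]:
    """Keep only facts that mention the required person, at least `min_people`
    people overall, and (if a range is given) fall inside the date window."""
    hits = []
    for fact in facts:
        if required_person not in fact.people:
            continue
        if len(fact.people) < min_people:
            continue
        if start and end and not (fact.fact_date and start <= fact.fact_date <= end):
            continue
        hits.append(fact)
    return hits

store = [
    FormattedFact("Tom Cruise married Katie Holmes on Nov. 18, 2006.",
                  people=["Tom Cruise", "Katie Holmes"], fact_date=date(2006, 11, 18)),
    FormattedFact("Tom Cruise starred in Top Gun.", people=["Tom Cruise"]),
]
# Only the marriage fact mentions two people and falls inside the date window.
for fact in retrieve(store, "Tom Cruise", start=date(2005, 1, 1), end=date(2008, 12, 31)):
    print(fact.text)
```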
6. Everything Tied Together:
The patent doesn’t just fix one piece of the puzzle—it connects all the steps into a pipeline:
User asks a question → System checks for confusing phrases or similar names → Breaks down the question if needed → Finds only the most relevant, simple facts → Extracts those facts → If the AI writes a new answer, the system checks it against the facts and swaps in the correct ones → Sends the right answer to the user.
This pipeline works for question-answering, summarizing long documents, giving step-by-step instructions, and more. If the knowledge base is built this way, even a small, cheap AI model can give perfect answers.
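To make the wiring concrete, here is a compressed, runnable sketch of the flow with each stage reduced to a placeholder. A real system would plug in components like the ones sketched in the earlier examples.

```python
# Compressed sketch of the end-to-end flow. Each stage is a placeholder so
# the wiring is visible; none of this is the patent's actual implementation.

def answer_question(question: str, fact_store: list[str]) -> str:
    # 1. Check the question for confusing or similar noun phrases; here we
    #    assume no collision, so the question is not split.
    sub_questions = [question]
    answers = []
    for sub_q in sub_questions:
        # 2. Retrieve only the formatted facts that share words with the question.
        keywords = {w.strip("?.,").lower() for w in sub_q.split()}
        relevant = [f for f in fact_store
                    if keywords & {w.strip("?.,").lower() for w in f.split()}]
        # 3. Bypass generation when a single fact answers the question directly.
        if len(relevant) == 1:
            answers.append(relevant[0])
            continue
        # 4. Otherwise a model would draft an answer and the FF-MCI would
        #    correct it; here we simply fall back to the retrieved facts.
        answers.append(" ".join(relevant) if relevant else "No matching fact found.")
    return " ".join(answers)

store = ["Tom Cruise married Katie Holmes on Nov. 18, 2006.",
         "Marie Curie won the Nobel Prize in Chemistry in 1911."]
print(answer_question("Who did Tom Cruise marry?", store))
# Tom Cruise married Katie Holmes on Nov. 18, 2006.
```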
7. Works for Many Languages and Data Types:
The system isn’t just for English. Any language that uses noun phrases can use these methods. It can also handle tables, lists, and other data, not just plain text.
8. Real-World Testing Shows It Works:
The patent includes test results showing that this method removed 100% of hallucinations in standard datasets for both GPT-4 and GPT-3.5 Turbo. Even when summarizing hundreds of news articles, the system produced accurate summaries every time.
Conclusion
This patent introduces a complete solution for stopping AI hallucinations. By breaking down text into simple, clear facts and using strict, predictable rules, the system removes the root causes of AI mistakes. It uses smart tools to catch errors before they reach the user, and its new storage and search methods make sure AI only sees the facts it needs. The ABCs (Avoid, Bypass, Correct) framework gives a clear, actionable path for any company or developer who wants to build trustworthy AI.
In a world where businesses and people expect AI to tell the truth every time, this invention offers the missing piece. If you want AI that never makes things up and always gives the right answer, this patent shows the way.
To read the full patent application, visit https://ppubs.uspto.gov/pubwebapp/ and search for publication number 20250335709.


