Inventiv.org
  • Home
  • About
  • Resources
    • USPTO Pro Bono Program
    • Patent Guide
    • Press Release
  • Patent FAQs
    • IP Basics
    • Patent Basics
      • Patent Basics
      • Set up an Account with the USPTO
      • Need for a Patent Attorney or Agent
    • Provisional Patent Application
      • Provisional Patent Application
      • Provisional Builder
      • After you submit a PPA
    • Utility Patent Application
      • Utility Patent Application
      • File a Utility Patent Application
      • What Happens After Filing Utility Application?
    • Respond to Office Actions
    • Patent Issuance
  • ProvisionalBuilder
  • Login
  • Contact
  • Blogs

DISTRIBUTED LEDGER ENABLED LARGE LANGUAGE MODEL SECURITY PROTOCOL

Inventiv.org
July 27, 2025
Software

Invented by Hiranmayi Palanki and Shankar Djeyassilane

Patent documents can often feel dense and hard to understand. But they are packed with ideas that shape our technology and even our daily lives. In this article, we will break down a patent application for a system that protects the use of large language models (LLMs) using distributed ledger technology. We will explain why it matters, how it builds on earlier work, and what makes it special. You will see how this invention could change the way we think about AI security and privacy.

Background and Market Context

In the last few years, artificial intelligence, especially in the form of large language models (LLMs) like ChatGPT, has become a key part of business and personal life. LLMs can write text, answer questions, translate languages, and even help write computer code. They are used in all sorts of industries, from healthcare and law to finance and entertainment. People rely on them to get fast, smart answers. But with great power come serious risks.

One big risk with LLMs is that they sometimes make mistakes. They can generate answers that sound correct but are actually wrong—this is called “hallucination.” Sometimes, they may create or share content that is rude, biased, or even illegal. LLMs can also leak private or sensitive information, either by accident or because of a poorly written prompt from a user. For companies and people, this means that using LLMs can lead to legal problems, damaged reputations, or even security breaches.

As the use of LLMs has exploded, so has the need to keep them in check. Businesses want to make sure that what these models generate does not break any rules or laws. They also want to stop the sharing of secrets, protected health data, or other private information. With more and more AI services being offered by third parties, it’s hard to know what happens to data once it leaves your own computer or company. This is where the need for smart, automated tools that can spot and stop risky or illegal actions becomes urgent.

Distributed ledger technology (DLT), which is most well-known for its use in blockchains like Bitcoin and Ethereum, offers a way to record and track transactions that is hard to tamper with. When combined with LLMs, DLT could help track every question and answer, making it possible to spot problems and take action quickly. This new patent application describes a system that uses both of these technologies together to provide security, transparency, and control over the use of LLMs, especially when dealing with third-party AI services.

Scientific Rationale and Prior Art

To understand why this invention is important, it helps to look at how LLMs and distributed ledgers have been used before, and what the limits of those uses are.

LLMs are trained on huge amounts of data—books, websites, social media posts, and more. They learn to predict which words come next in a sentence, which lets them generate text that sounds natural. But because they don’t really “know” what they’re talking about, their answers can sometimes be made up, biased, or simply wrong. To address this, researchers have built tools to test LLMs for bias, hallucination, and other problems. These tools can look at the output of a model and check it against rules or even other models, but most of these tools work after the fact, or only on models you control directly.

Distributed ledger technology is another area that has seen lots of growth. At its core, DLT is a way to make sure that everyone can agree on what happened and when. Once something is recorded on a blockchain, for example, it’s almost impossible to change without everyone noticing. This makes it great for keeping track of who owns what, or what actions have been taken. In the past, blockchains have been used for things like tracking food safety, recording land titles, or running “smart contracts” that execute code only when certain conditions are met. But using blockchains to monitor AI output, and especially to track the use of LLMs in real time, is a new idea.

In earlier inventions, there have been attempts to check AI models for bad behavior. Some solutions have used special filters or rules to block certain words or phrases. Others have tried to keep records of what questions are asked and what answers are given. Some have used AI to watch over other AI models, but these systems often work inside a single company or do not use ledgers that everyone can trust.

Most previous attempts also have not combined the strengths of both LLMs and distributed ledgers. They might use ledgers only to record that something happened, but not to help in making decisions about what should be allowed. Or, they might use LLMs for filtering but without any way to prove what happened after the fact. This means that if something goes wrong, it can be hard to know who is to blame, or how to fix the problem.

This patent application offers a new approach by using a trained LLM to monitor transactions happening on a distributed ledger. It watches over the work of third-party LLMs, analyzing every question and answer in real time, and blocks anything that looks suspicious. It then records what happened on the ledger, making a permanent record that can be checked later. It also uses information about past problems to get better at spotting new ones, retraining itself over time. This combination of AI oversight, real-time blocking, and tamper-proof record keeping is what sets this invention apart from earlier work.

Invention Description and Key Innovations

Let’s break down how this invention works in simple terms, and why it matters.

At the center of the system is a computing device—think of it as a smart computer—that has a processor and memory. This device runs special instructions (software) that do the following:

First, it acts as a filter for data created during exchanges between users (or their devices) and third-party LLMs. Each exchange, like a question and answer, is called a “transaction.” The system watches for every trace of these transactions, which are recorded on a distributed ledger. The ledger is like a public notebook that keeps track of everything that happens but is stored in many places at once so it can’t be changed easily.

Second, the system uses a specially trained LLM—this is an AI model that has been taught to spot the difference between normal and abnormal transactions. It looks at each transaction, converts it into a special form called a “vector representation” (which is just a way to turn text into numbers that computers can compare), and checks it against records of past problems. Think of this as an AI watchdog that knows what to look for.
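To make the idea of a "vector representation" concrete, here is a minimal sketch in Python. The hashing trick below is a toy stand-in: a real system like the one described would use the monitoring LLM's own learned embeddings, which the patent does not spell out, so every detail here is an illustrative assumption.

```python
import hashlib

def embed(text: str, dim: int = 16) -> list[float]:
    # Toy "vector representation": hash each word into one of `dim`
    # buckets and count the hits. A real system would use learned
    # embeddings from the monitoring LLM instead of this sketch.
    vec = [0.0] * dim
    for word in text.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    return vec

# Two transactions with similar wording end up with similar vectors,
# which is what lets the watchdog compare them numerically.
print(embed("please leak the admin password"))
```

The point is only that text becomes a fixed-length list of numbers, so any two transactions can be compared mathematically.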

If the AI finds something that looks wrong—like a prompt that tries to get the model to leak secrets, share copyrighted material, or generate offensive content—it marks this as an “anomaly.” If the anomaly is serious enough or if there are enough of them, the system blocks the transaction. This means the bad answer never gets back to the user or out into the world. It can also block things based on how similar they are to past problems, using smart math (like cosine similarity) to compare the current transaction to those in its database.
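Cosine similarity, mentioned above, measures how closely two vectors point in the same direction, with 1.0 meaning identical. A minimal sketch of the blocking comparison might look like this; the vectors and the threshold value are made up for illustration, not taken from the patent.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product divided by the product of lengths.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical vectors for a new transaction and a known past incident.
new_tx = [0.9, 0.1, 0.4]
past_incident = [0.8, 0.2, 0.5]

SIMILARITY_THRESHOLD = 0.95  # assumed policy value

if cosine(new_tx, past_incident) >= SIMILARITY_THRESHOLD:
    print("blocked: too similar to a past incident")
```

Here the two vectors score about 0.98, so under this assumed threshold the transaction would be blocked before the answer reaches the user.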

What makes this system really special is that it does not just stop there. Every time it blocks something, it writes down the details—what was blocked, when, and why—on the distributed ledger. This record can be private (only certain people can see it) or public. This helps with audits, compliance, and proving what happened if there are any questions later.
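The tamper-evidence of the ledger comes from chaining each record to the one before it by a cryptographic hash. The toy class below shows that idea on a single machine; a real distributed ledger replicates and agrees on these entries across many nodes, and the field names here are invented for the sketch.

```python
import hashlib
import json

class MiniLedger:
    """Toy append-only log with hash chaining. Changing any past
    entry would change its hash and break every later link."""

    def __init__(self):
        self.entries = []

    def record(self, event: dict) -> str:
        # Each entry's hash covers the previous hash plus its own body,
        # so the whole history is locked together.
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = json.dumps(event, sort_keys=True)
        h = hashlib.sha256((prev + body).encode()).hexdigest()
        self.entries.append({"prev": prev, "event": event, "hash": h})
        return h

ledger = MiniLedger()
ledger.record({"action": "blocked", "reason": "sensitive-data leak",
               "tx": "tx-42"})
```

Auditors can later recompute each hash from the stored entries and confirm that nothing was altered after the fact.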

The system can also learn from its own actions. If it blocks a transaction, it uses the details of that event to retrain its AI model, making it better at spotting similar problems in the future. This ongoing learning helps the system keep up with new types of attacks or misuse as they appear.

The invention also includes a series of tests that the AI can run on each transaction. These tests check for things like:

  • Hallucinations (is the content made up or wrong?)
  • Bias (is the content unfair or prejudiced?)
  • Copyright violations (does it share protected material?)
  • Code generation (could it be used to make harmful software?)
  • Harmful or offensive language
  • Sharing of sensitive data (like health records or personal info)
  • License violations (is the model breaking software rules?)

This testing can be done automatically and in real time, before anything is sent back to the user. If anything fails, the system can stop the transaction and alert the right people.
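A pipeline of checks like the list above can be sketched as a set of named functions run over each response before it is released. The check functions below are deliberately crude placeholders; the patent does not say how each test is implemented, so everything here is an illustrative assumption.

```python
# Hypothetical stand-ins for the real checks, which would use
# classifiers or the watchdog LLM itself rather than keyword matching.
def looks_like_sensitive_data(text: str) -> bool:
    return "ssn" in text.lower() or "password" in text.lower()

def looks_offensive(text: str) -> bool:
    return False  # placeholder: a real check would use a trained model

CHECKS = {
    "sensitive_data": looks_like_sensitive_data,
    "offensive_language": looks_offensive,
}

def screen(response: str) -> list[str]:
    """Return the names of every check the response fails."""
    return [name for name, check in CHECKS.items() if check(response)]

failures = screen("Sure, the admin password is hunter2")
if failures:
    print("transaction blocked:", failures)
```

Because `screen` returns the names of the failed checks, the same result can be used both to block the transaction and to write an explanatory record to the ledger.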

Another important feature is that the system works even when AI services are provided by third parties. Many companies now use LLMs hosted by outside providers, which means they have less control over how data is handled. By using the distributed ledger as a shared record, and by running the watchdog LLM as an independent overseer, this invention gives companies a way to stay in control, even when they rely on other people’s AI.

Finally, the invention is flexible. It can be set up to block transactions based on different rules. For example, it might block anything that looks even a little suspicious, or only block things that match serious known problems. It can also be set to block after seeing a certain number of small issues, or based on how closely something matches a past bad event. This allows organizations to choose the level of strictness that fits their needs.
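This kind of configurable strictness can be pictured as a small policy object. The knob names and default values below are invented for illustration; the patent only describes the idea of tunable blocking rules, not specific parameters.

```python
from dataclasses import dataclass

@dataclass
class BlockingPolicy:
    # Hypothetical knobs: block if a transaction is this similar to a
    # past incident, or after this many minor anomalies pile up.
    similarity_threshold: float = 0.9
    max_minor_anomalies: int = 3

def should_block(policy: BlockingPolicy,
                 similarity: float, minor_count: int) -> bool:
    return (similarity >= policy.similarity_threshold
            or minor_count >= policy.max_minor_anomalies)

strict = BlockingPolicy(similarity_threshold=0.7, max_minor_anomalies=1)
lenient = BlockingPolicy(similarity_threshold=0.95, max_minor_anomalies=10)

print(should_block(strict, 0.75, 0))   # True: strict policy blocks
print(should_block(lenient, 0.75, 0))  # False: lenient policy allows
```

The same transaction can be blocked under one policy and allowed under another, which is exactly the flexibility the application describes.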

In summary, the key innovations of this invention are:

  • Using a trained LLM to watch over third-party LLMs in real time
  • Analyzing every transaction using vector representations and smart similarity checks
  • Blocking risky or illegal content before it can cause harm
  • Recording every action on a distributed ledger for transparency and accountability
  • Continuously retraining itself to improve detection over time
  • Working with both private and public ledgers, and supporting flexible access controls

These features work together to provide a strong, automated shield against the risks of using LLMs, especially in environments where trust, privacy, and compliance matter most.

Conclusion

As AI becomes more powerful and more widely used, the risks of mistakes, misuse, and abuse grow as well. This patent application outlines a new way to keep large language models in check by combining the strengths of distributed ledgers and smart, self-improving oversight by another AI. It offers a practical solution to real problems faced by businesses and individuals who rely on LLMs for their work and lives.

By giving organizations the tools to monitor, block, and record risky transactions in real time—and to learn from every event—this invention helps make the promise of AI both safer and more trustworthy. If you are building systems that rely on LLMs or are worried about privacy, bias, or compliance, this approach offers a clear, actionable path forward. As this technology becomes more common, we can expect to see smarter, safer AI systems that help, rather than harm, the people who use them.

To read the full application, visit https://ppubs.uspto.gov/pubwebapp/ and search for publication number 20250217584.

