
AI Tool Accurately Differentiates Brain Tumor Recurrence from Treatment Effects Using Advanced Medical Imaging

Inventiv.org
November 7, 2025
Software

Invented by Bijoy Kundu, Zoraiz Qureshi, David Schiff, and Thomas Muttikkal

Determining whether a brain tumor is growing back, or whether what shows up on a scan is just the result of treatment damage, is a major challenge in medicine. This new patent application proposes a smart way to solve the problem, using modern imaging and machine learning. In this article, we'll break down the background of the problem, the science and previous solutions, and finally how this new invention works and why it's special.

Background and Market Context

Brain tumors are some of the most serious cancers. Glioblastoma, a type of brain tumor, is very aggressive and hard to treat. After surgery and therapy, doctors use scans like MRI to check if the tumor has come back. But there's a big problem: sometimes the scans can look the same whether the tumor is growing again or the tissue is just damaged from treatment (called treatment-related necrosis).

This confusion creates a huge challenge for doctors and patients. If the doctor thinks the tumor is back, they might suggest more treatment, which can be hard on the body. If they think it’s just damage from treatment but it’s actually new tumor growth, they might miss the chance to treat it in time. Making the right call is crucial for patients’ health and for planning the next steps.

The tools most often used are MRI and PET scans. MRI gives detailed pictures of the brain’s structure. PET scans, using a special sugar tracer, show how active the brain tissue is. But both have limits. MRI can’t always tell if a bright spot is tumor or just scar tissue. PET scans can show activity, but even damaged tissue can sometimes light up in a way that looks like tumor.

Because of this, there’s a big need in hospitals and clinics for better ways to tell the difference. This isn’t just a problem for big research hospitals; it affects cancer centers, smaller hospitals, and imaging clinics around the world. The market for better brain imaging tools is large and growing, with millions of brain scans done each year for cancer follow-up. Technology that helps doctors make the right call could improve treatment, save money, and most importantly, improve lives.

In recent years, artificial intelligence (AI) and machine learning have started to change how imaging is used in medicine. These computer systems can look at complex images and patterns in ways people cannot. The hope is that by using AI, we can make smarter decisions from the same scans. But until now, no one has created a fully automated, reliable system that combines different types of imaging to answer this important question: is it tumor progression, or is it just treatment effect?

This is where the invention in this patent comes in. It’s designed to use both MRI and a special type of PET scan, together with advanced computer learning, to help doctors see the difference more clearly. The invention could fill a big gap in the market, helping both patients and care teams make more confident choices.

Scientific Rationale and Prior Art

To understand the new invention, it helps to know how doctors and scientists have tried to solve this problem before. Tumor progression means the cancer is coming back or growing. Treatment-related necrosis means the tissue has been hurt by therapy, like radiation, but it’s not active cancer. Both can look similar on regular MRI scans, which show the shape and structure of the brain.

Doctors have tried different types of MRI, like ones that look at blood flow or water movement in the tissue, to get more information. Sometimes, these extra images can help, but often, they’re not reliable enough. In many cases, the images overlap so much that even expert radiologists can’t tell for sure.

PET scans, which use a sugar-like tracer called FDG, show how much energy the tissue is using. Tumors often use more sugar, so they light up on these scans. But inflamed or damaged tissue can also use more sugar, making the PET scan less specific. There are other tracers for PET that can sometimes help, but none have solved the problem completely.

Some researchers have tried to use mathematical models to analyze the PET scans in more detail. For example, instead of just looking at a single snapshot, they watch how the tracer moves in and out of the tissue over time. This is called dynamic PET (dPET). By looking at the rates of sugar uptake, these models can sometimes do better at telling tumor from damage. But the process is complicated. It often requires drawing blood samples or doing tricky calculations that are hard to do in a busy clinic.

In the last few years, machine learning and deep learning (a type of AI) have begun to be used for image analysis. These systems can learn patterns from lots of data. Some teams have tried using AI to look at MRI scans or PET scans separately to classify images as tumor or not. While promising, these methods often work best only in research studies, and they usually focus on just one type of scan at a time.

A few efforts have combined MRI and PET data, but it’s usually done by having a human expert put the pictures together, or by using simple mathematical combinations. No one has yet built a fully automated system that brings together detailed, time-based PET information (like dynamic PET) and high-resolution MRI, using a deep learning approach that actually learns from both kinds of images at once.

The patent’s inventors recognized that the missing piece was a smart, automatic system that could handle all the tricky processing steps: aligning the images, extracting the right features, and making sense of both the shape (from MRI) and the activity (from PET) of the tumor area. They also saw the value in using the most advanced deep learning tools, like convolutional neural networks (CNNs), to handle the complex data.

In addition, they realized that one of the biggest hurdles in using dynamic PET is calculating the so-called “blood input function” — a measure of how much tracer is actually available to the tissue. Traditionally, this required manual steps or even blood draws. The patent describes methods for doing this calculation automatically using AI, which is a big leap forward.

So, while previous solutions have tried using better imaging or AI separately, none have combined all these elements into one pipeline that is both practical and powerful. That’s what sets this invention apart from prior art.

Invention Description and Key Innovations

Let’s dig into what this invention actually does, step by step, and why it matters.

The invention is a computer-implemented method and system that helps tell if a spot seen on brain scans after cancer treatment is tumor progression or treatment-related necrosis. It works by combining two types of images: MRI and dynamic PET (dPET). Here’s how it works in simple terms:

First, the system collects MRI images of the brain. These images show the shape and structure of different parts, including where the tumor or abnormal area is. Next, it collects dynamic PET images. In this scan, a tracer (often FDG, a kind of sugar) is injected into the patient, and the scanner takes pictures over time to watch how the tracer moves into the brain tissue.

One of the tricky parts of using PET data is figuring out how much tracer is actually reaching the brain tissue — this is the “blood input function.” The invention uses smart computer models to automatically find and segment the right blood vessels in the PET images (the internal carotid arteries), and then calculates how much tracer is in the blood without needing a blood draw. It even corrects for errors that happen when the images are a bit blurry or when adjacent tissues affect the measurement (these are called “partial volume” and “spill-over” effects).
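
To make this step concrete, here is a minimal sketch of what an image-derived input function can look like in code. It assumes the carotid and neighboring-tissue masks have already been produced (in the patent, that segmentation is done by an AI model), and the recovery coefficient and spill-in fraction below are illustrative placeholders, not values from the patent.

```python
import numpy as np

def image_derived_input_function(dpet, carotid_mask, tissue_mask,
                                 recovery_coeff=0.85, spill_in_fraction=0.10):
    """Estimate a blood input function from dynamic PET frames.

    dpet          : 4-D array (time, z, y, x) of tracer concentration
    carotid_mask  : boolean 3-D mask of internal-carotid voxels
    tissue_mask   : boolean 3-D mask of adjacent tissue (source of spill-in)
    recovery_coeff, spill_in_fraction : illustrative correction factors;
        a real pipeline estimates these per subject.
    """
    n_frames = dpet.shape[0]
    blood_curve = np.empty(n_frames)
    for t in range(n_frames):
        frame = dpet[t]
        measured_blood = frame[carotid_mask].mean()  # partial-volume-blurred vessel signal
        tissue_signal = frame[tissue_mask].mean()    # neighboring activity spilling in
        # Simple two-term correction: subtract spill-in, rescale for partial volume.
        blood_curve[t] = (measured_blood - spill_in_fraction * tissue_signal) / recovery_coeff
    return blood_curve
```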

Once the system knows how much tracer is in the blood, it uses mathematical models (like the Patlak model) to create detailed maps showing how fast the tracer is entering different parts of the brain. This gives a “Ki map,” where each spot in the brain gets a value showing how active it is in taking up the tracer. Tumors usually have higher Ki values than dead or damaged tissue.
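
The Patlak approach itself is well established: after an early mixing period, the ratio of tissue activity to blood activity grows linearly with “normalized time,” and the slope of that line is Ki. Below is a hedged numpy sketch of a voxel-wise Patlak fit; the frame timing, the choice of the first “linear” frame, and the variable names are assumptions for illustration, not details from the patent.

```python
import numpy as np

def patlak_ki_map(dpet, blood_curve, frame_mid_times, t_star_index=8):
    """Voxel-wise Patlak graphical analysis on dynamic PET.

    dpet            : 4-D array (time, z, y, x) of tissue tracer concentration
    blood_curve     : 1-D blood input function C_p(t), one value per frame
    frame_mid_times : 1-D array of frame mid-times (minutes)
    t_star_index    : first frame assumed to be in the linear Patlak phase
                      (illustrative; tuned per protocol in practice)
    Returns a 3-D map of Ki, the net tracer uptake rate, per voxel.
    """
    # Trapezoidal approximation of the cumulative input-function integral
    # (starting at the first frame mid-time; a real pipeline starts at injection).
    cp_integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (blood_curve[1:] + blood_curve[:-1])
                          * np.diff(frame_mid_times)))
    )
    late = slice(t_star_index, None)
    x = cp_integral[late] / blood_curve[late]              # "Patlak time"
    y = dpet[late] / blood_curve[late, None, None, None]   # normalized tissue activity
    # Least-squares slope per voxel: Ki = cov(x, y) / var(x).
    x_mean = x.mean()
    y_mean = y.mean(axis=0)
    ki = ((x - x_mean)[:, None, None, None] * (y - y_mean)).sum(axis=0) \
         / ((x - x_mean) ** 2).sum()
    return ki
```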

Meanwhile, the MRI data is being used to segment (pick out) the area that looks abnormal. This can be done automatically using another AI tool, trained to find tumors in MRI scans. The system then lines up the MRI and PET images, so that each spot in the brain matches up across both scans.
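
The patent does not tie the alignment step to any particular toolkit; as one standard illustration, a rigid registration driven by mutual information (sketched here with the SimpleITK library, with placeholder file names) is a common way to bring a PET volume, and with the same transform its Ki map, into the MRI's coordinate space.

```python
import SimpleITK as sitk

# Structural MRI is the fixed image; a summed PET volume is the moving image.
# File names are placeholders for this illustration.
fixed = sitk.ReadImage("mri_t1_post_contrast.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("pet_late_frames_summed.nii.gz", sitk.sitkFloat32)

# Rough initial alignment of the two volumes' centers.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

# Rigid registration using mutual information, a common choice when the
# two images come from different modalities.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(initial, inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)
transform = reg.Execute(fixed, moving)

# Resample the PET into MRI space so each voxel lines up across both scans.
pet_in_mri_space = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```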

Now, the real magic happens. The invention uses a dual-encoder convolutional neural network (CNN), which is a type of deep learning network. One part of the network focuses on the MRI data (the shape and structure), while the other part focuses on the PET data (the activity levels). The network learns to extract the most important features from each type of image. Then, these features are combined, and the system makes a prediction: is this area more likely to be tumor progression, or is it just treatment-related necrosis?

The system can also take in extra information, like radiomics features (which are special measurements pulled from the images, like texture and shape) and information about the patient (like age or sex), making the prediction even better.
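
As a rough illustration of the dual-encoder architecture described above, including the extra tabular inputs, the following PyTorch sketch shows one encoder for an MRI patch, one for the co-registered Ki-map patch, and a classifier head that also receives a vector of radiomics and demographic features. All layer sizes, patch shapes, and the number of extra features are assumptions made for the example, not details from the patent.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3-D convolutions followed by downsampling: a common encoder building block.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool3d(2),
    )

class DualEncoderClassifier(nn.Module):
    """One encoder for the MRI patch, one for the Ki-map patch; illustrative sizes."""
    def __init__(self, n_extra_features=16):
        super().__init__()
        self.mri_encoder = nn.Sequential(conv_block(1, 16), conv_block(16, 32))
        self.pet_encoder = nn.Sequential(conv_block(1, 16), conv_block(16, 32))
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.classifier = nn.Sequential(
            nn.Linear(32 + 32 + n_extra_features, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, 2),  # two classes: progression vs. treatment-related necrosis
        )

    def forward(self, mri_patch, ki_patch, extra_features):
        mri_feat = self.pool(self.mri_encoder(mri_patch)).flatten(1)
        pet_feat = self.pool(self.pet_encoder(ki_patch)).flatten(1)
        fused = torch.cat([mri_feat, pet_feat, extra_features], dim=1)
        return self.classifier(fused)

# Example forward pass on random tensors (batch of 2, 32x32x32 patches).
model = DualEncoderClassifier(n_extra_features=16)
logits = model(torch.randn(2, 1, 32, 32, 32),
               torch.randn(2, 1, 32, 32, 32),
               torch.randn(2, 16))
```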

The key innovations in this invention are:

1. Full automation: The whole process, from image alignment to blood input calculation to tumor classification, is done by the computer, requiring little or no manual work. This means it can work quickly and the same way every time.

2. Multimodal deep learning: Instead of looking at just MRI or PET alone, the system truly learns from both at once, using a special neural network that can handle both types of data.

3. Advanced blood input modeling: The invention uses AI to accurately measure how much tracer is available to the tissue, which is a big step forward for PET analysis. No need for blood draws or complicated manual steps.

4. Focused on the tumor: The system uses advanced segmentation tools to make sure it’s looking at the right area, ignoring background noise and making the result more reliable.

5. Adaptable and trainable: The system can be trained using labeled data, and can even use transfer learning (starting with a model trained on one set of tumors and adjusting it for new data; a code sketch of this pattern follows this list), which means it can get better over time and adapt to new types of images.

6. Scalable: Because it’s all software-driven, the invention could be used in many hospitals, not just big research centers.
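
As mentioned in point 5, transfer learning typically means starting from weights learned on one cohort and fine-tuning on another. A minimal sketch of that pattern, reusing the hypothetical DualEncoderClassifier from the earlier example, might look like this (the checkpoint path is a placeholder):

```python
import torch

# Reuse the hypothetical DualEncoderClassifier defined in the earlier sketch.
model = DualEncoderClassifier(n_extra_features=16)

# 1. Start from weights trained on a previous cohort (path is a placeholder).
state = torch.load("pretrained_dual_encoder.pt", map_location="cpu")
model.load_state_dict(state)

# 2. Freeze both image encoders so only the fusion/classifier head adapts.
for module in (model.mri_encoder, model.pet_encoder):
    for param in module.parameters():
        param.requires_grad = False

# 3. Fine-tune the remaining parameters on the new, smaller labeled dataset.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()
```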

In early tests, the system was able to predict whether an area was tumor progression or necrosis with higher accuracy than using MRI or PET alone. The more data it gets, the smarter it becomes. This could lead to better decisions for patients, fewer unnecessary treatments, and more confidence for doctors.

The invention also describes a set of claims covering the method, the system, and the steps involved. These include how the images are collected, how the blood input function is calculated, how the images are aligned and segmented, how the neural networks are set up, and how extra data like radiomics and demographics are used.

Overall, this invention is a step forward in using artificial intelligence to solve a tough problem in brain cancer care. It brings together the best of imaging, computer science, and clinical knowledge into one tool that could have a real impact.

Conclusion

Distinguishing between tumor regrowth and treatment effects in brain cancer is one of the hardest problems faced by doctors today. The new invention described in this patent brings together advanced imaging, smart computer models, and deep learning to tackle this challenge. By combining MRI and dynamic PET data, automating the tricky parts of image analysis, and learning directly from examples, this system can give doctors better answers, more quickly and with greater confidence.

If adopted widely, this technology could improve outcomes for brain tumor patients around the world. It stands out by offering a fully automated, scalable solution, built on the latest advances in artificial intelligence and medical imaging. For anyone involved in cancer care, medical imaging, or AI in healthcare, this invention is worth watching — it represents a real leap forward in the field.

To read the full patent application, visit https://ppubs.uspto.gov/pubwebapp/ and search for publication number 20250336063.
