FAST DIFFUSION-BASED IMAGE RESTORATION WORKFLOW VIA SHARING OF INITIAL DIFFUSION STEPS

Inventors: LEE, Tzu-Cheng; CHEN, Xi; CAI, Liang. Assignee: CANON MEDICAL SYSTEMS CORPORATION
Image denoising matters in many fields, and nowhere more than medical imaging. This patent application describes a way to denoise large batches of images quickly and accurately, using a diffusion-based model that shares its early steps across similar images. In this article, we break down the ideas behind the invention. We’ll look at the background and the market, the science and older methods, and then walk through the new solution and its innovations. By the end, you’ll see why this patent matters and how it could change the way we restore image quality.
Background and Market Context
When you look at any digital image, you want it to be clear and easy to understand. But in real life, many images, especially those taken by machines like CT or MRI scanners in hospitals, are filled with noise. Noise looks like random specks or fuzziness that hides important details. This is a big problem for doctors, researchers, and anyone who needs to see what’s really in the picture. Removing this noise—called denoising—has been a goal for a long time.
In the past, doctors and scientists used simple tricks to make images clearer. They might blur the image a bit or use filters to try to remove the noise. While these tricks sometimes made pictures look better, they could also make important details disappear. With the rise of computers, more advanced tools came along. Deep learning, a branch of artificial intelligence, started to help. These deep learning models can learn from lots of examples, figuring out how to spot and remove noise without losing the real picture underneath.
The need for good image denoising is especially important in the medical world. When doctors need to look at a scan of your body, they want every little detail to be clear. Noise can make it hard to spot tumors, broken bones, or signs of disease. But there’s a catch: in hospitals and clinics, time matters. Doctors can’t wait too long for computers to make each image perfect. They need tools that are fast and accurate.
Modern scanners often take many images in a row, either over time (like a video) or over different slices of the body. Each image might be a little different, but they are also very similar, especially if they are taken close together. This is true for scans of the brain, heart, or any other part of the body. If a computer could use the similarities between these images, it might denoise them faster and just as well as before—or even better.
Right now, hospitals spend a lot of money on computers and software to clean up these images. They also spend time training staff and waiting for results. The market for better, faster denoising is huge. If a new method can save time and money, while making images even clearer, it would be very valuable. This is where the new patent application comes in: it promises to do just that by using smart grouping of images and a clever step-by-step model.
The impact goes beyond just healthcare. Any field that uses lots of images—like security cameras, astronomy, even self-driving cars—could benefit from faster and better denoising. But the patent is especially focused on medical images, where every second and every detail can make a big difference.
Scientific Rationale and Prior Art
To understand what makes this new invention valuable, it helps to know what came before. Image denoising has always been a challenge. Early tools used simple math, like averaging the colors of nearby pixels. This worked okay when the noise was small, but when images were very noisy, too much detail got lost. More advanced methods, like wavelet filters or non-local means, tried to fix this by looking for patterns or repeating features in the image. These worked better, but still had limits, especially for very complex images.
Everything changed with deep learning. Instead of telling the computer exactly what to do, engineers started training neural networks—special computer programs that learn from examples. By showing the computer pairs of noisy and clean images, the model learns how to transform a noisy picture into a clean one. These models can be very powerful, but they usually work on one image at a time. When you have a whole set of images, you just run the model on each one, over and over. This takes a lot of time and computer power.
One of the most promising deep learning methods for denoising is called the Denoising Diffusion Probabilistic Model, or DDPM. This model works a bit differently. It takes a noisy image and, in many small steps, tries to “reverse” the noise, slowly making the image clearer at each step. The model learns how to do this by seeing many examples during training. Each step it takes brings the image a little closer to the clean, real thing. DDPMs became popular because they can make very high-quality images, and they are flexible. But they are also slow, because they do many steps for every single image.
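To make the step-by-step idea concrete, here is a minimal sketch of the standard DDPM reverse update in Python. The predict_noise function is a hypothetical placeholder for the trained network, and the beta schedule and image size are illustrative choices, not values from the application.

import numpy as np

def predict_noise(x_t, t):
    # Placeholder: a real DDPM would call its trained neural network here.
    return np.zeros_like(x_t)

def ddpm_reverse_step(x_t, t, betas, rng):
    """One denoising step from x_t to x_{t-1} (standard DDPM update)."""
    alphas = 1.0 - betas
    alpha_bar = np.prod(alphas[: t + 1])       # cumulative product up to step t
    eps = predict_noise(x_t, t)                # the model's noise estimate
    # Posterior mean: subtract the predicted noise, then rescale.
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bar) * eps) / np.sqrt(alphas[t])
    if t == 0:
        return mean                            # the final step is deterministic
    noise = rng.standard_normal(x_t.shape)
    return mean + np.sqrt(betas[t]) * noise    # scheduled stochasticity

# Usage: iterate t = T-1 ... 0 to walk a noisy image toward a clean one.
rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 200)           # a common linear schedule
x = rng.standard_normal((64, 64))              # stands in for a noisy image
for t in reversed(range(len(betas))):
    x = ddpm_reverse_step(x, t, betas, rng)

Each pass through the loop is one of the “many small steps” described above; the expense comes from running the network once per step, for every image.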
Researchers noticed that when you have a series of images, like slices from a CT scan, many features do not change much from one image to the next. The main shapes and big details—like the outline of an organ—are the same. Only the small details, like tiny changes in texture or the movement of fluids, are different. So, if you could somehow share the work of denoising the big features across a group of images, you would save a lot of time.
Older methods did not take advantage of this idea. Even with deep learning, each image gets the full treatment, step by step, as if it were the only image. This is wasteful because the model repeats the same work for every image, even when they are almost identical. Some researchers tried to use “batch” methods, but these still mostly treated each image on its own.
The new patent application changes this. It uses the scientific insight that similar images can share the early steps of the denoising process. By grouping images and using a “representative” image for the first steps, the model can do the heavy lifting just once per group. Then, for each image in the group, it finishes with a few extra steps to capture the small, unique details. This is a clever way to reduce work without losing quality.
The model still uses the power of diffusion-based denoising, but it organizes the process to make it much faster. It also allows for further tricks, like resizing (downsampling) the representative image to save even more time, then resizing it back (upsampling) before finishing. This is a smart use of both old and new ideas, and it solves a real problem in the way we handle big sets of images.
Invention Description and Key Innovations
Let’s walk through what this patent actually claims and how it works in practice. The main goal is to denoise lots of images at once, faster than before, while keeping high quality. Here’s how the new method does it:
First, the method gets a special trained model, the DDPM. This model already knows how to take a noisy image and make it clearer, one step at a time, because it was trained on many pairs of noisy and clean images. The model is good at finding and removing noise, while keeping important details.
Next, instead of working on each image alone, the method groups the images into sets. These groups are made up of images that are close together in time or space—maybe slices from a scan taken one after another, or frames from a video. Because these images are similar, it makes sense to work on them together.
For each group, the method picks or creates a “representative image.” This could be the average of all the images in the group, or just one of them. Sometimes, before using this representative image, the method might make it smaller (downsampling) to make things go even faster.
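Here is a minimal sketch of the grouping and representative-image step in Python, assuming the slices arrive as a single NumPy stack. The group size, the use of the mean, and the strided downsampling are illustrative choices; as noted above, the application also allows picking one member of the group as the representative.

import numpy as np

def make_groups(slices, group_size):
    """Split an (N, H, W) stack of adjacent slices into consecutive groups."""
    return [slices[i : i + group_size] for i in range(0, len(slices), group_size)]

def representative(group, downsample=1):
    """Average the group; optionally downsample to speed up the shared steps."""
    rep = group.mean(axis=0)
    if downsample > 1:
        rep = rep[::downsample, ::downsample]  # crude strided downsampling
    return rep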
Now comes the clever part. Instead of running all denoising steps for every image, the model first runs a short sequence of denoising steps—let’s call this T1 steps—using just the representative image. This part focuses on the big features, the things all images in the group share. It’s like cleaning up the outline of a shape that appears in every picture.
After these first T1 steps, the method has a “restored” version of the representative image. To make sure every image in the group gets its unique details, the model now takes this restored representative image and, for each original image in the group, runs a second, shorter sequence of steps—let’s call this T2 steps. Each T2 process is guided by the unique image it’s meant to restore. This is where the little differences and fine details are handled.
If you add up the steps, a group of G images needs the T1 shared steps only once, plus T2 steps for each image, instead of the full T = T1 + T2 steps for every single image. The total time and computing power needed drops sharply, especially for big sets of images.
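In symbols (this notation is mine, not the application’s): for a group of G images, where full denoising would take T steps per image,

\[
\underbrace{G \cdot T}_{\text{one image at a time}} \;\longrightarrow\; \underbrace{T_1 + G \cdot T_2}_{\text{shared first steps}}, \qquad T = T_1 + T_2 .
\]

Because T2 is small compared with T, the shared total grows only slowly as the group gets bigger.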
Once the T2 steps are done, you have a set of clean, denoised images, each one matching an original but with the noise removed and the details preserved. If the method used downsampling earlier, it can now upsample the images back to their original size.
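Putting the pieces together, here is a minimal end-to-end sketch of the shared-step workflow, reusing the helpers sketched above. The guided_reverse_step function is a hypothetical stand-in: the application says each per-image refinement is guided by its own original image, and the simple blend below only illustrates that role. For simplicity the representative stays at full resolution; a real pipeline could downsample it for the shared phase and upsample the results afterward, as described above.

import numpy as np

def guided_reverse_step(x_t, t, betas, rng, observed, weight=0.1):
    """One reverse step, nudged toward the original noisy image (illustrative)."""
    x = ddpm_reverse_step(x_t, t, betas, rng)
    return (1.0 - weight) * x + weight * observed

def denoise_group(group, betas, t1, t2, rng):
    """T1 shared steps on the representative, then T2 guided steps per image."""
    T = len(betas)
    rep = representative(group)
    # Phase 1: shared steps, run once for the whole group.
    for t in range(T - 1, T - 1 - t1, -1):
        rep = ddpm_reverse_step(rep, t, betas, rng)
    # Phase 2: short per-image refinement, each guided by its own original.
    outputs = []
    for img in group:
        x = rep.copy()
        for t in range(T - 1 - t1, T - 1 - t1 - t2, -1):
            x = guided_reverse_step(x, t, betas, rng, observed=img)
        outputs.append(x)
    return outputs

With T = 200, t1 = 190, t2 = 10, and a group of 12 slices, this loop performs 190 + 12 × 10 = 310 network steps, matching the worked example below.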
What makes this approach stand out is that it matches the way real images work. In medical scans, for example, most of the image stays the same from slice to slice. By sharing the early work, the model doesn’t waste time repeating itself. This leads to faster results, less computer use, and still gives good, sharp images.
The patent also covers how to set the group sizes, how to pick or make the representative image, and how to adapt the number of steps (T1 and T2) for different situations. For example, if the images are very similar, you can make the group bigger and do more shared steps. If the images are more different, you can use smaller groups or more unique steps.
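The application does not spell out a particular rule for these choices, so the heuristic below is purely an illustrative assumption: it measures how much neighboring slices differ and trades shared steps for unique steps accordingly.

import numpy as np

def choose_schedule(slices, T=200):
    """More similar neighbors -> larger groups and more shared steps (heuristic)."""
    diffs = np.abs(np.diff(slices, axis=0)).mean()   # mean neighbor-to-neighbor change
    scale = np.ptp(slices) or 1.0                    # normalize by dynamic range
    similarity = 1.0 - min(diffs / scale, 1.0)       # 1.0 means identical slices
    group_size = max(2, int(2 + 10 * similarity))    # e.g. groups of 2 to 12
    t2 = max(5, int(T * (1.0 - similarity) * 0.5))   # more unique steps if dissimilar
    return group_size, T - t2, t2                    # (group size, T1, T2)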
This method can be used in many ways: as a software tool, stored on a computer, or as a part of a physical device—like the computer inside a CT scanner. It can work with different kinds of images, not just medical ones, and with different types of noise. It’s flexible and can be adjusted for different needs.
What does this look like in real life? Imagine a set of 12 brain scans. Using old methods, you might need 200 steps for each image, for a total of 2400 steps. With this new method, you could do 190 shared steps for the group, then just 10 steps for each image, for a total of 310 steps—a huge difference. The images at the end are just as good as before, but you save time and energy.
The patent also details how this method can be used on different devices, including computers, servers, or even special hardware inside a scanner. It describes how data moves through the system, how images are grouped and processed, and how the steps are carried out. The method can be put into software that runs on many types of machines, making it easy to adopt in different settings.
By making image denoising faster and more efficient, this invention opens the door to real-time, high-quality imaging in hospitals and beyond. Doctors will get clear images almost instantly, leading to faster diagnosis and treatment. Researchers will be able to handle much bigger data sets. And all this happens without giving up the clarity and detail that are so important.
Conclusion
This patent application introduces a smart and practical way to denoise many images at once. By grouping similar images and sharing the early steps of the denoising process, it saves time and computer power while keeping image quality high. The method is flexible, works with different kinds of images, and can be used in many settings, especially in medical imaging where fast and clear results matter most. By building on scientific insight and improving on past methods, this invention could change how we process images, making life easier for doctors, researchers, and many others who rely on clear pictures to do their work.
To read the full application, visit https://ppubs.uspto.gov/pubwebapp/ and search for publication number 20250217940.