Inventiv.org
  • Home
  • About
  • Resources
    • USPTO Pro Bono Program
    • Patent Guide
    • Press Release
  • Patent FAQs
    • IP Basics
    • Patent Basics
      • Patent Basics
      • Set up an Account with the USPTO
      • Need for a Patent Attorney or Agent
    • Provisional Patent Application
      • Provisional Patent Application
      • Provisional Builder
      • After you submit a PPA
    • Utility Patent Application
      • Utility Patent Application
      • File a Utility Patent Application
      • What Happens After Filing Utility Application?
    • Respond to Office Actions
    • Patent Issuance
  • ProvisionalBuilder
  • Login
  • Contact
  • Blogs

AI-Optimized Lighting for Ultra-Realistic Virtual Object Placement in Synthetic Images

Inventiv.org
November 11, 2025
Software

Invented by Wang; Zian, Acuna Marrero; David Jesus, Fiddler; Sanja, Gojcic; Zan, Liang; Ruofan, Sharp; Nicolas, Nimier-David; Merlin

Making Virtual Objects Look Real: A Deep Dive into Patent-Pending AI Lighting for Synthetic Images

Adding fake objects to real pictures has always been tough. The biggest problem? Lighting. If the shadows, brightness, and reflections on a virtual object don’t match the scene, everyone can tell it’s fake. This new patent application tackles that head-on with an AI feedback loop that learns a scene’s lighting. Let’s break down how this works, why it matters, and what makes it different from what came before.

Background and Market Context

Today, almost everything you see in games, movies, and even some ads is a mix of real and virtual. Placing digital cars, people, or furniture into photos and videos is everywhere: think of car ads, superhero movies, or even the way home design apps let you see new furniture in your living room. But making these fake objects look like they belong is really hard.

People notice when something looks off. If the sun is shining from the left, but your digital chair has its shadow on the right, everyone knows it’s fake. The same goes for glossy reflections, shiny floors, or the way light bounces off a person’s hair. Getting these right usually means a lot of manual work by artists or using special cameras to capture real-world lighting. Both are slow and expensive.

The demand for this technology is only growing. Video games want more realism. Virtual reality (VR) and augmented reality (AR) need to put digital things into the real world on the fly. Companies training self-driving cars need to use fake (synthetic) images to test how the car sees obstacles, and those fake images need to look just as real as real ones. Even robots in warehouses use digital twins—virtual copies of real spaces—to practice before working with real things.

So, the market wants fast, cheap, and believable ways to drop digital objects into any photo or video, with lighting and shadows that fool the human eye every time. That’s where this patent application comes in.

Scientific Rationale and Prior Art

In the past, making virtual objects blend in with real scenes meant lots of guesswork. Artists would eyeball where the light should be, or use some “rules of thumb” (called heuristics) to guess where shadows and highlights belong. Sometimes, special cameras (like light probes) were used to capture the real lighting, but you can’t always do that, especially with old photos or stock images.

Older computer tools tried to estimate the light from a single picture by looking at shadows or bright spots. But these tools often made mistakes, especially if the scene was complex, had many light sources, or if parts of the image were missing. The tools would sometimes use “priors”—pre-set ideas about how lighting works—like assuming the sun is always above or that shadows are always dark. But these rules don’t always work, and they don’t adapt well to new or tricky scenes.

More recently, machine learning (using neural networks) has helped a lot. Some systems use “discriminator” networks, which are trained to tell if an image is real or fake. Others use “diffusion models”—these are like art critics that look at a noisy version of the image and try to fix it, learning what real images look like as they go.

But even these newer systems hit limits. If you don’t know the exact way the light bounces in a scene, or how shiny or rough a surface is, it’s easy to get things wrong. And you usually need lots of data or special setups to train these networks. Also, older systems often couldn’t update their guesses as they went—they made one guess and were done, instead of learning and improving over time.

Another problem was that lighting isn’t just about brightness. It’s about how shadows fall, how reflections show up, how light bends through glass, and how shiny or matte a surface is. Earlier tools struggled with these details, especially when scenes had lots of complexity or unusual materials.

So, the need remained: a system that could look at any image—old or new, simple or cluttered—and figure out the exact lighting needed to make a virtual object fit right in, learning and updating as it goes, without lots of manual work or special gear.

Invention Description and Key Innovations

This patent application describes a smart, computer-based way to insert digital objects into any scene, giving them lighting that looks just right. Here’s how it works in simple terms:

First, the system takes a real image (say, a photo of a street). It then picks or creates a digital object (like a car, a chair, or even a cartoon character) to add into that scene. But before the object is shown, the system needs to decide how much light should hit it, where shadows should fall, and how shiny or matte it should look.

Here’s where the magic happens. The system starts with a good guess of the lighting—maybe it assumes the sun is overhead if it’s an outdoor scene. It inserts the digital object into the scene using this guessed lighting, creating what’s called a synthetic image.
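That initial guess can be as simple as a single directional light. Here is a minimal sketch of diffuse (Lambertian) shading under an assumed overhead sun, in NumPy; all function and variable names here are illustrative stand-ins, not taken from the patent application:

```python
import numpy as np

def lambert_shade(normals, albedo, light_dir, light_color):
    """Diffuse (Lambertian) shading under one directional light.

    normals:     (N, 3) unit surface normals on the virtual object
    albedo:      (N, 3) per-point base color in [0, 1]
    light_dir:   (3,) unit vector pointing toward the light
    light_color: (3,) RGB intensity of the light
    """
    # Cosine of the angle between normal and light; back-facing points get 0.
    n_dot_l = np.clip(normals @ light_dir, 0.0, None)
    return albedo * light_color * n_dot_l[:, None]

# Initial guess: sun directly overhead, slightly warm white light.
normals = np.array([[0.0, 1.0, 0.0],    # a point facing straight up
                    [1.0, 0.0, 0.0]])   # a point facing sideways
albedo = np.full((2, 3), 0.8)
sun_dir = np.array([0.0, 1.0, 0.0])
sun_rgb = np.array([1.0, 0.95, 0.9])

shaded = lambert_shade(normals, albedo, sun_dir, sun_rgb)
# The upward-facing point is fully lit; the side-facing one gets no direct sun.
```

Even this toy model shows why the guess matters: move `sun_dir` and the bright and dark sides of the object swap, which is exactly the kind of mismatch a viewer spots instantly.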

Next, it asks a machine learning model—a kind of artificial “art critic”—to look at the new image. This model can be a “discriminator” (which tries to tell if the image is real or fake) or a “diffusion model” (which compares the image to a cleaned-up, denoised version it thinks looks real). If the critic says, “This looks fake!” the system knows it needs to tweak the lighting.

So, it changes the lighting parameters: maybe the shadow needs to be longer, the object needs to be brighter, or the color of the light should be warmer. It makes these changes, creates a new synthetic image, and asks the critic again. This loop repeats, over and over, until the critic says, “This looks real!”—or, more precisely, until a certain “realism score” gets high enough.
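The guess-render-check loop above can be sketched as a simple accept-if-better search over lighting parameters. Everything below—the toy `render` and `critic` functions, the realism threshold, the hidden “true” lighting—is a hypothetical stand-in for the real renderer and learned critic the application describes:

```python
import numpy as np

rng = np.random.default_rng(0)

def optimize_lighting(render, critic, params, steps=200, sigma=0.05, target=0.95):
    """Random-search sketch of the guess/render/critique loop.

    render(params) -> synthetic image; critic(image) -> realism score in [0, 1].
    Each step perturbs the lighting parameters, re-renders, and keeps the
    change only if the critic says the result looks more real.
    """
    best_score = critic(render(params))
    for _ in range(steps):
        candidate = params + rng.normal(0.0, sigma, size=params.shape)
        score = critic(render(candidate))
        if score > best_score:            # keep the tweak only if it improves realism
            params, best_score = candidate, score
        if best_score >= target:          # critic is satisfied: stop early
            break
    return params, best_score

# Toy stand-ins: the "image" is just the parameter vector, and the critic
# prefers lighting close to a hidden ground truth (e.g. elevation, intensity, warmth).
true_light = np.array([0.7, 1.0, 0.3])
render = lambda p: p
critic = lambda img: float(np.exp(-np.sum((img - true_light) ** 2)))

params0 = np.zeros(3)
params, score = optimize_lighting(render, critic, params0)
```

In practice such systems often use gradients from a differentiable renderer instead of random search, but the control flow—perturb, re-render, re-score, keep what helps—is the same loop the text describes.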

This process can use different kinds of lighting models. Sometimes it uses an “environmental map,” which is like a globe showing how much light comes from every direction. Other times, it uses “spherical Gaussians,” smooth mathematical lobes that each describe light arriving from around one direction. Or it can use “neural radiance models,” which let neural networks learn how light works in a scene.
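A spherical Gaussian is a smooth lobe around one direction: each lobe contributes amplitude × exp(sharpness × (cos θ − 1)), brightest along its axis and falling off around it. A hedged sketch follows; the two-lobe “sun plus sky” setup is purely illustrative:

```python
import numpy as np

def sg_radiance(direction, axes, sharpness, amplitudes):
    """Incoming light from `direction` as a sum of spherical Gaussians.

    direction:  (3,) unit query direction
    axes:       (K, 3) unit lobe axes
    sharpness:  (K,) lobe sharpness (larger = tighter lobe)
    amplitudes: (K, 3) RGB amplitude per lobe
    """
    cos_angle = axes @ direction                       # (K,) cos of angle to each lobe axis
    weights = np.exp(sharpness * (cos_angle - 1.0))    # (K,) falloff per lobe
    return weights @ amplitudes                        # (3,) summed RGB radiance

# Two-lobe sky: a tight warm sun overhead plus a broad, dim ambient term.
axes = np.array([[0.0, 1.0, 0.0],
                 [0.0, 1.0, 0.0]])
sharpness = np.array([50.0, 1.0])
amplitudes = np.array([[5.0, 4.5, 4.0],      # sun
                       [0.3, 0.35, 0.4]])    # ambient sky

up = np.array([0.0, 1.0, 0.0])
side = np.array([1.0, 0.0, 0.0])
L_up = sg_radiance(up, axes, sharpness, amplitudes)     # hits the sun lobe head-on
L_side = sg_radiance(side, axes, sharpness, amplitudes)  # mostly sees ambient
```

The appeal of this representation is that the lobe axes, sharpness, and amplitudes are a small set of numbers, which is exactly what the feedback loop needs: a few tunable lighting parameters to nudge each iteration.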

Along the way, the system can handle all sorts of lighting effects—not just shadows, but reflections (like shiny floorboards), refractions (like light through glass), and even the camera’s own quirks. It can consider the materials of the virtual object, so a glass cup and a metal car will look right under the same light. It can also adjust for the way the camera sees color, brightness, and focus.
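To see why the material matters even under identical light, here is a classic Blinn-Phong sketch—a standard textbook shading model used as a stand-in, not the patent’s own method—comparing a matte surface and a polished metal one lit by the same light:

```python
import numpy as np

def blinn_phong(normal, light_dir, view_dir, diffuse, specular, shininess):
    """Single-point Blinn-Phong shading: same light, different materials.

    diffuse/specular are RGB reflectances; `shininess` controls how tight
    the highlight is (matte surfaces low, polished metal high).
    """
    half = light_dir + view_dir
    half = half / np.linalg.norm(half)                 # halfway vector
    n_dot_l = max(float(normal @ light_dir), 0.0)      # diffuse term
    n_dot_h = max(float(normal @ half), 0.0)           # specular term
    return diffuse * n_dot_l + specular * n_dot_h ** shininess

# One surface point, lit and viewed from directly above.
normal = np.array([0.0, 1.0, 0.0])
light = np.array([0.0, 1.0, 0.0])
view = np.array([0.0, 1.0, 0.0])

matte = blinn_phong(normal, light, view,
                    np.array([0.7, 0.7, 0.7]), np.array([0.02] * 3), 4.0)
metal = blinn_phong(normal, light, view,
                    np.array([0.05] * 3), np.array([0.9, 0.85, 0.8]), 256.0)
```

Same light, same geometry, very different pixels: the matte point is mid-gray while the metal point is dominated by a sharp highlight. A lighting estimate that ignores material would get one of the two badly wrong.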

If the scene is tricky—say, with lots of small lights or weird shapes—the system keeps learning. It can even use “physics-based rendering,” using real rules about how light bounces and bends, so the guesses get better over time. If the system is being used in a robot, a video game, a digital twin, or a VR headset, it can do all this in real time, on powerful chips or even in the cloud.

The real breakthrough is that this system doesn’t just take one shot at guessing the lighting. It learns, step by step, making small changes and checking if things look better. This feedback loop is what lets it get much closer to real-looking results, even for new scenes it’s never seen before.

The patent application covers many ways to use this idea. You can use it for self-driving car testing (so the cars see realistic obstacles), in making digital twins of real places, in AR/VR for games or training, for creating synthetic data for AI training, or for automating content creation in movies and ads. It works with all kinds of hardware, from powerful data centers to edge devices like phones and VR goggles.

This system can also be part of bigger pipelines, where synthetic images are made, checked, and improved in batches—great for companies needing lots of fake, but real-looking, images for training AI or testing robots. It can handle lots of scenes at once, and even share lighting models across similar scenes to speed things up.

In short, this invention gives computers a way to “see” and understand light almost like humans do, but at machine speed and scale. It makes adding digital things into real photos or videos look much more believable, with less work and less guesswork.

Conclusion

The world is moving fast toward blending real and digital stuff—whether it’s for games, movies, shopping, or teaching robots to see. But making those digital things look real has always been a pain, especially when it comes to lighting. This new patent application shows a clever way to let computers figure out lighting by themselves, using AI that learns from its own mistakes and keeps getting better.

By looping through guesses and checks, and using smart models to measure realism, this invention makes it much easier to add virtual objects into real scenes without the usual lighting mistakes. It saves time, cuts costs, and raises the bar for realism—no matter if you’re making a blockbuster movie or testing a robot in a digital warehouse. If you work in any field where digital and real worlds meet, this is a technology to watch.

To read the full application, visit https://ppubs.uspto.gov/pubwebapp/ and search for publication number 20250336146.
