Inventiv.org
  • Home
  • About
  • Resources
    • USPTO Pro Bono Program
    • Patent Guide
    • Press Release
  • Patent FAQs
    • IP Basics
    • Patent Basics
      • Patent Basics
      • Set up an Account with the USPTO
      • Need for a Patent Attorney or Agent
    • Provisional Patent Application
      • Provisional Patent Application
      • Provisional Builder
      • After you submit a PPA
    • Utility Patent Application
      • Utility Patent Application
      • File a Utility Patent Application
      • What Happens After Filing Utility Application?
    • Respond to Office Actions
    • Patent Issuance
  • ProvisionalBuilder
  • Login
  • Contact
  • Blogs

METHODS AND DEVICES FOR HANDLING MEDIA DATA STREAMS

Inventiv.org
August 6, 2025
Software

Invented by Chris Phillips, Robert Hammond Forsman Jr., and Sarel Cohen

The world of 360-degree video is changing fast. People want more lifelike, smoother, and smarter ways to watch immersive videos, especially in virtual reality headsets. But delivering high-quality 360 videos is hard, especially when networks are slow or devices are not powerful. A new patent application introduces a smarter way to stream and show 360 videos using dynamic mesh data and mixed-scale tiles. This post will help you understand why this is important, what came before, and how this new method could make 360-degree video better for everyone.

Background and Market Context

Watching 360-degree videos is like looking around in every direction. You feel like you are right in the middle of the action. This is a big deal for games, movies, virtual tours, sports, and even video calls. With the rise of virtual reality (VR) headsets, more people than ever are using these immersive experiences. But there’s a problem: these videos need a lot of data. For the best quality, you need very high resolutions, like 8K or even 16K. But most internet connections and many devices cannot keep up with sending or showing these huge videos smoothly.

Let’s think about a VR headset. Today, many headsets can show 2K resolution for each eye, but to fill the whole 360-degree space, you need to send a much bigger video — maybe 8K or more. This means a lot of data must travel from the server to the headset. If the network slows down, the whole video gets blurry. If the device is not strong enough, it can’t show the video clearly. And if the user turns their head quickly, the video may take time to catch up, making it look jumpy or pixelated.

Companies have tried to fix these problems using something called Adaptive Bit Rate (ABR) streaming. This method sends video in small chunks at different quality levels. The headset or player picks the best quality it thinks it can handle. But with 360-degree video, this means the whole video is sent at one quality level, even for parts the user is not looking at. This wastes a lot of bandwidth, and sometimes the quality is not good enough where it matters most — right in front of the user’s eyes.

Newer headsets are coming out with even higher resolutions, which means the need for smarter streaming will only grow. In short, everyone wants better quality, less buffering, and smoother experiences, but the old ways of sending video are hitting their limits.

Scientific Rationale and Prior Art

To understand the new method, let’s look at how videos are usually sent and played, and where the problems come from.

360-degree videos are often stored and sent using a method called tiled encoding. The video is cut into smaller pieces called tiles. Each tile covers a part of the 360 scene. This makes it possible to focus only on parts of the video that matter most, like where the user is looking. This technique, known as HEVC tiled encoding (using the H.265 video standard), was first used to help computers and devices process video faster by letting different chips work on different tiles at the same time.

Later, people realized that you could use tiled encoding to improve 360 video streaming as well. For example, if a user is looking straight ahead, you can send the tiles in front at high quality and the tiles behind at lower quality to save data. Some methods select tiles based on where the user is looking (called the “gaze direction”), sending better quality tiles in the center of their view and lower quality tiles at the edges. This helps improve quality of experience (QoE) and saves bandwidth.

But there are problems with these methods. First, most of them work only if all the tiles come from videos of the same resolution, like all 8K or all 4K. This means you still have to send a lot of data, even if some parts are low quality. Also, older methods have a hard time scaling down for devices that can’t handle high-res videos. If a headset can only show 1080p, the whole 360 scene drops to 1080p, making the view blurry everywhere — even in the middle.

Another challenge is that the way videos are mapped onto a 360-degree scene is not simple. Videos are often stored in what’s called an equirectangular projection (a flat rectangle), but when displayed, they are wrapped into a sphere or other 3D shapes. Mapping the tiles from the flat video to the curved display inside the headset needs careful math, or you get seams, stretched images, or glitches.

Some patents and academic papers have suggested ways to select tiles more carefully, or even to predict where the user will look next. But these methods still have limits: they don’t solve the problem of mixing tiles from different resolutions (like using some 8K tiles and some 2K tiles in the same frame), and they don’t always handle the mapping well when the scene changes quickly.

There have been some efforts to send extra information — like metadata or mesh data — to help the player put the video together correctly. But these approaches often add a lot of complexity or work best only in certain setups. All in all, the field is ready for a solution that lets servers and headsets work together, mixing and matching tiles of different resolutions and mapping them smoothly for the best user experience.

Invention Description and Key Innovations

The new patent application sets out a smarter way to stream and show 360-degree video. It introduces the idea of “dynamic mesh data” and “mixed-scale tile streaming.” Here’s what that means in simple words.

Instead of sending the whole 360 scene at one quality, the server can pick tiles from different source videos, each with a different resolution. For example, the area in front of the user’s eyes might use sharp 8K tiles. The sides can use 4K tiles, and the back can use 2K or even 1K tiles. This means you only use more data where it matters most. The server puts together a “frame” made up of these mixed tiles, based on what the user is looking at and how fast their internet is.

But just sending different tiles is not enough. The player (the headset or device) needs to know exactly how to piece these tiles together on the 3D scene. That’s where dynamic mesh data comes in. The server sends extra data — the mesh — that tells the headset how to map each tile onto the curved surface inside the headset. This includes spatial coordinates (where each piece goes in 3D space), texture coordinates (how to stretch each tile onto the surface), and quad indices (which help split the surface into little blocks called “quads”).

This mesh data is dynamic — it changes for every frame, depending on which tiles are picked. If the user turns their head, the system can quickly switch in higher-quality tiles in the new direction, and the mesh updates so everything stays smooth. This keeps quality high in the user’s view and saves bandwidth elsewhere.

The mesh can be built in two ways. Either the server sends the mesh ready-made, or it sends “layout information” (like which tiles are used and where), and the client builds the mesh itself. This flexibility means the system can work with different types of devices and networks.

The invention also uses the ideas of “quads” and “metaquads.” A metaquad is a block the size of a tile from the highest-resolution source; a single lower-resolution tile can span several metaquads. This lets the mapping work even when tiles come in different sizes. The formulas for mapping tiles from the flat video to the sphere use simple arithmetic based on where each tile starts and ends and how big it is. This avoids seams and makes the image look natural, even when mixing tiles of different resolutions.

Another key point is that the system is designed to be fast. It can change which tiles are used within 2 to 3 video frames, so when the user looks around, the high-quality area follows them quickly. The mesh data is sent along with the video and audio, so everything stays in sync.

The benefits of this system are clear. Users get the best possible picture where they are looking, even if their device or network is limited. The system saves bandwidth by not sending high-res tiles where they are not needed. Devices that can’t handle full 8K video can still show sharp images in the center of the view. The method works with common video formats and can be added to existing streaming setups with some changes.

On a technical level, the invention covers both the methods and devices needed. On the server side, it selects tiles, builds layout info or mesh data, and sends the right data to the client. On the client side, it receives the data, builds or uses the mesh, and renders the video smoothly. The system can be used for live TV, movies, games, and even AR or mixed reality.

By giving both the server and the client a way to work together, and by using dynamic mesh data to control how the video is shown, this invention solves the big problems that have held back truly smooth, high-quality 360-degree video streaming.

Conclusion

The future of 360-degree video and virtual reality looks bright, but only if we can deliver smooth, high-quality experiences without overloading networks or devices. The dynamic mesh-based mixed-scale tile streaming method is a big step forward. It lets service providers send just the right amount of data, keeps the best quality where users need it, and adapts quickly as users move and look around. By using smart mesh data and mixing tiles from different resolutions, this approach could make virtual worlds clearer, sharper, and more lifelike for everyone — no matter their device or connection. As VR and immersive media keep growing, inventions like this will be at the heart of the next big leap in how we see and experience digital content.

To read the full application, visit https://ppubs.uspto.gov/pubwebapp/ and search for publication number 20250220253.

Tags: Amazon Patent Review


