
REFINING GESTURE MODELS

Inventiv.org
July 9, 2025
Apple

Invented by Laura Bliss Chambers and Jonathan Solichin

Let’s take a deep dive into a patent application that could change how people interact with computers in extended reality (XR). This article explores the background, the scientific rationale, the prior art, and the new ideas behind a system that lets users create and fine-tune their own hand gestures for XR, without needing to write code or understand machine learning. If you’re interested in XR, AR, VR, or emerging ways to work with technology, this is for you.

Background and Market Context

Today, XR, which includes augmented reality (AR), virtual reality (VR), and their combinations, is growing fast. You see XR in games, education, training, design, and even remote work. Many XR experiences use head-worn devices—think smart glasses or headsets—that show digital objects on top of the real world or create whole virtual spaces. Big names like Meta, Apple, Microsoft, and Google are all investing in XR platforms.

But there’s a puzzle: how do you give commands to a computer in XR? Traditional input tools like keyboards, mice, or game controllers don’t fit well—you don’t want to look away from the scene or carry extra devices. Touchpads on the side of glasses or buttons help a little, but they break the sense of being inside the experience. Even voice controls have limits, especially in noisy places or when privacy matters.

This is where hand-tracking comes in. Many XR headsets now have cameras that watch your hands, letting you use gestures—like pinching, pointing, or waving—to control apps and objects. These natural movements feel more direct and can be learned quickly. But there’s a catch: right now, most XR systems only recognize a small set of pre-programmed gestures. Want to add your own? That can be tough. Building a custom gesture usually means programming, setting up machine learning, or dealing with tricky tools.

In the XR market, this is a big roadblock. Developers and creators want more freedom to design new ways of interacting. They want to make apps for all sorts of tasks—sign language translation, robot control, smart home commands, even special games. But if making a new gesture is too hard, only big companies can afford to do it. This leaves the XR world less creative and less open to new ideas from smaller teams or individuals.

The patent application we’re looking at tackles this gap. It describes a system that lets anyone create, fine-tune, and store new hand gestures for XR using just a camera and a simple editor. No need for coding or AI know-how. This opens up XR to more people, more apps, and more creative uses.

Scientific Rationale and Prior Art

Let’s talk about how computers have handled gestures in the past and why this new approach matters.

Early gesture systems relied on physical devices—like gloves with sensors, or markers you’d wear. These systems could track finger positions, but they were clunky and not something you’d use every day. Later, with advances in computer vision, cameras started to replace these devices. Systems like Microsoft Kinect or Leap Motion used depth cameras to track hands in 3D. While better, these still needed special hardware and often only worked in certain lighting or with careful setup.

In recent years, deep learning has made computer vision much smarter. AI models can now look at regular camera images and find where your hand is, what shape it makes, and how it moves—even if the background is messy or lighting changes. Many XR headsets use these advances to track hands in real time, building a “skeletal model” with points for each joint and bone. This lets the system know when you make a fist, point a finger, or open your palm.
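
To make the skeletal model idea concrete, here is a minimal sketch of how one tracked frame might be represented in code. The joint names, coordinate values, and the simple name-to-point map are illustrative assumptions for this article, not the data format of any particular headset or of the patent application itself.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

# A 3D position for one hand joint, in an assumed headset coordinate frame (meters).
Point3D = Tuple[float, float, float]

@dataclass
class HandSkeletonFrame:
    """One tracked frame: a timestamp plus a map from joint name to 3D position."""
    timestamp_ms: int
    joints: Dict[str, Point3D]  # e.g. "wrist", "thumb_tip", "index_tip", ...

# An illustrative open-palm frame (all values are made up).
frame = HandSkeletonFrame(
    timestamp_ms=0,
    joints={
        "wrist": (0.00, 0.00, 0.30),
        "thumb_tip": (0.04, 0.02, 0.28),
        "index_tip": (0.05, 0.08, 0.27),
        "middle_tip": (0.03, 0.09, 0.27),
        "ring_tip": (0.01, 0.09, 0.27),
        "pinky_tip": (-0.01, 0.08, 0.28),
    },
)

# A recorded gesture is then just a time-ordered list of such frames.
gesture_recording = [frame]
```

A real skeletal model tracks many more joints per finger, but the shape of the data, named points in 3D over time, is the part that matters for everything that follows.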

Most XR platforms today use these models to recognize a fixed set of gestures—say, pinch to select, swipe to scroll, or fist to grab. Some let you define custom gestures by recording samples and training a machine learning model, but this usually takes programming and understanding how AI works. Other systems use rules—like, “if the thumb and forefinger are close together, call it a pinch”—but these rules can be hard to write for complex gestures and may not work for everyone’s hands.
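
The rule-based approach can be pictured with a tiny heuristic like the sketch below. The thumb-to-index distance test, the 3 cm threshold, and the joint names are assumptions chosen for illustration; as noted above, hand-written rules like this get brittle fast for more complex gestures and different hands.

```python
import math

def is_pinch(joints, threshold_m=0.03):
    """Heuristic rule: call it a pinch when the thumb tip and index tip are close.

    `joints` maps joint names to (x, y, z) positions in meters; the names and
    the 3 cm threshold are illustrative, not drawn from the patent application.
    """
    return math.dist(joints["thumb_tip"], joints["index_tip"]) < threshold_m

# Tips about 1 cm apart count as a pinch under this rule.
print(is_pinch({"thumb_tip": (0.00, 0.0, 0.3), "index_tip": (0.01, 0.0, 0.3)}))  # True
```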

Prior art in this space includes:
– Device-based tracking: Using gloves or markers to track fingers, which is accurate but not user-friendly.
– Static gesture libraries: Most XR headsets ship with a small set of gestures, and adding new ones is hard.
– Heuristic/rule-based recognition: Systems that use programmed rules to recognize gestures, which are hard to extend and may not be robust.
– Machine learning-based recognition: Systems that let you train new gestures with AI, but require data collection, coding, and technical skill.
– Gesture editors: A few research projects have built visual editors for gestures, but they are often not integrated into XR platforms or are hard to use.

What’s been missing is a simple way for non-experts to create, see, and tweak gestures directly in an XR environment, turning their hand movements into computer-understandable models—without programming or AI expertise. This is the problem the patent addresses.

Invention Description and Key Innovations

Now, let’s break down what this patent application proposes and why it’s a leap forward.

The heart of the invention is a system that:
– Watches a user perform a hand gesture using one or more cameras (on a headset or other device).
– Builds a 3D “skeletal” model of the hand as it moves, capturing all the joint positions.
– Shows this 3D model to the user in a visual editor, where the user can look at it from different angles.
– Lets the user pick which joints matter for the gesture and which ones to ignore or delete (for example, maybe only the thumb and forefinger matter in a pinch).
– Lets the user move or tweak the positions of the joints in the model for more accuracy.
– Saves the cleaned-up, “refined” gesture model to a library, where it can be used by the system to recognize the gesture later on.

This process can be repeated for any number of gestures, and gestures can be chained together into sequences—like spelling out a word in sign language or creating a combo move in a game. The refined models can include extra info, like how fast the gesture should be, how precise it needs to be, or whether it’s repeated.
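
As a rough mental model of that capture, edit, and save flow, here is a minimal sketch. The class name, the idea of storing the kept joints alongside the recorded frames, and the dictionary-based library are assumptions made for illustration; the patent application describes the workflow, not this particular code.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class RefinedGesture:
    """A cleaned-up gesture model: only the joints the user kept, plus metadata."""
    name: str
    frames: List[Dict[str, Point3D]]   # time-ordered joint snapshots
    kept_joints: List[str]             # joints the user marked as relevant
    params: Dict[str, float] = field(default_factory=dict)  # e.g. speed, precision

def refine(name, raw_frames, joints_to_delete, params=None):
    """Drop ignored joints from every frame and package the result for the library."""
    kept = [j for j in raw_frames[0] if j not in joints_to_delete]
    cleaned = [{j: f[j] for j in kept} for f in raw_frames]
    return RefinedGesture(name=name, frames=cleaned, kept_joints=kept, params=params or {})

# In this sketch the gesture library is just a name-to-model map.
library: Dict[str, RefinedGesture] = {}

raw = [{"wrist": (0.0, 0.0, 0.30), "thumb_tip": (0.04, 0.05, 0.28), "pinky_tip": (-0.02, 0.06, 0.29)}]
library["thumbs_up"] = refine("thumbs_up", raw, joints_to_delete={"pinky_tip"})
print(library["thumbs_up"].kept_joints)  # ['wrist', 'thumb_tip']
```

The user-facing editor would drive steps like joint deletion visually; the point of the sketch is that the saved result is small, explicit, and easy to match against later.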

Here are the standout innovations:

1. No Coding Needed
Anyone can create a new gesture by just performing it in front of a camera and then editing the resulting model. All the technical work—tracking, modeling, saving—is handled by the system. This lowers the barrier for XR development.

2. Visual, Interactive Editor
The user sees a real 3D model of their hand as they performed the gesture. They can spin it, zoom in, and see if the system captured it right. If not, they can fix it by moving joints or deleting ones that don’t matter. This feedback loop makes it easy to get the gesture “just right.”

3. Focus on Important Joints
Not every joint in the hand matters for every gesture. By letting users delete unneeded joints, the model becomes simpler and the system can recognize the gesture faster and more accurately. This also helps avoid mistakes when hands are in slightly different poses.

4. Support for Gesture Sequences
Some tasks require a string of gestures—think spelling, commands, or multi-step controls. The system can capture and manage gesture sequences, with start and end signals, and store them in a library for later use.
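
One simple way to picture a stored gesture sequence is as an ordered list of gesture names bracketed by start and end signals. Everything below, including the "open_palm" and "closed_fist" delimiters, is a hypothetical sketch rather than the application's actual design.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GestureSequence:
    """An ordered chain of stored gestures, e.g. letters spelling a word."""
    name: str
    steps: List[str]                   # names of refined gestures in the library
    start_signal: str = "open_palm"    # illustrative delimiters, not from the patent
    end_signal: str = "closed_fist"

def matches(sequence: GestureSequence, observed: List[str]) -> bool:
    """True when the observed gesture stream contains the delimited sequence."""
    try:
        start = observed.index(sequence.start_signal) + 1
        end = observed.index(sequence.end_signal, start)
    except ValueError:
        return False
    return observed[start:end] == sequence.steps

spell_hi = GestureSequence(name="spell_hi", steps=["letter_h", "letter_i"])
print(matches(spell_hi, ["open_palm", "letter_h", "letter_i", "closed_fist"]))  # True
```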

5. Modifiable Parameters for Expressiveness
Gestures can have extra features—like speed, precision, or number of repetitions. The editor lets users set these so that, for example, a fast swipe means “next” and a slow swipe means “back.”
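
A minimal sketch of such a parameter in action: the same swipe maps to different commands depending on how long it took. The 300 ms cutoff and the "next"/"back" actions are made-up values for illustration.

```python
def interpret_swipe(duration_ms: float, fast_cutoff_ms: float = 300.0) -> str:
    """Map one swipe gesture to different commands based on how quickly it was performed."""
    return "next" if duration_ms < fast_cutoff_ms else "back"

print(interpret_swipe(150.0))  # 'next'  (fast swipe)
print(interpret_swipe(600.0))  # 'back'  (slow swipe)
```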

6. Machine Learning Integration (Under the Hood)
While the user doesn’t need to know about AI, the back end can use machine learning to track hands and match performed gestures against the saved models. The editor helps create clean training data and models, improving recognition accuracy and speed.
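
Under the hood, recognition could be as simple as comparing a live skeletal frame against each stored model over only its kept joints. The nearest-match sketch below, with an average Euclidean distance and an arbitrary 5 cm tolerance, is one naive way to do that; it stands in for whatever matching the real system uses and is not drawn from the patent application.

```python
import math

def pose_distance(live, model, kept_joints):
    """Average Euclidean distance over only the joints kept in the refined model."""
    return sum(math.dist(live[j], model[j]) for j in kept_joints) / len(kept_joints)

def recognize(live_frame, library, tolerance=0.05):
    """Return the closest stored gesture within tolerance, else None.

    `library` maps gesture names to (model_frame, kept_joints) pairs; the 5 cm
    tolerance is an arbitrary assumption for the sketch.
    """
    best_name, best_dist = None, float("inf")
    for name, (model_frame, kept_joints) in library.items():
        d = pose_distance(live_frame, model_frame, kept_joints)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= tolerance else None

# A live frame close to a stored thumbs-up pose is recognized.
stored = {"thumbs_up": ({"wrist": (0.0, 0.0, 0.30), "thumb_tip": (0.04, 0.05, 0.28)},
                        ["wrist", "thumb_tip"])}
live = {"wrist": (0.0, 0.0, 0.31), "thumb_tip": (0.04, 0.05, 0.29)}
print(recognize(live, stored))  # 'thumbs_up'
```

Pruning joints before matching, as the editor allows, shrinks both the distance computation and the chance that an irrelevant finger throws off the result.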

7. Broad Applications
This system isn’t just for XR headsets. It could be used for screen-based AR, smart mirrors, robot control, sign language translation, smart home devices, gaming, and more. Anywhere a camera can see your hands, this system can be used to define and detect gestures.

8. Accessibility and Customization
Because gestures can be created and refined by anyone, the system supports users with different abilities, hand sizes, or preferences. It’s easy to make gestures that work for a specific user or adapt to special needs.

In practice, here’s how it works. Imagine you’re wearing a smart headset. You want to create a new “thumbs up” gesture to approve something in an app. You open the gesture editor, perform the thumbs up in front of your headset’s cameras, and the system builds a 3D model of your hand in that pose. You see the model, notice the pinky is a bit bent, so you adjust it in the editor. Maybe you decide only the thumb and palm matter, so you delete the other finger joints. You save the gesture, and now the system can recognize it whenever you do it—no programming, no data files, just natural interaction.

For developers, this means they can build XR apps with rich, custom gesture vocabularies. For users, it means more natural, responsive, and personalized ways to control technology.

Conclusion

This patent application marks a major step in making XR more open, creative, and user-friendly. By moving gesture creation out of code and into a visual, hands-on process, it empowers more people to shape their XR experiences. The system connects the latest in AI-driven hand-tracking with a simple editor, making custom gestures possible for everyone. Whether for games, accessibility, smart homes, or new kinds of work, this invention points toward an XR future where your hands—and your ideas—are truly in control.

If you’re building XR applications or exploring new ways to interact with computers, this technology could unlock new paths for your projects. Keep an eye on this space as gesture-based interfaces become more natural, flexible, and open to all.

To read the full application, visit https://ppubs.uspto.gov/pubwebapp/ and search for publication number 20250216947.

Tags: Patent Review


