
Accelerate Real-Time Transaction Analysis with Fast, Scalable Graph Learning for Financial Leaders

Inventiv.org
November 13, 2025
Software

Invented by Jacopo Bono, Hugo Ricardo Colaço Ferreira, Ahmad Naser Eddin, and Pedro Gustavo Santos Rodrigues Bizarro; assignee: Feedzai – Consultadoria e Inovação Tecnológica, S.A.

Unlocking real-time insights from streams of data is a big challenge, especially when you have a huge graph of transactions and not enough memory to keep every detail at hand. This article breaks down Deep-Graph-Sprints (DGS), a newly published patent application that describes fast, memory-friendly learning from continuous streams of transactions. We will explore why this is important, how the science has evolved, and what makes this invention stand out.

Background and Market Context

Graphs are everywhere. They help us understand everything from social networks and financial transfers to web page connections and biological systems. At their core, graphs are just collections of points (called nodes) connected by lines (called edges), but they let us map out how things interact in very complex systems.

In recent years, the focus has shifted from static graphs—where all the data is frozen in time—to dynamic graphs, where the connections and nodes change as time passes. This is especially important in today’s world where data streams in non-stop. Think about a bank tracking millions of money transfers every day, or a social media platform watching how users connect and interact in real time. In these cases, the graph never sits still.

Dynamic graphs come in two flavors. The first is the discrete kind, where you take snapshots at regular moments (say, daily or hourly) and compare how things have changed between them. The second, and more challenging, is the continuous-time kind, where every event (like a transaction or a message) updates the graph the instant it happens. This is called a Continuous Time Dynamic Graph (CTDG).
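
To make the distinction concrete, a CTDG can be modeled as nothing more than a time-ordered stream of edge events, consumed one at a time. The sketch below is illustrative only; the `EdgeEvent` fields and account names are hypothetical, not taken from the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EdgeEvent:
    """One edge event in a continuous-time dynamic graph (CTDG)."""
    src: str               # e.g., the paying account
    dst: str               # e.g., the receiving account
    timestamp: float       # event time (seconds since epoch)
    features: List[float]  # edge features, e.g., the amount

# The graph is never frozen into snapshots; it exists only as a
# time-ordered stream of events.
stream = [
    EdgeEvent("acct_A", "acct_B", 1699896000.0, [120.50]),
    EdgeEvent("acct_B", "acct_C", 1699896003.2, [980.00]),
]
for event in stream:
    ...  # each event updates node states immediately (see later sketches)
```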

Businesses and researchers need to make sense of these dynamic graphs fast. For example, a bank might need to spot fraud as soon as it happens, or an online platform might want to recommend a friend or product based on the latest activity. To do this, they need a way to turn the messy, fast-changing graph into simple numbers (called embeddings) that machine learning models can use to make quick decisions.

But there’s a big catch: making these embeddings in real time is hard, especially if you don’t have enough memory to hold the entire graph. If your data center or processing chip only has a little memory, you can’t just keep every detail. Plus, the faster you need your answers, the less time you have to do all the work. This is where the new DGS method comes in, offering a way to keep up with the data stream without running out of memory or falling behind.

The market demand for such a solution is huge. Financial companies want real-time fraud detection. Social media sites want instant, up-to-the-second recommendations. Even scientific researchers want to track how networks of genes or proteins change over time. All of these need a method that can handle big, fast graphs in real time, using only a small amount of memory.

Until now, most methods have either been too slow, used too much memory, or required expert tuning and setup. The DGS invention aims to change that, offering a fast, low-memory, and automatic way to keep up with the flow of data.

Scientific Rationale and Prior Art

To see why this invention matters, let’s look at how the science of graph learning has developed so far.

A few years ago, features for graphs were crafted by hand. Experts would pick out things like how many connections a node has, or how often it appears in certain patterns. This took a lot of time and required deep domain knowledge. It was slow, and it didn’t always capture the full picture.

Then, machine learning researchers came up with graph representation learning. The idea is simple: let the computer figure out how to turn each node or edge into a small set of numbers (an embedding) that captures all the important relationships and features. One popular tool for this is the Graph Neural Network (GNN), which learns these embeddings by looking at how nodes are connected.

Most early GNN methods worked only on static graphs. They would look at a fixed snapshot, gather information from nearby nodes (often called the k-hop neighborhood), and use message-passing to update each node’s embedding. This works well when the graph doesn’t change too fast, but falls short when every new event changes the graph. If you try to apply these static methods to a stream of transactions, you have to keep stopping and starting, which is slow and inefficient.

Researchers tried to adapt GNNs to dynamic graphs by taking frequent snapshots or by adding time as an extra feature. Some methods, like DeepCoevolve and JODIE, used recurrent neural networks (RNNs) to track how nodes change over time. Other methods, like TGAT and TGN, introduced time encodings or special memory modules that remember past events.

A different approach uses random walks: you pick a node, wander around the graph in a random way, and see what you bump into. If two nodes have similar random walks, they might be similar in the real world too. Methods like DeepWalk, Node2vec, and CTDNE use this idea. But random walks are slow and use a lot of memory, especially on big graphs.

Other lines of work tried to speed things up with tricks like graph partitioning (breaking the graph into smaller chunks), sampling only part of the graph, or special hashing techniques. Some, like Graph-Sprints, focus on making fast, low-latency embeddings by combining simple but powerful feature engineering with quick updates. But even these have limits, such as needing lots of tuning or not being flexible enough for all kinds of graphs.

A major technical problem in all these methods is the way gradients are calculated for training the models. Training neural networks involves figuring out how to adjust the weights to minimize errors. In regular neural nets, this is done using backpropagation (reverse-mode automatic differentiation), which works well but takes a lot of memory when dealing with long sequences or deep models. This makes it hard to use on real-time streams where memory is tight.

Some methods tried to save memory by cutting off the gradient calculation after a certain point (truncated backpropagation), but this means the model can’t “remember” long-term dependencies. Others looked at forward-mode automatic differentiation, which is more memory-friendly, but can be slow if not used carefully.

Overall, the prior art has given us many clever ideas, but none that fully solve the puzzle of real-time, low-latency, low-memory graph learning for continuous streams of transactions. Existing methods either use too much memory, are too slow, or need too much manual setup. There is a clear gap for something that is fast, efficient, automatic, and works well even when you can’t fit the whole graph in memory.

Invention Description and Key Innovations

The DGS method—Deep-Graph-Sprints—was created to solve the problems above. Its main goal is to make real-time, low-latency embeddings for transactional graphs, even when you don’t have enough memory to store the whole graph. Let’s walk through how it works, why it’s different, and what makes it special.

At the heart of DGS is the idea of breaking the process into two main parts:

1. Embedding Recurrent Component (ER): This part takes the incoming data (such as a new transaction) and updates the embeddings (state arrays) for the nodes involved. It uses a smart update rule that mixes the previous state of the node, the state of the other node in the transaction, and features of the transaction itself. Instead of using hand-tuned features or fixed rules, it learns how to weigh new and old information for each feature using trainable parameters. Each feature gets its own “forgetting factor” so the model can decide how quickly to forget old information and how much to value new events.

2. Neural Network Classifier (NN): Once the embeddings are updated, they are passed to a small neural network that makes task-specific decisions. For example, it might predict if a transaction is suspicious, or what kind of action a user is likely to take next.

The ER component uses a new update rule that combines information in a flexible way. It uses two sets of trainable vectors (called α and β) to decide, for each feature, how much to blend the old and new information. This is much more expressive than using a single global forgetting rate. The new method also uses a learnable embedding matrix and a softmax function to turn incoming features into embeddings, borrowing ideas from attention mechanisms in transformers (which are popular in natural language processing).
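
To ground this, here is a minimal, hypothetical sketch of such an update in NumPy. The names (`alpha`, `beta`, `W`, `er_update`) and the exact blending form are illustrative assumptions, not the patent’s published equations; the point is the per-feature forgetting factors and the softmax-based feature embedding.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

d = 16                       # embedding width
rng = np.random.default_rng(0)

# Trainable parameters (random here purely for illustration):
alpha = rng.uniform(size=d)  # per-feature forgetting factor
beta = rng.uniform(size=d)   # per-feature neighbor-vs-edge blend
W = rng.normal(size=(3, d))  # learnable embedding matrix for raw edge features

def er_update(h_u, h_v, edge_feats):
    """One hypothetical ER step for node u on an edge (u, v): blend the
    node's old state, the counterparty's state, and an embedding of the
    raw edge features, each feature with its own learned rate."""
    msg = softmax(edge_feats @ W)               # project features to d dims
    new_info = beta * h_v + (1.0 - beta) * msg  # mix neighbor and edge info
    return alpha * h_u + (1.0 - alpha) * new_info

h_a, h_b = np.zeros(d), np.zeros(d)
h_a = er_update(h_a, h_b, np.array([120.5, 1.0, 0.0]))
```

Because everything except the small feature projection is element-wise, each event costs work proportional to the embedding width, not the graph size, which is what keeps the per-event latency flat.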

One of the key innovations is how DGS handles the training process. Instead of relying only on backpropagation, which stores a lot of intermediate states and uses up memory, DGS uses a mix of forward-mode automatic differentiation (AD) for the ER part and standard backpropagation for the NN part. This hybrid approach means it can learn long-term patterns without blowing up memory use. The forward-mode AD is made efficient by careful design: most operations are simple, element-wise multiplications, and the use of multiple softmaxes keeps the computation manageable.
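
To see why the forward mode is memory-friendly here, consider a toy scalar recurrence of the same flavor. In this sketch (the function name and recurrence are illustrative, not the patent’s), the derivative travels along with the state instead of being reconstructed from a stored history:

```python
import numpy as np

def forward_mode_sensitivity(a, xs, h0=0.0):
    """Derivative of the final state w.r.t. the forgetting factor `a`
    for the recurrence h_t = a*h_{t-1} + (1-a)*x_t, computed in forward
    mode: the tangent dh/da is carried with the state, so memory stays
    O(1) however long the stream is. Backprop through the same
    recurrence would need to store every intermediate h_t."""
    h, dh_da = h0, 0.0
    for x in xs:
        # product rule on h_new = a*h + (1-a)*x, using the old h
        dh_da = h + a * dh_da - x
        h = a * h + (1.0 - a) * x
    return h, dh_da

# A million-step stream, processed with constant memory:
h, grad = forward_mode_sensitivity(0.9, np.sin(np.arange(1_000_000)))
```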

DGS is also built for both real-time (online) and batch processing. In real-time mode, every new transaction triggers an immediate update of the embeddings for the nodes involved. In batch mode, it can process groups of transactions together, updating all the relevant node states at once. This flexibility makes it suitable for many real-world setups, from streaming fraud detection to periodic social network analysis.

Because DGS is designed for memory-constrained settings, it never tries to store the entire graph in memory. It keeps only the embeddings of the currently active nodes and updates them as new data arrives. This makes it a natural fit for situations where memory is tight, like embedded devices, edge computing, or large-scale cloud systems with many users.
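
A minimal sketch of what such a bounded store could look like, assuming a least-recently-used eviction policy (one plausible choice; the patent does not necessarily prescribe it):

```python
from collections import OrderedDict
import numpy as np

class BoundedNodeStates:
    """Fixed-budget store of node embeddings: keeps at most `capacity`
    states and evicts the least recently used one when full, so memory
    stays bounded no matter how many nodes the stream touches."""

    def __init__(self, capacity, dim):
        self.capacity, self.dim = capacity, dim
        self._states = OrderedDict()

    def get(self, node_id):
        if node_id in self._states:
            self._states.move_to_end(node_id)   # mark as recently used
            return self._states[node_id]
        state = np.zeros(self.dim)              # cold start for a new node
        self.put(node_id, state)
        return state

    def put(self, node_id, state):
        self._states[node_id] = state
        self._states.move_to_end(node_id)
        if len(self._states) > self.capacity:
            self._states.popitem(last=False)    # evict the oldest entry

store = BoundedNodeStates(capacity=1_000_000, dim=16)
h = store.get("acct_A")
```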

During inference (when the model is making predictions but not learning), DGS is even faster. It skips the gradient calculations and simply updates the embeddings and runs the NN classifier. This means it can keep up with very fast data streams.
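
Tying the pieces together, the inference-only path can be as lean as the following sketch, where the stand-in update rule and the linear scoring head are illustrative assumptions rather than the patented components:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
W_clf = rng.normal(size=d)   # toy linear "classifier" head
states = {}                  # node_id -> embedding (or a bounded store)

def score_event(u, v, feats):
    """Inference-only path: update the embedding with no gradient
    bookkeeping, then score the transaction with the classifier."""
    h_u = states.get(u, np.zeros(d))
    h_v = states.get(v, np.zeros(d))
    h_u = 0.9 * h_u + 0.1 * (h_v + np.resize(feats, d))  # stand-in ER step
    states[u] = h_u
    return 1.0 / (1.0 + np.exp(-(h_u @ W_clf)))          # e.g., fraud score

print(score_event("acct_A", "acct_B", np.array([120.5])))
```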

The DGS method has been tested on several real and synthetic datasets, including public social and education networks, as well as private banking data. The results show that DGS is faster than other state-of-the-art GNN methods, often by a wide margin, and can handle much larger graphs without slowing down. It matches or beats traditional methods in accuracy, showing that you don’t have to trade speed for quality.

To make the method even more flexible, the inventors suggest ways to adapt it for more complex graphs (like those with different types of nodes or edges) and to make the forgetting factors depend on the input data. They also discuss options for using advanced optimizers like Adam, and implementing the method efficiently on GPUs or neural processing units.

In summary, the DGS invention stands out for its:

– Ability to process continuous streams of transactions in real time, even with limited memory.
– Use of learnable, feature-wise forgetting factors and embedding matrices for flexible, data-driven updates.
– Hybrid training strategy that combines forward-mode and reverse-mode differentiation for speed and memory efficiency.
– High accuracy on real-world tasks, with much lower latency than previous methods.
– Built-in support for both real-time and batch processing, and easy adaptation to new types of graphs or tasks.

Conclusion

The world of data is moving faster every day, and the need for real-time insights from streams of transactions is only growing. The DGS patent application introduces a new way to keep up, offering memory-efficient, low-latency, and accurate embedding generation for dynamic graphs. By smartly combining forward-mode differentiation, flexible feature updates, and a modular design, DGS sets a new standard for scalable, real-time graph learning. It’s a leap forward for industries and researchers who need to make quick, smart decisions based on fast-moving data.

In practical terms, if you’re building systems for fraud detection, recommendation, anomaly spotting, or any task where the relationships between things change all the time, DGS gives you the tools to stay ahead. It works even when your memory is limited, your data is huge, and your answers need to be fast. That’s why this invention is poised to become a core technology in the future of streaming data analysis.

To read the full application, visit https://ppubs.uspto.gov/pubwebapp/ and search for publication number 20250335772.
