Inventiv.org
  • Home
  • About
  • Resources
    • USPTO Pro Bono Program
    • Patent Guide
    • Press Release
  • Patent FAQs
    • IP Basics
    • Patent Basics
      • Patent Basics
      • Set up an Account with the USPTO
      • Need for a Patent Attorney or Agent
    • Provisional Patent Application
      • Provisional Patent Application
      • Provisional Builder
      • After you submit a PPA
    • Utility Patent Application
      • Utility Patent Application
      • File a Utility Patent Application
      • What Happens After Filing Utility Application?
    • Respond to Office Actions
    • Patent Issuance
  • ProvisionalBuilder
  • Login
  • Contact
  • Blogs

IN-MEMORY COMPUTING MACRO AND METHOD OF OPERATION

Inventiv.org
July 18, 2025
Software

Invented by KWON; Soon-Wan, YUN; Seok Ju, LEE; Jaehyuk, Samsung Electronics Co., Ltd.

Neural networks are everywhere these days, powering everything from your smartphone’s voice assistant to the latest driverless cars. But as these networks get bigger and smarter, they also need more power and speed. That’s why people are always looking for better ways to build and run them. One new idea is using in-memory computing (IMC), which lets computers do math right where the data lives, inside the memory. This article explains a new type of IMC macro that can switch between two different modes, making it work for both spiking neural networks (SNNs) and regular artificial neural networks (ANNs). Let’s break down where this idea came from, the science and older inventions behind it, and what makes this new IMC macro special.

Background and Market Context

Computers have come a long way since the early days. Most computers today use something called the von Neumann architecture. In simple words, this means the computer has a separate place to store data (memory) and a separate place to do math (processor). When the computer needs to do something, it has to move data back and forth between these places. This works okay for small jobs, but for big tasks like running neural networks, it becomes a problem.

Imagine you want to solve a huge math puzzle where you have to look up numbers in a book, then write down the answer on another page, over and over again. If you have to keep flipping pages, it takes a lot of time. That’s what happens when computers use this old method. The data movement slows things down and uses a lot of power.

Now, think about neural networks. These are like big webs of tiny math problems all linked together. Each part, called a “node,” does some math using numbers it has learned (called weights) and then passes the answer to other nodes. Some networks, called spiking neural networks (SNNs), try to act like real brain cells, sending little “spikes” of information when something important happens. Other networks, called artificial neural networks (ANNs), use more regular math, like multiplying numbers and adding them up. Both types do a lot of multiply-accumulate operations (MAC), which means they multiply a number by a weight, then add the answer to a running total.
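The multiply-accumulate (MAC) operation described above can be sketched in a few lines of Python. This is an illustration of the concept only; the function names are ours, not from the patent.

```python
def mac(inputs, weights):
    """Multiply-accumulate: the core operation of both ANNs and SNNs.

    Each input is multiplied by its learned weight, and the products
    are summed into a running total.
    """
    acc = 0
    for x, w in zip(inputs, weights):
        acc += x * w
    return acc

# ANN-style inputs are multi-bit numbers; SNN-style inputs are 0/1 spikes.
ann_out = mac([3, 1, 4], [2, -1, 1])   # 3*2 + 1*(-1) + 4*1 = 9
snn_out = mac([1, 0, 1], [2, -1, 1])   # only spiking inputs contribute: 3
```

The only difference between the two cases is the kind of input: an ANN feeds in full numbers, while an SNN feeds in spikes (ones and zeros), so the MAC reduces to summing the weights of the neurons that fired.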

Running all these math problems in the old way, moving data in and out of memory, is slow and wastes energy. That’s why people started using in-memory computing. In IMC, the computer does math right where the data is stored, like solving the puzzle in the book without flipping pages. This saves time and power. Companies are now pushing this idea for things like smart speakers, phones, medical devices, cars, and even drones, because everyone wants smart, fast, and energy-friendly devices.

But there’s a problem: most IMC chips today are made for just one type of neural network. Some chips are built for SNNs, which are good for low-power jobs and can handle time-based patterns, like sounds. Others are made for regular ANNs, which are good for things like recognizing pictures or faces. If you want to build a device that needs both, you might have to put two chips inside, which takes up space and uses more energy.

The market is now asking for chips that can do both jobs, switching between SNN and ANN modes as needed. This would be like having one book that helps you solve both kinds of math puzzles. Making this happen means finding new ways to design IMC macros that are flexible, fast, and efficient. That’s where this new invention comes in.

Scientific Rationale and Prior Art

To understand why this new IMC macro matters, let’s look at how things worked before.

First, in the old von Neumann computers, as we said, moving data around is slow and takes power. For big neural networks, this becomes the main bottleneck. People tried to fix this by making the connections faster or adding more memory, but there’s a limit to how much you can improve.

Next, regular IMC chips were built. These chips put both the memory and some simple math circuits together, so they could do things like multiply and add numbers without moving them. A common way to do this is by using special memory cells, like SRAM (static RAM), set up in a crossbar array. This means you have a grid of memory cells, and you can send signals down rows and columns to do math with the numbers stored inside.
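The crossbar idea can be modeled as a grid of stored weights where the math happens in place. This is a software sketch of the concept (our naming), not the circuit itself:

```python
def crossbar_mac(row_inputs, weight_grid):
    """Model of a crossbar array: weights stay in place in a grid of
    memory cells; each column sums input*weight down its cells.

    weight_grid[r][c] is the weight stored at row r, column c.
    Returns one accumulated value per column.
    """
    n_cols = len(weight_grid[0])
    col_sums = [0] * n_cols
    for r, x in enumerate(row_inputs):
        for c in range(n_cols):
            col_sums[c] += x * weight_grid[r][c]
    return col_sums

# Two inputs driving a 2x3 grid of stored weights:
crossbar_mac([1, 2], [[1, 0, 2],
                      [3, 1, 1]])   # columns: [1+6, 0+2, 2+2] = [7, 2, 4]
```

The key point: the weights never move. Inputs travel down the rows, and each column produces its accumulated result where the weights live.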

Usually, these IMC chips are designed for one job: either running SNNs or ANNs. SNNs need to handle “spikes” (quick changes over time) and keep track of something called the “membrane potential,” which is kind of like the energy stored in a real brain cell. ANNs, on the other hand, just need to multiply and add numbers.

Some old IMC designs tried to add a bit of flexibility by letting you change the weights or add a simple bias (an extra number added to the answer), but they still couldn’t do both types of networks well. If you wanted to run SNNs, you’d need a chip with extra parts to handle the spikes and membrane potential. If you wanted ANNs, you needed something that could handle bigger numbers, sometimes using more bits.

There were also some attempts to use bit-serial computation, where you send in one bit at a time instead of the whole number, which can save space and power. But this made things slower for some types of networks and didn’t solve the problem of switching between modes.
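Bit-serial computation can be sketched as follows: feed one bit of each input per cycle, most significant bit first, and shift the accumulator between cycles so each bit lands at the right significance. Again, this is an illustrative sketch, not the patented circuit.

```python
def bit_serial_mac(inputs, weights, n_bits):
    """Bit-serial MAC: one bit of each input per cycle, MSB first,
    with a left shift of the accumulator between cycles
    (the classic shift-and-add scheme)."""
    acc = 0
    for bit in range(n_bits - 1, -1, -1):       # MSB down to LSB
        partial = sum(((x >> bit) & 1) * w for x, w in zip(inputs, weights))
        acc = (acc << 1) + partial              # shift-and-add
    return acc

# Matches the ordinary MAC result:
bit_serial_mac([3, 5], [2, 1], n_bits=3)   # 3*2 + 5*1 = 11
```

This saves datapath width (each cycle only moves single bits), at the cost of taking one cycle per bit of input precision.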

Another issue was handling the feedback in SNNs. These networks need to remember what happened last time (the membrane potential), and sometimes this number needs to be updated and used in the next step. Old IMC chips didn’t have a good way to do this inside the memory, so they had to send the number out and back in, losing the power and speed benefits.

So, what was missing in the prior art? Chips that could:

  • Switch between SNN and ANN modes on the fly
  • Handle both spiking signals and regular numbers
  • Store and update the membrane potential inside the memory
  • Support bias values for ANNs
  • Do all this without extra hardware that makes the chip big or slow

Many patents and research papers talked about improving one part or another, but none put all these ideas together in a single, flexible, and efficient IMC macro.

The scientific push behind this new invention is simple: if we can make one IMC macro that works for both SNNs and ANNs, and can quickly switch between them, we can build smarter, smaller, and more power-friendly devices. This helps everything from wearables and smartphones to cars and robots.

Invention Description and Key Innovations

This new IMC macro brings several fresh ideas to solve the problems we just discussed. Let’s walk through how it works and what makes it stand out.

1. Dual-Mode Operation
The main feature is the ability to switch between two modes: one for SNNs and one for ANNs. You can think of this like flipping a light switch. The chip listens to a command from the main computer (host), and then sets itself to the right mode for the job. In SNN mode, it handles spiking signals and membrane potentials; in ANN mode, it works with regular numbers and bias values.

2. Smart Input Control Circuit
At the front of the macro is an input control circuit. This part decides what kind of signal to send into the memory cells. In SNN mode, it sends a simple pattern (often just a “1”) and can also send back a processed version of the last membrane potential. In ANN mode, it sends in a pattern that matches the number of bits in the input signal, and also handles bias values. This circuit has an extra port so it can send or receive these special numbers, depending on which mode the macro is in.
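The routing decision the input control circuit makes can be sketched roughly like this. The names and return shape are our simplification, not the patent's signals:

```python
def input_pattern(mode, x=None, membrane=None, bias=None, bit=None):
    """Rough sketch of what the input control circuit routes into the
    array in each mode.

    SNN mode: drive a constant '1' pattern and feed back the processed
    membrane potential through the extra port.
    ANN mode: drive one bit of the multi-bit input and supply the bias.
    """
    if mode == "SNN":
        return {"drive": 1, "extra_row": membrane}
    elif mode == "ANN":
        return {"drive": (x >> bit) & 1, "extra_row": bias}
    raise ValueError("unknown mode: " + str(mode))

input_pattern("SNN", membrane=-4)          # {'drive': 1, 'extra_row': -4}
input_pattern("ANN", x=5, bias=7, bit=2)   # {'drive': 1, 'extra_row': 7}
```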

3. Crossbar Array with Additional Row
The heart of the IMC macro is the crossbar array of memory cells (often SRAM). But this design adds something new: an additional row in the array. This extra row can store either a processed membrane potential (for SNNs) or a bias value (for ANNs), depending on the mode. The extra row lets the chip keep track of important numbers without sending them in and out of memory, which saves time and power.

In SNN mode, whenever the chip gets a new membrane potential, it does a simple math trick: arithmetic negation (flips the sign). This helps model the “leak” that brain cells have, so the potential drops over time unless another spike comes along. The macro writes this updated value right into the extra row, ready for the next round. In ANN mode, the extra row just holds the fixed bias value for each column.
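The write-back step for the extra row can be sketched as below. How the negated value is then combined in the next round depends on the control sequencing in the patent's figures; this only shows the sign flip and the in-place store:

```python
def write_back_membrane(extra_row, col, v_membrane):
    """Sketch: arithmetically negate the incoming membrane potential
    and store it in the array's extra row for the next time step
    (SNN mode). In ANN mode, the same row instead holds a fixed bias."""
    extra_row[col] = -v_membrane   # arithmetic negation, per the text
    return extra_row

write_back_membrane([0, 0, 0], 1, 5)   # extra row becomes [0, -5, 0]
```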

4. Column-wise Adder Tree
To do the math, the macro uses an adder tree for each column. This adder tree can handle two jobs at the same time: it multiplies input signals by their matching weights (the basic neural network math), and it also multiplies the pattern or processed value from the extra row. Then it adds the answers together. This setup works for both SNNs and ANNs, without needing extra hardware for each.
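One column's adder tree can be sketched as a pairwise reduction over the products plus the extra-row contribution. The reduction structure mirrors how an adder tree sums in hardware; the function itself is our illustration:

```python
def column_adder_tree(inputs, weights, extra_value):
    """Sketch of one column's adder tree: sum input*weight products
    together with the extra-row contribution (processed membrane
    potential in SNN mode, bias in ANN mode) in one pass."""
    products = [x * w for x, w in zip(inputs, weights)] + [extra_value]
    # Pairwise reduction, as an adder tree does in hardware:
    while len(products) > 1:
        products = [products[i] + products[i + 1] if i + 1 < len(products)
                    else products[i]
                    for i in range(0, len(products), 2)]
    return products[0]

column_adder_tree([1, 0, 1], [2, 3, 4], extra_value=-1)   # 2 + 4 - 1 = 5
```

Because the extra-row value enters the same tree as the ordinary products, no separate hardware path is needed for the membrane potential or the bias.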

5. Flexible Post Arithmetic Circuit
After the adder trees, the macro has a post arithmetic circuit. This circuit can do different things depending on the mode. In SNN mode, it takes the answer from the adder tree and does a right shift (which is like dividing by a power of two), matching the way SNN math works with time constants and membrane leaks. It also passes through the membrane potential from the extra row. The results get stored in an accumulator, which is like a running total.

In ANN mode, the circuit can handle multi-bit numbers by doing a left shift (like multiplying by two) and adding the bias value from the extra row. This lets the macro build up bigger numbers from bit-serial inputs and supports multi-bit math for deeper networks.
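The two behaviors of the post arithmetic circuit can be sketched together. The exact timing of when the bias is applied in the bit-serial sequence is controlled by the macro; this sketch simplifies that to a single call per step:

```python
def post_arith(mode, adder_out, acc=0, bias=0, shift=1):
    """Sketch of the post arithmetic circuit in each mode.

    SNN mode: right-shift the adder-tree result (divide by 2**shift,
    modelling the membrane time constant) and accumulate.
    ANN mode: left-shift the running accumulator (bit-serial
    significance) and add the new partial sum plus the bias."""
    if mode == "SNN":
        return acc + (adder_out >> shift)
    elif mode == "ANN":
        return (acc << shift) + adder_out + bias
    raise ValueError("unknown mode: " + str(mode))

post_arith("SNN", adder_out=8, acc=3, shift=1)           # 3 + 8//2 = 7
post_arith("ANN", adder_out=5, acc=2, bias=1, shift=1)   # 2*2 + 5 + 1 = 10
```

Note that the same accumulator serves both modes; only the shift direction and the extra-row term change.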

6. Mode Switching with Minimal Overhead
Switching between SNN and ANN modes is quick and doesn’t need any extra hardware or complicated setup. The main computer just sends a command, and the macro sets the right patterns, shifts, and memory row uses. This makes it easy to use one chip in devices that might need to run both types of neural networks, sometimes even at the same time.
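Conceptually, the mode switch is a single command that repurposes existing resources rather than enabling new hardware. A toy software model, with our own naming:

```python
class IMCMacro:
    """Toy model of mode switching: the host issues one command, and
    the macro reconfigures its input pattern and the meaning of the
    extra row -- no hardware change involved."""
    def __init__(self):
        self.mode = "ANN"
        self.extra_row_role = "bias"

    def set_mode(self, mode):
        if mode not in ("SNN", "ANN"):
            raise ValueError(mode)
        self.mode = mode
        # The extra row holds the membrane potential in SNN mode,
        # a fixed bias in ANN mode.
        self.extra_row_role = "membrane" if mode == "SNN" else "bias"

m = IMCMacro()
m.set_mode("SNN")   # extra row now tracks the membrane potential
```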

7. Broad Integration
The macro is designed to fit into all kinds of devices, from phones, tablets, and laptops to cars, drones, medical tools, and more. This flexibility comes from its simple design and ability to switch modes without extra parts.

8. Actionable Impact
For engineers and device makers, this means:

  • They can save space and power by using one chip instead of two or more.
  • They don’t have to redesign their hardware every time they want to support a new kind of neural network.
  • They can deliver smarter products that adapt to user needs, running the best type of network for the job (SNN for low-power, time-based tasks; ANN for high-accuracy, data-heavy tasks).
  • They can update or switch network types with just a software command, not a hardware change.

Simple Example
Imagine a smartwatch. When it’s just checking your heart rate, it might use an SNN for low power. When it wants to recognize your voice or a picture, it switches to an ANN. Thanks to this dual-mode IMC macro, the same chip can run both jobs, making the watch last longer and feel snappier.

Real-World Use
This invention can be used in phones that handle always-on listening (SNN) and then switch to heavy-duty image or speech recognition (ANN), or in cars that need super-fast, low-power sensors (SNN) and more complex decision-making systems (ANN), all using the same chip.

Why This Matters
As devices get smarter, they need to do more with less power and space. This dual-mode IMC macro is a step in that direction, giving engineers a tool that is both powerful and flexible, without the usual trade-offs.

Conclusion

The new in-memory computing macro outlined here is more than just a small upgrade; it’s a big leap in how we design chips for smart devices. By letting a single macro switch between SNN and ANN modes, handle all the needed math inside the memory, and update special values like membrane potentials and biases on the fly, this invention solves real problems that have slowed down the adoption of in-memory computing.

If you are building the next generation of wearables, cars, robots, or medical devices, using a macro like this means you can run more types of neural networks, with less power, in a smaller space, and with much more flexibility. You don’t have to choose between power savings and performance, or between SNN and ANN. Now, you can have both.

As more devices need to get smarter and more efficient, inventions like this dual-mode IMC macro will become the building blocks for the future. If you want to stay ahead, keep an eye on this technology—it’s paving the way for a new era of truly adaptive, energy-friendly computing.

To read the full publication, visit https://ppubs.uspto.gov/pubwebapp/ and search for publication number 20250217623.

Tags: Patent Review Samsung


