Transforming Touch Gestures into Smarter Multi-Screen Controls for User-Friendly Device Management

Invented by Jesus Perea-Ochoa

Technology never stands still. Every day, we use phones, tablets, laptops, and even smart glasses to manage more and more tasks. But with all these screens and programs, switching between them can feel slow, awkward, and overwhelming. What if your device could keep up with your brain—letting you control many programs at once, with simple gestures, on any screen? That’s what this patent application is all about. In this article, we’ll break down what’s wrong with today’s devices, how this invention changes the game, and why it matters for anyone who uses a screen.
Background and Market Context
Think about how we use our devices every day. We check email, chat with friends, edit photos, write documents, watch videos, and more—all on one screen. Sometimes we want to see two or three apps at once, like reading a recipe while watching a cooking video, or comparing two documents side by side. But most devices only let you control one app at a time. Even if you split the screen, you usually have to tap and click through extra steps just to switch control between apps. It feels clumsy and slows you down.
Now imagine you’re a photographer editing photos. You use pinch-to-zoom to enlarge a picture, but you have to repeat that gesture again and again just to reach the zoom level you want, and you never really know the exact zoom percentage, so you keep guessing. Or think about a student toggling between a textbook app and a notebook app. Each time they switch, they lose their place, and their train of thought gets interrupted.
The problem isn’t just in the gestures themselves. It’s also in how devices handle multitasking. Most laptops and tablets only let you control one window or app with your fingers or mouse. If you want to type in a different app, you have to click away or rearrange your windows. On phones, two-app split screen is awkward and limited. If you have a foldable or dual-screen device, it’s still a hassle to control both sides at once.
What’s worse, physical keyboards and mice haven’t changed much. You can’t move your keyboard around the screen to suit your hands. Trackpads and mouse pads are stuck in place. And if you’re using a virtual reality or augmented reality headset, it’s even harder to manage multiple programs—your hands are in the air, and you can’t use a mouse or keyboard at all.

Security is another headache. Logging in with a password, face scan, or fingerprint sounds easy, but these methods aren’t always reliable or secure. Photos can fool face scanners, and fingerprints don’t always work if your hands are sweaty or dry.
Meanwhile, our devices are getting smarter and more connected. We have cloud servers, virtual reality, and brain-computer interfaces. But the way we control our devices hasn’t caught up. There’s a real need for a better, faster, and more flexible way to control many programs, on any device, using natural gestures.
Scientific Rationale and Prior Art
Let’s look closer at how gestures and multitasking work on today’s devices—and why they fall short.
The most common gestures, like pinch-to-zoom or swipe-to-scroll, are easy to use. But they have big limitations. For example, if you want to zoom in on a picture as much as possible, you have to pinch and spread your fingers over and over. There’s no clear feedback about how far you’ve zoomed in, or how much more you can zoom. You might even cover the thing you want to see with your hand.
Scrolling works better, but it’s still just moving up and down or side to side. And if you want to see two things at once, most devices only let you control one window at a time. You can’t scroll two apps independently with one gesture. If you try to use a physical keyboard or mouse, you’re stuck with their fixed positions and layouts.

Some devices try to solve multitasking with split-screen or windowed modes. For example, Windows and macOS let you snap windows side by side. Some tablets allow limited split-screen. But you usually have to click or tap extra buttons, and you can’t control both apps at the same time. There’s no way to adjust the size of each window with a simple gesture, and you can’t assign custom gestures to different programs.
Virtual reality and augmented reality devices face even bigger problems. When you’re wearing a headset, you can’t use a physical mouse or keyboard. You might have to wave your hands in the air, which can be tiring and inaccurate. You might want to control a virtual window and a real-world app at the same time—but the system can’t tell which gesture controls which app. There’s no good way to split the screen, organize programs, or personalize your gestures.
In security, fingerprint and face scanners are supposed to make logging in faster and safer. But they can be fooled by pictures, or fail if something changes in your appearance. There’s no way to link your unique way of moving or gesturing to your identity.
Some research has been done on pairing gestures with specific programs, or creating custom gesture dictionaries. But these systems are usually limited, hard to set up, or don’t let you multitask in a smooth way. There’s no universal approach that works across devices, screens, and input types—let alone one that can learn and adapt to each user’s habits.
Invention Description and Key Innovations
This invention introduces a whole new system for controlling one or more display screens—on any device—using a combination of smart gestures, flexible screen dividers, and intelligent adjusters. The goal is to let you control multiple programs at the same time, with simple, personal gestures, on any kind of screen.

Here’s how it works:
The heart of the system is a set of computer programs, running either on your device or in the cloud. These programs watch for your gestures—touches, swipes, pinches, and even signals from your brain or muscles. The system can also use cameras, light sensors, or special gloves to detect your hand movements, even if you’re not touching a physical screen.
When you start using your device, the system lets you split your screen into two or more sections, using special lines called “sideline elements.” These lines are not just visual dividers—they are smart, interactive borders. You can move them around with a gesture, make them larger or smaller, or even change their color and thickness. They separate your screen into different areas, so you can run different programs side by side. Each area is fully interactive—you can scroll, zoom, type, or draw in each one independently.
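To picture how a sideline element might behave, here is a minimal Python sketch of a movable divider that splits one screen into two independently controlled regions. The class names, fields, and numbers are illustrative assumptions for this article, not the actual implementation described in the filing.

```python
from dataclasses import dataclass, field

@dataclass
class Sideline:
    """A movable, styleable divider; position is a fraction of the screen (0..1)."""
    position: float = 0.5   # where the divider sits along its axis
    vertical: bool = True   # a vertical line splits the screen left/right
    thickness: int = 4      # visual thickness in pixels
    color: str = "#888888"

    def drag_to(self, new_position: float) -> None:
        # Clamp so neither region can collapse completely.
        self.position = min(max(new_position, 0.05), 0.95)

@dataclass
class Screen:
    width: int
    height: int
    sideline: Sideline = field(default_factory=Sideline)

    def regions(self):
        """Return the pixel rectangles (x, y, w, h) of the two regions."""
        if self.sideline.vertical:
            split = int(self.width * self.sideline.position)
            return [(0, 0, split, self.height),
                    (split, 0, self.width - split, self.height)]
        split = int(self.height * self.sideline.position)
        return [(0, 0, self.width, split),
                (0, split, self.width, self.height - split)]

screen = Screen(1920, 1080)
screen.sideline.drag_to(0.66)  # one drag resizes both regions at once
print(screen.regions())        # [(0, 0, 1267, 1080), (1267, 0, 653, 1080)]
```

The point of the sketch is that a single drag on the divider resizes both program areas at once, with no extra menus or buttons.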
To control what you see in each section, the system uses “adjusters.” These are like virtual sliders or rulers that appear next to your content. If you want to zoom in on a picture, just drag the adjuster. If you want to scroll through a document, move the adjuster up or down. There are precise marks and numbers on each adjuster, so you always know exactly how far you’ve zoomed or scrolled. No more guessing or repeating the same gesture over and over.
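As a rough sketch of the adjuster idea, the slider below maps a drag position to an exact, labeled value, so the zoom level is always visible as a number. The range, names, and labels here are hypothetical, chosen only to illustrate the behavior the text describes.

```python
class Adjuster:
    """A labeled slider: maps a drag handle (0..1) to a precise, visible value."""
    def __init__(self, minimum: float, maximum: float, unit: str = "%"):
        self.minimum = minimum
        self.maximum = maximum
        self.unit = unit
        self.handle = 0.0  # normalized position of the drag handle

    def drag(self, position: float) -> float:
        """Move the handle and return the exact value it now represents."""
        self.handle = min(max(position, 0.0), 1.0)
        return self.value

    @property
    def value(self) -> float:
        return self.minimum + self.handle * (self.maximum - self.minimum)

    def label(self) -> str:
        return f"{self.value:.0f}{self.unit}"

zoom = Adjuster(minimum=100, maximum=800)  # a hypothetical 100%-800% zoom range
zoom.drag(0.25)
print(zoom.label())  # "275%" - one drag, exact feedback, no repeated pinching
```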
The system can show you reservation areas—special spots on the edge of the screen where you can keep programs you’re not using right now, but don’t want to close. With a simple gesture, you can move a program into the reservation area, or bring it back to the main screen. You can even pop up menus or notifications in their own areas, without blocking your main work.
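A reservation area could be modeled, very loosely, as a holding strip where programs are parked without being closed and restored with a gesture. This sketch assumes the behavior the article describes; the names and structure are invented for illustration.

```python
class ReservationArea:
    """An edge-of-screen strip that parks programs without closing them."""
    def __init__(self):
        self._parked: list[str] = []

    def park(self, program: str) -> None:
        # The program keeps running; only its window leaves the main screen.
        self._parked.append(program)

    def restore(self, program: str) -> str:
        # A return gesture moves the program back to the main screen.
        self._parked.remove(program)
        return program

area = ReservationArea()
area.park("Notebook")            # a swipe-to-edge gesture parks the app
area.park("Music Player")
print(area.restore("Notebook"))  # a swipe-back gesture brings it back
```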
What makes this system truly smart is how it understands your gestures. It doesn’t just look for basic moves like swipes or pinches. It analyzes the location, pressure, speed, angle, size, and even which hand you’re using. It can tell the difference between a light tap and a hard press, or between a right-handed and left-handed gesture. It can even recognize imagined gestures from brain signals, for users with special devices.
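To make the gesture-analysis idea concrete, here is a small Python sketch that extracts the kinds of features the text mentions (location, pressure, speed, angle, size) from one recorded stroke. The sample structure and feature set are assumptions for illustration, not the patent's actual signal processing.

```python
import math
from dataclasses import dataclass

@dataclass
class TouchSample:
    x: float         # screen position in pixels
    y: float
    pressure: float  # 0..1 from the touch sensor
    t: float         # timestamp in seconds

def gesture_features(samples: list[TouchSample]) -> dict:
    """Extract the features the text describes: location, pressure,
    speed, angle, and size of one gesture stroke."""
    start, end = samples[0], samples[-1]
    dx, dy = end.x - start.x, end.y - start.y
    distance = math.hypot(dx, dy)
    duration = max(end.t - start.t, 1e-6)
    return {
        "location": (start.x, start.y),
        "mean_pressure": sum(s.pressure for s in samples) / len(samples),
        "speed": distance / duration,                 # pixels per second
        "angle_deg": math.degrees(math.atan2(dy, dx)),
        "size": distance,                             # stroke length
    }

stroke = [TouchSample(100, 200, 0.4, 0.00),
          TouchSample(160, 180, 0.7, 0.08),
          TouchSample(220, 160, 0.5, 0.15)]
print(gesture_features(stroke))
```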
All of this is customizable. You can create your own gesture dictionary—pairing any gesture with any program or function. Want to open your calendar with a quick circle motion? Or switch to a different email app with a zig-zag? You decide. And the system uses artificial intelligence to learn your habits, remember your favorite gestures, and even spot when someone else is trying to use your device. If the system doesn’t recognize the way you move, it can lock the device until you prove your identity.
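In the simplest terms, a gesture dictionary could be a user-editable mapping from gestures to actions, with a lock as the fallback for movements the system doesn’t recognize. The sketch below is a hedged illustration of that idea, not the filed design; all names are made up.

```python
class GestureDictionary:
    """User-defined pairings of gestures with programs or functions."""
    def __init__(self):
        self._bindings: dict[str, str] = {}

    def bind(self, gesture: str, action: str) -> None:
        self._bindings[gesture] = action  # the user decides every pairing

    def dispatch(self, gesture: str) -> str:
        # Unrecognized movement falls back to locking, as the text describes.
        return self._bindings.get(gesture, "lock_device")

gestures = GestureDictionary()
gestures.bind("circle", "open_calendar")     # the quick circle from the text
gestures.bind("zig-zag", "switch_to_email")
print(gestures.dispatch("circle"))    # open_calendar
print(gestures.dispatch("triangle"))  # lock_device until identity is proven
```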
This invention also supports advanced devices like foldable screens, sliding screens, and virtual reality headsets. You can split, adjust, and control programs across multiple screens, or even in projected virtual spaces. The adjusters and sideline elements can move seamlessly from one screen to another, letting you multitask like never before.
For security, the system uses your unique way of moving and gesturing as a kind of digital fingerprint. It can learn your personal style, so even if someone knows your password, they can’t log in unless they move like you do.
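One plausible, purely illustrative way to treat gesture style as a digital fingerprint is to compare a new gesture’s measured features against the enrolled owner’s averages and lock the device when they diverge too far. The features and tolerance below are invented for the example.

```python
def gesture_matches_profile(sample: dict, profile: dict,
                            tolerance: float = 0.15) -> bool:
    """Compare a gesture's measured features against the enrolled owner's
    averages; return False (lock) if they diverge too far."""
    deviations = []
    for key in ("speed", "pressure", "angle"):
        expected, observed = profile[key], sample[key]
        deviations.append(abs(observed - expected) / max(abs(expected), 1e-6))
    return sum(deviations) / len(deviations) <= tolerance

owner = {"speed": 850.0, "pressure": 0.55, "angle": -18.0}
attempt = {"speed": 830.0, "pressure": 0.57, "angle": -17.0}
print(gesture_matches_profile(attempt, owner))   # True: moves like the owner
impostor = {"speed": 400.0, "pressure": 0.90, "angle": 30.0}
print(gesture_matches_profile(impostor, owner))  # False: lock the device
```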
The system is designed to work with any kind of input: touch, mouse, keyboard, voice, camera, brain signals, and even wearables like smart gloves. It can run locally on your device, or remotely in the cloud. And it’s flexible enough to let you undo actions, swap programs, or rearrange your work with just a gesture.
Whether you’re editing photos, writing emails, watching videos, or running a virtual meeting, this system lets you control everything at once, with fewer steps and less frustration. It adapts to your needs, your hands, and your habits.
Conclusion
The way we use our devices is changing fast. We want to do more, see more, and control more—without slowing down or getting lost in menus and windows. This patent application points the way to a future where your gestures, habits, and preferences shape your device, not the other way around. By combining smart dividers, precise adjusters, and an AI that learns from you, this system makes multitasking natural and effortless. It opens new doors for people of all ages and abilities, on any screen or device. If you’ve ever felt stuck switching between apps, or wished your device understood you better, this invention is what you’ve been waiting for. The future of screen control is here, and it fits in the palm of your hand.
To read the full application, visit https://ppubs.uspto.gov/pubwebapp/ and search for publication number 20250335086.


