In the last week, in-air gesturing has been a technology in the spotlight. On December 28th, Patently Apple posted a report titled "Apple could be onboard the 2019 3D Camera trend for Smartphones that will introduce In-Air Gesture Controls," followed days later by a report titled "Samsung Invents an In-Air Gesturing System Designed for a Next-Gen VR Headset." News coming to light today reveals that Google has not only been granted a major patent for such technology but that the FCC has granted Google a waiver to operate their new Soli sensors at higher power levels than currently allowed. This is a sign that Google's in-air gesturing system is moving to the next level, and perhaps closer to future products than anyone imagined.
The U.S. Patent and Trademark Office published a granted patent for Google in Q4 2018 for their invention covering wide-field radar-based in-air gesture recognition, which could apply to future Chromebooks, desktop computers, smartwatches, kitchen appliances, and entertainment devices from TVs to stereos.
What makes this granted patent interesting is that the Federal Communications Commission (FCC) said in an order late on Monday that it would grant Google a waiver to operate the Soli sensors at higher power levels than currently allowed.
The FCC said the decision "will serve the public interest by providing for innovative device control features using touchless hand gesture technology."
The FCC said the Soli sensor captures motion in a three-dimensional space using a radar beam to enable touchless control of functions or features.
Google says the sensor can allow users to press an invisible button between the thumb and index finger or turn a virtual dial by rubbing the thumb against the index finger.
The company says that “even though these controls are virtual, the interactions feel physical and responsive” as feedback is generated by the haptic sensation of fingers touching.
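To make the interaction concrete, below is a minimal sketch, in Python, of how a device might map classified radar micro-gestures onto the virtual button and dial described above. This is not Google's Soli API; the gesture labels, event names and scaling factor are assumptions for illustration only.

```python
# Hypothetical sketch (not Google's Soli API): mapping classified radar
# micro-gestures onto the "virtual button" and "virtual dial" controls
# described above. Gesture labels, event names and scaling are assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class GestureEvent:
    kind: str            # "button_press" or "dial_turn"
    value: float = 0.0   # dial rotation in degrees; unused for button presses

def map_micro_gesture(label: str, slide_mm: float = 0.0) -> Optional[GestureEvent]:
    """Translate a classified micro-gesture into a virtual-control event.

    label:    output of a (hypothetical) radar gesture classifier,
              e.g. "thumb_index_press" or "thumb_index_rub".
    slide_mm: how far the thumb slid along the index finger, used to
              scale the virtual dial's rotation.
    """
    if label == "thumb_index_press":
        # Pressing the "invisible button" between thumb and index finger.
        return GestureEvent(kind="button_press")
    if label == "thumb_index_rub":
        # Rubbing the thumb against the index finger turns a virtual dial;
        # assume ~2 degrees of rotation per millimetre of slide.
        return GestureEvent(kind="dial_turn", value=slide_mm * 2.0)
    return None  # unrecognized micro-gesture

# Example with made-up classifier output:
print(map_micro_gesture("thumb_index_rub", slide_mm=4.5))  # dial_turn, 9.0 degrees
```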
Google says the virtual tools can approximate the precision of natural human hand motion and that the sensor can be embedded in wearables, phones, computers and vehicles. The FCC also noted that the sensors can be operated aboard aircraft. For more on the FCC story, read the full Reuters report here.
You can review the FCC's order granting Google a waiver to use short-range interactive motion-sensing devices here.
Google's Granted Patent for Wide-Field Radar Based In-Air Gesturing
Google's patent background notes that small-screen computing devices continue to proliferate, such as smartphones and computing bracelets, rings, and watches. Like many computing devices, these small-screen devices often use virtual keyboards to interact with users. On these small screens, however, many people find interacting through virtual keyboards difficult, as they often result in slow and inaccurate inputs. This frustrates users and limits the applicability of small-screen computing devices. The problem has been addressed in part through screen-based gesture-recognition techniques. These screen-based gestures, however, still suffer from substantial usability issues due to the size of these screens.
To address this problem, optical finger- and hand-tracking techniques have been developed, which enable tracking of gestures not made on the screen. These optical techniques, however, have been large, costly, or inaccurate, thereby limiting their usefulness in addressing usability issues with small-screen computing devices.
Another approach has recently been developed in which gestures are tracked using radar. Current radar techniques, however, often require a large antenna array and suffer from numerous practical difficulties. These large antenna arrays use thin-beam scanning techniques to locate a large number of points in space, including points of a human action (e.g., fingers, arm, or hand). The techniques track the points of the human action along with the other points in space and then determine which points are associated with the human action and which are not.
With the action points determined, the techniques track their movement and, based on those movements, reconstruct the action as it unfolds. From this reconstructed movement, the techniques then determine the gesture associated with it. This permits some rudimentary gesture recognition but is limited by the large antenna array and the computational difficulties and resource requirements inherent in thin-beam scanning techniques.
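For context, here is a rough Python sketch of the prior-art point-tracking pipeline the patent background describes: points are located per frame, the ones belonging to the hand are selected, their movement is tracked, and the reconstructed motion is classified as a gesture. The range gate, displacement threshold and function names are illustrative assumptions, not the patent's method.

```python
# Rough sketch of the prior-art point-tracking pipeline described above
# (thin-beam scanning radar), not Google's wide-field technique. The range
# gate, displacement threshold, shapes and names are illustrative assumptions.

import numpy as np

def hand_points(frame: np.ndarray, max_range_m: float = 0.5) -> np.ndarray:
    """Keep only the points in one frame that plausibly belong to the hand.

    frame is an (M, 3) array of xyz positions; a simple range gate stands in
    for the association step the text describes."""
    return frame[np.linalg.norm(frame, axis=1) < max_range_m]

def classify_swipe(frames) -> str:
    """Reconstruct hand motion from per-frame centroids and label the gesture."""
    centroids = []
    for frame in frames:
        pts = hand_points(frame)
        if len(pts):
            centroids.append(pts.mean(axis=0))
    if len(centroids) < 2:
        return "no_gesture"
    dx = centroids[-1][0] - centroids[0][0]  # net horizontal displacement (m)
    if dx > 0.05:
        return "swipe_right"
    if dx < -0.05:
        return "swipe_left"
    return "no_gesture"

# Example: two synthetic frames with the hand moving 10 cm to the right.
frames = [np.array([[0.10, 0.0, 0.3]]), np.array([[0.20, 0.0, 0.3]])]
print(classify_swipe(frames))  # -> "swipe_right"
```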
Google's invention addresses the problems of the past with new techniques and devices for wide-field radar-based gesture recognition. These techniques and devices can accurately recognize gestures that are made in three dimensions, such as non-screen or "in-the-air" gestures.
These in-the-air gestures can be made from varying distances, such as from a person sitting on a couch to control a television, a person standing in a kitchen to control an oven or refrigerator, or centimeters from a computing watch's small-screen display.
The system could apply to home automation and control systems, entertainment systems, audio systems, other home appliances, security systems, netbooks, and e-readers. Note that the computing device can be wearable, non-wearable but mobile, or relatively immobile (e.g., desktops and appliances).
Google later notes that radar fields can be invisible and penetrate some materials, such as textiles, thereby further expanding how the wide-field radar-based gesture-recognition system can be used and embodied.
Google's wide-field radar-based in-air gesture system could handle both simple and complex gestures and, more importantly, provide superior accuracy and robust recognition.
Google's patent FIG. 1 below illustrates an example environment in which wide-field radar-based gesture recognition can be implemented; FIG. 2 illustrates the wide-field radar-based gesture-recognition system and computing device of FIG. 1 in detail.
Google's patent FIG. 4 below illustrates gestures made and signal elements determined based on those gestures; FIG. 7 illustrates an example of gestures made and signal elements determined using type-specific hardware abstraction modules.
Google's patent FIG. 3 illustrates an example method for determining signal elements for a gesture.
Google filed for this patent in the spring of 2016. The U.S. Patent Office granted Google this patent in Q4 2018.
One of Google's top engineers was credited with this invention. He came to Google via Motorola's advanced technology group five years ago. At Google he founded, built and directed an agile technology-innovation and product-development team to invent and develop two novel technologies for wearables and IoT: a 60GHz pico radar for touchless gesture interaction (Project Soli) and a smart-apparel computing platform (Project Jacquard). His latest project tackled wide-field radar-based gesture recognition.
Google has been working on this technology since 2014. Patently Mobile first reported on their initial patent application back in 2016; the patent figures below are from that filing. To view more on the 2016 invention, click here.