Earlier this month the U.S. Patent & Trademark Office published a patent application from Microsoft that reveals a next-generation 3D gesture recognition system that doesn't require multitouch commands. The new gesture system is designed to work with traditional smart devices such as tablets, watches and smartphones, while also being able to work with future devices like kitchen appliances, with or without a display. The system will be able to recognize finger gestures as well as larger two-arm gestures for working with whiteboards and beyond. Some applications will work with a 3D camera system while others will not. Microsoft has been working on such systems for some time now, as noted in our January 2015 report here. In the big picture, a 3D gesture system will be just one of many new Natural User Interfaces that we'll be using more of in the future. Microsoft's Virtual Assistant known as 'Cortana,' for instance, now works with Windows 10 based desktops and notebooks, ahead of Apple extending Siri beyond iOS. Microsoft needs to deliver high-end next-gen solutions such as the one noted in today's report if it's ever to win back the mind share it has lost to Apple since the rise of the iPhone. Today's invention is likely beyond a mere idea, as Microsoft has extensive experience with both large touch-display systems via PixelSense and in-air gesturing at a distance via Kinect's 3D scanning.
Microsoft's Patent Background
Human interaction with touch-enabled devices is generally constrained to the surface of these devices through touch sensors. However, the wide range of computing device sizes, from tiny wearable devices (e.g., smart watches) to huge displays, can limit the touch sensor as the primary input medium. On small screens, touching the display is inherently problematic because human fingers can cover a portion of the screen, obstructing visibility of the display. On the other hand, interacting with touch sensors on large displays can be cumbersome. In some cases, it may be advantageous to be able to detect a user gesture without relying on touch sensors.
Microsoft's Invention: 3D Gesture Recognition
Microsoft's invention relates to 3D gesture recognition. One example gesture recognition system can include a gesture detection assembly. The gesture detection assembly can include a sensor cell array and a controller that can send signals at different frequencies to individual sensor cells of the sensor cell array. The example gesture recognition system can also include a gesture recognition component that can determine parameters of an object proximate the sensor cell array from responses of the individual sensor cells to the signals at the different frequencies, and can identify a gesture performed by the object using the parameters.
The present concepts offer a novel approach to touchless interaction with digital displays. In some implementations, a two-dimensional (2D) array of radio frequency (RF) sensor cells can be used to detect the proximity of an object (e.g., a human body part). By monitoring changes in a frequency response from different sensor cells in the sensor cell array over time, an object near the sensor cell array can be tracked in three dimensions, enabling gesture recognition (e.g., gesture identification). By reducing a number of frequencies selected for distance and position classification, accurate real-time 3D gesture recognition can be performed with fewer resources. The RF sensor cell array can work without line-of-sight, can be embedded behind any type of surface, can be scaled, and/or can have relatively low power consumption compared to other proximity-sensing technologies.
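To make the sensing pipeline concrete, here is a minimal Python sketch of the idea described above: each cell in a 2D array is probed at a small, pre-selected set of frequencies, and deviations from a no-object baseline response serve as a rough per-cell proximity estimate. All names (PROBE_FREQS, SensorCellArray, proximity_map), the frequency values, and the placeholder data are our own assumptions for illustration, not Microsoft's implementation.

```python
import numpy as np

# Hypothetical probe frequencies retained after feature selection; the patent
# notes that reducing the number of frequencies used for classification makes
# real-time 3D recognition cheaper.
PROBE_FREQS = [2.40e9, 2.44e9, 2.48e9]  # Hz, assumed values

class SensorCellArray:
    """Toy model of a 2D array of near-field RF proximity sensor cells."""

    def __init__(self, rows, cols):
        self.shape = (rows, cols)
        # Baseline response of each cell at each probe frequency, captured
        # with no object present.
        self.baseline = np.zeros((rows, cols, len(PROBE_FREQS)))

    def read_responses(self):
        """Return the current response of every cell at every probe frequency.

        Real hardware would route a signal to each cell and read a power
        detector; here we return placeholder data of the right shape.
        """
        return np.zeros((*self.shape, len(PROBE_FREQS)))

def proximity_map(array):
    """Estimate per-cell proximity as deviation from the no-object baseline.

    A nearby object detunes a cell and shifts its frequency response; the
    size of that shift serves as a rough proxy for how close the object is.
    """
    delta = array.read_responses() - array.baseline
    return np.linalg.norm(delta, axis=-1)  # one proximity value per cell
```

Tracking how this per-cell map changes over time is what lets the system follow an object in three dimensions rather than just on the surface.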
In some implementations, a 3D gesture recognition system can include a 3D gesture detection assembly and a gesture recognition component. The 3D gesture detection assembly can include the 2D array of RF sensor cells mentioned above. Individual sensor cells in the RF sensor cell array can act as near-field RF proximity sensors to detect the proximity of an object. The detected proximity of the object can be analyzed by the gesture recognition component and identified (e.g., recognized) as a gesture.
System Example
Microsoft's FIG. 9 illustrated below shows us a first user (#900) interacting with refrigerator #602(2) and a second user (#902) interacting with digital whiteboard #602(4). In this case, a gesture detection assembly #100(1) can be embedded in or otherwise associated with the refrigerator.
While the system is designed to work with tablets and various other types of devices with displays, the gesture detection device can be employed in other scenarios. In the refrigerator example above, the upper refrigerator door could serve as the gesture detection device #602, positioned behind a traditional panel (e.g., an opaque surface).
Alternatively, a sensor cell array could be embedded in a portion of the refrigerator door that also functions as a display. In still other implementations, gesture detection assemblies can be embedded or associated with another appliance, device, or other hardware in a person's home, office, etc.
In FIG. 9 above we've highlighted the sensor cell assemblies behind the surface of the two devices in yellow. Below, patent FIG. 5 shows the sensor cell assembly in more detail, illustrating how an open-hand gesture is tracked by the gesture recognition component #508 so that the gesture/command can be recognized.
In patent FIG. 5 Microsoft illustrates an example 3D gesture recognition scenario #500. In this example, a sensor cell array #502 can include multiple individual sensor cells #504. The sensor cell array can be similar to the sensor cell array 102 illustrated further below in patent FIG. 1.
In some implementations, the sensor cell array can be included on an RF-based 3D gesture detection assembly. In this case, the user's hand #506 can interact with the sensor cell array. For example, the user can perform a gesture represented by three instances (Instances 1, 2, and 3). In this example, a gesture recognition component #508 is able to identify the gesture performed by the user's hand.
In general, gesture identification by the gesture recognition component #508 can include the ability to sense an object and/or parameters of the object. Parameters can include an orientation of the object in 3D space, a profile (e.g., shape, outline) of the object, and/or movement of the object over a duration of time, among other parameters.
In one example, the object can be a user's hand. As the user's hand hovers over a 3D gesture detection assembly, individual sensor cells can sense different parts of the hand, an orientation of the hand, a profile of the hand (e.g., a change in a position of one or more fingers), and/or other parameters. The gesture recognition component can use some or all of the sensed parameters to identify a gesture or other input of the user.
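As a rough illustration of how such parameters might be derived and used, here is a Python sketch (building on the proximity-map idea above) that extracts an object's position, height, and profile from one snapshot of the array, then classifies a simple gesture from parameters sampled over time. The threshold values, parameter names, and 'swipe' classification are hypothetical stand-ins for whatever trained classifier a real system would use.

```python
import numpy as np

def object_parameters(prox_map, threshold=0.1):
    """Derive coarse object parameters from one proximity snapshot
    (prox_map is the per-cell estimate from the earlier sketch)."""
    active = prox_map > threshold               # cells the object covers
    if not active.any():
        return {"present": False}
    ys, xs = np.nonzero(active)
    return {
        "present": True,
        "position": (xs.mean(), ys.mean()),     # centroid over the 2D array
        "height": prox_map[active].mean(),      # crude above-surface distance
        "profile": active,                      # outline/shape of the object
    }

def identify_gesture(frames):
    """Classify a gesture from parameters sampled over time, like the three
    'instances' of FIG. 5. A real system would use a trained classifier;
    this only checks horizontal motion of the centroid as an illustration."""
    positions = [f["position"] for f in frames if f.get("present")]
    if len(positions) < 2:
        return "none"
    dx = positions[-1][0] - positions[0][0]
    if dx > 1.0:
        return "swipe-right"
    if dx < -1.0:
        return "swipe-left"
    return "hover"
```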
Beyond the traditional hand gestures now used on displays, the system is designed to recognize much larger and customized 3D gestures. For instance, the gesture recognition component may be able to detect 3D movement of both of a user's outstretched arms and identify the movement or other parameters as a two-arm user gesture.
The RF-Based 3D Gesture Detection Assembly
Microsoft's patent FIG. 1 is a schematic diagram of an example RF-based 3D gesture detection assembly #100. In this example, the assembly includes a sensor cell array #102 made up of multiple individual sensor cells #104. The assembly can also include a controller #106, a sensor cell driver #108, a switching network #110, and/or a power detector #112, among other components.
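The patent doesn't spell out firmware, but the roles of these components suggest a scan loop along the following lines. This is a speculative sketch: the injected callables stand in for the switching network, sensor cell driver, and power detector named above, and none of the interfaces come from Microsoft's filing.

```python
class GestureDetectionAssembly:
    """Speculative model of the FIG. 1 parts list: the controller (#106)
    selects one cell at a time through the switching network (#110), the
    sensor cell driver (#108) excites it at a probe frequency, and the
    power detector (#112) reads the cell's response."""

    def __init__(self, cells, select_cell, excite, read_power):
        self.cells = cells              # (row, col) addresses of cells #104
        self.select_cell = select_cell  # switching network: route to a cell
        self.excite = excite            # driver: emit an RF signal at freq f
        self.read_power = read_power    # power detector: measure response

    def scan(self, freqs):
        """Controller role: sweep every cell at every probe frequency and
        collect the raw readings for the gesture recognition component."""
        readings = {}
        for cell in self.cells:
            self.select_cell(cell)
            for f in freqs:
                self.excite(f)
                readings[(cell, f)] = self.read_power()
        return readings
```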
The Gesture Recognition Component: Creating Custom Gestures
Microsoft later notes that in some implementations, the gesture recognition component (#508 in patent FIG. 5 above) can learn to recognize a position, distance, gesture, and/or other parameter based on a calibration performed by the user and/or on corrections made by the user.
For example, a user can be prompted to calibrate a gesture recognition component when first using or setting up a gesture recognition device.
In another example, if a user corrects an identification of a gesture made by a gesture recognition component, the gesture recognition component can learn to recognize the user's identification of the gesture. This aspect can also allow the user to create their own gestures. For instance, the user may be able to access a graphical user interface (GUI) via a 'settings' control that allows the user to perform a new gesture. The GUI may request that the user repeat the gesture multiple times, say 10 times, to increase accuracy. The gesture recognition component can capture the associated frequency response and then allow the user to tie the gesture to a command (e.g., 'this gesture means play video'). In some implementations, a user's calibration results and/or a user's identification of a gesture can be added to a frequency response mapping table.
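The custom-gesture flow described above maps naturally onto a small enrollment-and-lookup routine. The sketch below assumes a hypothetical capture_fn that records one performance of the gesture as a frequency-response trace; the 10-repetition averaging and the 'frequency response mapping table' follow the patent's description, while the distance-threshold matching is our own simplification.

```python
import numpy as np

def register_custom_gesture(capture_fn, command, repetitions=10):
    """Enroll a user-defined gesture: repeat it several times, average the
    captured frequency responses into a template, and tie it to a command."""
    samples = [capture_fn() for _ in range(repetitions)]
    template = np.mean(samples, axis=0)  # smooth out per-attempt variation
    return {"template": template, "command": command}

def lookup_gesture(trace, mapping_table, tolerance=0.5):
    """Match a live frequency-response trace against the mapping table."""
    for entry in mapping_table:
        if np.linalg.norm(trace - entry["template"]) < tolerance:
            return entry["command"]      # e.g. 'play video'
    return None
```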
Microsoft's filing notes that the patent application was originally filed back in July 2014. Considering that this is a patent application, the timing of such a product coming to market is unknown at this time.