Earlier this month the US Patent Office published a new Microsoft patent application covering an invention that uses wearables in a new way to control other devices. The wearable could be a ring or wristband, for example, and it would be able to control other smart devices around you such as a TV, smartphone or HMD (think HoloLens). Because of its small form factor, the wearable can't use a touch display, so it incorporates a microphone to serve as a control device. In one example, the user could scratch the arm of their family room leather chair, and that unique sound wave could change the channel or adjust the audio of a TV, advance a tune on a stereo, and so forth.
Microsoft's Patent Background
Gesture-based user interaction allows a user to control an electronic device by making gestures such as writing letters to spell words, swatting a hand to navigate a selector, or directing a remote controller to direct a character in a video game. One way to provide for such interaction is to use a device such as a mobile phone or tablet computing device equipped with a touch screen for two-dimensional (2-D) touch input. However, this has the disadvantage that the screen is typically occluded while it is being touched, and devices that include touch screens are also comparatively expensive and somewhat large in their form factors.
Another way is to use depth cameras to track a user's movements and enable three-dimensional (3-D) gesture input to a system having an associated display, and such functionality has been provided in certain smart televisions and game consoles. One drawback of such three-dimensional gesture tracking devices is that they have high power requirements, which present challenges for implementation in portable computing devices; another is that they typically require a fixed camera to observe the scene, also a challenge to portability.
For these reasons, there are challenges to adopting touch screens and 3-D gesture tracking technologies as input devices for computing devices with ultra-portable form factors, including wearable computing devices.
Microsoft's Invention: Worn Device for Surface Gesture Input
Microsoft's invention relates to energy efficient gesture input on a surface. Microsoft's patent figure 5 noted below provides you with an illustration of a hand-worn device like a ring or bracelet that could include a microphone configured to capture an audio input and generate an audio signal, an accelerometer configured to capture a motion input and generate an accelerometer signal, and a controller comprising a processor and memory.
Further, Microsoft's patent figure 5 shows the surface upon which the user is gesturing, in this example a countertop. The wide arrow indicates the user dragging their finger along the countertop to provide a gesture input; their entire hand, including the hand-worn device, may move in nearly or exactly the same manner as their finger, such that the accelerometer in the hand-worn device may generate an accurate accelerometer signal. The friction generated between the countertop and the user's finger may produce sound waves, as visually represented in the figure.
The sound waves may serve as an audio input and the thin arrows in patent figure 5 may demonstrate the microphone in the hand-worn device capturing the audio input.
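The patent does not spell out how the two signals are combined, but the pairing described above suggests a simple fusion: a surface gesture is registered only when the microphone hears friction sound and the accelerometer senses hand motion in the same window. The following is a minimal Python sketch of that idea; the function name and thresholds are hypothetical, not from the filing.

```python
import math

def detect_surface_gesture(audio_samples, accel_samples,
                           audio_threshold=0.02, motion_threshold=0.15):
    """Flag a surface gesture only when friction sound and hand motion
    are both present in the same time window."""
    # RMS energy of the audio window (friction of skin on the surface).
    audio_rms = math.sqrt(sum(s * s for s in audio_samples) / len(audio_samples))

    # Mean deviation of acceleration magnitude from 1 g (hand movement).
    magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in accel_samples]
    motion = sum(abs(m - 1.0) for m in magnitudes) / len(magnitudes)

    # Requiring both signals rejects stray noises and idle hand motion.
    return audio_rms > audio_threshold and motion > motion_threshold
```

Requiring both cues is what would let the device ignore, say, a loud room (sound without motion) or a hand gesturing in mid-air (motion without friction sound).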
In patent figure 2 noted below we're able to see the hand-worn device is a ring, which may be the size and shape of a typical ring worn as jewelry. However, other manifestations may be possible, such as a watch, wristband, fingerless glove, or other hand-worn device. In this instance, the user is gesturing the letter "A" with their finger on a table, providing a gesture input #48.
In order for the microphone to capture an audio input #50, skin is typically dragged across a surface, and in order for the accelerometer to capture a motion input #52, the accelerometer is typically placed near enough to where the user touches the surface to provide a usable accelerometer signal #54.
To account for different users, surfaces, and situations, the accelerometer may be further configured to determine a tilt of the hand-worn device, allowing the signal to remain usable even when the surface is not perfectly horizontal.
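The filing doesn't say how tilt would be computed, but a common approach with a three-axis accelerometer is to estimate it from the gravity vector while the hand is roughly still. A minimal sketch under that assumption (readings in g units; function name is hypothetical):

```python
import math

def tilt_angles(ax, ay, az):
    """Estimate device tilt (pitch, roll in degrees) from the gravity
    component of a static three-axis accelerometer reading in g units."""
    # Pitch: rotation about the Y axis, from X versus the Y-Z plane.
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    # Roll: rotation about the X axis, from Y versus Z.
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```

With the device flat on a level surface (`ax = ay = 0, az = 1`), both angles come out zero; a tilted surface shows up directly in the pitch and roll values, which could then be used to correct the gesture trace.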
Gesture input may take place in many different situations with different surroundings, as well as on a variety of different types of surfaces. The audio input detected by the microphone may be the sound of skin dragging across a surface, as one example. Sound may be produced in the same frequency band regardless of the composition of the surface, thus the surface may be composed of wood, plastic, paper, glass, cloth, skin, etc.
As long as the surface generates enough friction when rubbed with skin to produce an audio input detectable by the microphone, virtually any sturdy surface material may be used. Additionally, any suitable surface that is close at hand may be used, so it is not necessary to gesture on only one specific surface, increasing the utility of the hand-worn device in a variety of environments.
Energy Harvesting
Microsoft later notes in their filing that the hand-worn device may further comprise a battery configured to store energy and energy harvesting circuitry including an energy harvesting coil. The energy harvesting circuitry may include a capacitor. The energy harvesting circuitry may be configured to siphon energy from a device other than the hand-worn device via a wireless energy transfer technique such as near-field communication (NFC) or an inductive charging standard, and charge the battery with the siphoned energy.
The energy may be siphoned from a mobile phone with NFC capabilities, for example. Simply holding the mobile phone may put the hand-worn device in close proximity to an NFC chip in the mobile phone, allowing the hand-worn device to charge the battery throughout the day through natural actions of the user and without requiring removal of the hand-worn device.
Actions/Applications
The application input may be letters, symbols, or commands, for example. Commands may include scrolling, changing pages, zooming in or out, cycling through displayed media, selecting, changing channels, and adjusting volume, among others. The API may provide context to the stroke decoder such that the stroke decoder may only recognize, for example, strokes of letters for text entry or scrolls for scrolling through displayed pages. Such gestures may be difficult to disambiguate without context from the API.
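The context-narrowing role of the API described above can be sketched as a decoder that only considers candidate gestures valid in the current context. Everything here (the gesture names, the scoring, the `RECOGNIZERS` table) is a hypothetical illustration, not the patent's actual decoder.

```python
# Hypothetical map from API-supplied context to the gestures that the
# stroke decoder is allowed to recognize in that context.
RECOGNIZERS = {
    "letter": {"A", "B", "C"},               # text-entry strokes
    "scroll": {"scroll_up", "scroll_down"},  # page-navigation strokes
}

def decode_stroke(candidates, context):
    """Return the best candidate gesture permitted by the API context.

    `candidates` is a list of (gesture, score) pairs from the raw
    stroke matcher; only gestures valid in `context` are considered."""
    allowed = RECOGNIZERS.get(context, set())
    valid = [(g, s) for g, s in candidates if g in allowed]
    if not valid:
        return None
    return max(valid, key=lambda pair: pair[1])[0]
```

A stroke that scores well as both a letter and a scroll is ambiguous on its own, but once the API says the user is in a text field, the decoder never even considers the scroll interpretation.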
One application could be, for example, a device that controls a television. The hand-worn device may receive gesture input that corresponds to application input to change the channel on the television or adjust the volume. In this case, the surface may be a couch arm or the user's own leg.
In another example, the computing device may control a television and allow a user to stream movies. In this case, the hand-worn device may receive a swipe or scroll application input to browse through movies, or it may allow the user to input letters to search by title, etc. In another example, the computing device may control the display of a presentation, letting the user advance slides without holding onto a remote, which is easily dropped.
In another example, the computing device may allow a user to access a plurality of devices. In such a situation, the user may be able to, for example, turn on various appliances in a home, by using one hand-worn device.
In yet another example, the computing device may control a head-mounted display (HMD) or be a watch or mobile phone, where space for input on a built-in surface is limited.
For instance, if the computing device is a mobile phone, it may ring at an inopportune time for the user. The user may frantically search through pockets and bags to find the mobile phone and silence the ringer. However, by using the hand-worn device, the user may easily interact with the mobile phone from a distance by tapping to mute the ring tone.
Finally Microsoft notes that Natural User Interface (NUI) componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity.
Microsoft filed their patent application back in May 2014. Considering that this is a patent application, the timing of such a product to market is unknown at this time.
Back in 2010 Patently Apple reported on the use of sound waves from scratches that was illustrated in a video from Chris Harrison, a then third year Ph.D. student at Carnegie Mellon University. Harrison later went to Microsoft as a research intern as noted here.
A Note for Tech Sites covering our Report: We ask tech sites covering our report to kindly limit the use of our graphics to one image. Thanking you in advance for your cooperation.
Patently Mobile presents a detailed summary of patent applications with associated graphics for journalistic news purposes as each such patent application is revealed by the U.S. Patent & Trade Office. Readers are cautioned that the full text of any patent application should be read in its entirety for full and accurate details. About Posting Comments: Patently Mobile reserves the right to post, dismiss or edit any comments.