This will be the year that we begin to see Intel's RealSense camera come to market in a wide range of products from PCs to smartphones and tablets. Intel introduced their 3D camera back at CES 2014 and has since shown prototypes for smartphones and tablets. The sexy Asus iMac-like desktop and HP's 34" desktop both feature Intel's RealSense camera to recognize in-air hand gestures and enable instant login with 3D facial recognition. Intel continues to advance their camera technology with an emphasis on in-air gestures for video gaming, as revealed in a patent that surfaced at the U.S. Patent and Trademark Office last week.
According to Intel's patent filing, "The three-dimensional space can be characterized by targets, obstructions, and fields in, for example, a computer gaming environment in which, due to the physics characteristics of those objects, they interact with user gestures that are applied to virtual objects. Three-dimensional physics effects can be represented in this three-dimensional space. In this three-dimensional space, games and other applications can combine forces from targets, obstructions, and fields with forces from air gestures to provide a more complex, interactive, or realistic interaction with a user."
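To make that idea concrete, here's a rough Python sketch of the force-combination concept the filing describes: a gesture-derived force is simply summed with forces from environmental fields and obstructions before it's applied to a virtual object. All of the names and values below are illustrative assumptions, not anything spelled out in Intel's patent.

```python
import numpy as np

# Illustrative sketch only: combine a force derived from an air gesture
# with forces contributed by the game environment (fields, obstructions).
def net_force(gesture_force, environment_forces):
    """Sum the gesture-derived force with all environmental forces."""
    return np.sum([gesture_force, *environment_forces], axis=0)

gesture = np.array([2.0, 0.0, 0.0])         # e.g. a forward flick of the hand
current = np.array([0.0, -0.5, 0.0])        # an undersea current field (assumed)
drag    = np.array([-0.2, 0.0, 0.0])        # drag from a nearby obstruction (assumed)

print(net_force(gesture, [current, drag]))  # -> [ 1.8 -0.5  0. ]
```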
In Intel's patent FIG. 1A noted below we're able to see a diagram of an air gesture system having a display coupled to an array of cameras #103 and an array of microphones #105. In the illustrated example, there are two cameras and two microphones – though a larger or smaller number of cameras or microphones may be used for more or less accurate sensing of position and direction.
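While the patent doesn't spell out the math, a two-camera array like this would typically recover a hand's 3D position by triangulation. Below is a minimal Python sketch of the standard linear (DLT) method, assuming both cameras have been calibrated; the projection matrices and points in the example are placeholders, not values from the patent.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Estimate a 3D point from matching 2D pixel coordinates (x1, x2)
    seen by two cameras with 3x4 projection matrices P1 and P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],   # each 2D observation contributes
        x1[1] * P1[2] - P1[1],   # two linear constraints on the 3D point
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)  # least-squares solution via SVD
    X = Vt[-1]
    return X[:3] / X[3]          # homogeneous -> Euclidean coordinates

# Example: two cameras 0.1 units apart along x, observing a point at depth 2.
P1 = np.hstack([np.eye(3), [[0.0], [0.0], [0.0]]])
P2 = np.hstack([np.eye(3), [[-0.1], [0.0], [0.0]]])
print(triangulate(P1, P2, (0.0, 0.0), (-0.05, 0.0)))  # ~ [0. 0. 2.]
```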
The display may be a direct view or projection display using any type of display technology. As shown, the camera and microphone arrays are positioned over and attached to the display, though any other position may be used; the cameras and microphones may even be placed apart from each other and apart from the display. The arrays can be calibrated for, or configured with knowledge of, the position of the display in order to compensate for offset positions. The display may be part of a portable computer, a gaming console, a handheld smartphone, a personal digital assistant or a media player. Alternatively, the display may be a large flat panel television or computer monitor.
In the example shown, the display shows three submarines #109 from a side view progressing through an undersea environment. A user, shown as a hand #107, performs air gestures to direct torpedoes #111 at the displayed submarines. The cameras detect the user's air gestures, and the system executes a command to fire torpedoes. The system uses a gesture library for the undersea environment that contains the possible gestures. When the hand performs a gesture, the system compares the observed gesture to the gesture library, finds the closest match, and then looks up the associated command, such as "fire torpedoes."
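The patent describes this as a straightforward lookup: match the observed gesture against a library of templates, then dispatch the associated command. Here's a minimal Python sketch of that flow, assuming gestures have already been reduced to feature vectors; the library entries, feature values and distance threshold are all hypothetical.

```python
import numpy as np

# Hypothetical gesture library: gesture name -> (template features, command).
GESTURE_LIBRARY = {
    "flick_forward": (np.array([1.0, 0.0, 0.2]), "fire_torpedo"),
    "swipe_left":    (np.array([-1.0, 0.1, 0.0]), "steer_left"),
    "open_palm":     (np.array([0.0, 1.0, 0.0]), "pause"),
}

def match_gesture(observed, library=GESTURE_LIBRARY, threshold=0.5):
    """Return the command whose template is nearest to the observed
    features, or None if nothing is close enough."""
    best_cmd, best_dist = None, float("inf")
    for name, (template, command) in library.items():
        dist = np.linalg.norm(observed - template)
        if dist < best_dist:
            best_cmd, best_dist = command, dist
    return best_cmd if best_dist <= threshold else None

print(match_gesture(np.array([0.9, 0.05, 0.15])))  # -> "fire_torpedo"
```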
In Intel's patent FIG. 2B noted below we're able to see that an additional screen #131 has been added. This screen is shown as a portable device such as a smartphone or portable gaming system.
The smartphone or tablet's smaller display is placed, in this example, in front of the main large display. The system can determine the position of the smaller screen and present the portion of the three-dimensional space that lies in the plane of the small screen. So for example in FIG. 2B, the user has thrown a spaceship #127 toward the planet #121 and in particular at the target #125 on that planet. After the spaceship has been thrown, it first appears on the smaller screen.
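In practice this would mean culling the shared 3D scene down to a thin slab around the tracked plane of the handheld display. Here's a rough Python sketch of that test; the scene objects, slab thickness and screen pose are assumptions for illustration, not details from the patent.

```python
import numpy as np

def objects_in_screen_plane(objects, screen_center, screen_normal, thickness=0.5):
    """Return the names of objects whose distance from the small screen's
    plane is within half the slab thickness."""
    n = np.asarray(screen_normal, float)
    n /= np.linalg.norm(n)                     # unit normal of the screen plane
    visible = []
    for name, pos in objects.items():
        dist = abs(np.dot(np.asarray(pos, float) - screen_center, n))
        if dist <= thickness / 2:
            visible.append(name)
    return visible

# Hypothetical scene: ship and hidden moon near the handheld's plane,
# planet deep in the background on the main display.
scene = {"spaceship_127": [0.0, 0.0, 1.1],
         "moon_129":      [0.3, 0.2, 1.0],
         "planet_121":    [0.0, 0.0, 5.0]}

print(objects_in_screen_plane(scene, np.array([0.0, 0.0, 1.0]), [0, 0, 1]))
# -> ['spaceship_127', 'moon_129']  (planet_121 lies beyond the slab)
```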
As shown, we're able to see an object #129 on the smaller screen that isn't visible on the main screen. This object #129 takes the form of another moon, which can exert a gravitational or other force on the spaceship #127. As the spaceship continues through the three-dimensional space, it will leave the smaller display and, after some time, show up on the large display.
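A simple way to model the hidden moon's pull on the spaceship is a basic physics step: apply an inverse-square attraction each frame and integrate. The sketch below is a plain Euler integrator with made-up masses, positions and a time step, purely to illustrate how the off-screen moon would bend the ship's path.

```python
import numpy as np

def step(pos, vel, moon_pos, mu=1.0, dt=0.05):
    """Advance the spaceship one time step under the moon's gravity."""
    delta = moon_pos - pos
    r = np.linalg.norm(delta)
    accel = mu * delta / r**3      # inverse-square pull toward the moon
    vel = vel + accel * dt
    pos = pos + vel * dt
    return pos, vel

pos  = np.array([0.0, 0.0, 0.0])   # ship launched from the origin (assumed)
vel  = np.array([1.0, 0.0, 0.0])   # initial throw direction (assumed)
moon = np.array([2.0, 1.0, 0.0])   # the off-screen moon #129 (assumed)

for _ in range(100):               # the path curves toward the moon over time
    pos, vel = step(pos, vel, moon)
print(pos)
```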
The addition of the small screen adds a new dimension to this particular type of gameplay. The main camera array or some other proximity sensing system can determine the position of the small screen in real time, so the user can move the small screen around to see objects that are not displayed on the main screen. As a result, upon throwing a spaceship #127 in the example of FIG. 2A, if the course and velocity of the spaceship are significantly altered, the user can use the small screen to find which objects have influenced its path and compensate accordingly. The small screen can be moved through different planes along the z-axis to see what is in front of the large screen, and a similar approach can be used to see what is beside or behind it.
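Real-time tracking of the small screen from the camera array would produce noisy position estimates frame to frame, so some smoothing would likely be needed before the pose drives what's rendered. Here's a toy Python sketch using an exponential moving average; the measurements and smoothing factor are invented for illustration.

```python
import numpy as np

class ScreenTracker:
    """Smooths noisy per-frame position estimates of the handheld screen."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha   # higher alpha = more responsive, less smooth
        self.pos = None

    def update(self, measured_pos):
        measured_pos = np.asarray(measured_pos, float)
        if self.pos is None:
            self.pos = measured_pos
        else:  # exponential moving average of position
            self.pos = self.alpha * measured_pos + (1 - self.alpha) * self.pos
        return self.pos

tracker = ScreenTracker()
# Fake noisy measurements standing in for real camera-array output.
for noisy in ([0.0, 0.0, 1.02], [0.01, -0.02, 0.98], [0.0, 0.01, 1.01]):
    smoothed = tracker.update(noisy)
print(smoothed)  # settled estimate of the small screen's center
```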
I could definitely see this aspect of the RealSense camera technology adding a very cool dimension to gaming in the future. For more on this report, see Patently Apple's full report here.