On January 21, Microsoft introduced the new Surface Hub for enterprise collaboration and meetings during their Windows 10 event. Microsoft noted that the Surface Hub, as shown in the video above, would bring Skype to the enterprise, allow users to manipulate 3D CAM files using multitouch, and let them use the Surface Pen to annotate and mark up materials on screen, and more. In yesterday's presentation Microsoft also introduced two features, or apps, called "Meetings" and "Brainstorming." As the New Year began, the U.S. Patent & Trademark Office published one of Microsoft's patent applications covering the basics of their new Surface Hub interactive display, which we discovered today. The filing shows us some new customized gestures that the Surface Hub may introduce, along with a few other interesting facts. While there are sure to be further patent filings on this invention over time, this initial application couldn't have been published at a better moment.
Microsoft's Patent Background
In their patent background, Microsoft notes that it is common for people to work on whiteboards, where they can collaborate with others, brainstorm lists of important questions, and sketch simple charts. However, if the topic of discussion involves large amounts of data, it soon becomes necessary to draw on the computational power available in tools such as a spreadsheet. Using pen, touch, or a combination of pen and touch on a digital display has great potential to lead to new and more natural interactions with data that were not possible with the traditional whiteboard or with the typical desktop environment of mouse and keyboard. The present concepts offer a novel approach to interacting with data visualizations for data combination, data analysis, data communication, and/or brainstorming ideas on digital displays. This approach could be applied to various forms of interactive digital displays, such as pen- and/or touch-enabled tablets, notebooks, digital whiteboards, etc.
Microsoft's New Surface Hub Solution Revealed
Microsoft's invention directly links to the all-new Surface Hub introduced yesterday during the Windows 10 event. Microsoft states that "One example includes a display device configured to receive input from a user relative to data visualizations and automatically generate a new way of viewing the data. The system also includes a graphical user interface configured to be presented on the display device that allows a user to interact with the graphical user interface via user commands."
In Microsoft's patent FIG. 42 noted above we're able to see an example of the interactive experience system (#4200) that we now know is the new Surface Hub.
According to Microsoft, the system includes an interactive digital display (#4202), which elsewhere in the patent is also referred to as a digital whiteboard. One or more users could participate in an interactive experience with the interactive digital display.
The interactive digital display could include a screen 4206 (e.g., interactive surface) and multiple sensors 4207. This example includes four sets of sensors 4207. Graphical user interface (GUI) 4208 (e.g., display area, canvas) could be presented on the screen.
Many types of sensors could be utilized in various implementations. In the example found in patent FIG. 42, we see optical sensors noted as sensors 4207(1)-4207(3) and pressure sensors 4207(4), which could be integrated into the screen. The varying orientations of the sets of sensors are intended to detect a user engaging the interactive digital display with user commands.
Integrated Kinect Technology
The first, second, and third sets of optical sensors 4207(1), 4207(2), 4207(3) could be cameras, such as arrays of cameras. The cameras could be configured for visible light, infrared, and/or other frequencies. The cameras may operate in cooperation with an infrared pattern projector that could aid the cameras to distinguish objects from one another.
Microsoft notes that other camera configurations may employ time of flight or other techniques to enhance information captured by the cameras about the user(s) and/or the environment around the interactive digital display. In one implementation, the sets of sensors could include Microsoft's Kinect 3-D sensing technology.
Biometrics, High-Level Gestures & Gaze Controls
In some implementations (not shown), a first set of cameras could point away from the screen and a second set of cameras could point parallel to the screen surface to sense user input (e.g., gestures). The second set of cameras could allow the screen to function as a touch screen without actually having a touch sensitive surface (e.g., that senses a user's physical touch).
Microsoft further notes that yet other camera configurations could be employed such as those that image through the screen. One suitable camera for such a configuration is a wedge type camera that could be positioned in front of or behind the screen or to the side of the screen. This type of configuration could detect the user's fingers touching the screen and could also look at the user's hands, arms, eyes, etc.
Biometric information obtained by the cameras could be interpreted as a user command. For instance, where the user is looking on the shared canvas (e.g., user gaze) could be interpreted as a command relative to the content at that location.
Presence Recognition using Far Field
The sets of sensors could be divided into "near field" and "far field" subgroups. The far-field subgroup could be configured to capture a space more distant from the interactive digital display and could be used to detect a user(s) entering the room in which the display is located. The near-field subgroup could be configured to capture biometric data from a user(s) engaging the GUI.
Once a user has been detected, the sets of sensors or cameras could track the user's position relative to the screen and movements could be tracked to determine if the user is attempting a user command, such as writing on the screen, making a control gesture, etc.
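The near/far-field detection flow described above can be sketched in a few lines of Python. This is purely illustrative: the class, method names, and range thresholds are my own assumptions, not anything specified in Microsoft's filing.

```python
# Hypothetical sketch of the patent's near/far-field presence flow.
# The ranges below are assumed values, not figures from the filing.
FAR_FIELD_RANGE_M = 4.0   # assumed reach of the far-field cameras
NEAR_FIELD_RANGE_M = 1.0  # assumed distance at which a user can engage the GUI


class PresenceTracker:
    """Tracks users from room entry (far field) to screen engagement (near field)."""

    def __init__(self):
        self.users = {}  # user_id -> last known distance from the display

    def on_sensor_reading(self, user_id, distance_m):
        """Classify a distance reading from the sensor subgroups."""
        self.users[user_id] = distance_m
        if distance_m > FAR_FIELD_RANGE_M:
            return "out_of_range"
        if distance_m > NEAR_FIELD_RANGE_M:
            return "detected"   # far field: user has entered the room
        return "engaged"        # near field: start watching for user commands
```

Once a reading falls into the "engaged" state, a real system would begin interpreting the user's movements (writing, control gestures, and so on) as candidate commands.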
Smartphone Interaction with Surface Hub
Microsoft points to the users noted in patent FIG. 42 as both having smartphones that could be utilized for entering commands on the Surface Hub and for user identification. The smartphones could also work alternatively or additionally to the sets of sensors/cameras to help detect user commands and/or identify the user(s).
For example, user 4204(1) noted in the patent figure above could enter a command on their smartphone, which could be used by the Surface Hub to manipulate data visualizations on the display's GUI. In another example, the smartphone could send out an identification signal, like a beacon signal, that could be used by the system to identify the user. Microsoft also notes that the Surface Pen could serve dual purposes, as a digital writing instrument and as a means of identifying users.
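The beacon-style identification idea amounts to resolving a broadcast ID to a known user. A minimal sketch, assuming a simple lookup table (the beacon IDs and user labels here are invented for illustration):

```python
# Illustrative sketch of smartphone beacon identification as described in
# the patent. The registry and its keys are assumptions, not Microsoft's API.
KNOWN_USERS = {
    "beacon-4204-1": "user 4204(1)",
    "beacon-4204-2": "user 4204(2)",
}


def identify_from_beacon(beacon_id):
    """Resolve a received beacon signal to a registered user, or None if unknown."""
    return KNOWN_USERS.get(beacon_id)
```

A real implementation would pair the beacon with the sensor data, so that a command detected near the screen can be attributed to the identified user standing there.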
Special Surface Hub Gestures
Microsoft describes a new gesture that could be used by the Surface Hub. The gesture known as the "Bump Gesture" would allow an individual to bump two different data sets together and have the data sets merge into a single data set.
In FIG. 36 noted below, the user can perform a lasso gesture (#2900) by drawing a circle around the cards 3300(1), 3300(2), and 3300(3) using the Surface Pen (#910).
Microsoft's Surface Hub could have an option to separate combined visualizations, for example with a "shake" gesture (#1700) noted below in patent figure FIG. 17.
In the example noted above, the shake gesture entails the user swiping his/her index finger back and forth across the screen, effectively "shaking" the scatter plot apart. The shake gesture separates data, in contrast to the bump gesture, which brings two data sets together.
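The bump and shake gestures are effectively inverse operations on data sets. A minimal sketch of that pairing, assuming each visualization is backed by a list of records (the function names are illustrative; the patent does not specify an API):

```python
# Hedged sketch of the "bump" (merge) and "shake" (separate) gestures.
def bump_merge(dataset_a, dataset_b):
    """Bump gesture: combine two data sets into a single merged data set."""
    return dataset_a + dataset_b


def shake_split(merged, origin_sizes):
    """Shake gesture: separate a combined data set back into its original parts,
    given the sizes of the data sets that were bumped together."""
    parts, start = [], 0
    for size in origin_sizes:
        parts.append(merged[start:start + size])
        start += size
    return parts
```

The symmetry is the point: `shake_split(bump_merge(a, b), [len(a), len(b)])` recovers the original two data sets.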
Random Patent Figures
Microsoft's patent FIG. 44 noted below illustrates the architecture behind the new Surface Hub. The system 4400 includes a digital interactive component 4402 that receives user interaction 4404 and processes the user interaction as, for instance, a stroke. It is to be understood that the stroke can be the result of many different types of input modalities.
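The key idea in FIG. 44 is that pen, touch, and gesture input all reduce to a common "stroke" that the rest of the system processes. A minimal sketch of that normalization step, where the dataclass and field names are my own assumptions:

```python
# Illustrative sketch of FIG. 44's idea that different input modalities
# all normalize to a common "stroke"; names here are assumptions.
from dataclasses import dataclass


@dataclass
class Stroke:
    points: list    # (x, y) screen coordinates traced by the input
    modality: str   # "pen", "touch", or "gesture"


def to_stroke(raw_points, modality):
    """Normalize raw input from any modality into a Stroke for downstream processing."""
    return Stroke(points=list(raw_points), modality=modality)
```

Downstream components (annotation, gesture recognition, and so on) would then operate on strokes without needing to know which modality produced them.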
Yesterday Microsoft revealed that the new whiteboard is powered by their OneNote application, a digital note-taking application, so users can easily save notes in the app or email them. As highlighted in patent FIG. 44 above, annotation plays a huge part in this new system. Microsoft drills deeply into annotation in this patent, covering it in over 25 paragraphs.
And lastly, Microsoft notes that the present implementations are not limited to a specific type of screen. In contrast, workable implementations can be accomplished with projection screens, light emitting diode (LED) screens, liquid crystal screens, electroluminescent screens, plasma screens, and/or other developing or yet to be developed display and screen types.
Microsoft filed their patent application back in July 2014. Considering that this is a patent application, the timing of such a product to market is unknown at this time.
A Note for Tech Sites covering our Report: We ask tech sites covering our report to kindly limit the use of our graphics to one image. Thanking you in advance for your cooperation.
Patently Mobile presents a detailed summary of patent applications with associated graphics for journalistic news purposes as each such patent application is revealed by the U.S. Patent & Trademark Office. Readers are cautioned that the full text of any patent application should be read in its entirety for full and accurate details.