In May Microsoft gave customers a peek at Surface Hub 2, as shown in the video below. Built for teams to work seamlessly together wherever they are, Hub 2 scales and adapts to turn any space into a teamwork space. Microsoft states that Surface Hub 2 is coming in 2019.
The Hub 2 can sit on a portable stand or be mounted on a wall like a mirror, bringing video conferencing to giant displays. Once you've watched Microsoft's promotional video, you'll be able to appreciate one of Microsoft's latest patent wins, which perhaps describes where a Hub 3 could be going in the future.
This week the U.S. Patent & Trademark Office published a patent granted to Microsoft describing a tele-immersive collaboration system that enables real-time interaction among two or more participants who are geographically separated from each other. The patent may be giving us a peek at some of the ideas behind a future Hub 3. It shares Hub 2's vision of team collaboration but advances it toward immersive communication.
This kind of system differs from a conventional video conferencing system by giving each participant the impression that he or she is working in the same physical space as the other remote participants.
One tele-immersive collaboration system provides a shared-space experience using a window metaphor. That is, this type of system gives a first participant the impression that he or she is looking through a transparent window at a second participant located on the opposite side of the window. But that approach has drawbacks, and Microsoft's invention aims to overcome them.
Microsoft's invention covers a tele-immersive environment that includes two or more set-ups. A local participant corresponds to a participant who is physically present at a particular local set-up; a remote participant corresponds to a participant who is physically present at a set-up that is remote with respect to the local set-up. Each set-up, in turn, includes mirror functionality for producing a three-dimensional virtual space for viewing by a local participant. That virtual space shows at least some of the participants as if the participants were physically present at a same location and looking into a mirror.
In one illustrative implementation, the mirror functionality provided by each set-up includes a physical semi-transparent mirror placed in front of a display device. The semi-transparent mirror presents a virtual image of the local participant, while the display device presents a virtual image of the remote participant(s).
In another illustrative implementation, the mirror functionality includes a display device that simulates a physical mirror. That is, the display device in this embodiment presents a virtual image of both the local participant and the remote participant(s), without the use of a physical semi-transparent mirror.
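The mirror-simulating display described in this second implementation can be thought of as compositing a horizontally flipped local camera feed (a mirror reverses left and right) with rendered images of the remote participants. A minimal NumPy sketch of that idea; the function name and the mask-based overlay are illustrative assumptions, not details from the patent:

```python
import numpy as np

def composite_mirror_view(local_frame, remote_layer, remote_mask):
    """Simulate a mirror on a plain display: flip the local camera
    frame horizontally, then overlay the rendered remote
    participant(s) wherever their mask is set."""
    mirrored = local_frame[:, ::-1]            # horizontal flip = mirror
    out = mirrored.copy()
    out[remote_mask] = remote_layer[remote_mask]
    return out

# Tiny 2x3 grayscale example: a local frame plus a remote overlay.
local = np.array([[1, 2, 3],
                  [4, 5, 6]])
remote = np.full_like(local, 9)                # remote participant's pixels
mask = np.array([[False, True, False],
                 [False, False, False]])       # where the remote appears

view = composite_mirror_view(local, remote, mask)
print(view)
```

A real system would render the remote participants from the 3D scene information at mirror-consistent positions; the overlay here just stands in for that step.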
According to another illustrative aspect, each set-up includes functionality for constructing a depth image of its local participant.
According to another illustrative aspect, each set-up includes a physical workspace in which the local participant may place a physical object. The set-up produces a virtual object which is the counterpart of the physical object. In one implementation, the physical workspace includes a workspace table on which the local participant may place physical objects.
According to another illustrative aspect, the mirror functionality at each set-up provides functionality that allows participants to jointly manipulate a virtual object. The virtual object may or may not have a counterpart physical object in the workspace of one of the set-ups.
Microsoft's patent FIG. 1 below shows an overview of a tele-immersive environment that uses a mirror metaphor, illustrated with three participants.
Microsoft's patent FIG. 2 depicts a tele-immersive experience that the environment (of FIG. 1) provides to a local first participant #202. This tele-immersive session involves just two participants, although, as noted in FIG. 1, the session can involve more than two people.
Microsoft's patent FIG. 3 below shows a first implementation of an environment that may produce the experience illustrated in FIG. 2. This implementation provides mirror functionality that uses a physical semi-transparent mirror in conjunction with a display device, which is placed behind the mirror.
3D Virtual Spaces and Displays
The display device #318 noted above receives the 3D scene information provided by the local processing system #312. Based on that information, the display device displays a three-dimensional virtual space that is populated by one or more virtual images.
The display device can be implemented using any display technology, such as an LCD display. In another implementation, the display device may be implemented as a stereo display device, or as a three-dimensional projection device which casts stereo information onto any surface (such as a wall). The participant P1 may view the output of such a stereo display using shutter glasses, a HoloLens, or the like; this gives the impression that objects in the virtual space have a depth dimension.
A set-up may provide multiple cameras at different locations around the local participant to capture a representation of the participant from different vantage points. Each camera produces a separate instance of camera information. The local image construction module may merge the different instances of camera information into a single composite representation of the objects in real space, e.g., by applying appropriate coordinate transformations to each instance of camera information.
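The merging step above amounts to transforming each camera's points into a shared coordinate frame and concatenating them. A minimal sketch; the 4×4 camera-to-world extrinsic matrices are an assumption, since the patent only speaks of "appropriate coordinate transformations":

```python
import numpy as np

def merge_camera_clouds(clouds, extrinsics):
    """Merge per-camera point clouds into one composite representation.
    Each extrinsic is a 4x4 matrix mapping that camera's coordinates
    into a shared world frame (an illustrative assumption)."""
    merged = []
    for pts, T in zip(clouds, extrinsics):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # Nx3 -> Nx4 homogeneous
        merged.append((homo @ T.T)[:, :3])               # transform, drop w
    return np.vstack(merged)

# Two cameras see one point each; camera B is offset 1 m along x.
T_a = np.eye(4)
T_b = np.eye(4)
T_b[0, 3] = 1.0
cloud_a = np.array([[0.0, 0.0, 2.0]])
cloud_b = np.array([[0.0, 0.0, 2.0]])

merged = merge_camera_clouds([cloud_a, cloud_b], [T_a, T_b])
print(merged)   # both points, now expressed in one world frame
```

In practice the extrinsics would come from a camera calibration step, and the merged cloud would feed the depth-image construction described earlier.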
Touch Display with Stylus Capabilities, Manipulation of Documents
Further, the display device may include a touch sensitive surface. The display device may produce writing information when a participant interacts with the touch sensitive surface, e.g., using a stylus, finger, or some other implement.
Alternatively, or in addition, a camera can be placed in front of and/or behind the display device to detect the local participant's interaction with the display device and to produce writing information as a result.
Similarly, in the case of FIG. 3, the semi-transparent mirror #316 can include a touch sensitive surface which produces writing information when a participant makes contact with that surface. Alternatively, or in addition, a camera can be placed in front of and/or behind the semi-transparent mirror to detect the local participant's interaction with the semi-transparent mirror, and produce writing information as a result.
The management module can also manage the retrieval and manipulation of documents. For example, the management module can receive a command from the local participant using any input mechanism. The management module can then retrieve a document that is specified by the command, e.g., by retrieving a spreadsheet document for a file named "tax return 2012" when the local participant speaks the voice command "retrieve tax return 2012," or when the local participant inputs this command through any other input mechanism. The environment can then allow any participant of the tele-immersive session to manipulate the document in any manner described.
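The retrieval flow above is essentially command parsing followed by a document lookup. A minimal sketch of that flow; the "retrieve <name>" syntax, the document table, and the paths are all hypothetical, invented for illustration:

```python
import re

# Hypothetical document table; names and paths are illustrative only.
DOCUMENTS = {
    "tax return 2012": "/docs/tax_return_2012.xlsx",
}

def handle_command(utterance):
    """Match a spoken 'retrieve <document name>' command and return
    the path of the document to load into the shared session."""
    m = re.match(r"retrieve\s+(.+)", utterance.strip(), re.IGNORECASE)
    if not m:
        return None                      # not a retrieval command
    return DOCUMENTS.get(m.group(1).lower())

print(handle_command("retrieve tax return 2012"))  # -> /docs/tax_return_2012.xlsx
```

A real management module would accept the same command from voice, touch, or any other input mechanism and then share the opened document with every participant in the session.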
System may use Microsoft's Kinect
A tracking module can track the position of various objects in the real space associated with a set-up. The tracking module can use one or more techniques to perform this task. In one case, the tracking module uses Microsoft's Kinect device to represent each participant's body as a skeleton, that is, as a collection of joints connected together by line segments. The tracking module can then track the movement of the joints of this skeleton as the participant moves within the real space. Alternatively, or in addition, the tracking module can use any head-tracking technology to track the movement of the participant's head. Alternatively, or in addition, the tracking module may use any eye-gaze recognition technology to track the participant's eye gaze.
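The skeleton representation boils down to named joints with 3D positions, updated frame by frame. A minimal sketch of tracking per-joint movement between two frames; the joint names and coordinates are illustrative, loosely following Kinect's convention:

```python
import math

def joint_motion(prev, curr):
    """Return each joint's displacement (in metres) between two frames,
    given dicts mapping joint name -> (x, y, z) position."""
    return {
        name: math.dist(prev[name], curr[name])
        for name in prev
        if name in curr
    }

# Two consecutive frames: the right hand rises, the head stays still.
frame_0 = {"head": (0.0, 1.7, 2.0), "hand_right": (0.3, 1.0, 2.0)}
frame_1 = {"head": (0.0, 1.7, 2.0), "hand_right": (0.3, 1.4, 2.0)}

motion = joint_motion(frame_0, frame_1)
print(motion)   # the right hand moved about 0.4 m; the head did not move
```

A full tracker would run this over a ~20-joint skeleton at camera frame rate and feed the motion into gesture recognition or virtual-object manipulation.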
Physics Simulation Engine
Microsoft notes that the physics simulation engine can rely, at least in part, on known simulation algorithms to manipulate 3D virtual objects in realistic or nonrealistic ways, including models that take into account rigid body dynamics, soft body dynamics, etc. Illustrative known physics simulators include PhysX, provided by Nvidia Corporation of Santa Clara, Calif.; Havok Physics, provided by Havok of Dublin, Ireland; Newton Game Dynamics, produced by Julio Jerez and Alain Suero; and so on.
Other Patent Figures
Microsoft's patent FIG. 5 below shows one implementation of a local processing system that can be used to produce three-dimensional (3D) scene information for the mirror functionality of FIG. 3, which then displays that scene information.
Microsoft's patent FIG. 6 shows mirror functionality that uses a display device having a curved display surface.
Microsoft's patent FIG. 8 below shows a tele-immersive experience that involves presenting a virtual space that is composed of a virtual-reflected space and a virtual-actual space.
Microsoft's patent FIG. 9 shows an illustrative procedure that explains one manner of operation of a local processing system.
Microsoft's granted patent was filed in March 2017 and published last week by the USPTO. The timing of such a product coming to market is unknown at this time.
A Note for Tech Sites covering our Report: We ask tech sites covering our report to kindly limit the use of our graphics to one image. Thanking you in advance for your cooperation.
Patently Mobile presents a detailed summary of patent applications with associated graphics for journalistic news purposes as each such patent application is revealed by the U.S. Patent & Trademark Office. Readers are cautioned that the full text of any patent application should be read in its entirety for full and accurate details. About Posting Comments: Patently Mobile reserves the right to post, dismiss or edit any comments. Those using abusive language or behavior will be blacklisted on Disqus.