Earlier in September, the U.S. Patent & Trademark Office published a Samsung patent application revealing a Google-Glass-like invention with a depth of detail suggesting that Samsung may in fact be miles ahead of Google in this form of wearable computer. Although not listed as one of the inventors, Samsung has a secret weapon behind the project: an individual who worked on a similar concept at MIT well before 2009, or at minimum four years before Google Glass first came to light in 2013. Whether Samsung will be able to bring its Glass device to market ahead of Google, with any success, is of course unknown at this time. Will Samsung strike first, or will it decide to partner with others? Only time will tell.
Samsung's Patent Background
The real world is a space consisting of 3-dimensional (3D) coordinates. People are able to recognize 3D space by combining visual information obtained using two eyes. However, a photograph or a moving image captured by a general digital device is expressed in 2D coordinates, and thus does not include information about space. In order to give a feeling of space, 3D cameras or display products that capture and display 3D images by using two cameras have been introduced.
Meanwhile, a current input method of smart glasses is limited. A user basically controls the smart glasses by using a voice command. However, it is difficult for the user to control the smart glasses by using only a voice command if a text input is required. Thus, a wearable system that provides various input interaction methods is required.
Samsung's Solution
Samsung's patent covers methods and apparatuses that, consistent with exemplary embodiments, include a wearable device in the form of a Google-Glass-like device for setting an input region in the air or on an actual object based on a user motion, and for providing a virtual input interface in that input region.
According to one or more exemplary embodiments, a wearable device includes: an image sensor configured to sense a gesture image of a user setting a user input region; and a display configured to provide a virtual input interface corresponding to the user input region set by using the sensed gesture image.
The sensed gesture image may correspond to a figure drawn by the user, and the virtual input interface may be displayed to correspond to the sensed figure.
The virtual input interface may be displayed to correspond to a size of the user input region.
The virtual input interface may be determined based on a type of an application being executed by the glasses-type wearable device.
The display may include a transparent display configured to provide the virtual input interface on a region of the transparent display corresponding to the user input region as observed through the transparent display.
The image sensor may be configured to capture a first image of the user input region, and the display may be configured to display a second image of the virtual input interface over the user input region of the first image.
The glasses-type wearable device may further include: a depth sensor configured to sense a first depth value corresponding to a distance from the wearable device to the user input region, and a second depth value corresponding to a distance from the wearable device to an input tool; and a controller configured to determine whether an input is generated through the virtual input interface based on the first depth value and the second depth value.
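The patent doesn't spell out an algorithm for this depth comparison, but the idea can be sketched in a few lines: an input counts as "generated" when the tool's depth comes within a small tolerance of the input region's depth. The function name and the tolerance value below are assumptions made purely for illustration.

```python
# Hypothetical sketch of the depth-based touch test the patent describes:
# an input is registered when the input tool's distance from the device
# comes within a small tolerance of the input region's distance.

TOUCH_TOLERANCE_CM = 1.5  # assumed threshold; not specified in the filing

def input_generated(region_depth_cm: float, tool_depth_cm: float,
                    tolerance_cm: float = TOUCH_TOLERANCE_CM) -> bool:
    """Return True if the input tool is close enough to the input
    region's surface to count as a touch on the virtual interface."""
    return abs(tool_depth_cm - region_depth_cm) <= tolerance_cm

# Example: input region (a desk) at 45 cm, fingertip at 45.8 cm -> touch
print(input_generated(45.0, 45.8))   # True
print(input_generated(45.0, 38.0))   # False (finger hovering well above)
```

In practice the controller would run this test continuously against the depth sensor's stream, but the core decision is just this comparison of the two depth values.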
Samsung's Glass-like Wearable Computer
Samsung's patent FIGS. 1A, 1B and 1E noted below are diagrams describing a system relating to a Google-Glass-like device.
Samsung's patent FIGS. 12A and 12B noted above are diagrams describing examples of virtual input interface types that could be used, such as a keyboard for text or music. Other examples not shown include a virtual notebook working with a future version of the S Pen. The depth sensor ensures that the virtual image in the glass stays in the right proportions for the user at all times.
In patent FIG. 32 above we see Samsung's Glass (the wearable device) recognizing a user gesture that sets an input region on a desk (#3210) and then displaying a virtual piano keyboard (#3220) on a transparent or opaque display so that it overlaps the desk.
Samsung's patent FIG. 33 above provides us with a basic overview showing the image and depth sensors required to work with the optical display to make the virtual images as real as possible to scale.
As shown in FIG. 15A above, when the user draws a circle (#1530) on one of their palms (#1510), as observed through the optical display, by using a finger (#1520), Samsung's Glass (the wearable device #100) may recognize the gesture of drawing the circle by using the image sensor and set a region corresponding to the circle as an input region.
At this time, as shown in FIG. 15B, Samsung's Glass may display a virtual dial pad (#1550) on the optical display such that the virtual dial pad overlaps the circular region observed through the optical display. For example, Samsung's Glass may display the virtual dial pad on the optical display to match the size of the circular region.
As such, Samsung's Glass, according to an exemplary embodiment, may provide a virtual input interface with a different shape according to the type of gesture that sets the input region, and information about the types, dimensions, and shapes of the virtual input interfaces provided for each gesture type may be stored in Samsung's Glass.
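The stored association between gesture types and interface types could be sketched as a simple lookup table. The shape names and interface names below are hypothetical, chosen only to illustrate the idea of pre-stored gesture-to-interface mappings described above.

```python
# Hypothetical sketch of the stored mapping the patent describes: each
# gesture shape that sets an input region is associated with a virtual
# input interface type. The entries here are illustrative only.

GESTURE_TO_INTERFACE = {
    "circle":    "dial_pad",   # e.g. a circle drawn on the palm -> dial pad
    "rectangle": "keyboard",   # a rectangle drawn on a desk -> keyboard
    "line":      "slider",
}

def interface_for(gesture_shape: str) -> str:
    # Fall back to a generic touch pad for unrecognized shapes
    return GESTURE_TO_INTERFACE.get(gesture_shape, "touch_pad")

print(interface_for("circle"))     # dial_pad
print(interface_for("triangle"))   # touch_pad
```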
Samsung's patent FIGS. 28A and 28B noted above are diagrams describing a method of obtaining a first depth value of an input region and a second depth value of an input tool.
The example of the optical display displaying a numeric pad on the palm of a user's hand noted above in figure 15B isn't by chance. Oh no. There's a deep history associated with this image going all the way back to 2009, way ahead of Google Glass. Our parent site Patently Apple first covered this invention that was introduced in a TED presentation. The graphic illustrated below is a compilation of different scenes from that presentation.
Instead of using a complex device that looked like an iPod with a lanyard as noted above, a Google-Glass-like product would actually be a superior form factor for delivering SixthSense. No longer would the user have to 'frame' what they want to capture in a photo with their hands, as noted above; instead, the natural Glass frame would frame the photo automatically.
And who gave this presentation at TED? Pranav Mistry, the same Mistry who works for Samsung. Yes, Samsung, not Google, even though Google tried to rip off Mistry's idea in a patent filing made public in January 2013. Mistry first made his public debut as a major Samsung engineer when he formally introduced the first generation of Samsung Gear in late 2013.
This is why I'm saying that Samsung could be miles ahead of Google for a Glass-like consumer product: Mistry had been working on this concept prior to the 2009 TED presentation, when he was still at MIT.
Setting Up Input Regions
Samsung's Patent FIG. 2 noted in the center above is a flowchart illustrating a method of providing, by a wearable device, a virtual input interface.
In Samsung's patent FIGS. 3A and 3B, the wearable device, according to an exemplary embodiment, may set an input region by recognizing a figure drawn by a user in the air or on an actual object.
For example, as shown in FIG. 3A, the user may draw a figure, such as a rectangle, in the air by using an input tool #310, such as a pen, a stick, a stylus or a finger. The wearable device may recognize the figure and set a region corresponding to the figure as an input region #320. For example, a region having a depth value of the figure (a distance from the wearable device to the figure), a shape of the figure, and a size of the figure may be set as the input region.
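The three attributes that define an input region here (depth, shape, and size) can be captured in a minimal data structure. The field names, units, and the scaling helper below are assumptions for illustration; the filing describes the region only in those three terms.

```python
# A minimal sketch (assumed structure) of the data a wearable device
# might keep for a user-drawn input region: its distance from the
# device, its recognized shape, and its size.

from dataclasses import dataclass

@dataclass
class InputRegion:
    depth_cm: float    # distance from the device to the drawn figure
    shape: str         # "rectangle", "circle", free looped curve, ...
    width_cm: float    # size of the figure as recognized
    height_cm: float

    def scale_for_interface(self, native_w: float, native_h: float) -> float:
        """Uniform scale factor so a virtual interface with the given
        native dimensions fits inside the region."""
        return min(self.width_cm / native_w, self.height_cm / native_h)

region = InputRegion(depth_cm=40.0, shape="rectangle",
                     width_cm=20.0, height_cm=10.0)
print(region.scale_for_interface(10.0, 10.0))  # 1.0 (limited by height)
```

The scaling helper reflects the earlier point that the virtual input interface is displayed to correspond to the size of the user input region.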
As shown in FIG. 3A, the figure may be a rectangle, but the shape of the figure is not limited thereto. Examples include figures of various shapes and sizes, such as a circle, a polygon, a free looped curve, a 2D figure, and a 3D figure.
Alternatively, as shown in FIG. 3B, the user may draw a figure (#340), such as a rectangle, on an actual object (#330), such as a palm, by using an input tool (#345), such as a pen, a stick, a stylus or a finger. The wearable device may recognize the figure drawn by the user and set a region corresponding to the figure as an input region.
Next, as shown in Samsung's patent FIG. 4A, the wearable device may recognize a palm (#410) by using the image sensor. Here, information about a shape or size of the palm may be pre-stored in the wearable device. Accordingly, the wearable device may compare the shape and size of the palm with the pre-stored information, and determine whether to set the palm as the input region.
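The palm check described for FIG. 4A amounts to comparing a detected surface's measured dimensions against pre-stored palm information. The stored values, tolerance, and function name below are hypothetical; the patent says only that shape or size information may be pre-stored and compared.

```python
# Hypothetical sketch of the FIG. 4A palm check: the device compares a
# detected surface's measured size against pre-stored palm dimensions
# before accepting it as an input region. All values are illustrative.

STORED_PALM = {"width_cm": 8.0, "height_cm": 10.0}

def accept_as_input_region(measured_w: float, measured_h: float,
                           tolerance: float = 0.25) -> bool:
    """Accept the surface if each measured dimension is within
    +/- tolerance (as a fraction) of the stored palm dimensions."""
    def close(measured: float, stored: float) -> bool:
        return abs(measured - stored) <= tolerance * stored
    return (close(measured_w, STORED_PALM["width_cm"]) and
            close(measured_h, STORED_PALM["height_cm"]))

print(accept_as_input_region(8.5, 9.5))    # True  (plausibly a palm)
print(accept_as_input_region(30.0, 20.0))  # False (a desk, not a palm)
```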
Samsung further notes that the wearable device may also set the input region by recognizing any one of various objects, such as a desk or a notepad, as shown in FIG. 4B above.
Types of Content
Samsung notes that the device will work with different content types, including photos, videos, text, webpages, educational content, movies, broadcast content, game-related content, commercial content, and news content, to name but a few.
Samsung's Fuller System Overview
Today the Hollywood Reporter revealed that "Lionsgate and Samsung Electronics have teamed up to allow fans of The Hunger Games: Mockingjay – Part 2 to enter an immersive world by donning an Oculus-powered Samsung Gear VR headset."
At the end of the day, TechRadar said that Samsung's VR has yet to strike the right chord in advancing VR but is definitely pushing it forward. With that experience, Samsung could bring more to a future Glass device supported by one of its most recent patent filings. In fact, there are more inventions from Samsung covering their new Glass device that we hope to present to you in the coming weeks.
Samsung filed its U.S. patent application back in March 2015, and a year earlier in Korea. Considering that this is a patent application, the timing of such a product coming to market is unknown at this time.
A Note for Tech Sites covering our Report: We ask tech sites covering our report to kindly limit the use of our graphics to one image. Thanking you in advance for your cooperation.
Patently Mobile presents a detailed summary of patent applications with associated graphics for journalistic news purposes as each such patent application is revealed by the U.S. Patent & Trade Office. Readers are cautioned that the full text of any patent application should be read in its entirety for full and accurate details. About Posting Comments: Patently Mobile reserves the right to post, dismiss or edit any comments.