This week the U.S. Patent and Trademark Office published a granted Google patent that reveals a possible future Pixelbook with a motorized hinge structure capable of moving the lid between an open and closed position. The movement of the lid is based on input from a plurality of sensors. One sensor may be configured to determine whether the user is within a predetermined threshold distance. Another sensor may be capable of detecting whether the user has made direct contact with the laptop. In one embodiment, the computer may have an image sensor configured to detect the user's face and continuously adjust the angle and position of the lid to keep the face in the camera's field of view and/or keep the lid in the optimum viewing position.
Our cover graphic highlights, in yellow and for illustrative purposes only, the area at the top of the Pixelbook's lid that a user would touch to set the new hinge motor in motion and have it open the lid mechanically and effortlessly. Closing the lid would simply require a certain touch on the Pixelbook's touchpad, which would automatically close the lid via the motorized hinge.
The trigger may be set to require a certain touch pressure or duration so that an accidental touch won't open the notebook lid when the user doesn't want it to.
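To picture how that gating could work, here's a minimal sketch in Python. The threshold values and sensor readings are our own illustrative assumptions, not figures from the patent.

```python
# A minimal sketch of the touch gating described above. The pressure and
# duration thresholds are illustrative assumptions, not figures from the patent.

PRESSURE_THRESHOLD = 0.6   # normalized touch force; hypothetical
DURATION_THRESHOLD = 0.3   # seconds; hypothetical

def should_open_lid(touch_pressure: float, touch_duration: float) -> bool:
    """Only a deliberate touch triggers the motorized hinge, so an accidental
    brush against the lid won't pop the notebook open."""
    return (touch_pressure >= PRESSURE_THRESHOLD and
            touch_duration >= DURATION_THRESHOLD)
```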
Beyond opening the lid automatically, a second feature has the Pixelbook's front-facing camera track the user's face and auto-adjust the display angle for optimal viewing.
Technically, the patent states that the "Computer may include a sensor that is an image sensor and that can function as a proximity sensor for detecting the user. The image sensor may be a forward-facing camera capable of capturing an image of the user when the computer is in an open position.
There may also be a rear-facing camera capable of capturing an image of the user when the computer is in the closed position. The image received from the camera(s) may be used to detect a potential user or recognize a specific user as well as calculate or estimate the distance of a target (e.g., user or object).
The Computer may have multiple cameras that face in a similar direction and provide a stereoscopic image so as to be able to make such a calculation or estimate."
As shown in Google's patent FIG. 5A above, the computer may include front-facing camera #235 on lid assembly #12 adjacent to screen #16. The front-facing camera has field of view #237 that is relative to the position of the lid assembly. As the lid assembly opens, the camera's field of view may rotate upward, and as the lid assembly closes, the field of view may rotate downward. The camera is configured to capture an image, or a series of images in the form of a video, and communicate the image(s) to the processor.
Google notes that "The processor may be configured to analyze the image(s) and perform digital image processing to detect objects in the image. For example, as seen in FIG. 5A, the processor has detected a portion #308 of the user's body, e.g., the chin. When the portion is detected, the processor may instruct the motorized hinge to move the lid assembly such that the user's entire face #306 is within the center portion of the camera's field of view, as shown in FIG. 5B.
Once the processor locates and centers the camera's field of view on the user's face, it may continuously adjust the position of the lid in order to maintain that centering.
For example, if the user is initially sitting down and then stands up, the processor will detect the change in the location of the user's face and adjust the position of lid assembly #12 by rotating it toward the fully open position so that the user's face remains in the middle portion of the camera's field of view.
Conversely, if the user is initially standing and then sits down, the processor will detect the change in the location of the user's face and adjust the position of the lid assembly by rotating it toward the closed position so that the user's face remains in the middle portion of the camera's field of view. This may be particularly useful, for example, during a video conference.
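As a rough illustration of that centering behavior, the sketch below shows the kind of control loop the patent describes, with hypothetical helpers (a face detector and a hinge controller) standing in for Google's actual image processing and motor control.

```python
# A rough sketch of the face-centering loop described in FIGS. 5A/5B.
# "camera", "hinge" and "detect_face" are hypothetical stand-ins for the
# patent's image sensor, motorized hinge and digital image processing.

CENTER_TOLERANCE = 0.05  # fraction of frame height; illustrative value

def keep_face_centered(camera, hinge, detect_face):
    """Nudge the lid so the detected face stays in the middle of the frame."""
    frame = camera.capture()
    face = detect_face(frame)   # returns a box with a normalized center_y in [0, 1], or None
    if face is None:
        return                  # fall back to the searching mode described below
    # Vertical offset of the face from the center of the field of view.
    # With 0.0 at the top of the frame, a positive offset means the face sits
    # above center, so opening the lid (which tilts the view upward) re-centers it.
    offset = 0.5 - face.center_y
    if abs(offset) > CENTER_TOLERANCE:
        direction = "open" if offset > 0 else "close"
        hinge.rotate(direction, step_degrees=1.0)
```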
If the processor determines the face of the user is not currently within the field of view of the camera, the processor may use object detection to classify what is currently in view and predict the location of the user's face.
For example, if the image processing detects a body part (e.g., torso, shoulder, arm), an article of clothing, and/or an accessory (e.g., hat, belt, shoe), it may use this to predict the location of the face, e.g., above the torso or below the hat. It may then instruct the motorized hinge to rotate the lid toward the open or closed position in order to alter the camera's field of view.
It will continue to adjust the lid until the face is in the center portion of the camera's field of view. If the processor is not able to predict the location of the face, it may implement a searching mode by panning the lid. This is done by using the motorized hinge to rotate the lid assembly through at least a portion of its rotational range of motion in an effort to locate the user's face.
The panning motion may cover the entire range of motion of the hinge or only a portion of the range above or below the current position (e.g., as little as a fraction of a degree to as much as 180°).
In another example, if the user is not detected (e.g., after a predetermined amount of time spent in the searching mode or after a predetermined number of panning cycles), the computer may close and/or lock itself.
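The searching mode could be sketched as follows; the helper names, sweep behavior and cycle limit are assumptions for illustration only.

```python
# A sketch of the searching mode: pan the lid through part of its range of
# motion looking for a face, then close and lock if nobody is found.
# The helper names and the cycle limit are illustrative assumptions.

MAX_PAN_CYCLES = 3  # "predetermined number of panning cycles"; value is made up

def search_for_user(camera, hinge, computer, detect_face):
    for _ in range(MAX_PAN_CYCLES):
        # Sweep the hinge across a portion of its rotational range of motion.
        for angle in hinge.sweep_angles():
            hinge.set_angle(angle)
            if detect_face(camera.capture()):
                return True          # user located; resume the centering loop
    # No user detected after the allotted panning cycles: close and/or lock.
    computer.lock()
    hinge.close_lid()
    return False
```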
Google's patent FIG. 8 illustrates an isometric view of the hinge structure and FIG. 10 illustrates a flow chart of steps executed by the processor to automatically adjust position of the lid assembly.
More on Locking and Unlocking the Pixelbook
As seen in steps 810 and 820 of patent FIG. 10 above, the processor may detect direct contact of a user via a touch-sensitive surface and subsequently execute an open procedure to open the lid assembly and execute an unlock procedure to unlock the computer.
The unlock procedure may include waking the computer from sleep or standby mode, restoring it from hibernation, powering it up, or logging the user into the operating system or an application. The unlock procedure may involve accessing the user's credentials (e.g., user name and password) and automatically inserting them where appropriate.
The open and unlock procedures may have different levels of security. For example, the open procedure may require detection of only a potential user (e.g., any person), whereas the unlock procedure may require that a specific user be identified or recognized.
The processor may detect a potential user by using a rear-facing camera and performing general object detection or by using a microphone and performing sonar or acoustic detection.
Prior to unlocking the computer, the processor can be configured to require authentication of the user. The authentication may be performed using NFC, Bluetooth pairing, voice recognition, facial recognition, iris or eye recognition, or gesture recognition via the touch-sensitive surface or camera. The authentication may be based on a single method or a combination of methods.
The open procedure and unlock procedure may be executed simultaneously, or one procedure may be executed first and the other later. In one example, the open procedure is implemented before the unlock procedure so that once the lid assembly is opened, other features of the Pixelbook are exposed for use by the processor or user in the authentication step. For example, a front-facing camera may have a higher resolution or a better view of the user's face, which may assist with facial recognition. In addition, the computer's keyboard would be exposed, allowing the user to enter their credentials manually.
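Put together, the two-tier flow could look something like the sketch below, with the lower-security open procedure running first and the higher-security unlock procedure following authentication. The helper methods are hypothetical placeholders, not Google's code.

```python
# A simplified sketch of the two-step flow: a lower-security open procedure
# (any potential user) runs first and exposes the front camera and keyboard,
# then a higher-security unlock procedure runs only if a specific user is
# authenticated. Every method called here is a hypothetical placeholder.

def handle_wake_touch(sensors, hinge, computer):
    # Open procedure: only requires detection of *some* person,
    # e.g., via the rear-facing camera or acoustic detection.
    if not sensors.potential_user_detected():
        return
    hinge.open_lid()

    # Unlock procedure: requires that a *specific* user is recognized,
    # by a single method or a combination of methods.
    authenticated = (computer.facial_recognition_match() or
                     computer.voice_recognition_match() or
                     computer.nfc_token_present())
    if authenticated:
        computer.unlock()   # wake, restore from hibernation, or log the user in
```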
The processor may use data from additional sensors to dynamically adjust when the open or unlock procedure is executed. In one example, the Pixelbook may include additional sensors such as an accelerometer and/or an ambient light sensor. The processor may use these sensors in conjunction with other sensors to detect characteristics of the Pixelbook's surroundings. For example, the processor may detect, via the accelerometer, that the Pixelbook is being moved by comparing the pattern of movement to a movement signature associated with being carried while the user is walking.
It may also use the ambient light sensor to detect that it has been moved from a bright environment to a dark environment and infer that the computer has been placed in a portable storage container (e.g., a computer bag or backpack). In response, the processor may deactivate the automatic unlock or open procedure.
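That contextual safeguard could be sketched roughly as follows, with illustrative thresholds and a hypothetical movement-signature helper.

```python
# A sketch of the contextual safeguard: if the accelerometer readings match a
# "carried while walking" signature and the ambient light level drops from
# bright to dark (as when the laptop is slipped into a bag), the automatic
# open/unlock behavior is switched off. Thresholds and helper names are
# illustrative assumptions, not values from the patent.

BRIGHT_LUX = 200   # hypothetical "in use" light level
DARK_LUX = 5       # hypothetical "inside a bag" light level

def update_auto_open_policy(accelerometer, light_sensor, policy):
    carried = accelerometer.matches_signature("walking_carry")
    went_dark = (light_sensor.previous_lux > BRIGHT_LUX and
                 light_sensor.current_lux < DARK_LUX)
    if carried and went_dark:
        # Likely stowed in a computer bag or backpack: keep the lid shut.
        policy.disable_auto_open()
        policy.disable_auto_unlock()
```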
Google's patent was granted this past week and was originally filed in Q4 2013. One of the inventors noted on the patent is Ken Loo, a Senior Product Design Engineer who worked on Google's self-driving car as lead engineer for vehicle sensors and also worked on the Pixel smartphone and the Pixelbook.
Patently Mobile presents only a brief summary of granted patents with associated graphics for journalistic news purposes as each Granted Patent is revealed by the U.S. Patent & Trademark Office. Readers are cautioned that the full text of any Granted Patent should be read in its entirety for full details.