Display content updating method, head-mounted display device and computer readable medium
1. A display content updating method, applied to a head-mounted display device having a display screen, the method comprising the following steps:
determining a three-axis attitude angle of a target device and a three-axis attitude angle of the head-mounted display device based on first sensing data for the target device sent by a first sensor and second sensing data for the head-mounted display device sent by a second sensor;
determining whether the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device meet an attitude angle deviation detection condition;
updating the display content in the display screen of the head-mounted display device in response to determining that the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device satisfy the attitude angle deviation detection condition.
2. The method of claim 1, wherein after the determining whether the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device satisfy the attitude angle deviation detection condition, the method further comprises:
determining whether the target device and the head mounted display device are in an opposing state in response to determining that the three-axis attitude angle of the target device and the three-axis attitude angle of the head mounted display device satisfy the attitude angle deviation detection condition.
3. The method of claim 2, wherein the determining whether the target device and the head mounted display device are in an opposing state comprises:
controlling a front-facing camera of the target device to capture an image to obtain a first target image;
determining whether a target image area is included in the first target image, wherein the target image area includes an image of a human face wearing the head-mounted display device;
in response to determining that the target image area is included in the first target image, determining an angle between a center point of the target image area and a position point of the front-facing camera of the target device at the time the first target image is captured.
4. The method of claim 2, wherein the determining whether the target device and the head mounted display device are in an opposing state comprises:
controlling the target device to display at least one marker graphic in a page currently displayed by the target device;
controlling a front-facing camera of the head-mounted display device to capture an image to obtain a second target image;
in response to determining that the at least one marker graphic is included in the second target image, determining, from the at least one marker graphic included in the second target image, an angle between the target device and a position point of the front-facing camera of the head-mounted display device at the time the second target image is captured.
5. The method of claim 3 or 4, wherein the determining whether the target device and the head mounted display device are in an opposing state further comprises:
in response to determining that the angle is within a preset angle range, determining that the target device and the head mounted display device are in an opposing state.
6. The method of claim 5, wherein the updating the display content in the display screen of the head-mounted display device in response to determining that the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device satisfy the attitude angle deviation detection condition comprises:
in response to determining that the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device satisfy the attitude angle deviation detection condition and that the target device and the head-mounted display device are in an opposing state, updating the display content in the display screen of the head-mounted display device.
7. The method of claim 4, wherein prior to the controlling the target device to display at least one marker graphic in the page currently displayed by the target device, the method further comprises:
controlling the target device to turn on its screen in response to determining that the target device is in a screen-off state.
8. The method of claim 1, wherein the first sensing data and the second sensing data comprise three-axis angular acceleration data; and
the determining the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device based on the first sensing data for the target device sent by the first sensor and the second sensing data for the head-mounted display device sent by the second sensor comprises:
filtering and integrating the three-axis angular acceleration data included in the received first sensing data to obtain the three-axis attitude angle of the target device;
filtering and integrating the three-axis angular acceleration data included in the received second sensing data to obtain the three-axis attitude angle of the head-mounted display device.
9. The method of claim 1, wherein the updating the display content in the display screen of the head-mounted display device comprises:
removing the display content in the display screen of the head mounted display device.
10. The method of claim 6, wherein after the updating the display content in the display screen of the head-mounted display device in response to determining that the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device satisfy the attitude angle deviation detection condition, the method further comprises:
in response to detecting that the opposing state between the target device and the head-mounted display device is eliminated, displaying the previously removed display content in the display screen of the head-mounted display device.
11. A head-mounted display device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
a display screen for displaying display content;
a second sensor for transmitting second sensing data for the head-mounted display device;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-10.
12. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-10.
Background
Head-mounted display devices, such as AR (Augmented Reality) glasses or MR (Mixed Reality) glasses, allow users to view virtual scenes within real scenes. In addition, a head-mounted display device can be communicatively connected with a target device such as a mobile phone. At present, when a user wearing a head-mounted display device needs to directly browse or operate a target device such as a mobile phone, the commonly adopted interaction is as follows: the user takes off the head-mounted display device to browse or operate the target device, and puts the head-mounted display device back on when the interface in the head-mounted display device needs to be viewed again.
However, this interaction often presents the following technical problems: repeatedly taking off and putting on the head-mounted display device makes the operation cumbersome and degrades the user experience, and if the user instead browses or operates the target device such as a mobile phone directly through the head-mounted display device, the display content in the head-mounted display device may block the interface of the mobile phone.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a display content updating method, a head-mounted display device, and a computer-readable medium to solve the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a display content updating method, the method including: determining a three-axis attitude angle of a target device and a three-axis attitude angle of a head-mounted display device based on first sensing data for the target device sent by a first sensor and second sensing data for the head-mounted display device sent by a second sensor; determining whether the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device satisfy an attitude angle deviation detection condition; and updating the display content in the display screen of the head-mounted display device in response to determining that the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device satisfy the attitude angle deviation detection condition.
In a second aspect, some embodiments of the present disclosure provide a head-mounted display device, comprising: one or more processors; a storage device having one or more programs stored thereon; a display screen for displaying display content; and a second sensor for transmitting second sensing data for the head-mounted display device; wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.
In a third aspect, some embodiments of the present disclosure provide a computer readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect.
The above embodiments of the present disclosure have the following advantages: with the display content updating method of some embodiments of the present disclosure, when a user wearing the head-mounted display device needs to directly browse or operate a target device such as a mobile phone, the user can browse or operate the target device through the head-mounted display device without occlusion and without taking the head-mounted display device off. Specifically, the reasons why the operation steps are cumbersome and the user experience is poor, or why browsing and operating a target device such as a mobile phone is sometimes occluded, are as follows: it cannot be determined whether the user wearing the head-mounted display device needs to browse or operate a target device such as a mobile phone, so the display content in the head-mounted display device cannot be actively removed. Based on this, the display content updating method of some embodiments of the present disclosure determines whether the user wearing the head-mounted display device needs to browse or operate a target device such as a mobile phone according to the deviation between the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device. When the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device satisfy the attitude angle deviation detection condition, it can be inferred that the user needs to browse or operate the target device such as a mobile phone. At this point, the display content in the display screen of the head-mounted display device is updated, so that the user can directly browse or operate the target device such as a mobile phone without occlusion and without taking off the head-mounted display device, which simplifies the operation flow and improves the user experience.
Drawings
The above and other features, advantages, and aspects of various embodiments of the present disclosure will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
FIG. 1 is an architectural diagram of an exemplary system in which some embodiments of the present disclosure may be applied;
FIG. 2 is a schematic diagram of one application scenario of the display content updating method of some embodiments of the present disclosure;
FIG. 3 is a flow diagram of some embodiments of the display content updating method according to the present disclosure;
FIG. 4 is a flow diagram of further embodiments of the display content updating method according to the present disclosure;
FIG. 5 is a schematic diagram of determining whether a target device and a head-mounted display device are in an opposing state in some embodiments of the display content updating method of the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand that they should be read as "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is an architectural diagram of an exemplary system in which some embodiments of the present disclosure may be applied.
As shown in fig. 1, an exemplary system architecture 100 may include a head mounted display device 101, a target device 102, a first sensor 103, and a second sensor 104.
The head-mounted display device 101 may include at least one display screen 1011. The display screen 1011 may be used to display the display content. In addition, the head-mounted display device 101 further includes a frame 1012 and a frame 1013. In some embodiments, the processing unit, memory, and battery of the head-mounted display device 101 may be placed inside the frame 1012. In some optional implementations of some embodiments, one or more of the processing unit, memory, and battery may also be integrated into a separate accessory (not shown) that connects to the frame 1012 via a data cable.
The target device 102 may communicate with the head mounted display device 101 through a wireless connection or a wired connection.
The first sensor 103 may be provided in the target device 102 described above. In some embodiments, the first sensor 103 and the head mounted display device 101 may communicate through a wireless connection or a wired connection.
The second sensor 104 may be provided in the head mounted display device 101 described above. In some embodiments, the second sensor 104 and the head mounted display device 101 may communicate through a wireless connection or a wired connection. The second sensor 104 may be provided in a frame 1012 of the head mounted display apparatus 101.
It should be understood that the number of head mounted display devices, target devices, first sensors, and second sensors in fig. 1 are merely illustrative. There may be any number of head mounted display devices, target devices, first sensors, and second sensors, as desired for implementation.
Fig. 2 is a schematic diagram showing one application scenario of the display content updating method of some embodiments of the present disclosure.
In the application scenario of fig. 2, first, the computing device 201 may determine the three-axis attitude angle 208 of the target device 203 and the three-axis attitude angle 209 of the head-mounted display device 206 based on the first sensing data 204 sent by the first sensor 202 for the target device 203 and the second sensing data 207 sent by the second sensor 205 for the head-mounted display device 206. Then, the computing device 201 may determine whether the three-axis attitude angle 208 of the target device 203 and the three-axis attitude angle 209 of the head-mounted display device 206 satisfy the attitude angle deviation detection condition 210. Finally, the computing device 201 may update the presentation content in the display screen of the head mounted display device 206 in response to determining that the three-axis attitude angle 208 of the target device 203 and the three-axis attitude angle 209 of the head mounted display device 206 satisfy the attitude angle deviation detection condition 210.
The computing device 201 may be hardware or software. When the computing device is hardware, it may be implemented as a distributed cluster composed of multiple servers or terminal devices, or as a single server or terminal device. When the computing device is software, it may be installed in the hardware devices listed above, or in the target device and the head-mounted display device. It may be implemented, for example, as multiple pieces of software or software modules to provide distributed services, or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the number of computing devices in FIG. 2 is merely illustrative. There may be any number of computing devices, as implementation needs dictate.
With continued reference to fig. 3, a flow 300 of some embodiments of the display content updating method according to the present disclosure is shown. The flow 300 of the display content updating method includes the following steps:
Step 301, determining a three-axis attitude angle of the target device and a three-axis attitude angle of the head-mounted display device based on first sensing data for the target device sent by the first sensor and second sensing data for the head-mounted display device sent by the second sensor.
In some embodiments, an execution body of the display content updating method (e.g., the computing device 201 shown in fig. 2) may determine the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device based on first sensing data for the target device sent by a first sensor and second sensing data for the head-mounted display device sent by a second sensor. The first sensor may be provided in the target device. The target device may be a device, such as a mobile phone, connected to the head-mounted display device by a wired or wireless connection. The second sensor may be provided in the head-mounted display device. The first sensing data and the second sensing data may include the current three-axis attitude angle of the target device and the current three-axis attitude angle of the head-mounted display device, respectively. In that case, the first sensing data may be directly used as the three-axis attitude angle of the target device, and the second sensing data may be directly used as the three-axis attitude angle of the head-mounted display device. The first sensor and the second sensor may be sensors for measuring a change in angle. It should be noted that the first and second sensors may include, but are not limited to, gyroscopes and other now known or later developed angle measurement sensors.
The three-axis attitude angle of the target device may represent the angles between the target device and the three coordinate axes of a first three-dimensional rectangular reference coordinate system. The three-axis attitude angle of the head-mounted display device may represent the angles between the head-mounted display device and the three coordinate axes of a second three-dimensional rectangular reference coordinate system. The first three-dimensional rectangular reference coordinate system may take the center of gravity of the target device as the origin, the axis passing through the origin and perpendicular to the horizontal plane as the vertical axis, the axis passing through the origin and pointing due north as the longitudinal axis, and the axis passing through the origin and perpendicular to both the vertical axis and the longitudinal axis as the transverse axis. The second three-dimensional rectangular reference coordinate system may take the center of gravity of the head-mounted display device as the origin, with its vertical, longitudinal, and transverse axes defined in the same way. The positive directions of the vertical, longitudinal, and transverse axes of the first three-dimensional rectangular reference coordinate system coincide with the positive directions of the corresponding axes of the second three-dimensional rectangular reference coordinate system.
As an example, the three-axis attitude angle included in the first sensing data may be (vertical axis: 10°, longitudinal axis: 20°, transverse axis: 30°), and the three-axis attitude angle included in the second sensing data may be (vertical axis: 8°, longitudinal axis: 19°, transverse axis: 33°). The three-axis attitude angle of the target device is then (vertical axis: 10°, longitudinal axis: 20°, transverse axis: 30°), and the three-axis attitude angle of the head-mounted display device is (vertical axis: 8°, longitudinal axis: 19°, transverse axis: 33°).
In some optional implementations of some embodiments, the first sensing data and the second sensing data may include three-axis angular acceleration data. In this case, the execution body determining the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device based on the first sensing data for the target device sent by the first sensor and the second sensing data for the head-mounted display device sent by the second sensor may include the following steps:
First, filter and integrate the three-axis angular acceleration data included in the received first sensing data to obtain the three-axis attitude angle of the target device. The three-axis angular acceleration data included in the first sensing data may be filtered by a filtering algorithm, which may include, but is not limited to, a clipping filter, a Kalman filter, a median filter, and the like. The filtered three-axis angular acceleration data are then integrated for each axis to obtain the three-axis attitude angle of the target device.
Second, filter and integrate the three-axis angular acceleration data included in the received second sensing data to obtain the three-axis attitude angle of the head-mounted display device. A minimal code sketch of these two steps is given below.
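In this sketch, the sensing data are assumed to arrive as timestamped three-axis angular acceleration samples; a simple median filter stands in for whichever filtering algorithm is actually chosen, and cumulative summation stands in for the integration. The array layout and function names are illustrative only, not part of the disclosed method.

```python
import numpy as np


def median_filter(samples: np.ndarray, window: int = 5) -> np.ndarray:
    """Apply a sliding median filter to (N, 3) angular samples, axis by axis."""
    pad = window // 2
    padded = np.pad(samples, ((pad, pad), (0, 0)), mode="edge")
    return np.stack(
        [np.median(padded[i:i + window], axis=0) for i in range(len(samples))]
    )


def attitude_angles(angular_accel: np.ndarray, timestamps: np.ndarray) -> np.ndarray:
    """Filter and integrate three-axis angular acceleration into attitude angles.

    angular_accel: (N, 3) angular acceleration samples, e.g. in deg/s^2.
    timestamps:    (N,) sample times in seconds.
    Integrating twice (acceleration -> velocity -> angle) gives the change in
    the three-axis attitude angle relative to the initial pose.
    """
    filtered = median_filter(angular_accel)
    dt = np.diff(timestamps, prepend=timestamps[0])[:, None]
    angular_velocity = np.cumsum(filtered * dt, axis=0)   # deg/s
    angles = np.cumsum(angular_velocity * dt, axis=0)     # deg
    return angles[-1]  # latest (vertical, longitudinal, transverse) angles
```

The same routine would be applied separately to the first sensing data (for the target device) and to the second sensing data (for the head-mounted display device).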
Step 302, determining whether the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device meet an attitude angle deviation detection condition.
In some embodiments, the execution body may determine whether the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device satisfy the attitude angle deviation detection condition. The attitude angle deviation detection condition may be that the differences between the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device are each within a preset difference interval. In practice, the preset difference interval may be set according to the actual application and is not limited here.
As an example, the preset difference interval may be [-5°, 5°]. The three-axis attitude angle of the target device may be (vertical axis: 10°, longitudinal axis: 20°, transverse axis: 30°), and the three-axis attitude angle of the head-mounted display device may be (vertical axis: 8°, longitudinal axis: 19°, transverse axis: 33°). The difference between the two is (vertical axis: 2°, longitudinal axis: 1°, transverse axis: -3°), each component of which lies within the preset difference interval [-5°, 5°].
Thus, once the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device satisfy the attitude angle deviation detection condition, it can be determined that the target device and the head-mounted display device are in an opposing state. At this point, it may be concluded that the user wearing the head-mounted display device needs to directly view the content in the target device.
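As a minimal sketch of the deviation check itself (assuming the three-axis attitude angles are represented as plain (vertical, longitudinal, transverse) tuples in degrees and that the preset difference interval is the [-5°, 5°] of the example above):

```python
def deviation_condition_met(
    target_angles: tuple[float, float, float],
    hmd_angles: tuple[float, float, float],
    interval: tuple[float, float] = (-5.0, 5.0),
) -> bool:
    """Return True if every per-axis difference lies within the preset interval."""
    low, high = interval
    return all(low <= t - h <= high for t, h in zip(target_angles, hmd_angles))


# Example from the text: (10, 20, 30) vs (8, 19, 33) -> differences (2, 1, -3).
assert deviation_condition_met((10.0, 20.0, 30.0), (8.0, 19.0, 33.0))
```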
Step 303, updating the display content in the display screen of the head-mounted display device in response to determining that the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device satisfy the attitude angle deviation detection condition.
In some embodiments, in response to determining that the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device satisfy the attitude angle deviation detection condition, the execution body may shrink the display content in the display screen of the head-mounted display device and present it in a specific area of the display screen. The specific area may be, for example, the upper left corner or the lower right corner of the display screen.
In some optional implementations of some embodiments, the execution body may instead remove the display content from the display screen of the head-mounted display device. The display content may be removed from the display screen directly, or removed with a preset animation. The preset animation may be a preset animation style applied while the display content is removed; for example, it may make the display content fly outward, or gradually shrink the display content until it disappears.
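A rough sketch of the "gradually shrink until it disappears" removal animation follows; the renderer object and its set_content_scale() and hide_content() methods are hypothetical stand-ins for whatever display API the head-mounted display device actually exposes.

```python
import time


def shrink_and_remove(renderer, duration_s: float = 0.3, steps: int = 15) -> None:
    """Shrink the display content step by step, then remove it from the screen."""
    for i in range(steps):
        scale = 1.0 - (i + 1) / steps      # 1.0 -> 0.0
        renderer.set_content_scale(scale)  # hypothetical: scale the virtual content
        time.sleep(duration_s / steps)
    renderer.hide_content()                # hypothetical: remove it entirely
```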
In this way, after it is determined that the target device and the head-mounted display device are in an opposing state, the display content in the display screen of the head-mounted display device can be shrunk for display or removed. The occluding content in the head-mounted display device is thus actively reduced or removed, so that a user wearing the head-mounted display device can use the target device or browse the content in the target device without obstruction.
The above embodiments of the present disclosure have the following advantages: with the display content updating method of some embodiments of the present disclosure, when a user wearing the head-mounted display device needs to directly browse or operate a target device such as a mobile phone, the user can browse or operate the target device through the head-mounted display device without occlusion and without taking the head-mounted display device off. Specifically, the reasons why the operation steps are cumbersome and the user experience is poor, or why browsing and operating a target device such as a mobile phone is sometimes occluded, are as follows: it cannot be determined whether the user wearing the head-mounted display device needs to browse or operate a target device such as a mobile phone, so the display content in the head-mounted display device cannot be actively removed. Based on this, the display content updating method of some embodiments of the present disclosure determines whether the user wearing the head-mounted display device needs to browse or operate a target device such as a mobile phone according to the deviation between the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device. When the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device satisfy the attitude angle deviation detection condition, it can be inferred that the user needs to browse or operate the target device such as a mobile phone. At this point, the display content in the display screen of the head-mounted display device is updated, so that the user can directly browse or operate the target device such as a mobile phone without occlusion and without taking off the head-mounted display device, which simplifies the operation flow and improves the user experience.
With further reference to fig. 4, a flow 400 of further embodiments of the display content updating method is shown. The flow 400 of the display content updating method includes the following steps:
Step 401, determining a three-axis attitude angle of the target device and a three-axis attitude angle of the head-mounted display device based on first sensing data for the target device sent by the first sensor and second sensing data for the head-mounted display device sent by the second sensor.
Step 402, determining whether the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device meet an attitude angle deviation detection condition.
In some embodiments, for the specific implementation and technical effects of steps 401-402, reference may be made to steps 301-302 in the embodiments corresponding to fig. 3, which are not repeated here.
Step 403, in response to determining that the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device satisfy the attitude angle deviation detection condition, determining whether the target device and the head-mounted display device are in an opposing state.
In some embodiments, in response to determining that the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device satisfy the attitude angle deviation detection condition, the execution body of the display content updating method (e.g., the computing device 201 shown in fig. 2) may determine whether the target device and the head-mounted display device are in an opposing state through the following steps:
First, in response to determining that the target device and the head-mounted display device are both provided with infrared sensors, control the infrared sensor in the target device to emit an infrared signal.
Second, in response to determining that the infrared sensor in the head-mounted display device receives the infrared signal, determine that the target device and the head-mounted display device are in an opposing state.
Third, in response to determining that the infrared sensor in the head-mounted display device does not receive the infrared signal, determine that the target device and the head-mounted display device are not in an opposing state.
In some optional implementations of some embodiments, the determining, by the execution body, of whether the target device and the head-mounted display device are in an opposing state may include the following steps:
First, control the front-facing camera of the target device to capture an image, obtaining a first target image.
Second, determine whether the first target image includes a target image area, where the target image area includes an image of a human face wearing the head-mounted display device. The first target image may be input into a target detection model to determine whether a target image area is included. The target detection model may include, but is not limited to, an R-CNN (Region-based CNN) model, a Fast R-CNN model, and an SPPNet (Spatial Pyramid Pooling Network) model.
Third, in response to determining that the first target image includes the target image area, determine the angle between the center point of the target image area and the position point of the front-facing camera of the target device at the time the first target image is captured. The angle may be determined using a target localization model, which may include, but is not limited to, a CNN (Convolutional Neural Network), an RNN (Recurrent Neural Network), and the like.
As an example, referring to fig. 5, first, the front-facing camera of the target device 501 may be controlled to capture an image, resulting in a first target image 502. Then, it may be determined whether the first target image 502 includes a target image area 503, where the target image area 503 includes an image of a human face wearing the head-mounted display device. Finally, in response to determining that the first target image 502 includes the target image area 503, the first target image 502 may be input into a target localization model 504 to obtain an angle 505 between the center point of the target image area 503 and the position point of the front-facing camera of the target device 501 at the time the first target image 502 is captured.
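The embodiments above obtain the angle from a learned target localization model; as a simpler, purely geometric illustration of what that angle represents, the sketch below maps the offset of the target image area's center from the image center to an angle off the camera's optical axis using an assumed field of view. The detect_face_with_hmd() helper and the field-of-view values are hypothetical, not part of the disclosed method.

```python
import math


def angle_off_axis(box, image_w, image_h, fov_h_deg=70.0, fov_v_deg=55.0):
    """Approximate the angle between the box center and the camera's optical axis.

    box: (x_min, y_min, x_max, y_max) of the target image area in pixels.
    fov_h_deg / fov_v_deg: assumed horizontal and vertical fields of view of the
    front-facing camera; real values depend on the device.
    """
    cx = (box[0] + box[2]) / 2.0
    cy = (box[1] + box[3]) / 2.0
    dx = cx / image_w - 0.5          # normalized horizontal offset in [-0.5, 0.5]
    dy = cy / image_h - 0.5          # normalized vertical offset in [-0.5, 0.5]
    yaw = dx * fov_h_deg             # horizontal angular offset, degrees
    pitch = dy * fov_v_deg           # vertical angular offset, degrees
    return math.hypot(yaw, pitch)    # combined angle off the optical axis


# Hypothetical usage:
# box = detect_face_with_hmd(first_target_image)   # stand-in for the detection model
# if box is not None:
#     angle = angle_off_axis(box, image_w=1920, image_h=1080)
```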
In some optional implementations of some embodiments, the execution body may also determine whether the target device and the head-mounted display device are in an opposing state through the following steps:
First, control the target device to display at least one marker graphic in the page currently displayed by the target device. The marker graphic may include, but is not limited to, a two-dimensional code, a special polygon, and the like.
Second, control the front-facing camera of the head-mounted display device to capture an image, obtaining a second target image.
Third, in response to determining that the at least one marker graphic is included in the second target image, determine, from the at least one marker graphic included in the second target image, the angle between the target device and the position point of the front-facing camera of the head-mounted display device at the time the second target image is captured. The second target image may be input into the target localization model to obtain the angle.
Optionally, before the target device is controlled to display the at least one marker graphic in the page currently displayed by the target device, the target device may be controlled to turn on its screen in response to determining that the target device is in a screen-off state.
Further, the execution body may determine that the target device and the head-mounted display device are in an opposing state in response to determining that the angle is within a preset angle range. In practice, the preset angle range may be set according to the actual application and is not limited here. As an example, the preset angle range may be [0°, 40°].
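Combining the pieces, the sketch below gates the opposing-state decision on the preset angle range and then joins it with the attitude angle deviation check, mirroring the flow of this embodiment. detect_marker_center() is a hypothetical helper standing in for the marker detection / target localization model, while angle_off_axis() and deviation_condition_met() refer to the earlier sketches.

```python
PRESET_ANGLE_RANGE = (0.0, 40.0)  # degrees, as in the example above


def in_opposing_state(second_target_image, image_w: int, image_h: int) -> bool:
    """Decide the opposing state from the marker position in the second target image."""
    center = detect_marker_center(second_target_image)  # hypothetical; None if absent
    if center is None:
        return False
    # Reuse the field-of-view approximation: treat the marker center as a
    # degenerate box and measure its angle off the camera's optical axis.
    box = (center[0], center[1], center[0], center[1])
    angle = angle_off_axis(box, image_w, image_h)
    low, high = PRESET_ANGLE_RANGE
    return low <= angle <= high


def should_update_display(target_angles, hmd_angles,
                          second_target_image, image_w, image_h) -> bool:
    """Steps 403-404: update only if the deviation condition and opposing state both hold."""
    return (deviation_condition_met(target_angles, hmd_angles)
            and in_opposing_state(second_target_image, image_w, image_h))
```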
In this way, whether the target device and the head-mounted display device are in an opposing state can be further confirmed by checking the angle, avoiding the misjudgment that could arise from relying only on the attitude angle deviation between the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device. The misjudgment rate is thereby reduced, and the user experience is further improved.
Step 404, in response to determining that the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device satisfy the attitude angle deviation detection condition and that the target device and the head-mounted display device are in an opposite state, updating the display content in the display screen of the head-mounted display device.
In some embodiments, for the specific implementation and technical effects of updating the display content in the display screen of the head-mounted display device in step 404, reference may be made to step 303 in the embodiments corresponding to fig. 3, which is not repeated here.
Step 405, in response to detecting that the opposing state between the target device and the head-mounted display device is eliminated, displaying the previously removed display content in the display screen of the head-mounted display device.
In some embodiments, in response to detecting that the opposing state between the target device and the head-mounted display device is eliminated, the previously removed display content is displayed in the display screen of the head-mounted display device. Here, the elimination of the opposing state may refer to the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device transitioning from satisfying the attitude angle deviation detection condition to no longer satisfying it.
Thus, once the opposing state between the target device and the head-mounted display device is eliminated, it can be determined that the user wearing the head-mounted display device no longer needs to directly view the content in the target device. Displaying the previously removed display content in the display screen of the head-mounted display device then keeps the content viewed by the user in the display screen consistent, further improving the user experience.
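The remove-and-restore behaviour of steps 404-405 can be sketched as a small controller; the renderer object with hide_content() and show_content() methods is again a hypothetical stand-in for the device's display API.

```python
class PresentationController:
    """Remove the display content while the opposing state holds; restore it afterwards."""

    def __init__(self, renderer):
        self.renderer = renderer
        self.removed = False

    def on_state_checked(self, condition_and_opposing: bool) -> None:
        if condition_and_opposing and not self.removed:
            self.renderer.hide_content()   # step 404: remove the display content
            self.removed = True
        elif not condition_and_opposing and self.removed:
            self.renderer.show_content()   # step 405: restore the removed content
            self.removed = False
```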
As can be seen from fig. 4, compared with the description of some embodiments corresponding to fig. 3, the flow 400 of the display content updating method in some embodiments corresponding to fig. 4 adds a step of further determining the opposing state. The solutions described in these embodiments can therefore further confirm whether the target device and the head-mounted display device are in an opposing state by checking the angle, avoiding the misjudgment that could arise from determining the opposing state only from the attitude angle deviation between the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device. The misjudgment rate is thereby reduced, and the user experience is further improved.
With further reference to fig. 6, a schematic structural diagram of a head-mounted display device 600 suitable for implementing some embodiments of the present disclosure is shown. The head-mounted display device shown in fig. 6 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the head-mounted display device 600 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 601, a memory 602, an input unit 603, and an output unit 604, where the processing device 601, the memory 602, the input unit 603, and the output unit 604 are connected to each other via a bus 605. The processing device 601 in the head-mounted display device implements the display content updating function defined in the method of the present disclosure by calling a computer program stored in the memory 602. In some implementations, the input unit 603 may include a sensor signal receiving device, so that the first sensing data for the target device sent by the first sensor can be received by the sensor signal receiving device in the input unit 603. The output unit 604 may include a display screen for displaying the display content.
While fig. 6 illustrates a head-mounted display device 600 having various components, it should be understood that not all illustrated components are required to be implemented or provided; more or fewer components may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or multiple devices, as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs and stored in the memory 602. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determining a three-axis attitude angle of a target device and a three-axis attitude angle of a head-mounted display device based on first sensing data sent by a first sensor and aiming at the target device and second sensing data sent by a second sensor and aiming at the head-mounted display device; determining whether the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device satisfy an attitude angle deviation detection condition; updating the display content in the display screen of the head-mounted display device in response to determining that the three-axis attitude angle of the target device and the three-axis attitude angle of the head-mounted display device satisfy the attitude angle deviation detection condition.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.