Wearable protector and scene presentation method for wearable protector


1. A wearable protector, comprising:

an augmented reality component for a target object to observe a real scene;

an infrared detection component configured to output an infrared image of the real scene, wherein a field of view corresponding to an imaging field angle of the infrared detection component and a field of view corresponding to an optical field angle of the augmented reality component overlap in the same direction, the optical field angle being the field angle at which the target object observes the real scene through the augmented reality component;

a processing component configured to process the infrared image based on a conversion between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component to obtain a target infrared image matched with the optical field angle of the augmented reality component, and to send the target infrared image to the augmented reality component;

the augmented reality component is further configured to present the target infrared image and enable a scene target existing in the target infrared image to correspond to a pose of the scene target in the real scene.

2. The wearable protector of claim 1, wherein the processing component is specifically configured to:

process the infrared image based on a conversion relationship between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component to obtain the target infrared image matched with the optical field angle of the augmented reality component;

wherein the conversion relationship at least defines an infrared image cropping size corresponding to a difference between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component.

3. The wearable protector of claim 2, wherein the processing component is specifically configured to:

determine a calibration reference object in the real scene;

calibrate a predetermined calibration conversion relationship between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component based on a difference between a distance of the calibration reference object relative to the wearable protector and a preset calibration distance of the wearable protector, to obtain a calibrated conversion relationship;

and process the infrared image based on the calibrated conversion relationship to obtain the target infrared image matched with the optical field angle of the augmented reality component.

4. The wearable protector of claim 1, wherein

the processing component is further configured to:

perform enhancement processing on the outline of the scene target appearing in the target infrared image, so that the outline of the scene target is displayed in an enhanced manner in the target infrared image presented by the augmented reality component;

alternatively,

the wearable protector further comprises a brightness detection component configured to detect ambient brightness in the real scene;

the processing component is further configured to: generate a backlight brightness adjustment instruction according to the ambient brightness, and send the backlight brightness adjustment instruction to the augmented reality component;

the augmented reality component is further configured to: adjust backlight brightness according to the backlight brightness adjustment instruction, so as to present the target infrared image based on the adjusted backlight brightness;

alternatively,

the wearable protector further comprises a position sensing component and a wireless communication component;

wherein the position sensing component is configured to sense spatial position information of the wearable protector;

the wireless communication component is configured to remotely transmit the spatial position information, so that a remote server maintains a movement track of the wearable protector by using the spatial position information, the remote server further being configured to present, in a following wearable protector, a visual navigation instruction generated based on the movement track, wherein the movement start time of the following wearable protector is later than the movement start time of the wearable protector;

alternatively,

the wearable protector further comprises a distance detection component configured to detect a distance of a scene target in the real scene relative to the wearable protector;

the processing component is further configured to: associate the distance detected by the distance detection component with the scene target, and send an association processing result to the augmented reality component;

the augmented reality component is further configured to: present the distance in association with the scene target based on the association processing result.

5. The wearable protector of claim 1, wherein

the processing component is further configured to: generate a switching instruction for an image presentation type of the target infrared image, and send the switching instruction to the augmented reality component;

the augmented reality component is further configured to: present the target infrared image according to the image presentation type corresponding to the switching instruction, wherein the image presentation type comprises a thermal image type or a grayscale image type.

6. The wearable protector of claim 5, wherein

the processing component is further configured to: generate the switching instruction according to temperature information of the real scene detected by the infrared detection component;

alternatively,

the wearable protector further comprises a mode switch configured to respond to an external touch operation, generate a switching request for the image presentation type of the target infrared image, and send the switching request to the processing component;

the processing component is further configured to: generate the switching instruction according to the switching request.

7. The wearable protector of claim 1, wherein

the wearable protector further comprises a gas detection component configured to detect a gas composition parameter of the environment in which the wearable protector is located;

the processing component is further configured to: generate a breathing switching instruction according to the gas composition parameter, and send the breathing switching instruction to a breathing valve;

the breathing valve is configured to: determine a communication state according to the breathing switching instruction, wherein the communication state comprises a state of communicating with the atmosphere of the environment in which the wearable protector is located or a state of communicating with a gas storage bottle.

8. The wearable protector of claim 7, wherein

the processing component is further configured to: generate breathing warning information according to the gas composition parameter, and send the breathing warning information to the augmented reality component;

the augmented reality component is further configured to: present the breathing warning information, wherein the breathing warning information comprises information content indicating that dangerous gas exists in the environment in which the wearable protector is located;

alternatively,

the processing component is further configured to: generate gas remaining amount prompt information according to the amount of gas remaining in the gas storage bottle, and send the gas remaining amount prompt information to the augmented reality component;

the augmented reality component is further configured to: present the gas remaining amount prompt information.

9. A scene presentation method for a wearable protector, comprising:

acquiring an infrared image of a real scene output by an infrared detection component, wherein a field of view corresponding to an imaging field angle of the infrared detection component and a field of view corresponding to an optical field angle of an augmented reality component overlap in the same direction, the augmented reality component is used by a target object to observe the real scene, and the optical field angle is the field angle at which the target object observes the real scene through the augmented reality component;

processing the infrared image based on a conversion between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component to obtain a target infrared image matched with the optical field angle of the augmented reality component;

and sending the target infrared image to the augmented reality component so that the augmented reality component presents the target infrared image, wherein the augmented reality component is further configured to make a scene target existing in the target infrared image correspond to the pose of the scene target in the real scene.

10. The scene presentation method according to claim 9, wherein the processing the infrared image based on a conversion between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component to obtain a target infrared image matched with the optical field angle of the augmented reality component specifically includes:

processing the infrared image based on a conversion relationship between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component to obtain the target infrared image matched with the optical field angle of the augmented reality component;

wherein the conversion relationship at least defines an infrared image cropping size corresponding to a difference between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component.

11. The scene presentation method according to claim 10, wherein the processing the infrared image based on a conversion relationship between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component to obtain the target infrared image matched with the optical field angle of the augmented reality component specifically includes:

determining a calibration reference object in the real scene;

calibrating a predetermined calibration conversion relationship between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component based on a difference between a distance of the calibration reference object relative to the wearable protector and a preset calibration distance of the wearable protector, to obtain a calibrated conversion relationship;

and processing the infrared image based on the calibrated conversion relationship to obtain the target infrared image matched with the optical field angle of the augmented reality component.

12. The scene presentation method according to claim 9,

further comprising: performing enhancement processing on the outline of the scene target appearing in the target infrared image so that the outline of the scene target is displayed in an enhanced manner in the target infrared image presented by the augmented reality component;

alternatively,

further comprising: generating a backlight brightness adjustment instruction according to ambient brightness in the real scene, and sending the backlight brightness adjustment instruction to the augmented reality component, wherein the augmented reality component is configured to adjust backlight brightness according to the backlight brightness adjustment instruction and present the target infrared image based on the adjusted backlight brightness;

alternatively,

further comprising: remotely transmitting spatial position information of the wearable protector, so that a remote server maintains a movement track of the wearable protector by using the spatial position information, the remote server further being configured to send a visual navigation instruction generated based on the movement track to a following wearable protector, wherein the movement start time of the following wearable protector is later than the movement start time of the wearable protector;

alternatively,

further comprising: associating a distance of a scene target in the real scene relative to the wearable protector with the scene target, and sending an association processing result to the augmented reality component, wherein the augmented reality component is configured to present the distance in association with the scene target based on the association processing result.

13. The scene presentation method according to claim 9, further comprising:

generating a switching instruction for an image presentation type of the target infrared image, and sending the switching instruction to the augmented reality component, wherein the augmented reality component is configured to present the target infrared image according to the image presentation type corresponding to the switching instruction, and the image presentation type comprises a thermal image type or a grayscale image type.

14. The scene presentation method according to claim 13, wherein generating the switching instruction for the image presentation type of the target infrared image comprises:

generating the switching instruction according to temperature information of the real scene; alternatively,

generating the switching instruction according to a switching request for the image presentation type of the target infrared image, the switching request being generated by a mode switch in response to an external touch operation.

15. The scene presentation method according to claim 9, further comprising:

generating a breathing switching instruction according to a gas composition parameter of the environment in which the wearable protector is located, and sending the breathing switching instruction to a breathing valve, wherein the breathing valve is configured to determine a communication state according to the breathing switching instruction, and the communication state comprises a state of communicating with the atmosphere of the environment in which the wearable protector is located or a state of communicating with a gas storage bottle.

16. The scene presentation method according to claim 15,

further comprising: generating breathing warning information according to the gas composition parameter, and sending the breathing warning information to the augmented reality component, wherein the augmented reality component is configured to present the breathing warning information, and the breathing warning information comprises information content indicating that dangerous gas exists in the environment in which the wearable protector is located;

alternatively,

further comprising: generating gas remaining amount prompt information according to the amount of gas remaining in the gas storage bottle, and sending the gas remaining amount prompt information to the augmented reality component, wherein the augmented reality component is configured to present the gas remaining amount prompt information.

Background

In some disaster scenarios, such as a fire scene, heavy smoke often spreads and may be accompanied by dazzling flames. These environmental factors cause visual interference, make it difficult to perceive the real environment clearly, and may lead to rescue failure.

Therefore, how to address the difficulty of clearly perceiving a real scene (e.g., a disaster scene) has become a technical problem to be solved in the prior art.

Disclosure of Invention

In order to solve, or at least partially solve, the above technical problem, embodiments of the present application provide a wearable protector and a scene presentation method for the wearable protector, so as to enhance the visual recognizability of a real scene.

The wearable protector provided in one embodiment may include: an augmented reality component for a target object to observe a real scene; an infrared detection component configured to output an infrared image of the real scene, wherein a field of view corresponding to an imaging field angle of the infrared detection component and a field of view corresponding to an optical field angle of the augmented reality component overlap in the same direction, and the optical field angle is the field angle at which the target object observes the real scene through the augmented reality component; and a processing component configured to process the infrared image based on a conversion between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component to obtain a target infrared image matched with the optical field angle of the augmented reality component, and to send the target infrared image to the augmented reality component; the augmented reality component is further configured to present the target infrared image and enable a scene target existing in the target infrared image to correspond to the pose of the scene target in the real scene.

Optionally, the processing component is specifically configured to: process the infrared image based on a conversion relationship between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component to obtain the target infrared image matched with the optical field angle of the augmented reality component; wherein the conversion relationship at least defines an infrared image cropping size corresponding to the difference between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component.

Optionally, the processing component is specifically configured to: determine a calibration reference object in the real scene; calibrate a predetermined calibration conversion relationship between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component based on a difference between a distance of the calibration reference object relative to the wearable protector and a preset calibration distance of the wearable protector, to obtain a calibrated conversion relationship; and process the infrared image based on the calibrated conversion relationship to obtain the target infrared image matched with the optical field angle of the augmented reality component.

Optionally, the processing component is further configured to: perform enhancement processing on the outline of the scene target appearing in the target infrared image, so that the outline of the scene target is displayed in an enhanced manner in the target infrared image presented by the augmented reality component.

Optionally, the wearable protector further comprises a brightness detection component configured to detect ambient brightness in the real scene; the processing component is further configured to: generate a backlight brightness adjustment instruction according to the ambient brightness, and send the backlight brightness adjustment instruction to the augmented reality component; the augmented reality component is further configured to: adjust backlight brightness according to the backlight brightness adjustment instruction, so as to present the target infrared image based on the adjusted backlight brightness.

Optionally, the wearable protector further comprises a position sensing component and a wireless communication component; the position sensing component is configured to sense spatial position information of the wearable protector; the wireless communication component is configured to remotely transmit the spatial position information, so that a remote server maintains a movement track of the wearable protector by using the spatial position information, the remote server further being configured to present, in a following wearable protector, a visual navigation instruction generated based on the movement track, wherein the movement start time of the following wearable protector is later than the movement start time of the wearable protector. For example, the position sensing component includes a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetic sensor.

Optionally, the wearable protector further comprises a distance detection component configured to detect a distance of a scene target in the real scene relative to the wearable protector; the processing component is further configured to: associate the distance detected by the distance detection component with the scene target, and send an association processing result to the augmented reality component; the augmented reality component is further configured to: present the distance in association with the scene target based on the association processing result.

Optionally, the processing component is further configured to: generate a switching instruction for an image presentation type of the target infrared image, and send the switching instruction to the augmented reality component; the augmented reality component is further configured to: present the target infrared image according to the image presentation type corresponding to the switching instruction, wherein the image presentation type comprises a thermal image type or a grayscale image type.

Optionally, the processing component is further configured to: generate the switching instruction according to temperature information of the real scene detected by the infrared detection component. Optionally, the wearable protector further comprises a mode switch configured to respond to an external touch operation, generate a switching request for the image presentation type of the target infrared image, and send the switching request to the processing component; the processing component is further configured to: generate the switching instruction according to the switching request.

Optionally, the wearable protector further comprises a gas detection component configured to detect a gas composition parameter of the environment in which the wearable protector is located; the processing component is further configured to: generate a breathing switching instruction according to the gas composition parameter, and send the breathing switching instruction to a breathing valve; the breathing valve is configured to: determine a communication state according to the breathing switching instruction, wherein the communication state comprises a state of communicating with the atmosphere of the environment in which the wearable protector is located or a state of communicating with a gas storage bottle.

Optionally, the wearable protector further comprises a gas detection component configured to detect a gas composition parameter of the environment in which the wearable protector is located; the processing component is further configured to: generate breathing warning information according to the gas composition parameter, and send the breathing warning information to the augmented reality component; the augmented reality component is further configured to: present the breathing warning information, wherein the breathing warning information comprises information content indicating that dangerous gas exists in the environment in which the wearable protector is located.

Optionally, the processing component is further configured to: generate gas remaining amount prompt information according to the amount of gas remaining in the gas storage bottle, and send the gas remaining amount prompt information to the augmented reality component; the augmented reality component is further configured to: present the gas remaining amount prompt information.

The scene presentation method for a wearable protector provided in another embodiment may include: acquiring an infrared image of a real scene output by an infrared detection component, wherein a field of view corresponding to an imaging field angle of the infrared detection component and a field of view corresponding to an optical field angle of an augmented reality component overlap in the same direction, the augmented reality component is used by a target object to observe the real scene, and the optical field angle is the field angle at which the target object observes the real scene through the augmented reality component; processing the infrared image based on a conversion between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component to obtain a target infrared image matched with the optical field angle of the augmented reality component; and sending the target infrared image to the augmented reality component so that the augmented reality component presents the target infrared image, wherein the augmented reality component is further configured to make a scene target existing in the target infrared image correspond to the pose of the scene target in the real scene.

Optionally, the processing the infrared image based on a conversion between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component to obtain a target infrared image matched with the optical field angle of the augmented reality component specifically includes: processing the infrared image based on a conversion relationship between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component to obtain the target infrared image matched with the optical field angle of the augmented reality component; wherein the conversion relationship at least defines an infrared image cropping size corresponding to the difference between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component.

Optionally, the processing the infrared image based on a conversion relationship between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component to obtain the target infrared image matched with the optical field angle of the augmented reality component specifically includes: determining a calibration reference object in the real scene; calibrating a predetermined calibration conversion relationship between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component based on a difference between a distance of the calibration reference object relative to the wearable protector and a preset calibration distance of the wearable protector, to obtain a calibrated conversion relationship; and processing the infrared image based on the calibrated conversion relationship to obtain the target infrared image matched with the optical field angle of the augmented reality component.

Optionally, the method further comprises: performing enhancement processing on the outline of the scene target appearing in the target infrared image so that the outline of the scene target is displayed in an enhanced manner in the target infrared image presented by the augmented reality component.

Alternatively, the method further comprises: generating a backlight brightness adjustment instruction according to ambient brightness in the real scene, and sending the backlight brightness adjustment instruction to the augmented reality component, wherein the augmented reality component is configured to adjust backlight brightness according to the backlight brightness adjustment instruction and present the target infrared image based on the adjusted backlight brightness.

Alternatively, the method further comprises: remotely transmitting spatial position information of the wearable protector, so that a remote server maintains a movement track of the wearable protector by using the spatial position information, the remote server further being configured to send a visual navigation instruction generated based on the movement track to a following wearable protector, wherein the movement start time of the following wearable protector is later than the movement start time of the wearable protector.

Alternatively, the method further comprises: associating a distance of a scene target in the real scene relative to the wearable protector with the scene target, and sending an association processing result to the augmented reality component, wherein the augmented reality component is configured to present the distance in association with the scene target based on the association processing result.

Optionally, the method further comprises: generating a switching instruction for an image presentation type of the target infrared image, and sending the switching instruction to the augmented reality component, wherein the augmented reality component is configured to present the target infrared image according to the image presentation type corresponding to the switching instruction, and the image presentation type comprises a thermal image type or a grayscale image type.

Optionally, generating the switching instruction for the image presentation type of the target infrared image includes:

generating the switching instruction according to temperature information of the real scene; or generating the switching instruction according to a switching request for the image presentation type of the target infrared image, the switching request being generated by a mode switch in response to an external touch operation.

Optionally, the method further comprises: generating a breathing switching instruction according to a gas composition parameter of the environment in which the wearable protector is located, and sending the breathing switching instruction to a breathing valve, wherein the breathing valve is configured to determine a communication state according to the breathing switching instruction, and the communication state comprises a state of communicating with the atmosphere of the environment in which the wearable protector is located or a state of communicating with a gas storage bottle.

Optionally, the method further comprises: generating breathing warning information according to the gas composition parameter, and sending the breathing warning information to the augmented reality component, wherein the augmented reality component is configured to present the breathing warning information, and the breathing warning information comprises information content indicating that dangerous gas exists in the environment in which the wearable protector is located.

Alternatively, the method further comprises: generating gas remaining amount prompt information according to the amount of gas remaining in the gas storage bottle, and sending the gas remaining amount prompt information to the augmented reality component, wherein the augmented reality component is configured to present the gas remaining amount prompt information.

Another embodiment further provides a non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the scene presentation method for a wearable protector as described in the foregoing embodiments.

Based on the above embodiments, the wearable protector has an augmented reality component through which the target object observes the real scene, and an infrared detection component that outputs an infrared image of the real scene. The infrared image, which the infrared detection component produces by sensing the temperature of scene targets in the real scene, is converted by the processing component into a target infrared image matched with the optical field angle of the augmented reality component. By presenting the converted target infrared image, the augmented reality component can reinforce a correct reproduction of the real scene within the target object's field of view, thereby enhancing the visual recognizability of the real scene and helping to alleviate or even eliminate the difficulty of clearly perceiving the real scene.

Drawings

The following drawings are only schematic illustrations and explanations of the present application, and do not limit the scope of the present application:

Fig. 1 is an exemplary partial structural schematic of a wearable protector in one embodiment;

Fig. 2 is a schematic diagram of a simplified presentation mechanism suitable for the wearable protector shown in Fig. 1;

Fig. 3 is a schematic diagram of a preferred presentation mechanism of the wearable protector shown in Fig. 1;

Figs. 4a and 4b are schematic diagrams of the preferred presentation mechanism shown in Fig. 3 with a backlight adjustment function further introduced;

Fig. 5 is a schematic diagram of the preferred presentation mechanism shown in Fig. 3 with a mode switching function further introduced;

Fig. 6 is a schematic diagram of the preferred presentation mechanism shown in Fig. 3 with a local enhancement function further introduced;

Figs. 7a and 7b are schematic diagrams of the preferred presentation mechanism shown in Fig. 3 with a distance visualization function further introduced;

Fig. 8 is a schematic diagram of a position reporting mechanism of the wearable protector shown in Fig. 1;

Fig. 9 is a schematic diagram of a trajectory navigation mechanism of the wearable protector shown in Fig. 1;

Fig. 10 is a schematic diagram of a respiratory protection mechanism of the wearable protector shown in Fig. 1;

Fig. 11 is a perspective view of an example structure of the wearable protector shown in Fig. 1;

Fig. 12 is an exploded view of the example structure shown in Fig. 11;

Fig. 13 is an exemplary flow diagram of a scene presentation method for a wearable protector in another embodiment.

Detailed Description

In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is further described in detail below by referring to the accompanying drawings and examples.

Fig. 1 is an exemplary partial structural schematic of a wearable protector in one embodiment. Referring to fig. 1, in this embodiment, the wearable protector may have an augmented reality component 30, an infrared detection component 51, and a processing component 500.

The augmented reality component 30 is used for the target object to observe a real scene. Specifically, the augmented reality component 30 may include a frame 31 and an optical waveguide lens 32, and the optical waveguide lens 32 may be embedded in the frame 31. Since the optical waveguide lens 32 includes an optically transparent dielectric substrate and a dielectric film covering both side surfaces of the substrate, it is light-transmissive, so that the target object can observe the real scene through the optical waveguide lens 32. Illustratively, the target object may be a rescuer, or another device having a scene observation function. The real scene may be a disaster scene such as a fire or a dense-smoke environment, or any other scene in which environmental detection can be performed with an infrared detection component.

The augmented reality component 30 has a preconfigured optical field angle FOV_ar, which is the field angle at which the target object observes the real scene through the augmented reality component 30 (i.e., the optical waveguide lens 32), that is, the intersection of the target object's original field of view and the light-receiving range of the optical waveguide lens 32 when the target object wears the augmented reality component 30. Taking a rescuer wearing the wearable protector as an example of the target object, both eyes of the rescuer can observe the real scene (e.g., a disaster scene such as a fire environment) through the augmented reality component 30 (i.e., the optical waveguide lens 32) within the field of view corresponding to the optical field angle FOV_ar. The optical field angle FOV_ar of the augmented reality component 30 can be configured by setting the dimensional specification of the optical waveguide lens 32 and the distance of the optical waveguide lens 32 from the target object (i.e., the dimensional specification of the frame 31).
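As a rough illustration of this geometric relationship, the sketch below estimates the field angle subtended by a flat lens aperture of a given width seen from a given eye-to-lens distance. The function name, the flat-aperture approximation, and the example dimensions are illustrative assumptions rather than values from this embodiment.

```python
import math

def field_angle_deg(aperture_width_mm: float, viewing_distance_mm: float) -> float:
    """Approximate field angle (degrees) subtended by a flat aperture of the
    given width, viewed from the given distance along its central axis
    (illustrative flat-aperture assumption)."""
    return math.degrees(2.0 * math.atan(aperture_width_mm / (2.0 * viewing_distance_mm)))

# Example: a 40 mm wide waveguide lens viewed from 25 mm subtends roughly 77 degrees.
print(round(field_angle_deg(40.0, 25.0), 1))
```

In this approximation, enlarging the lens or shortening the eye-to-lens distance widens the angle, which is why both the lens and frame dimensions appear in the configuration described above.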

The augmented reality component 30 may further include a display driving module 33, which may be mounted on the outside of the frame 31 and electrically connected to the optical waveguide lens 32. The display driving module 33 includes at least a light engine that emits light waves into the optical waveguide lens 32; the light waves are confined by the dielectric film of the optical waveguide lens 32 and propagate in the optically transparent dielectric substrate, so that an image can be presented on the optical waveguide lens 32 without obstructing the target object's view of the real scene, thereby achieving augmentation of reality.

The infrared detection component 51 is used for outputting an infrared image of the real scene, for example, acquiring an infrared image in a dense-smoke environment. Specifically, the infrared detection component 51 may include an infrared detector integrated with a sensor array for infrared imaging; by sensing the temperature in the real scene with the sensor array, an infrared image is obtained, and the pixel values of the infrared image are determined according to the temperature. The mounting position of the infrared detection component 51 is not particularly limited; for example, it may be mounted on one side of the frame 31.
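The exact mapping from sensed temperature to pixel value is not detailed here; the sketch below assumes a simple linear normalization of a temperature array over a fixed window, purely to make the idea concrete. The window bounds are invented for illustration.

```python
import numpy as np

def temperatures_to_gray(temps_c: np.ndarray, t_min: float = 0.0, t_max: float = 150.0) -> np.ndarray:
    """Map a 2-D array of temperatures (deg C) to 8-bit grayscale pixel values
    by linear normalization over [t_min, t_max] (illustrative window)."""
    clipped = np.clip(temps_c, t_min, t_max)
    scaled = (clipped - t_min) / (t_max - t_min)  # 0.0 .. 1.0
    return (scaled * 255.0).astype(np.uint8)

# Example: a 3x3 patch where a warm body (~36.5 deg C) stands out against a ~20 deg C background.
patch = np.array([[20.0, 22.0, 21.0],
                  [20.0, 36.5, 22.0],
                  [19.0, 21.0, 20.0]])
print(temperatures_to_gray(patch))
```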

The infrared detection component 51 has a preconfigured imaging field angle FOV_inf, which is determined by the field-of-view range of the infrared detector's lens. In general, the imaging field angle FOV_inf of the infrared detection component 51 may be larger than the optical field angle FOV_ar of the augmented reality component 30.

Moreover, the field of view corresponding to the imaging field angle FOV_inf of the infrared detection component 51 and the field of view corresponding to the optical field angle FOV_ar of the augmented reality component 30 overlap in the same direction. Being "in the same direction" can be understood to mean that the two fields of view face the same side of the wearable protector; "overlap" can be understood to mean that the two fields of view overlap with each other within a specified distance range relative to the wearable protector (a range related, in particular, to the preset calibration distance of the wearable protector).

As shown in fig. 2, the infrared image 60 output by the infrared detection component 51 may contain a scene target 40 in the real scene. If the infrared image 60 is presented on the augmented reality component 30, that is, presented fused with the real scene observed by the target object, the light engine included in the display driving module 33 may generate light waves to the optical waveguide lens 32 according to the infrared image 60, producing a visual effect of enhanced presentation of the scene target 40 in the real scene observed by the target object. The scene targets in the real scene mentioned in the embodiments of the present application may include any objects appearing in the real scene and may be subdivided according to the specific type of the real scene. For example, for disaster scenes such as fire and dense-smoke environments, scene targets may include people, road obstacles, vehicles, furniture or home appliances, and building structures (e.g., entrances, windows, walls, etc.).

However, if the infrared image 60 is directly provided to the augmented reality component 30 for presentation, then, due to the difference between the optical field angle of the augmented reality component 30 and the imaging field angle of the infrared detection component 51, a scene target in the infrared image 60 presented by the augmented reality component 30 may deviate from the pose of that scene target in the real scene, thereby interfering with, or even misleading, the target object's correct recognition of the real scene.

As described above, the optical field angle FOV_ar (including a horizontal field angle and a vertical field angle) of the augmented reality component 30 may be set according to the dimensional specification of the optical waveguide lens 32 and its distance from the target object. Such a setting generally takes wearing comfort into consideration and is generally smaller than the imaging field angle FOV_inf (including a horizontal field angle and a vertical field angle) of the infrared detection component 51. Accordingly, if the infrared image 60 formed in the field of view corresponding to the imaging field angle FOV_inf were directly presented on the augmented reality component 30, the entire image would be compressed to fit the optical field angle FOV_ar, and this compression could cause inappropriate deformation or displacement of the scene target. That is, if the infrared image 60 is directly provided to the augmented reality component 30 for presentation, the scene target 40 in the infrared image 60 is rendered to the target object with distortion because of the difference between the optical field angle FOV_ar of the augmented reality component 30 and the imaging field angle FOV_inf of the infrared detection component 51.

Fig. 2 is a schematic diagram of a simplified presentation mechanism suitable for the wearable protector shown in fig. 1, taking as an example human eyes observing a real scene through the augmented reality component 30; this should not be construed as a specific limitation on the embodiments of the present application. Referring to fig. 2 in conjunction with fig. 1, fig. 2 shows the effect of directly providing the infrared image 60 to the augmented reality component 30. As can be seen from fig. 2, the augmented reality component 30 can present an image contour 41' of the scene target 40 in the infrared image 60 within the field of view corresponding to the optical field angle FOV_ar, but because of the field-angle difference, the image contour 41' may deviate severely from the projection contour 42 of the scene target 40 on the augmented reality component 30 (i.e., the optical waveguide lens 32) along the target object's observation light path. As a result, in the visual perception of the target object, the image contour 41' reflects a ghost target 40' with a false pose and cannot reflect the real pose of the scene target 40.

If the real scene retains some visibility for the target object, the ghost target 40' will disturb the target object's judgment of the scene target 40; for example, in a fire rescue scenario, the rescue of trapped persons by rescuers may be delayed.

If the visibility of the real scene for the target object is very low, the target object can only observe the position-distorted ghost target 40'; in that case, for the target object, the infrared image 60 directly presented in the augmented reality component 30 amounts to a VR (Virtual Reality) effect that deviates from the real scene.

It can be seen that, regardless of the visibility of the real scene, directly providing the infrared image 60 to the augmented reality component 30 for presentation is not conducive to the target object's understanding of the real environment.

Accordingly, in this embodiment, the infrared image 60 is processed by the processing component 500 to obtain a target infrared image for presentation on the augmented reality component 30. The processing component 500 may include a processor with image processing capability, or it may include a controller without image processing capability together with a processor that has image processing capability; for example, the processor with image processing capability may be a DSP (Digital Signal Processor), a GPU (Graphics Processing Unit), or the like.

Fig. 3 is a schematic diagram of a preferred presentation mechanism of the wearable protector shown in fig. 1. Referring to fig. 3 in conjunction with fig. 1, the processing component 500 is configured to acquire the infrared image 60 output by the infrared detection component 51, process the infrared image 60 based on the conversion between the imaging field angle FOV_inf of the infrared detection component 51 and the optical field angle FOV_ar of the augmented reality component 30 to obtain the target infrared image 80 matched with the optical field angle FOV_ar of the augmented reality component 30, and send the target infrared image 80 to the augmented reality component 30.

In order to process the infrared image 60 based on the conversion between the imaging field angle FOV_inf of the infrared detection component 51 and the optical field angle FOV_ar of the augmented reality component 30, so as to obtain the target infrared image 80 matched with the optical field angle FOV_ar of the augmented reality component 30, the processing component 500 may be specifically configured to:

based on the conversion relationship between the imaging field angle FOV _ inf of the infrared detection component 51 and the optical field angle FOV _ ar of the augmented reality component 30, the infrared image 60 is processed to obtain the target infrared image 80 matched with the optical field angle FOV _ ar of the augmented reality component 30. The conversion relationship is at least used to define an infrared image cropping size corresponding to a difference between the imaging field angle FOV _ inf of the infrared detection component 51 and the optical field angle FOV _ ar of the augmented reality component 30. It should be understood that if other image editing operations, such as rotation, translation, etc., are required to be performed on the infrared image 60 to obtain the target infrared image 80 matching the optical field angle FOV _ ar of the augmented reality component 30, the foregoing conversion relationship may also be defined to include the rotation amount and the translation amount of the infrared image.

The conversion relationship between the imaging field angle FOV_inf of the infrared detection component 51 and the optical field angle FOV_ar of the augmented reality component 30 may be a predetermined calibration conversion relationship, or a calibrated conversion relationship obtained by calibrating (correcting) that predetermined relationship during actual use of the wearable protector. The predetermined calibration conversion relationship may be determined by calibrating the wearable protector (i.e., calibrating the infrared detection component 51 and the augmented reality component 30) without considering a deployment position deviation between the infrared detection component 51 and the augmented reality component 30, or by calibrating the wearable protector with the deployment position deviation between the infrared detection component 51 and the augmented reality component 30 taken into account.

For example, the wearable protector may be calibrated by a technician before shipment, e.g., using a preset calibration object placed in a calibration environment at a preset calibration distance from the wearable protector, so as to determine the calibration conversion relationship either without or with consideration of the deployment position deviation between the infrared detection component 51 and the augmented reality component 30. The deployment position deviation refers to the fact that the infrared detection component 51 and the augmented reality component 30 (i.e., the optical waveguide lens 32) are installed at mutually offset positions on the wearable protector, that is, their optical axes do not coincide, so that each can perform its own function. When the deployment position deviation has only a small influence on how the infrared image 60 is presented in the augmented reality component 30, it may be ignored.

The difference between the imaging field angle FOV_inf of the infrared detection component 51 and the optical field angle FOV_ar of the augmented reality component 30 includes a horizontal field angle component difference and a vertical field angle component difference; accordingly, in the foregoing conversion relationship, the infrared image cropping size includes a cropping size in the image horizontal direction and/or the image longitudinal direction (i.e., the direction perpendicular to the horizontal direction).

Specifically, if the vertical field angle component of the imaging field angle FOV_inf of the infrared detection component 51 is greater than the vertical field angle component of the optical field angle FOV_ar of the augmented reality component 30, the cropping size of the infrared image 60 in the image longitudinal direction may be determined according to the difference between the two vertical field angle components; similarly, if the horizontal field angle component of the imaging field angle FOV_inf of the infrared detection component 51 is greater than the horizontal field angle component of the optical field angle FOV_ar of the augmented reality component 30, the cropping size of the infrared image 60 in the image horizontal direction may be determined according to the difference between the two horizontal field angle components. For example, a correspondence between the vertical and horizontal field angle component differences of the infrared detection component 51 and the augmented reality component 30 and the image cropping size may be preset; after the vertical and horizontal field angle component differences are determined, the image cropping size in the image longitudinal direction or the image horizontal direction is calculated based on this correspondence. In general, the cropping sizes on the two sides of the infrared image 60 may be the same in the image longitudinal direction or the image horizontal direction.
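One way to realize such a correspondence is sketched below under a simple pinhole-projection assumption, with the optical axis at the image center. The field angles and image size in the example are invented for illustration; the embodiment only requires that some preset correspondence between the field-angle difference and the cropping size be used.

```python
import math

def symmetric_crop_px(src_px: int, fov_src_deg: float, fov_dst_deg: float) -> int:
    """Pixels to crop from EACH side of one image axis so that the remaining
    extent covers fov_dst_deg instead of fov_src_deg, assuming an ideal
    pinhole projection centered on the optical axis (illustrative model)."""
    if fov_dst_deg >= fov_src_deg:
        return 0  # nothing to crop on this axis
    keep = src_px * math.tan(math.radians(fov_dst_deg) / 2) / math.tan(math.radians(fov_src_deg) / 2)
    return int(round((src_px - keep) / 2))

# Example: 640x512 infrared image, FOV_inf = 56x42 deg, FOV_ar = 40x30 deg (assumed values).
crop_x = symmetric_crop_px(640, 56.0, 40.0)   # per-side crop in the image horizontal direction
crop_y = symmetric_crop_px(512, 42.0, 30.0)   # per-side crop in the image longitudinal direction
print(crop_x, crop_y)  # the target infrared image is (640 - 2*crop_x) x (512 - 2*crop_y)
```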

Based on the conversion relationship, the target infrared image 80 can be obtained quickly during use of the wearable protector. As shown in fig. 3, an edge portion of the infrared image 60 may be cropped away; the cropped portion is represented in fig. 3 as the shaded region surrounding the target infrared image 80, so that the target infrared image 80 has an overall image size matching the optical field angle FOV_ar of the augmented reality component 30. The size scale and shape of the scene target 40 in the infrared image 60 are not changed by the cropping, and the scene target 40 in the cropped target infrared image 80 is not improperly deformed or displaced when presented on the augmented reality component 30.

It is further preferable to consider that there may be a deployment position deviation between the infrared detection component 51 and the augmented reality component 30 (i.e., the optical waveguide lens 32). Such a deviation may result in a positional offset between the optical axis of the infrared detection component 51 and the optical axis of the augmented reality component 30 (i.e., the optical waveguide lens 32), or the two optical axes may intersect. Consequently, after the infrared image 60 is processed into the target infrared image 80 without considering the deployment position deviation, the scene target 40 presented in the target infrared image 80 on the augmented reality component 30 may still be deformed or offset relative to its pose in the real scene. It can be understood that, when the deployment position deviation between the infrared detection component 51 and the augmented reality component 30 (i.e., the optical waveguide lens 32) is not considered, the positional offset between the two optical axes is by default regarded as negligible, or the optical axes are regarded as parallel.

To reduce or even eliminate the deformation or positional offset of the scene target 40 presented on the augmented reality component 30 caused by the deployment position deviation, the infrared image 60 may be processed with a calibration conversion relationship determined by calibrating the wearable protector with the deployment position deviation between the infrared detection component 51 and the augmented reality component 30 taken into account, so as to obtain a target infrared image 80 matched with the optical field angle FOV_ar of the augmented reality component 30. In this case, the calibration conversion relationship defines at least a compensation amount of the infrared image cropping size determined based on the deployment position deviation between the infrared detection component 51 and the augmented reality component 30. The deployment position deviation includes a deviation in the vertical direction and/or the horizontal direction; accordingly, the compensation amount of the infrared image cropping size includes a cropping-size compensation amount in the image longitudinal direction or the image horizontal direction.

Illustratively, the infrared image 60 may be cropped with asymmetric cropping sizes on the two sides (i.e., in the image longitudinal direction or the image horizontal direction, the cropping sizes on the two sides of the image may differ), so as to compensate for the deployment position deviation (in the vertical direction and/or the horizontal direction) between the infrared detection component 51 and the augmented reality component 30 (i.e., the optical waveguide lens 32) and to avoid the deformation or offset that might otherwise occur when the scene target 40 is presented on the augmented reality component 30.
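A minimal sketch of such asymmetric cropping follows: the total crop along one axis is split unevenly between the two sides according to a pixel offset derived from the deployment position deviation. The offset value and the sign convention are illustrative assumptions.

```python
def split_crop(total_crop_px: int, offset_px: int) -> tuple[int, int]:
    """Split a total per-axis crop into (low-side, high-side) amounts.
    A positive offset_px crops more from the low-coordinate side, shifting the
    kept window toward the high-coordinate side to compensate a deployment
    position deviation in that direction (illustrative sign convention)."""
    low = total_crop_px // 2 + offset_px
    low = max(0, min(total_crop_px, low))  # keep both sides non-negative
    high = total_crop_px - low
    return low, high

# Example: 154 px must be removed horizontally, with a 12 px compensation toward the right;
# the cropped image would then be img[:, left:img.shape[1] - right].
left, right = split_crop(154, 12)
print(left, right)  # 89 65
```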

In general, the deformation or offset of the scene target 40 in the target infrared image 80 presented on the augmented reality component 30 caused by the above deployment position deviation varies slightly with the distance of the scene target 40 from the wearable protector. When the distance of a scene target 40 in the real scene from the wearable protector differs from the preset calibration distance between the wearable protector and the preset calibration object used during calibration, the accuracy of the predetermined calibration conversion relationship may decrease. Therefore, when the deployment position deviation between the infrared detection component 51 and the augmented reality component 30 is taken into account, a calibration reference object may be determined in the real scene during actual use of the wearable protector, and the predetermined calibration conversion relationship may be calibrated in the field based on the difference between the distance of the calibration reference object from the wearable protector and the preset calibration distance of the wearable protector, so as to improve the accuracy of the cropping of the infrared image 60. Through such on-site calibration, when the distance of the scene target 40 from the wearable protector deviates from the preset calibration distance, the deviation of the scene target 40 in the target infrared image 80 presented on the augmented reality component 30 relative to the real scene can be reduced or even eliminated.

That is, in one embodiment, in order to take the deployment position deviation between the infrared detection component 51 and the augmented reality component 30 into account, the processing component 500 is specifically configured to: determine a calibration reference object in the real scene; calibrate a predetermined calibration conversion relationship between the imaging field angle FOV_inf of the infrared detection component 51 and the optical field angle FOV_ar of the augmented reality component 30 based on the difference between the determined distance of the calibration reference object relative to the wearable gear and the preset calibration distance of the wearable gear, so as to obtain a calibration conversion relationship; and process the infrared image 60 based on the calibration conversion relationship to obtain the target infrared image 80 matching the optical field angle FOV_ar of the augmented reality component 30.

The calibration conversion relationship includes a calibration of the infrared image cropping size compensation amount determined based on the deployment position deviation between the infrared detection component 51 and the augmented reality component 30. For example, a correspondence may be preset between the difference between the distance of the calibration reference object relative to the wearable gear and the preset calibration distance of the wearable gear, on the one hand, and the calibration amount of the infrared image cropping size compensation amount, on the other hand; once the distance difference is determined, the calibration amount of the infrared image cropping size compensation amount is calculated based on this correspondence. Alternatively, a correspondence may be preset between that distance difference and the calibration amount of the infrared image cropping size that already includes the cropping size compensation amount; once the distance difference is determined, the calibration amount of the infrared image cropping size is calculated based on this correspondence, thereby obtaining the required calibration conversion relationship.
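
One possible, non-limiting way to encode the preset correspondence described above is a small lookup table interpolated over the distance difference; the sample values below are placeholders rather than calibrated data:

```python
import numpy as np

# Hypothetical preset correspondence: distance difference (m) between the
# calibration reference object's distance and the preset calibration distance,
# versus the calibration amount (pixels) applied to the cropping-size
# compensation amount.
DIFF_SAMPLES_M = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 4.0])
CALIB_AMOUNT_PX = np.array([6.0, 3.0, 0.0, -2.0, -3.0, -4.0])

def calibration_amount(reference_distance_m: float,
                       preset_calibration_distance_m: float) -> int:
    """Interpolate the cropping-size calibration amount for the measured
    distance difference, clamping to the table's range."""
    diff = reference_distance_m - preset_calibration_distance_m
    diff = float(np.clip(diff, DIFF_SAMPLES_M[0], DIFF_SAMPLES_M[-1]))
    return int(round(np.interp(diff, DIFF_SAMPLES_M, CALIB_AMOUNT_PX)))

# E.g. the calibration reference object is 3.5 m away while the gear was
# calibrated at 2.0 m: the compensation amount is adjusted by the result.
print(calibration_amount(3.5, 2.0))
```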

In this case, the wearable gear may further include a device, such as an eye tracker, for tracking and detecting the attention position of the target object within the field of view corresponding to the optical field angle FOV_ar of the augmented reality component 30, and the processing component 500 may determine the scene target located at the detected attention position of the target object as the calibration reference object in the real scene. That is, in the embodiment of the present application, as the attention position or viewing direction of the target object changes, the calibration reference object may change accordingly.

Alternatively, the processing component 500 may determine a scene object appearing in the infrared image 60 as a calibration reference object in the real scene. If only one scene target appears in the infrared image 60, the scene target is a calibration reference object in a real scene; if a plurality of scene objects appear in the infrared image 60, the processing component 500 may select the scene object according to the image coordinates of the scene object in the infrared image 60, and/or the area of the scene object in the infrared image 60, and/or the object type of the scene object, for example, the processing component 500 may preferentially select the scene object closest to the center of the infrared image 60, and/or preferentially select the scene object with the largest area in the infrared image 60, and/or preferentially select the scene object with a humanoid outline, and the like, which may be configured according to requirements. In this case, as the detection direction of the infrared detection assembly 51 changes, the calibration reference object may also change accordingly.
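
A minimal sketch of one possible priority rule combining the criteria above (proximity to the image center, area, and a humanoid-contour flag); the data structure and scoring weights are assumptions that would in practice be configured according to requirements:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SceneObject:
    cx: float          # detection centre in image coordinates (pixels)
    cy: float
    area: float        # detection area in pixels
    humanoid: bool     # True if the contour was classified as humanoid

def pick_calibration_reference(objects: List[SceneObject],
                               image_w: int,
                               image_h: int) -> Optional[SceneObject]:
    """Pick the calibration reference object among the scene objects found in
    the infrared image: prefer humanoid contours, then proximity to the image
    centre, then larger area (weights are illustrative only)."""
    if not objects:
        return None
    if len(objects) == 1:
        return objects[0]

    half_diag = ((image_w ** 2 + image_h ** 2) ** 0.5) / 2.0

    def score(obj: SceneObject) -> float:
        dist_to_centre = ((obj.cx - image_w / 2) ** 2 +
                          (obj.cy - image_h / 2) ** 2) ** 0.5
        centrality = 1.0 - dist_to_centre / half_diag   # 1.0 at the centre
        area_ratio = obj.area / (image_w * image_h)     # fraction of the frame
        return 2.0 * obj.humanoid + 1.0 * centrality + 1.0 * area_ratio

    return max(objects, key=score)
```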

By calibrating the predetermined calibration conversion relationship between the imaging field angle FOV_inf of the infrared detection component 51 and the optical field angle FOV_ar of the augmented reality component 30 based on the calibration reference object in the real scene, and processing the infrared image 60 with the calibrated calibration conversion relationship to obtain the target infrared image 80, the posture deformation amount or posture offset amount of the scene target in the target infrared image 80 presented on the augmented reality component 30 relative to the real scene can be minimized. In particular, when the target infrared image 80 displayed on the augmented reality component 30 contains the scene target determined as the calibration reference object, the deformation amount or offset of the posture of that calibration reference object in the target infrared image 80 relative to its posture in the real scene may approach zero, compared with other scene targets. That is, the augmented reality component 30 can achieve a better scene presentation effect.

In addition, the processing component 500 may acquire the distance of the calibration reference object relative to the wearable brace in any manner, for example, as shown in fig. 7a and 7b, the wearable brace may further include a distance detection component 53, which may be used to detect the distance of any calibration reference object in the real scene relative to the wearable brace, and send it to the processing component 500.

Regardless of the manner in which the processing component 500 processes the infrared image 60 based on the above-described conversion, the scene target 40 keeps the same pose in the target infrared image 80 as it has in the infrared image 60.

Moreover, when presenting the target infrared image 80, the augmented reality component 30 causes the scene target 40 existing in the target infrared image 80 to correspond to the pose of that scene target 40 in the real scene. The pose correspondence may at least include that, when the target object observes the real scene through the augmented reality component 30, the scene target 40 existing in the target infrared image 80 coincides with a preset identification position of the scene target 40 in the real scene, where the preset identification position may be, for example, the central position of the scene target or another specific position having a position identification function, and may be preset as needed. The pose correspondence may further include that, when the target object observes the real scene through the augmented reality component 30, the scene target 40 existing in the target infrared image 80 overlaps with a target envelope region of the scene target 40 in the real scene, where the target envelope region may be a local region of the scene target or the entire region of the scene target. Further, the overlap of the target envelope region may be a local overlap in which the overlap ratio exceeds a preset proportion threshold (for example, 50%), or a complete overlap in which the overlap ratio approaches 100%; complete overlap of the target envelope region is optimal, in that the scene target 40 existing in the target infrared image 80 then coincides with the contour of the scene target 40 in the real scene.

Thus, if visibility in the real scene is sufficient for the scene target to be discernible to the target object, then when the target infrared image 80 is presented on the augmented reality component 30 (i.e., the optical waveguide lens 32), the image contour 41 of the scene target 40 in the target infrared image 80 coincides with the projection contour 42 of the scene target 40 on the augmented reality component 30 (i.e., the optical waveguide lens 32) along the observation optical path of the target object, so as to enhance the target object's ability to recognize the scene target 40 in the real scene. The coincidence of the image contour 41 and the projection contour 42 mentioned here means that the two approach a theoretically perfect coincidence, without excluding deviations in contour details.

If visibility in the real scene (e.g., a disaster scene such as a fire) is extremely low, so that the scene target cannot be discerned by the target object, then when the target infrared image 80 is presented on the augmented reality component 30 (i.e., the optical waveguide lens 32), the scene target 40 can substantially and truly reproduce its corresponding pose in the real scene, so that the target object can identify the scene target 40 in the real scene relying entirely on the target infrared image 80.

It can be seen that the infrared image 60, which the infrared detection component 51 outputs by sensing the temperature of scene targets in the real scene, can be converted by the processing component 500 into the target infrared image 80 matching the optical field angle of the augmented reality component. Thus, by presenting the target infrared image 80 through the optical waveguide lens 32, the augmented reality component 30 can reinforce a correct reproduction of the real scene in the field of view of the target object, thereby enhancing the visual recognizability of the real scene and helping to alleviate or even eliminate the difficulty of clearly perceiving the real scene. For a fire scene, for example, rescuers no longer need to rely entirely on accumulated field experience to judge the situation: they can accurately judge the position of the ignition point, the position and posture of the person to be rescued, and the positions and postures of teammates acting together with them.

Moreover, because the protective gear in this embodiment is wearable, both hands of the rescuers are freed, which makes it convenient for them to hold auxiliary tools such as fire extinguishers and fire hoses and improves fire-extinguishing and rescue efficiency.

In particular implementations, the image type of the target infrared image 80 may be a thermal image (i.e., a false-color thermal rendering) or a grayscale image. When the temperature difference between scene targets in a real scene (such as a disaster scene like a fire environment), or the temperature difference of the scene targets relative to the environment, is large, the thermal image can present the scene targets more clearly; when that temperature difference is small, the definition of the scene targets in the thermal image is lower than in the grayscale image. Therefore, in order to accommodate different situations, this embodiment allows switching of the image type of the target infrared image 80.

Fig. 4a and 4b are schematic diagrams of the preferred rendering mechanism shown in fig. 3, further incorporating a mode switching function. Referring to fig. 4a and 4b, the processing component 500 may be further configured to generate a switching instruction Ins_sw for the image presentation type of the target infrared image 80 and send the switching instruction Ins_sw to the augmented reality component 30, and the augmented reality component 30 may be further configured to present the target infrared image 80 according to the image presentation type corresponding to the switching instruction Ins_sw. That is, the switching instruction carries the image presentation type in which the target infrared image 80 is to be presented on the augmented reality component 30, where the image presentation type includes a thermal image type or a grayscale image type.

The switching instruction Ins_sw generated by the processing component 500 may be triggered according to the temperature information in the real scene detected by the infrared detection component 51, or may be triggered according to an external touch operation.

For example, in fig. 4a, the processing component 500 may be further configured to generate the switching instruction Ins_sw according to the temperature information Info_temp of the real scene detected by the infrared detection component 51. The temperature information Info_temp may be determined from the pixel values of the infrared image 60. The processing component 500 may further be configured to determine, according to the temperature information Info_temp of the scene targets in the real scene detected by the infrared detection component 51, the temperature difference between scene targets in the real scene or the temperature difference of the scene targets relative to the environment, and to generate the switching instruction Ins_sw according to the determined temperature difference; specifically, if the determined temperature difference is large, a switching instruction carrying thermal image type information is generated, and if the determined temperature difference is small, a switching instruction carrying grayscale image type information is generated.
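
A minimal sketch of such temperature-difference-based switching, assuming a radiometric detector whose pixel counts can be mapped to temperature; the conversion factor and the threshold below are placeholders:

```python
import numpy as np

SPREAD_THRESHOLD_C = 15.0   # placeholder for "large" temperature difference

def choose_presentation_type(infrared_frame: np.ndarray,
                             counts_to_celsius=lambda c: c * 0.04 - 273.15) -> str:
    """Return 'thermal' or 'grayscale' from the temperature spread in a frame.

    The counts-to-Celsius conversion is detector specific; the default here is
    only a placeholder for a radiometric sensor.
    """
    temps = counts_to_celsius(infrared_frame.astype(np.float64))
    # Use robust percentiles rather than min/max so that a few hot pixels do
    # not force the thermal mode on their own.
    spread = np.percentile(temps, 98) - np.percentile(temps, 2)
    return "thermal" if spread >= SPREAD_THRESHOLD_C else "grayscale"

# The switching instruction Ins_sw would then carry the returned type.
```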

Alternatively, in fig. 4b, the wearable gear may further include a mode switch 57. The mode switch 57 is configured to generate a switching request Req_sw for the image presentation type of the target infrared image 80 in response to an external touch operation (e.g., pressing, toggling, or rotating) and to transmit the switching request Req_sw to the processing component 500; the processing component 500 may be further configured to generate the switching instruction Ins_sw according to the switching request Req_sw. For example, the mode switch 57 may be a mechanical switch or a touch switch; the mode switch 57 may generate a switching request Req_sw having a specific level state in response to the external touch operation, and the processing component 500 may recognize, according to that level state, whether the requested image presentation type is the thermal image type or the grayscale image type. The position of the mode switch 57 is not particularly limited in the embodiment of the present application; it may, for example, be mounted on one side of the frame.

In addition, the ambient brightness in the real scene also affects the recognizability of the scene target in the target infrared image 80. To this end, the wearable gear in this embodiment may further adaptively adjust the backlight brightness of the augmented reality component 30 according to the ambient brightness, so that the target infrared image 80 is rendered based on the adjusted backlight brightness. The backlight brightness refers to the illumination brightness of the light source when the display driving module 33 (including the light engine) presents the target infrared image 80 onto the optical waveguide lens 32. If the ambient brightness is high, the backlight brightness may be raised so that the target infrared image 80 does not become hard to distinguish against the bright environment; conversely, if the ambient brightness is relatively low, the backlight brightness may be lowered to prevent an excessively bright backlight from irritating the vision of the target object.

Fig. 5 is a schematic diagram of the preferred rendering mechanism shown in fig. 3, further incorporating a backlight adjustment function. Referring to fig. 5, the wearable gear in this embodiment may further include a brightness detection component 55, which may include a brightness sensor and may be configured to detect the ambient brightness L_env in the real scene, especially within the field of view corresponding to the optical field angle FOV_ar of the optical waveguide lens 32. Accordingly, the processing component 500 may be further configured to generate a backlight brightness adjustment instruction Ins_adj according to the ambient brightness L_env and send the backlight brightness adjustment instruction Ins_adj to the augmented reality component 30, and the augmented reality component 30 may be further configured to adjust the backlight brightness according to the backlight brightness adjustment instruction Ins_adj and render the target infrared image 80 based on the adjusted backlight brightness, which may specifically be performed by the display driving module 33 including the light engine.

For example, the processing component 500 may determine, according to a preset correspondence between ambient brightness and backlight brightness, the target value to which the augmented reality component 30 should adjust the backlight brightness in response to the backlight brightness adjustment instruction Ins_adj.
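
As an illustrative sketch of one such preset correspondence (the sample points below are placeholders, not characterized values for any particular light engine):

```python
import numpy as np

# Hypothetical preset correspondence: ambient illuminance (lux) versus
# backlight level (percent of the light engine's maximum brightness).
AMBIENT_LUX = np.array([0.0, 10.0, 100.0, 1000.0, 10000.0])
BACKLIGHT_PCT = np.array([15.0, 25.0, 45.0, 75.0, 100.0])

def backlight_target(ambient_lux: float) -> float:
    """Map the detected ambient brightness L_env to a backlight target value
    (percent), interpolating between the preset sample points and clamping
    outside their range."""
    return float(np.interp(ambient_lux, AMBIENT_LUX, BACKLIGHT_PCT))

print(backlight_target(350.0))  # e.g. a dim corridor -> mid-range backlight
```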

Thus, the target object can readily see scene targets in the target infrared image 80 presented by the augmented reality component 30 under various ambient brightness levels, and at the same time the augmented reality component 30 is prevented from producing visual irritation to the target object at low ambient brightness.

The switching of the image type is an optimization performed from the viewpoint of the presentation form of the target infrared image 80, and the adaptive adjustment of the backlight luminance of the augmented reality component 30 is an optimization performed from the viewpoint of the presentation environment of the target infrared image 80, which may be performed independently of each other, or may be performed in combination. In addition, the embodiment can further optimize the image content of the target infrared image 80, so as to further improve the recognition degree of the scene target appearing in the target infrared image 80.

As an optimization mechanism for the image content of the target infrared image 80, the processing component 500 may be further configured to perform local enhancement processing on the contours of scene targets appearing in the target infrared image 80. The local enhancement processing may be performed after the target infrared image 80 is obtained, or it may be performed directly on the infrared image 60 once the infrared image 60 is obtained, with the target infrared image 80 having locally enhanced contours then being produced from the processed infrared image 60. The local enhancement processing may include enhancing the display of the contour with a distinct color, or enhancing the display of the contour with a thick line (for example, via an image stroking algorithm); the color type or line type used may be preset according to the requirements of contour presentation, or flexibly adjusted during use of the gear, and is not particularly limited in the embodiment of the present application.

Fig. 6 is a schematic diagram of the preferred rendering mechanism shown in fig. 3 further incorporating a local enhancement function. Referring to fig. 6, the processing component 500 may be further configured to perform an enhancement process on the outline of the scene object appearing in the target infrared image 80, so that the outline of the scene object is displayed in an enhanced manner in the target infrared image 80 presented by the augmented reality component 30 (i.e., the optical waveguide lens 32).

In fig. 6, a scene target 40 with a human body contour is merely taken as an example to show the enhanced display effect of a human body contour; in practical applications, the scene targets eligible for contour enhancement processing are not limited to human bodies, and may also include objects such as gas tanks and furniture, as well as building structures such as passage openings and walls. For example, by performing target edge detection on the infrared image 60 or the target infrared image 80, the contours of all scene targets appearing in the infrared image 60 or the target infrared image 80 can be obtained at once, rather than detection and contour recognition being limited to a certain target type. After the contours of the detected scene targets are obtained, contour enhancement processing may be performed on those scene targets whose contours belong to a preset contour type to be enhanced; naturally, enhancement processing may also be performed on all contours determined by the edge detection, as sketched below. The contour types to be enhanced may be preset according to the type of real scene; for example, for a disaster scene such as a fire or a smoke-filled environment, the contour types to be enhanced may include human body contours, road obstacle contours, vehicle contours, furniture or household appliance contours, building-structure-related contours, and the like.
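
A minimal sketch of such contour enhancement, assuming the target infrared image is available as an 8-bit single-channel array and using OpenCV edge detection and contour stroking; the Canny thresholds, minimum area, stroke color, and thickness are illustrative only:

```python
import cv2
import numpy as np

def enhance_contours(target_infrared_u8: np.ndarray,
                     min_area: float = 200.0,
                     color=(0, 255, 255),
                     thickness: int = 2) -> np.ndarray:
    """Stroke the contours of scene targets found by edge detection in an
    8-bit target infrared image and return a BGR image for display."""
    edges = cv2.Canny(target_infrared_u8, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only contours large enough to correspond to a scene target,
    # discarding small noise edges.
    kept = [c for c in contours if cv2.contourArea(c) >= min_area]
    out = cv2.cvtColor(target_infrared_u8, cv2.COLOR_GRAY2BGR)
    cv2.drawContours(out, kept, -1, color, thickness)
    return out
```

In a combined implementation, the `color` argument could follow the current image presentation type (for instance, a cool tone for the thermal image and a warm tone for the grayscale image), in line with the discussion further below.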

Through the contour enhancement processing, the target object can identify scene targets in the target infrared image 80 more quickly and accurately. For example, in disaster scenes such as a fire environment or a smoke-filled environment, the target object can accurately recognize the postures and actions of people appearing in the real scene, so as to identify actions that help improve rescue efficiency, such as body language used by a person to be rescued to ask for help, or body language used by rescue teammates to give prompts. In addition, the placement of objects in the real scene and the influence of the building structure on travel can be accurately identified, so that dangerous targets such as gas tanks can be avoided, or paths can be found accurately.

The contour enhancement of the target infrared image 80 may be performed independently of the switching of the image presentation type and the adaptive adjustment of the backlight brightness of the augmented reality component 30, or in combination with at least one of them. Where contour enhancement is combined with switching of the image type, the contour color used for the enhancement may change with the switching of the image presentation type, i.e., follow a color corresponding to the current image presentation type. For example, when the image presentation type of the target infrared image 80 is determined to be the thermal image, the contour color may be a first color (e.g., a cool-toned color) that contrasts with the dominant tone of the thermal image; when the image presentation type of the target infrared image 80 is determined to be the grayscale image, the contour color may be a second color (e.g., a warm-toned color with a highlighting effect) that contrasts with gray.

Fig. 7a and 7b are schematic diagrams of the preferred rendering mechanism shown in fig. 3 further incorporating a distance visualization function. Referring to fig. 7a and 7b, the wearable brace in this embodiment may further include a distance detection component 53, the distance detection component 53 may include, but is not limited to, a laser ranging detector or a radar ranging detector, and the distance detection component 53 is configured to detect a distance D40 of the scene target 40 in the real scene relative to the wearable brace.

The distance D40 detected by the distance detection component 53 may be used to calibrate the conversion relationship in the manner described above, and/or may be presented in association with the target infrared image 80. Accordingly, the processing component 500 may be further configured to associate the distance D40 detected by the distance detection component 53 with the scene target 40 and send the association processing result to the augmented reality component 30, and the augmented reality component 30 is further configured to present the distance in association with the scene target 40 based on the association processing result. The association processing result may include a presentation instruction for presenting the distance in association with the scene target 40, or the distance data itself corresponding to the scene target 40, or a target infrared image on which the distance information corresponding to the scene target 40 has been superimposed, and so on.

For example, in fig. 7a, the processing component 500 may be specifically configured to generate a distance information presentation instruction Ins_dis (which may carry the distance D40 detected by the distance detection component 53) based on the distance D40 detected by the distance detection component 53, and to send the distance information presentation instruction Ins_dis to the augmented reality component 30 as the association processing result; the augmented reality component 30 is further configured to present the scene target 40 in association with the distance D40 detected by the distance detection component 53 based on the distance information presentation instruction Ins_dis. For example, the display driving module 33 of the augmented reality component 30 may display the target infrared image 80 and the distance D40 detected by the distance detection component 53 on the optical waveguide lens 32 in an overlapping manner according to the distance information presentation instruction Ins_dis. Alternatively, the processing component 500 may also use distance data representing the distance D40 instead of the distance information presentation instruction Ins_dis.
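
As an illustrative sketch of superimposing the detected distance next to a scene target (assuming OpenCV and a bounding box already determined for the scene target; the text style and offset are placeholders):

```python
import cv2
import numpy as np

def overlay_distance(target_infrared_bgr: np.ndarray,
                     target_bbox: tuple,
                     distance_m: float) -> np.ndarray:
    """Superimpose the detected distance next to a scene target's bounding box
    (x, y, w, h) and return a copy of the image for presentation."""
    x, y, w, h = target_bbox
    label = f"{distance_m:.1f} m"
    out = target_infrared_bgr.copy()
    # Outline the target and place the distance label just above it, clamped
    # so the text stays inside the frame.
    cv2.rectangle(out, (x, y), (x + w, y + h), (0, 255, 255), 1)
    cv2.putText(out, label, (x, max(y - 8, 12)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 255), 1, cv2.LINE_AA)
    return out
```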

For example, in fig. 7b, the processing component 500 may be specifically configured to add the distance D40 detected by the distance detection component 53 into the target infrared image 80 and send the target infrared image 80 with the distance D40 added to the augmented reality component 30 as the association processing result. It will be appreciated that the contour enhancement effect shown in fig. 7b is included only to show more clearly that the distance D40 superimposed in the target infrared image 80 is associated with a scene target; it does not mean that the associated presentation of the distance must be implemented in combination with contour enhancement.

The distance D40 presented in association with the scene target 40 in the target infrared image 80 may take the form of a visible numeric graphic or of an icon containing numeric information, and its display position in the target infrared image 80 is not particularly limited. For example, so that the distances of different scene targets detected by the distance detection component 53 can be told apart when multiple scene targets are present in the target infrared image 80, the associated distance D40 may be located within the contour of its scene target 40, or at a position a specific distance away from that scene target 40, where the specific value of that distance can be reasonably preset according to display requirements. When multiple scene targets are present in a real scene (e.g., a disaster scene such as a fire environment), the distance detection component 53 may detect the distance of each scene target and distinguish the detected distances according to the position identification of each scene target, and the processing component 500 may likewise be capable of distinguishing the distances of different scene targets.

Based on the associated presentation of the distance detected by the distance detection component 53 with the scene target in the target infrared image 80, the target object's spatial perception of the scene target becomes more accurate, which improves rescue efficiency in a disaster scene; the target object can also be reminded to control its pace, reducing the risk of injury from accidental collisions with other moving objects.

In addition to reproducing real scenes (e.g., disaster scenes such as fire scene environments) at the augmented reality component 30, the wearable brace in this embodiment can also support trajectory tracking and trajectory navigation.

Fig. 8 is a schematic diagram of a position reporting mechanism of the wearable gear shown in fig. 1. Referring to fig. 8, in this embodiment the wearable gear may further include a position sensing component 73 and a wireless communication component 71, whose specific installation positions are not particularly limited. The position sensing component 73 may include a combination of various types of sensors such as accelerometers, gyroscopes, and magnetic sensors; for example, the position sensing component 73 may include a nine-axis sensing assembly consisting of a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetic sensor, which helps to improve the accuracy of the positioning calculation. The position sensing component 73 is used to sense spatial position information of the wearable gear (i.e., of the target object 90 wearing the gear), and, under the drive control of the processing component 500, the wireless communication component 71 is used to remotely transmit the spatial position information, so that the remote server 900 (e.g., a command center server) can maintain a movement track of the wearable gear (i.e., of the target object 90 wearing the gear) using the spatial position information.

The remote server 900 can obtain the real-time position of the wearable protector through real-time sampling and fusion operation of the spatial position information, and can obtain the movement track of the wearable protector (i.e. the target object 90 wearing the protector) through curve fitting to the real-time position, for example, the movement track of the target object 90 reaching the position of the person to be searched (i.e. as an example of the scene target 40) from the entrance of the fire scene environment is shown in fig. 8.

Moreover, the remote server 900 (e.g., a command center server) can generate a visual navigation instruction based on the currently maintained movement track, and can remotely transmit that visual navigation instruction for presentation on a subsequent wearable gear, where the subsequent wearable gear starts moving later than the current wearable gear (i.e., the wearable gear whose movement track has been maintained). The visual navigation instruction may include a navigation track generated based on the maintained movement track, or a navigation guidance identifier generated based on the maintained movement track and indicating, for example, forward, backward, left turn, or right turn.

For example, for some disaster scenes, such as a fire, target objects may need to enter wearing their own wearable gear individually. In this case, each wearable gear may be pre-assigned a corresponding device identifier, and the spatial position information that each wearable gear transmits to the remote server via the wireless communication component 71 may carry the device identifier of that gear, so that the remote server can distinguish the target objects wearing different wearable gear. When the rescue operation is carried out as a group action, in order that a target object entering the disaster scene later can find, as quickly as possible, a target object that entered the disaster scene earlier, or simply to improve rescue efficiency, the remote server 900 may maintain the movement track of the target object that entered the disaster scene earlier (i.e., the movement track of that wearable gear), generate a visual navigation instruction based on the maintained movement track, and remotely transmit the visual navigation instruction to the wearable gear worn by the target object entering the disaster scene later (i.e., the rear wearable gear), so that the augmented reality component of the rear wearable gear can present the visual navigation instruction and guide its wearer into the disaster scene along the movement track of the earlier target object. In this way, the disaster scene can be entered with the highest efficiency, or the target object that entered earlier can be found efficiently, so that the cooperative rescue work is completed in the shortest time and a target object operating alone is not lost in the disaster scene. Therefore, by acquiring the spatial position information of target objects in the disaster scene and the movement tracks determined from it, the remote server 900 can effectively coordinate the overall rescue work.
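
A minimal sketch of the kind of position report such a gear might transmit, with the device identifier attached; the field names and encoding are hypothetical, not a defined protocol of this embodiment:

```python
import json
import time

def build_position_report(device_id: str,
                          accel_xyz: tuple,
                          gyro_xyz: tuple,
                          mag_xyz: tuple) -> bytes:
    """Package the nine-axis samples together with the device identifier of
    this wearable gear, so the remote server can attribute the report to the
    correct target object."""
    report = {
        "device_id": device_id,
        "timestamp": time.time(),
        "accel": accel_xyz,   # m/s^2, 3-axis accelerometer
        "gyro": gyro_xyz,     # rad/s, 3-axis gyroscope
        "mag": mag_xyz,       # uT,    3-axis magnetic sensor
    }
    return json.dumps(report).encode("utf-8")

# The wireless communication component would transmit the returned bytes to
# the remote server, which fuses successive reports into a movement track.
payload = build_position_report("gear-07", (0.1, 0.0, 9.8), (0.0, 0.0, 0.02),
                                (21.0, -3.5, 44.2))
print(len(payload), "bytes")
```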

The augmented reality component 30 of the wearable gear can further be used to present a visual navigation instruction that indicates the direction of travel. If the current wearable gear serves as the rear wearable gear, the visual navigation instruction presented by its augmented reality component 30 can be generated based on the movement track of a target object that entered the disaster scene earlier. It should be understood that the visual navigation instruction presented by the augmented reality component 30 of the wearable gear can also be generated based on a map of the real scene, or in other ways (e.g., drone-based aerial navigation).

Fig. 9 is a schematic diagram of a trajectory navigation mechanism of the wearable brace shown in fig. 1. Referring to fig. 9, in this embodiment, the wireless communication component 71 may be further configured to obtain a visual navigation indication 89 through remote transmission, where the visual navigation indication 89 may be determined according to a movement track maintained by the remote server and spatial position information of the wearable brace (i.e. the target object 90 wearing the brace), and accordingly, the augmented reality component 30 may be further configured to present a visual navigation indication 89, such as a navigation guide identifier representing a current traveling direction, or a current navigation track, etc.

The visual navigation indication 89 can be provided by the remote server 900, and by monitoring the spatial position information of the wearable protector (i.e. the target object 90 wearing the protector) in real time, the visual navigation indication 89 suitable for each key inflection point can be provided when the wearable protector (i.e. the target object 90 wearing the protector) is at the key inflection point in the navigation track. In fig. 9, the visual navigation indication 89 is used to guide the target object 90 and the searched person (as an example of the scene target 40) to retreat and escape along the moving track as shown in fig. 8. In actual practice, however, the visual navigational directions 89 may be used to direct any path of travel.

Regarding the presentation of the visual navigation indication 89, it may be presented non-simultaneously with the target infrared image 80: for example, the visual navigation indication 89 may interrupt the presentation of the target infrared image 80 on the augmented reality component 30 and be presented alone for a preset time, whose value can be reasonably set in advance. Alternatively, the visual navigation indication 89 may be presented superimposed on the target infrared image 80 on the augmented reality component 30, i.e., the augmented reality component 30 presents the visual navigation indication 89 and the target infrared image 80 at the same time.

Therefore, by further introducing the track navigation mechanism, overall coordination of the rescue work is facilitated and rescue efficiency is improved. The wearable gear can also be provided with a voice communication component; voice interaction with the remote server 900 through the wireless communication component 71 helps command the target object to carry out rescue accurately and to notify the target object to evacuate in time when danger may arise. For example, through the real-time spatial position information and the movement track determined from it, a commander can grasp the current position and movement trend of the target object in real time, and, based on auxiliary information additionally obtained outside the disaster environment through other channels, can guide the target object in the disaster environment to carry out rescue work accurately by voice, instead of commanding blindly.

In addition, the wearable gear in this embodiment may also implement a respiratory protection mechanism, so as to provide an alert when the ambient gas is unfavorable to, or even prevents, the breathing of the target object.

Fig. 10 is a schematic diagram of the respiratory protection mechanism of the wearable gear shown in fig. 1. Referring to fig. 10, the wearable gear in this embodiment may further include a gas detection component 75, which may be configured to detect gas composition parameters in the environment where the wearable gear is located. The processing component 500 may be further configured to generate breathing alert information 87 according to the gas composition parameters detected by the gas detection component 75 and transmit the generated breathing alert information 87 to the augmented reality component 30, and the augmented reality component 30 is further configured to present the breathing alert information 87. The breathing alert information 87 at least includes information content indicating that hazardous gas exists in the environment of the wearable gear, for example that the carbon monoxide content has reached a certain value, that the oxygen content is too low, or that the gas contains flammable and explosive components; the breathing alert information 87 may further include a warning symbol such as an exclamation mark.

For example, the processing component 500 may monitor the gas composition parameters detected by the gas detection component 75 and generate the breathing alert information 87 in response to detecting a gas composition parameter that reaches a preset hazardous level (e.g., an excessive concentration of toxic gas or carbon dioxide). The breathing alert information 87 may interrupt (e.g., for a preset duration) the presentation of the target infrared image 80 on the augmented reality component 30, or it may be presented superimposed on the target infrared image 80 on the augmented reality component 30.
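
A minimal sketch of such a hazard check (the gas names and threshold values are placeholders; actual limits depend on applicable safety regulations and on which gases the gas detection component measures):

```python
from typing import Optional

# Hypothetical hazard thresholds.
CO_MAX_PPM = 35.0
CO2_MAX_PPM = 5000.0
O2_MIN_PERCENT = 19.5

def breathing_alert(gas: dict) -> Optional[str]:
    """Return the text of the breathing alert information if any detected gas
    composition parameter reaches a hazardous level, otherwise None."""
    problems = []
    if gas.get("co_ppm", 0.0) > CO_MAX_PPM:
        problems.append("carbon monoxide too high")
    if gas.get("co2_ppm", 0.0) > CO2_MAX_PPM:
        problems.append("carbon dioxide too high")
    if gas.get("o2_percent", 20.9) < O2_MIN_PERCENT:
        problems.append("oxygen too low")
    if not problems:
        return None
    return "! HAZARDOUS ATMOSPHERE: " + ", ".join(problems)

print(breathing_alert({"co_ppm": 120.0, "o2_percent": 17.0}))
```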

The processing component 500 can also remotely transmit the hazardous gas composition parameters to the remote server through the wireless communication component 71, so that the remote server can make the next decisions and instructions for the rescue work and avoid a greater disaster caused by wrong instructions issued without knowledge of the on-site environment.

Fig. 11 is a perspective view of an example configuration of the wearable brace shown in fig. 1. Fig. 12 is an exploded view of the example structure shown in fig. 11. To more intuitively understand the physical form of the wearable gear, an example structure is provided in fig. 11 and 12, it being understood that the physical form of the wearable gear is not limited to this example structure.

Referring to fig. 11 in conjunction with fig. 12, an example structure of the wearable gear may include a mask 10, the mask 10 having a respiratory protection component 20. Preferably, the respiratory protection component 20 may include a breathing valve with an air path switch, and the air path switch may selectively connect the breathing valve either to the atmosphere of the environment where the wearable gear is located or to a gas storage cylinder; for example, the air path switch may connect the breathing valve to the atmosphere in its closed state and to the gas storage cylinder in its open state, or the reverse. In this case, the breathing alert information 87 shown in fig. 10 may further include a visual prompt for connecting the breathing valve to the gas storage cylinder, so as to prompt the target object to switch the breathing valve manually. The gas storage cylinder can be carried by the target object, and the selective connection of the air path switch can be triggered automatically or manually.

Alternatively, the switching of the breathing valve may also be triggered automatically by the processing component 500. That is, the processing component 500 may be further configured to generate a breathing switching instruction according to the gas composition parameters detected by the gas detection component 75 and send the breathing switching instruction to the breathing valve, and the breathing valve is further configured to determine its connection state according to the breathing switching instruction, the connection state of the breathing valve being either connected to the atmosphere of the environment where the wearable gear is located or connected to the gas storage cylinder. Thus, when the composition of the ambient gas threatens the health or even the life of the target object, the health and safety of the target object can be safeguarded by switching what the breathing valve is connected to.

The processing component 500 is further configured to generate gas remaining amount prompt information according to the remaining amount of gas (for example, the stored oxygen) in the gas storage cylinder and send it to the augmented reality component 30, and the augmented reality component is further configured to present the gas remaining amount prompt information. For example, the processing component 500 may determine the current remaining amount of gas in the cylinder from the current internal pressure of the cylinder acquired by a gas pressure monitoring component and generate the gas remaining amount prompt information; if the current remaining amount is below a preset storage threshold, the processing component 500 may further generate warning prompt information, which may be presented on the augmented reality component 30 or fed back to the target object as sound, so that the target object exits the disaster scene in time.
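
As an illustrative sketch of estimating the remaining gas from the cylinder's internal pressure (the full, residual, and warning values are placeholders, not specifications of any particular cylinder):

```python
def remaining_gas_fraction(current_pressure_bar: float,
                           full_pressure_bar: float = 300.0,
                           residual_pressure_bar: float = 10.0) -> float:
    """Estimate the usable fraction of gas remaining in the storage cylinder
    from its current internal pressure (a rough ideal-gas approximation)."""
    usable = max(current_pressure_bar - residual_pressure_bar, 0.0)
    full_usable = full_pressure_bar - residual_pressure_bar
    return min(usable / full_usable, 1.0)

LOW_GAS_THRESHOLD = 0.25   # warn when less than a quarter of the gas remains

def gas_prompt(current_pressure_bar: float) -> str:
    """Build the gas remaining amount prompt, appending a warning below the
    preset storage threshold."""
    frac = remaining_gas_fraction(current_pressure_bar)
    prompt = f"Gas remaining: {frac:.0%}"
    if frac < LOW_GAS_THRESHOLD:
        prompt += " - LOW, exit the scene"
    return prompt

print(gas_prompt(62.0))   # e.g. "Gas remaining: 18% - LOW, exit the scene"
```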

The mask 10 has a light-transmissive region at least above the respiratory protection component 20, and the augmented reality component 30 may be mounted inside the mask 10 at a position covering the light-transmissive region. The example structure of the wearable gear may include a first hanging box 50 mounted on the outside of a first side of the mask 10, where the infrared detection component 51, the distance detection component 53, the brightness detection component 55, and the mode switch 57 may be mounted on the front of the first hanging box 50. The example structure may also include a second hanging box 70 mounted on the outside of a second side of the mask 10 opposite the first side, where the wireless communication component 71 is mounted on top of the second hanging box 70, the position sensing component 73 and the gas detection component 75 may be housed inside the second hanging box 70, and the second hanging box 70 has a ventilation gap 750 on its front side, the ventilation gap 750 being arranged adjacent to the respiratory protection component 20 and communicating with the sensing end of the gas detection component 75.

The processing component 500 may be housed inside the first hanging box 50 and electrically connected to the infrared detection component 51, the distance detection component 53, the brightness detection component 55, and the mode switch 57; in that case, the processing component 500 housed in the first hanging box 50 may also communicate across the two boxes with the wireless communication component 71, the position sensing component 73, and the gas detection component 75 mounted in the second hanging box 70. Alternatively, the processing component 500 may be housed inside the second hanging box 70 and electrically connected to the wireless communication component 71, the position sensing component 73, and the gas detection component 75 mounted in the second hanging box 70; in that case, the processing component housed in the second hanging box 70 may communicate across the two boxes with the infrared detection component 51, the distance detection component 53, the brightness detection component 55, and the mode switch 57 mounted in the first hanging box 50.

As is clear from fig. 12, the augmented reality component 30, the infrared detection component 51, the distance detection component 53, the brightness detection component 55, the mode switch 57, the wireless communication component 71, the position sensing component 73, and the gas detection component 75 may be separate from the mask 10 that carries the respiratory protection component 20; therefore, the wearable gear in this embodiment may take the form of an accessory that does not include the mask 10 but can be detachably attached to the mask 10.

Fig. 13 is an exemplary flow diagram of a scene presentation method for a wearable brace in another embodiment. Referring to fig. 13, an exemplary flow of the scene presenting method may be adapted to be executed by a processing component of the wearable supporter in the foregoing embodiment, and reference may be made to the explanation in the foregoing embodiment for what is not explained in detail in the following embodiment. The scene presenting method may include:

S1310: acquiring an infrared image of the real scene output by the infrared detection assembly, wherein the visual field corresponding to the imaging field angle of the infrared detection assembly and the visual field corresponding to the optical field angle of the augmented reality assembly overlap in the same direction, the augmented reality assembly is used for the target object to observe the real scene, and the optical field angle is the field angle at which the target object observes the real scene through the augmented reality assembly.

S1330: and processing the infrared image based on the conversion between the imaging field angle of the infrared detection assembly and the optical field angle of the augmented reality assembly to obtain a target infrared image matched with the optical field angle of the augmented reality assembly.

In an alternative embodiment, the step may specifically include: and processing the infrared image based on a conversion relation between the imaging field angle of the infrared detection assembly and the optical field angle of the augmented reality assembly to obtain a target infrared image matched with the optical field angle of the augmented reality assembly, wherein the conversion relation is at least used for defining the infrared image cutting size corresponding to the difference between the imaging field angle of the infrared detection assembly and the optical field angle of the augmented reality assembly.

In another alternative embodiment, the step may specifically include: determining a calibration reference object in a real scene; calibrating a predetermined calibration conversion relation between an imaging field angle of the infrared detection assembly and an optical field angle of the augmented reality assembly based on a difference value between the determined distance of the calibration reference object relative to the wearable protective equipment and a preset calibration distance of the wearable protective equipment to obtain a calibration conversion relation; and processing the infrared image based on the calibration conversion relation to obtain a target infrared image matched with the optical field angle of the augmented reality component.

In either way, the conversion between the imaging field angle of the infrared detection component and the optical field angle of the augmented reality component can be realized by cropping the infrared image. Specifically, the edge portions of the infrared image may be cropped so that the target infrared image has an overall image size that matches the optical field of view of the augmented reality component, and the scene objects in the target infrared image are not improperly distorted or displaced due to image compression.
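
A minimal sketch of how the cropping size along one image axis could be derived from the two field angles, assuming an ideal pinhole projection with parallel, aligned optical axes (i.e., without the deployment-deviation compensation discussed earlier); the numbers are hypothetical:

```python
import math

def cropped_size(image_px: int, fov_inf_deg: float, fov_ar_deg: float) -> int:
    """Number of pixels (along one image axis) that correspond to the optical
    field angle FOV_ar inside an infrared image spanning FOV_inf, assuming an
    ideal pinhole projection and aligned, parallel optical axes."""
    ratio = (math.tan(math.radians(fov_ar_deg) / 2)
             / math.tan(math.radians(fov_inf_deg) / 2))
    return int(round(image_px * ratio))

# Hypothetical numbers: a 640-pixel-wide infrared image with a 56 deg imaging
# field angle, matched to a 40 deg optical field angle of the waveguide lens.
kept = cropped_size(640, 56.0, 40.0)
margin = (640 - kept) // 2        # symmetric edge crop when axes are aligned
print(kept, margin)
```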

S1350: and sending the target infrared image to an augmented reality assembly so that the augmented reality assembly presents the target infrared image, wherein the augmented reality assembly is also used for enabling a scene target existing in the target infrared image to correspond to the pose of the scene target in the real scene.

Based on the above flow, if the wearable gear has an augmented reality component for the target object to observe the real scene and an infrared detection component for outputting an infrared image of the real scene, the infrared image output by the infrared detection component by sensing the temperature of scene targets in the real scene can be converted into a target infrared image matching the optical field angle of the augmented reality component. By presenting the converted target infrared image, the augmented reality component can reinforce a correct reproduction of the real scene in the field of view of the target object, thereby enhancing the visual recognizability of the real scene (for example, a disaster scene such as a fire) and helping to alleviate or even eliminate the difficulty of clearly perceiving the real scene.

During the execution of the flow shown in fig. 13, in order to ensure that the target infrared image can be presented as clearly as possible under different degrees of environmental temperature difference, the image type of the target infrared image may be switchably determined as a thermal image or a grayscale image according to that temperature difference (or according to an external switching request). Specifically, the scene presenting method in this embodiment may further include: generating a switching instruction for the image presentation type of the target infrared image and sending the switching instruction to the augmented reality assembly, wherein the augmented reality assembly is further used for presenting the target infrared image according to the image presentation type corresponding to the switching instruction, and the image presentation type includes a thermal image type or a grayscale image type.

The manner of generating the switching instruction of the image presentation type of the target infrared image may specifically include:

generating a switching instruction according to the temperature information of the real scene, for example, determining a temperature difference between scene targets in the real scene according to the temperature information of the scene targets in the real scene, and generating a switching instruction according to the temperature difference between the scene targets, and the temperature information of the real scene may be acquired from the infrared image;

or generating a switching instruction according to the switching request of the image presentation type of the target infrared image; the switching request is generated by the mode switch in response to an external touch operation.

During the execution of the flow shown in fig. 13, in order to prevent the presented target infrared image from being hard to distinguish because the environment is too bright, or from visually irritating the target object because the environment is too dark, the backlight brightness of the augmented reality assembly may be determined according to the detected ambient brightness of the real scene (especially within the field of view corresponding to the optical field angle of the augmented reality assembly). Specifically, the scene presenting method in this embodiment may further include: obtaining the ambient brightness in the real scene (especially within the field of view corresponding to the optical field angle of the augmented reality assembly), generating a backlight brightness adjustment instruction according to the obtained ambient brightness, and sending the backlight brightness adjustment instruction to the augmented reality assembly, wherein the augmented reality assembly is further used for adjusting the backlight brightness according to the backlight brightness adjustment instruction, so as to present the target infrared image based on the adjusted backlight brightness. For example, the ambient brightness in the real scene may be obtained from a brightness detection component further included in the wearable gear.

In order to highlight the scene target in the target infrared image, the scene presenting method in this embodiment may further include: and performing enhancement processing on the outline of the scene target appearing in the target infrared image, so that the outline of the scene target is enhanced and displayed in the target infrared image presented by the augmented reality component.

If the distance of the scene target relative to the wearable protector can also be detected in real time during the execution of the operation shown in fig. 13, the scene presenting method in this embodiment may further include: the method comprises the steps of obtaining the distance between a scene target detected in a real scene and a wearable protective tool, carrying out association processing on the detected distance and the scene target, and sending an association processing result to an augmented reality assembly, wherein the augmented reality assembly can be further used for carrying out association presentation on the detected distance in the real scene and the scene target based on the association processing result. For example, the distance of the scene object relative to the wearable gear may be obtained from a distance detection component further comprised by the wearable gear.

For example, a distance information presentation instruction or distance data representing the distance is generated based on the acquired distance, and the distance information presentation instruction or the distance data is sent to the augmented reality component as a correlation processing result, so that the augmented reality component further performs correlation presentation of the scene target and the distance based on the distance information presentation instruction or the distance data; or, the acquired distance is added to the target infrared image, and the target infrared image to which the distance is added is sent to the augmented reality component as a result of the association processing when S1350 is executed.

Optionally, the scene presenting method in this embodiment may also implement real-time reporting of the position of the wearable supporter. Specifically, the scene presenting method in this embodiment may further include: the method comprises the steps of acquiring spatial position information of the wearable protective tool, and remotely transmitting the acquired spatial position information (for example, remotely transmitting the acquired spatial position information through a wireless communication component further included in the wearable protective tool) so that a remote server can maintain a movement track of the wearable protective tool by using the spatial position information, wherein the remote server is also used for presenting a visual navigation instruction generated based on the movement track in a rear wearable protective tool, and the starting movement time of the rear wearable protective tool is later than that of the wearable protective tool.

Optionally, the scene presenting method in this embodiment may further implement a visual navigation indication. Specifically, the scene presenting method in this embodiment may further include: presenting a visual navigation indication that directs the direction of travel. The visual navigation indication may be generated based on a movement track maintained by a remote server, based on a map of the disaster scene, or in other ways, and may be acquired in real time by monitoring the remote transmission. The visual navigation indication may interrupt (for example, for a preset time) the presentation of the target infrared image on the augmented reality component, or may be presented superimposed on the target infrared image on the augmented reality component.
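The two presentation options can be sketched as follows; the arrow geometry, the label text, and the navigation-only frame are assumptions for illustration.

```python
import cv2
import numpy as np

def overlay_navigation_arrow(frame_bgr: np.ndarray, heading_text: str = "FORWARD") -> np.ndarray:
    """Option 1: superimpose a simple travel-direction arrow and label on the presented frame."""
    h, w = frame_bgr.shape[:2]
    cv2.arrowedLine(frame_bgr, (w // 2, h - 30), (w // 2, h - 90), (0, 255, 255), 3)
    cv2.putText(frame_bgr, heading_text, (w // 2 - 45, h - 100),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 255), 2)
    return frame_bgr

def present_navigation_only(frame_shape: tuple, heading_text: str = "FORWARD") -> np.ndarray:
    """Option 2: replace (interrupt) the target infrared image with a navigation-only frame."""
    blank = np.zeros((frame_shape[0], frame_shape[1], 3), dtype=np.uint8)
    return overlay_navigation_arrow(blank, heading_text)

if __name__ == "__main__":
    ir_frame = np.zeros((240, 320, 3), dtype=np.uint8)
    print(overlay_navigation_arrow(ir_frame).shape, present_navigation_only((240, 320)).shape)
```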

In addition, during execution of the flow shown in fig. 13, the scene presenting method in this embodiment may further include: generating a breathing switch instruction according to gas composition parameters of the environment in which the wearable protective gear is located, and sending the breathing switch instruction to a breathing valve, where the breathing valve is configured to determine its communication state according to the breathing switch instruction, the communication state being either communication with the atmosphere of the environment in which the wearable protective gear is located or communication with a gas storage cylinder. If the wearable protective gear further includes a gas detection component for detecting the gas composition parameters of the environment in which the wearable protective gear is located, the gas composition parameters can be obtained from the gas detection component. On this basis, the scene presenting method in this embodiment may further include: generating breathing warning information according to the gas composition parameters and sending the breathing warning information to the augmented reality component, where the augmented reality component is configured to present the breathing warning information, and the breathing warning information includes content indicating that a hazardous gas is present in the environment in which the wearable protective gear is located. Alternatively, the scene presenting method in this embodiment may further include: generating gas-remaining prompt information according to the remaining gas in the gas storage cylinder and sending the gas-remaining prompt information to the augmented reality component, where the augmented reality component is configured to present the gas-remaining prompt information.
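A minimal sketch of how the gas composition parameters could drive the breathing switch instruction, the breathing warning, and the gas-remaining prompt is given below; the gas names, the danger thresholds, and the message formats are illustrative assumptions, not values specified by the embodiment.

```python
ATMOSPHERE = "atmosphere"
GAS_CYLINDER = "gas_cylinder"

# Assumed danger thresholds (fraction by volume) for a few hazardous gases.
DANGER_THRESHOLDS = {"CO": 0.0001, "H2S": 0.00001}
O2_MIN = 0.195  # assumed minimum safe oxygen fraction

def evaluate_environment(gas_params: dict) -> dict:
    """Build the breathing switch instruction and, if needed, a breathing warning."""
    hazardous = [g for g, limit in DANGER_THRESHOLDS.items() if gas_params.get(g, 0.0) > limit]
    low_oxygen = gas_params.get("O2", 0.21) < O2_MIN
    unsafe = bool(hazardous) or low_oxygen
    result = {"breathing_switch": GAS_CYLINDER if unsafe else ATMOSPHERE}
    if unsafe:
        reason = ", ".join(hazardous) if hazardous else "low oxygen"
        result["breathing_warning"] = f"Hazardous atmosphere detected: {reason}"
    return result

def gas_remaining_prompt(remaining_liters: float) -> str:
    """Build the gas-remaining prompt information for the AR component."""
    return f"Cylinder gas remaining: {remaining_liters:.0f} L"

if __name__ == "__main__":
    print(evaluate_environment({"O2": 0.18, "CO": 0.0005}))
    print(gas_remaining_prompt(420.0))
```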

The technical solutions provided by this method embodiment and by the product embodiment belong to the same inventive concept; for contents not described in detail in the method embodiment, reference may be made to the explanations in the product embodiment, and the corresponding beneficial effects described in the product embodiment can likewise be achieved.

In another embodiment, a non-transitory computer-readable storage medium is also provided, storing instructions that cause a processor (or processing component) to perform any of the scene presenting methods applicable to the wearable protective gear provided in the above embodiments; for a description of these methods, reference may be made to the above embodiments. The non-transitory computer-readable storage medium may include a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.

The above description is merely exemplary of the present application and is not intended to limit the present application. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within the protection scope of the present application.
