Body temperature measurement method and image capturing apparatus
1. A method of body temperature measurement, the method comprising:
determining, in a thermal infrared image of a target living body, a first face region where a face of the target living body is located;
acquiring skin surface temperatures corresponding to a plurality of pixel points of the first face region;
and acquiring skin surface temperatures of a plurality of target regions in the first face region according to the skin surface temperatures corresponding to the plurality of pixel points, and acquiring a body temperature of the target living body according to the skin surface temperatures of the plurality of target regions.
2. The method according to claim 1, wherein the acquiring skin surface temperatures of a plurality of target regions in the first face region according to the skin surface temperatures corresponding to the plurality of pixel points, and the acquiring a body temperature of the target living body according to the skin surface temperatures of the plurality of target regions comprise:
determining, according to the skin surface temperatures corresponding to the plurality of pixel points, M skin surface temperatures corresponding to M feature points of the plurality of target regions of the first face region, wherein M is an integer greater than 1;
performing feature compression on the M skin surface temperatures corresponding to the M feature points to obtain P feature values corresponding to P feature points, wherein P is a positive integer less than M;
and determining the body temperature of the target living body through a target regression relationship according to the P feature values, wherein the independent variables of the target regression relationship are the P feature values, and the dependent variable of the target regression relationship is the body temperature of the target living body;
wherein the target regression relationship is determined according to a face type of the target living body in the first face region, the face type being determined according to the M skin surface temperatures corresponding to the M feature points.
3. The method according to claim 2, wherein the determining, according to the skin surface temperatures corresponding to the plurality of pixel points, M skin surface temperatures corresponding to M feature points of the plurality of target regions of the first face region comprises:
extracting the M feature points from the plurality of target regions of the first face region;
selecting, from the skin surface temperatures corresponding to the plurality of pixel points, skin surface temperatures corresponding to M pixel points corresponding to the M feature points, and taking the skin surface temperatures corresponding to the M pixel points as the M skin surface temperatures corresponding to the M feature points; or,
for each of the M feature points, determining, among the plurality of pixel points, a feature region where the feature point is located, determining an average of the skin surface temperatures of the pixel points in the feature region, and taking the average as the skin surface temperature of the feature point.
4. The method according to claim 1, wherein the acquiring skin surface temperatures corresponding to a plurality of pixel points of the first face region comprises:
extracting M feature points from the plurality of target regions in the first face region, wherein M is an integer greater than 1;
and determining M skin surface temperatures corresponding to the M feature points, and taking the M skin surface temperatures corresponding to the M feature points as the skin surface temperatures corresponding to the plurality of pixel points.
5. The method according to claim 1, wherein the acquiring skin surface temperatures of a plurality of target regions in the first face region according to the skin surface temperatures corresponding to the plurality of pixel points, and the acquiring a body temperature of the target living body according to the skin surface temperatures of the plurality of target regions comprise:
generating a skin surface temperature map corresponding to the first face region according to the skin surface temperatures corresponding to the plurality of pixel points;
and inputting the skin surface temperature map into a body temperature neural network model of the target living body to obtain the body temperature of the target living body, wherein the body temperature neural network model is configured to acquire the skin surface temperatures of the plurality of target regions in the first face region according to the skin surface temperature map and to acquire the body temperature of the target living body according to the skin surface temperatures of the plurality of target regions.
6. The method according to claim 1, wherein the determining, in the thermal infrared image of the target living body, the first face region where the face of the target living body is located comprises:
inputting the thermal infrared image of the target living body into a face detection model of the target living body to obtain the first face region where the face of the target living body is located; or,
acquiring a visible light image of the target living body, determining a second face region where the face of the target living body is located in the visible light image, and determining, in the thermal infrared image according to position information of the second face region in the visible light image, the first face region where the face of the target living body is located, wherein the visible light image is captured by an image capturing apparatus when the thermal infrared image is captured.
7. The method according to claim 6, wherein the determining, in the thermal infrared image according to the position information of the second face region in the visible light image, the first face region where the face of the target living body is located comprises:
the image capturing apparatus comprises a camera, the visible light image and the thermal infrared image are captured by the camera using a light splitting technique, a first region corresponding to the position information is determined in the thermal infrared image according to the position information of the second face region in the visible light image, and the first region is taken as the first face region where the face of the target living body is located; or,
the image capturing apparatus comprises a first camera and a second camera, the visible light image is captured by the first camera, the thermal infrared image is captured by the second camera, a second region corresponding to the position information is determined in the thermal infrared image according to the position information of the second face region in the visible light image, and coordinate registration is performed on the second region according to registration information of the image capturing apparatus to obtain the first face region.
8. The method according to claim 1, wherein before the acquiring skin surface temperatures corresponding to a plurality of pixel points of the first face region, the method further comprises:
determining an occlusion region in the first face region, the occlusion region being a region in which the face of the target living body is occluded by an object;
and removing the occlusion region from the first face region.
9. The method according to claim 1, wherein before the acquiring skin surface temperatures corresponding to a plurality of pixel points of the first face region, the method further comprises:
acquiring pose information of a face pose of the target living body in the first face region;
determining, according to the pose information, that an inclination angle of the face pose relative to a target pose is greater than a preset threshold;
and performing pose correction on the face in the first face region according to the pose information.
10. An image capturing apparatus, characterized in that the image capturing apparatus comprises a processor and a memory, wherein the memory has stored therein at least one instruction, which is loaded and executed by the processor to implement the operations performed by the body temperature measurement method according to any one of claims 1 to 9.
Background
At present, in many crowded public places, such as airports, subways, and railway stations, the body temperature of users needs to be measured so that persons with fever can be screened out of the crowd, which facilitates subsequent further examination of those persons and the taking of corresponding measures.
In the related art, the forehead temperature of a user is generally measured using an infrared forehead thermometer gun, and this temperature is taken as the body temperature of the user.
The related art has a problem in that, because the reading of the forehead thermometer gun is greatly affected by where the operator aims it, the measured body temperature has a large error and low accuracy compared with the real body temperature of the user.
Disclosure of Invention
The embodiments of the present disclosure provide a body temperature measurement method and an image capturing apparatus, which can reduce the error of body temperature measurement and improve its accuracy. The technical solutions are as follows:
in a first aspect, a method of body temperature measurement is provided, the method comprising:
determining a first face area where the face of a target living body is located in a thermal infrared image of the target living body;
acquiring skin surface temperatures corresponding to a plurality of pixel points of the first face region;
and acquiring the skin surface temperatures of a plurality of target areas in the first face area according to the skin surface temperatures corresponding to the plurality of pixel points, and acquiring the body temperature of the target living body according to the skin surface temperatures of the plurality of target areas.
In a possible implementation manner, the acquiring skin surface temperatures of a plurality of target areas in the first face area according to the skin surface temperatures corresponding to the plurality of pixel points, and acquiring the body temperature of the living target according to the skin surface temperatures of the plurality of target areas includes:
determining M skin surface temperatures corresponding to M feature points of the multiple target areas of the first face area according to the skin surface temperatures corresponding to the multiple pixel points, wherein M is an integer greater than 1;
performing feature compression on the M skin surface temperatures corresponding to the M feature points to obtain P feature values corresponding to P feature points, wherein P is a positive integer less than M;
determining the body temperature of the target living body through a target regression relationship according to the P feature values, wherein the independent variables of the target regression relationship are the P feature values, and the dependent variable is the body temperature of the target living body; wherein the target regression relationship is determined according to a face type of the target living body in the first face region, the face type being determined according to the M skin surface temperatures corresponding to the M feature points.
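The following is a minimal illustrative sketch in Python (not the disclosed implementation) of this flow: the M feature-point temperatures are compressed into P feature values through a projection matrix (a PCA-style compression is one possibility), and the compressed values are fed into a linear target regression relationship. The projection matrix and the regression coefficients are assumed to have been fitted offline on calibration data; all names and numbers below are placeholders.

```python
# Sketch only: feature compression (M temperatures -> P feature values)
# followed by a linear regression whose output is the body temperature.
import numpy as np

def estimate_body_temp(feature_temps, projection, coef, intercept):
    """feature_temps: shape (M,) skin surface temperatures at M feature points.
    projection: shape (M, P) matrix compressing M temperatures to P feature values.
    coef: shape (P,) regression coefficients; intercept: scalar offset."""
    p_values = feature_temps @ projection       # feature compression: M -> P values
    return float(p_values @ coef + intercept)   # target regression relationship

# Example with made-up numbers: M = 6 feature points, P = 2 feature values.
temps = np.array([36.1, 36.4, 35.8, 36.0, 36.3, 35.9])
proj = np.random.default_rng(0).normal(size=(6, 2))   # placeholder projection matrix
print(estimate_body_temp(temps, proj, coef=np.array([0.1, -0.05]), intercept=36.5))
```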
In another possible implementation manner, the determining, according to the skin surface temperatures corresponding to the plurality of pixel points, M skin surface temperatures corresponding to M feature points of the plurality of target regions of the first face region includes:
extracting M feature points from the plurality of target regions of the first face region;
selecting, from the skin surface temperatures corresponding to the plurality of pixel points, skin surface temperatures corresponding to M pixel points corresponding to the M feature points, and taking the skin surface temperatures corresponding to the M pixel points as the M skin surface temperatures corresponding to the M feature points; or,
for each of the M feature points, determining, among the plurality of pixel points, a feature region where the feature point is located, determining an average of the skin surface temperatures of the pixel points in the feature region, and taking the average as the skin surface temperature of the feature point.
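An illustrative Python sketch of the second alternative follows: each feature point's skin surface temperature is taken as the average over a small feature region around it. The window radius and the input format (a per-pixel temperature map plus feature-point coordinates) are assumptions for the example.

```python
# Sketch only: average the per-pixel skin surface temperatures in a small
# (2r+1) x (2r+1) window around each feature point.
import numpy as np

def feature_point_temperatures(temp_map, points, r=2):
    """temp_map: 2-D array of per-pixel skin surface temperatures.
    points: iterable of (row, col) feature-point coordinates.
    Returns one averaged skin surface temperature per feature point."""
    h, w = temp_map.shape
    temps = []
    for row, col in points:
        window = temp_map[max(0, row - r):min(h, row + r + 1),
                          max(0, col - r):min(w, col + r + 1)]
        temps.append(float(window.mean()))
    return temps

demo_map = 36.0 + 0.5 * np.random.default_rng(1).random((64, 64))
print(feature_point_temperatures(demo_map, [(10, 20), (32, 32), (50, 12)]))
```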
In another possible implementation manner, the method further includes:
determining the face type of the target living body in the first face region according to the M skin surface temperatures corresponding to the M feature points;
and acquiring a target regression relationship matched with the face type.
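As a hedged illustration of this step, the sketch below picks a face type by nearest-centroid matching of the M feature-point temperatures and then looks up the regression parameters associated with that type. The type names, centroids, and coefficient values are invented for the example; the returned coefficients would be applied to the compressed feature values as in the earlier sketch.

```python
# Sketch only: choose the target regression relationship that matches the
# face type inferred from the M feature-point temperatures.
import numpy as np

TYPE_CENTROIDS = {"type_a": np.array([36.0, 36.2, 35.9]),
                  "type_b": np.array([36.6, 36.8, 36.5])}
TYPE_REGRESSIONS = {"type_a": (np.array([0.12, -0.04]), 36.4),   # (coef, intercept)
                    "type_b": (np.array([0.10, -0.02]), 36.6)}

def select_regression(feature_temps):
    face_type = min(TYPE_CENTROIDS,
                    key=lambda t: np.linalg.norm(feature_temps - TYPE_CENTROIDS[t]))
    return face_type, TYPE_REGRESSIONS[face_type]

print(select_regression(np.array([36.1, 36.3, 36.0])))
```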
In another possible implementation manner, the acquiring skin surface temperatures corresponding to a plurality of pixel points of the first face region includes:
extracting M feature points from a plurality of target regions in the first face region, M being an integer greater than 1;
and determining M skin surface temperatures corresponding to the M feature points, and taking the M skin surface temperatures corresponding to the M feature points as the skin surface temperatures corresponding to the plurality of pixel points.
In another possible implementation manner, the acquiring skin surface temperatures of a plurality of target areas in the first face area according to the skin surface temperatures corresponding to the plurality of pixel points, and acquiring the body temperature of the living target according to the skin surface temperatures of the plurality of target areas includes:
generating a skin surface temperature map corresponding to the first face area according to the skin surface temperatures corresponding to the plurality of pixel points;
and inputting the skin surface temperature map into a body temperature neural network model of the target living body to obtain the body temperature of the target living body, wherein the body temperature neural network model is used for acquiring the skin surface temperatures of a plurality of target areas in the first face area according to the skin surface temperature map and acquiring the body temperature of the target living body according to the skin surface temperatures of the plurality of target areas.
In another possible implementation manner, the body temperature neural network model comprises a convolutional layer, an activation layer, a pooling layer, and a fully connected layer;
the inputting the skin surface temperature map into the body temperature neural network model of the target living body to obtain the body temperature of the target living body comprises the following steps:
acquiring, through the convolutional layer, first feature maps of a plurality of target areas in the first face area according to the skin surface temperature map;
performing nonlinear correction on the first feature maps through the activation layer to obtain second feature maps;
down-sampling the second feature maps through the pooling layer to obtain third feature maps;
compressing the third feature maps into N one-dimensional feature vectors through the fully connected layer, wherein N is an integer greater than or equal to 1;
and mapping the N one-dimensional feature vectors to the body temperature of the target living body.
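The following PyTorch sketch shows one possible layout of a network with exactly these four layer types and a final mapping to a scalar body temperature. The input size (a 1 x 64 x 64 skin surface temperature map), the channel count, and N = 64 are assumptions for illustration, not disclosed parameters.

```python
# Sketch only: convolutional -> activation -> pooling -> fully connected layers,
# then a mapping from the N-dimensional feature vector to the body temperature.
import torch
import torch.nn as nn

class BodyTempNet(nn.Module):
    def __init__(self, n_features=64):
        super().__init__()
        self.conv = nn.Conv2d(1, 16, kernel_size=3, padding=1)  # convolutional layer -> first feature maps
        self.act = nn.ReLU()                                    # activation layer -> second feature maps
        self.pool = nn.MaxPool2d(2)                             # pooling layer -> third feature maps
        self.fc = nn.Linear(16 * 32 * 32, n_features)           # fully connected layer -> N-dim feature vector
        self.head = nn.Linear(n_features, 1)                    # maps the feature vector to the body temperature

    def forward(self, temp_map):
        x = self.pool(self.act(self.conv(temp_map)))
        x = self.fc(torch.flatten(x, start_dim=1))
        return self.head(x)

# Example: a batch containing one 64 x 64 skin surface temperature map.
print(BodyTempNet()(torch.rand(1, 1, 64, 64)).shape)  # torch.Size([1, 1])
```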
In another possible implementation manner, the determining, in the thermal infrared image of the living target, a first face region where a face of the living target is located includes:
inputting the thermal infrared image of the target living body into a face detection model of the target living body to obtain a first face area where the face of the target living body is located; or,
acquiring a visible light image of the target living body, determining a second face area where the face of the target living body is located in the visible light image, and determining a first face area where the face of the target living body is located in the thermal infrared image according to position information of the second face area in the visible light image, wherein the visible light image is captured by an image capturing apparatus when the thermal infrared image is captured.
In another possible implementation manner, the determining, in the thermal infrared image, a first face region where the face of the living target is located according to the position information of the second face region in the visible light image includes:
the image capturing apparatus comprises a camera, the visible light image and the thermal infrared image are captured by the camera using a light splitting technique, a first area corresponding to the position information is determined in the thermal infrared image according to the position information of the second face area in the visible light image, and the first area is taken as the first face area where the face of the target living body is located; or,
the image capturing apparatus comprises a first camera and a second camera, the visible light image is captured by the first camera, the thermal infrared image is captured by the second camera, a second area corresponding to the position information is determined in the thermal infrared image according to the position information of the second face area in the visible light image, and coordinate registration is performed on the second area according to registration information of the image capturing apparatus to obtain the first face area.
In another possible implementation, before the obtaining skin surface temperatures corresponding to a plurality of pixel points of the first face region, the method further includes:
determining an occlusion region in the first face region, the occlusion region being a region in which a face of the living target is occluded by an object;
removing the occlusion region from the first face region.
In another possible implementation, before the obtaining skin surface temperatures corresponding to a plurality of pixel points of the first face region, the method further includes:
acquiring pose information of a face pose of the target living body in the first face region;
determining, according to the pose information, that an inclination angle of the face pose relative to a target pose is greater than a preset threshold;
and performing pose correction on the face in the first face region according to the pose information.
In another possible implementation manner, the acquiring skin surface temperatures corresponding to a plurality of pixel points of the first face region includes:
determining thermal infrared gray values corresponding to a plurality of pixel points of the first face area;
and acquiring the skin surface temperatures corresponding to the plurality of pixel points from the corresponding relation between the thermal infrared gray values and the skin surface temperatures according to the thermal infrared gray values corresponding to the plurality of pixel points.
In a second aspect, there is provided a body temperature measurement device, the device comprising:
a face region determination module configured to determine, in a thermal infrared image of a living target, a first face region in which a face of the living target is located;
a skin surface temperature acquisition module configured to acquire skin surface temperatures corresponding to a plurality of pixel points of the first face region;
a body temperature acquisition module configured to acquire skin surface temperatures of a plurality of target areas in the first face area according to the skin surface temperatures corresponding to the plurality of pixel points, and acquire a body temperature of the target living body according to the skin surface temperatures of the plurality of target areas.
In a possible implementation manner, the body temperature obtaining module is further configured to determine, according to the skin surface temperatures corresponding to the plurality of pixel points, M skin surface temperatures corresponding to M feature points of the plurality of target regions of the first face region, where M is an integer greater than 1; perform feature compression on the M skin surface temperatures corresponding to the M feature points to obtain P feature values corresponding to P feature points, where P is a positive integer less than M; and determine the body temperature of the target living body through a target regression relationship according to the P feature values, where the independent variables of the target regression relationship are the P feature values, and the dependent variable is the body temperature of the target living body; wherein the target regression relationship is determined according to a face type of the target living body in the first face region, the face type being determined according to the M skin surface temperatures corresponding to the M feature points.
In another possible implementation manner, the body temperature obtaining module is further configured to extract M feature points from the multiple target regions of the first face region; selecting skin surface temperatures corresponding to M pixel points corresponding to the M characteristic points from the skin surface temperatures corresponding to the multiple pixel points, and taking the skin surface temperatures corresponding to the M pixel points as the M skin surface temperatures corresponding to the M characteristic points; or, for each feature point in the M feature points, determining a feature region where the feature point is located in the plurality of pixel points according to the feature point, determining an average temperature of skin surface temperatures of each pixel point in the feature region, and taking the average temperature as the skin surface temperature of the feature point.
In another possible implementation manner, the body temperature obtaining module is further configured to determine a face type of the target living body in the first face region according to M skin surface temperatures corresponding to the M feature points; and acquiring a target regression relation matched with the face type.
In another possible implementation manner, the skin surface temperature obtaining module is further configured to extract M feature points from a plurality of target regions in the first face region, where M is an integer greater than 1; and determining M skin surface temperatures corresponding to the M characteristic points, and determining the M skin surface temperatures corresponding to the M characteristic points as the skin surface temperatures of the multiple pixel points.
In another possible implementation manner, the body temperature obtaining module is further configured to generate a skin surface temperature map corresponding to the first face area according to the skin surface temperatures corresponding to the plurality of pixel points; and inputting the skin surface temperature map into a body temperature neural network model of the target living body to obtain the body temperature of the target living body, wherein the body temperature neural network model is used for acquiring the skin surface temperatures of a plurality of target areas in the first face area according to the skin surface temperature map and acquiring the body temperature of the target living body according to the skin surface temperatures of the plurality of target areas.
In another possible implementation manner, the body temperature neural network model comprises a convolutional layer, an activation layer, a pooling layer, and a fully connected layer;
the body temperature acquisition module is further configured to acquire, through the convolutional layer, a first feature map of a plurality of target areas in the first face area according to the skin surface temperature map; perform nonlinear correction on the first feature map through the activation layer to obtain a second feature map; down-sample the second feature map through the pooling layer to obtain a third feature map; compress the third feature map into N one-dimensional feature vectors through the fully connected layer, wherein N is an integer greater than or equal to 1; and map the N one-dimensional feature vectors to the body temperature of the target living body.
In another possible implementation manner, the face region determination module is further configured to input the thermal infrared image of the living target into a face detection model of the living target, and obtain a first face region where the face of the living target is located; or acquiring a visible light image of the target living body, determining a second face area where the face of the target living body is located in the visible light image, and determining a first face area where the face of the target living body is located in the thermal infrared image according to position information of the second face area in the visible light image, wherein the visible light image is shot by an image shooting device when the thermal infrared image is shot.
In another possible implementation manner, the image capturing apparatus includes a camera, the visible light image and the thermal infrared image are captured by the camera through a light splitting technique, and the face region determining module is further configured to determine, according to position information of the second face region in the visible light image, a first region corresponding to the position information in the thermal infrared image, and use the first region as the first face region where the face of the target living body is located; or,
the image pickup device comprises a first camera and a second camera, the visible light image is shot by the first camera, the thermal infrared image is shot by the second camera, the face area determining module is further configured to determine a second area corresponding to the position information in the thermal infrared image according to the position information of the second face area in the visible light image, and coordinate registration is performed on the second area according to registration information of the image pickup device to obtain the first face area.
In another possible implementation manner, the face region determination module is further configured to determine an occlusion region in the first face region, where the occlusion region is a region where the face of the target living body is occluded by an object; removing the occlusion region from the first face region.
In another possible implementation, the face region determination module is further configured to acquire pose information of a face pose of the target living body in the first face region; determine, according to the pose information, that an inclination angle of the face pose relative to a target pose is greater than a preset threshold; and perform pose correction on the face in the first face region according to the pose information.
In another possible implementation, the skin surface temperature acquisition module is further configured to determine thermal infrared gray values corresponding to a plurality of pixel points of the first face region; and acquiring the skin surface temperatures corresponding to the plurality of pixel points from the corresponding relation between the thermal infrared gray values and the skin surface temperatures according to the thermal infrared gray values corresponding to the plurality of pixel points.
In a third aspect, an image capturing apparatus is provided, where the image capturing apparatus includes a processor and a memory, where the memory stores at least one instruction, and the instruction is loaded by the processor and executed to implement the operations performed in the body temperature measurement method in any one of the above possible implementations.
In a fourth aspect, a computer-readable storage medium is provided, where at least one instruction is stored, and the instruction is loaded and executed by a processor to implement the operations performed by the image capturing apparatus in the body temperature measurement method in any one of the above possible implementation manners.
The technical scheme provided by the embodiment of the disclosure has the following beneficial effects:
in the embodiments of the present disclosure, a first face area where the face of the target living body is located is determined in the thermal infrared image of the target living body; skin surface temperatures corresponding to a plurality of pixel points of the first face area are acquired; the skin surface temperatures of a plurality of target areas in the first face area are acquired according to the skin surface temperatures corresponding to the plurality of pixel points, and the body temperature of the target living body is acquired according to the skin surface temperatures of the plurality of target areas. By acquiring the skin surface temperatures of the plurality of target areas in the first face area and acquiring the body temperature of the target living body according to these skin surface temperatures, the temperatures of the various areas of the face are fully utilized, so that the acquired body temperature has a small error and high accuracy. In addition, a forehead thermometer gun does not need to be pressed against the forehead, which avoids the discomfort and the worry about adverse effects that contact with a forehead thermometer gun causes for the user.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present disclosure, and those of ordinary skill in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a schematic illustration of an implementation environment provided by embodiments of the present disclosure;
FIG. 2 is a flow chart of a body temperature measurement method provided by an embodiment of the present disclosure;
FIG. 3 is a flow chart of a body temperature measurement method provided by an embodiment of the present disclosure;
FIG. 4 is a flow chart of a body temperature measurement method provided by an embodiment of the present disclosure;
FIG. 5 is a flow chart of a body temperature measurement method provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a body temperature neural network model provided by an embodiment of the present disclosure;
FIG. 7 is a flow chart of a body temperature measurement method provided by an embodiment of the present disclosure;
FIG. 8 is a flow chart of a body temperature measurement method provided by an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of a body temperature measuring device according to an embodiment of the present disclosure;
FIG. 10 is a schematic structural diagram of an image capturing apparatus provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure more apparent, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
FIG. 1 is a schematic diagram of an implementation environment provided by embodiments of the present disclosure. Referring to FIG. 1, the implementation environment includes an image capturing apparatus 10, and the image capturing apparatus 10 is used to measure the body temperature of a target living body. The image capturing apparatus 10 comprises an imaging acquisition unit 101, a face detection unit 102, and a body temperature measurement unit 103, wherein the imaging acquisition unit 101 is electrically connected with the face detection unit 102, and the face detection unit 102 is electrically connected with the body temperature measurement unit 103.
The imaging acquisition unit 101 may include components such as an infrared optical system and a thermal imaging infrared sensor, and is configured to acquire a thermal infrared image of the target living body. The imaging acquisition unit 101 may further include components such as a visible light lens and a visible light image sensor, and is configured to acquire a visible light image of the target living body.
The face detection unit 102 is configured to determine a first face region in the thermal infrared image where the face of the target living body is located. In one possible implementation, the face detection unit 102 may determine the first face region in which the face of the target living body is located directly based on the thermal infrared image. For example, the face detection unit 102 may input the thermal infrared image of the living target into the face detection model of the living target, and obtain a first face region where the face of the living target is located. In another possible implementation, the face detection unit 102 may determine a first face region where the face of the target living body is located in combination with the visible light image of the target living body. For example, the face detection unit 102 may determine a second face region in which the face of the living subject is located in the visible light image, and determine a first face region in which the face of the living subject is located in the thermal infrared image based on position information of the second face region in the visible light image, wherein the visible light image is captured by the image capture apparatus when the thermal infrared image is captured.
The body temperature measuring unit 103 is used for determining the body temperature of the target living body according to the first face area of the target living body in the thermal infrared image. The body temperature measurement unit 103 acquires skin surface temperatures corresponding to a plurality of pixel points of the first face region, and acquires skin surface temperatures of a plurality of target regions in the first face region according to the skin surface temperatures corresponding to the plurality of pixel points. And then acquiring the body temperature of the target living body according to the skin surface temperatures of the plurality of target areas.
The image capturing apparatus 10 may be provided in public places such as airports, ports, train stations, and companies, for measuring the body temperature of passing persons. The image capturing apparatus 10 may also be provided together with security inspection equipment at the above-described locations. Accordingly, the imaging acquisition unit 101 may be disposed at a security checkpoint of a public place such as an airport, a port, a train station, or a company, and is configured to collect thermal infrared images and visible light images of passing persons. For example, a security checkpoint of a subway may be equipped with a camera. Of course, the image capturing apparatus 10 may also be provided in other places, which is not limited by the present disclosure.
Fig. 2 is a flowchart of a body temperature measurement method provided by an embodiment of the present disclosure. Referring to fig. 2, the embodiment includes:
step 201: in a thermal infrared image of a living subject of interest, a first face area in which a face of the living subject of interest is located is determined.
Step 202: skin surface temperatures corresponding to a plurality of pixel points of the first face region are acquired.
Step 203: the skin surface temperatures of a plurality of target areas in the first face area are acquired according to the skin surface temperatures corresponding to the plurality of pixel points, and the body temperature of the target living body is acquired according to the skin surface temperatures of the plurality of target areas.
In one possible implementation manner, acquiring skin surface temperatures of a plurality of target areas in the first face area according to skin surface temperatures corresponding to a plurality of pixel points, and acquiring a body temperature of a target living body according to the skin surface temperatures of the plurality of target areas includes:
determining M skin surface temperatures corresponding to M feature points of a plurality of target areas of the first face area according to the skin surface temperatures corresponding to the plurality of pixel points, wherein M is an integer greater than 1;
performing feature compression on the M skin surface temperatures corresponding to the M feature points to obtain P feature values corresponding to P feature points, wherein P is a positive integer less than M;
determining the body temperature of the target living body through a target regression relationship according to the P feature values, wherein the independent variables of the target regression relationship are the P feature values, and the dependent variable is the body temperature of the target living body; wherein the target regression relationship is determined according to a face type of the target living body in the first face region, the face type being determined according to the M skin surface temperatures corresponding to the M feature points.
In another possible implementation manner, determining M skin surface temperatures corresponding to M feature points of a plurality of target regions of the first face region according to the skin surface temperatures corresponding to the plurality of pixel points includes:
extracting M feature points from a plurality of target regions of a first face region;
selecting, from the skin surface temperatures corresponding to the plurality of pixel points, skin surface temperatures corresponding to M pixel points corresponding to the M feature points, and taking the skin surface temperatures corresponding to the M pixel points as the M skin surface temperatures corresponding to the M feature points; or,
for each of the M feature points, determining, among the plurality of pixel points, a feature region where the feature point is located, determining an average of the skin surface temperatures of the pixel points in the feature region, and taking the average as the skin surface temperature of the feature point.
In another possible implementation manner, the method further includes:
determining the face type of the living target in the first face region according to the M skin surface temperatures corresponding to the M feature points;
and acquiring a target regression relationship matched with the face type.
In another possible implementation, obtaining skin surface temperatures corresponding to a plurality of pixel points of the first face region includes:
extracting M feature points from a plurality of target regions in the first face region, wherein M is an integer greater than 1;
and determining M skin surface temperatures corresponding to the M feature points, and taking the M skin surface temperatures corresponding to the M feature points as the skin surface temperatures corresponding to the plurality of pixel points.
In another possible implementation manner, acquiring skin surface temperatures of a plurality of target areas in the first face area according to skin surface temperatures corresponding to a plurality of pixel points, and acquiring a body temperature of the living target according to the skin surface temperatures of the plurality of target areas, includes:
generating a skin surface temperature map corresponding to the first face area according to the skin surface temperatures corresponding to the multiple pixel points;
and inputting the skin surface temperature map into a body temperature neural network model of the target living body to obtain the body temperature of the target living body, wherein the body temperature neural network model is used for acquiring the skin surface temperatures of a plurality of target areas in the first face area according to the skin surface temperature map and acquiring the body temperature of the target living body according to the skin surface temperatures of the plurality of target areas.
In another possible implementation, the body temperature neural network model comprises a convolutional layer, an activation layer, a pooling layer, and a fully connected layer;
inputting the skin surface temperature map into a body temperature neural network model of a target living body to obtain the body temperature of the target living body, wherein the method comprises the following steps:
acquiring, through the convolutional layer, first feature maps of a plurality of target areas in the first face area according to the skin surface temperature map;
performing nonlinear correction on the first feature maps through the activation layer to obtain second feature maps;
down-sampling the second feature maps through the pooling layer to obtain third feature maps;
compressing the third feature maps into N one-dimensional feature vectors through the fully connected layer, wherein N is an integer greater than or equal to 1;
and mapping the N one-dimensional feature vectors to the body temperature of the target living body.
In another possible implementation, determining a first face region in which a face of a living target is located in a thermal infrared image of the living target includes:
inputting the thermal infrared image of the target living body into a face detection model of the target living body to obtain a first face area where the face of the target living body is located; or,
acquiring a visible light image of the target living body, determining a second face area where the face of the target living body is located in the visible light image, and determining a first face area where the face of the target living body is located in the thermal infrared image according to position information of the second face area in the visible light image, wherein the visible light image is captured by the image capturing apparatus when the thermal infrared image is captured.
In another possible implementation manner, determining a first face area in which the face of the living target is located in the thermal infrared image according to the position information of the second face area in the visible light image includes:
the image capturing apparatus comprises a camera, the visible light image and the thermal infrared image are captured by the camera using a light splitting technique, a first area corresponding to the position information is determined in the thermal infrared image according to the position information of the second face area in the visible light image, and the first area is taken as the first face area where the face of the target living body is located; or,
the image capturing apparatus comprises a first camera and a second camera, the visible light image is captured by the first camera, the thermal infrared image is captured by the second camera, a second area corresponding to the position information is determined in the thermal infrared image according to the position information of the second face area in the visible light image, and coordinate registration is performed on the second area according to registration information of the image capturing apparatus to obtain the first face area.
In another possible implementation, before obtaining skin surface temperatures corresponding to a plurality of pixel points of the first face region, the method further includes:
determining an occlusion region in the first face region, wherein the occlusion region is a region in which the face of the target living body is occluded by an object;
the occlusion region is removed from the first face region.
In another possible implementation, before obtaining skin surface temperatures corresponding to a plurality of pixel points of the first face region, the method further includes:
acquiring pose information of a face pose of the target living body in the first face area;
determining, according to the pose information, that an inclination angle of the face pose relative to a target pose is greater than a preset threshold;
and performing pose correction on the face in the first face area according to the pose information.
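One simple, illustrative form such a correction could take is rotating the face area back toward the target (upright) pose when the tilt exceeds the threshold; the sketch below uses OpenCV only for the affine rotation, and the way the inclination angle is estimated (and its sign convention) is assumed to be given.

```python
# Sketch only: rotate the face area back by the measured inclination when it
# exceeds a preset threshold; angle estimation is outside this sketch.
import cv2
import numpy as np

def correct_pose(face_area, inclination_deg, threshold_deg=15.0):
    """face_area: 2-D array (e.g., a thermal face crop); inclination_deg: tilt
    of the face pose relative to the target pose, in degrees."""
    if abs(inclination_deg) <= threshold_deg:
        return face_area                      # close enough to the target pose
    h, w = face_area.shape[:2]
    # Sign convention of the rotation depends on how the inclination is defined.
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), -inclination_deg, 1.0)
    return cv2.warpAffine(face_area, rot, (w, h))

corrected = correct_pose(np.zeros((96, 96), dtype=np.float32), inclination_deg=25.0)
```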
In another possible implementation, obtaining skin surface temperatures corresponding to a plurality of pixel points of the first face region includes:
determining thermal infrared gray values corresponding to a plurality of pixel points of the first face area;
and acquiring the skin surface temperatures corresponding to the multiple pixel points from the corresponding relation between the thermal infrared gray values and the skin surface temperatures according to the thermal infrared gray values corresponding to the multiple pixel points.
In the embodiments of the present disclosure, a first face area where the face of the target living body is located is determined in the thermal infrared image of the target living body; skin surface temperatures corresponding to a plurality of pixel points of the first face area are acquired; the skin surface temperatures of a plurality of target areas in the first face area are acquired according to the skin surface temperatures corresponding to the plurality of pixel points, and the body temperature of the target living body is acquired according to the skin surface temperatures of the plurality of target areas. By acquiring the skin surface temperatures of the plurality of target areas in the first face area and acquiring the body temperature of the target living body according to these skin surface temperatures, the temperatures of the various areas of the face are fully utilized, so that the acquired body temperature has a small error and high accuracy. In addition, a forehead thermometer gun does not need to be pressed against the forehead, which avoids the discomfort and the worry about adverse effects that contact with a forehead thermometer gun causes for the user.
Fig. 3 is a flowchart of a body temperature measurement method according to an embodiment of the present disclosure. This embodiment mainly describes a method of measuring body temperature by extracting feature points from a plurality of regions of the face. Referring to fig. 3, the embodiment includes:
step 301: the imaging apparatus determines a first face area in which a face of a living subject is located in a thermal infrared image of the living subject.
The target living body may be a human or an animal. The human may be a person of any age, and the animal may be a pig, a cow, a sheep, a dog, or another animal, which is not limited by the present disclosure.
The thermal infrared image is an image formed by the image capturing apparatus receiving and recording the thermal radiation energy emitted by the target living body. Since different objects, or different parts of the same object, generally have different thermal radiation characteristics, such as temperature and emissivity, the objects in the thermal infrared image can be distinguished from one another by their differences in thermal radiation after thermal infrared imaging is performed.
In one possible implementation manner, the image capturing apparatus may directly determine, based on the thermal infrared image, the first face region where the face of the target living body is located. Accordingly, the image capturing apparatus may determine the first face region in the thermal infrared image of the target living body in the following manner: the image capturing apparatus inputs the thermal infrared image of the target living body into a face detection model of the target living body, and obtains the first face region where the face of the target living body is located.
For example, if the target living body is a person, the imaging apparatus inputs a thermal infrared image of the person into a face detection model of the person, and obtains a first face region of the person's face in the thermal infrared image. The face detection model may be a detection model for a thermal infrared image, and is used for detecting a face region in the thermal infrared image. Moreover, the face detection model may adopt any neural network model, which is not limited by the present disclosure.
In the embodiment of the disclosure, the image pickup device obtains the first face area where the face of the living target is located by inputting the thermal infrared image of the living target into the face detection model of the living target, and the method is simple and efficient. In addition, since the thermal infrared image is less susceptible to illumination variation, the imaging apparatus directly acquires the first face region where the face of the target living body is located on the thermal infrared image, and the detection effect is stable.
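Since, as noted above, any neural network model may serve as the face detection model, the sketch below only illustrates the shape of the interaction: a detector callable takes the thermal infrared image and returns candidate face boxes. The loader function and the (x, y, width, height) box format are hypothetical placeholders, not a real detection API.

```python
# Sketch only: a placeholder thermal face detector returning (x, y, w, h) boxes.
import numpy as np

def load_thermal_face_detector():
    def detect(image):
        h, w = image.shape[:2]
        return [(w // 4, h // 4, w // 2, h // 2)]   # fixed central box as a stand-in
    return detect

detector = load_thermal_face_detector()
thermal_image = np.zeros((240, 320), dtype=np.uint8)
first_face_region = detector(thermal_image)[0]
print(first_face_region)
```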
In another possible implementation manner, the image capturing apparatus may determine, in combination with a visible light image of the target living body, the first face area in which the face of the target living body is located in the thermal infrared image. Accordingly, the image capturing apparatus may determine the first face area in the following manner: the image capturing apparatus acquires a visible light image of the target living body, determines a second face area in which the face of the target living body is located in the visible light image, and determines, based on position information of the second face area in the visible light image, the first face area in which the face of the target living body is located in the thermal infrared image. The visible light image is captured by the image capturing apparatus when the thermal infrared image is captured.
The implementation manner of the image capturing apparatus determining the second face area where the face of the target living body is located in the visible light image may be: the imaging apparatus inputs a visible light image of the living subject into a face detection model of the living subject, and obtains a second face region in which the face of the living subject is located. The face detection model may be a detection model for the visible light image, and is used for detecting a face region in the visible light image. Moreover, the face detection model may adopt any neural network model, which is not limited by the present disclosure.
In the embodiment of the present disclosure, the image capturing apparatus determines the second face area where the face of the target living body is located from the visible light image, and then determines the first face area where the face of the target living body is located in the thermal infrared image based on the position information of the second face area in the visible light image. Since visible light has good transmittance through glass and the visible light image has good imaging quality at long shooting distances, the accuracy of the second face area determined from the visible light image is high, and therefore the accuracy of the first face area determined in the thermal infrared image is also high.
In a possible implementation manner, the image capturing apparatus may be an image capturing apparatus with a light splitting structure, that is, the image capturing apparatus includes a camera, and the camera is respectively connected to the thermal imaging infrared sensor and the visible light image sensor. The camera device is used for collecting a thermal infrared image and a visible light image of the target living body through the camera. That is, the visible light image and the thermal infrared image are captured by the camera by using the light splitting technology, and accordingly, the implementation manner of determining the first face area where the face of the target living body is located in the thermal infrared image according to the position information of the second face area in the visible light image by the camera device is as follows: the image pickup apparatus determines a first region corresponding to the position information in the thermal infrared image based on the position information of the second face region in the visible light image, and sets the first region as a first face region where the face of the target living body is located.
In the embodiment of the present disclosure, the image capturing apparatus determines the second face area from the visible light image captured by using the light splitting technique, determines, according to the position information of the second face area in the visible light image, the first area corresponding to the position information in the thermal infrared image, and takes the first area as the first face area where the face of the target living body is located.
In another possible implementation manner, the image capturing apparatus may be an image capturing apparatus with a binocular structure, that is, the image capturing apparatus includes two cameras, namely a first camera and a second camera. The first camera may be a visible light camera connected to the visible light image sensor, and the second camera may be a thermal infrared camera connected to the thermal imaging infrared sensor. The image capturing apparatus collects a visible light image of the target living body through the visible light camera and collects a thermal infrared image of the target living body through the thermal infrared camera; that is, the visible light image is captured by the first camera and the thermal infrared image is captured by the second camera. Correspondingly, the image capturing apparatus determines the first face area where the face of the target living body is located in the thermal infrared image according to the position information of the second face area in the visible light image in the following manner: the image capturing apparatus determines a second area corresponding to the position information in the thermal infrared image according to the position information of the second face area in the visible light image, and performs coordinate registration on the second area according to the registration information of the image capturing apparatus to obtain the first face area.
The registration information of the camera device may include parameters such as a focal length of the first camera and the second camera, a baseline distance of the first camera and the second camera, and correspondingly, the coordinate registration is performed on the second region according to the registration information of the camera device, and an implementation manner of obtaining the first face region may be: according to the registration information of the image pickup apparatus, a transformation matrix is solved, by which the second region is mapped to the first face region.
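One hedged, simplified form such a registration could take is shown below: a 2 x 3 affine transformation is estimated by least squares from matched calibration point pairs (visible-light coordinates mapped to thermal coordinates), and the corners of the second area are then transformed into the thermal infrared image. The calibration pairs and box values are invented; in practice the registration information (focal lengths, baseline distance) would feed the calibration that produces such pairs.

```python
# Sketch only: least-squares affine registration from calibration point pairs,
# then mapping the second area's corners into the thermal infrared image.
import numpy as np

def fit_affine(src_pts, dst_pts):
    """src_pts, dst_pts: arrays of shape (K, 2) of matched calibration points."""
    src = np.hstack([src_pts, np.ones((len(src_pts), 1))])   # (K, 3)
    affine, *_ = np.linalg.lstsq(src, dst_pts, rcond=None)   # (3, 2)
    return affine.T                                           # (2, 3) transform

def map_region(corners, affine):
    """corners: (4, 2) box corners in the visible image -> registered corners."""
    src = np.hstack([corners, np.ones((len(corners), 1))])
    return src @ affine.T

vis = np.array([[0, 0], [600, 0], [600, 400], [0, 400], [300, 200]], float)
ir = np.array([[10, 5], [310, 8], [312, 205], [12, 202], [161, 105]], float)
A = fit_affine(vis, ir)
print(map_region(np.array([[100, 80], [220, 80], [220, 200], [100, 200]], float), A))
```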
In the embodiment of the disclosure, the first camera and the second camera in the camera device with the binocular structure are used for respectively acquiring the visible light image and the thermal infrared image, so that the requirement on the camera device is low, and the cost of the camera device can be reduced.
It should be noted that, in the embodiment of the present disclosure, the first face area in the thermal infrared image may be an area occupied by a rectangle circumscribing the face of the living target in the thermal infrared image, and may also be an area surrounded by a face contour of the living target in the thermal infrared image, which is not limited by the present disclosure.
It should be noted that the first face area may be represented by coordinates of the first face area in the thermal infrared image, and for example, when the first face area is an area occupied by a circumscribed rectangle of the face of the living subject in the thermal infrared image, the first face area is represented by coordinates of the circumscribed rectangle of the face in the thermal infrared image. Illustratively, when the first face area is an area surrounded by a face contour of the living subject in the thermal infrared image, the first face area is represented by coordinates of the area surrounded by the face contour in the thermal infrared image.
It should be noted that the number of target living bodies in the thermal infrared image in the present disclosure may be one or more. That is, the temperature measurement method in the present disclosure is applicable both to application scenarios in which a single living body is measured and to application scenarios in which multiple living bodies are measured. In addition, when the thermal infrared image of the target living body is acquired, the distance between the target living body and the image capturing apparatus is not limited, that is, the temperature measurement method is applicable to temperature measurement scenarios at different distances.
Step 302: the image capturing apparatus determines an occlusion region in the first face region, the occlusion region being a region in which the face of the target living body is occluded by an object, and removes the occlusion region from the first face region.
In one possible implementation manner, the image capturing apparatus may include a face segmentation module, through which the image capturing apparatus divides the first face region into an occlusion region and a skin region, and removes the occlusion region from the first face region so that only the skin region remains. For example, the area covered by a face mask in the first face region is removed, leaving the skin region.
For example, the face segmentation module may divide the first face region into an occlusion region and a skin region according to colors and brightness of a plurality of pixel points in the first face region.
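A minimal sketch of such a segmentation is given below (Python/NumPy). It assumes, purely for illustration, that occluded pixels (for example a mask or glasses) image darker or colder than skin and can be separated by a brightness threshold relative to the region median; the actual segmentation criterion used by the face segmentation module is not limited by the disclosure.

```python
import numpy as np

def split_skin_and_occlusion(face_gray, rel_threshold=0.85):
    """Divide the first face region into a skin mask and an occlusion mask
    based only on brightness: pixels well below the region median are treated
    as occluded (masks, glasses and hair typically image colder than skin)."""
    median = np.median(face_gray)
    skin_mask = face_gray >= rel_threshold * median
    return skin_mask, ~skin_mask

face = np.random.randint(150, 255, size=(64, 64))  # stand-in thermal gray values of a face region
face[40:, :] = 100                                  # simulate a mask over the lower part of the face
skin, occluded = split_skin_and_occlusion(face)
print(occluded.mean())                              # fraction of pixels removed from the face region
```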
In the embodiment of the present disclosure, the image capturing apparatus determines the occlusion region in the first face region and removes it, so that only the skin region remains in the first face region, and the body temperature subsequently acquired from a first face region containing only the skin region is more accurate.
It should be noted that, before executing step 302, the image capturing apparatus may determine whether an occlusion region exists in the first face region, and executes step 302 in response to determining that an occlusion region exists.
Step 303: the imaging apparatus acquires skin surface temperatures corresponding to a plurality of pixel points of the first face region.
Wherein the skin surface temperature is the skin temperature of the facial surface of the target living body. The skin surface temperature may be an equivalent black body temperature or other temperature that may be indicative of the skin surface temperature, as the present disclosure is not limited in this respect.
The method for acquiring the skin surface temperature corresponding to the plurality of pixel points of the first face area by the image pickup equipment is as follows: the image pickup device determines thermal infrared gray values corresponding to a plurality of pixel points of the first face area, and the image pickup device acquires skin surface temperatures corresponding to the plurality of pixel points from the corresponding relation between the thermal infrared gray values and the skin surface temperatures according to the thermal infrared gray values corresponding to the plurality of pixel points. The correspondence between the thermal infrared gray value and the skin surface temperature can be preset in the image pickup device.
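The sketch below (Python/NumPy) illustrates this lookup with a hypothetical calibration table; linear interpolation between calibration points is an assumption, since the disclosure only requires that a correspondence between gray values and skin surface temperatures is preset in the device.

```python
import numpy as np

# Hypothetical preset correspondence: thermal infrared gray value -> skin
# surface temperature (degrees Celsius), e.g. from radiometric calibration.
CALIB_GRAY = np.array([180.0, 200.0, 220.0, 240.0])
CALIB_TEMP = np.array([33.0, 34.5, 36.0, 37.5])

def gray_to_skin_temperature(gray_values):
    """Look up the skin surface temperature for each pixel's thermal infrared
    gray value, interpolating linearly between calibration points."""
    flat = np.interp(np.ravel(gray_values), CALIB_GRAY, CALIB_TEMP)
    return flat.reshape(np.shape(gray_values))

pixel_grays = np.array([[201, 215], [228, 236]])  # gray values of a few pixels of the face region
print(gray_to_skin_temperature(pixel_grays))
```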
It should be noted that the plurality of pixel points may be all pixel points of the first face region; by acquiring the skin surface temperature corresponding to every pixel point of the first face region, the body temperature of the target living body obtained based on these skin surface temperatures has high accuracy.
In a possible implementation manner, before the image capturing apparatus acquires the skin surface temperatures corresponding to the plurality of pixel points of the first face region, posture correction may need to be performed on the face in the first face region. The implementation manner may be: the image capturing apparatus acquires pose information of the facial pose of the target living body in the first face region, and, in response to determining according to the pose information that an inclination angle of the facial pose with respect to the target pose is greater than a preset threshold value, performs posture correction on the face in the first face region according to the pose information.
Wherein the target pose may be a frontal face and the pose information may be a deflection angle of the face pose relative to the frontal face. Correspondingly, the image pickup apparatus performs posture correction on the face in the first face region according to the posture information in the following manner: the image pickup apparatus adjusts the face in the first face region to the frontal face according to a deflection angle of the face pose with respect to the frontal face.
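A hedged sketch of one possible posture correction is shown below (Python, assuming OpenCV is available). It treats the deflection angle as an in-plane roll and simply rotates the face region back toward the frontal pose; correcting out-of-plane yaw or pitch would require a 3-D face model and is not covered by this sketch.

```python
import cv2
import numpy as np

def correct_pose(face_region, deflection_angle_deg, threshold_deg=10.0):
    """Rotate the face region back to an (approximately) frontal pose when the
    in-plane deflection angle of the facial pose exceeds a preset threshold."""
    if abs(deflection_angle_deg) <= threshold_deg:
        return face_region                     # already close enough to the target (frontal) pose
    img = np.asarray(face_region, dtype=np.float32)
    h, w = img.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), deflection_angle_deg, 1.0)
    return cv2.warpAffine(img, rot, (w, h))    # rotated so that the face is upright
```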
It should be noted that, the image capturing apparatus may also acquire skin surface temperatures corresponding to a plurality of pixel points of the first face region, and then perform posture correction on the face in the first face region, which is not limited by the present disclosure.
In the embodiment of the present disclosure, by performing posture correction on the face in the first face region, the image capturing apparatus facilitates the subsequent extraction of feature points from the plurality of target regions of the first face region and the acquisition of the skin surface temperatures of those feature points.
Step 304: the image pickup apparatus determines M skin surface temperatures corresponding to M feature points of a plurality of target regions of the first face region, M being an integer greater than 1, from the skin surface temperatures corresponding to the plurality of pixel points.
The plurality of target regions may be divided as needed; for example, the plurality of target regions may include any one or more of a forehead region, an eye region, a cheek region, a nose region, and the like, and may further include other regions such as a chin region. For example, the image capturing apparatus may divide the plurality of target regions according to the distribution characteristics of the face temperature: since the temperature of the forehead region and the eye region is relatively high, the image capturing apparatus may take the forehead region and the eye region as one target region, and since the temperature of the cheek region and the nose region is relatively low, it may take the cheek region and the nose region as another target region. The target regions described above are merely exemplary, and the disclosure is not limited thereto.
One implementation manner of this step is as follows: the image capturing apparatus extracts M feature points from the plurality of target regions of the first face region, and determines the M skin surface temperatures corresponding to the M feature points from the skin surface temperatures corresponding to the plurality of pixel points.
The image capturing apparatus may extract the same number of feature points from each of the plurality of target regions, may also extract different numbers of feature points from each of the plurality of target regions, and may also use each of the pixel points in the plurality of target regions as a feature point, which is not limited by the present disclosure. The image capturing apparatus may extract typical feature points from a plurality of target regions by a face key point detection algorithm, may also extract feature points from specific parts of the plurality of target regions, and may also randomly extract feature points from the plurality of target regions, which is not limited by the present disclosure.
In the embodiment of the present disclosure, by extracting feature points from the plurality of target regions of the first face region and acquiring the body temperature of the target living body from the skin surface temperatures of the feature points of the plurality of target regions, the image capturing apparatus fully takes into account the temperature differences at different positions of the face region, so that the acquired body temperature has a small error and high accuracy.
In one possible implementation manner, the image capturing apparatus may determine, according to the skin surface temperatures corresponding to the plurality of pixel points, M skin surface temperatures corresponding to the M feature points by: the image pickup apparatus selects skin surface temperatures corresponding to M pixel points corresponding to the M feature points from skin surface temperatures corresponding to the plurality of pixel points, and takes the skin surface temperatures corresponding to the M pixel points as the M skin surface temperatures corresponding to the M feature points. The method is simple and has high efficiency.
In another possible implementation manner, the implementation manner of determining, by the image capturing apparatus, the M skin surface temperatures corresponding to the M feature points according to the skin surface temperatures corresponding to the multiple pixel points is as follows: for each feature point in the M feature points, the camera device determines a feature area where the feature point is located in the plurality of pixel points according to the feature point, the camera device determines the average temperature of the skin surface temperature of each pixel point in the feature area, and the average temperature is used as the skin surface temperature of the feature point.
The size of the feature region may be set as required, for example, the feature region may be a region corresponding to 3 × 3 neighboring pixels, which is not limited in this disclosure.
In the embodiment of the disclosure, the image pickup device determines the characteristic region where the characteristic point is located in the plurality of pixel points, determines the average temperature of the skin surface temperature of each pixel point in the characteristic region, and uses the average temperature as the skin surface temperature of the characteristic point, so that the influence of noise fluctuation in the imaging process can be effectively eliminated, and the accuracy of the M skin surface temperatures corresponding to the M characteristic points is ensured.
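A minimal sketch of this neighbourhood averaging is given below (Python/NumPy); the 3 x 3 neighbourhood size and the example values are illustrative only.

```python
import numpy as np

def feature_point_temperature(temp_map, x, y, half_size=1):
    """Skin surface temperature of a feature point taken as the average over a
    small neighbourhood (3x3 by default), which damps noise fluctuations of
    the imaging process."""
    h, w = temp_map.shape
    y0, y1 = max(0, y - half_size), min(h, y + half_size + 1)
    x0, x1 = max(0, x - half_size), min(w, x + half_size + 1)
    return float(temp_map[y0:y1, x0:x1].mean())

temps = 36.0 + 0.3 * np.random.randn(128, 128)   # stand-in per-pixel skin surface temperatures
print(feature_point_temperature(temps, 64, 40))  # temperature assigned to the feature point at (64, 40)
```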
Step 305: and the image pickup equipment performs feature compression on the M skin surface temperatures corresponding to the M feature points to obtain P feature values corresponding to the P feature points, wherein P is a positive integer smaller than M.
The imaging device may perform feature compression on the M skin surface temperatures by any one feature dimension compression method, which is not limited by this disclosure.
In the embodiment of the disclosure, the image pickup device performs feature compression on the M skin surface temperatures corresponding to the M feature points to obtain P feature values corresponding to the P feature points, and then determines the body temperature of the target living body based on the P feature values, so that the calculation amount can be greatly reduced.
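The following sketch (Python/NumPy) shows feature compression as a simple linear projection, with PCA named only as one common example of a feature dimension compression method; the projection matrix, M = 68 and P = 8 are illustrative assumptions.

```python
import numpy as np

def compress_features(temps_m, projection):
    """Compress the M skin surface temperatures into P feature values by a
    linear projection, e.g. the top-P principal components (PCA) learned
    offline from a large set of observed temperature vectors."""
    return projection @ temps_m

M, P = 68, 8                                   # hypothetical numbers of feature points / feature values
rng = np.random.default_rng(0)
projection = rng.standard_normal((P, M))       # placeholder for a learned compression matrix
temps_m = 36.0 + 0.4 * rng.standard_normal(M)
print(compress_features(temps_m, projection))  # P feature values fed to the regression relationship
```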
Step 306: the image pickup equipment determines the body temperature of the target living body through a target regression relation according to the P characteristic values, the independent variable of the target regression relation is the P characteristic values, and the dependent variable is the body temperature of the target living body.
The target regression relationship may be acquired in advance and stored in the image capturing apparatus. The target regression relationship is acquired based on a large amount of observation data, which may be correspondences between the P feature values of living bodies and the body temperatures of those living bodies. In one possible implementation, the target regression relationship may be a default one: the image capturing apparatus stores a default target regression relationship and determines the body temperature of the target living body through that default relationship.
In another possible implementation, the target regression relationship is determined according to the face type of the target living body in the first face region, the face type being determined according to the M skin surface temperatures corresponding to the M feature points. The image capturing apparatus may further store a plurality of regression relationships, one corresponding to each face type. For example, the face types may include a face-temperature-uniform type and a face-temperature-non-uniform type, which correspond to different target regression relationships: the target regression relationship corresponding to the face-temperature-uniform type is obtained from observation data of a large number of living bodies of that type, and the target regression relationship corresponding to the face-temperature-non-uniform type is obtained from observation data of a large number of living bodies of that type.
It should be noted that the facial type may be classified based on the temperature difference correlation of different parts of the face, and the facial type is only an exemplary one, and the disclosure does not limit this.
The target regression relationship may be any one of the multiple regression relationships described above, and the disclosure is not limited thereto. Before executing step 306, the image capturing apparatus needs to acquire a target regression relationship, and the implementation manner may be: the image pickup apparatus determines a face type of the living subject in the first face region according to the M skin surface temperatures corresponding to the M feature points, and the image pickup apparatus acquires a target regression relationship matching the face type.
In the embodiment of the disclosure, the imaging device determines the body temperature of the target living body through the target regression relationship according to the P characteristic values, and the method is simple and easy to operate. The image pickup apparatus determines a face type of the living target in the first face region from the M skin surface temperatures corresponding to the M feature points, and determines a body temperature of the living target by acquiring a target regression relationship matching the face type, which can improve the accuracy of the body temperature.
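A minimal sketch of this final step is given below (Python/NumPy). It assumes, for illustration only, a linear regression form and a toy face-type rule based on the spread of the feature-point temperatures; the disclosure does not fix the regression form, the number of face types or the classification criterion.

```python
import numpy as np

# Hypothetical target regression relationships, one per face type, each fitted
# offline from a large amount of observation data (P feature values -> body temperature).
REGRESSIONS = {
    "uniform":     {"weights": np.full(8, 0.12), "bias": 1.5},
    "non_uniform": {"weights": np.full(8, 0.10), "bias": 2.3},
}

def classify_face_type(temps_m, spread_threshold=1.0):
    """Toy rule: faces whose feature-point temperatures vary little are 'uniform'."""
    return "uniform" if np.ptp(temps_m) < spread_threshold else "non_uniform"

def body_temperature(feature_values_p, temps_m):
    """Select the regression relationship matching the face type and evaluate it
    on the P feature values to obtain the body temperature."""
    reg = REGRESSIONS[classify_face_type(temps_m)]
    return float(reg["weights"] @ feature_values_p + reg["bias"])

rng = np.random.default_rng(1)
temps_m = 36.0 + 0.2 * rng.standard_normal(68)   # M skin surface temperatures of the feature points
feature_values_p = rng.standard_normal(8)        # P compressed feature values
print(round(body_temperature(feature_values_p, temps_m), 1))
```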
Referring to fig. 4, which is a flowchart of the temperature measurement: after posture correction is performed on the face in the first face region, the skin surface temperature is calculated, feature points are then extracted, and the skin surface temperatures of the feature points are converted into the body temperature.
In summary, the body temperature measurement method provided by the present disclosure determines the body temperature of the target living body by using the skin surface temperatures of multiple regions on the face, so that instability of single-region temperature measurement can be avoided, the temperature measurement error is small, and the accuracy is high.
In the embodiment of the disclosure, in the thermal infrared image of the target living body, a first face area where the face of the target living body is located is determined; skin surface temperatures corresponding to a plurality of pixel points of the first face region are acquired; the skin surface temperatures of a plurality of target areas in the first face area are acquired according to the skin surface temperatures corresponding to the plurality of pixel points, and the body temperature of the target living body is acquired according to the skin surface temperatures of the plurality of target areas. By acquiring the skin surface temperatures of a plurality of target areas in the first face area and acquiring the body temperature of the target living body according to the skin surface temperatures of the plurality of target areas, the temperatures of all areas of the face are fully utilized, so that the acquired body temperature has a small error and high accuracy. In addition, unlike a forehead thermometer gun that must be held close to the forehead, this method avoids the discomfort the user would feel and the concern about adverse effects caused by contact with such a device.
Fig. 5 is a flowchart of a body temperature measurement method provided by an embodiment of the present disclosure. This embodiment mainly describes a method of measuring body temperature by a body temperature neural network model. Referring to fig. 5, the embodiment includes:
step 501: the imaging apparatus determines a first face area in which a face of a living subject is located in a thermal infrared image of the living subject.
The implementation manner of this step is the same as that of step 301, and is not described herein again.
Step 502: the image pickup apparatus determines an occlusion region in a first face region, the occlusion region being a region in which a face of the living target is occluded by the object, the occlusion region being removed from the first face region.
The implementation of this step is the same as that of step 302, and is not described here again.
Step 503: the imaging apparatus acquires skin surface temperatures corresponding to a plurality of pixel points of the first face region.
The implementation manner of this step is the same as that of step 303, and is not described here again.
Step 504: the image pickup device generates a skin surface temperature map corresponding to the first face area according to the skin surface temperatures corresponding to the plurality of pixel points.
If the skin surface temperature in step 503 is the equivalent blackbody temperature, the skin surface temperature map in this step is the equivalent blackbody temperature map.
The skin surface temperature map is the skin surface temperature map corresponding to the face in the target posture. Accordingly, if the image capturing apparatus has already performed posture correction on the face in the first face region before acquiring, in step 503, the skin surface temperatures corresponding to the plurality of pixel points of the first face region, then the skin surface temperature map generated in this step from those skin surface temperatures is already the skin surface temperature map corresponding to the face in the target posture.
If posture correction has not been performed on the face in the first face region before the image capturing apparatus acquires, in step 503, the skin surface temperatures corresponding to the plurality of pixel points, then after generating the skin surface temperature map in this step, the image capturing apparatus needs to align the skin surface temperature map with the posture-corrected first face region. In this way, the subsequent body temperature neural network model can conveniently measure the body temperature of the target living body from the skin surface temperature map.
Step 505: and the camera equipment inputs the skin surface temperature map into the body temperature neural network model of the target living body to obtain the body temperature of the target living body.
The body temperature neural network model is used for acquiring the skin surface temperatures of a plurality of target areas in the first face area according to the skin surface temperature map and acquiring the body temperature of the target living body according to the skin surface temperatures of the plurality of target areas.
In one possible implementation, the body temperature neural network model includes convolutional layers (CONV), activation layers, pooling layers (POOL), and fully connected layers (FC). Referring to fig. 6, which is a schematic diagram of the network structure of the body temperature neural network model, the activation function in the activation layer may be ReLU, or may be another activation function such as Sigmoid or tanh, which is not limited in this disclosure.
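A minimal sketch of such a network is shown below (Python, assuming PyTorch). The channel counts, kernel sizes, input resolution (a 64 x 64 skin surface temperature map) and the K = 80 output values are illustrative assumptions; K = 80 matches the human-body example given later in this embodiment.

```python
import torch
import torch.nn as nn

class BodyTempNet(nn.Module):
    """Body temperature neural network sketch: convolution + activation +
    pooling extract feature maps of the target regions, and fully connected
    layers compress them into one-dimensional feature vectors that are mapped
    to K candidate body temperature values."""

    def __init__(self, k_classes=80):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # CONV: first feature map
            nn.ReLU(),                                   # activation layer: nonlinear correction -> second feature map
            nn.MaxPool2d(2),                             # POOL: down-sampling -> third feature map
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                                # one-dimensional feature vectors
            nn.Linear(32 * 16 * 16, 128),                # FC: compress the feature maps
            nn.ReLU(),
            nn.Linear(128, k_classes),                   # FC: map to the K body temperature values
        )

    def forward(self, temperature_map):
        return self.classifier(self.features(temperature_map))

model = BodyTempNet()
dummy_map = torch.randn(1, 1, 64, 64)  # stand-in skin surface temperature map
print(model(dummy_map).shape)          # torch.Size([1, 80])
```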
The imaging equipment inputs the skin surface temperature map into the body temperature neural network model of the target living body, and the implementation mode of obtaining the body temperature of the target living body comprises the following steps (1) to (3):
(1) the camera device obtains first feature maps of a plurality of target areas in the first face area through the convolutional layer according to the skin surface temperature map, conducts nonlinear correction on the first feature maps through the activation layer to obtain second feature maps, and conducts down-sampling on the second feature maps through the pooling layer to obtain third feature maps.
In one possible implementation, the image capturing apparatus may extract the same number of feature points, or different numbers of feature points, from the plurality of target regions through the convolution layer according to the skin surface temperature map. In another possible implementation manner, the plurality of target regions together make up the first face region, and the image capturing apparatus may extract all pixel points of the plurality of target regions as feature points through the convolution layer according to the skin surface temperature map.
In the embodiment of the disclosure, all the pixel points are extracted from the plurality of target areas to serve as the feature points, and the temperature of each point in the face area is fully utilized by the body temperature of the target living body acquired based on the feature points, so that the measured body temperature is small in error and high in accuracy.
It should be noted that the number of times of executing the step (1) may be set according to needs, and the disclosure does not limit this.
(2) The image capturing apparatus compresses the third feature map into N one-dimensional feature vectors through the fully connected layer, where N is an integer greater than or equal to 1.
(3) The imaging apparatus maps the N one-dimensional feature vectors to a body temperature of the target living body.
In one possible implementation manner, the implementation manner of this step is: the imaging apparatus maps the N one-dimensional feature vectors to one of K individual temperature values, which is taken as the body temperature of the target living body.
Taking a human as the target living body as an example, the body temperature of a human lies in the range of 34.1 to 42.0 degrees Celsius; at intervals of 0.1 degrees, the value of K is 80. The image capturing apparatus maps the N one-dimensional feature vectors to one of the 80 body temperature values, and this body temperature value is taken as the body temperature of the human body.
In another possible implementation manner, the implementation manner of this step is as follows: the image capturing apparatus maps the N one-dimensional feature vectors to several of the K body temperature values, acquires the mapping probability of each of these body temperature values, and takes the body temperature value with the highest mapping probability as the body temperature of the target living body.
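As a small follow-up sketch (again assuming PyTorch), the K network outputs can be turned into mapping probabilities with a softmax, and the body temperature value with the highest probability taken as the result; the 34.1 degree starting value and 0.1 degree step follow the human-body example above.

```python
import torch

def outputs_to_body_temperature(logits, t_min=34.1, step=0.1):
    """Convert the K network outputs into mapping probabilities and return the
    body temperature value with the highest probability."""
    probs = torch.softmax(logits, dim=-1)
    idx = int(probs.argmax(dim=-1))
    return round(t_min + step * idx, 1)   # 80 values cover 34.1 to 42.0 degrees

dummy_logits = torch.randn(80)            # stand-in for the output of the body temperature network
print(outputs_to_body_temperature(dummy_logits))
```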
Referring to fig. 7, a temperature measurement flowchart is shown: after posture correction is performed on the face in the first face region, the skin surface temperature is calculated, a skin surface temperature map is generated, and the skin surface temperature map is input into the body temperature neural network model to obtain the body temperature of the target living body.
In the embodiment of the disclosure, in the thermal infrared image of the target living body, a first face area where the face of the target living body is located is determined; skin surface temperatures corresponding to a plurality of pixel points of the first face region are acquired; the skin surface temperatures of a plurality of target areas in the first face area are acquired according to the skin surface temperatures corresponding to the plurality of pixel points, and the body temperature of the target living body is acquired according to the skin surface temperatures of the plurality of target areas. By acquiring the skin surface temperatures of a plurality of target areas in the first face area and acquiring the body temperature of the target living body according to the skin surface temperatures of the plurality of target areas, the temperatures of all areas of the face are fully utilized, so that the acquired body temperature has a small error and high accuracy. In addition, unlike a forehead thermometer gun that must be held close to the forehead, this method avoids the discomfort the user would feel and the concern about adverse effects caused by contact with such a device.
Fig. 8 is a flowchart of a body temperature measurement method provided by an embodiment of the present disclosure. The embodiment mainly introduces a method of obtaining skin surface temperatures of a plurality of pixel points from a plurality of regions of a face and measuring a body temperature according to the skin surface temperatures of the plurality of pixel points. Referring to fig. 8, the embodiment includes:
step 801: the imaging apparatus determines a first face area in which a face of a living subject is located in a thermal infrared image of the living subject.
The implementation manner of this step is the same as that of step 301, and is not described herein again.
Step 802: the image pickup apparatus determines an occlusion region in a first face region, the occlusion region being a region in which a face of the living target is occluded by the object, the occlusion region being removed from the first face region.
The implementation of this step is the same as that of step 302, and is not described here again.
Step 803: the image capturing apparatus extracts M feature points from a plurality of target regions in the first face region, M being an integer greater than 1.
The implementation of this step is the same as that in step 304, and is not described here again.
Step 804: the image pickup equipment determines M skin surface temperatures corresponding to the M characteristic points, and the M skin surface temperatures corresponding to the M characteristic points are determined as the skin surface temperatures of the multiple pixel points.
In one possible implementation manner, the implementation manner in which the image capturing apparatus determines the M skin surface temperatures corresponding to the M feature points may be: the camera device directly obtains the skin surface temperatures corresponding to the M pixel points corresponding to the M characteristic points, and the skin surface temperatures corresponding to the M pixel points are used as the M skin surface temperatures corresponding to the M characteristic points. The method is simple and has high efficiency.
The implementation mode of the camera equipment for acquiring the skin surface temperature corresponding to the M pixel points corresponding to the M characteristic points is as follows: the camera device determines the thermal infrared gray values corresponding to the M pixel points in the first face region, and obtains the skin surface temperatures corresponding to the M pixel points from the corresponding relationship between the thermal infrared gray values and the skin surface temperatures according to the thermal infrared gray values corresponding to the M pixel points. The correspondence between the thermal infrared gray value and the skin surface temperature can be preset in the image pickup device.
In another possible implementation manner, the implementation manner of the image capturing apparatus determining the M skin surface temperatures corresponding to the M feature points may be: for each feature point in the M feature points, the image pickup equipment determines a feature region where the feature point is located according to the feature point, acquires the average temperature of the skin surface temperature of each pixel point in the feature region, and takes the average temperature as the skin surface temperature of the feature point.
The size of the feature region may be set as required, for example, the feature region may be a region corresponding to 3 × 3 neighboring pixels, which is not limited in this disclosure.
The realization mode that the camera equipment acquires the average temperature of the skin surface temperature of each pixel point in the characteristic region is as follows: the image pickup equipment determines a thermal infrared gray value corresponding to each pixel point in the characteristic region, obtains the skin surface temperature corresponding to each pixel point from the corresponding relation between the thermal infrared gray value and the skin surface temperature according to the thermal infrared gray value corresponding to each pixel point, and calculates the average temperature of the skin surface temperature corresponding to each pixel point. The correspondence between the thermal infrared gray value and the skin surface temperature can be preset in the image pickup device.
In the embodiment of the disclosure, the image pickup device obtains the average temperature of the skin surface temperature of each pixel point in the characteristic region by determining the characteristic region where the characteristic point is located, and uses the average temperature as the skin surface temperature of the characteristic point, so that the influence of noise fluctuation in the imaging process can be effectively eliminated, and the accuracy of the M skin surface temperatures corresponding to the M characteristic points is ensured.
Step 805: the image pickup equipment performs feature compression on the skin surface temperatures of the multiple pixel points to obtain P feature values corresponding to the P feature points, wherein P is a positive integer smaller than M.
The implementation manner in which the image capturing apparatus performs feature compression on the skin surface temperatures of the plurality of pixel points to obtain the P feature values corresponding to the P feature points is the same as that in which the M skin surface temperatures corresponding to the M feature points are feature-compressed to obtain the P feature values in step 305, and is not repeated here.
Step 806: the image pickup equipment determines the body temperature of the target living body through a target regression relation according to the P characteristic values, the independent variable of the target regression relation is the P characteristic values, and the dependent variable is the body temperature of the target living body.
The implementation of this step is the same as that of step 306, and is not described here again.
In the embodiment of the disclosure, in the thermal infrared image of the target living body, a first face area where the face of the target living body is located is determined; skin surface temperatures corresponding to a plurality of pixel points of the first face region are acquired; the skin surface temperatures of a plurality of target areas in the first face area are acquired according to the skin surface temperatures corresponding to the plurality of pixel points, and the body temperature of the target living body is acquired according to the skin surface temperatures of the plurality of target areas. By acquiring the skin surface temperatures of a plurality of target areas in the first face area and acquiring the body temperature of the target living body according to the skin surface temperatures of the plurality of target areas, the temperatures of all areas of the face are fully utilized, so that the acquired body temperature has a small error and high accuracy. In addition, unlike a forehead thermometer gun that must be held close to the forehead, this method avoids the discomfort the user would feel and the concern about adverse effects caused by contact with such a device.
Fig. 9 is a block diagram of a body temperature measurement device provided in an embodiment of the present disclosure. Referring to fig. 9, the embodiment includes:
a face region determining module 901 configured to determine a first face region in which a face of the living subject is located in the thermal infrared image of the living subject.
A skin surface temperature acquisition module 902 configured to acquire skin surface temperatures corresponding to a plurality of pixel points of the first face region.
A body temperature acquiring module 903 configured to acquire skin surface temperatures of a plurality of target areas in the first face area according to the skin surface temperatures corresponding to the plurality of pixel points, and acquire a body temperature of the target living body according to the skin surface temperatures of the plurality of target areas.
In a possible implementation manner, the body temperature obtaining module 903 is further configured to determine, according to skin surface temperatures corresponding to a plurality of pixel points, M skin surface temperatures corresponding to M feature points of a plurality of target regions of the first face region, where M is an integer greater than 1; performing feature compression on the M skin surface temperatures corresponding to the M feature points to obtain P feature values corresponding to the P feature points, wherein P is a positive integer smaller than M; determining the body temperature of the target living body through a target regression relationship according to the P characteristic values, wherein the independent variable of the target regression relationship is the P characteristic values, and the dependent variable is the body temperature of the target living body; wherein the target regression relationship is determined according to a face type of the target living body in the first face region, the face type being determined according to the M skin surface temperatures corresponding to the M feature points.
In another possible implementation manner, the body temperature obtaining module 903 is further configured to extract M feature points from a plurality of target regions of the first face region; selecting skin surface temperatures corresponding to M pixel points corresponding to M characteristic points from skin surface temperatures corresponding to the multiple pixel points, and taking the skin surface temperatures corresponding to the M pixel points as M skin surface temperatures corresponding to the M characteristic points; or, for each feature point in the M feature points, determining a feature region where the feature point is located among the plurality of pixel points according to the feature point, determining an average temperature of skin surface temperatures of each pixel point in the feature region, and taking the average temperature as the skin surface temperature of the feature point.
In another possible implementation manner, the body temperature obtaining module 903 is further configured to determine a face type of the target living body in the first face region according to M skin surface temperatures corresponding to the M feature points; and acquiring a target regression relation matched with the face type.
In another possible implementation, the skin surface temperature obtaining module 902 is further configured to extract M feature points from a plurality of target regions in the first face region, where M is an integer greater than 1; and determining M skin surface temperatures corresponding to the M characteristic points, and determining the M skin surface temperatures corresponding to the M characteristic points as the skin surface temperatures of the multiple pixel points.
In another possible implementation manner, the body temperature obtaining module 903 is further configured to generate a skin surface temperature map corresponding to the first face area according to the skin surface temperatures corresponding to the multiple pixel points; and inputting the skin surface temperature map into a body temperature neural network model of the target living body to obtain the body temperature of the target living body, wherein the body temperature neural network model is used for acquiring the skin surface temperatures of a plurality of target areas in the first face area according to the skin surface temperature map and acquiring the body temperature of the target living body according to the skin surface temperatures of the plurality of target areas.
In another possible implementation, the body temperature neural network model comprises a convolutional layer, an activation layer, a pooling layer and a full-link layer;
a body temperature acquisition module 903, further configured to acquire a first feature map of a plurality of target regions in the first face region through the convolution layer according to the skin surface temperature map; carrying out nonlinear correction on the first characteristic diagram through the activation layer to obtain a second characteristic diagram; performing down-sampling on the second characteristic diagram through the pooling layer to obtain a third characteristic diagram; compressing the third feature map into N one-dimensional feature vectors through a full connection layer, wherein N is an integer greater than or equal to 1; and mapping the N one-dimensional feature vectors to the body temperature of the target living body.
In another possible implementation manner, the face region determining module 901 is further configured to input the thermal infrared image of the living target into the face detection model of the living target, and obtain a first face region where the face of the living target is located; or acquiring a visible light image of the target living body, determining a second face area where the face of the target living body is located in the visible light image, and determining a first face area where the face of the target living body is located in the thermal infrared image according to the position information of the second face area in the visible light image, wherein the visible light image is shot by the camera when the thermal infrared image is shot.
In another possible implementation manner, the image capturing apparatus includes a camera, where the visible light image and the thermal infrared image are captured by the camera through a light splitting technique, and the face region determining module 901 is further configured to determine, according to the position information of the second face region in the visible light image, a first region corresponding to the position information in the thermal infrared image, and use the first region as the first face region where the face of the target living body is located; or,
the image pickup device includes a first camera and a second camera, the visible light image is captured by the first camera, the thermal infrared image is captured by the second camera, and the face region determining module 901 is further configured to determine a second region corresponding to the position information in the thermal infrared image according to the position information of the second face region in the visible light image, and perform coordinate registration on the second region according to registration information of the image pickup device to obtain the first face region.
In another possible implementation manner, the face region determining module 901 is further configured to determine an occlusion region in the first face region, where the occlusion region is a region where the face of the target living body is occluded by an object; the occlusion region is removed from the first face region.
In another possible implementation, the face region determining module 901 is further configured to acquire pose information of the facial pose of the target living body in the first face region, and, in response to determining according to the pose information that an inclination angle of the facial pose relative to the target pose is greater than a preset threshold, perform posture correction on the face in the first face region according to the pose information.
In another possible implementation, the skin surface temperature acquisition module 902 is further configured to determine thermal infrared gray values corresponding to a plurality of pixel points of the first face region; and acquiring the skin surface temperatures corresponding to the multiple pixel points from the corresponding relation between the thermal infrared gray values and the skin surface temperatures according to the thermal infrared gray values corresponding to the multiple pixel points.
In the embodiment of the disclosure, in the thermal infrared image of the target living body, a first face area where the face of the target living body is located is determined; skin surface temperatures corresponding to a plurality of pixel points of the first face region are acquired; the skin surface temperatures of a plurality of target areas in the first face area are acquired according to the skin surface temperatures corresponding to the plurality of pixel points, and the body temperature of the target living body is acquired according to the skin surface temperatures of the plurality of target areas. By acquiring the skin surface temperatures of a plurality of target areas in the first face area and acquiring the body temperature of the target living body according to the skin surface temperatures of the plurality of target areas, the temperatures of all areas of the face are fully utilized, so that the acquired body temperature has a small error and high accuracy. In addition, unlike a forehead thermometer gun that must be held close to the forehead, this method avoids the discomfort the user would feel and the concern about adverse effects caused by contact with such a device.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
It should be noted that: in the body temperature measurement device provided in the above embodiment, when the body temperature is measured, only the division of the above functional modules is taken as an example, and in practical application, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules to complete all or part of the above described functions. In addition, the body temperature measurement device provided by the above embodiment and the body temperature measurement method embodiment belong to the same concept, and the specific implementation process thereof is detailed in the method embodiment and will not be described herein again.
Fig. 10 shows a block diagram of an image capturing apparatus 1000 according to an exemplary embodiment of the present disclosure. The image pickup apparatus 1000 may be: a smartphone, a tablet, a laptop, or a desktop computer. The camera device 1000 may also be referred to by other names such as user device, portable camera device, laptop camera device, desktop camera device, etc.
In general, the image pickup apparatus 1000 includes: a processor 1001 and a memory 1002.
Processor 1001 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 1001 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1001 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1001 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 1001 may further include an AI (Artificial Intelligence) processor for processing a computing operation related to machine learning.
Memory 1002 may include one or more computer-readable storage media, which may be non-transitory. The memory 1002 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1002 is used to store at least one instruction for execution by the processor 1001 to implement the body temperature measurement methods provided by the method embodiments herein.
In some embodiments, the image capturing apparatus 1000 may further include: a peripheral interface 1003 and at least one peripheral. The processor 1001, memory 1002 and peripheral interface 1003 may be connected by a bus or signal line. Various peripheral devices may be connected to peripheral interface 1003 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1004, touch screen display 1005, camera assembly 1006, and power supply 1007.
The peripheral interface 1003 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 1001 and the memory 1002. In some embodiments, processor 1001, memory 1002, and peripheral interface 1003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1001, the memory 1002, and the peripheral interface 1003 may be implemented on separate chips or circuit boards, which are not limited by this embodiment.
The Radio Frequency circuit 1004 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1004 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1004 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1004 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1004 may communicate with other image capture devices via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1004 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1005 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1005 is a touch display screen, it also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 1001 as a control signal for processing. In that case, the display screen 1005 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1005, provided on the front panel of the image pickup apparatus 1000; in other embodiments, there may be at least two display screens 1005, respectively disposed on different surfaces of the image pickup apparatus 1000 or in a folding design; in still other embodiments, the display screen 1005 may be a flexible display screen disposed on a curved or folded surface of the image pickup apparatus 1000. The display screen 1005 may even be arranged in a non-rectangular irregular shape, that is, an irregularly shaped screen. The display screen 1005 may be an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode) display, or the like.
The camera assembly 1006 is used to capture images or video. The camera assembly 1006 may include a visible image camera and an infrared camera. Optionally, the camera assembly 1006 includes a front camera and a rear camera. In general, a front camera is provided on a front panel of an image pickup apparatus, and a rear camera is provided on a rear surface of the image pickup apparatus. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1006 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The power supply 1007 is used to supply power to each component in the image pickup apparatus 1000. The power source 1007 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power source 1007 includes a rechargeable battery, the rechargeable battery may support wired charging or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the camera device 1000 further comprises one or more sensors 1008. The one or more sensors 1008 include, but are not limited to: acceleration sensor 1009, gyro sensor 1010, pressure sensor 1011, fingerprint sensor 1012, optical sensor 1013, and proximity sensor 1014.
The acceleration sensor 1009 can detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the image pickup apparatus 1000. For example, the acceleration sensor 1009 may be configured to detect components of the gravitational acceleration in three coordinate axes. The processor 1001 may control the touch display screen 1005 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1009. The acceleration sensor 1009 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1010 may detect a body direction and a rotation angle of the image pickup apparatus 1000, and the gyro sensor 1010 may acquire a 3D motion of the user to the image pickup apparatus 1000 in cooperation with the acceleration sensor 1009. The processor 1001 may implement the following functions according to the data collected by the gyro sensor 1010: motion sensing (e.g., changing the UI according to a tilt operation by a user), image stabilization at the time of shooting.
The pressure sensor 1011 may be disposed on a side bezel of the image capture apparatus 1000 and/or on a lower layer of the touch display screen 1005. When the pressure sensor 1011 is disposed on the side frame of the image pickup apparatus 1000, a user's grip signal on the image pickup apparatus 1000 can be detected, and left-right hand recognition or shortcut operation is performed by the processor 1001 according to the grip signal acquired by the pressure sensor 1011. When the pressure sensor 1011 is disposed at the lower layer of the touch display screen 1005, the processor 1001 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 1005. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1012 is used to collect a fingerprint of a user, and the processor 1001 identifies the user according to the fingerprint collected by the fingerprint sensor 1012, or the fingerprint sensor 1012 identifies the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 1001 authorizes the user to perform relevant sensitive operations including unlocking a screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 1012 may be provided on the front, back, or side of the image pickup apparatus 1000. When a physical key or a vendor Logo is provided on the image pickup apparatus 1000, the fingerprint sensor 1012 may be integrated with the physical key or the vendor Logo.
The optical sensor 1013 is used to collect the ambient light intensity. In one embodiment, the processor 1001 may control the display brightness of the touch display screen 1005 according to the ambient light intensity collected by the optical sensor 1013. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1005 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1005 is turned down. In another embodiment, the processor 1001 may also dynamically adjust the shooting parameters of the camera assembly 1006 according to the ambient light intensity collected by the optical sensor 1013.
A proximity sensor 1014, also called a distance sensor, is generally provided on the front panel of the image pickup apparatus 1000. The proximity sensor 1014 is used to capture the distance between the user and the front of the image capture apparatus 1000.
Those skilled in the art will appreciate that the configuration shown in fig. 10 does not constitute a limitation of the image capturing apparatus 1000, and may include more or fewer components than those shown, or combine some of the components, or adopt a different arrangement of components.
In an exemplary embodiment, a computer-readable storage medium, such as a memory including instructions, is also provided; the instructions are executable by a processor in the image capturing apparatus to perform the body temperature measurement method of the above embodiments. For example, the computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is intended to be exemplary only and not to limit the present disclosure, and any modification, equivalent replacement, or improvement made without departing from the spirit and scope of the present disclosure is to be considered as the same as the present disclosure.