Face living body detection method based on temperature information


1. A face living body detection method based on temperature information is characterized by being realized by the following method: firstly, a color camera and a thermal infrared imager sensor are arranged in close proximity in the same plane to establish a camera model, and the coordinate relation between an RGB image collected by the color camera and an infrared image collected by the thermal infrared imager is solved through the established camera model; then, the key point coordinates of a face in the RGB image are detected through the MTCNN face detection algorithm, the forehead position area is calculated from the coordinates of the pupils of the two eyes, the coordinates of the forehead position area in the RGB image are substituted into the obtained coordinate relation between the RGB image and the infrared image, and the coordinates of the forehead position area in the infrared image are calculated; finally, the forehead temperature information is obtained from the infrared image, temperature compensation is performed by using a multiple regression model to obtain the forehead temperature, the forehead temperature is used to judge whether the detected target is a living body, and a plurality of living bodies can be detected simultaneously.

2. The method for detecting the living human face based on the temperature information as claimed in claim 1 is specifically realized by the following steps:

a) arranging a camera, arranging a color camera for collecting color images and a thermal infrared imager sensor for collecting temperature information in the same plane and in close proximity, and adjusting the visual fields of the color camera and the thermal infrared imager to be consistent;

b) acquiring a color image: acquiring the color image collected by the color camera and recording it as an RGB image, wherein the size of the image is L × H, namely the width is L pixels and the height is H pixels;

c) acquiring a thermal infrared imager image: the data output by the thermal infrared imager are temperature values, which are converted into pixel values; the collected temperature values are first normalized, the normalized temperature values being the pixel values of a gray image, and the gray image is then converted into an infrared image by the Jet color mapping algorithm; the size of the converted infrared image is set to l × h, namely the width of the converted infrared image is l pixels and its height is h pixels; if the infrared image and the RGB image are equal in size, no adjustment is needed and step d) is executed; if the infrared image is smaller than the RGB image, the infrared image is adjusted to the same size as the RGB image through an interpolation algorithm;

d) solving the corresponding coordinate relation: the coordinate relation between the RGB image and the infrared image is solved according to the imaging principle of the color camera, the imaging principle of the thermal infrared imager, and the positional relation of the color camera and the thermal infrared imager sensor arranged in the same plane;

e) determining the forehead area coordinates and temperature: firstly, face information including the left-eye pupil position, the right-eye pupil position and the face width is acquired from the RGB image collected in real time through the MTCNN face detection algorithm; then the coordinates E1(X1, Y1) of the upper-left point and E2(X2, Y2) of the lower-right point of the forehead area are determined according to the positions of the two pupils and the face width; the forehead position coordinates E1(X1, Y1) and E2(X2, Y2) in the RGB image are substituted into the coordinate relation between the RGB image and the infrared image obtained in step d), and the corresponding coordinates in the infrared image are calculated and recorded as E'1(X'1, Y'1) and E'2(X'2, Y'2); finally, the temperature values of the points in the coordinate area defined by E'1(X'1, Y'1) and E'2(X'2, Y'2) in the infrared image are obtained, temperature compensation is performed on them by using the multiple regression model to obtain the forehead temperature values, the temperature values of the forehead area are averaged to obtain T, and whether the detected target is a living body is judged according to the temperature average value T.

3. The living human face detection method based on temperature information as claimed in claim 2, wherein the temperature value of each point is converted into a pixel value in step c) by equation (1):

wherein temperature is the temperature value at a certain point, max is the maximum temperature value, min is the minimum temperature value, and C is the pixel value converted from the temperature value temperature.

4. The method for detecting the living human face based on the temperature information as claimed in claim 2, wherein the step d) of finding the corresponding coordinate relationship is realized by the following steps:

d-1) establishing an imaging model of the color camera: the object plane adopts the world geodetic coordinate system O-XYZ and the image plane of the color camera adopts the camera coordinate system o-xyz; a point cP(Xc, Yc, Zc) in the object plane is projected through the camera coordinate system onto the camera imaging plane at coordinates P(x, y), and the following relation is obtained according to the imaging principle of the color camera:

wherein f is the focal length of the camera;

converting the relation (2) into a matrix form as shown in the formula (3):

wherein (x, y) are the physical coordinates of P in the image coordinate system, in mm; (Xc, Yc, Zc) are the coordinates of cP in the camera coordinate system, in mm; and f is the focal length of the camera, in mm;

d-2) conversion of physical coordinates to pixel coordinates, wherein the pixel coordinates corresponding to the physical coordinates (x, y) in the image coordinate system are (u, v), and the following relation is satisfied:

wherein (u0, v0) are the pixel coordinates of the center point of the image plane, and dx and dy are respectively the lengths of each pixel in the x-axis and y-axis directions;

d-3) calculating a perspective transformation matrix: assuming that the coordinates of a point in the object plane are p(x, y, z) in the camera coordinate system, the point is imaged through the camera lens as a point p'(x', y') in the image plane; because the image formed in the image plane is inverted, a focal symmetry plane is introduced at the position symmetric to the image plane about the camera plane, and this focal symmetry plane is taken as the image plane; combining equations (3) and (4) gives the conversion relation from a point in the camera coordinate system to the pixel coordinate system, i.e. the perspective transformation matrix that maps a coordinate p(x, y, z) in the camera coordinate system to a coordinate p″(x″, y″) in the pixel coordinate system is deduced as follows:

wherein (u0, v0) are the pixel coordinates of the center of the image plane;

through the established perspective transformation matrix, a point in camera space can be converted to the imaging plane; similarly, the three-dimensional camera-space point corresponding to a point in the pixel coordinate plane can be solved, whereby the positioning model from the color camera image to the thermal infrared imager image is deduced;

d-4) solving the transformation relation: let a point p' on the object move in an object plane whose distance to the lenses is a constant Z; the point is imaged as a point p(x, y) in the image plane of the color camera and as a point p″(x″, y″) in the imaging plane of the thermal infrared imager, and f and f″ respectively denote the focal lengths of the color camera and the thermal imaging camera; L denotes the position of the color camera, and the L coordinate system of the color camera is established at the point L; R denotes the position of the thermal imaging camera, and the R coordinate system of the thermal infrared imager is established at the point R; the two cameras lie on the same horizontal line, i.e. they share the x-axis, and the distance between the two cameras is L;

when the face image is collected, the position of the face moves within a certain area relative to the cameras, so a point ᴸP on the outline of the human face is extracted; taking the coordinate system of the color camera as the world coordinate system and assuming that the point ᴸP moves in an object plane at a distance Z from the camera, the following can be obtained according to equation (5):

in the same way, the following can be obtained:

let ᴿP be the coordinates of the point ᴸP relative to the R coordinate system, obtained from its position in three-dimensional space through the transfer matrix:

in the formula, the transfer matrix of the coordinate system L relative to the coordinate system R is expressed as follows:

by combining the formulas (6) to (9), it is possible to obtain:

equation (10) is converted into matrix form, i.e. the transformation relation model for locating p″(x″, y″) from the point p(x, y) is derived as follows:

wherein (x0, y0) and (x0″, y0″) are respectively the pixel coordinates of the centers of the image planes of the color camera and the thermal infrared imager, and dx, dy and dx″, dy″ are respectively the lengths of each pixel in the horizontal and vertical directions in the image planes of the color camera and the thermal infrared imager;

wherein the corresponding matrices in equation (11) are the positioning transformation matrix and the correction matrix coefficients, respectively.

5. The method for detecting the living human face based on the temperature information as claimed in claim 2 or 3, wherein the method for determining the forehead coordinates in step e) is as follows: let the position coordinates of the left-eye pupil in the acquired RGB image be L(x1, y1), the position coordinates of the right-eye pupil be R(x2, y2), and the width of the detected face be L; firstly, the midpoint coordinate Z((x1+x2)/2, (y1+y2)/2) of the line connecting the two pupils in the RGB image is obtained from the position coordinates of the two pupils; then the coordinate at the position 1/5 × L above the midpoint coordinate Z((x1+x2)/2, (y1+y2)/2) is determined as the forehead coordinate in the RGB image.

6. The method for detecting the living human face based on the temperature information as claimed in claim 2 or 3, wherein the temperature compensation method of the multiple regression model in the step e) is as follows:

since the measured temperature is determined by the measurement error and the ambient temperature, y = β0 + β1x1 + β2x2 + ε is set as the term to be solved, where y is the dependent variable, namely the actual temperature, x1 is the measured temperature, x2 is the ambient temperature, and ε is the random error; firstly, a multiple regression model is established: if there are n groups of measured values in the actual measurement process, the multiple regression model of the term to be solved can be expressed as:

then, estimated values of the parameters are found for the system of multiple linear regression equations in equation (12): the parameters β0, β1 and β2 are estimated by the least squares method, and the corresponding estimates are chosen so that the sum of squared deviations between the model and the observed values is minimized; when this sum of squared deviations reaches its minimum, equation (13) holds:

the estimates obtained at this point are exactly the least squares estimates of the regression coefficients β0, β1 and β2; the estimates of the regression coefficients satisfy the following conditions:

finally, the random variable ε is determined; the random variable ε is uncorrelated with the independent variables x1 and x2 and satisfies Cov(Xji, εi) = 0 (j = 1, 2, …, k; i = 1, 2, …, n), from which the random variable ε can be solved;

thus, the compensation temperature resulting from multiple regression is:

7. The living human face detection method based on the temperature information as claimed in claim 2 or 3, wherein: in step e), after the forehead temperature value T of the target is obtained, whether the forehead temperature meets the normal temperature of a human face is judged; if the forehead temperature is within the normal temperature range, the target is a living body; otherwise the target is not a living body, and the method thereby realizes living body detection of the human face; specifically, if the forehead temperature value T is between 33 ℃ and 40 ℃, the target is considered to be a living body.

Background

Living body detection plays an important role in face recognition. For a face recognition system, the lack of living body detection makes it vulnerable to spoofing attacks; common spoofing behaviors, such as printing photos of legitimate users, making 3D face models and playing user videos, leave the face recognition system insufficiently secure and unable to meet the security requirements of face recognition access control. Therefore, combining temperature information with a face detection algorithm to realize living body detection is of great significance.

Disclosure of Invention

In order to overcome the above technical defects, the invention provides a method for detecting the living human face based on temperature information.

The invention discloses a human face living body detection method based on temperature information, which is characterized by comprising the following steps: firstly, a color camera and a thermal infrared imager sensor are arranged in close proximity in the same plane to establish a camera model, and the coordinate relation between an RGB image acquired by the color camera and an infrared image acquired by the thermal infrared imager is solved through the established camera model; then, the key point coordinates of a human face in the RGB image are detected through the MTCNN (multi-task cascaded convolutional networks) face detection algorithm, the forehead position area is calculated from the coordinates of the pupils of the two eyes, the coordinates of the forehead position area in the RGB image are substituted into the coordinate relation between the RGB image and the infrared image, and the coordinates of the forehead position area in the infrared image are calculated; finally, the forehead temperature information is obtained from the infrared image, temperature compensation is performed by using a multiple regression model to obtain the forehead temperature, the forehead temperature is used to judge whether the detected target is a living body, and a plurality of living bodies can be detected simultaneously.

The invention relates to a human face living body detection method based on temperature information, which is realized by the following steps:

a) arranging a camera, arranging a color camera for collecting color images and a thermal infrared imager sensor for collecting temperature information in the same plane and in close proximity, and adjusting the visual fields of the color camera and the thermal infrared imager to be consistent;

b) acquiring a color image: acquiring the color image collected by the color camera and recording it as an RGB image, wherein the size of the image is L × H, namely the width is L pixels and the height is H pixels;

c) acquiring a thermal infrared imager image: the data output by the thermal infrared imager are temperature values, which are converted into pixel values; the collected temperature values are first normalized, the normalized temperature values being the pixel values of a gray image, and the gray image is then converted into an infrared image by the Jet color mapping algorithm; the size of the converted infrared image is set to l × h, namely the width of the converted infrared image is l pixels and its height is h pixels; if the infrared image and the RGB image are equal in size, no adjustment is needed and step d) is executed; if the infrared image is smaller than the RGB image, the infrared image is adjusted to the same size as the RGB image through an interpolation algorithm;

d) solving the corresponding coordinate relation: the coordinate relation between the RGB image and the infrared image is solved according to the imaging principle of the color camera, the imaging principle of the thermal infrared imager, and the positional relation of the color camera and the thermal infrared imager sensor arranged in the same plane;

e) determining the forehead area coordinates and temperature: firstly, face information including the left-eye pupil position, the right-eye pupil position and the face width is acquired from the RGB image collected in real time through the MTCNN face detection algorithm; then the coordinates E1(X1, Y1) of the upper-left point and E2(X2, Y2) of the lower-right point of the forehead area are determined according to the positions of the two pupils and the face width; the forehead position coordinates E1(X1, Y1) and E2(X2, Y2) in the RGB image are substituted into the coordinate relation between the RGB image and the infrared image obtained in step d), and the corresponding coordinates in the infrared image are calculated and recorded as E'1(X'1, Y'1) and E'2(X'2, Y'2); finally, the temperature values of the points in the coordinate area defined by E'1(X'1, Y'1) and E'2(X'2, Y'2) in the infrared image are obtained, temperature compensation is performed on them by using the multiple regression model to obtain the forehead temperature values, the temperature values of the forehead area are averaged to obtain T, and whether the detected target is a living body is judged according to the temperature average value T.

The invention relates to a human face living body detection method based on temperature information, wherein in the step c), the temperature value of each point is converted into a pixel value by a formula (1):

wherein temperature is the temperature value at a certain point, max is the maximum temperature value, min is the minimum temperature value, and C is the pixel value converted from the temperature value temperature.

According to the face living body detection method based on the temperature information, the corresponding coordinate relation in the step d) is specifically solved through the following steps:

d-1) establishing an imaging model of the color camera: the object plane adopts the world geodetic coordinate system O-XYZ and the image plane of the color camera adopts the camera coordinate system o-xyz; a point cP(Xc, Yc, Zc) in the object plane is projected through the camera coordinate system onto the camera imaging plane at coordinates P(x, y), and the following relation is obtained according to the imaging principle of the color camera:

wherein f is the focal length of the camera;

converting the relation (2) into a matrix form as shown in the formula (3):

wherein (x, y) are the physical coordinates of P in the image coordinate system, in mm; (Xc, Yc, Zc) are the coordinates of cP in the camera coordinate system, in mm; and f is the focal length of the camera, in mm;

d-2) conversion of physical coordinates to pixel coordinates, wherein the pixel coordinates corresponding to the physical coordinates (x, y) in the image coordinate system are (u, v), and the following relation is satisfied:

wherein (u0, v0) are the pixel coordinates of the center point of the image plane, and dx and dy are respectively the lengths of each pixel in the x-axis and y-axis directions;

d-3) calculating a perspective transformation matrix: assuming that the coordinates of a point in the object plane are p(x, y, z) in the camera coordinate system, the point is imaged through the camera lens as a point p'(x', y') in the image plane; because the image formed in the image plane is inverted, a focal symmetry plane is introduced at the position symmetric to the image plane about the camera plane, and this focal symmetry plane is taken as the image plane; combining equations (3) and (4) gives the conversion relation from a point in the camera coordinate system to the pixel coordinate system, i.e. the perspective transformation matrix that maps a coordinate p(x, y, z) in the camera coordinate system to a coordinate p″(x″, y″) in the pixel coordinate system is deduced as follows:

wherein (u0, v0) are the pixel coordinates of the center of the image plane;

through the established perspective transformation matrix, a point in camera space can be converted to the imaging plane; similarly, the three-dimensional camera-space point corresponding to a point in the pixel coordinate plane can be solved, whereby the positioning model from the color camera image to the thermal infrared imager image is deduced;

d-4) solving the transformation relation: let a point p' on the object move in an object plane whose distance to the lenses is a constant Z; the point is imaged as a point p(x, y) in the image plane of the color camera and as a point p″(x″, y″) in the imaging plane of the thermal infrared imager, and f and f″ respectively denote the focal lengths of the color camera and the thermal imaging camera; L denotes the position of the color camera, and the L coordinate system of the color camera is established at the point L; R denotes the position of the thermal imaging camera, and the R coordinate system of the thermal infrared imager is established at the point R; the two cameras lie on the same horizontal line, i.e. they share the x-axis, and the distance between the two cameras is L;

when the face image is collected, the position of the face moves within a certain area relative to the cameras, so a point ᴸP on the outline of the human face is extracted; taking the coordinate system of the color camera as the world coordinate system and assuming that the point ᴸP moves in an object plane at a distance Z from the camera, the following can be obtained according to equation (5):

in the same way, the following can be obtained:

let ᴿP be the coordinates of the point ᴸP relative to the R coordinate system, obtained from its position in three-dimensional space through the transfer matrix:

in the formula, the transfer matrix of the coordinate system L relative to the coordinate system R is expressed as follows:

by combining the formulas (6) to (9), it is possible to obtain:

equation (10) is converted into matrix form, i.e. the transformation relation model for locating p″(x″, y″) from the point p(x, y) is derived as follows:

wherein (x0, y0) and (x0″, y0″) are respectively the pixel coordinates of the centers of the image planes of the color camera and the thermal infrared imager, and dx, dy and dx″, dy″ are respectively the lengths of each pixel in the horizontal and vertical directions in the image planes of the color camera and the thermal infrared imager;

wherein the corresponding matrices in equation (11) are the positioning transformation matrix and the correction matrix coefficients, respectively.

The invention relates to a face living body detection method based on temperature information, wherein the method for determining the forehead coordinates in step e) is as follows: let the position coordinates of the left-eye pupil in the acquired RGB image be L(x1, y1), the position coordinates of the right-eye pupil be R(x2, y2), and the width of the detected face be L; firstly, the midpoint coordinate Z((x1+x2)/2, (y1+y2)/2) of the line connecting the two pupils in the RGB image is obtained from the position coordinates of the two pupils; then the coordinate at the position 1/5 × L above the midpoint coordinate Z((x1+x2)/2, (y1+y2)/2) is determined as the forehead coordinate in the RGB image.
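By way of illustration only, the forehead coordinate rule described above can be sketched in Python as follows (the function name is illustrative, and the sketch assumes the usual image convention that the y-axis points downward, so that "above" means a smaller y value):

def forehead_point(left_pupil, right_pupil, face_width):
    # Midpoint Z of the line connecting the two pupils.
    mid_x = (left_pupil[0] + right_pupil[0]) / 2.0
    mid_y = (left_pupil[1] + right_pupil[1]) / 2.0
    # The forehead coordinate lies 1/5 of the face width above the midpoint;
    # with a downward-pointing y-axis, "above" subtracts from y.
    return (mid_x, mid_y - face_width / 5.0)

# Example: pupils at (300, 240) and (360, 240) with a face width of 150 px
# give a forehead point of (330.0, 210.0).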

The invention relates to a face living body detection method based on temperature information, wherein the multivariate regression model temperature compensation method in the step e) comprises the following steps:

since the measured temperature is determined by the measurement error and the ambient temperature, y = β0 + β1x1 + β2x2 + ε is set as the term to be solved, where y is the dependent variable, namely the actual temperature, x1 is the measured temperature, x2 is the ambient temperature, and ε is the random error; firstly, a multiple regression model is established: if there are n groups of measured values in the actual measurement process, the multiple regression model of the term to be solved can be expressed as:

then, estimated values of the parameters are found for the system of multiple linear regression equations in equation (12): the parameters β0, β1 and β2 are estimated by the least squares method, and the corresponding estimates are chosen so that the sum of squared deviations between the model and the observed values is minimized; when this sum of squared deviations reaches its minimum, equation (13) holds:

the estimates obtained at this point are exactly the least squares estimates of the regression coefficients β0, β1 and β2; the estimates of the regression coefficients satisfy the following conditions:

finally, the random variable ε is determined; the random variable ε is uncorrelated with the independent variables x1 and x2 and satisfies Cov(Xji, εi) = 0 (j = 1, 2, …, k; i = 1, 2, …, n), from which the random variable ε can be solved;

thus, the compensation temperature resulting from multiple regression is:

in the method for detecting the living human face based on the temperature information, in the step e), if the acquired forehead temperature value T is between 33 and 40 ℃, the forehead temperature value T is considered as a living human body.

The invention has the following beneficial effects. In the face living body detection method based on temperature information of the invention, a color camera and a thermal infrared imager sensor are first placed in close proximity in the same plane, a camera model is established, and the corresponding coordinate relation between the camera and the thermal infrared imager is calculated from the camera parameters and their relative position information; the obtained RGB image and infrared image are then adjusted to the same size. In the subsequent living body detection process, face information including the pupil positions of the two eyes and the face width is first obtained from the RGB image through a face recognition algorithm, the forehead coordinates in the RGB image are calculated, and the forehead coordinates in the RGB image are substituted into the linear proportional coordinate relation to obtain the forehead coordinates in the infrared image; the temperature corresponding to the forehead coordinates in the infrared image is the temperature of the detected target. Finally, whether the target is a living body and whether its temperature is normal can be judged according to the detected temperature, so that current spoofing attacks such as printing photos of legitimate users, making 3D face models and playing user videos can be effectively prevented.

Drawings

FIG. 1 is a schematic view of the imaging principle of the color camera according to the present invention;

FIG. 2 is a schematic view of an imaging principle of a thermal infrared imager according to the present invention;

FIG. 3 is a schematic diagram of a positioning model from a color camera image to a thermal infrared imager image according to the present invention;

FIG. 4 is a schematic diagram of forehead position coordinate determination in the present invention;

fig. 5 is a schematic diagram of an application of the method for detecting a living human face based on temperature information according to the present invention.

Detailed Description

The invention is further described with reference to the following figures and examples.

As shown in fig. 1, a schematic diagram of the imaging principle of the color camera according to the present invention is given.

As shown in fig. 2, a schematic diagram of the imaging principle of the thermal infrared imager according to the present invention is given. The upper part of the figure is an infrared image (converted here to a gray image because a color picture cannot be submitted). The data output by the thermal infrared imager sensor are the temperature values of each point, so if the infrared image is to be displayed, the temperature values need to be converted into pixel values.

As shown in fig. 3, a schematic view of the positioning model from the color camera image to the thermal infrared imager image in the invention is given. The thermal infrared imager sensor and the color camera are fixed in the same plane and arranged in close proximity, and their fields of view are adjusted to be consistent; if the positions of the thermal infrared imager sensor and the color camera are moved, the corresponding parameters need to be recalculated.

The invention relates to a human face living body detection method based on temperature information, which is realized by the following steps:

a) arranging a camera, arranging a color camera for collecting color images and a thermal infrared imager sensor for collecting temperature information in the same plane and in close proximity, adjusting the visual fields of the color camera and the thermal infrared imager to be consistent, and establishing a camera model;

b) acquiring a color image: acquiring the color image collected by the color camera and recording it as an RGB image, wherein the size of the image is L × H, namely the width is L pixels and the height is H pixels;

c) acquiring a thermal infrared imager image: the data output by the thermal infrared imager are temperature values, which are converted into pixel values; the collected temperature values are first normalized, the normalized temperature values being the pixel values of a gray image, and the gray image is then converted into an infrared image by the Jet color mapping algorithm; the size of the converted infrared image is set to l × h, namely the width of the converted infrared image is l pixels and its height is h pixels; if the infrared image and the RGB image are equal in size, no adjustment is needed and step d) is executed; if the infrared image is smaller than the RGB image, the infrared image is adjusted to the same size as the RGB image through an interpolation algorithm;

in this step, the temperature value of each point is converted into a pixel value by formula (1):

wherein temperature is the temperature value at a certain point, max is the maximum temperature value, min is the minimum temperature value, and C is the pixel value converted from the temperature value temperature.

Generally, the resolution of the RGB image collected by the color camera is higher than that of the infrared image collected by the thermal infrared imager; for example, with a 640 × 480 color camera, the collected RGB image is 640 pixels wide and 480 pixels high. If the resolution of the thermal infrared imager sensor is a 60 × 48 array, it outputs 60 temperature values in the horizontal direction and 48 temperature values in the vertical direction, and in order to make the converted infrared image the same size as the RGB image, the infrared image needs to be enlarged to 640 × 480 resolution by interpolation.
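A minimal Python sketch of steps b) and c), assuming the thermal sensor delivers its 60 × 48 readings as a 48 × 60 NumPy array of temperatures and using OpenCV's Jet colormap; the scaling of the normalized values to 8 bits and the choice of bilinear interpolation are illustrative assumptions, since the text above only specifies normalization, Jet color mapping and interpolation:

import cv2
import numpy as np

def thermal_to_infrared_image(temps, out_size=(640, 480)):
    # temps    : 2-D array of temperature values (48 rows x 60 columns).
    # out_size : (width, height) of the RGB image to match, here 640 x 480.
    t_min, t_max = float(temps.min()), float(temps.max())
    # Normalize each temperature value into [0, 1]; the normalized value is the
    # gray-image pixel value. Scale to 8 bits for the colormap step (assumption).
    gray = (temps - t_min) / max(t_max - t_min, 1e-6)
    gray8 = (gray * 255).astype(np.uint8)
    # Jet color mapping turns the gray image into a pseudo-color infrared image.
    ir = cv2.applyColorMap(gray8, cv2.COLORMAP_JET)
    # Enlarge the infrared image so that its size matches the RGB image.
    return cv2.resize(ir, out_size, interpolation=cv2.INTER_LINEAR)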

d) Solving the corresponding coordinate relation: the coordinate relation between the RGB image and the infrared image is solved according to the imaging principle of the color camera, the imaging principle of the thermal infrared imager, and the positional relation of the color camera and the thermal infrared imager sensor arranged in the same plane;

as shown in fig. 1, fig. 2 and fig. 3, this step is specifically realized by the following method:

d-1) establishing an imaging model of the color camera: the object plane adopts the world geodetic coordinate system O-XYZ and the image plane of the color camera adopts the camera coordinate system o-xyz; a point cP(Xc, Yc, Zc) in the object plane is projected through the camera coordinate system onto the camera imaging plane at coordinates P(x, y), and the following relation is obtained according to the imaging principle of the color camera:

wherein f is the focal length of the camera;

converting the relation (2) into a matrix form as shown in the formula (3):

wherein (x, y) are the physical coordinates of P in the image coordinate system, in mm; (Xc, Yc, Zc) are the coordinates of cP in the camera coordinate system, in mm; and f is the focal length of the camera, in mm;

d-2) conversion of physical coordinates to pixel coordinates, wherein the pixel coordinates corresponding to the physical coordinates (x, y) in the image coordinate system are (u, v), and the following relation is satisfied:

wherein (u0, v0) are the pixel coordinates of the center point of the image plane, and dx and dy are respectively the lengths of each pixel in the x-axis and y-axis directions;

d-3) calculating a perspective transformation matrix: assuming that the coordinates of a point in the object plane are p(x, y, z) in the camera coordinate system, the point is imaged through the camera lens as a point p'(x', y') in the image plane; because the image formed in the image plane is inverted, a focal symmetry plane is introduced at the position symmetric to the image plane about the camera plane, and this focal symmetry plane is taken as the image plane; combining equations (3) and (4) gives the conversion relation from a point in the camera coordinate system to the pixel coordinate system, i.e. the perspective transformation matrix that maps a coordinate p(x, y, z) in the camera coordinate system to a coordinate p″(x″, y″) in the pixel coordinate system is deduced as follows:

wherein (u0, v0) are the pixel coordinates of the center of the image plane;

through the established perspective transformation matrix, a point in camera space can be converted to the imaging plane; similarly, the three-dimensional camera-space point corresponding to a point in the pixel coordinate plane can be solved, whereby the positioning model from the color camera image to the thermal infrared imager image is deduced;
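The projection described in steps d-1) to d-3) corresponds to the standard pinhole model; since equations (2) to (5) are not reproduced in this text, the following Python sketch only illustrates the relations they describe, with parameter names following the symbols defined above:

def project_to_pixel(Xc, Yc, Zc, f, dx, dy, u0, v0):
    # Pinhole projection of a camera-frame point (Xc, Yc, Zc) onto the image
    # plane, giving physical coordinates (x, y) in mm.
    x = f * Xc / Zc
    y = f * Yc / Zc
    # Physical image coordinates to pixel coordinates (u, v), using the pixel
    # sizes dx, dy and the image-plane center (u0, v0).
    u = x / dx + u0
    v = y / dy + v0
    return u, v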

d-4) solving the transformation relation: let a point p' on the object move in an object plane whose distance to the lenses is a constant Z; the point is imaged as a point p(x, y) in the image plane of the color camera and as a point p″(x″, y″) in the imaging plane of the thermal infrared imager, and f and f″ respectively denote the focal lengths of the color camera and the thermal imaging camera; L denotes the position of the color camera, and the L coordinate system of the color camera is established at the point L; R denotes the position of the thermal imaging camera, and the R coordinate system of the thermal infrared imager is established at the point R; the two cameras lie on the same horizontal line, i.e. they share the x-axis, and the distance between the two cameras is L;

when the face image is collected, the position of the face moves within a certain area relative to the cameras, so a point ᴸP on the outline of the human face is extracted; taking the coordinate system of the color camera as the world coordinate system and assuming that the point ᴸP moves in an object plane at a distance Z from the camera, the following can be obtained according to equation (5):

in the same way, the following can be obtained:

let ᴿP be the coordinates of the point ᴸP relative to the R coordinate system, obtained from its position in three-dimensional space through the transfer matrix:

in the formula, the transfer matrix of the coordinate system L relative to the coordinate system R is expressed as follows:

by combining the formulas (6) to (9), it is possible to obtain:

equation (10) is converted into matrix form, i.e. the transformation relation model for locating p″(x″, y″) from the point p(x, y) is derived as follows:

wherein (x0, y0) and (x0″, y0″) are respectively the pixel coordinates of the centers of the image planes of the color camera and the thermal infrared imager, and dx, dy and dx″, dy″ are respectively the lengths of each pixel in the horizontal and vertical directions in the image planes of the color camera and the thermal infrared imager;

wherein the corresponding matrices in equation (11) are the positioning transformation matrix and the correction matrix coefficients, respectively.
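Because the concrete matrices of equations (5) to (11) are not reproduced in this text, the following Python sketch only illustrates the resulting point-to-point positioning under the stated assumptions: coplanar cameras sharing the x-axis, a baseline equal to the spacing between the two cameras, and a face plane at a constant distance Z; all names are illustrative:

def rgb_to_ir_pixel(u, v, cam, ir, baseline, Z):
    # cam and ir are dicts of intrinsics {'f', 'dx', 'dy', 'u0', 'v0'} for the
    # color camera and the thermal infrared imager respectively.
    # Back-project the color-image pixel to the color-camera frame at depth Z.
    Xc = (u - cam['u0']) * cam['dx'] * Z / cam['f']
    Yc = (v - cam['v0']) * cam['dy'] * Z / cam['f']
    # Transfer to the thermal-imager frame: a translation along the shared x-axis.
    Xr, Yr = Xc - baseline, Yc
    # Project into the thermal image with the imager's own intrinsics.
    u_ir = ir['f'] * Xr / (Z * ir['dx']) + ir['u0']
    v_ir = ir['f'] * Yr / (Z * ir['dy']) + ir['v0']
    return u_ir, v_ir

With Z fixed, this reduces to a scale-plus-offset relation between (u, v) and (u_ir, v_ir), which is consistent with the linear proportional relation mentioned in the disclosure.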

e) Determining the forehead area coordinates and temperature: firstly, face information including the left-eye pupil position, the right-eye pupil position and the face width is acquired from the RGB image collected in real time through the MTCNN face detection algorithm; then the coordinates E1(X1, Y1) of the upper-left point and E2(X2, Y2) of the lower-right point of the forehead area are determined according to the positions of the two pupils and the face width; the forehead position coordinates E1(X1, Y1) and E2(X2, Y2) in the RGB image are substituted into the coordinate relation between the RGB image and the infrared image obtained in step d), and the corresponding coordinates in the infrared image are calculated and recorded as E'1(X'1, Y'1) and E'2(X'2, Y'2); finally, the temperature values of the points in the coordinate area defined by E'1(X'1, Y'1) and E'2(X'2, Y'2) in the infrared image are obtained, temperature compensation is performed on them by using the multiple regression model to obtain the forehead temperature values, the temperature values of the forehead area are averaged to obtain T, and whether the detected target is a living body is judged according to the temperature average value T.
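A sketch of step e) using the mtcnn Python package, whose detect_faces call returns a bounding box and five facial keypoints per detected face; the use of the box width as the face width, the extent of the forehead box around the point obtained from the pupils, and the reuse of the positioning mapping sketched in step d-4) are illustrative assumptions:

from mtcnn import MTCNN
import numpy as np

detector = MTCNN()

def forehead_temperatures(rgb_image, temp_map, rgb_to_ir):
    # rgb_image : H x W x 3 color image from the color camera.
    # temp_map  : temperature array resized to the same H x W grid.
    # rgb_to_ir : callable (u, v) -> (u', v') with the intrinsics of the
    #             positioning model already bound (e.g. via functools.partial).
    temps = []
    for face in detector.detect_faces(rgb_image):
        x, y, w, h = face['box']
        left = face['keypoints']['left_eye']
        right = face['keypoints']['right_eye']
        # Forehead center: 1/5 of the face width above the pupil midpoint.
        cx = (left[0] + right[0]) / 2.0
        cy = (left[1] + right[1]) / 2.0 - w / 5.0
        E1 = (cx - w / 4.0, cy - h / 10.0)   # upper-left corner, illustrative size
        E2 = (cx + w / 4.0, cy + h / 10.0)   # lower-right corner
        # Map both corners into the infrared image and average the region.
        (u1, v1), (u2, v2) = rgb_to_ir(*E1), rgb_to_ir(*E2)
        x1, x2 = sorted((max(int(u1), 0), max(int(u2), 0)))
        y1, y2 = sorted((max(int(v1), 0), max(int(v2), 0)))
        temps.append(float(np.mean(temp_map[y1:y2 + 1, x1:x2 + 1])))
    return temps

Each returned value is the uncompensated mean forehead temperature of one face, so several faces in the same frame are handled naturally; the compensation and the living-body decision follow below.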

In this step, because the temperature measured by the thermal infrared imager has errors, the calculated temperature value needs to be compensated, and the compensation is solved by the least squares method. The adopted multiple regression model temperature compensation method is as follows:

since the measured temperature is determined by the measurement error and the ambient temperature, y = β0 + β1x1 + β2x2 + ε is set as the term to be solved, where y is the dependent variable, namely the actual temperature, x1 is the measured temperature, x2 is the ambient temperature, and ε is the random error; firstly, a multiple regression model is established: if there are n groups of measured values in the actual measurement process, the multiple regression model of the term to be solved can be expressed as:

then, estimated values of the parameters are found for the system of multiple linear regression equations in equation (12): the parameters β0, β1 and β2 are estimated by the least squares method, and the corresponding estimates are chosen so that the sum of squared deviations between the model and the observed values is minimized; when this sum of squared deviations reaches its minimum, equation (13) holds:

the estimates obtained at this point are exactly the least squares estimates of the regression coefficients β0, β1 and β2; the estimates of the regression coefficients satisfy the following conditions:

finally, the random variable ε is determined; the random variable ε is uncorrelated with the independent variables x1 and x2 and satisfies Cov(Xji, εi) = 0 (j = 1, 2, …, k; i = 1, 2, …, n), from which the random variable ε can be solved;

thus, the compensation temperature resulting from multiple regression is:

After the forehead temperature value T of the target is obtained, whether the forehead temperature meets the normal temperature of a human face is judged; if the forehead temperature is within the normal temperature range, the target is a living body; otherwise the target is not a living body, and the method thereby realizes living body detection of the human face; specifically, if the forehead temperature value T is between 33 ℃ and 40 ℃, the target is considered to be a living body.
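A minimal NumPy sketch of the least-squares compensation and of the final decision, assuming n calibration triples of measured temperature, ambient temperature and reference (actual) temperature are available; the function names are illustrative and the 33-40 ℃ range follows the rule above:

import numpy as np

def fit_compensation(measured, ambient, reference):
    # Least-squares estimates of (beta0, beta1, beta2) in the regression model
    # y = beta0 + beta1 * x1 + beta2 * x2 + eps described above.
    X = np.column_stack([np.ones_like(measured), measured, ambient])
    beta, *_ = np.linalg.lstsq(X, reference, rcond=None)
    return beta

def compensate(beta, measured, ambient):
    # Compensated forehead temperature from the raw measurement and the
    # ambient temperature.
    return beta[0] + beta[1] * measured + beta[2] * ambient

def is_live(forehead_temp, low=33.0, high=40.0):
    # The target is treated as a living body only when the compensated
    # forehead temperature lies in the normal facial range.
    return low <= forehead_temp <= high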

As shown in fig. 5, an application schematic diagram of the human face living body detection method based on temperature information is given, and it can be seen that the human face living body detection method of the present invention can accurately capture the forehead position of a human face when the human face moves to different positions.

In summary, after the RGB image is matched with the infrared image, the face living body detection method based on temperature information performs face recognition on the RGB image and preliminarily judges whether the image meets the characteristics of a human face; it then collects the temperature information of the forehead part of the face and further judges whether the target is a living body according to the forehead temperature, thereby eliminating current spoofing attacks such as printing photos of legitimate users, making 3D face models and playing user videos, so that the recognition method is safer and more effective; meanwhile, the face living body detection method based on temperature information can perform living body detection on multiple persons at the same time.
