Face detection method, device, robot and storage medium


1. A face detection method, applied to a service robot, the method comprising the following steps:

receiving an original face image of a target user acquired by a face acquisition module;

carrying out image enhancement processing on the original face image by using an image enhancement algorithm to obtain an enhanced face image;

correcting the proportion of three primary color components between the enhanced face image and the original face image to obtain a corrected face image;

and carrying out face detection on the corrected face image by using a pre-constructed face detection model to obtain a face detection result of the target user.

2. The method of claim 1, wherein the original face image is subjected to image enhancement processing by using a Retinex algorithm.

3. The method of claim 2, wherein correcting the proportion of three primary color components between the enhanced face image and the original face image comprises:

calculating the mean value and the mean square error of each component of three primary colors in the enhanced face image;

and correcting the minimum value and the maximum value of each component of the three primary colors in the enhanced face image by using the following correction models:

where μ_i denotes the mean value, v_i denotes the mean square error, α denotes a correction factor, Min denotes the corrected minimum value, Max denotes the corrected maximum value, and i denotes any one of the three primary color components;

and correcting the reflection component R_i(x, y) in the enhanced face image by using the following linear quantization model to obtain the corrected face image:

where R′_i(x, y) denotes the reflection component in the corrected face image.

4. The method of claim 1, wherein the face detection model is obtained by training a cascade classifier using an AdaBoost algorithm.

5. The method of claim 1, wherein a pre-constructed face detection model performs multi-scale face detection on a plurality of image region units of the corrected face image.

6. A face detection apparatus, applied to a service robot, the apparatus comprising:

the receiving module is used for receiving the original face image of the target user acquired by the face acquisition module;

the image enhancement module is used for carrying out image enhancement processing on the original face image by utilizing an image enhancement algorithm to obtain an enhanced face image;

the correction module is used for correcting the three-primary-color component proportion between the enhanced face image and the original face image to obtain a corrected face image;

and the detection module is used for carrying out face detection on the corrected face image by utilizing a pre-constructed face detection model to obtain a face detection result of the target user.

7. The apparatus of claim 6, wherein the image enhancement module is configured to perform image enhancement processing on the original face image by using a Retinex algorithm.

8. The apparatus of claim 7, wherein the correction module comprises:

a calculation unit, configured to calculate the mean value and the mean square error of each component of the three primary colors in the enhanced face image;

a first correction unit, configured to correct the minimum value and the maximum value of each component of the three primary colors in the enhanced face image by using the following correction models:

where μ_i denotes the mean value, v_i denotes the mean square error, α denotes a correction factor, Min denotes the corrected minimum value, Max denotes the corrected maximum value, and i denotes any one of the three primary color components;

a second correction unit, configured to correct the reflection component R_i(x, y) in the enhanced face image by using the following linear quantization model to obtain the corrected face image;

where R′_i(x, y) denotes the reflection component in the corrected face image.

9. A service robot, characterized in that the service robot comprises at least one processor and a memory for storing processor-executable instructions, wherein the instructions, when executed by the processor, implement the steps of the method of any one of claims 1-5.

10. A computer-readable storage medium having stored thereon computer instructions, wherein the instructions, when executed, implement the steps of the method of any one of claims 1-5.

Background

In face recognition, face detection is performed first, and its accuracy is critical to the recognition result. Face detection traverses an image with a search algorithm: if a face is found, a face is present in the image; otherwise, it is not. Implementation is difficult under interference such as complex lighting, because the intensity of illumination affects the brightness and contrast of the image and increases the difficulty of face detection. The working environment of a greeting robot is complex and changeable, workplaces differ in decoration style, and different lighting designs produce different illumination effects. In such a non-ideal environment, conventional image preprocessing methods, such as binarization, histogram equalization, or gray value normalization, cannot overcome the influence of abnormal illumination on face detection.

Disclosure of Invention

An object of the embodiments of the present specification is to provide a face detection method, an apparatus, a robot, and a storage medium, which can further improve accuracy of face detection.

The present specification provides a face detection method, a face detection device, a robot, and a storage medium, which are implemented in the following ways:

a face detection method is applied to a service robot, and comprises the following steps: receiving an original face image of a target user acquired by a face acquisition module; carrying out image enhancement processing on the original face image by using an image enhancement algorithm to obtain an enhanced face image; correcting the proportion of three primary color components between the enhanced face image and the original face image to obtain a corrected face image; and carrying out face detection on the corrected face image by using a pre-constructed face detection model to obtain a face detection result of the target user.

In other embodiments of the method provided in this specification, the original face image is subjected to image enhancement processing by using a Retinex algorithm.

In other embodiments of the method provided in this specification, correcting the proportion of the three primary color components between the enhanced face image and the original face image includes: calculating the mean value and the mean square error of each component of the three primary colors in the enhanced face image; and correcting the minimum value and the maximum value of each component of the three primary colors in the enhanced face image by using the following correction models:

where μ_i denotes the mean value, v_i denotes the mean square error, α denotes a correction factor, Min denotes the corrected minimum value, Max denotes the corrected maximum value, and i denotes any one of the three primary color components; and correcting the reflection component R_i(x, y) in the enhanced face image by using the following linear quantization model to obtain the corrected face image:

where R′_i(x, y) denotes the reflection component in the corrected face image.

In other embodiments of the method provided in this specification, the face detection model is obtained by training a cascade classifier using an AdaBoost algorithm.

In other embodiments of the method provided in this specification, a pre-constructed face detection model performs multi-scale face detection on a plurality of image region units of the corrected face image.

In another aspect, embodiments of this specification also provide a face detection apparatus, applied to a service robot, the apparatus comprising: a receiving module, configured to receive an original face image of a target user acquired by a face acquisition module; an image enhancement module, configured to perform image enhancement processing on the original face image by using an image enhancement algorithm to obtain an enhanced face image; a correction module, configured to correct the proportion of the three primary color components between the enhanced face image and the original face image to obtain a corrected face image; and a detection module, configured to perform face detection on the corrected face image by using a pre-constructed face detection model to obtain a face detection result of the target user.

In other embodiments of the apparatus provided in this specification, the image enhancement module is configured to perform image enhancement processing on an original face image by using a Retinex algorithm.

In other embodiments of the apparatus provided in this specification, the correction module includes: a calculation unit, configured to calculate the mean value and the mean square error of each component of the three primary colors in the enhanced face image; and a first correction unit, configured to correct the minimum value and the maximum value of each component of the three primary colors in the enhanced face image by using the following correction models:

where μ_i denotes the mean value, v_i denotes the mean square error, α denotes a correction factor, Min denotes the corrected minimum value, Max denotes the corrected maximum value, and i denotes any one of the three primary color components; and a second correction unit, configured to correct the reflection component R_i(x, y) in the enhanced face image by using the following linear quantization model to obtain the corrected face image;

where R′_i(x, y) denotes the reflection component in the corrected face image.

In another aspect, embodiments of the present specification further provide a service robot, where the service robot includes at least one processor and a memory for storing processor-executable instructions, where the instructions, when executed by the processor, implement the steps of the method according to any one or more of the above embodiments.

In another aspect, the present specification further provides a computer-readable storage medium, on which computer instructions are stored, and the instructions, when executed, implement the steps of the method according to any one or more of the above embodiments.

According to the face detection method, apparatus, robot, and storage medium provided by one or more embodiments of the present specification, the original face image is first enhanced, and the ratio of the three primary color components of the enhanced face image to those of the original image is then corrected. This reduces the impact on image fidelity of the excessive noise introduced by enhancement, so that the image finally fed into the face detection model is of higher quality, which in turn improves the quality of images used for face detection under complex illumination. Training the face detection model on such corrected face images can further improve training efficiency and accuracy, and thereby the accuracy of face detection.

Drawings

In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments described in the present specification, and those skilled in the art can obtain other drawings from them without creative effort. In the drawings:

Fig. 1 is a schematic view of light propagation during image acquisition provided by the present specification;

Fig. 2 is a schematic diagram of a face detection result without image correction provided in the present specification;

Fig. 3 is a schematic diagram of a face detection result obtained with the method provided in the present specification;

Fig. 4 is a schematic view of an implementation flow of a face detection method provided in this specification;

Fig. 5 is a schematic block structure diagram of a face detection apparatus provided in this specification.

Detailed Description

In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in one or more embodiments of the present specification will be clearly and completely described below with reference to the drawings in one or more embodiments of the present specification, and it is obvious that the described embodiments are only a part of the embodiments of the specification, and not all embodiments. All other embodiments obtained by a person skilled in the art based on one or more embodiments of the present specification without making any creative effort shall fall within the protection scope of the embodiments of the present specification.

In an application scenario example of the present specification, the face detection method may be applied to a service robot. The service robot at least comprises a face acquisition module, a processor, and a memory. The face acquisition module can be a camera; it can acquire a face image of a user and send it to the processor. For convenience of description, the face image acquired by the acquisition module is referred to as the original face image. Based on a preconfigured face detection algorithm, the processor can detect whether a face sample library contains a face image whose matching degree with the original face image exceeds a specified threshold. The face sample library may be stored in the memory or on a server. Correspondingly, a communication module can be configured in the service robot, through which the robot communicates with the server to access the face sample library stored there and complete face detection.

The service robot may be a greeting robot. The working environment of a greeting robot is complex and changeable, workplaces differ in decoration style, and different lighting designs produce different illumination effects. In such a non-ideal environment, conventional image preprocessing methods, such as binarization, histogram equalization, or gray value normalization, cannot overcome the influence of abnormal illumination on face detection. In this scenario example, the original face image may therefore be preprocessed to improve its image quality and, in turn, the accuracy of face detection.

The original face image can be subjected to image enhancement processing by using an image enhancement algorithm to obtain an enhanced face image. For example, the original face image can be subjected to image enhancement processing by using a Retinex algorithm. Of course, other image enhancement algorithms, such as histogram equalization, etc., may also be employed.

The Retinex algorithm is based on the observation that the color of an object is determined by its reflectance of the three primary colors and is therefore, in principle, unaffected by illumination; this color constancy is the theoretical basis of Retinex. As shown in Fig. 1, incident light L strikes the reflecting object R, which reflects the light into the eye of the observer I. The reflectance of R is constant and independent of the incident light L, so the acquired image S can be regarded as the product of the reflectance component R and the illumination component L. The Retinex algorithm estimates the influence of the incident light L from the originally acquired face image S. If R can be recovered, the influence of the incident light can be removed much as the human visual system does, the image can be enhanced more effectively, and the influence of complex external illumination on the face image is reduced. After such enhancement, however, the image may suffer from distortion and similar problems because noise is introduced; other enhancement algorithms generally face the same, largely unavoidable, distortion problem.
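
As a concrete illustration of this idea only, the sketch below shows a single-scale Retinex enhancement in Python with OpenCV and NumPy, where the illumination L is approximated by a Gaussian blur of the image; the function name and the sigma value are illustrative assumptions, not part of this specification.

```python
import cv2
import numpy as np

def single_scale_retinex(image_bgr, sigma=80):
    """Minimal single-scale Retinex sketch: log(S) - log(blurred S) per channel.

    `sigma` is an assumed illumination-scale parameter, not a value taken from
    the specification.
    """
    img = image_bgr.astype(np.float64) + 1.0              # avoid log(0)
    illumination = cv2.GaussianBlur(img, (0, 0), sigma)   # rough estimate of incident light L
    reflectance = np.log(img) - np.log(illumination)      # log-domain estimate of reflectance R
    return reflectance                                    # one reflection component R_i(x, y) per channel

# usage sketch
# original = cv2.imread("face.jpg")
# enhanced = single_scale_retinex(original)
```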

In this scenario example, the proportion of the three primary color components between the enhanced face image and the original face image may therefore be further corrected to obtain a corrected face image. Correcting the three primary color components of the enhanced image relative to the original image effectively reduces the excessive noise introduced by enhancement, keeps the effective information of the image closer to the original face image, and reduces image distortion.

For a face image enhanced with the Retinex algorithm, the correction can be performed as follows:

calculating the mean value and the mean square error of each component of three primary colors in the enhanced face image; and correcting the minimum value and the maximum value of each component of the three primary colors in the enhanced face image by using the following correction models:

where μ_i denotes the mean value, v_i denotes the mean square error, α denotes a correction factor, Min denotes the corrected minimum value, Max denotes the corrected maximum value, and i denotes any one of the three primary color components. The magnitude of the correction factor α can be determined through tests on a number of samples. The reflection component R_i(x, y) in the enhanced face image can then be corrected with the following linear quantization model to obtain the corrected face image:

where R′_i(x, y) denotes the reflection component in the corrected face image.
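
The formula images of the original filing are not reproduced in this text. A gain/offset form that is consistent with the variable definitions above, and is commonly used to requantize Retinex outputs, would be the following; this is an assumption about the missing expressions, not a verbatim reproduction of the patent's formulas:

```latex
% Assumed form of the correction model (the original formula image is missing):
\[
\mathrm{Min} = \mu_i - \alpha\, v_i, \qquad \mathrm{Max} = \mu_i + \alpha\, v_i
\]
% Assumed form of the linear quantization model mapping the reflection
% component back into the displayable range [0, 255]:
\[
R'_i(x, y) = 255 \cdot \frac{R_i(x, y) - \mathrm{Min}}{\mathrm{Max} - \mathrm{Min}}
\]
```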

Performing the image correction in this quantitative manner reduces, more accurately, the noise introduced when the original face image is enhanced by the Retinex algorithm, so that the enhanced image is more natural and of better quality.
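
A minimal NumPy sketch of this quantization step, assuming the gain/offset form given above (a clipping range of μ_i ± α·v_i) and interpreting the mean square error as the per-channel standard deviation; the default α is illustrative, not a value from the specification:

```python
import numpy as np

def quantize_reflectance(reflectance, alpha=2.0):
    """Map each Retinex reflection component R_i(x, y) into [0, 255].

    Assumes the corrected range of channel i is [mu_i - alpha*v_i, mu_i + alpha*v_i];
    in practice alpha would be tuned on sample images, as the text describes.
    """
    corrected = np.empty(reflectance.shape, dtype=np.uint8)
    for i in range(reflectance.shape[2]):               # one pass per primary color component
        channel = reflectance[:, :, i]
        mu = channel.mean()                             # mean of the component
        v = channel.std()                               # "mean square error", read here as std
        lo, hi = mu - alpha * v, mu + alpha * v         # corrected Min / Max
        scaled = (channel - lo) / (hi - lo) * 255.0     # linear quantization model
        corrected[:, :, i] = np.clip(scaled, 0, 255).astype(np.uint8)
    return corrected

# usage sketch, building on the Retinex sketch above
# corrected = quantize_reflectance(single_scale_retinex(original))
```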

The corrected face image can then be passed to a pre-constructed face detection model for face detection processing. Preferably, the face detection model is obtained by training a cascade classifier with the AdaBoost algorithm; training a cascade of classifiers can further improve the accuracy of face detection. Of course, other algorithms, such as neural network algorithms, may also be used to construct the face detection model.

After the corrected face image is passed to the face detection model, the model performs multi-scale face detection over a plurality of image region units. The model can first segment the corrected face image into several image region units and detect each unit separately, which reduces the amount of image processing and improves efficiency. However, because each image region unit is usually small, the geometric detail in the extracted features may vanish as the model deepens, making face information hard to detect. Accordingly, in this scenario example, the image region units are further processed with detection at multiple scales, which helps ensure that the effective information in the image is extracted. Multi-scale detection may combine features from several layers before prediction, or predict at different layers separately. During detection, the size of the search window can also be re-initialized continuously, and the continuous window search improves detection efficiency.
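
For the detection stage, the sketch below uses OpenCV's pretrained frontal-face Haar cascade as a stand-in for the AdaBoost-trained cascade classifier described here (it is not the model trained in this specification); the scale factor, neighbor count, and minimum window size are illustrative parameter choices:

```python
import cv2

# OpenCV ships Haar cascades trained with AdaBoost; used here only as a stand-in
# for the specification's own cascade classifier.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

def detect_faces(corrected_bgr):
    """Multi-scale sliding-window face detection on the corrected face image."""
    gray = cv2.cvtColor(corrected_bgr, cv2.COLOR_BGR2GRAY)
    # detectMultiScale repeatedly rescales the search window, which corresponds to
    # the multi-scale, multi-region search described above; values are assumptions.
    faces = face_cascade.detectMultiScale(
        gray,
        scaleFactor=1.1,    # search window grows 10% per scale step
        minNeighbors=5,     # overlapping hits required to keep a detection
        minSize=(30, 30),   # smallest search window
    )
    return faces            # array of (x, y, w, h) rectangles

# usage sketch
# for (x, y, w, h) in detect_faces(corrected):
#     cv2.rectangle(corrected, (x, y), (x + w, y + h), (0, 255, 0), 2)
```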

In this scenario example, the original face image is first enhanced, and the three primary color proportion of the enhanced face image relative to the original image is then corrected. This reduces the impact on image fidelity of the excessive noise introduced by enhancement, so that the image finally fed into the face detection model is of higher quality, which improves the quality of images used for face detection under complex illumination. Training the face detection model on such corrected face images can further improve training efficiency and accuracy, and thereby the accuracy of face detection.

The face detection method was verified on the Extended Yale B face image data set, using the detection rate and the number of false detections as verification metrics. The detection rate, also called the face recall rate, is the ratio of the number of correctly detected faces to the total number of faces; the false detection count is the number of detections that are not actually faces. The Yale B data set is the face data set most commonly used in studies of illumination preprocessing for face recognition. Images taken with 5 different shooting parameters were selected for each person in the data set to form the sample set. Table 1 compares the face detection results on the Yale B face data set.

TABLE 1

                          Number of samples   Total faces   Total detections   Detection rate   False detections
Before image correction          200              200             168               80%                8
After image correction           200              200             184               92%                0

As can be seen from Table 1, the detection method provided in this scenario example mitigates the influence of complex illumination and further improves the accuracy of face detection (before correction, the 168 detections include 8 false detections, so 160 of 200 faces are found, an 80% detection rate; after correction, 184 of 200 faces are found with no false detections, a 92% detection rate). Since face detection is the prerequisite of face recognition, improving its accuracy can improve face recognition to some extent even without optimizing the recognition algorithm itself.

Fig. 2 shows the face detection result without image correction: although most faces are detected, some faces under weak lighting are missed. Fig. 3 shows the face detection result obtained with the method provided in this scenario example: faces previously missed because of weak lighting are now also detected, which improves the accuracy of face detection.

Based on the above scenario example, the present specification further provides a face detection method. Fig. 4 is a schematic flow chart of an embodiment of the face detection method provided in this specification. As shown in Fig. 4, in an embodiment of the face detection method provided in this specification, the method may be applied to a service robot. The method may comprise the following steps:

s40: and receiving an original face image of the target user acquired by the face acquisition module.

S42: and carrying out image enhancement processing on the original face image by using an image enhancement algorithm to obtain an enhanced face image.

S44: and correcting the proportion of the three primary color components between the enhanced face image and the original face image to obtain a corrected face image.

S46: and carrying out face detection on the corrected face image by using a pre-constructed face detection model to obtain a face detection result of the target user.

In other embodiments, the original face image may be subjected to image enhancement processing by using a Retinex algorithm. Correspondingly, correcting the proportion of the three primary color components between the enhanced face image and the original face image may include: calculating the mean value and the mean square error of each component of the three primary colors in the enhanced face image; and correcting the minimum value and the maximum value of each component of the three primary colors in the enhanced face image by using the following correction models:

where μ_i denotes the mean value, v_i denotes the mean square error, α denotes a correction factor, Min denotes the corrected minimum value, Max denotes the corrected maximum value, and i denotes any one of the three primary color components; and correcting the reflection component R_i(x, y) in the enhanced face image by using the following linear quantization model to obtain the corrected face image:

where R′_i(x, y) denotes the reflection component in the corrected face image.

In other embodiments, the face detection model may be obtained by training a cascade classifier using the AdaBoost algorithm. Correspondingly, the pre-constructed face detection model can be used to perform multi-scale face detection on a plurality of image region units of the corrected face image.

The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. For details, reference may be made to the description of the related embodiments of the related processing, and details are not repeated herein.

The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.

Based on the above-mentioned face detection method, one or more embodiments of the present specification further provide a face detection apparatus. Specifically, Fig. 5 is a schematic diagram of the module structure of an embodiment of the face detection apparatus provided in the specification. As shown in Fig. 5, the apparatus is applied to a service robot and includes: a receiving module 50, configured to receive an original face image of a target user acquired by the face acquisition module; an image enhancement module 52, configured to perform image enhancement processing on the original face image by using an image enhancement algorithm to obtain an enhanced face image; a correction module 54, configured to correct the proportion of the three primary color components between the enhanced face image and the original face image to obtain a corrected face image; and a detection module 56, configured to perform face detection on the corrected face image by using a pre-constructed face detection model to obtain a face detection result of the target user.

In other embodiments, the image enhancement module 52 may be configured to perform image enhancement processing on the original face image by using a Retinex algorithm. Accordingly, the correction module 54 may include: a calculation unit, configured to calculate the mean value and the mean square error of each component of the three primary colors in the enhanced face image; and a first correction unit, configured to correct the minimum value and the maximum value of each component of the three primary colors in the enhanced face image by using the following correction models:

where μ_i denotes the mean value, v_i denotes the mean square error, α denotes a correction factor, Min denotes the corrected minimum value, Max denotes the corrected maximum value, and i denotes any one of the three primary color components; and a second correction unit, configured to correct the reflection component R_i(x, y) in the enhanced face image by using the following linear quantization model to obtain the corrected face image;

where R′_i(x, y) denotes the reflection component in the corrected face image.

It should be noted that the above-described apparatus may also include other embodiments according to the description of the method embodiment. The specific implementation manner may refer to the description of the related method embodiment, and is not described in detail herein.

The present specification also provides a service robot that can be applied in a variety of computer data processing systems. The service robot may include at least one processor and a memory for storing processor-executable instructions that, when executed by the processor, implement the steps of the method of any one or more of the embodiments described above. The memory may include physical means for storing information, typically by digitizing the information for storage on a medium using electrical, magnetic or optical means. The storage medium may include: devices that store information using electrical energy, such as various types of memory, e.g., RAM, ROM, etc.; devices that store information using magnetic energy, such as hard disks, floppy disks, tapes, core memories, bubble memories, and usb disks; devices that store information optically, such as CDs or DVDs. Of course, there are other ways of storing media that can be read, such as quantum memory, graphene memory, and so forth. Accordingly, the embodiments of the present specification also provide a computer readable storage medium, on which computer instructions are stored, which can be executed to implement the steps of the method according to any one or more of the above embodiments.

The embodiments of the present specification are not limited to implementations that comply with a standard data model or template or that are identical to those described herein. Implementations based on certain industry standards, or modified slightly from the described embodiments using custom modes or examples, can also achieve the same, equivalent, similar, or otherwise foreseeable effects. Embodiments that apply such modified or transformed ways of acquiring, storing, judging, or processing data still fall within the scope of the alternative embodiments of this specification.

The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment. In the description of the specification, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the specification. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.

The above description is only an example of the present specification, and is not intended to limit the present specification. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present specification should be included in the scope of the claims of the present specification.
