Face image processing method, device, medium and equipment

Document No.: 9222 | Published: 2021-09-17

1. A face image processing method is characterized by comprising the following steps:

determining a target face image;

calculating a reference mean value and a reference standard deviation of the face region pixels of the target face image;

calculating an initial mean value and an initial standard deviation of face region pixels of a face image to be processed;

and normalizing the color data of the face image to be processed according to the reference mean value and the reference standard deviation of the face region pixels of the target face image and the initial mean value and the initial standard deviation of the face region pixels of the face image to be processed to obtain the processed face image.

2. The method of claim 1, wherein the calculating of the reference mean and the reference standard deviation of the face region pixels of the target face image comprises:

converting the values of the three color channels of the target face image from an RGB color space to an LAB color space; calculating a reference mean value and a reference standard deviation of the face region pixels of the target face image under the three color channels respectively;

calculating an initial mean value and an initial standard deviation of face region pixels of a face image to be processed, wherein the calculation comprises the following steps:

converting the values of the three color channels of the face image to be processed from an RGB color space to an LAB color space; and respectively calculating the initial mean value and the initial standard deviation of the face region pixels of the face image to be processed under the three color channels.

3. The method according to claim 2, wherein normalizing the color data of the face image to be processed according to the reference mean value and the reference standard deviation of the face region pixels of the target face image and the initial mean value and the initial standard deviation of the face region pixels of the face image to be processed to obtain the processed face image comprises:

calculating a standard color channel value of each pixel of each color channel of the face image to be processed according to the initial mean value and the initial standard deviation of the face region pixel of the face image to be processed and the initial value of each pixel of each color channel of the face image to be processed;

obtaining a normalized color channel value of each pixel of each color channel of the face image to be processed according to the standard color channel value of each pixel of each color channel of the face image to be processed and the reference mean value and the reference standard deviation of the face region pixel of the target face image;

and converting the normalized color channel value of each pixel of each color channel of the face image to be processed into an RGB color space from an LAB color space to obtain the processed face image.

4. The method of claim 3,

the standard color channel value satisfies the following formula one, and the normalized color channel value satisfies the following formula two:

x′ = (x − μ′) / σ′ (formula one)

x″ = σ · x′ + μ (formula two)

wherein μ′ represents the initial mean of the face region pixels of the face image to be processed, σ′ represents the initial standard deviation of the face region pixels of the face image to be processed, μ represents the reference mean of the face region pixels of the target face image, σ represents the reference standard deviation of the face region pixels of the target face image, x represents the initial value of a pixel of the face image to be processed, x′ represents the standard color channel value of the face image to be processed, and x″ represents the normalized color channel value of the face image to be processed.

5. The method of any of claims 1 to 4, wherein the image to be processed comprises training image data and test image data; the processed face image comprises processed training image data and processed test image data;

the method further comprises the following steps:

and training the face algorithm reference model according to the processed face image comprising the processed training image data and the processed test image data to obtain the trained face algorithm model.

6. A face image processing apparatus, comprising:

a determination unit for determining a target face image;

the calculating unit is used for calculating a reference mean value and a reference standard deviation of the face area pixels of the target face image; calculating an initial mean value and an initial standard deviation of the face region pixels of the face image to be processed;

and the processing unit is used for normalizing the color data of the face image to be processed according to the reference mean value and the reference standard deviation of the face area pixels of the target face image and the initial mean value and the initial standard deviation of the face area pixels of the face image to be processed to obtain the processed face image.

7. The apparatus according to claim 6, wherein the computing unit is specifically configured to:

converting the values of the three color channels of the target face image from an RGB color space to an LAB color space; calculating a reference mean value and a reference standard deviation of the face region pixels of the target face image under the three color channels respectively;

converting the values of the three color channels of the face image to be processed from an RGB color space to an LAB color space; and respectively calculating the initial mean value and the initial standard deviation of the face region pixels of the face image to be processed under the three color channels.

8. The apparatus according to claim 7, wherein the processing unit is specifically configured to:

calculating a standard color channel value of each pixel of each color channel of the face image to be processed according to the initial mean value and the initial standard deviation of the face region pixel of the face image to be processed and the initial value of each pixel of each color channel of the face image to be processed;

obtaining a normalized color channel value of each pixel of each color channel of the face image to be processed according to the standard color channel value of each pixel of each color channel of the face image to be processed and the reference mean value and the reference standard deviation of the face region pixel of the target face image;

and converting the normalized color channel value of each pixel of each color channel of the face image to be processed into an RGB color space from an LAB color space to obtain the processed face image.

9. The apparatus of claim 8,

the standard color channel value satisfies the following formula one, and the normalized color channel value satisfies the following formula two:

x′ = (x − μ′) / σ′ (formula one)

x″ = σ · x′ + μ (formula two)

wherein μ′ represents the initial mean of the face region pixels of the face image to be processed, σ′ represents the initial standard deviation of the face region pixels of the face image to be processed, μ represents the reference mean of the face region pixels of the target face image, σ represents the reference standard deviation of the face region pixels of the target face image, x represents the initial value of a pixel of the face image to be processed, x′ represents the standard color channel value of the face image to be processed, and x″ represents the normalized color channel value of the face image to be processed.

10. The apparatus of any of claims 6 to 9, wherein the image to be processed comprises training image data and test image data; the processed face image comprises processed training image data and processed test image data;

the apparatus further comprises a training unit:

and the training unit is used for training the face algorithm reference model according to the processed face image including the processed training image data and the processed test image data to obtain the trained face algorithm model.

11. A terminal device, comprising: a processor and a memory for storing a computer program; the processor is configured to execute the computer program stored by the memory to cause the terminal to perform the method of any of claims 1 to 5.

12. A computer-readable storage medium, having stored thereon a computer program, characterized in that the computer program, when being executed by a processor, carries out the method of any one of claims 1 to 5.

Background

With the wide application of face recognition and face unlocking technologies in fields such as finance, attendance machines, access control systems, and mobile devices, face-related algorithm technologies have attracted increasing attention in recent years. In practical applications, generalization performance is a significant challenge for face-related algorithms. Variations in image color caused by, for example, different imaging devices, different colored-light environments, and different skin tones may interfere with a face-related algorithm. During algorithm development, in order to ensure that a face-related algorithm generalizes well across different environments and conditions, the traditional solution is mainly to collect massive amounts of training data covering the major mobile phone and camera models on the market, different display device models, different colored-light environments, face data of different skin tones, and so on. It is usually difficult for developers to acquire a complete data set covering all of these conditions, so image color still interferes with current face-related algorithms and affects the accuracy of face recognition results.

Disclosure of Invention

The invention aims to provide a method, a device, a medium and equipment for processing a face image, which are used for realizing the normalization processing of the color of the face image.

The invention provides a face image processing method, which comprises the following steps: determining a target face image; calculating a reference mean value and a reference standard deviation of the face region pixels of the target face image; calculating an initial mean value and an initial standard deviation of face region pixels of a face image to be processed; and normalizing the color data of the face image to be processed according to the reference mean value and the reference standard deviation of the face region pixels of the target face image and the initial mean value and the initial standard deviation of the face region pixels of the face image to be processed to obtain the processed face image.

The training image data and the test image data can be preprocessed through the method, so that the training image data and the test image data after preprocessing are utilized to train the face algorithm reference model to obtain the trained face algorithm model.

In one possible implementation, calculating a reference mean and a reference standard deviation of face region pixels of a target face image includes: converting the values of the three color channels of the target face image from an RGB color space to an LAB color space; calculating a reference mean value and a reference standard deviation of the face region pixels of the target face image under the three color channels respectively;

calculating an initial mean value and an initial standard deviation of face region pixels of a face image to be processed, wherein the calculation comprises the following steps:

converting the values of three color channels of the face image to be processed from an RGB color space to an LAB color space; and respectively calculating the initial mean value and the initial standard deviation of the face region pixels of the face image to be processed under the three color channels. In this method, converting from the RGB color space to the LAB color space before calculation avoids color distortion of the image and improves the accuracy of the final calculation result.

In one possible implementation, the method further comprises: calculating a standard color channel value of each pixel of each color channel of the face image to be processed according to the initial mean value and the initial standard deviation of the face region pixel of the face image to be processed and the initial value of each pixel of each color channel of the face image to be processed;

obtaining a normalized color channel value of each pixel of each color channel of the face image to be processed according to the standard color channel value of each pixel of each color channel of the face image to be processed and the reference mean value and the reference standard deviation of the face region pixels of the target face image; and converting the normalized color channel value of each pixel of each color channel of the face image to be processed from the LAB color space to the RGB color space to obtain the processed face image. In this way, the image color of each image to be processed is unified to that of the target face image, so that when the face algorithm reference model is trained on training image data and test image data processed in this manner, image color causes little interference to the training parameters of the model algorithm.

In one possible implementation, the standard color channel values satisfy the following equation one, and the normalized color channel values satisfy the following equation two:

x′ = (x − μ′) / σ′ (equation one)

x″ = σ · x′ + μ (equation two)

wherein μ′ represents the initial mean of the face region pixels of the face image to be processed, σ′ represents the initial standard deviation of the face region pixels of the face image to be processed, μ represents the reference mean of the face region pixels of the target face image, σ represents the reference standard deviation of the face region pixels of the target face image, x represents the initial value of a pixel of the face image to be processed, x′ represents the standard color channel value of the face image to be processed, and x″ represents the normalized color channel value of the face image to be processed.
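As an illustrative example (with hypothetical values), suppose a pixel of the L channel has initial value x = 160, with initial statistics μ′ = 140 and σ′ = 20 for the image to be processed and reference statistics μ = 150 and σ = 25 for the target face image. Equation one gives x′ = (160 − 140) / 20 = 1, and equation two gives x″ = 25 × 1 + 150 = 175.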

In one possible implementation, the image to be processed includes training image data and test image data; the processed face image comprises processed training image data and processed test image data; the method further comprises the following steps:

and training the face algorithm reference model according to the processed face image comprising the processed training image data and the processed test image data to obtain the trained face algorithm model.

In a second aspect, an embodiment of the present application further provides a face image processing apparatus, where the apparatus includes modules/units that execute any one of the possible design methods of the first aspect. These modules/units may be implemented by hardware, or by hardware executing corresponding software.

In a third aspect, an embodiment of the present application provides a terminal device, which includes a processor and a memory. The memory is used to store one or more computer programs; when the one or more computer programs stored in the memory are executed by the processor, the terminal device is enabled to implement the method of any one of the possible designs of the first aspect.

In a fourth aspect, this embodiment also provides a computer-readable storage medium, where the computer-readable storage medium includes a computer program, and when the computer program is run on an electronic device, the computer program causes the electronic device to perform any one of the possible design methods of the foregoing aspects.

In a fifth aspect, the present application further provides a computer program product which, when run on a terminal, causes the terminal to execute any one of the possible design methods of any one of the above aspects.

In a sixth aspect, an embodiment of the present application further provides a chip or a chip module, where the chip or the chip module is coupled to a memory, and is configured to execute a computer program stored in the memory, so that the electronic device performs any one of the design methods of the foregoing aspects.

As for the advantageous effects of the above second to sixth aspects, reference may be made to the description in the above first aspect.

Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.

Fig. 1 is a schematic structural diagram of a terminal device according to an embodiment of the present invention;

fig. 2 is a schematic flow chart of a face image processing method according to an embodiment of the present invention;

fig. 3 is a schematic flow chart of another face image processing method according to an embodiment of the present invention;

fig. 4 is a schematic diagram of a face image processing apparatus according to an embodiment of the present invention;

fig. 5 is a schematic structural diagram of another terminal device according to an embodiment of the present invention.

Detailed Description

Before describing the embodiments of the present invention in detail, some terms used in the embodiments of the present invention will be explained below to facilitate understanding by those skilled in the art.

At present, a face algorithm model is obtained mainly by training on training image data and test image data, and no set of training and test image data can cover every colored-light environment, every skin tone, and so on. As a result, the constructed face algorithm model cannot eliminate the interference of image color, and its generalization performance is poor. The present invention provides a face image processing method that performs image color normalization on the training image data and the test image data in advance: a target face image is selected, the training and test image data are color-normalized according to the color characteristics of the target face image, and a face algorithm model is then trained on the preprocessed training and test image data. The face algorithm model constructed in this way eliminates the interference of image color on the model. This preprocessing preserves the color characteristics of the images while preventing image color from interfering with the results of face-related algorithms, effectively improving their generalization performance.

The technical solutions in the embodiments of the present application are described below with reference to the drawings in the embodiments of the present application. In the description of the embodiments of the present application, the terminology used in the following embodiments is for the purpose of describing particular embodiments only and is not intended to limit the present application. As used in the specification of the present application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, such as "one or more", unless the context clearly indicates otherwise. It should also be understood that in the following embodiments of the present application, "at least one" and "one or more" mean one, two, or more than two. The term "and/or" describes an association relationship between associated objects and means that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.

Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise. The term "coupled" includes both direct and indirect connections, unless otherwise noted. "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.

In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as examples, illustrations or descriptions. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.

The face image processing method provided in the embodiment of the present application may be applied to a terminal device as shown in fig. 1, where fig. 1 shows a hardware configuration block diagram of the terminal device 100.

In some embodiments, the terminal apparatus 100 includes at least one of a tuner demodulator 110, a communicator 120, a collector 130, an external device interface 140, a controller 150, a display 160, an audio output interface 170, a memory, a power supply, and a user interface 180.

In some embodiments, the controller comprises a central processor, a video processor, an audio processor, a graphics processor, a random access memory (RAM), a read-only memory (ROM), and first to N-th input/output interfaces.

In some embodiments, the display 160 includes a display screen component for presenting pictures and a driving component for driving image display, and is used for receiving image signals output by the controller and displaying video content, image content, menu manipulation interfaces, user manipulation UI interfaces, and the like.

In some embodiments, the display 160 may be at least one of a liquid crystal display, an OLED display, and a projection display, and may also be a projection device and a projection screen.

In some embodiments, the tuner/demodulator 110 receives broadcast television signals via wired or wireless reception, and demodulates audio/video signals and data signals such as EPG data from among a plurality of wireless or wired broadcast television signals.

In some embodiments, the communicator 120 is a component for communicating with external devices or servers according to various communication protocol types. For example, the communicator may include at least one of a wireless fidelity (WiFi) module, a Bluetooth module, a wired Ethernet module, another network communication protocol chip or near field communication protocol chip, and an infrared receiver. The terminal device 100 may establish transmission and reception of control signals and data signals with a control apparatus or a server through the communicator 120.

In some embodiments, the collector 130 is used to collect external environment or signals interacting with the outside. For example, the collector 130 includes a light receiver, a sensor for collecting the intensity of ambient light; alternatively, the collector 130 includes an image collector, such as a camera, which may be used to collect external environment scenes, attributes of the user, or user interaction gestures, or the collector 130 includes a sound collector, such as a microphone, which is used to receive external sounds.

In some embodiments, the external device interface 140 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, and the like. The interface may be a composite input/output interface formed by the plurality of interfaces.

In some embodiments, the controller 150 and the tuner/demodulator 110 may be located in different separate devices; that is, the tuner/demodulator 110 may also be located in a device external to the main device where the controller 150 is located, such as an external set-top box.

In some embodiments, the controller 150 controls the operation of the terminal device and responds to user actions through various software control programs stored in memory. The controller 150 controls the overall operation of the terminal device 100. For example, in response to receiving a user command for selecting a UI object to be displayed on the display 160, the controller 150 may perform an operation related to the object selected by the user command.

In some embodiments, the object may be any selectable object, such as a hyperlink, an icon, or another actionable control. The operation related to the selected object is, for example, displaying the page, document, or image connected to a hyperlink, or running the program corresponding to an icon.

In some embodiments the controller comprises at least one of a Central Processing Unit (CPU), a video processor, an audio processor, a Graphics Processing Unit (GPU), a RAM, a ROM, first to nth interfaces for input/output, a communication Bus (Bus), and the like.

The central processor is used for executing the operating system and application program instructions stored in the memory, and for executing various application programs, data, and content according to the various interaction instructions received from external input, so as to finally display and play various audio and video content. The central processor may include a plurality of processors, for example a main processor and one or more sub-processors.

In some embodiments, the graphics processor is used for generating various graphics objects, such as at least one of icons, operation menus, and graphics displaying user input instructions. The graphics processor includes an arithmetic unit, which performs operations on the various interactive instructions input by the user and displays the resulting objects according to their display attributes, and a renderer, which renders the objects produced by the arithmetic unit for display on the display.

In some embodiments, the video processor is configured to receive an external video signal, and perform at least one of video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis according to a standard codec protocol of the input signal, so as to obtain a signal that can be directly displayed or played on the terminal device 100.

In some embodiments, the video processor includes at least one of a demultiplexing module, a video decoding module, an image synthesis module, a frame rate conversion module, a display formatting module, and the like. The demultiplexing module demultiplexes the input audio/video data stream. The video decoding module processes the demultiplexed video signal, including decoding and scaling. The image synthesis module, such as an image synthesizer, superimposes and mixes the graphical user interface (GUI) signal generated by a graphics generator according to user input with the scaled video image, so as to generate an image signal for display. The frame rate conversion module converts the frame rate of the input video. The display formatting module converts the frame-rate-converted video output signal into a signal conforming to the display format, such as an output RGB data signal.

In some embodiments, the audio processor is configured to receive an external audio signal, decompress and decode the received audio signal according to a standard codec protocol of the input signal, and perform at least one of noise reduction, digital-to-analog conversion, and amplification processing to obtain a sound signal that can be played in the speaker.

In some embodiments, a user may enter user commands on a graphical user interface displayed on the display 160, and the user input interface receives the user input commands through the graphical user interface. Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface receives the user input command by recognizing the sound or gesture through the sensor.

In some embodiments, a "user interface" is a media interface for interaction and information exchange between an application or operating system and a user that enables conversion between an internal form of information and a form that is acceptable to the user. A common presentation form of a user interface is a graphical user interface, which refers to a user interface displayed in a graphical manner and related to computer operations. It may be an interface element such as an icon, a window, a control, etc. displayed in the display screen of the electronic device, where the control may include at least one of an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc. visual interface elements.

In some embodiments, user interface 180 is an interface that can be used to receive control inputs (e.g., physical keys on the body of the display device, or the like).

In specific implementations, the terminal device 100 may be a mobile phone, a tablet computer, a handheld computer, a personal computer (PC), a cellular phone, a personal digital assistant (PDA), a wearable device (e.g., a smart watch), a smart home device (e.g., a television), a vehicle-mounted computer, a game machine, an augmented reality (AR)/virtual reality (VR) device, or the like; the specific device form of the terminal device 100 is not particularly limited in this embodiment.

Based on the terminal device shown in fig. 1, an embodiment of the present application provides a flowchart of a method for processing a face image, as shown in fig. 2, the flow of the method may be executed by the terminal device, and the method includes the following steps:

s201, determining a target face image.

Illustratively, the target face image includes a face and may satisfy the following conditions: the face is imaged clearly, the face is of moderate size, the face is not occluded, the light environment in the image is normal light (i.e., not dim light, backlight, or colored light), and the face region is not too dark, too bright, overexposed, abnormal in color, or the like.

S202, calculating a reference mean value and a reference standard deviation of the face region pixels of the target face image, and calculating an initial mean value and an initial standard deviation of the face region pixels of the face image to be processed.

In one possible implementation, the terminal device may convert the values of the three RGB color channels of the target face image from the RGB color space to the LAB color space, and then calculate the reference mean μ and the reference standard deviation σ of the face region pixels of the target face image under each of the three channels. Similarly, the terminal device converts the values of the three color channels of the face image to be processed from the RGB color space to the LAB color space, and respectively calculates the initial mean μ′ and the initial standard deviation σ′ of the face region pixels of the face image to be processed under the three channels.
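The following is a minimal sketch of this step, assuming OpenCV (cv2) and NumPy are available; note that cv2.imread returns BGR-ordered data, so COLOR_BGR2LAB is used, and the function name and face-box format are illustrative rather than mandated by the method:

```python
import cv2
import numpy as np

def face_region_stats(img_bgr, face_box):
    """Per-channel mean and standard deviation of the face region in LAB space."""
    x, y, w, h = face_box
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    face = lab[y:y + h, x:x + w]             # restrict statistics to the face region
    mean = face.reshape(-1, 3).mean(axis=0)  # (mu_L, mu_A, mu_B)
    std = face.reshape(-1, 3).std(axis=0)    # (sigma_L, sigma_A, sigma_B)
    return mean, std
```

Called once on the target face image, this yields (μ, σ); called on each image to be processed, it yields (μ′, σ′).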

S203, normalizing the color data of the face image to be processed according to the reference mean value and the reference standard deviation of the face region pixels of the target face image and the initial mean value and the initial standard deviation of the face region pixels of the face image to be processed to obtain the processed face image.

In this step, the terminal may calculate a standard color channel value of each pixel of each color channel of the face image to be processed according to an initial mean value and an initial standard deviation of the face region pixels of the face image to be processed and an initial value of each pixel of each color channel of the face image to be processed; the standard color channel value satisfies the following formula one:

x′ = (x − μ′) / σ′ (formula one)

wherein μ′ represents the initial mean of the face region pixels of the face image to be processed, σ′ represents the initial standard deviation of the face region pixels of the face image to be processed, x represents the initial value of a pixel of the face image to be processed, and x′ represents the standard color channel value of the face image to be processed.

Then the terminal equipment obtains the normalized color channel value of each pixel of each color channel of the face image to be processed according to the standard color channel value of each pixel of each color channel of the face image to be processed and the reference mean value and the reference standard deviation of the face area pixel of the target face image; and converting the normalized color channel value of each pixel of each color channel of the face image to be processed into an RGB color space from an LAB color space to obtain the processed face image. The normalized color channel value satisfies the following formula two:

x″ = σ · x′ + μ (formula two)

wherein μ represents the reference mean of the face region pixels of the target face image, σ represents the reference standard deviation of the face region pixels of the target face image, x′ represents the standard color channel value of the face image to be processed, and x″ represents the normalized color channel value of the face image to be processed.
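As a sketch, the two formulas can be applied to a whole channel at once with NumPy broadcasting; the eps guard against a zero standard deviation is an added assumption, not part of the stated method:

```python
import numpy as np

def normalize_channel(x, mu_src, sigma_src, mu_tar, sigma_tar, eps=1e-6):
    """Standardize with formula one, then rescale with formula two."""
    x_std = (x - mu_src) / (sigma_src + eps)  # formula one: x' = (x - mu') / sigma'
    return sigma_tar * x_std + mu_tar         # formula two: x'' = sigma * x' + mu
```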

Based on the method, the training image data and the test image data can be preprocessed, so that the preprocessed training image data and the preprocessed test image data are utilized to train the face algorithm reference model to obtain the trained face algorithm model. It should be noted that the method is also applicable to the generation of other face-related algorithms, such as a face detection algorithm, a face alignment algorithm, a face living body detection algorithm, a face attribute recognition algorithm, a face segmentation algorithm, a face comparison algorithm, and the like.

In order to explain the above face image processing method more systematically, the present invention further provides the method flow shown in fig. 3, which includes the following steps.

S301, a target face image tarImg is selected in advance, and face detection is performed on tarImg to obtain face position information tar_face, so as to determine the face region.
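A minimal face-detection sketch for this step, assuming OpenCV's bundled Haar cascade; the specification does not prescribe a particular detector, so the cascade choice and the largest-face heuristic are illustrative:

```python
import cv2

def detect_face(img_bgr):
    """Return (x, y, w, h) of the largest detected face, or None."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda f: f[2] * f[3])  # keep the largest detection
```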

S302, converting the target face image tarImg from RGB color space to LAB color space to obtain an LAB color space image tarImgLAB.

S303, the reference mean and the reference standard deviation of the face region pixel values are calculated in each channel of the target face image in the LAB color space.

Specifically, the LAB color space image tarImgLAB is combined with the face position information tar_face obtained in S301 to extract the face region image tarImgLAB_face, whose three LAB channels are tarImgL_face, tarImgA_face, and tarImgB_face respectively. The reference mean and reference standard deviation of each of the 3 channels of the face region image tarImgLAB_face are calculated using a statistical method, as shown in Table 1 below.

TABLE 1

Color channel | Reference mean | Reference standard deviation
L channel | tarImgL_face_mean | tarImgL_face_std
A channel | tarImgA_face_mean | tarImgA_face_std
B channel | tarImgB_face_mean | tarImgB_face_std
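For illustration, the Table 1 statistics can be obtained by reusing the detect_face and face_region_stats sketches above; the image path is hypothetical:

```python
import cv2

tar_img = cv2.imread("tarImg.jpg")  # hypothetical path to the target face image
tar_face = detect_face(tar_img)
(tarImgL_face_mean, tarImgA_face_mean, tarImgB_face_mean), \
    (tarImgL_face_std, tarImgA_face_std, tarImgB_face_std) = \
        face_region_stats(tar_img, tar_face)
```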

S304, face detection is performed on the face image srcImg to be processed to obtain face position information src_face, so as to determine the face region.

S305, converting the face image srcImg to be processed from the RGB color space to the LAB color space to obtain an LAB color space image srcImgLAB.

S306, calculating the initial mean value and the initial standard deviation of the pixel value of the face region in each channel of the face image srcImgLAB to be processed in the LAB color space.

Specifically, the initial mean and the initial standard deviation of each of the 3 channels of the face region image srcImgLAB_face are calculated using a statistical method, as shown in Table 2 below.

TABLE 2

Color channel | Initial mean | Initial standard deviation
L channel | srcImgL_face_mean | srcImgL_face_std
A channel | srcImgA_face_mean | srcImgA_face_std
B channel | srcImgB_face_mean | srcImgB_face_std

And S307, respectively carrying out pixel-by-pixel normalization processing on the 3 channels srcImgL, srcImgA and srcImgB of the LAB color space of the face image to be processed according to the initial mean value and the initial standard deviation of the pixel value of the face region in each channel of the face image to be processed in the LAB color space and the reference mean value and the reference standard deviation of the pixel value of the face region in each channel of the target face image in the LAB color space.

Specifically, the image to be processed is split into channels. For each channel, the initial mean of the face region pixels of that channel is subtracted pixel by pixel, the result is multiplied by the ratio of the reference standard deviation of the target face image to the initial standard deviation of the image to be processed, and the reference mean of the face region pixels of the target face image is then added (the calculation steps are given in formulas three to five below). The resulting pixel values of srcImgL_new, srcImgA_new, and srcImgB_new are limited to the interval [0, 255]: results smaller than 0 are set to 0, and results larger than 255 are set to 255.

srcImgL_new_i = (srcImgL_i − srcImgL_face_mean) × (tarImgL_face_std / srcImgL_face_std) + tarImgL_face_mean (formula three)

wherein srcImgL_i is the pixel value of the i-th pixel of the L channel of the image to be processed, srcImgL_new_i is the normalized pixel value of the i-th pixel of the L channel, i takes values in (0, N], and N is the total number of pixels in the L channel of the image to be processed.

srcImgA_new_i = (srcImgA_i − srcImgA_face_mean) × (tarImgA_face_std / srcImgA_face_std) + tarImgA_face_mean (formula four)

wherein srcImgA_i is the pixel value of the i-th pixel of the A channel of the image to be processed, srcImgA_new_i is the normalized pixel value of the i-th pixel of the A channel, i takes values in (0, N], and N is the total number of pixels in the A channel of the image to be processed.

srcImgB_new_i = (srcImgB_i − srcImgB_face_mean) × (tarImgB_face_std / srcImgB_face_std) + tarImgB_face_mean (formula five)

wherein srcImgB_i is the pixel value of the i-th pixel of the B channel of the image to be processed, srcImgB_new_i is the normalized pixel value of the i-th pixel of the B channel, i takes values in (0, N], and N is the total number of pixels in the B channel of the image to be processed.
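A sketch of formulas three to five applied to all three channels at once, including the clamp to [0, 255]; the function name and the broadcasting over (mean, std) triples are illustrative:

```python
import numpy as np

def normalize_lab_image(src_lab, src_mean, src_std, tar_mean, tar_std):
    """Apply formulas three to five per pixel and per channel, then clamp."""
    out = (src_lab.astype(np.float32) - src_mean) * (tar_std / src_std) + tar_mean
    return np.clip(out, 0, 255).astype(np.uint8)  # limit results to [0, 255]
```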

S308, the processed face image channels srcImgL_new_i, srcImgA_new_i, and srcImgB_new_i are converted from the LAB color space to the RGB color space to obtain srcImgR_new_i, srcImgG_new_i, and srcImgB_new_i, and the preprocessed image is obtained from the srcImgR_new_i, srcImgG_new_i, and srcImgB_new_i values of each pixel in the RGB color space.
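Continuing the sketches above, S308 reduces to one color space conversion; the output path is hypothetical:

```python
import cv2

src_img_new = cv2.cvtColor(src_lab_new, cv2.COLOR_LAB2BGR)  # back from LAB
cv2.imwrite("srcImg_preprocessed.jpg", src_img_new)
```

where src_lab_new is the clamped LAB image produced by normalize_lab_image.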

It should be noted that the execution order between S301 to S303 and S304 to S306 is not strict: S301 to S303 may be executed first and then S304 to S306, or S304 to S306 first and then S301 to S303, or the two groups of steps may be executed in parallel; this embodiment is not limited in this respect.

In conclusion, color normalization of face images can be performed on the training data and the test data according to the above method, so that the face algorithm reference model is trained with the preprocessed training image data and the preprocessed test image data to obtain the trained face algorithm model.
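A hedged end-to-end sketch composing the helper sketches above, for preprocessing every training/test image against one target image before model training; all names are illustrative:

```python
import cv2

def preprocess_face_image(src_bgr, tar_mean, tar_std):
    """Normalize one image's face-region color statistics to the target's."""
    box = detect_face(src_bgr)
    if box is None:
        return src_bgr  # assumption: leave images without a detected face unchanged
    src_mean, src_std = face_region_stats(src_bgr, box)
    src_lab = cv2.cvtColor(src_bgr, cv2.COLOR_BGR2LAB)
    out_lab = normalize_lab_image(src_lab, src_mean, src_std, tar_mean, tar_std)
    return cv2.cvtColor(out_lab, cv2.COLOR_LAB2BGR)
```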

In some embodiments of the present application, this application further discloses a face image processing apparatus, as shown in fig. 4, which is configured to implement the method described in the above method embodiments, and includes: a determination unit 401 for determining a target face image; a calculating unit 402, configured to calculate a reference mean value and a reference standard deviation of face region pixels of the target face image; calculating an initial mean value and an initial standard deviation of the face region pixels of the face image to be processed; the processing unit 403 is configured to normalize the color data of the to-be-processed face image according to the reference mean value and the reference standard deviation of the face area pixels of the target face image and the initial mean value and the initial standard deviation of the face area pixels of the to-be-processed face image, so as to obtain a processed face image. All relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.

In other embodiments of the present application, an embodiment of the present application discloses a terminal device, and as shown in fig. 5, the terminal device may include: one or more processors 501; a memory 502; a display 503; one or more application programs (not shown); and one or more computer programs 504, which may be connected via one or more communication buses 505. Wherein the one or more computer programs 504 are stored in the memory 502 and configured to be executed by the one or more processors 501, the one or more computer programs 504 comprising instructions which may be used to perform the steps as in fig. 2 and 4 and the corresponding embodiments.

Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.

Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or make a contribution to the prior art, or all or part of the technical solutions may be implemented in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: flash memory, removable hard drive, read only memory, random access memory, magnetic or optical disk, and the like.

The above description is only a specific implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any changes or substitutions within the technical scope disclosed in the embodiments of the present application should be covered by the scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.
