Image processing method, image processing device, electronic equipment and storage medium


1. An image processing method, characterized in that the method comprises:

in response to an eyebrow shape modification instruction, superimposing a target eyebrow shape onto a position in a face image corresponding to a position of an original eyebrow, wherein the position of the original eyebrow is characterized by a position of a center of a contour of the original eyebrow;

determining an overlapping region, a first non-overlapping region, a second non-overlapping region and other skin regions outside the overlapping region, the first non-overlapping region and the second non-overlapping region in the face image, wherein the overlapping region is a region belonging to both an original eyebrow region and a target eyebrow region, the first non-overlapping region is a region belonging to the original eyebrow region and not belonging to the target eyebrow region, and the second non-overlapping region is a region belonging to the target eyebrow region and not belonging to the original eyebrow region;

performing pixel replacement processing on the face image to obtain a processed face image, wherein the pixel replacement processing comprises: traversing pixels in the face image; for a traversed current pixel, modifying a pixel value of the current pixel based on a pixel value of a skin pixel in the other skin regions used to replace the current pixel when the current pixel is located in the first non-overlapping region; and modifying the pixel value of the current pixel based on a pixel value of an original eyebrow pixel in the original eyebrow region used to replace the current pixel when the current pixel is located in the second non-overlapping region.

2. The method of claim 1, wherein modifying the pixel value of the current pixel based on the pixel values of skin pixels in the other skin regions used to replace the current pixel when the current pixel is located in the first non-overlapping region comprises:

determining a skin pixel in the other skin regions closest to the current pixel as the skin pixel for replacing the current pixel;

modifying the pixel value of the current pixel to the pixel value of the skin pixel that replaces the current pixel.

3. The method of claim 1, wherein modifying the pixel value of the current pixel based on the pixel value of the original eyebrow pixel in the original eyebrow region used to replace the current pixel when the current pixel is located in the second non-overlapping region comprises:

determining an original eyebrow pixel in the original eyebrow region closest to the current pixel as an original eyebrow pixel for replacing the current pixel;

and modifying the pixel value of the current pixel to the pixel value of the original eyebrow pixel that replaces the current pixel.

4. The method of claim 1, wherein the pixel replacement process further comprises:

when it is queried that the pixel value of a mask pixel at the same position as the current pixel in a mask image corresponding to the first non-overlapping region is a first value, determining that the current pixel is located in the first non-overlapping region;

when it is queried that the pixel value of a mask pixel at the same position as the current pixel in a mask image corresponding to the second non-overlapping region is the first value, determining that the current pixel is located in the second non-overlapping region.

5. The method of claim 1, wherein before the superimposing, in response to the eyebrow shape modification instruction, of the target eyebrow shape onto the position in the face image corresponding to the position of the original eyebrow, the method further comprises:

performing face key point detection on the face image to obtain eyebrow key points;

determining the edge of the original eyebrow based on the eyebrow key points;

and determining the region surrounded by the edge of the original eyebrow as the original eyebrow region.

6. The method according to any one of claims 1-5, wherein after the processed face image is obtained, the method further comprises:

performing Poisson cloning processing on the processed face image to obtain a target face image.

7. An image processing apparatus, characterized in that the apparatus comprises:

a superimposing module configured to superimpose a target eyebrow shape onto a position in a face image corresponding to a position of an original eyebrow in response to an eyebrow shape modification instruction, wherein the position of the original eyebrow is characterized by a position of a center of a contour of the original eyebrow;

a determining module configured to determine an overlapping region, a first non-overlapping region, a second non-overlapping region and other skin regions outside the overlapping region, the first non-overlapping region and the second non-overlapping region in the face image, wherein the overlapping region is a region belonging to both an original eyebrow region and a target eyebrow region, the first non-overlapping region is a region belonging to the original eyebrow region and not belonging to the target eyebrow region, and the second non-overlapping region is a region belonging to the target eyebrow region and not belonging to the original eyebrow region;

a processing module configured to perform pixel replacement processing on the face image to obtain a processed face image, wherein the pixel replacement processing comprises: traversing pixels in the face image; for a traversed current pixel, modifying a pixel value of the current pixel based on a pixel value of a skin pixel in the other skin regions used to replace the current pixel when the current pixel is located in the first non-overlapping region; and modifying the pixel value of the current pixel based on a pixel value of an original eyebrow pixel in the original eyebrow region used to replace the current pixel when the current pixel is located in the second non-overlapping region.

8. An electronic device, comprising:

a processor;

a memory for storing the processor-executable instructions;

wherein the processor is configured to execute the instructions to implement the method of any one of claims 1 to 6.

9. A computer-readable storage medium whose instructions, when executed by a processor of an electronic device, enable the electronic device to perform the method of any of claims 1-6.

10. A computer program product comprising computer readable code which, when run on an electronic device, causes the electronic device to perform the method of any of claims 1 to 6.

Background

In the beautification of face images, adjusting the original eyebrow shape in a face image to a target eyebrow shape is a common user requirement. In general, different users have differently shaped eyebrows. How to adjust any original eyebrow shape to the target eyebrow shape, and how to determine the pixel values within the region enclosed by the target eyebrow shape so as to form the target eyebrow, are problems to be solved.

Disclosure of Invention

The disclosure provides an image processing method, an image processing apparatus, an electronic device and a storage medium.

According to a first aspect of embodiments of the present disclosure, there is provided an image processing method, including:

in response to an eyebrow shape modification instruction, superimposing a target eyebrow shape onto a position in a face image corresponding to a position of an original eyebrow, wherein the position of the original eyebrow is characterized by a position of a center of a contour of the original eyebrow;

determining an overlapping region, a first non-overlapping region, a second non-overlapping region and other skin regions outside the overlapping region, the first non-overlapping region and the second non-overlapping region in the face image, wherein the overlapping region is a region belonging to both an original eyebrow region and a target eyebrow region, the first non-overlapping region is a region belonging to the original eyebrow region and not belonging to the target eyebrow region, and the second non-overlapping region is a region belonging to the target eyebrow region and not belonging to the original eyebrow region;

performing pixel replacement processing on the face image to obtain a processed face image, wherein the pixel replacement processing includes: traversing pixels in the face image; for a traversed current pixel, modifying a pixel value of the current pixel based on a pixel value of a skin pixel in the other skin regions used to replace the current pixel when the current pixel is located in the first non-overlapping region; and modifying the pixel value of the current pixel based on a pixel value of an original eyebrow pixel in the original eyebrow region used to replace the current pixel when the current pixel is located in the second non-overlapping region.

In some embodiments, modifying the pixel value of the current pixel based on the pixel values of the skin pixels in the other skin regions used to replace the current pixel when the current pixel is located in the first non-overlapping region comprises:

determining a skin pixel in the other skin regions closest to the current pixel as the skin pixel for replacing the current pixel;

modifying the pixel value of the current pixel to the pixel value of the skin pixel that replaces the current pixel.

In some embodiments, modifying the pixel value of the current pixel based on the pixel value of the original eyebrow pixel in the original eyebrow region used to replace the current pixel when the current pixel is in the second non-overlapping region comprises:

determining an original eyebrow pixel in the original eyebrow region closest to the current pixel as an original eyebrow pixel for replacing the current pixel;

and modifying the pixel value of the current pixel to the pixel value of the original eyebrow pixel that replaces the current pixel.

In some embodiments, the method further comprises:

when it is queried that the pixel value of a mask pixel at the same position as the current pixel in a mask image corresponding to the first non-overlapping region is a first value, determining that the current pixel is located in the first non-overlapping region;

when it is queried that the pixel value of a mask pixel at the same position as the current pixel in a mask image corresponding to the second non-overlapping region is the first value, determining that the current pixel is located in the second non-overlapping region.

In some embodiments, before superimposing the target eyebrow shape onto a position in the face image corresponding to the position of the original eyebrow in response to the eyebrow shape modification instruction, the method further comprises:

performing face key point detection on the face image to obtain eyebrow key points;

determining the edge of the original eyebrow based on the eyebrow key points;

and determining the region surrounded by the edge of the original eyebrow as the original eyebrow region.

In some embodiments, after obtaining the processed face image, the method further comprises:

performing Poisson cloning processing on the processed face image to obtain a target face image.

According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including:

a superimposing module configured to superimpose a target eyebrow shape onto a position in the face image corresponding to a position of an original eyebrow in response to an eyebrow shape modification instruction, wherein the position of the original eyebrow is characterized by a position of a center of a contour of the original eyebrow;

a determining module configured to determine an overlapping region, a first non-overlapping region, a second non-overlapping region and other skin regions outside the overlapping region, the first non-overlapping region and the second non-overlapping region in the face image, wherein the overlapping region is a region belonging to both an original eyebrow region and a target eyebrow region, the first non-overlapping region is a region belonging to the original eyebrow region and not belonging to the target eyebrow region, and the second non-overlapping region is a region belonging to the target eyebrow region and not belonging to the original eyebrow region;

a processing module configured to perform pixel replacement processing on the face image to obtain a processed face image, where the pixel replacement processing includes: traversing pixels in the face image; for a traversed current pixel, modifying a pixel value of the current pixel based on a pixel value of a skin pixel in the other skin regions used to replace the current pixel when the current pixel is located in the first non-overlapping region; and modifying the pixel value of the current pixel based on a pixel value of an original eyebrow pixel in the original eyebrow region used to replace the current pixel when the current pixel is located in the second non-overlapping region.

In some embodiments, the processing module comprises:

a first replacement sub-module configured to, when the current pixel is located in the first non-overlapping region, determine the skin pixel in the other skin regions that is closest to the current pixel as the skin pixel for replacing the current pixel, and modify the pixel value of the current pixel to the pixel value of that skin pixel.

In some embodiments, the processing module comprises:

a second replacement sub-module configured to, when the current pixel is located in the second non-overlapping region, determine the original eyebrow pixel in the original eyebrow region that is closest to the current pixel as the original eyebrow pixel for replacing the current pixel, and modify the pixel value of the current pixel to the pixel value of that original eyebrow pixel.

In some embodiments, the image processing apparatus further comprises:

the query module is configured to determine that the current pixel is located in the first non-overlapping region when the pixel value of the mask pixel at the same position as the current pixel in a mask image corresponding to the first non-overlapping region is queried to be a first value, and to determine that the current pixel is located in the second non-overlapping region when the pixel value of the mask pixel at the same position as the current pixel in a mask image corresponding to the second non-overlapping region is queried to be the first value.

In some embodiments, the image processing apparatus further comprises:

the positioning module is configured to perform face key point detection on the face image to obtain eyebrow key points before overlaying the target eyebrow shape to a position corresponding to the position of the original eyebrow in the face image in response to the eyebrow shape modification instruction; determining the edge of the original eyebrow based on the eyebrow key points; and determining the area surrounded by the edge of the original eyebrow as the original eyebrow area.

In some embodiments, the image processing apparatus further comprises:

and the optimization module is configured to perform Poisson cloning processing on the processed face image, after the processed face image is obtained, to obtain a target face image.

According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the method of any of the first aspects.

According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium, in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the method of any one of the first aspects.

According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising computer readable code which, when run on an electronic device, causes the electronic device to perform the method according to any of the first aspects.

The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:

when the eyebrow shape needs to be adjusted, the relevant regions are determined automatically and pixel replacement processing is performed on the face image to obtain a processed face image in which the eyebrow has the target eyebrow shape. Any original eyebrow shape can thus be adjusted to the target eyebrow shape, which makes modifying the eyebrow shape more convenient.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.

FIG. 1 is a flow diagram illustrating one embodiment of a method of image processing in accordance with an exemplary embodiment;

FIG. 2 is a block diagram illustrating the structure of an image processing apparatus according to an exemplary embodiment;

fig. 3 is a block diagram illustrating a structure of an electronic device according to an example embodiment.

Detailed Description

The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.

It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.

FIG. 1 is a flow diagram illustrating one embodiment of an image processing method according to an exemplary embodiment. The method comprises the following steps:

Step 101: in response to the eyebrow shape modification instruction, superimpose the target eyebrow shape onto a position in the face image corresponding to the position of the original eyebrow.

When the user wishes to change the eyebrow shape, the user may perform a modification instruction operation to generate the eyebrow shape modification instruction. For example, when the user wishes to adjust the shape of the eyebrows in a face image, the user's face image and designer-provided eyebrow shapes, such as willow-leaf eyebrows or straight eyebrows, are presented on the relevant interface. The modification instruction operation may be: the user drags one of the designer-provided eyebrow shapes, as the target eyebrow shape, onto the face image with a mouse, and clicks a button indicating eyebrow modification.

In the present disclosure, the eyebrows originally present in the face image may be referred to as the original eyebrows. An original eyebrow in the face image is composed of all pixels belonging to that eyebrow.

In the present disclosure, the position of the original eyebrow can be characterized by the position of the center of the contour of the original eyebrow. For example, the position of the original eyebrow can be expressed by coordinates of the center of the outline of the original eyebrow. The position corresponding to the position of the original eyebrow may be the same as the position of the original eyebrow, or may be a position obtained by adding the position of the original eyebrow and a position offset set by the user.
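
For illustration only, the following is a minimal sketch of one way to obtain the overlay position, reading "center of the contour" as the contour centroid. It assumes the original eyebrow region is already available as a binary mask (its key-point-based construction is described later in this text); the function name and the optional offset parameter are illustrative, not part of the disclosure.

```python
import cv2
import numpy as np

def eyebrow_anchor(original_eyebrow_mask: np.ndarray, offset=(0, 0)):
    """Return the overlay position for the target eyebrow shape.

    original_eyebrow_mask: uint8 mask, 255 inside the original eyebrow region
                           (assumed non-empty).
    offset: optional (dx, dy) set by the user, added to the contour center.
    """
    contours, _ = cv2.findContours(original_eyebrow_mask,
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    m = cv2.moments(max(contours, key=cv2.contourArea))
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # contour centroid
    return int(cx + offset[0]), int(cy + offset[1])
```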

Step 102: determine an overlapping region, a first non-overlapping region, a second non-overlapping region, and other skin regions outside these three regions in the face image.

In the present disclosure, after the target eyebrow shape is superimposed onto the face image, the face image contains both the original eyebrow and the target eyebrow shape. The original eyebrow region is the region occupied by all pixels belonging to the original eyebrow. The target eyebrow region is the region enclosed by the target eyebrow shape.

In the present disclosure, the region in the face image that belongs to both the target eyebrow region and the original eyebrow region may be referred to as the overlapping region.

The region in the face image that belongs to the original eyebrow region and does not belong to the target eyebrow region may be referred to as the first non-overlapping region. The region in the face image that belongs to the target eyebrow region and does not belong to the original eyebrow region may be referred to as the second non-overlapping region. The skin regions of the face image other than the overlapping region, the first non-overlapping region and the second non-overlapping region may be referred to as the other skin regions.
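
As a hedged sketch of step 102, the four regions can be derived from boolean masks with elementwise set operations; the masks and function name below are assumptions for illustration (the text does not prescribe how the regions are represented).

```python
import numpy as np

def split_regions(original_mask: np.ndarray,
                  target_mask: np.ndarray,
                  skin_mask: np.ndarray):
    """Derive the four regions of step 102 from boolean masks.

    original_mask: True inside the original eyebrow region.
    target_mask:   True inside the target eyebrow region (after overlay).
    skin_mask:     True on facial skin.
    """
    overlap = original_mask & target_mask                    # both eyebrows
    first_non_overlap = original_mask & ~target_mask         # original only
    second_non_overlap = target_mask & ~original_mask        # target only
    other_skin = skin_mask & ~(original_mask | target_mask)  # remaining skin
    return overlap, first_non_overlap, second_non_overlap, other_skin
```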

Step 103: perform pixel replacement processing on the face image to obtain a processed face image.

In the present disclosure, the shape of the eyebrow in the processed face image is the target eyebrow shape.

The pixel replacement processing includes: traversing pixels in the face image; for the traversed current pixel, modifying the pixel value of the current pixel based on the pixel value of a skin pixel in the other skin regions used to replace the current pixel when the current pixel is located in the first non-overlapping region; and modifying the pixel value of the current pixel based on the pixel value of an original eyebrow pixel in the original eyebrow region used to replace the current pixel when the current pixel is located in the second non-overlapping region.

In the present disclosure, for any traversed current pixel, which of the other skin regions, the original eyebrow region, the target eyebrow region and the overlapping region the current pixel belongs to can be determined from the position of the current pixel and the boundaries of those regions.

In this disclosure, for any traversed current pixel, when the current pixel is located in the first non-overlapping region, the skin pixel in the other skin regions used to replace the current pixel may be determined by the PatchMatch algorithm. A patch is an image block of a preset size, for example 3 × 3, obtained by dividing the face image. When the current pixel is located in the first non-overlapping region, the PatchMatch algorithm may be used to find the image block in the other skin regions that is most similar to the image block in the first non-overlapping region containing the current pixel, then find, within that most similar image block, the pixel most similar to the current pixel, and use the found pixel as the skin pixel in the other skin regions that replaces the current pixel. The pixel value of the current pixel may then be modified to the pixel value of that skin pixel.

For any traversed current pixel, when the current pixel is located in the second non-overlapping region, the original eyebrow pixel in the original eyebrow region used to replace the current pixel may likewise be determined by the PatchMatch algorithm. The PatchMatch algorithm may find the image block in the original eyebrow region that is most similar to the image block in the second non-overlapping region containing the current pixel, then find, within that most similar image block, the pixel most similar to the current pixel, and use the found pixel as the original eyebrow pixel that replaces the current pixel. The pixel value of the current pixel may then be modified to the pixel value of that original eyebrow pixel.
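
The text names the PatchMatch algorithm; a full PatchMatch implementation relies on random initialization and propagation, which is beyond a short example. The sketch below is only a brute-force illustration of the underlying idea (choose the candidate patch with the smallest sum of squared differences and take its center pixel); the function and parameter names are illustrative and not taken from the disclosure.

```python
import numpy as np

def best_patch_replacement(image, y, x, candidate_mask, half=1):
    """Brute-force stand-in for a PatchMatch-style search: return the value
    of the center pixel of the candidate patch most similar (by SSD) to the
    patch around (y, x).

    image:          H x W x 3 float array.
    candidate_mask: bool array, True where replacement pixels may come from
                    (the other skin regions, or the original eyebrow region).
    half:           patch radius; half=1 gives a 3 x 3 patch.
    """
    h, w = candidate_mask.shape
    if not (half <= y < h - half and half <= x < w - half):
        return image[y, x]                      # too close to the border
    ref = image[y - half:y + half + 1, x - half:x + half + 1]
    best_val, best_ssd = image[y, x], np.inf
    for cy, cx in zip(*np.nonzero(candidate_mask)):
        if not (half <= cy < h - half and half <= cx < w - half):
            continue
        cand = image[cy - half:cy + half + 1, cx - half:cx + half + 1]
        ssd = float(np.sum((cand - ref) ** 2))
        if ssd < best_ssd:                      # keep the most similar patch
            best_ssd, best_val = ssd, image[cy, cx]
    return best_val
```

This exhaustive search is quadratic and therefore slow; it is shown only to make the patch-similarity criterion concrete, not as the method actually used.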

According to the present disclosure, when the eyebrow shape needs to be adjusted, the relevant regions are determined automatically and pixel replacement processing is performed on the face image to obtain a processed face image in which the eyebrow has the target eyebrow shape. Any original eyebrow shape can thus be adjusted to the target eyebrow shape, which makes modifying the eyebrow shape more convenient.

In some embodiments, for any traversed current pixel, when the current pixel is located in the first non-overlapping region, modifying the pixel value of the current pixel based on the pixel value of the skin pixel in the other skin regions used to replace the current pixel includes: determining the skin pixel in the other skin regions closest to the current pixel as the skin pixel for replacing the current pixel; and modifying the pixel value of the current pixel to the pixel value of that skin pixel.

The face image is acquired by an image sensor, and each pixel corresponds to a three-dimensional point in three-dimensional space. For points on the same object, the closer two three-dimensional points are, the closer their light intensities are, and light intensity directly determines pixel value. Under normal conditions, pixels belonging to the same object therefore satisfy the following relationship: the closer the pixels are in position, the closer their pixel values are. Conversely, if a number of pixels belonging to one object do not conform to this relationship, the region occupied by that object looks unnatural, and the image appears distorted after image processing.

In this disclosure, for any traversed current pixel, when the current pixel is located in the first non-overlapping region, the pixel value of the current pixel may be modified to the pixel value of the skin pixel in the other skin regions closest to the current pixel. Because that skin pixel is close to the current pixel and the modified pixel value of the current pixel equals its pixel value, the relationship satisfied by pixels belonging to the same object still holds after the modification, which helps avoid image distortion after image processing.

In some embodiments, for any traversed current pixel, when the current pixel is located in the second non-overlapping region, modifying the pixel value of the current pixel based on the pixel value of the original eyebrow pixel in the original eyebrow region used to replace the current pixel includes: determining the original eyebrow pixel in the original eyebrow region closest to the current pixel as the original eyebrow pixel for replacing the current pixel; and modifying the pixel value of the current pixel to the pixel value of that original eyebrow pixel.

In this disclosure, for any traversed current pixel, when the current pixel is located in the second non-overlapping region, the pixel value of the current pixel may be modified to the pixel value of the original eyebrow pixel in the original eyebrow region closest to the current pixel. Because that original eyebrow pixel is close to the current pixel and the modified pixel value of the current pixel equals its pixel value, the relationship satisfied by pixels belonging to the same object still holds after the modification, which helps avoid image distortion after image processing.
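
One convenient way to apply the nearest-pixel rule to both non-overlapping regions at once is a Euclidean distance transform that also returns the coordinates of the nearest source pixel. The sketch below uses scipy.ndimage for this; the library choice and the function name are implementation assumptions, not stated in the text.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def replace_with_nearest(image, region_to_fill, source_region):
    """Overwrite every pixel in region_to_fill with the value of the nearest
    pixel in source_region (both arguments are boolean masks).

    Called once with (first non-overlapping region, other skin regions) and
    once with (second non-overlapping region, original eyebrow region).
    """
    # distance_transform_edt measures the distance to the nearest zero
    # element, so mark source pixels as 0 and everything else as 1.
    _, nearest = distance_transform_edt(~source_region, return_indices=True)
    rows, cols = nearest                # nearest source pixel per location
    out = image.copy()
    out[region_to_fill] = image[rows[region_to_fill], cols[region_to_fill]]
    return out
```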

In some embodiments, after the face image is processed to obtain the processed face image, the method further includes: performing Poisson cloning processing on the processed face image to obtain a target face image.

In the processed face image, the pixel values of the pixels in the first non-overlapping region and in the second non-overlapping region have been changed and are no longer the original pixel values, so the display effect of the first non-overlapping region and/or of the second non-overlapping region may look unnatural. Performing Poisson cloning processing on the processed face image therefore makes the display effect of the first non-overlapping region and of the second non-overlapping region in the resulting target face image more vivid and natural.
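
OpenCV exposes Poisson cloning through cv2.seamlessClone. The sketch below shows one plausible way to apply it here, blending the replaced regions of the processed image back over the face so their borders look natural; the cloning direction, mask layout and function name are assumptions for illustration rather than the method prescribed by the text.

```python
import cv2
import numpy as np

def poisson_refine(original_bgr, processed_bgr, replaced_mask):
    """Blend the replaced regions of the processed image into the face with
    Poisson (seamless) cloning.

    original_bgr:  uint8 BGR face image before pixel replacement.
    processed_bgr: uint8 BGR face image after pixel replacement.
    replaced_mask: uint8 mask, 255 inside the first and second
                   non-overlapping regions, 0 elsewhere.
    """
    x, y, w, h = cv2.boundingRect(replaced_mask)
    center = (x + w // 2, y + h // 2)       # where the cloned patch lands
    return cv2.seamlessClone(processed_bgr, original_bgr, replaced_mask,
                             center, cv2.NORMAL_CLONE)
```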

In some embodiments, before superimposing the target eyebrow shape onto a position in the face image corresponding to the position of the original eyebrow in response to the eyebrow shape modification instruction, the method further includes: detecting key points of the face image to obtain key points of eyebrows; determining the edge of the original eyebrow based on the eyebrow key points; and determining the area surrounded by the edge of the original eyebrow as the original eyebrow area.

Eyebrow key points belonging to the original eyebrows in the face image lie on the edges of the original eyebrows. The eyebrow key points can be obtained through face key point detection, and the edge of the original eyebrow can be determined based on them. For example, the edge of the original eyebrow can be determined from the coordinates of the eyebrow key points using the fillConvexPoly function in the computer vision and machine learning software library OpenCV. The region surrounded by the edge of the original eyebrow is then determined as the original eyebrow region.
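
A short sketch of the fillConvexPoly usage mentioned above; it assumes the eyebrow key points are ordered along the eyebrow edge so that they enclose the region, and the function name is illustrative.

```python
import cv2
import numpy as np

def original_eyebrow_region(image_shape, eyebrow_keypoints):
    """Rasterize the original eyebrow region from eyebrow edge key points.

    image_shape:       (height, width) of the face image.
    eyebrow_keypoints: N x 2 array of (x, y) key points lying on the edge of
                       one eyebrow, as returned by a face key point detector.
    """
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    pts = np.asarray(eyebrow_keypoints, dtype=np.int32)
    cv2.fillConvexPoly(mask, pts, 255)   # fill the area enclosed by the edge
    return mask                          # 255 inside the original eyebrow region
```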

In the present disclosure, based on the eyebrow key points, the edge of the original eyebrow can be accurately determined, and thus the original eyebrow region can be accurately determined.

In some embodiments, the pixel replacement processing further includes: determining that the current pixel is located in the first non-overlapping region when the pixel value of the mask pixel at the same position as the current pixel in the mask map corresponding to the first non-overlapping region is queried to be a first value; and determining that the current pixel is located in the second non-overlapping region when the pixel value of the mask pixel at the same position as the current pixel in the mask map corresponding to the second non-overlapping region is queried to be the first value.

In the present disclosure, a mask map corresponding to the other skin regions, a mask map corresponding to the overlapping region, a mask map corresponding to the first non-overlapping region, and a mask map corresponding to the second non-overlapping region may be established.

Each mask pixel in the mask map corresponding to the other skin regions corresponds to one pixel in the face image and has the same position in the mask map as that pixel has in the face image. In this mask map, the pixel value of a mask pixel corresponding to a pixel in the other skin regions is a first value, for example 255, and the pixel value of every other mask pixel is a second value, for example 0.

Each mask pixel in the mask map corresponding to the overlapping region corresponds to one pixel in the face image and has the same position in the mask map as that pixel has in the face image. In this mask map, the pixel value of a mask pixel corresponding to a pixel in the overlapping region is the first value, for example 255, and the pixel value of every other mask pixel is the second value, for example 0.

Each mask pixel in the mask map corresponding to the first non-overlapping region corresponds to one pixel in the face image and has the same position in the mask map as that pixel has in the face image. In this mask map, the pixel value of a mask pixel corresponding to a pixel in the first non-overlapping region is the first value, for example 255, and the pixel value of every other mask pixel is the second value, for example 0.

Each mask pixel in the mask map corresponding to the second non-overlapping region corresponds to one pixel in the face image and has the same position in the mask map as that pixel has in the face image. In this mask map, the pixel value of a mask pixel corresponding to a pixel in the second non-overlapping region is the first value, for example 255, and the pixel value of every other mask pixel is the second value, for example 0.

In the present disclosure, for any traversed current pixel, the pixel value of the mask pixel at the same position as the current pixel can be queried in each mask map.

When the pixel value of the mask pixel at the same position as the current pixel in the mask map corresponding to the other skin regions is queried to be the first value, it may be determined that the current pixel is located in the other skin regions; the pixel value of the current pixel is not modified, and traversal continues with the next pixel.

When the pixel value of the mask pixel at the same position as the current pixel in the mask map corresponding to the overlapping region is queried to be the first value, it may be determined that the current pixel is located in the overlapping region; the pixel value of the current pixel is likewise not modified, and traversal continues with the next pixel.

When the pixel value of the mask pixel at the same position as the current pixel in the mask map corresponding to the first non-overlapping region is queried to be the first value, it may be determined that the current pixel is located in the first non-overlapping region. The pixel value of the current pixel may then be modified based on the pixel value of the skin pixel in the other skin regions used to replace the current pixel.

When the pixel value of the mask pixel at the same position as the current pixel in the mask map corresponding to the second non-overlapping region is queried to be the first value, it may be determined that the current pixel is located in the second non-overlapping region. The pixel value of the current pixel may then be modified based on the pixel value of the original eyebrow pixel in the original eyebrow region used to replace the current pixel.

In the present disclosure, for any traversed current pixel, the region where the current pixel is located may be quickly determined by querying the mask map corresponding to each region, and whether to modify the pixel value of the current pixel is determined according to the region where the current pixel is located, so as to reduce the time consumed by image processing and increase the speed of image processing.
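
For illustration, a minimal sketch of the traversal and mask-map dispatch described above; the mask maps are assumed to use 255 as the first value, and the two callables stand in for the replacement strategies sketched earlier (all names are illustrative).

```python
import numpy as np

def pixel_replacement(image, first_mask, second_mask,
                      pick_skin_pixel, pick_eyebrow_pixel):
    """Traverse the face image and dispatch on the mask maps.

    first_mask / second_mask: uint8 mask maps of the first and second
    non-overlapping regions (255 = first value).
    pick_skin_pixel / pick_eyebrow_pixel: callables (image, y, x) -> pixel
    value, standing in for the replacement strategies described above.
    """
    out = image.copy()
    height, width = image.shape[:2]
    for y in range(height):
        for x in range(width):
            if first_mask[y, x] == 255:        # first non-overlapping region
                out[y, x] = pick_skin_pixel(image, y, x)
            elif second_mask[y, x] == 255:     # second non-overlapping region
                out[y, x] = pick_eyebrow_pixel(image, y, x)
            # pixels in the overlapping region and the other skin regions
            # keep their original values
    return out
```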

Fig. 2 is a block diagram illustrating a configuration of an image processing apparatus according to an exemplary embodiment. Referring to fig. 2, the image processing apparatus includes: a superimposing module 201, a determining module 202 and a processing module 203.

The superimposing module 201 is configured to superimpose a target eyebrow shape onto a position in the face image corresponding to a position of an original eyebrow, wherein the position of the original eyebrow is characterized by a position of a center of a contour of the original eyebrow, in response to an eyebrow shape modification instruction;

the determining module 202 is configured to determine an overlapping region, a first non-overlapping region, a second non-overlapping region and other skin regions outside the overlapping region, the first non-overlapping region and the second non-overlapping region in the face image, wherein the overlapping region is a region belonging to both an original eyebrow region and a target eyebrow region, the first non-overlapping region is a region belonging to the original eyebrow region and not belonging to the target eyebrow region, and the second non-overlapping region is a region belonging to the target eyebrow region and not belonging to the original eyebrow region;

the processing module 203 is configured to perform pixel replacement processing on the face image to obtain a processed face image, where the pixel replacement processing includes: traversing pixels in the face image; for a traversed current pixel, modifying the pixel value of the current pixel based on the pixel value of a skin pixel in the other skin regions used to replace the current pixel when the current pixel is located in the first non-overlapping region; and modifying the pixel value of the current pixel based on the pixel value of an original eyebrow pixel in the original eyebrow region used to replace the current pixel when the current pixel is located in the second non-overlapping region.

In some embodiments, the processing module includes:

a first replacement sub-module configured to, when the current pixel is located in the first non-overlapping region, determine the skin pixel in the other skin regions that is closest to the current pixel as the skin pixel for replacing the current pixel, and modify the pixel value of the current pixel to the pixel value of that skin pixel.

In some embodiments, the processing module comprises:

a second replacement sub-module configured to, when the current pixel is located in the second non-overlapping region, determine the original eyebrow pixel in the original eyebrow region that is closest to the current pixel as the original eyebrow pixel for replacing the current pixel, and modify the pixel value of the current pixel to the pixel value of that original eyebrow pixel.

In some embodiments, the image processing apparatus further comprises:

the query module is configured to determine that the current pixel is located in the first non-overlapping region when the pixel value of the mask pixel at the same position as the current pixel in a mask image corresponding to the first non-overlapping region is queried to be a first value, and to determine that the current pixel is located in the second non-overlapping region when the pixel value of the mask pixel at the same position as the current pixel in a mask image corresponding to the second non-overlapping region is queried to be the first value.

In some embodiments, the image processing apparatus further comprises:

the positioning module is configured to perform face key point detection on the face image to obtain eyebrow key points before overlaying the target eyebrow shape to a position corresponding to the position of the original eyebrow in the face image in response to the eyebrow shape modification instruction; determining the edge of the original eyebrow based on the eyebrow key points; and determining the area surrounded by the edge of the original eyebrow as the original eyebrow area.

In some embodiments, the image processing apparatus further comprises:

and the optimization module is configured to perform Poisson cloning processing on the processed face image, after the processed face image is obtained, to obtain a target face image.

Fig. 3 is a block diagram illustrating an electronic device 300 according to an example embodiment. For example, the electronic device 300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, a fitness device, a personal digital assistant, and so forth.

Referring to fig. 3, electronic device 300 may include one or more of the following components: a processing component 302, a memory 304, a power component 306, a multimedia component 308, an audio component 310, an input/output (I/O) interface 312, a sensor component 314, and a communication component 316.

The processing component 302 generally controls overall operations of the electronic device 300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 302 may include one or more processors 320 to execute instructions to perform all or a portion of the steps of the image processing method described above. Further, the processing component 302 can include one or more modules that facilitate interaction between the processing component 302 and other components. For example, the processing component 302 may include a multimedia module to facilitate interaction between the multimedia component 308 and the processing component 302.

The memory 304 is configured to store various types of data to support operations at the electronic device 300. Examples of such data include instructions for any application or method operating on the electronic device 300, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 304 may be implemented by any type or combination of volatile or non-volatile storage devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.

The power component 306 provides power to the various components of the electronic device 300. The power component 306 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 300.

The multimedia component 308 comprises a screen providing an output interface between the electronic device 300 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 308 includes a front camera and/or a rear camera. When the electronic device 300 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.

The audio component 310 is configured to output and/or input audio signals. For example, the audio component 310 includes a microphone (MIC) configured to receive external audio signals when the electronic device 300 is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may further be stored in the memory 304 or transmitted via the communication component 316. In some embodiments, the audio component 310 also includes a speaker for outputting audio signals.

The I/O interface 312 provides an interface between the processing component 302 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.

The sensor assembly 314 includes one or more sensors for providing status assessments of various aspects of the electronic device 300. For example, the sensor assembly 314 may detect an open/closed state of the electronic device 300 and the relative positioning of components, such as the display and keypad of the electronic device 300. The sensor assembly 314 may also detect a change in the position of the electronic device 300 or of a component of the electronic device 300, the presence or absence of user contact with the electronic device 300, the orientation or acceleration/deceleration of the electronic device 300, and a change in the temperature of the electronic device 300. The sensor assembly 314 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.

The communication component 316 is configured to facilitate wired or wireless communication between the electronic device 300 and other devices. The electronic device 300 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 316 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 316 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.

In an exemplary embodiment, the electronic device 300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described image processing methods.

In an exemplary embodiment, there is also provided a storage medium comprising instructions, such as a memory comprising instructions, executable by an electronic device to perform the image processing method described above. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.

In an exemplary embodiment, the present application further provides a computer program product comprising computer readable code which, when run on an electronic device, causes the electronic device to perform the above-mentioned image processing method.

Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
