Method for eliminating bright spots of pupils

Document No. 9212 | Published: 2021-09-17

1. A pupil bright spot elimination method is characterized by comprising the following steps:

acquiring a target image; wherein the target image comprises a pupil region having a bright pupil spot;

for each partition of the pupil region in the target image, correcting the pixel values of the pixels in the partition to the pixel value mapped to that partition, using a pre-trained target mapping relation between partitions of the pupil region and pixel values; wherein the pixel value mapped to each partition is the pixel value of that partition's pixels after the bright spot is eliminated;

the training method of the target mapping relation comprises the following steps:

acquiring a plurality of first images and second images corresponding to the first images; the first image contains pupil bright spots, the second image is an image without the pupil bright spots, and the first image and the corresponding second image have the same image area except the pupil area;

training a preset mapping relation between partitions of the pupil region and pixel values based on the plurality of first images and the second image corresponding to each first image, to obtain the target mapping relation;

wherein the sum of the loss function values between the pixel value of each pixel in the pupil region of each first image corrected using the target mapping relation and that pixel's ground-truth value is smaller than a preset threshold; the ground-truth value of a pixel is the pixel value of the identically positioned pixel in the pupil region of the second image corresponding to the first image; and each loss function value is determined based on a pixel value loss.

2. The method according to claim 1, wherein training a preset mapping relation between each partition in the pupil area and a pixel value based on a plurality of first images and a plurality of second images corresponding to the first images to obtain the target mapping relation comprises:

for each first image, training the preset mapping relation between each partition of the pupil region and a pixel value as follows:

determining pixel values mapped to each partition in the pupil area of the first image based on a preset mapping relation between each partition in the pupil area and the pixel values to obtain a corrected first image;

for each pixel in the pupil region of the corrected first image, calculating the loss function value between the pixel value of the pixel and its ground-truth value, as the loss function value of that pixel;

and judging whether the sum of the loss function values of all pixels in the pupil region of the corrected first image is smaller than a preset threshold; when the sum is not smaller than the preset threshold, adjusting the mapping relation between the partitions of the pupil region and the pixel values and performing the next round of training; when the sum is smaller than the preset threshold, taking the current mapping relation as the trained target mapping relation.

3. The method of claim 2, wherein calculating, for each pixel in the pupil region of the corrected first image, a loss function value between the pixel value of the pixel and a true value of the pixel as the loss function value of the pixel comprises:

for each pixel in the pupil region of the corrected first image: calculating the pixel value loss between the pixel value of the pixel and its ground-truth value, determining the loss function value between the pixel value and the ground-truth value based on the determined pixel value loss, and taking that value as the loss function value of the pixel.

4. The method of claim 3, wherein determining a loss function value between the pixel value of the pixel and a true value of the pixel based on the determined pixel value loss comprises:

determining the determined pixel value loss as the loss function value between the pixel value of the pixel and its ground-truth value;

alternatively,

calculating an auxiliary loss of the pixel; determining the loss function value between the pixel value of the pixel and its ground-truth value based on the determined pixel value loss and the calculated auxiliary loss; wherein the auxiliary loss comprises at least one of a color loss and a smoothness loss of the pixel.

5. The method of claim 4, wherein said calculating the auxiliary loss of the pixel comprises:

calculating the included angle, in a color space, between the pixel value of the pixel and its ground-truth value, and taking the included angle as the color loss of the pixel; and/or

and calculating the gradient of the pixel point as the smoothness loss of the pixel point.

6. The method of claim 4, wherein determining a loss function value between the pixel value of the pixel and the true value of the pixel based on the determined pixel value loss and the calculated auxiliary loss comprises:

calculating the weighted sum of the determined pixel value loss, the calculated color loss, and the calculated smoothness loss, as the loss function value between the pixel value of the pixel and its ground-truth value.

7. The method according to any one of claims 1 to 6, wherein the acquiring a plurality of first images and second images corresponding to the first images comprises:

acquiring each first image and a second image corresponding to the first image by the following method:

acquiring a first initial image and a second initial image corresponding to the first initial image; the first initial image is acquired by a first camera with supplementary lighting, the second initial image is acquired by a second camera without supplementary lighting, and the first camera and the second camera have the same configuration parameters and are directed to the same object when acquiring the first initial image and the second initial image;

updating the pupil area in the second initial image based on the area information of the pupil area of the first initial image to obtain an updated image as a first image, and taking the second initial image as a second image corresponding to the first image; wherein the pupil area of the updated image is the same as the pupil area of the first initial image, and the image areas of the updated image other than the pupil area are the same as the second initial image.

8. The method of claim 7, wherein updating the pupil region in the second initial image based on the region information of the pupil region of the first initial image to obtain an updated image comprises:

for each pixel in the pupil region of the second initial image: determining the target pixel in the pupil region of the first initial image that matches the pixel, and updating the pixel value of the pixel to the pixel value of the target pixel.

Background

With the development of the technology, the imaging technology is more and more widely applied to various industries, such as photographing through a mobile phone, video monitoring through a camera, and the like.

Generally, when lighting conditions are good (the light source is sufficient), the imaging quality of the image generated by an imaging device is good; when lighting conditions are poor (such as at night), the generated image tends to contain more noise and have poor imaging quality. To improve imaging quality under poor lighting conditions, a special light-splitting structure and a dual-light fusion technique can be used, so that the generated image combines the color information of a visible-light image with the signal-to-noise advantage of an infrared image, thereby improving imaging quality.

However, the dual-light fusion technique requires additional supplementary lighting when the imaging device captures an image, which causes pupil bright spots (obvious bright spots at the pupil) in the generated image and degrades the imaging effect.

Disclosure of Invention

The embodiment of the invention aims to provide a pupil bright spot eliminating method to eliminate pupil bright spots in an image and improve the imaging effect of the image. The specific technical scheme is as follows:

in a first aspect, an embodiment of the present invention provides a method for eliminating bright spots of a pupil, where the method includes:

acquiring a target image; wherein the target image comprises a pupil region having a bright pupil spot;

for each partition of the pupil region in the target image, correcting the pixel values of the pixels in the partition to the pixel value mapped to that partition, using a pre-trained target mapping relation between partitions of the pupil region and pixel values; wherein the pixel value mapped to each partition is the pixel value of that partition's pixels after the bright spot is eliminated;

the training method of the target mapping relation comprises the following steps:

acquiring a plurality of first images and second images corresponding to the first images; the first image contains pupil bright spots, the second image is an image without the pupil bright spots, and the first image and the corresponding second image have the same image area except the pupil area;

training a preset mapping relation between partitions of the pupil region and pixel values based on the plurality of first images and the second image corresponding to each first image, to obtain the target mapping relation;

wherein the sum of the loss function values between the pixel value of each pixel in the pupil region of each first image corrected using the target mapping relation and that pixel's ground-truth value is smaller than a preset threshold; the ground-truth value of a pixel is the pixel value of the identically positioned pixel in the pupil region of the second image corresponding to the first image; and each loss function value is determined based on a pixel value loss.

Optionally, in an embodiment, training the preset mapping relation between each partition of the pupil region and a pixel value based on the plurality of first images and the second image corresponding to each first image, to obtain the target mapping relation, includes:

for each first image, training the preset mapping relation between each partition of the pupil region and a pixel value as follows:

determining pixel values mapped to each partition in the pupil area of the first image based on a preset mapping relation between each partition in the pupil area and the pixel values to obtain a corrected first image;

for each pixel in the pupil region of the corrected first image, calculating the loss function value between the pixel value of the pixel and its ground-truth value, as the loss function value of that pixel;

and judging whether the sum of the loss function values of all pixels in the pupil region of the corrected first image is smaller than a preset threshold; when the sum is not smaller than the preset threshold, adjusting the mapping relation between the partitions of the pupil region and the pixel values and performing the next round of training; when the sum is smaller than the preset threshold, taking the current mapping relation as the trained target mapping relation.

Optionally, in an embodiment, calculating, for each pixel in the pupil region of the corrected first image, the loss function value between the pixel value of the pixel and its ground-truth value as the loss function value of the pixel includes:

for each pixel in the pupil region of the corrected first image: calculating the pixel value loss between the pixel value of the pixel and its ground-truth value, determining the loss function value between the pixel value and the ground-truth value based on the determined pixel value loss, and taking that value as the loss function value of the pixel.

Optionally, in an embodiment, determining the loss function value between the pixel value of the pixel and its ground-truth value based on the determined pixel value loss includes:

determining the determined pixel value loss as the loss function value between the pixel value of the pixel and its ground-truth value;

alternatively,

calculating an auxiliary loss of the pixel; determining the loss function value between the pixel value of the pixel and its ground-truth value based on the determined pixel value loss and the calculated auxiliary loss; wherein the auxiliary loss comprises at least one of a color loss and a smoothness loss of the pixel.

Optionally, in an embodiment, the calculating the auxiliary loss of the pixel point includes:

calculating the included angle, in a color space, between the pixel value of the pixel and its ground-truth value, and taking the included angle as the color loss of the pixel; and/or

and calculating the gradient of the pixel point as the smoothness loss of the pixel point.

Optionally, in an embodiment, determining the loss function value between the pixel value of the pixel and its ground-truth value based on the determined pixel value loss and the calculated auxiliary loss includes:

calculating the weighted sum of the determined pixel value loss, the calculated color loss, and the calculated smoothness loss, as the loss function value between the pixel value of the pixel and its ground-truth value.
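A sketch of the auxiliary losses and their weighted combination is given below: the color loss is the included angle between the two pixel vectors in RGB color space, and the smoothness loss is realised here as a forward-difference gradient magnitude averaged over channels (one common reading of "the gradient of the pixel"). The weights in `total_loss` are illustrative; the patent does not publish specific values.

```python
import math

def color_loss(pixel, truth):
    """Color loss: the included angle (radians) between the pixel value and
    its ground-truth value, viewed as vectors in RGB color space."""
    dot = sum(p * t for p, t in zip(pixel, truth))
    norm_p = math.sqrt(sum(v * v for v in pixel))
    norm_t = math.sqrt(sum(v * v for v in truth))
    if norm_p == 0 or norm_t == 0:
        return 0.0                                    # degenerate: no direction
    return math.acos(max(-1.0, min(1.0, dot / (norm_p * norm_t))))

def smoothness_loss(img, y, x):
    """Smoothness loss: forward-difference gradient magnitude at (y, x),
    averaged over color channels."""
    h, w = len(img), len(img[0])
    dy = [a - b for a, b in zip(img[min(y + 1, h - 1)][x], img[y][x])]
    dx = [a - b for a, b in zip(img[y][min(x + 1, w - 1)], img[y][x])]
    return sum(math.sqrt(gy * gy + gx * gx) for gy, gx in zip(dy, dx)) / len(dy)

def total_loss(pix_loss, col_loss, smooth_loss, w=(1.0, 0.5, 0.1)):
    """Weighted sum of the three losses; the weights here are illustrative."""
    return w[0] * pix_loss + w[1] * col_loss + w[2] * smooth_loss

angle_orth = color_loss((1, 0, 0), (0, 1, 0))         # orthogonal colors
grad_mag = smoothness_loss([[(0, 0, 0), (3, 3, 3)],
                            [(4, 4, 4), (0, 0, 0)]], 0, 0)
combined = total_loss(1.0, 2.0, 3.0)
```

Identical pixel vectors give a color loss of zero, so a perfectly corrected pixel contributes nothing through this term.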

Optionally, in an embodiment, the acquiring a plurality of first images and second images corresponding to the first images includes:

acquiring each first image and a second image corresponding to the first image by the following method:

acquiring a first initial image and a second initial image corresponding to the first initial image; the first initial image is acquired by a first camera with supplementary lighting, the second initial image is acquired by a second camera without supplementary lighting, and the first camera and the second camera have the same configuration parameters and are directed to the same object when acquiring the first initial image and the second initial image;

updating the pupil area in the second initial image based on the area information of the pupil area of the first initial image to obtain an updated image as a first image, and taking the second initial image as a second image corresponding to the first image; wherein the pupil area of the updated image is the same as the pupil area of the first initial image, and the image areas of the updated image other than the pupil area are the same as the second initial image.

Optionally, in an embodiment, the updating the pupil area in the second initial image based on the area information of the pupil area of the first initial image to obtain an updated image includes:

for each pixel in the pupil region of the second initial image: determining the target pixel in the pupil region of the first initial image that matches the pixel, and updating the pixel value of the pixel to the pixel value of the target pixel.
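The pair-construction step above can be sketched as follows; `match` is a hypothetical helper standing in for whatever pixel-matching procedure is used, and the images are simplified to single-channel nested lists:

```python
def make_training_pair(first_initial, second_initial, pupil_coords, match):
    """Build a (first image, second image) training pair: copy the flash
    camera's pupil pixels into the no-flash image, so the pair differs
    only inside the pupil region.

    pupil_coords: pupil pixel coordinates in second_initial
    match       : hypothetical helper mapping a coordinate in second_initial
                  to the matched coordinate in first_initial
    """
    first = [row[:] for row in second_initial]      # start from no-flash image
    for (y, x) in pupil_coords:
        my, mx = match(y, x)
        first[y][x] = first_initial[my][mx]         # transplant bright-spot pixel
    return first, second_initial                    # second image is unchanged

# Toy grayscale example: the flash image is all 9s; one pupil pixel at (1, 1).
flash = [[9, 9], [9, 9]]
no_flash = [[0, 1], [2, 3]]
first_img, second_img = make_training_pair(flash, no_flash, [(1, 1)],
                                           match=lambda y, x: (y, x))
```

The result is exactly the property claim 7 requires: outside the pupil region the pair is pixel-for-pixel identical, and only the pupil pixels carry the bright spot.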

In a second aspect, an embodiment of the present invention provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another through the communication bus;

a memory for storing a computer program;

a processor adapted to perform the method steps of the first aspect when executing a program stored in the memory.

In a third aspect, the present invention provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the method steps of the first aspect are implemented.

The embodiment of the invention has the following beneficial effects:

In the pupil bright spot elimination method provided by the embodiment of the present invention, the pre-trained target mapping relation between partitions of the pupil region and pixel values may be used to correct the pixel values of the pixels in each partition to the pixel value mapped to that partition. Since the pixel value mapped to each partition is the pixel value of that partition's pixels after the bright spot is eliminated, the corrected pupil region is a pupil region with the bright spot eliminated, and a target image free of pupil bright spots is obtained. Therefore, this scheme can eliminate the pupil bright spots in an image and improve its imaging effect.

Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.

Drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other embodiments can be obtained by using the drawings without creative efforts.

Fig. 1 is a flowchart of a pupil bright spot elimination method according to an embodiment of the present invention;

fig. 2 is another flowchart of a pupil bright spot elimination method according to an embodiment of the present invention;

fig. 3 is another flowchart of a pupil bright spot elimination method according to an embodiment of the present invention;

fig. 4 is another flowchart of a pupil bright spot elimination method according to an embodiment of the present invention;

fig. 5 is another flowchart of a pupil bright spot elimination method according to an embodiment of the present invention;

fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

In order to eliminate the pupil bright spots in the image and improve the imaging effect of the image, the embodiment of the invention provides a pupil bright spot elimination method.

First, a method for eliminating bright spots of a pupil provided in an embodiment of the present invention is described below.

The embodiment of the invention can be applied to various electronic devices with data processing capability, such as personal computers, servers, and mobile phones. Moreover, the pupil bright spot elimination method provided by the embodiment of the invention can be implemented in software, hardware, or a combination of the two.

The method for eliminating the bright spots of the pupil provided by the embodiment of the invention can comprise the following steps:

acquiring a target image; wherein the target image comprises a pupil region having a bright pupil spot;

for each partition of the pupil region in the target image, correcting the pixel values of the pixels in the partition to the pixel value mapped to that partition, using a pre-trained target mapping relation between partitions of the pupil region and pixel values; wherein the pixel value mapped to each partition is the pixel value of that partition's pixels after the bright spot is eliminated;

the training method of the target mapping relation comprises the following steps:

acquiring a plurality of first images and second images corresponding to the first images; the first image contains pupil bright spots, the second image is an image without the pupil bright spots, and the first image and the corresponding second image have the same image area except the pupil area;

training a preset mapping relation between partitions of the pupil region and pixel values based on the plurality of first images and the second image corresponding to each first image, to obtain the target mapping relation;

wherein the sum of the loss function values between the pixel value of each pixel in the pupil region of each first image corrected using the target mapping relation and that pixel's ground-truth value is smaller than a preset threshold; the ground-truth value of a pixel is the pixel value of the identically positioned pixel in the pupil region of the second image corresponding to the first image; and each loss function value is determined based on a pixel value loss.

In the pupil bright spot elimination method provided by the embodiment of the present invention, the pre-trained target mapping relation between partitions of the pupil region and pixel values may be used to correct the pixel values of the pixels in each partition to the pixel value mapped to that partition. Since the pixel value mapped to each partition is the pixel value of that partition's pixels after the bright spot is eliminated, the corrected pupil region is a pupil region with the bright spot eliminated, and a target image free of pupil bright spots is obtained. Therefore, this scheme can eliminate the pupil bright spots in an image and improve its imaging effect.

The following describes the pupil bright spot elimination method provided by the embodiment of the present invention in detail with reference to the accompanying drawings.

As shown in fig. 1, a method for eliminating bright spots of a pupil provided in an embodiment of the present invention may include the following steps:

s101, acquiring a target image; wherein the target image comprises a pupil region having a bright pupil spot;

the pupil bright spot is a phenomenon that the pupil area of the target object is obviously reflected in the imaging process due to the self-contained supplementary lighting of the imaging equipment, so that the pupil area of the target object in the finally generated image is represented as a circular bright spot. The pupil area with the bright pupil spot in the target graph may be a human pupil area, or may also be an animal pupil area, for example, a cat pupil area, which is not specifically limited in this embodiment of the present invention.

The target image may be an image captured by a personal consumer electronic device that includes a pupil region with pupil bright spots; for example, in a scene with low ambient brightness, a smartphone shoots a target object with its flash enabled. Alternatively, the target image may be an image collected by a monitoring device that includes a pupil region with pupil bright spots; for example, when ambient brightness is low, the monitoring device captures the target object with supplementary lighting.

In one implementation, the target image may be acquired from an image acquisition device, such as a smartphone, a monitoring device, or another electronic device with an image acquisition function. After the device finishes acquisition, the target image can be read from it.

Optionally, in another implementation, the target image may be obtained by: an image of a pupil region having bright spots of the pupil acquired by the image acquisition device is acquired, and a face image is separated from the image as a target image. For example, if the image 1 includes the person a and the pupil area of the person a in the image 1 has bright pupil spots, the face position of the person a in the image 1 may be recognized first, and then the face image of the person a may be separated from the image 1. Note that the separated face image needs to include a pupil area having a bright pupil spot.

Optionally, in another implementation, the target image may also be read from a database. The database stores an image including a pupil area having a pupil flare in advance, and when the pupil flare is to be eliminated, the image including the pupil area having the pupil flare can be read from the database as a target image.

S102, for each partition of the pupil region in the target image, correcting the pixel values of the pixels in the partition to the pixel value mapped to that partition, using the pre-trained target mapping relation between partitions of the pupil region and pixel values; wherein the pixel value mapped to each partition is the pixel value of that partition's pixels after the bright spot is eliminated.

In a pupil region affected by a pupil bright spot, the region appears as a white bright spot; that is, the pixel values of the pixels in the region are all at the maximum value (255 for an 8-bit image), i.e., white. Since such a pupil region is completely white, eliminating the pupil bright spot can essentially be understood as: modifying the pixel values of the all-white pixels in the affected pupil region to the pixel values of the pixels of a normal pupil region.

In a normal pupil region, the distribution of pixel values is regular. Taking the human eye as an example, in one example of a normal pupil region, the middle region (i.e., the pupil center) is black and the peripheral region is grayish brown (this is merely illustrative; finer partitions can be defined as needed). Therefore, the pre-trained target mapping relation between partitions of the pupil region and pixel values can be used: for each partition of the pupil region in the target image, the pixel values of the pixels in the partition are corrected to the pixel value mapped to that partition. Because the pixel value mapped to each partition is the pixel value of that partition's pixels after the bright spot is eliminated, i.e., the pixel value of the partition in a normal pupil region, applying the target mapping relation eliminates the pupil bright spot in the pupil region.

It should be noted that the pixel value mapped to a partition may be black, dark gray, etc.; the finally obtained pupil region should match a pupil region without bright spots.
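The correction step S102 can be sketched as follows: each pupil pixel is overwritten with the value its partition maps to. The partition layout, the mask, and the mapped values in the toy example are hypothetical placeholders, not values taken from the patent.

```python
import numpy as np

def correct_pupil(image, pupil_mask, partition_of, target_mapping):
    """Replace each pupil pixel's value with the value its partition maps to.

    image         : H x W x 3 uint8 array
    pupil_mask    : H x W bool array marking the pupil region
    partition_of  : H x W int array giving each pixel's partition index
    target_mapping: dict {partition index: (R, G, B) mapped pixel value}
    """
    corrected = image.copy()
    for part, value in target_mapping.items():
        sel = pupil_mask & (partition_of == part)   # pixels of this partition
        corrected[sel] = value                      # overwrite with mapped value
    return corrected

# Toy example: 4x4 all-white image whose 2x2 centre is the "pupil",
# split into two partitions with hypothetical mapped values.
img = np.full((4, 4, 3), 255, dtype=np.uint8)       # bright spot: all white
mask = np.zeros((4, 4), dtype=bool); mask[1:3, 1:3] = True
parts = np.zeros((4, 4), dtype=int); parts[1, 2] = parts[2, 1] = 1
mapping = {0: (0, 0, 0), 1: (40, 50, 60)}           # hypothetical mapped values
out = correct_pupil(img, mask, parts, mapping)
```

Pixels outside the pupil mask are untouched, which matches the requirement that only the pupil region is corrected.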

The specific training process of the target mapping relationship will be described in detail later, and will not be described herein again.

In the scheme provided by this embodiment, the pre-trained target mapping relation between partitions of the pupil region and pixel values may be used to correct the pixel values of the pixels in each partition to the pixel value mapped to that partition. Since the pixel value mapped to each partition is the pixel value of that partition's pixels after the bright spot is eliminated, the corrected pupil region is a pupil region with the bright spot eliminated, and a target image free of pupil bright spots is obtained. Therefore, this scheme can eliminate the pupil bright spots in an image and improve its imaging effect.

Optionally, as shown in fig. 2, in an embodiment of the present invention, a method for eliminating bright spots of a pupil is further provided, and on the basis of fig. 1, the method further includes the following step of training to obtain a target mapping relationship:

s201, acquiring a plurality of first images and second images corresponding to the first images;

the first image contains pupil bright spots, the second image is an image without the pupil bright spots, and the first image and the corresponding second image are identical in image areas except the pupil area. That is, the first image is acquired such that the pixel values of the pixel points in the image area other than the pupil area having the pupil flare are the same as those of the corresponding second image, the first image is distinguished from the corresponding second image only in the pupil area having the pupil flare, the pupil area of the first image has the pupil flare, and the pupil area of the second image does not have the pupil flare.

S202, training a preset mapping relation between partitions of the pupil region and pixel values based on the plurality of first images and the second image corresponding to each first image, to obtain the target mapping relation;

the preset mapping relationship between the partition of the pupil area and the pixel value may be a mapping function in which each mapping parameter is a null value. When the target mapping relationship needs to be obtained through training, each first image and a second image corresponding to the first image can be used as a set of training data. For each group of training data, the first image may be corrected by using a mapping relationship between a preset pupil region partition and a pixel value to obtain a corrected first image, and then differences between the corrected first image and a corresponding pupil region in the second image are compared, where a smaller difference indicates a better correction effect, and a larger difference indicates a poorer correction effect, and therefore, the mapping relationship between the preset pupil region partition and the pixel value may be adjusted based on the differences between the corrected first image and the corresponding pupil region in the second image to finally obtain a target mapping relationship.

The sum of the loss function values between the pixel value of each pixel in the pupil region of each first image corrected using the target mapping relation and that pixel's ground-truth value is smaller than a preset threshold; the ground-truth value of a pixel is the pixel value of the identically positioned pixel in the pupil region of the second image corresponding to the first image; and each loss function value is determined based on a pixel value loss.

The loss function value between a corrected pixel's value and its ground-truth value reflects the difference between that pixel in the corrected first image and the identically positioned pixel in the pupil region of the corresponding second image. Hence, when the sum of these loss function values over the pupil region of each corrected first image is smaller than the preset threshold, the corrected pupil region is sufficiently close to the bright-spot-free pupil region of the second image, and the pupil bright spots in the first images corrected with the target mapping relation are eliminated.

In the scheme provided by this embodiment, a pre-trained target mapping relationship between the partitions of the pupil area and pixel values may be used to correct the pixel value of the pixel points in each partition to the pixel value mapped by that partition. Since the pixel value mapped by each partition is the pixel value of that partition's pixel points after the bright spots are eliminated, the corrected pupil area is a pupil area with the bright spots eliminated, and a target image with the pupil bright spots eliminated can thus be obtained. Therefore, this scheme can eliminate the pupil bright spots in an image and improve the imaging effect of the image.
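The per-partition correction described above can be sketched as follows. This is a minimal illustration assuming the pupil area has already been segmented into partitions; the function name and the dictionary layout are hypothetical, not taken from the embodiments:

```python
def correct_pupil_region(image, pupil_partitions, partition_to_value):
    """Correct each pupil pixel to the value mapped for its partition.

    image: 2-D list of pixel values; pupil_partitions: dict mapping a
    partition id to the (row, col) positions it covers; partition_to_value:
    the trained target mapping relationship (partition id -> pixel value).
    """
    corrected = [row[:] for row in image]              # non-pupil pixels unchanged
    for partition_id, positions in pupil_partitions.items():
        mapped_value = partition_to_value[partition_id]  # bright-spot-free value
        for (r, c) in positions:
            corrected[r][c] = mapped_value
    return corrected
```

Pixels outside every partition keep their original values, matching the requirement that only the pupil area is modified.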

Based on the embodiment of fig. 2, as shown in fig. 3, an embodiment of the present invention further provides a method for eliminating bright spots of a pupil, where the step S202 may include:

For each first image, the preset mapping relationship between each partition in the pupil area and the pixel value is trained based on the following steps:

S2021: determining the pixel value mapped to each partition in the pupil area of the first image based on the preset mapping relationship between each partition in the pupil area and the pixel value, to obtain a corrected first image;

the corrected first image is an image obtained by processing the first image containing the pupil area with the bright spots of the pupil in the mapping relation of the training stage. It will be understood by those skilled in the art that the pupil bright spots in the pupil area in the corrected first image may or may not be completely eliminated.

S2022: for each pixel point in the pupil area of the corrected first image, calculating the loss function value between the pixel value of the pixel point and its true value as the loss function value of the pixel point;

the first image includes a pupil region with pupil bright spots, the second image corresponding to the first image does not include the pupil region with pupil bright spots, and the first image after correction obtained by mapping the first image by using a preset mapping relation between each partition in the pupil region and a pixel value is ideally the same as the second image corresponding to the first image, and the loss function value is a parameter for judging whether the first image after correction is the same as the second image corresponding to the first image.

Therefore, for each pixel point in the pupil area of the corrected first image, the loss function value between its pixel value and its true value can be calculated. The specific calculation method will be described in detail later and is not repeated here.

S2023: judging whether the sum of the loss function values of all the pixel points in the pupil area of the corrected first image is smaller than a preset threshold; when the sum is not smaller than the preset threshold, adjusting the mapping relationship between each partition in the pupil area and the pixel value and performing the next round of training; when the sum is smaller than the preset threshold, obtaining the trained target mapping relationship.

The criterion for judging whether the mapping relationship between each partition in the pupil area and the pixel value is usable is whether the sum of the loss function values of the pixel points in the pupil area of the first image is smaller than the preset threshold. That is, the sum of the loss function values of the pixel points in the pupil area of the first image is obtained first, and it is then determined whether the calculated sum is smaller than the preset threshold.

When the sum is not smaller than the preset threshold, the corrected pupil area of the first image has not reached the expected result, so the mapping relationship between each partition in the pupil area and the pixel value is adjusted and the next round of training is carried out.

When the sum is smaller than the preset threshold, the corrected pupil area of the first image has reached the expected result, so the training ends and the trained target mapping relationship is obtained.
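The loop formed by S2021 to S2023 can be sketched as follows. This is a minimal illustration that assumes a squared pixel-value loss and a plain gradient-descent adjustment rule; the embodiments do not prescribe a specific adjustment method, and all names are hypothetical:

```python
def train_target_mapping(second_images, partitions, mapping,
                         threshold, lr=0.01, max_iters=10000):
    """Adjust per-partition mapped values until the summed loss over all
    training pairs drops below the preset threshold (S2021-S2023).

    Only the second images' true values enter the loss here: in the
    corrected first image every pupil pixel of a partition equals that
    partition's mapped value, so the first images fix which pixels lie
    in the pupil area (assumed precomputed in `partitions`).
    """
    total = float("inf")
    for _ in range(max_iters):
        total = 0.0
        grads = {pid: 0.0 for pid in mapping}
        for second in second_images:                 # one pair per training sample
            for pid, pixels in partitions.items():
                for (r, c) in pixels:
                    diff = mapping[pid] - second[r][c]  # corrected minus true value
                    total += diff * diff                # squared pixel value loss
                    grads[pid] += 2.0 * diff
        if total < threshold:                        # S2023: training finished
            return mapping, total
        for pid in mapping:                          # adjust mapping, train again
            mapping[pid] -= lr * grads[pid]
    return mapping, total
```

With a single training pair, the mapped value converges toward the true pixel value of the bright-spot-free second image.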

In the scheme provided by this embodiment, the target mapping relationship is generated through training on the first images and their corresponding second images, so that when the pupil bright spots in an image need to be eliminated, an image with the pupil bright spots eliminated can be obtained using the target mapping relationship. Therefore, the scheme provided by this embodiment lays the foundation for eliminating pupil bright spots and improving the imaging quality of the image.

Based on the embodiment of fig. 3, as shown in fig. 4, an embodiment of the present invention further provides a method for eliminating bright spots of a pupil, where the above-mentioned S2022 may include:

S2022A: for each pixel point in the pupil area of the corrected first image, calculating the pixel value loss between the pixel value of the pixel point and its true value, and determining, based on the determined pixel value loss, the loss function value between the pixel value of the pixel point and its true value as the loss function value of the pixel point.

In one implementation, the difference between the pixel value of the pixel point and its true value may be calculated as the pixel value loss.

Illustratively, if the pixel value of pixel point a is 225 and the true value of pixel point a is 125, the difference between 225 and 125, namely 100, is taken as the pixel value loss of pixel point a.

Optionally, in another implementation, the pixel value loss of each pixel point may also be determined according to the following formula:

L_r^i = (f(x_i) − Y_i)²

wherein L_r^i is the pixel value loss of pixel point i in the corrected first image, Y_i is the pixel value true value of pixel point i, and f(x_i) is the pixel value of pixel point i.

Illustratively, if the pixel value of pixel point a is 225 and the true value of pixel point a is 125, then L_r^a = (225 − 125)² = 10000.

Optionally, the pixel value loss of the pixel point under each color component can be calculated for each color component of the pixel point. For example, if the pixel B includes three color components, R, G and B, respectively, then when the pixel value loss of the pixel B is calculated, the pixel value loss of the pixel B under R, G and B color components can be calculated, respectively.
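A minimal sketch of the pixel value loss, assuming the squared-difference form given above; the function names and the per-channel helper are illustrative, not part of the embodiments:

```python
def pixel_value_loss(predicted, truth):
    """Pixel value loss L_r: squared difference between the corrected
    pixel value f(x_i) and the true value Y_i (assumed form)."""
    return (predicted - truth) ** 2

def pixel_value_loss_rgb(pred_rgb, true_rgb):
    """Pixel value loss computed separately under each color component,
    as described for pixel point B above."""
    return [pixel_value_loss(p, t) for p, t in zip(pred_rgb, true_rgb)]
```

For pixel point a above, `pixel_value_loss(225, 125)` gives 10000; the RGB helper returns one loss per color component.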

Optionally, according to different requirements, determining the loss function value between the pixel value of the pixel point and its true value based on the determined pixel value loss may be implemented in at least the following ways:

The first implementation: the determined pixel value loss is directly determined as the loss function value between the pixel value of the pixel point and its true value.

The second implementation: calculating an auxiliary loss of the pixel point, and determining the loss function value between the pixel value of the pixel point and its true value based on the determined pixel value loss and the calculated auxiliary loss; wherein the auxiliary loss includes at least one of a color loss and a smoothness loss of the pixel point.

The calculation of the auxiliary loss of the pixel point can be realized by adopting any one of the following two modes:

mode 1: calculating an included angle between the pixel value of the pixel point and the true value of the pixel point in a color space, and taking the included angle as the color loss of the pixel point;

optionally, an included angle between the pixel point and the true value of the pixel point in the color space may be calculated according to the following formula:

L_c^i = ∠((f(x_i))_p, (Y_i)_p)

wherein L_c^i is the color loss of pixel point i in the corrected first image, the subscript (·)_p denotes a pixel, ∠(·,·) is the operator that computes the angle between two vectors, (f(x_i))_p is the three-dimensional vector of pixel point i in the color space, and (Y_i)_p is the three-dimensional vector of the true value of pixel point i in the color space.

Illustratively, if the pixel value of pixel point C is (R:255, G:128, B:128), then (f(x_C))_p is the three-dimensional vector 1: (255, 128, 128); if the true value of pixel point C is (R:200, G:120, B:150), then (Y_C)_p is the three-dimensional vector 2: (200, 120, 150). L_c^C is then the angle between vector 1 and vector 2.
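Mode 1 can be sketched as follows; the angle is computed from the dot product of the two color vectors, and the function name is illustrative:

```python
import math

def color_loss(pred_rgb, true_rgb):
    """Color loss L_c: angle in radians between the corrected pixel color
    (f(x_i))_p and the true color (Y_i)_p, viewed as 3-D vectors."""
    dot = sum(p * t for p, t in zip(pred_rgb, true_rgb))
    norm_pred = math.sqrt(sum(p * p for p in pred_rgb))
    norm_true = math.sqrt(sum(t * t for t in true_rgb))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    cos_angle = max(-1.0, min(1.0, dot / (norm_pred * norm_true)))
    return math.acos(cos_angle)
```

For pixel point C above, `color_loss([255, 128, 128], [200, 120, 150])` is the angle between vector 1 and vector 2 (about 0.16 radians); identical colors give an angle of 0.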

Mode 2: and calculating the gradient of the pixel point as the smoothness loss of the pixel point.

The smoothness loss is used to smooth the pupil area of the corrected first image and avoid unnaturally abrupt gradients in the pupil area after the bright spots are removed. It is defined as:

L_s^i = Σ_p Σ_c [ w_{x,c}^p (∂_x S_p^c)² + w_{y,c}^p (∂_y S_p^c)² ]

wherein L_s^i is the smoothness loss of pixel point i in the corrected first image, p is a pixel point, S_p is the illumination of pixel p, c is the color channel, ∂_x and ∂_y are the partial derivatives in the horizontal and vertical directions in image space, and w_{x,c}^p and w_{y,c}^p are the smoothness weights of the spatial variation:

w_{x,c}^p = (|∂_x log I_p^c|^θ + ε)^(−1),  w_{y,c}^p = (|∂_y log I_p^c|^θ + ε)^(−1)

wherein log I is the logarithmic image of the input image, θ is a parameter controlling the sensitivity to image gradients, and ε is a small constant that prevents division by zero. Optionally, in one implementation, θ may be set to 1.2 and ε may be set to 0.0001.
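A sketch of mode 2 for a single color channel, following the weighted-gradient form above. Forward differences stand in for the partial derivatives (with zero gradient at the image border), and all names are illustrative:

```python
def smoothness_weights(log_channel, theta=1.2, eps=1e-4):
    """Per-pixel weights w = (|d log I|^theta + eps)^-1, horizontal and vertical."""
    h, w = len(log_channel), len(log_channel[0])
    wx = [[0.0] * w for _ in range(h)]
    wy = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            gx = log_channel[r][c + 1] - log_channel[r][c] if c + 1 < w else 0.0
            gy = log_channel[r + 1][c] - log_channel[r][c] if r + 1 < h else 0.0
            wx[r][c] = 1.0 / (abs(gx) ** theta + eps)
            wy[r][c] = 1.0 / (abs(gy) ** theta + eps)
    return wx, wy

def smoothness_loss(S, wx, wy):
    """Sum over pixels of wx*(dx S)^2 + wy*(dy S)^2 for one channel of S."""
    h, w = len(S), len(S[0])
    loss = 0.0
    for r in range(h):
        for c in range(w):
            gx = S[r][c + 1] - S[r][c] if c + 1 < w else 0.0
            gy = S[r + 1][c] - S[r][c] if r + 1 < h else 0.0
            loss += wx[r][c] * gx * gx + wy[r][c] * gy * gy
    return loss
```

A perfectly flat illumination map incurs zero smoothness loss, while large weights (flat regions of the input) penalize any remaining gradient heavily.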

Optionally, in another implementation, determining the loss function value between the pixel value of the pixel point and its true value based on the determined pixel value loss may include:

and calculating the weighted sum of the determined pixel value loss, the calculated color loss and the calculated smoothness loss to serve as a loss function value of the pixel point and the true value of the pixel point.

The loss function value of the pixel point and the true value of the pixel point can be calculated according to the following formula:

Loss = (1/N) Σ_{i=1}^{N} (ω_r L_r^i + ω_c L_c^i + ω_s L_s^i)

wherein L_r^i is the pixel value loss of pixel point i in the corrected first image, L_c^i is the color loss of pixel point i in the corrected first image, L_s^i is the smoothness loss of pixel point i in the corrected first image, ω_r, ω_c and ω_s are preset weights, and N is the number of pixel points in the sample image after the bright spots are eliminated.
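The weighted sum can be sketched as follows; the default weight values are placeholders, since the embodiments leave ω_r, ω_c and ω_s as preset parameters:

```python
def total_loss(pixel_losses, color_losses, smooth_losses,
               w_r=1.0, w_c=0.1, w_s=0.1):
    """Combine per-pixel losses as (1/N) * sum_i(w_r*Lr_i + w_c*Lc_i + w_s*Ls_i).
    The default weights are assumptions, not values from the embodiments."""
    n = len(pixel_losses)
    return sum(w_r * lr + w_c * lc + w_s * ls
               for lr, lc, ls in zip(pixel_losses, color_losses, smooth_losses)) / n
```

With zero color and smoothness losses, the result reduces to the mean pixel value loss over the N pixel points.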

In the scheme provided by this embodiment, the target mapping relationship is generated through training on the first images and their corresponding second images, so that when the pupil bright spots in an image need to be eliminated, an image with the pupil bright spots eliminated can be obtained using the target mapping relationship. Therefore, the scheme provided by this embodiment lays the foundation for eliminating pupil bright spots and improving the imaging quality of the image.

Based on the embodiment of fig. 2, as shown in fig. 5, an embodiment of the present invention further provides a method for eliminating bright spots of pupils, where in step S201, each first image and the second image corresponding to it may be acquired through the following steps:

S2011: acquiring a first initial image and a second initial image corresponding to the first initial image; the first initial image is acquired by a first camera with supplementary lighting, the second initial image is acquired by a second camera without supplementary lighting, and the first camera and the second camera have the same configuration parameters and are directed at the same object when acquiring the first initial image and the second initial image;

the second initial image can be acquired by a second camera without supplementary lighting, namely the second initial image is acquired by the second camera under the condition that the supplementary lighting lamp is turned off. Because the pupil bright spots may appear in the image collected by the camera only when the fill-in light is turned on, the pupil bright spots do not appear in the second initial image collected by the second camera without fill-in light, that is, the second initial image does not contain the pupil bright spots and can be used as the second image. Correspondingly, the first initial image acquired by the first camera with supplementary lighting comprises a pupil area with a bright pupil spot.

For example, two cameras with the same configuration parameters are used to capture images of a target person. Optionally, the two cameras may be placed in parallel at a height of about 3 meters, with the target person 10-15 meters away, and the focal lengths of both cameras adjusted so that the human face is clear in both images and the inter-pupil distance is appropriate. One camera turns its fill light off, so the face image it captures serves as the second initial image; the other camera turns its fill light on, so the face image it captures contains pupil bright spots and serves as the first initial image corresponding to the second initial image.

S2012, updating the pupil region in the second initial image based on the region information of the pupil region of the first initial image to obtain an updated image as a first image, and using the second initial image as a second image corresponding to the first image; wherein the pupil area of the updated image is the same as the pupil area of the first initial image, and the image areas of the updated image other than the pupil area are the same as the second initial image.

The region information may be pixel information of each pixel point in the pupil area, such as the pixel value and the position of the pixel point. Optionally, the pupil area in the second initial image may be updated as follows:

and aiming at each pixel point in the pupil area in the second initial image, determining a target pixel point matched with the pixel point in the pupil area of the first initial image, and updating the pixel value of the pixel point to be the pixel value of the target pixel point.

Illustratively, the pixel point A is contained in the pupil area of the second initial image and has a pixel value of (R:123, G:0, B:255), and the target pixel point in the pupil area of the first initial image, which is matched with the position of the pixel point A, is the pixel point B and has a pixel value of (R:255, G:128, B:128), the pixel value of the pixel point A is modified from (R:123, G:0, B:255) to (R:255, G:128, B: 128).
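Step S2012 can be sketched as follows, assuming the matched pupil pixel positions have already been determined; the function name and data layout are illustrative:

```python
def build_training_pair(first_initial, second_initial, pupil_pixels):
    """S2012: produce (first image, second image) from the two initial images.

    The first image equals the second initial image everywhere except the
    pupil area, which is copied from the first initial image (with bright
    spots); the second initial image itself serves as the second image.
    """
    first_image = [row[:] for row in second_initial]
    for (r, c) in pupil_pixels:                      # matched target pixel points
        first_image[r][c] = first_initial[r][c]      # pupil values with bright spots
    return first_image, second_initial
```

With pixel point A at position (0, 0) as in the example above, the value (R:123, G:0, B:255) from the second initial image is replaced by (R:255, G:128, B:128) from the first initial image, while all other pixels stay as in the second initial image.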

In the scheme provided by this embodiment, a pre-trained target mapping relationship between the partitions of the pupil area and pixel values may be used to correct the pixel value of the pixel points in each partition to the pixel value mapped by that partition. Since the pixel value mapped by each partition is the pixel value of that partition's pixel points after the bright spots are eliminated, the corrected pupil area is a pupil area with the bright spots eliminated, and a target image with the pupil bright spots eliminated can thus be obtained. Therefore, this scheme can eliminate the pupil bright spots in an image and improve the imaging effect of the image.

Optionally, in an embodiment, the target mapping relationship may be a trained target pupil bright spot elimination model. Correspondingly, the preset mapping relationship between each partition in the pupil area and the pixel value may be a pupil bright spot elimination model to be trained.

In one embodiment, the pupil bright spot elimination model may be a CNN (Convolutional Neural Networks) model. Optionally, it may be a CNN model with an encoder-decoder structure, whose processing consists of two stages: an encoding stage that extracts features from the input image, and a decoding stage that restores the image using the extracted features.

Optionally, in an implementation, the pupil bright spot elimination model may use a ResBlock (residual block) structure. Using a ResBlock network structure as the basic structure amplifies the advantages of CNN and makes training feasible. The ResBlock structure contains two convolutional layers: one path passes the input through the first convolutional layer Conv1, a ReLU activation function, and the second convolutional layer Conv2; the other path adds the input directly to the output of the first path to obtain the final output of the network structure.
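The two-path ResBlock computation can be sketched as follows. The `conv` here is a stand-in (elementwise scaling) rather than a real convolution, so this only illustrates the data flow of identity path plus Conv1 → ReLU → Conv2:

```python
def conv(x, weight):
    """Stand-in for a convolutional layer: elementwise scaling only."""
    return [v * weight for v in x]

def relu(x):
    """ReLU activation: zero out negative values."""
    return [max(0.0, v) for v in x]

def res_block(x, w1=1.0, w2=1.0):
    """ResBlock: output = x + Conv2(ReLU(Conv1(x))).
    The identity path skips both convolutional layers."""
    out = conv(relu(conv(x, w1)), w2)
    return [a + b for a, b in zip(x, out)]
```

The residual (identity) connection means the block can never lose the input signal entirely, which is what makes deeper stacks of such blocks trainable.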

Optionally, in an implementation, an encoder-decoder network structure including 18 convolutional layers is provided, as shown in table 1:

TABLE 1

Optionally, in an embodiment, as known to those skilled in the art, the resolution of the image input into a neural network model is fixed for that model, and the pupil bright spot elimination model mentioned in the embodiments of the present invention is no exception.

Since the resolution of the image input into the pupil bright spot elimination model is fixed, while the acquired target image may not have that fixed resolution, the target image needs to be scaled to a resolution suitable for the pupil bright spot elimination model before being input into it.

For example, if the image input into the pupil bright spot elimination model needs to have a resolution of 256 × 256 and the resolution of the acquired target image is 1280 × 1280, the target image needs to be scaled down by a factor of 5, from 1280 × 1280 to 256 × 256, and the scaled target image is then input into the pupil bright spot elimination model.
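The pre-model scaling step can be sketched with a nearest-neighbour resize; the embodiments do not specify the interpolation method, so this is only an illustration:

```python
def resize_nearest(image, out_h, out_w):
    """Nearest-neighbour resize: each output pixel takes the value of the
    source pixel its coordinates map back to."""
    in_h, in_w = len(image), len(image[0])
    return [[image[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]
```

Scaling a 1280 × 1280 target image down for the model would be `resize_nearest(image, 256, 256)`; the same function upscales when `out_h`/`out_w` exceed the input size.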

As known to those skilled in the art, zooming an image reduces its definition. Therefore, in one implementation, after the pupil bright spot elimination model outputs the target image with the pupil bright spots eliminated, the output target image may be further processed through the following steps 1 to 4 to improve its definition:

step 1: amplifying and restoring the resolution of the target image output by the pupil bright spot elimination model to obtain a resolution restored image; for example, the resolution of the target image output by the pupil speckle reduction model may be 256 × 256, and at this time, the target image may be enlarged and restored to 1280 × 1280.

Step 2: performing RGB-to-YUV color channel conversion on the output target image to obtain an image in a YUV format; performing Gaussian filtering on the Y component of the image in the YUV format to obtain a base layer image;

Step 3: obtaining a detail layer image from the difference between the Y component of the output target image and the base layer image;

Step 4: amplifying the pixel values of the detail layer image by a preset factor, and superimposing the amplified detail layer on the resolution-restored image to obtain an image with improved definition in which the pupil bright spots are eliminated.
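Steps 2 to 4 can be sketched on a single row of luma values. A box blur stands in for the Gaussian filter, and the gain factor is an assumed parameter; names are illustrative:

```python
def rgb_to_y(pixel):
    """BT.601 luma from an (R, G, B) pixel: the Y component of YUV."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def box_blur_1d(signal, radius=1):
    """Simple blur as a stand-in for the Gaussian filter forming the base layer."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def enhance_detail(y_row, gain=2.0, radius=1):
    """Steps 2-4 on one row: base layer, detail = Y - base, amplify, re-add."""
    base = box_blur_1d(y_row, radius)
    detail = [y - b for y, b in zip(y_row, base)]
    return [b + gain * d for b, d in zip(base, detail)]
```

A flat row passes through unchanged (no detail to amplify), while an isolated bright value comes out sharpened relative to its blurred base.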

An embodiment of the present invention further provides an electronic device, as shown in fig. 6, including a processor 601, a communication interface 602, a memory 603, and a communication bus 604, where the processor 601, the communication interface 602, and the memory 603 complete mutual communication through the communication bus 604,

a memory 603 for storing a computer program;

the processor 601 is configured to implement the above-mentioned steps of the method when executing the program stored in the memory 603.

The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.

The communication interface is used for communication between the electronic equipment and other equipment.

The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.

The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components.

In another embodiment of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored; when executed by a processor, the computer program implements any of the above-mentioned pupil bright spot elimination methods.

In another embodiment of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to execute any of the above-mentioned pupil bright spot elimination methods.

In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.

It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus, device, and system embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference may be made to some descriptions of the method embodiments for relevant points.

The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
