Method, apparatus, device and storage medium for detecting image sharpness
1. A method for detecting image sharpness, comprising the following steps:
determining a texture richness value of a target image, and determining, according to the texture richness value of the target image, a texture category to which the target image belongs;
selecting a target sharpness detection model from candidate sharpness detection models according to the texture category to which the target image belongs; and
performing sharpness detection on the target image based on the target sharpness detection model.
2. The method of claim 1, wherein determining the texture richness value of the target image comprises:
performing texture richness detection on the target image based on a texture detection model to obtain the texture richness value of the target image;
wherein the texture detection model is obtained by model training with a first image and a second image in a sample image pair as inputs and a first texture comparison probability or a second texture comparison probability between the first image and the second image as a label value.
3. The method of claim 2, further comprising, before performing texture richness detection on the target image based on the texture detection model:
using the first image as the input of a first model in a Siamese network to obtain a first texture richness value output by the first model, and using the second image as the input of a second model in the Siamese network to obtain a second texture richness value output by the second model;
determining a model output probability according to the richness difference between the first texture richness value and the second texture richness value; and
updating model parameters of the first model and the second model according to the probability difference between the label value and the model output probability, and using the trained first model or second model as the texture detection model.
4. The method of claim 2, further comprising, before performing model training with the texture comparison probabilities between the first image and the second image as labels:
determining a texture score of the first image and a texture score of the second image according to a texture richness comparison result between the first image and the second image;
updating the texture richness value of the first image according to the texture richness value of the first image, the texture score of the first image and the first texture comparison probability, and updating the texture richness value of the second image according to the texture richness value of the second image, the texture score of the second image and the second texture comparison probability; and
updating the first texture comparison probability and the second texture comparison probability, respectively, according to the updated texture richness value of the first image and the updated texture richness value of the second image.
5. The method of claim 4, wherein updating the texture richness value of the first image according to the texture richness value of the first image, the texture score of the first image and the first texture comparison probability, and updating the texture richness value of the second image according to the texture richness value of the second image, the texture score of the second image and the second texture comparison probability, comprises:
updating the texture richness value of the first image and the texture richness value of the second image by the following formulas:
R'_A = R_A + K × (S_A - P_{B>A});
R'_B = R_B + K × (S_B - P_{A>B});
where R_A and R_B are the texture richness value of the first image and the texture richness value of the second image, respectively; S_A and S_B are the texture score of the first image and the texture score of the second image, respectively; P_{A>B} and P_{B>A} are the first texture comparison probability and the second texture comparison probability, respectively; K is a reward factor; and R'_A and R'_B are the updated texture richness value of the first image and the updated texture richness value of the second image, respectively.
6. The method of claim 4, wherein updating the first texture comparison probability and the second texture comparison probability, respectively, according to the updated texture richness value of the first image and the updated texture richness value of the second image comprises:
updating the first texture comparison probability and the second texture comparison probability, respectively, by the following formulas:
P'_{A>B} = 1 / (1 + 10^{(R'_B - R'_A)/M});
P'_{B>A} = 1 / (1 + 10^{(R'_A - R'_B)/M});
where R'_A and R'_B are the updated texture richness value of the first image and the updated texture richness value of the second image, respectively; M is a difference measurement factor; and P'_{A>B} and P'_{B>A} are the updated first texture comparison probability and the updated second texture comparison probability, respectively.
7. The method of any one of claims 1-6, wherein performing sharpness detection on the target image based on the target sharpness detection model comprises:
determining a grayscale image of the target image;
determining a sharpness index value of the target image according to the coordinates of the pixels in the grayscale image; and
using the sharpness index value as the input of the target sharpness detection model to obtain a sharpness detection result of the target image.
8. An apparatus for detecting image sharpness, comprising:
a texture determining module, configured to determine a texture richness value of a target image and determine, according to the texture richness value of the target image, a texture category to which the target image belongs;
a model selection module, configured to select a target sharpness detection model from candidate sharpness detection models according to the texture category to which the target image belongs; and
a sharpness detection module, configured to perform sharpness detection on the target image based on the target sharpness detection model.
9. The apparatus of claim 8, wherein the texture determining module is specifically configured to:
perform texture richness detection on the target image based on a texture detection model to obtain the texture richness value of the target image;
wherein the texture detection model is obtained by model training with a first image and a second image in a sample image pair as inputs and a first texture comparison probability or a second texture comparison probability between the first image and the second image as a label value.
10. The apparatus of claim 9, further comprising a texture model building module, which comprises:
an image input unit, configured to use the first image as the input of a first model in a Siamese network to obtain a first texture richness value output by the first model, and use the second image as the input of a second model in the Siamese network to obtain a second texture richness value output by the second model;
a probability output unit, configured to determine a model output probability according to the richness difference between the first texture richness value and the second texture richness value; and
a model parameter updating unit, configured to update model parameters of the first model and the second model according to the probability difference between the label value and the model output probability, and use the trained first model or second model as the texture detection model.
11. The apparatus of claim 9, further comprising a data construction module, which comprises:
a texture score unit, configured to determine a texture score of the first image and a texture score of the second image according to a texture richness comparison result between the first image and the second image;
a richness value updating unit, configured to update the texture richness value of the first image according to the texture richness value of the first image, the texture score of the first image and the first texture comparison probability, and update the texture richness value of the second image according to the texture richness value of the second image, the texture score of the second image and the second texture comparison probability; and
a comparison probability updating unit, configured to update the first texture comparison probability and the second texture comparison probability, respectively, according to the updated texture richness value of the first image and the updated texture richness value of the second image.
12. The apparatus of claim 11, wherein the richness value updating unit is specifically configured to:
update the texture richness value of the first image and the texture richness value of the second image by the following formulas:
R'_A = R_A + K × (S_A - P_{B>A});
R'_B = R_B + K × (S_B - P_{A>B});
where R_A and R_B are the texture richness value of the first image and the texture richness value of the second image, respectively; S_A and S_B are the texture score of the first image and the texture score of the second image, respectively; P_{A>B} and P_{B>A} are the first texture comparison probability and the second texture comparison probability, respectively; K is a reward factor; and R'_A and R'_B are the updated texture richness value of the first image and the updated texture richness value of the second image, respectively.
13. The apparatus of claim 11, wherein the comparison probability updating unit is specifically configured to:
update the first texture comparison probability and the second texture comparison probability, respectively, by the following formulas:
P'_{A>B} = 1 / (1 + 10^{(R'_B - R'_A)/M});
P'_{B>A} = 1 / (1 + 10^{(R'_A - R'_B)/M});
where R'_A and R'_B are the updated texture richness value of the first image and the updated texture richness value of the second image, respectively; M is a difference measurement factor; and P'_{A>B} and P'_{B>A} are the updated first texture comparison probability and the updated second texture comparison probability, respectively.
14. The apparatus of any one of claims 8-13, wherein the sharpness detection module comprises:
a grayscale image unit, configured to determine a grayscale image of the target image;
a sharpness index unit, configured to determine a sharpness index value of the target image according to the coordinates of the pixels in the grayscale image; and
a sharpness detection unit, configured to use the sharpness index value as the input of the target sharpness detection model to obtain a sharpness detection result of the target image.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-7.
17. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.
Background
With the continual enrichment of image and video content on the Internet, selecting high-quality content from massive data has become critically important. The sharpness, color, brightness, etc. of an image are important factors affecting its quality, and among these factors sharpness is the most important indicator. Sharpness refers to how clearly each detail texture and its boundaries are rendered in an image.
Disclosure of Invention
The present disclosure provides a method, apparatus, device and storage medium for detecting image sharpness.
According to an aspect of the present disclosure, there is provided a method for detecting image sharpness, including:
determining a texture richness value of a target image, and determining, according to the texture richness value of the target image, a texture category to which the target image belongs;
selecting a target sharpness detection model from candidate sharpness detection models according to the texture category to which the target image belongs; and
performing sharpness detection on the target image based on the target sharpness detection model.
According to another aspect of the present disclosure, there is provided an apparatus for detecting image sharpness, including:
a texture determining module, configured to determine a texture richness value of a target image and determine, according to the texture richness value of the target image, a texture category to which the target image belongs;
a model selection module, configured to select a target sharpness detection model from candidate sharpness detection models according to the texture category to which the target image belongs; and
a sharpness detection module, configured to perform sharpness detection on the target image based on the target sharpness detection model.
According to still another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method for detecting image sharpness provided in any embodiment of the present disclosure.
According to yet another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method for detecting image sharpness provided by any embodiment of the present disclosure.
According to yet another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method of detecting image sharpness as provided by any embodiment of the present disclosure.
The technology of the present disclosure can improve the accuracy of image sharpness detection.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
Fig. 1 is a schematic diagram of a method for detecting image sharpness according to an embodiment of the present disclosure;
Fig. 2 is a schematic diagram of a sharpness detection scheme according to an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of another method for detecting image sharpness according to an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of the training process of a texture detection model according to an embodiment of the present disclosure;
Fig. 5 is a schematic diagram of yet another method for detecting image sharpness according to an embodiment of the present disclosure;
Fig. 6 is a schematic diagram of an apparatus for detecting image sharpness according to an embodiment of the present disclosure;
Fig. 7 is a block diagram of an electronic device for implementing the method for detecting image sharpness according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings. Various details of the embodiments are included to assist understanding and should be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications may be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
The scheme provided by the embodiment of the disclosure is described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a method for detecting image sharpness according to an embodiment of the present disclosure; the method is applicable to quality detection of an image. It can be executed by an apparatus for detecting image sharpness, which can be implemented in hardware and/or software and configured in an electronic device. Referring to Fig. 1, the method specifically includes the following steps:
S110, determining a texture richness value of a target image, and determining, according to the texture richness value of the target image, a texture category to which the target image belongs;
S120, selecting a target sharpness detection model from candidate sharpness detection models according to the texture category to which the target image belongs;
S130, performing sharpness detection on the target image based on the target sharpness detection model.
In embodiments of the present disclosure, the texture richness value characterizes how rich the textures in an image are. A texture category is a degree of texture richness, and N different texture categories may be preset. A plurality of texture richness ranges may be divided according to texture richness values, with each range associated with its texture category.
For each texture category, a candidate sharpness detection model may be constructed and associated with that category. Specifically, for any texture category, images belonging to that category may be used as samples for model training to obtain the associated candidate sharpness detection model.
Referring to Fig. 2, after a target image is obtained, texture richness detection may be performed on it to obtain its texture richness value. The texture richness value is compared with each candidate texture richness range to determine the target range to which it belongs; the texture category associated with the target range is taken as the texture category of the target image, and the candidate sharpness detection model associated with that category is taken as the target sharpness detection model.
Referring to Fig. 2, after the target image is acquired, feature extraction may also be performed on it to obtain feature data of the target image; the feature data is used as the input of the target sharpness detection model, and the sharpness detection result of the target image is obtained from the model's output.
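By way of illustration, the following sketch wires these steps together. All names here (texture_model, richness_ranges, candidate_models, extract_features) are hypothetical placeholders, not identifiers from this disclosure:

```python
def select_category(richness, richness_ranges):
    """Map a texture richness value to the texture category whose range contains it."""
    for (low, high), category in richness_ranges.items():
        if low <= richness < high:
            return category
    raise ValueError("texture richness value falls outside all configured ranges")

def detect_sharpness(image, texture_model, richness_ranges, candidate_models,
                     extract_features):
    """Sketch of the Fig. 2 pipeline: pick the per-category model, then detect."""
    richness = texture_model(image)               # texture richness value
    category = select_category(richness, richness_ranges)
    target_model = candidate_models[category]     # target sharpness detection model
    return target_model(extract_features(image))  # sharpness detection result
```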
As shooting scenes, shooting conditions and the like change, the sharpness indexes of images follow different rules, so sharpness indexes measured under different scenes and conditions are not comparable. For example, when the photographed image is a page of a book, the textures and boundaries of the text are very pronounced and the image contains many high-frequency components; when a blue sky is photographed, the image has no obvious textures or boundaries and few high-frequency components. Therefore, whether an image is sharp cannot be judged simply from the amount of high-frequency components or the differences between adjacent pixels. Moreover, even images of the same kind of object exhibit different physical-characteristic rules, and a unified sharpness index is hard to define. For example, among pages of text, some are dense and some are sparse; if gradient or energy is used as the sharpness index, the values differ greatly, leading to inaccurate sharpness detection.
In the embodiments of the present disclosure, however, a different candidate sharpness detection model is selected for each texture category. The sharpness indexes of images within the same texture category follow the same rule, so the behavior of the sharpness indexes can be unified and the sharpness detection accuracy of images can be improved.
According to the technical solution of the embodiments of the present disclosure, the candidate sharpness detection model associated with the texture category to which the target image belongs is used as the target sharpness detection model, so that the target sharpness detection model reflects the physical-characteristic rule of the target image, improving the accuracy of image sharpness detection.
Fig. 3 is a schematic diagram of another method for detecting image sharpness according to an embodiment of the present disclosure. This embodiment is an alternative proposed on the basis of the above embodiments. Referring to Fig. 3, the method for detecting image sharpness provided in this embodiment includes:
S210, performing texture richness detection on the target image based on a texture detection model to obtain a texture richness value of the target image;
S220, selecting a target sharpness detection model from the candidate sharpness detection models according to the texture category to which the target image belongs;
S230, performing sharpness detection on the target image based on the target sharpness detection model;
wherein the texture detection model is obtained by model training with a first image and a second image in a sample image pair as inputs and a first texture comparison probability or a second texture comparison probability between the first image and the second image as a label value.
In the embodiments of the present disclosure, the texture detection model is used to determine the texture richness value of an image, from which the image's degree of texture richness may then be classified. The first texture comparison probability is the probability that the first image is more texture-rich than the second image; the second texture comparison probability is the probability that the second image is more texture-rich than the first image.
The first image and the second image of a sample pair are used as inputs, and the first or second texture comparison probability is used as the label, to train the texture detection model. In other words, the model is trained by introducing comparison information between the two images of each pair; compared with directly scoring the texture richness of the first and second images, this improves the robustness of the texture detection model and thus the accuracy of the target image's texture richness value.
In an optional implementation, before performing texture richness detection on the target image based on the texture detection model, the method further includes: using the first image as the input of a first model in a Siamese network to obtain a first texture richness value output by the first model, and using the second image as the input of a second model in the Siamese network to obtain a second texture richness value output by the second model; determining a model output probability according to the richness difference between the first texture richness value and the second texture richness value; and updating model parameters of the first and second models according to the probability difference between the label value and the model output probability, with the trained first model or second model used as the texture detection model.
Fig. 4 is a schematic diagram of the training process of the texture detection model according to an embodiment of the present disclosure. Referring to Fig. 4, the texture detection model is built with a Siamese network comprising a first model and a second model that have the same network structure and may share model parameters, for example the same neural network architecture.
Specifically, the first image is used as the input of the first model to obtain the first texture richness value output by the first model, and the second image is used as the input of the second model to obtain the second texture richness value output by the second model; the model output probability is then determined from the richness difference between the two values.
The way the model output probability is determined depends on the label value. When the label value is the first texture comparison probability, the model output probability may be determined by P = Sigmoid(S1 - S2); when the label value is the second texture comparison probability, by P = Sigmoid(S2 - S1); where P is the model output probability, Sigmoid is the neural network activation function, and S1 and S2 are the first texture richness value and the second texture richness value, respectively.
The label value is compared with the model output probability to obtain the probability difference between them, and a loss function is built on this difference to update the model parameters of the first and second models, realizing an end-to-end training process. After training is completed, either the first model or the second model may be used as the texture detection model. Predicting the comparison result of two images through Siamese modeling, rather than directly predicting each image's texture richness value, improves the robustness of the texture detection model.
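A minimal PyTorch-style sketch of one training step is given below. The network architecture, the 3x64x64 input size, and the use of binary cross-entropy on the Sigmoid output are assumptions for illustration; the disclosure only specifies that a loss is built from the probability difference between the label value and the model output probability:

```python
import torch
import torch.nn as nn

# Hypothetical scoring network; the first and second models share these weights.
score_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128),
                          nn.ReLU(), nn.Linear(128, 1))
optimizer = torch.optim.Adam(score_net.parameters(), lr=1e-4)
bce = nn.BCELoss()

def train_step(first_image, second_image, label_prob):
    """One Siamese step; label_prob holds the first texture comparison probability."""
    s1 = score_net(first_image)     # first texture richness value
    s2 = score_net(second_image)    # second texture richness value (shared weights)
    p = torch.sigmoid(s1 - s2)      # model output probability P = Sigmoid(S1 - S2)
    loss = bce(p, label_prob)       # assumed loss on the probability difference
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After training, score_net alone plays the role of the texture detection model: a single image in, a scalar richness value out.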
According to the technical solution of this embodiment, building the texture detection model through Siamese modeling improves its robustness, which in turn improves the accuracy of image sharpness detection.
The sample construction of the texture detection model is described below.
In an optional embodiment, before performing model training with the texture comparison probabilities between the first image and the second image as labels, the method further includes: determining a texture score of the first image and a texture score of the second image according to a texture richness comparison result between the two images; updating the texture richness value of the first image according to its texture richness value, its texture score and the first texture comparison probability, and updating the texture richness value of the second image according to its texture richness value, its texture score and the second texture comparison probability; and updating the first texture comparison probability and the second texture comparison probability, respectively, according to the updated texture richness values of the two images.
In the embodiments of the present disclosure, the first and second images of each sample pair are provided to annotators, who annotate the texture richness comparison result of the pair, i.e., judge which of the two images has richer texture. Compared with scoring the texture degree of each image directly, the annotations of different annotators on the same sample pair tend to agree more. Each image may be given the same initial texture richness value, e.g., 1000, which is then refreshed continually through iterative solution according to the comparison results until stable texture richness values of the first and second images are obtained.
Specifically, when the texture richness comparison result indicates that the first image has richer texture, the texture score of the first image may be 1 and the texture score of the second image 0; when the comparison result indicates that the second image is richer, the texture score of the first image is 0 and the texture score of the second image is 1.
In an optional embodiment, the texture richness value of the first image and the texture richness value of the second image are updated by the following formulas:
R'_A = R_A + K × (S_A - P_{B>A});
R'_B = R_B + K × (S_B - P_{A>B});
where R_A and R_B are the texture richness values of the first and second images, respectively; S_A and S_B are their texture scores; P_{A>B} and P_{B>A} are the first and second texture comparison probabilities; K is a reward factor; and R'_A and R'_B are the updated texture richness values of the first and second images, respectively. K may be an empirical value proportional to the initial texture richness value; taking an initial texture richness value of 1000 as an example, K may be 16.
In an optional embodiment, the first texture comparison probability and the second texture comparison probability may be determined by the following formulas:
P_{A>B} = 1 / (1 + 10^{(R_B - R_A)/M});
P_{B>A} = 1 / (1 + 10^{(R_A - R_B)/M});
where P_{A>B} and P_{B>A} are the first and second texture comparison probabilities, respectively. M may be an empirical value proportional to the initial texture richness value; again taking an initial texture richness value of 1000 as an example, M may be 400.
In an optional embodiment, the first texture comparison probability and the second texture comparison probability are updated, respectively, by the following formulas:
P'_{A>B} = 1 / (1 + 10^{(R'_B - R'_A)/M});
P'_{B>A} = 1 / (1 + 10^{(R'_A - R'_B)/M});
where R'_A and R'_B are the updated texture richness values of the first and second images, respectively; M is a difference measurement factor; and P'_{A>B} and P'_{B>A} are the updated first and second texture comparison probabilities, respectively.
The texture richness value of each image in the data set is continuously refreshed according to the manually annotated texture richness comparison results. When the number of annotations is large enough, the texture richness value of each image stabilizes. This annotation scheme improves the accuracy of the texture richness data set.
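As a sketch of this refresh procedure, the loop below uses the example values K = 16, M = 400 and an initial richness of 1000 from the text; the data layout (a dict of ratings and annotator triples) is an illustrative assumption:

```python
K, M, INITIAL_RICHNESS = 16.0, 400.0, 1000.0

def comparison_prob(r_a, r_b):
    """Probability that image A is richer, from the current richness values."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / M))

def refresh(ratings, comparisons):
    """comparisons: (id_a, id_b, winner) triples from annotators; winner is 'A'
    when the first image was judged richer. ratings maps image id -> richness."""
    for id_a, id_b, winner in comparisons:
        r_a = ratings.setdefault(id_a, INITIAL_RICHNESS)
        r_b = ratings.setdefault(id_b, INITIAL_RICHNESS)
        p_ab, p_ba = comparison_prob(r_a, r_b), comparison_prob(r_b, r_a)
        s_a, s_b = (1.0, 0.0) if winner == 'A' else (0.0, 1.0)  # texture scores
        ratings[id_a] = r_a + K * (s_a - p_ba)  # R'_A = R_A + K x (S_A - P_{B>A})
        ratings[id_b] = r_b + K * (s_b - p_ab)  # R'_B = R_B + K x (S_B - P_{A>B})
    return ratings
```

Running refresh repeatedly over a growing set of annotations drives the richness values toward stable scores, mirroring the iterative solution described above.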
Fig. 5 is a schematic diagram of yet another method for detecting image sharpness according to an embodiment of the present disclosure. This embodiment is an alternative proposed on the basis of the above embodiments. Referring to Fig. 5, the method for detecting image sharpness provided in this embodiment includes:
S310, determining a texture richness value of the target image, and determining, according to the texture richness value, a texture category to which the target image belongs;
S320, selecting a target sharpness detection model from the candidate sharpness detection models according to the texture category to which the target image belongs;
S330, determining a grayscale image of the target image;
S340, determining sharpness index values of the target image according to the coordinates of the pixels in the grayscale image;
S350, using the sharpness index values as the input of the target sharpness detection model to obtain a sharpness detection result of the target image.
In the embodiments of the present disclosure, the sharpness indexes used include, but are not limited to, the Brenner gradient, the gray variance product, and the image energy.
The Brenner gradient may be determined by the following formula:
S11 = Σ_{x,y} |I(x+2, y) - I(x, y)|²
where S11 is the Brenner gradient, I is the image gray value, and x and y are the coordinates of the pixels. The Brenner gradient sums the squared differences between pairs of pixels two apart in the horizontal direction; the sharper the texture, the larger its value.
The gray variance product may be determined by the following formula:
S22 = Σ_{x,y} ( |I(x, y) - I(x, y-1)| + |I(x, y) - I(x+1, y)| )
This index computes the pixel differences between adjacent points and sums their absolute values; the larger the value of the gray variance product, the sharper the image.
The image energy may be determined by the following formula:
S33 = Σ_{x,y} |I(x+1, y) - I(x, y)|² × |I(x, y+1) - I(x, y)|²
This index computes the squared pixel differences between adjacent points and sums the products of the horizontal and vertical terms; the larger the value of the image energy, the sharper the image.
Different sharpness index values of the target image are computed and used as the input of the target sharpness detection model to obtain its sharpness detection result. The detection result may take two classes: sharp and blurred. Each candidate sharpness detection model may thus be a binary classification model, built for example with machine learning methods such as a binary tree model, a K-nearest-neighbor model or a logistic regression model, with model parameters fitted on sharpness-labeled data. Introducing different sharpness index values to capture the relative relationships between pixels in the target image, and judging sharpness from them, improves the accuracy of the sharpness detection result.
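As one concrete realization of such a binary classifier, a candidate sharpness detection model could be a logistic regression over the three index values. The scikit-learn sketch below uses placeholder data; X, y and the feature layout are illustrative assumptions, not training data from this disclosure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((200, 3))            # rows of [S11, S22, S33] per training image
y = (X[:, 0] > 0.5).astype(int)     # placeholder sharp(1) / blurred(0) labels

candidate_model = LogisticRegression().fit(X, y)  # one model per texture category
indices = np.array([[0.7, 0.4, 0.6]])             # index values of a target image
is_sharp = candidate_model.predict(indices)[0]    # 1 = sharp, 0 = blurred
```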
According to the technical solution of this embodiment, the degree of texture richness of an image is classified by an image classification method, and different model parameters, i.e., different sharpness index thresholds, are used for different texture categories. Applying these thresholds to the features extracted by traditional algorithms to decide whether an image is sharp addresses the problem that judgment standards vary with shooting conditions and photographed objects. The approach fuses the classification capability of neural networks with the interpretability of traditional sharpness algorithms, overcoming the tendency of traditional algorithms to fail across the endlessly varied scenes found on the Internet.
Fig. 6 is a schematic diagram of an apparatus for detecting image sharpness according to an embodiment of the present disclosure. This embodiment is applicable to quality detection of an image; the apparatus is configured in an electronic device and can implement the method for detecting image sharpness of any embodiment of the present disclosure. Referring to Fig. 6, the apparatus 400 for detecting image sharpness specifically includes:
a texture determining module 401, configured to determine a texture richness value of a target image and determine, according to the texture richness value, a texture category to which the target image belongs;
a model selection module 402, configured to select a target sharpness detection model from candidate sharpness detection models according to the texture category to which the target image belongs;
a sharpness detection module 403, configured to perform sharpness detection on the target image based on the target sharpness detection model.
In an optional implementation, the texture determining module 401 is specifically configured to:
perform texture richness detection on the target image based on a texture detection model to obtain the texture richness value of the target image;
wherein the texture detection model is obtained by model training with a first image and a second image in a sample image pair as inputs and a first texture comparison probability or a second texture comparison probability between the first image and the second image as a label value.
In an optional embodiment, the apparatus 400 further includes a texture model building module, which includes:
an image input unit, configured to use the first image as the input of a first model in a Siamese network to obtain a first texture richness value output by the first model, and use the second image as the input of a second model in the Siamese network to obtain a second texture richness value output by the second model;
a probability output unit, configured to determine a model output probability according to the richness difference between the first texture richness value and the second texture richness value;
a model parameter updating unit, configured to update model parameters of the first model and the second model according to the probability difference between the label value and the model output probability, and use the trained first model or second model as the texture detection model.
In an optional embodiment, the apparatus 400 further includes a data construction module, which includes:
a texture score unit, configured to determine a texture score of the first image and a texture score of the second image according to a texture richness comparison result between the first image and the second image;
a richness value updating unit, configured to update the texture richness value of the first image according to the texture richness value of the first image, the texture score of the first image and the first texture comparison probability, and update the texture richness value of the second image according to the texture richness value of the second image, the texture score of the second image and the second texture comparison probability;
a comparison probability updating unit, configured to update the first texture comparison probability and the second texture comparison probability, respectively, according to the updated texture richness value of the first image and the updated texture richness value of the second image.
In an optional embodiment, the richness value updating unit is specifically configured to:
update the texture richness value of the first image and the texture richness value of the second image by the following formulas:
R'_A = R_A + K × (S_A - P_{B>A});
R'_B = R_B + K × (S_B - P_{A>B});
where R_A and R_B are the texture richness values of the first and second images, respectively; S_A and S_B are their texture scores; P_{A>B} and P_{B>A} are the first and second texture comparison probabilities; K is a reward factor; and R'_A and R'_B are the updated texture richness values of the first and second images, respectively.
In an optional implementation, the comparison probability updating unit is specifically configured to:
update the first texture comparison probability and the second texture comparison probability, respectively, by the following formulas:
P'_{A>B} = 1 / (1 + 10^{(R'_B - R'_A)/M});
P'_{B>A} = 1 / (1 + 10^{(R'_A - R'_B)/M});
where R'_A and R'_B are the updated texture richness values of the first and second images, respectively; M is a difference measurement factor; and P'_{A>B} and P'_{B>A} are the updated first and second texture comparison probabilities, respectively.
In an optional embodiment, the sharpness detection module includes:
a grayscale image unit, configured to determine a grayscale image of the target image;
a sharpness index unit, configured to determine sharpness index values of the target image according to the coordinates of the pixels in the grayscale image;
a sharpness detection unit, configured to use the sharpness index values as the input of the target sharpness detection model to obtain a sharpness detection result of the target image.
According to the above technical solution, the degree of texture richness of an image is classified by an image classification method, and different sharpness index thresholds are used for different texture categories, which addresses the problem that judgment standards vary with shooting conditions and photographed objects.
In the technical solution of the present disclosure, the acquisition, storage and application of the personal information of relevant users comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
Fig. 7 shows a schematic block diagram of an example electronic device 500 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant as examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in Fig. 7, the device 500 includes a computing unit 501 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 502 or loaded from a storage unit 508 into a random access memory (RAM) 503. The RAM 503 can also store various programs and data required for the operation of the device 500. The computing unit 501, the ROM 502 and the RAM 503 are connected to one another via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
A number of components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, or the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various specialized artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller or microcontroller. The computing unit 501 performs the methods and processes described above, such as the method for detecting image sharpness. For example, in some embodiments, the method for detecting image sharpness may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the method for detecting image sharpness described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the method for detecting image sharpness by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may be implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special- or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs executing on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, a host product in the cloud computing service system that overcomes the defects of difficult management and weak service scalability in traditional physical hosts and VPS services.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.