Image determination method and device, storage medium and electronic equipment
1. An image determination method, comprising:
acquiring an object image set, wherein the object image set comprises a plurality of candidate images in which a target object is detected;
identifying each candidate image in the plurality of candidate images by using an image identification model to obtain a quality parameter corresponding to each candidate image, wherein the quality parameter is used for indicating the integrity degree of a key part of the target object in the corresponding candidate image, the image identification model comprises a plurality of task sub-models, and the task sub-models are used for identifying the integrity degree of the corresponding key part;
and determining the candidate image corresponding to the quality parameter meeting the quality condition as a target image, wherein the target image is used for identifying the target object.
2. The method of claim 1, wherein the identifying, by using an image identification model, each candidate image of the plurality of candidate images to obtain a quality parameter corresponding to each candidate image comprises:
for each candidate image, the following processing is respectively carried out:
determining a key area containing each key part in the candidate image;
identifying key areas of the key parts based on task sub-models corresponding to the key parts in the image identification model to obtain part parameters corresponding to the key parts, wherein the part parameters are used for indicating the integrity degree of the corresponding key parts;
and obtaining the quality parameters according to the part parameters of the key parts.
3. The method according to claim 2, wherein the obtaining of the part parameter corresponding to each of the key parts comprises:
identifying the integrity level of the key part included in the region information of the key region;
acquiring a grade parameter matched with the integrity level;
using the grade parameter as the part parameter of the key part.
4. The method of claim 2, wherein the obtaining of the quality parameters according to the part parameters of each of the key parts comprises:
acquiring part parameters of the key parts output by each task sub-model;
determining the part weight corresponding to each key part;
and performing weighting operation on the part parameters according to the part weights to obtain the quality parameters.
5. The method according to claim 1, wherein the determining the candidate image corresponding to the quality parameter satisfying the quality condition as the target image comprises:
sorting the quality parameters corresponding to the candidate images based on the parameter values of the quality parameters, and determining the candidate image corresponding to the quality parameter ranked at a designated position as the target image; or
determining the candidate image corresponding to the quality parameter exceeding a quality parameter threshold as the target image.
6. The method according to any one of claims 1 to 5, wherein after obtaining the quality parameter corresponding to each of the candidate images, the method further comprises:
and prompting that the target image is not determined in the object image set under the condition that the quality parameters of all the candidate images do not meet the quality condition.
7. The method of any of claims 1 to 5, wherein prior to acquiring the object image set, the method further comprises:
acquiring a sample image set, wherein the sample image set comprises a plurality of sample images, the sample images comprise labeling labels of key parts of sample objects, and the labeling labels comprise area labels of key areas where the key parts are located and quality labels of the key parts;
training an initial image recognition model by using the sample image set, wherein the determination of the key area by the initial image recognition model is optimized by using the area label, and the corresponding initial task sub-model in the initial image recognition model is optimized by using the quality label;
and under the condition that the determination accuracy of the key area is higher than a first threshold value and the identification accuracy of the initial task sub-model is higher than a second threshold value, determining that the image identification model comprising the plurality of task sub-models is obtained.
8. An image determining apparatus, comprising:
an acquisition unit configured to acquire an object image set including a plurality of candidate images in which a target object is detected;
the identification unit is used for identifying each candidate image in the plurality of candidate images by using an image identification model to obtain a quality parameter corresponding to each candidate image, wherein the quality parameter is used for indicating the integrity degree of a key part of the target object in the corresponding candidate image, the image identification model is a neural network model comprising a plurality of task sub-models, and the task sub-models are used for identifying the integrity degree of the corresponding key part;
a determining unit, configured to determine the candidate image corresponding to the quality parameter that satisfies a quality condition as a target image, where the target image is used to identify the target object.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program, when executed, performs the method of any one of claims 1 to 7.
10. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 7 by means of the computer program.
Background
With the wide coverage of video surveillance, video analysis technology plays an increasingly critical role in public safety and security, and target image selection is an important component of video analysis technology. For a vehicle target, image selection means scoring the images of a single vehicle target in every frame, from its appearance to its disappearance in the surveillance video, so as to obtain the selected image with the highest quality of the target. The selected image is important for the subsequent identification of the attributes of the target vehicle (e.g., color, vehicle type). A high-quality selected image can effectively improve the success rate and accuracy of attribute identification of the target vehicle.
In the attribute identification of the vehicle, the integrity of the vehicle has a significant influence on the success rate and accuracy of the identification. However, current target image selection, such as portrait target selection, usually focuses only on the sharpness of the image, so the integrity of the vehicle in the selected image cannot be guaranteed, resulting in a low recognition success rate and accuracy for the target vehicle.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides an image determination method and device, a storage medium and electronic equipment, which are used for at least solving the technical problem of low success rate of object recognition caused by inaccurate image determination for object recognition.
According to an aspect of an embodiment of the present invention, there is provided an image determination method including: acquiring an object image set, wherein the object image set comprises a plurality of candidate images in which a target object is detected; identifying each candidate image in the plurality of candidate images by using an image identification model to obtain a quality parameter corresponding to each candidate image, wherein the quality parameter is used for indicating the integrity degree of a key part of the target object in the corresponding candidate image, the image identification model comprises a plurality of task sub-models, and the task sub-models are used for identifying the integrity degree of the corresponding key part; and determining the candidate image corresponding to the quality parameter meeting the quality condition as a target image, wherein the target image is used for identifying the target object.
According to another aspect of the embodiments of the present invention, there is also provided an image determining apparatus including: an acquisition unit configured to acquire an object image set including a plurality of candidate images in which a target object is detected; an identifying unit, configured to identify each of the plurality of candidate images by using an image identification model, to obtain a quality parameter corresponding to each of the candidate images, where the quality parameter indicates a degree of completeness of a key portion of the target object in the corresponding candidate image, the image identification model includes a plurality of task sub-models, and the task sub-models are used to identify the degree of completeness of the corresponding key portion; and a determining unit, configured to determine the candidate image corresponding to the quality parameter satisfying a quality condition as a target image, where the target image is used to identify the target object.
According to a further aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the above-mentioned image determination method when running.
According to still another aspect of the embodiments of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored, and a processor configured to execute the image determination method described above by the computer program.
In the embodiment of the invention, for an object image set containing a target object, each candidate image is identified by the image recognition model to perform integrity recognition and obtain the quality parameter of the candidate image. The integrity of each key part of the target object is recognized by the task sub-models in the image recognition model, so that a quality parameter indicating the integrity of the key parts of the target object in the candidate image is obtained, and the candidate image whose quality parameter satisfies the quality condition is determined as the target image. Since each task sub-model in the image recognition model performs integrity recognition on one key part of the target object contained in the image, the resulting quality parameter can indicate the integrity of the target object in the candidate image, and the target image is determined from the candidate images through the quality parameters. This achieves the purpose of determining, from the image set, a target image containing the target object with higher integrity. Using such a target image to identify the target object achieves the technical effect of improving the recognition success rate of the target object, and solves the technical problem of a low target recognition success rate caused by inaccurate determination of the image used for target recognition.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a schematic diagram of an application environment of an alternative image determination method according to an embodiment of the invention;
FIG. 2 is a schematic flow chart diagram of an alternative image determination method according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart diagram of an alternative image determination method according to an embodiment of the invention;
FIG. 4 is a schematic flow chart diagram of an alternative image determination method according to an embodiment of the invention;
FIG. 5 is a schematic flow chart diagram of an alternative image determination method according to an embodiment of the invention;
FIG. 6 is a flow chart diagram of an alternative image determination method according to an embodiment of the invention;
FIG. 7 is a schematic flow chart diagram of an alternative image determination method according to an embodiment of the invention;
FIG. 8 is a schematic flow chart diagram of an alternative image determination method according to an embodiment of the invention;
FIG. 9 is a schematic diagram of an alternative image determination apparatus according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of an alternative electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of an embodiment of the present invention, there is provided an image determination method, which may optionally be applied, but not limited to, in an environment as shown in fig. 1. The terminal device 102 performs data interaction with the server 112 through the network 110. The terminal device 102 has a video capture function and transmits captured video data to the server 112 via the network 110. The server 112 runs a database 114 for storing the received video data and a processing engine 116 for processing the video data to determine image frames for object recognition.
The processing engine 116 of the server 112 executes, but is not limited to, S102 to S106 in order to determine the target image from the video data. S102, acquiring an object image set. The object image set includes a plurality of candidate images, and the candidate images are video frames in which the target object is detected. A target video containing the target object is intercepted from the received video data, and the target video is divided by frames to obtain the object image set containing the target object. S104, obtaining the quality parameter of each candidate image. Each candidate image is identified through the image identification model to obtain the quality parameter corresponding to the candidate image, the quality parameter is used for indicating the integrity degree of the key parts of the target object in the candidate image, the image identification model is a neural network model comprising a plurality of task sub-models, and the task sub-models are used for identifying the integrity degree of the corresponding key parts. S106, determining a target image. The candidate image corresponding to the quality parameter meeting the quality condition is determined as the target image, and the target image is used for identifying the target object.
Optionally, in this embodiment, the terminal device 102 may be a terminal device configured with a video capture client, and may include but is not limited to at least one of the following: mobile phones (such as Android phones, IOS phones, etc.), notebook computers, tablet computers, palm computers, MID (Mobile Internet Devices), PAD, desktop computers, smart televisions, smart cameras, etc. The video acquisition client can be a video client, an instant messaging client, a browser client, an educational client, etc. The network 110 may include, but is not limited to: a wired network, a wireless network, wherein the wired network comprises: a local area network, a metropolitan area network, and a wide area network, the wireless network comprising: bluetooth, WIFI, and other networks that enable wireless communication. The server 112 may be a single server, a server cluster composed of a plurality of servers, or a cloud server. The above is merely an example, and this is not limited in this embodiment.
As an alternative implementation, as shown in fig. 2, the image determining method includes:
s202, acquiring an object image set, wherein the object image set comprises a plurality of candidate images in which a target object is detected;
s204, identifying each candidate image in the candidate images by using an image identification model to obtain a quality parameter corresponding to each candidate image, wherein the quality parameter is used for indicating the integrity degree of a key part of a target object in the corresponding candidate image, the image identification model comprises a plurality of task sub-models, and the task sub-models are used for identifying the integrity degree of the corresponding key part;
and S206, determining the candidate image corresponding to the quality parameter meeting the quality condition as a target image, wherein the target image is used for identifying the target object.
Optionally, the object image set is not limited to an image set obtained by performing video processing on a target video containing the target object, and the video processing is not limited to segmenting the target video by frames to obtain a plurality of video frames containing the target object. The target video is not limited to a video containing the target object that is intercepted from the video data collected by the camera terminal.
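As a concrete illustration of the video processing described above, the following minimal sketch splits a target video into frames and keeps only the frames in which the target object is detected. It assumes OpenCV is available and that an external detector supplies the hypothetical detect_target callback; it is a sketch under those assumptions, not the patented implementation itself.

```python
import cv2  # assumption: OpenCV is used for frame extraction


def build_object_image_set(video_path, detect_target):
    """Split the target video by frames and keep the frames in which the
    target object is detected (illustrative helper only).

    detect_target(frame) -> bool is assumed to be provided by an external
    object detector.
    """
    candidate_images = []
    capture = cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # end of the target video
        if detect_target(frame):
            candidate_images.append(frame)
    capture.release()
    return candidate_images
```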
Optionally, the image recognition model includes a region recognition model for recognizing a key region from the candidate image and a task sub-model for recognizing the integrity of the key region. The region identification model is used for identifying key regions where all key parts are located from the complete candidate image, inputting all the key regions into the task sub-model, and identifying the integrity degree of the key parts in the key regions by using the task sub-model corresponding to the key parts to obtain the integrity degree of the key parts.
Optionally, the key parts are parts into which the target object is divided according to its composition. Taking the target object as a vehicle as an example, the key parts are not limited to include: the license plate, the vehicle logo, the front/rear window, the side windows, the left/right vehicle lights, the roof and the wheels. The number of task sub-models included in the image recognition model is determined according to the key parts, and the correspondence between the task sub-models and the key parts is set. The task sub-model corresponding to a key part is used to identify the integrity of that key part within the key region where the key part is located in the candidate image.
Optionally, the integrity degree of a key part is not limited to being represented by preset integrity levels, with a matching integrity parameter set for each integrity level. The integrity level of the corresponding key part, or the integrity parameter corresponding to that level, is determined through the task sub-model. A statistical module of the image recognition model then obtains, based on the integrity level or integrity parameter output by each task sub-model, the quality parameter indicating the integrity degree of the target object in the candidate image.
Optionally, the determination of the target image from the object image set is not limited to sequentially obtaining the quality parameter of each candidate image in the object image set and determining the target image according to the quality parameters of all the candidate images.
Taking the target object as target A as an example, the process of determining the target image from the image set is not limited to the process shown in fig. 3. S302 is executed to obtain sample images containing integrity annotations of the key parts of target A. After the sample images are acquired, S304 is performed to train the key-part multitask model M, that is, the image recognition model, using the sample images. When the training of the multitask model M is completed, S306 is executed to set K to 1, where K indicates the round of the current poll. S308 is then executed to judge whether K is less than or equal to the total number of images in the image set. If the judgment in S308 is yes, that is, the current value of K is less than or equal to the total number of images in the image set, S310 is executed to acquire the K-th image containing target A from the image set. After the image is acquired, S312 is executed to perform integrity recognition of each part in the image using M: the integrity of each part of target A in the image is identified through the multitask model M to obtain the integrity parameter of each part. S314 is then executed to determine the quality parameter according to the integrity and the importance degree of each part; after the integrity of each part is acquired, the importance degree of each part is determined so as to determine the quality parameter of the image. S316 is performed to increment K by 1.
After the polling of the current round is completed, S308 to S316 are executed in a loop until the current K is greater than the total number of images in the image set. When the quality parameters of all images in the image set have been acquired, S318 is executed to select the 1 or more images with the highest quality parameters as the target image(s) of target A.
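The polling flow of FIG. 3 could be organized as in the sketch below. Here recognition_model stands for the trained multitask model M and is assumed to expose a quality_parameter(image) method returning the quality parameter of an image; this is a minimal sketch under those assumptions, not the claimed implementation.

```python
def select_target_images(candidate_images, recognition_model, top_n=1):
    """Score every candidate image of target A with the multitask model M
    and return the top_n images with the highest quality parameters."""
    scored = []
    for k, image in enumerate(candidate_images):               # S308-S316: poll each image
        quality = recognition_model.quality_parameter(image)   # S312-S314: integrity -> quality
        scored.append((quality, k))
    scored.sort(key=lambda item: item[0], reverse=True)        # rank by quality parameter
    return [candidate_images[k] for _, k in scored[:top_n]]    # S318: keep the best image(s)
```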
In the embodiment of the application, for an object image set containing a target object, each candidate image is identified by the image recognition model to perform integrity recognition and obtain the quality parameter of the candidate image. The integrity of each key part of the target object is recognized by the task sub-models in the image recognition model, so that a quality parameter indicating the integrity of the key parts of the target object in the candidate image is obtained, and the candidate image whose quality parameter satisfies the quality condition is determined as the target image. Since each task sub-model in the image recognition model performs integrity recognition on one key part of the target object contained in the image, the resulting quality parameter can indicate the integrity of the target object in the candidate image, and the target image is determined from the candidate images through the quality parameters. This achieves the purpose of determining, from the image set, a target image containing the target object with higher integrity. Using such a target image to identify the target object achieves the technical effect of improving the recognition success rate of the target object, and solves the technical problem of a low target recognition success rate caused by inaccurate determination of the image used for target recognition.
As an alternative implementation, as shown in fig. 4, the identifying, by using the image identification model, each candidate image in the plurality of candidate images to obtain the quality parameter corresponding to each candidate image includes:
the following processing is performed for each candidate image:
s402, determining key areas containing all key parts in the candidate images;
s404, identifying the key areas of the key parts based on the task sub-models corresponding to the key parts in the image identification model to obtain part parameters corresponding to the key parts, wherein the part parameters are used for indicating the integrity degree of the corresponding key parts;
and S406, obtaining quality parameters according to the position parameters of each key position.
Optionally, in order to facilitate the recognition of the integrity degree of the corresponding key part by each task sub-model, the image recognition model is used to divide the candidate image and extract the key region where each key part is located. The extracted region information of the key region containing the key part is input into the corresponding task sub-model, so that the task sub-model identifies the integrity degree of the key part and outputs the part parameter corresponding to the key part.
Optionally, the key region corresponding to each key part is determined according to the position of the key part in the target object. Under the condition that the distance between the regions where adjacent key parts are located is smaller than a region threshold, the region containing the two or more key parts is used as the key region of each of these key parts, and the task sub-models are used to respectively identify the integrity of the key parts contained in that key region. Taking the target object as a vehicle as an example, the region interval between the license plate and the vehicle logo is small, so the image region containing both the license plate and the vehicle logo can be used as the same key region, and this key region is input into the task sub-model corresponding to the license plate and the task sub-model corresponding to the vehicle logo respectively, so as to identify the integrity degree of the license plate and the integrity degree of the vehicle logo respectively.
Optionally, the part parameter is not limited to being a numerical value indicating the integrity degree of the key part. For example, the part parameter is set to a value between 0 and 1, where 1 indicates that the key part in the image is complete, and 0 indicates that the key part is completely absent from the image. The above is merely an example and is not a limitation on the part parameter.
In the embodiment of the application, the integrity of each key part is identified through each task sub-model in the image identification model to obtain the part parameter, and the quality parameter for indicating the integrity of the target object in the image is obtained by using the part parameter corresponding to each key part. The target object is divided into a plurality of key parts, the integrity degrees of the key parts are identified by the task sub-models respectively, the integrity degree of the target object is obtained from the integrity degree of the key parts, and the accuracy of calculating the integrity degree of the target object is improved.
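One possible way to organize the per-image processing of S402 to S406 is sketched below; the region_model interface, the per-part task_submodels and the part_weights dictionary are assumptions for illustration rather than the actual network architecture.

```python
class ImageRecognitionModel:
    """Illustrative wrapper: a region model locates the key regions and one
    task sub-model per key part scores the integrity of that part."""

    def __init__(self, region_model, task_submodels, part_weights):
        self.region_model = region_model      # image -> {part_name: region_crop}
        self.task_submodels = task_submodels  # {part_name: sub-model}
        self.part_weights = part_weights      # {part_name: part weight}

    def quality_parameter(self, image):
        key_regions = self.region_model(image)                    # S402: key regions
        part_params = {
            part: self.task_submodels[part](region)                # S404: part parameters
            for part, region in key_regions.items()
        }
        return sum(self.part_weights[part] * part_params[part]    # S406: weighted quality
                   for part in part_params)
```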
As an alternative embodiment, as shown in fig. 5, the obtaining of the part parameters of each key part includes:
s502, identifying the integrity level of the key part contained in the region information of the key region;
s504, acquiring a grade parameter matched with the integrity level;
and S506, taking the grade parameter as the part parameter of the key part.
Optionally, the integrity levels are not limited to being preset levels, with a matching grade parameter set for each integrity level. The grade parameter is not limited to being a numerical value; converting the integrity level into a numerical part parameter realizes the quantification of the integrity level.
Optionally, different integrity levels are set for each key part according to the condition of the part, and the integrity levels of the key parts may be the same or different. For example, in the case where the target object is a vehicle, the integrity levels of the license plate may be set to three levels: complete, incomplete, and invisible. Complete indicates that the license plate in the image is visible and complete; incomplete indicates that the license plate is visible but not fully displayed; and invisible indicates that the license plate is not displayed in the image. The integrity levels may also be set as complete (fully visible), mostly complete (visible portion greater than 50%), partially visible (visible portion less than or equal to 50%), and invisible (no visible portion). The integrity level may also be set according to the area proportion of the visible portion; for example, with ten levels, the first level indicates that the area proportion of the visible portion is 0-10%, and so on. The above integrity levels are merely examples and are not intended to be limiting.
Optionally, the grade parameter matching each integrity level is not limited to a matching grade score set for each integrity level, and the integrity level of the key part is converted into the grade score corresponding to that level as the part parameter. A numerical range is set for the grade parameters, and a grade score within this range is set for each integrity level. Taking a numerical range of 0-1 as an example, complete is not limited to corresponding to 1, incomplete is not limited to corresponding to 0.5, and invisible is not limited to corresponding to 0.
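A minimal illustration of converting an identified integrity level into a grade parameter, using the example scores mentioned above (complete corresponds to 1, incomplete to 0.5, invisible to 0); the concrete mapping values are examples, not fixed by the method.

```python
# Example mapping from integrity level to grade parameter (numerical range 0-1).
LEVEL_TO_SCORE = {
    "complete": 1.0,     # the key part is fully visible
    "incomplete": 0.5,   # the key part is visible but not fully displayed
    "invisible": 0.0,    # the key part is not displayed in the image
}


def part_parameter(integrity_level):
    """Return the grade parameter matched with the identified integrity level."""
    return LEVEL_TO_SCORE[integrity_level]
```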
In the embodiment of the application, the integrity level of the key part is identified through the task sub-model and matched with a grade parameter, so that the integrity level of the key part is quantified, which facilitates obtaining the quality parameter of the target object from the integrity levels of the key parts.
As an alternative implementation, as shown in fig. 6, the obtaining the quality parameter according to the location parameter of each key location includes:
s602, acquiring part parameters of key parts output by each task sub-model;
s604, determining the part weight corresponding to each key part;
s606, weighting operation is carried out on the part parameters according to the part weights, and quality parameters are obtained.
Optionally, when the part parameters output by the task sub-models are acquired, the part weight corresponding to each key part is determined. The part weight is not limited to a weight set in advance for each key part, and is not limited to being set according to the importance of the integrity degree of that key part among the parts of the target object.
Optionally, when the target object is a vehicle, the key parts include the parts described above (such as the license plate, the vehicle logo, the windows, the vehicle lights, the roof and the wheels), and the calculation of the quality parameter is not limited to the following formula (1):
obj_score = Σ_i (k_i × score_i)    (1)
where obj_score is used to indicate the quality parameter, i indicates the i-th key part, k_i is used to indicate the part weight of the i-th key part, and score_i is used to indicate the part parameter of the i-th key part.
In the embodiment of the application, the importance degree of each key part on the integrity degree of the target object is adjusted by setting the part weight for each key part, so that the calculated quality parameter can represent the overall integrity of the target object.
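As a worked example of formula (1), the sketch below computes the quality parameter from part parameters and part weights for a vehicle; the concrete weight values are illustrative assumptions, not values prescribed by the method.

```python
# Hypothetical part weights for a vehicle (assumed values, for illustration only).
PART_WEIGHTS = {"license_plate": 0.3, "logo": 0.2, "front_window": 0.2,
                "lights": 0.2, "wheels": 0.1}


def quality_parameter(part_params, part_weights=PART_WEIGHTS):
    """obj_score = sum over i of k_i * score_i, as in formula (1)."""
    return sum(part_weights[part] * score for part, score in part_params.items())


# Example: license plate complete (1.0), logo incomplete (0.5), other parts complete.
example = {"license_plate": 1.0, "logo": 0.5, "front_window": 1.0,
           "lights": 1.0, "wheels": 1.0}
print(quality_parameter(example))  # 0.3*1 + 0.2*0.5 + 0.2 + 0.2 + 0.1 ≈ 0.9
```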
Optionally, as shown in fig. 7, after obtaining the quality parameter corresponding to the candidate image, the method further includes:
s702, acquiring the quality parameter of each candidate image in the object image set;
s704, comparing the quality parameters;
and S706, determining the target image meeting the quality condition according to the comparison result of the quality parameters.
Optionally, in a case that the quality parameter of each candidate image in the object image set is obtained through calculation, the quality parameters of all candidate images are compared to determine the target image. The comparison of the quality parameters of all the candidate images is not limited to that all the candidate images are sorted from large to small according to the quality parameters to obtain the candidate image sequence. And determining a target image in the object image set according to the candidate image sequence.
As an optional implementation manner, the determining, as the target image, the candidate image corresponding to the quality parameter that satisfies the quality condition includes: sorting the quality parameters corresponding to the candidate images based on their parameter values, and determining the candidate image corresponding to the quality parameter ranked at a designated position as the target image; or determining the candidate image corresponding to a quality parameter exceeding the quality parameter threshold as the target image.
Optionally, after the quality parameter of each candidate image in the object image set is determined and the candidate image sequence obtained by sorting the quality parameters is available, the quality condition is evaluated. In the case where the quality condition indicates a designated rank, the candidate image located at the designated rank in the candidate image sequence is determined as the target image. For example, if the quality condition indicates that the designated rank is the first rank, the candidate image ranked first, that is, the candidate image with the highest quality parameter value, is taken as the target image. If the quality condition indicates that the designated ranks are the first, second and third ranks, the three candidate images with the top three quality parameter values are all taken as target images.
Optionally, when the quality condition indicates a quality parameter threshold, the candidate image corresponding to the quality parameter exceeding the quality parameter threshold is used as the target image, and the number of the target images is not limited. For example, if the quality parameter threshold is 0.9, all candidate images corresponding to quality parameters whose quality parameters exceed 0.9 are regarded as target images.
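The two kinds of quality condition described above (designated rank, or quality parameter threshold) could be applied as in the following sketch; both helper functions and the 0.9 threshold are illustrative.

```python
def select_by_rank(scored_images, designated_ranks=(1,)):
    """scored_images: list of (quality_parameter, image) pairs. Keep the
    candidate images whose quality parameters are at the designated ranks."""
    ranked = sorted(scored_images, key=lambda item: item[0], reverse=True)
    return [image for rank, (_, image) in enumerate(ranked, start=1)
            if rank in designated_ranks]


def select_by_threshold(scored_images, threshold=0.9):
    """Keep every candidate image whose quality parameter exceeds the threshold."""
    return [image for quality, image in scored_images if quality > threshold]
```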
As an optional implementation manner, after obtaining the quality parameter corresponding to each candidate image, the method further includes: prompting that no target image is determined in the object image set under the condition that the quality parameters of all the candidate images do not satisfy the quality condition.
In the embodiment of the application, the candidate images in the object image set are evaluated and screened by matching the quality parameters against the preset quality condition, so that a target image satisfying the quality condition is obtained. The target object is then identified by using the target image whose integrity degree satisfies the quality condition, so as to improve the recognition success rate and accuracy of the target object.
As an alternative implementation, as shown in fig. 8, before the acquiring the object image set, the method further includes:
s802, a sample image set is obtained, wherein the sample image set comprises a plurality of sample images, the sample images comprise labeling labels of key parts of sample objects, and the labeling labels comprise area labels of key areas where the key parts are located and quality labels of the key parts;
s804, training the initial image recognition model by using the sample image set, wherein the area label is used for optimizing the determination of the key area by the initial image recognition model, and the quality label is used for optimizing the corresponding initial task sub-model in the initial image recognition model;
s806, under the condition that the determination accuracy of the key area is higher than a first threshold value and the recognition accuracy of the initial task sub-models is higher than a second threshold value, determining that the image recognition model comprising the plurality of task sub-models is obtained.
Optionally, the initial image recognition model is trained by using the sample image with the label to obtain the image recognition model. The annotation labels in the sample image include a region label and a quality label. The area label is used for indicating the division of the key area where the key part is located, and whether the division of the key area is correct or not is judged through the area label.
Optionally, the quality label is used to indicate the integrity of the key part, and the quality label is used to determine whether the output of the part parameter of the key part is correct. And training each initial task sub-model by using the identified regional information of the key region to obtain the part parameters output by each initial task sub-model.
Optionally, whether the training of the initial image recognition model is terminated is determined by the determination accuracy of the key area and the recognition accuracy of the task sub-model. And under the condition that the determination accuracy of the key area is lower than a first threshold value or the identification accuracy of any initial task sub-model is lower than a second threshold value, continuing to use the sample image to optimize the initial image identification model so as to improve the accuracy of the quality parameters output by the initial image identification model.
In the embodiment of the application, in the training stage of the image recognition model, the accuracy of key region division and the accuracy of key part recognition are improved through key region division and task sub-model training, so that the accuracy of the quality parameters output by the trained image recognition model is ensured.
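The joint optimization of the key-area determination and the per-part task sub-models might take the form of a multi-task loss, as in the hedged PyTorch sketch below. The model interface, the choice of loss functions (smooth L1 for the region branch, cross-entropy for the integrity levels) and the label formats are assumptions for illustration only.

```python
import torch.nn.functional as F


def training_step(model, images, region_labels, quality_labels, optimizer):
    """One illustrative multi-task training step.

    model(images) is assumed to return predicted key regions (boxes) and,
    for each key part, predicted integrity-level logits.
    """
    pred_regions, pred_levels = model(images)

    # Key-area branch optimized with the area labels (box regression).
    region_loss = F.smooth_l1_loss(pred_regions, region_labels)

    # Each initial task sub-model optimized with the quality labels of its key part.
    task_loss = sum(
        F.cross_entropy(pred_levels[part], quality_labels[part])
        for part in pred_levels
    )

    loss = region_loss + task_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```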
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
According to another aspect of the embodiments of the present invention, there is also provided an image determination apparatus for implementing the image determination method described above. As shown in fig. 9, the apparatus includes:
an acquiring unit 902, configured to acquire an object image set, where the object image set includes a plurality of candidate images in which a target object is detected;
an identifying unit 904, configured to identify, by using an image identification model, each candidate image in the multiple candidate images to obtain a quality parameter corresponding to each candidate image, where the quality parameter is used to indicate a degree of completeness of a key portion of a target object in the corresponding candidate image, the image identification model includes multiple task sub-models, and the task sub-models are used to identify the degree of completeness of the corresponding key portion;
a determining unit 906, configured to determine a candidate image corresponding to the quality parameter meeting the quality condition as a target image, where the target image is used to identify the target object.
Optionally, the identifying unit 904 is configured to perform processing on each candidate image, and includes:
the region module is used for determining key regions containing all key parts in the candidate images;
the input module is used for identifying the key areas of the key parts based on the task sub-models corresponding to the key parts in the image identification model to obtain part parameters corresponding to the key parts, wherein the part parameters are used for indicating the integrity degree of the corresponding key parts;
and the part module is used for obtaining quality parameters according to the part parameters of each key part.
Optionally, the region module includes:
the identification module is used for identifying the integrity level of the key part contained in the region information of the key region;
the matching module is used for acquiring a grade parameter matched with the integrity level;
and the grade module is used for taking the grade parameter as the part parameter of the key part.
Optionally, the part module includes:
the first acquisition module is used for acquiring the part parameters of the key parts output by each task sub-model;
the weighting module is used for determining the weight of the part corresponding to each key part;
and the calculation module is used for performing weighting operation on the part parameters according to the part weights to obtain the quality parameters.
Optionally, the determining unit 906 includes:
the first determining module is used for sequencing the quality parameters corresponding to the candidate images based on the parameter values of the quality parameters, and determining the candidate images corresponding to the quality parameters sequenced in the designated sequence as target images;
and the second determining module is used for determining the candidate image corresponding to the quality parameter exceeding the quality parameter threshold value as the target image.
Optionally, the image determining apparatus further includes a prompting unit, configured to, after the quality parameters corresponding to the candidate images are obtained, prompt that no target image is determined in the object image set when the quality parameters of all the candidate images do not satisfy the quality condition.
Optionally, the image determining apparatus further includes a training unit configured to operate before the object image set is acquired, the training unit including:
a sample module, used for acquiring a sample image set, wherein the sample image set comprises a plurality of sample images, the sample images comprise labeling labels of key parts of sample objects, and the labeling labels comprise area labels of the key areas where the key parts are located and quality labels of the key parts;
the training module is used for training the initial image recognition model by using the sample image set, wherein the area label is used for optimizing the determination of the key area by the initial image recognition model, and the quality label is used for optimizing the corresponding initial task sub-model in the initial image recognition model;
and the completion module is used for determining that the image recognition model comprising the plurality of task sub-models is obtained under the condition that the determination accuracy of the key area is higher than a first threshold and the recognition accuracy of the initial task sub-models is higher than a second threshold.
In the embodiment of the application, for an object image set containing a target object, each candidate image is identified by the image recognition model to perform integrity recognition and obtain the quality parameter of the candidate image. The integrity of each key part of the target object is recognized by the task sub-models in the image recognition model, so that a quality parameter indicating the integrity of the key parts of the target object in the candidate image is obtained, and the candidate image whose quality parameter satisfies the quality condition is determined as the target image. Since each task sub-model in the image recognition model performs integrity recognition on one key part of the target object contained in the image, the resulting quality parameter can indicate the integrity of the target object in the candidate image, and the target image is determined from the candidate images through the quality parameters. This achieves the purpose of determining, from the image set, a target image containing the target object with higher integrity. Using such a target image to identify the target object achieves the technical effect of improving the recognition success rate of the target object, and solves the technical problem of a low target recognition success rate caused by inaccurate determination of the image used for target recognition.
According to still another aspect of the embodiment of the present invention, there is also provided an electronic device for implementing the image determination method, where the electronic device may be the terminal device or the server shown in fig. 1. The present embodiment takes the electronic device as a server as an example for explanation. As shown in fig. 10, the electronic device comprises a memory 1002 and a processor 1004, the memory 1002 having stored therein a computer program, the processor 1004 being arranged to execute the steps of any of the method embodiments described above by means of the computer program.
Optionally, in this embodiment, the electronic device may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, acquiring an object image set, wherein the object image set comprises a plurality of candidate images in which a target object is detected;
s2, identifying each candidate image in the candidate images by using an image identification model to obtain a quality parameter corresponding to each candidate image, wherein the quality parameter is used for indicating the integrity degree of a key part of a target object in the corresponding candidate image, the image identification model comprises a plurality of task sub-models, and the task sub-models are used for identifying the integrity degree of the corresponding key part;
and S3, determining the candidate image corresponding to the quality parameter meeting the quality condition as a target image, wherein the target image is used for identifying the target object.
Alternatively, it can be understood by those skilled in the art that the structure shown in fig. 10 is only an illustration, and the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone, an IOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 10 is a diagram illustrating a structure of the electronic device. For example, the electronic device may also include more or fewer components (e.g., network interfaces, etc.) than shown in FIG. 10, or have a different configuration than shown in FIG. 10.
The memory 1002 may be used to store software programs and modules, such as program instructions/modules corresponding to the image determining method and apparatus in the embodiments of the present invention, and the processor 1004 executes various functional applications and data processing by running the software programs and modules stored in the memory 1002, that is, implements the image determining method described above. The memory 1002 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1002 may further include memory located remotely from the processor 1004, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 1002 may be used to store information such as a target image set, quality conditions, and target images, but is not limited thereto. As an example, as shown in fig. 10, the memory 1002 may include, but is not limited to, the acquiring unit 902, the identifying unit 904, and the determining unit 906 in the image determining apparatus. In addition, other module units in the image determination apparatus may also be included, but are not limited to, and are not described in detail in this example.
Optionally, the above-mentioned transmission device 1006 is used for receiving or sending data via a network. Examples of the network may include a wired network and a wireless network. In one example, the transmission device 1006 includes a Network adapter (NIC) that can be connected to a router via a Network cable and other Network devices so as to communicate with the internet or a local area Network. In one example, the transmission device 1006 is a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In addition, the electronic device further includes: a display 1008 for displaying the set of object images and the target object; and a connection bus 1010 for connecting the respective module parts in the above-described electronic apparatus.
In other embodiments, the terminal device or the server may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by connecting a plurality of nodes through a network communication. Nodes can form a Peer-To-Peer (P2P, Peer To Peer) network, and any type of computing device, such as a server, a terminal, and other electronic devices, can become a node in the blockchain system by joining the Peer-To-Peer network.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the methods provided in the various alternative implementations of the image determination aspect described above. Wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the above-mentioned computer-readable storage medium may be configured to store a computer program for executing the steps of:
s1, acquiring an object image set, wherein the object image set comprises a plurality of candidate images in which a target object is detected;
s2, identifying each candidate image in the candidate images by using an image identification model to obtain a quality parameter corresponding to each candidate image, wherein the quality parameter is used for indicating the integrity degree of a key part of a target object in the corresponding candidate image, the image identification model comprises a plurality of task sub-models, and the task sub-models are used for identifying the integrity degree of the corresponding key part;
and S3, determining the candidate image corresponding to the quality parameter meeting the quality condition as a target image, wherein the target image is used for identifying the target object.
Alternatively, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and decorations can be made without departing from the principle of the present invention, and these modifications and decorations should also be regarded as the protection scope of the present invention.