Image recognition method and device, and training method and device of image recognition model

Document No.: 8664 | Publication date: 2021-09-17

1. A method of training an image recognition model, the method comprising:

inputting an annotation image into a first image recognition model, and inputting a real image into a second image recognition model, wherein the first image recognition model and the second image recognition model are the same image recognition model to be trained;

generating a loss value of the first image recognition model based on a result of a target processing layer in the first image recognition model and a result of the target processing layer in the second image recognition model;

and training the first image recognition model based on the loss value to obtain the trained first image recognition model.

2. The method of claim 1, wherein the generating a loss value for the first image recognition model based on the results of the target processing layer in the first image recognition model and the results of the target processing layer in the second image recognition model comprises:

inputting a result of the target processing layer in the first image recognition model and a result of the target processing layer in the second image recognition model into a preset cross-domain loss function to obtain a first loss value;

generating a loss value for the first image recognition model based on the first loss value, wherein the cross-domain loss function is generated based on a maximum mean discrepancy.

3. The method of claim 1, wherein the training comprises a plurality of training processes;

each training process comprises the following steps:

and updating the weight of the second image recognition model adopted in the previous training process according to the weight of the first image recognition model obtained in the previous training process to obtain the second image recognition model adopted in the current training process.

4. The method of claim 1, wherein the target processing layer is an intermediate processing layer, the target processing layer being included in a training structure of both the first image recognition model and the second image recognition model, the target processing layer being a fully connected layer.

5. The method of claim 2, wherein the generating a loss value for the first image recognition model based on the first loss value comprises:

generating a second loss value according to a cross entropy loss function and the output of the first image recognition model;

generating a third loss value according to a Lovász loss function and the output of the first image recognition model;

and generating a loss value of the first image recognition model according to the first loss value, the second loss value and the third loss value.

6. The method according to one of claims 1 to 5, wherein the output of the image recognition model comprises a mask, the image recognition model being used for identifying a target object in the image, the mask being used for indicating the class and the position of the target object in the image input to the image recognition model.

7. The method of claim 6, wherein the annotation information for the annotated image comprises the mask, the mask comprising masks for respective pixels of the image, the masks for the pixels comprising predetermined color information, different color information indicating different classes of traffic markings.

8. The method of claim 1, wherein the image recognition model comprises an encoder and a decoder;

the forward propagation process in the training includes:

acquiring a feature map of an image input into the image recognition model through the encoder, and carrying out pyramid pooling on the feature map;

generating a feature encoding result of the encoder according to the pyramid pooling result;

performing feature fusion on the feature encoding result and the feature map through the decoder;

obtaining a mask of the input image according to the feature fusion result;

the image recognition model comprises a target convolutional layer, and the target convolutional layer is used for performing depthwise separable convolution processing and dilated convolution processing.

9. An image recognition method, wherein the method adopts the image recognition model trained in any one of claims 1 to 8, the output of the image recognition model comprises a mask, and the image recognition model is used for recognizing a target object in an image, and the target object is a traffic marking.

10. The method of claim 9, wherein the method further comprises:

acquiring positioning information of the target traffic marking indicated by the output mask;

determining a traffic marking reference map corresponding to the positioning information in a traffic marking set;

and determining the missing condition information of the target traffic marking according to the traffic marking reference map, wherein the missing condition information indicates whether the traffic marking is missing or not.

11. The method of claim 9, wherein the determining the missing condition information of the target traffic marking according to the traffic marking reference map comprises:

determining an area ratio between the target traffic marking and the traffic marking reference map;

and determining whether the target traffic marking is missing or not according to the area ratio.

12. An apparatus for training an image recognition model, the apparatus comprising:

an input unit configured to input an annotation image into a first image recognition model and a real image into a second image recognition model, wherein the first image recognition model and the second image recognition model are the same image recognition model to be trained;

a generating unit configured to generate a loss value of the first image recognition model based on a result of a target processing layer in the first image recognition model and a result of the target processing layer in the second image recognition model;

and the training unit is configured to train the first image recognition model based on the loss value to obtain the trained first image recognition model.

13. The apparatus of claim 12, wherein the generating unit is further configured to generate the loss value of the first image recognition model based on the result of the target processing layer in the first image recognition model and the result of the target processing layer in the second image recognition model as follows:

inputting a result of the target processing layer in the first image recognition model and a result of the target processing layer in the second image recognition model into a preset cross-domain loss function to obtain a first loss value;

generating a loss value for the first image recognition model based on the first loss value, wherein the cross-domain loss function is generated based on a maximum mean discrepancy.

14. The apparatus of claim 12, wherein the training comprises a plurality of training procedures;

each training process comprises the following steps:

and updating the weight of the second image recognition model adopted in the previous training process according to the weight of the first image recognition model obtained in the previous training process to obtain the second image recognition model adopted in the current training process.

15. The apparatus of claim 12, wherein the target processing layer is an intermediate processing layer included in training structures of both the first image recognition model and the second image recognition model, the target processing layer being a fully connected layer.

16. The apparatus of claim 13, wherein the generating unit is further configured to perform the generating the loss value of the first image recognition model based on the first loss value as follows:

generating a second loss value according to a cross entropy loss function and the output of the first image recognition model;

generating a third loss value according to a Lovász loss function and the output of the first image recognition model;

and generating a loss value of the first image recognition model according to the first loss value, the second loss value and the third loss value.

17. The apparatus according to one of claims 12-16, wherein the output of the image recognition model comprises a mask, the image recognition model being used for identifying a target object in the image, the mask being used for indicating the class and the position of the target object in the image input to the image recognition model.

18. The apparatus of claim 17, wherein the annotation information for the annotation image comprises the mask, the mask comprising masks for respective pixels of the image, the masks for the pixels comprising predetermined color information, different color information indicating different classes of traffic markings.

19. The apparatus of claim 12, wherein the image recognition model comprises an encoder and a decoder;

the forward propagation process in the training includes:

acquiring a feature map of an image input into the image recognition model through the encoder, and carrying out pyramid pooling on the feature map;

generating a feature encoding result of the encoder according to the pyramid pooling result;

performing feature fusion on the feature encoding result and the feature map through the decoder;

obtaining a mask of the input image according to the feature fusion result;

the image recognition model comprises a target convolutional layer, and the target convolutional layer is used for performing depthwise separable convolution processing and dilated convolution processing.

20. An image recognition apparatus, wherein the apparatus employs the image recognition model trained in any one of claims 12-19, the output of the image recognition model including a mask, the image recognition model being used to identify a target object in an image, the target object being a traffic marking.

21. The apparatus of claim 20, wherein the apparatus further comprises:

an acquisition unit configured to acquire the positioning information of the target traffic marking indicated by the outputted mask;

a reference determining unit configured to determine a traffic marking reference map corresponding to the positioning information in the traffic marking set;

an information determination unit configured to determine, based on the traffic marking reference map, missing condition information of the target traffic marking, wherein the missing condition information indicates whether a traffic marking is missing.

22. The apparatus of claim 20, wherein the information determining unit is further configured to determine the missing condition information of the target traffic marking according to the traffic marking reference map as follows:

determining an area ratio between the target traffic marking and the traffic marking reference map;

and determining whether the target traffic marking is missing or not according to the area ratio.

23. An electronic device, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-11.

24. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-11.

25. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-11.

Background

Image recognition refers to the technique of using a computer to process, analyze and understand images in order to recognize targets and objects of various kinds, and it is a practical application of deep learning algorithms. For example, image recognition includes face recognition and merchandise recognition.

In the related art, image recognition may be employed to identify traffic markings in an image. This approach can replace labor-intensive manual identification, thereby reducing identification errors and improving the stability and accuracy of recognition.

Disclosure of Invention

Provided are a training method and device for an image recognition model, an electronic device and a storage medium.

According to a first aspect, there is provided a training method of an image recognition model, comprising: inputting an annotated image into a first image recognition model, and inputting a real image into a second image recognition model, wherein the first image recognition model and the second image recognition model are the same image recognition model to be trained; generating a loss value of the first image recognition model based on a result of a target processing layer in the first image recognition model and a result of the target processing layer in the second image recognition model; and training the first image recognition model based on the loss value to obtain the trained first image recognition model.

According to a second aspect, an image recognition method is provided, wherein the method adopts the trained image recognition model in the first aspect, the output of the image recognition model comprises a mask, and the image recognition model is used for recognizing a target object in an image, and the target object is a traffic marking.

According to a third aspect, there is provided a training apparatus for an image recognition model, comprising: an input unit configured to input an annotation image into a first image recognition model and a real image into a second image recognition model, wherein the first image recognition model and the second image recognition model are the same image recognition model to be trained; a generating unit configured to generate a loss value of the first image recognition model based on a result of the target processing layer in the first image recognition model and a result of the target processing layer in the second image recognition model; and the training unit is configured to train the first image recognition model based on the loss value, so as to obtain the trained first image recognition model.

According to a fourth aspect, there is provided an image recognition apparatus, wherein the apparatus adopts the trained image recognition model in the third aspect, the output of the image recognition model includes a mask, and the image recognition model is used for recognizing a target object in an image, and the target object is a traffic marking.

According to a fifth aspect, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the embodiments of the method of training an image recognition model.

According to a sixth aspect, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method according to any one of the embodiments of the training method of the image recognition model.

According to a seventh aspect, a computer program product is provided, comprising a computer program which, when being executed by a processor, carries out the method according to any one of the embodiments of the training method of the image recognition model.

According to the scheme disclosed by the present disclosure, two branches can be used for training, and the loss value is generated from the results of the target processing layer in the two branches. This reduces the difference between the training samples and real samples, reduces the training bias caused by data differences, and thereby improves training accuracy.

Drawings

Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:

FIG. 1 is an exemplary system architecture diagram in which some embodiments of the present disclosure may be applied;

FIG. 2 is a flow diagram of one embodiment of a method of training an image recognition model according to the present disclosure;

FIG. 3A is a schematic diagram of an application scenario of a training method of an image recognition model according to the present disclosure;

FIG. 3B is a mask schematic of a training method of an image recognition model according to the present disclosure;

FIG. 4 is a flow diagram of one embodiment of an image recognition method according to the present disclosure;

FIG. 5 is a schematic diagram illustrating an embodiment of an apparatus for training an image recognition model according to the present disclosure;

FIG. 6 is a block diagram of an electronic device for implementing a training method of an image recognition model according to an embodiment of the present disclosure.

Detailed Description

Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.

In the technical solution of the present disclosure, the acquisition, storage, application and other processing of the personal information of the users involved all comply with the provisions of relevant laws and regulations, necessary security measures are taken, and public order and good customs are not violated.

It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.

Fig. 1 illustrates an exemplary system architecture 100 to which an embodiment of the training method of the image recognition model or the training apparatus of the image recognition model of the present disclosure may be applied.

As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.

The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as video applications, live applications, instant messaging tools, mailbox clients, social platform software, and the like, may be installed on the terminal devices 101, 102, and 103.

Here, the terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a display screen, including but not limited to smart phones, tablet computers, e-book readers, laptop portable computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the electronic apparatuses listed above. It may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.

The server 105 may be a server providing various services, such as a background server providing support for the terminal devices 101, 102, 103. The background server can analyze and process the data such as the annotated image and the real image, and feed back the processing result (for example, the trained first image recognition model) to the terminal device.

It should be noted that the training method of the image recognition model provided by the embodiment of the present disclosure may be executed by the server 105 or the terminal devices 101, 102, and 103, and accordingly, the training device of the image recognition model may be disposed in the server 105 or the terminal devices 101, 102, and 103.

It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.

With continued reference to FIG. 2, a flow 200 of one embodiment of a method of training an image recognition model according to the present disclosure is shown. The training method of the image recognition model comprises the following steps:

step 201, inputting the annotation image into a first image recognition model, and inputting the real image into a second image recognition model, wherein the first image recognition model and the second image recognition model are the same image recognition model to be trained.

In this embodiment, an execution subject (for example, the server or the terminal device shown in fig. 1) on which the training method of the image recognition model is executed may obtain the annotation image and the real image. And then, the execution subject inputs the annotation image into the first image recognition model and inputs the real image into the second image recognition model.

The annotated image is an image that has been annotated and has corresponding annotation information. The output here refers to the final output of the image recognition model, i.e., the output of the last layer in the image recognition model. The real image is an image captured in the real world, such as an image of a road surface taken by roadside equipment. In practice, the annotated image may be an image produced during sample augmentation, or it may also be a real image.

The first image recognition model and the second image recognition model are the same image recognition model. That is, two branches for training may be generated from the image recognition model, each branch employing one image recognition model. The executing entity can input the annotation image into one branch and input the real image into the other branch.

The image recognition model is a deep neural network for image recognition. The deep neural network may be various, for example, the deep neural network may be a convolutional neural network.

Step 202, generating a loss value of the first image recognition model based on the result of the target processing layer in the first image recognition model and the result of the target processing layer in the second image recognition model.

In this embodiment, the executing entity may generate a loss value for training the first image recognition model based on a result, i.e., an output, of the target processing layer in the first image recognition model and a result, i.e., an output, of the target processing layer in the second image recognition model.

In practice, the execution subject described above may generate the loss value of the first image recognition model in various ways. For example, the executing entity may obtain a preset loss function, and input both the result of the target processing layer in the first image recognition model and the result of the target processing layer in the second image recognition model into the preset loss function, so as to obtain a loss value.

The target processing layer is a processing layer (layer) in the image recognition model, and may be, for example, a convolutional layer or a pooling layer.

And step 203, training the first image recognition model based on the loss value to obtain the trained first image recognition model.

In this embodiment, the executing entity may train the first image recognition model based on the loss value, so as to obtain the trained first image recognition model. In practice, the executing entity may back-propagate the loss value in the first image recognition model. The trained first image recognition model is obtained through a plurality of training processes (convergence steps), that is, by generating loss values and back-propagating them a plurality of times.

Specifically, because, of the annotated image and the real image, only the annotated image has corresponding annotation information, the training process is performed only on the first image recognition model.
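To make the two-branch flow concrete, the following is a minimal PyTorch sketch of one training step. It is not the patented implementation: the segmentation model class, its `return_target` flag, the `cross_domain_loss` helper and the cross-entropy supervision are all illustrative assumptions standing in for whatever network and losses are actually used.

```python
import copy
import torch

def train_step(model, optimizer, annotated_images, labels, real_images,
               cross_domain_loss, ce_loss):
    # Branch 2 is a copy of the model to be trained; its weights are synchronized
    # with branch 1 before each training process (see the weight-sharing sketch below).
    branch2 = copy.deepcopy(model).eval()

    # Forward the annotated image through branch 1 and the real image through branch 2,
    # collecting the mask logits and the result of the target processing layer.
    logits_1, target_feat_1 = model(annotated_images, return_target=True)
    with torch.no_grad():
        _, target_feat_2 = branch2(real_images, return_target=True)

    # The loss combines supervision on branch 1 (which has annotation information)
    # with a cross-domain term computed from the two target-layer results.
    loss = ce_loss(logits_1, labels) + cross_domain_loss(target_feat_1, target_feat_2)

    # Only branch 1 is trained: back-propagate the loss value and update its weights.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```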

The method provided by the embodiment of the present disclosure can adopt two branches for training and generate a loss value using the results of the target processing layer in the two branches, so that the difference between the training samples and real samples can be reduced, the training bias caused by data differences is reduced, and the training accuracy can be improved.

With continued reference to fig. 3A, fig. 3A is a schematic diagram of an application scenario of the training method of the image recognition model according to the present embodiment. In the application scenario of fig. 3A, the execution subject 301 inputs the annotation image 302 into a first image recognition model, and inputs the real image 303 into a second image recognition model, where the first image recognition model and the second image recognition model are the same image recognition model to be trained. The execution subject 301 generates a loss value 306 for the first image recognition model based on the result 304 of the target processing layer in the first image recognition model and the result 305 of the target processing layer in the second image recognition model. The execution subject 301 trains the first image recognition model based on the loss value 306, resulting in a trained first image recognition model 307.

In some optional implementations of any embodiment of the present disclosure, the generating a loss value of the first image recognition model based on the result of the target processing layer in the first image recognition model and the result of the target processing layer in the second image recognition model may include: inputting a result of the target processing layer in the first image recognition model and a result of the target processing layer in the second image recognition model into a preset cross-domain loss function to obtain a first loss value; and generating a loss value of the first image recognition model based on the first loss value, wherein the cross-domain loss function is generated based on the maximum mean discrepancy.

In these optional implementations, the executing entity may generate a cross-domain loss function based on the maximum mean discrepancy, and input both the result of the target processing layer in the first image recognition model and the result of the target processing layer in the second image recognition model into the cross-domain loss function; the loss value thus obtained is the first loss value.

The execution subject generates a loss value for training the first image recognition model based on the first loss value. In practice, the executing entity may generate a loss value for training the first image recognition model based on the first loss value in various ways. For example, the executing body may directly use the first loss value as a loss value for training the first image recognition model.

In particular, the cross-domain loss function L_MMD can be expressed as:

L_MMD = || (1/|X_S|) Σ_{x_s ∈ X_S} φ(x_s) − (1/|X_T|) Σ_{x_t ∈ X_T} φ(x_t) ||_2^2

where x_s is the result of the target processing layer in the first image recognition model and x_t is the result of the target processing layer in the second image recognition model. X_S and X_T are, respectively, the sets of x_s and x_t collected at each convergence step (training process) during training, and φ(·) is a mapping function. ||·||_2^2 denotes the squared 2-norm, and the two sums run over the elements of X_S and X_T, i.e., x_s ∈ X_S and x_t ∈ X_T.
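As a concrete illustration of the formula above, the sketch below computes this cross-domain term as the squared 2-norm of the difference between the mean target-layer results of the two branches. It is a minimal sketch under the assumption that φ is the identity mapping; a kernel feature map could be substituted for φ.

```python
import torch

def mmd_loss(x_s: torch.Tensor, x_t: torch.Tensor) -> torch.Tensor:
    """Cross-domain loss in the style of the maximum mean discrepancy.

    x_s: target-layer results from the first branch (annotated images), shape (N_s, D)
    x_t: target-layer results from the second branch (real images), shape (N_t, D)
    """
    mean_s = x_s.mean(dim=0)                  # mean embedding over the set X_S
    mean_t = x_t.mean(dim=0)                  # mean embedding over the set X_T
    return torch.sum((mean_s - mean_t) ** 2)  # squared 2-norm of the difference
```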

These implementations can reduce the difference between the source domain and the target domain (the real domain) through the cross-domain loss function, improve the adaptive capability of the model during training, and further improve the convergence speed and accuracy of the model.

Optionally, the generating a loss value of the first image recognition model based on the first loss value may include: generating a second loss value according to a cross entropy loss function and the output of the first image recognition model; generating a third loss value according to a Lovász loss function and the output of the first image recognition model; and generating a loss value of the first image recognition model according to the first loss value, the second loss value and the third loss value.

In these alternative implementations, three loss functions may be utilized to generate the loss value used to train the first image recognition model. Specifically, the three loss functions may include not only the cross-domain loss function but also a cross-entropy loss function and a Lovász (Lovász-softmax) loss function.

The executing entity may generate the loss value from the first loss value, the second loss value and the third loss value in various ways. For example, the executing entity may use the sum of the first loss value, the second loss value and the third loss value as the loss value of the first image recognition model. Alternatively, the executing entity may input this sum into a specified model or formula and take the output value as the loss value; the specified model or formula is used to predict the loss value of the first image recognition model from the input sum.

In particular, the blending loss function L_mix of the first image recognition model can be expressed as:

L_mix = L_CE + λ·L_Lovasz + γ·L_MMD

where L_CE is the cross-entropy loss function, which can be used to generate the second loss value; L_Lovasz is the Lovász loss function, which can be used to generate the third loss value; and L_MMD is the cross-domain loss function, which can be used to generate the first loss value. λ and γ are preset hyper-parameters. The loss value of the first image recognition model may be derived from the blending loss function.
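A hedged sketch of this blending loss in PyTorch follows. It assumes a `lovasz_softmax(probs, labels)` helper supplied by one of the public Lovász-softmax implementations, and λ and γ are passed in as ordinary arguments; none of these names or defaults are prescribed by the disclosure.

```python
import torch
import torch.nn.functional as F

def blending_loss(logits, labels, feat_s, feat_t, lovasz_softmax,
                  lam: float = 1.0, gamma: float = 1.0) -> torch.Tensor:
    """L_mix = L_CE + lam * L_Lovasz + gamma * L_MMD (see the formula above).

    logits:  mask logits from the first branch, shape (N, C, H, W)
    labels:  per-pixel class indices, shape (N, H, W)
    feat_s, feat_t: target-layer results of the two branches, shape (N, D)
    lovasz_softmax: externally supplied Lovász-softmax loss (an assumption here)
    """
    l_ce = F.cross_entropy(logits, labels)                              # second loss value
    l_lovasz = lovasz_softmax(F.softmax(logits, dim=1), labels)         # third loss value
    l_mmd = torch.sum((feat_s.mean(dim=0) - feat_t.mean(dim=0)) ** 2)   # first loss value
    return l_ce + lam * l_lovasz + gamma * l_mmd
```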

These optional implementations determine the loss of the first image recognition model more comprehensively through multiple loss values, which improves the training accuracy of the model.

In some optional implementations of any embodiment of the present disclosure, the training comprises a plurality of training processes; each training process comprises the following steps: and updating the weight of the second image recognition model adopted in the previous training process according to the weight of the first image recognition model obtained in the previous training process to obtain the second image recognition model adopted in the current training process.

In these optional implementations, in each training process (convergence step), the executing entity may share into the second image recognition model the model weights, that is, the model parameters, obtained from the most recent convergence of the first image recognition model. In this way, the weights of the second image recognition model and the first image recognition model are the same before each training process.
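One simple way to realize this weight sharing, sketched here under the assumption that both branches are ordinary PyTorch modules with identical architectures, is to copy the state dict of the first model into the second before each training process.

```python
import torch.nn as nn

def sync_branches(first_model: nn.Module, second_model: nn.Module) -> None:
    # Copy the weights obtained from the previous training process of the first
    # image recognition model into the second image recognition model, so that
    # both branches start the current training process with identical weights.
    second_model.load_state_dict(first_model.state_dict())
    second_model.eval()  # the second branch is only used for forward passes
```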

These implementations share the weights between the two image recognition models participating in training, so that the weights of the two branches remain unified, avoiding deviations in the results of the two branches (such as the results of the target processing layer) caused by unsynchronized weights.

In some optional implementations of any embodiment of the present disclosure, the target processing layer is an intermediate processing layer, the target processing layer is included in training structures of both the first image recognition model and the second image recognition model, and the target processing layer is a fully connected layer.

In these alternative implementations, the target processing layer is an intermediate processing layer of the image recognition model, i.e., neither the first layer nor the last layer. The target processing layer may exist only in the training structure of the image recognition model, i.e., the image recognition model does not contain the target processing layer at prediction time. In particular, the target processing layer may be a fully connected layer.

In practice, the target processing layer, which is a fully connected layer, may be located before the last fully connected layer. The last fully connected layer may be used to output the class and confidence of the target object. Other fully connected layers may also be present in the image recognition model before the target processing layer (and after the decoder).
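To illustrate one way a fully connected target processing layer could be attached so that it is exposed during training and simply ignored at prediction time, the sketch below wraps an arbitrary backbone. The feature dimensions, the pooling of features into a vector, and the class count are illustrative assumptions, not the disclosed architecture.

```python
import torch.nn as nn

class ModelWithTargetLayer(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int = 256, num_classes: int = 8):
        super().__init__()
        self.backbone = backbone                             # produces (N, feat_dim) fused features
        self.target_fc = nn.Linear(feat_dim, feat_dim)       # target processing layer (training structure only)
        self.classifier = nn.Linear(feat_dim, num_classes)   # last fully connected layer

    def forward(self, x, return_target: bool = False):
        feats = self.backbone(x)              # fused features after the decoder
        hidden = self.target_fc(feats)        # result of the target processing layer
        logits = self.classifier(hidden)      # class and confidence output
        if return_target:
            return logits, hidden             # expose the target-layer result for the loss
        return logits                         # at prediction time only the final output is used
```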

These implementations set a fully connected layer as the target processing layer, so that the loss value is determined from features of the two branches that have higher dimensionality and a higher degree of fusion, yielding a more accurate loss value and improving the accuracy of model training.

In some optional implementations of any embodiment of the disclosure, the output of the image recognition model includes a mask, the image recognition model is for recognizing the target object in the image, and the mask is for indicating a category and a location of the target object in the image input to the image recognition model.

In these alternative implementations, the output of the image recognition models (the first image recognition model and the second image recognition model) includes a mask. The image recognition model is used to recognize a target object in the image, which may be arbitrary, such as a flower or a human face, etc.

The category of the target object may be, for example, "Chinese rose", "lily", or the like. The position may be expressed in various ways, such as the coordinates of a rectangular box.

These implementations may distinguish the target object from the background through a mask.

In some optional application scenarios of these implementations, the annotation information for annotating the image includes a mask, the mask includes masks for respective pixels of the image, the masks for the pixels include preset color information, and different color information indicates different types of traffic markings.

In these alternative application scenarios, the annotation information may comprise a mask. The mask may refer to a mask for each pixel in the image, and the mask for each pixel is represented using color information. Different color information represents different classes of traffic markings. For example, if a pixel is represented in red, the pixel indicates a straight line in the traffic marking. If a pixel is represented in pink, the pixel indicates a plurality of parallel line segments in the traffic marking, namely a zebra crossing. A pixel represented in black indicates a non-traffic-marking region. In addition, both the annotation information and the output of the model may include not only the mask of the pixels but also the confidence of the mask.
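To make the color-coded annotation concrete, the sketch below converts an RGB annotation mask into a per-pixel class-index map. The particular color-to-class assignments are illustrative assumptions based only on the examples above (red for straight lines, pink for zebra crossings, black for background); the real palette is whatever the annotation scheme defines.

```python
import numpy as np

# Illustrative color-to-class mapping; the actual palette depends on the annotation scheme.
COLOR_TO_CLASS = {
    (0, 0, 0): 0,        # black -> not a traffic marking (background)
    (255, 0, 0): 1,      # red   -> straight line
    (255, 192, 203): 2,  # pink  -> zebra crossing (parallel line segments)
}

def color_mask_to_labels(mask_rgb: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) color-coded annotation mask into an (H, W) class-index map."""
    labels = np.zeros(mask_rgb.shape[:2], dtype=np.int64)
    for color, cls in COLOR_TO_CLASS.items():
        matches = np.all(mask_rgb == np.array(color, dtype=mask_rgb.dtype), axis=-1)
        labels[matches] = cls
    return labels
```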

As shown in FIG. 3B, the annotation image is shown on the left and the mask on the right.

These application scenarios allow pixel-level annotation and prediction of the image in the traffic marking recognition scenario, thereby improving the accuracy of the image recognition model.

In some optional implementations of any embodiment of the present disclosure, the image recognition model may include an encoder and a decoder; the forward propagation process of the image recognition model during training and the image recognition process using the model may each include: acquiring a feature map of the image input into the image recognition model through the encoder, and carrying out pyramid pooling on the feature map; generating a feature encoding result of the encoder according to the pyramid pooling result; performing feature fusion on the feature encoding result and the feature map through the decoder; and obtaining a mask of the input image according to the feature fusion result, wherein the image recognition model comprises a target convolutional layer, and the target convolutional layer is used for performing depthwise separable convolution processing and dilated convolution processing.

In these alternative implementations, the executing entity may input an image (such as an annotation image, or an image to be identified during prediction) into the encoder, where a feature map of the image is obtained. Specifically, the encoder may determine the feature map by using a feature map generation step of a deep neural network. For example, the feature map may be generated by cascaded convolutional layers in a deep neural network, or by a combination of convolutional layers and fully connected layers.

Then, the executing entity may perform pyramid pooling on the feature map in the encoder to obtain a pyramid pooling result. In practice, the executing entity may generate the feature encoding result of the encoder, i.e., the output of the encoder, from the pyramid pooling result in various ways. For example, the executing entity may directly use the pyramid pooling result as the feature encoding result of the encoder. Alternatively, the executing entity may perform preset processing on the pyramid pooling result and use the processed result as the feature encoding result. For example, the preset processing may include at least one of: further convolution, passing through fully connected layers, changing dimensions, and so on.

The execution body may perform feature fusion (concat) on the feature encoding result and the feature map in the decoder. Optionally, at least one of the feature encoding result and the feature map may be preprocessed prior to the fusing. For example, the feature encoding result may be upsampled, and the feature map may be upscaled.

The executing entity may obtain the mask of the input image from the feature fusion result in various ways. For example, the executing entity may input the feature fusion result into a fully connected layer to obtain the mask. In addition, the executing entity may also upsample the result of the fully connected layer to obtain the mask.

The convolutional layers included in the image recognition model include a target convolutional layer, which can perform not only depthwise separable convolution processing but also dilated convolution processing.

In practice, the network structure of the image recognition model may be a preset network structure (such as DeepLabv3+).
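The forward propagation just described (encoder feature map, pyramid pooling, feature encoding, decoder fusion, upsampling to a mask, and a depthwise separable dilated convolution) can be sketched as a simplified DeepLabv3+-style network. This is only an illustrative stand-in: the channel widths, pooling scales and layer counts are assumptions, not the disclosed architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthwiseSeparableDilatedConv(nn.Module):
    """Target convolutional layer: depthwise separable convolution with dilation."""
    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=dilation,
                                   dilation=dilation, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class SimpleSegNet(nn.Module):
    def __init__(self, num_classes: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(                    # produces the feature map
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            DepthwiseSeparableDilatedConv(64, 128), nn.ReLU(inplace=True),
        )
        self.pool_scales = (1, 2, 4)                     # pyramid pooling scales (assumed)
        self.reduce = nn.Conv2d(128 * len(self.pool_scales), 128, 1)
        self.decoder = nn.Conv2d(128 + 128, num_classes, 3, padding=1)

    def forward(self, x):
        feat = self.encoder(x)                           # feature map of the input image
        h, w = feat.shape[-2:]
        # Pyramid pooling: pool the feature map at several scales, then resize back.
        pooled = [F.interpolate(F.adaptive_avg_pool2d(feat, s), size=(h, w),
                                mode="bilinear", align_corners=False)
                  for s in self.pool_scales]
        encoding = self.reduce(torch.cat(pooled, dim=1))  # feature encoding result
        fused = torch.cat([encoding, feat], dim=1)        # decoder feature fusion (concat)
        logits = self.decoder(fused)                      # per-pixel class logits
        # Upsample to the input resolution; a per-pixel argmax over classes gives the mask.
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)
```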

These implementations fuse shallow features with deep features, so that richer and more comprehensive features are obtained, and more accurate masks can be obtained in training or prediction.

In some optional implementations of any embodiment of the present disclosure, an image recognition method is provided, where the trained image recognition model (i.e., the trained first image recognition model) in any embodiment of the present disclosure is used, an output of the image recognition model includes a mask, the image recognition model is used to recognize a target object in an image, and the target object is a traffic marking.

These implementations can accurately identify traffic markings by means of the mask.

With further reference to fig. 4, a flow 400 of yet another implementation of an image recognition method is shown. The process 400 includes the following steps:

step 401, obtaining the positioning information of the target traffic marking indicated by the outputted mask.

In this embodiment, an execution subject (for example, the server or the terminal device shown in fig. 1) on which the image recognition method is executed may acquire the positioning information of the traffic marking indicated by the mask, that is, the target traffic marking. The positioning information herein may refer to geographical location information. The traffic markings in the annotation image may have positioning information corresponding thereto.

Step 402, determining a traffic marking reference map corresponding to the positioning information in the traffic marking set.

In this embodiment, the executing entity may determine a traffic marking reference map corresponding to the positioning information in a traffic marking set. The traffic marking set may comprise standard traffic markings of multiple categories, namely traffic marking reference maps. The executing entity may also acquire the correspondence between positioning information and traffic marking reference maps. Thus, the executing entity can find the traffic marking reference map corresponding to the positioning information through the positioning information.

And step 403, determining the missing condition information of the target traffic marking according to the traffic marking reference map, wherein the missing condition information indicates whether the traffic marking is missing or not.

In this embodiment, the executing entity may determine the missing condition information of the target traffic marking according to the traffic marking reference map in various ways. For example, the executing entity may input both the traffic marking reference map and the target traffic marking into a preset model, and obtain the missing condition information output by the preset model. The preset model can predict the missing condition information from the traffic marking reference map and the target traffic marking.

A missing condition here refers to, for example, discontinuous lines, defective patterns, faded or discolored markings, and the like.

These implementations can accurately judge whether traffic markings are missing by using the mask predicted by the image recognition model.

Optionally, step 403 may include: determining an area ratio between the target traffic marking and the traffic marking reference map; and determining whether the target traffic marking is missing or not according to the area ratio.

In these alternative implementations, the executing entity may determine the area ratio of the target traffic marking relative to the traffic marking reference map, and determine whether the target traffic marking is missing according to the area ratio. Specifically, the executing entity may compare the area ratio with a ratio threshold and determine whether there is a missing condition according to the comparison result. For example, if the area ratio reaches the ratio threshold, it may be determined that there is no missing condition; otherwise, it may be determined that there is a missing condition.
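As an illustration, the sketch below looks up the reference map by the positioning information, compares the pixel area of the recognized marking with that of the reference, and flags a missing condition when the ratio falls below a threshold. The dictionary-based lookup and the 0.9 threshold are assumptions for the example only; the disclosure does not fix a particular threshold.

```python
import numpy as np

def is_marking_missing(pred_mask: np.ndarray, location_key,
                       reference_maps: dict, ratio_threshold: float = 0.9) -> bool:
    """Return True if the target traffic marking appears to be missing or defective.

    pred_mask:      boolean (H, W) mask of the target traffic marking output by the model
    location_key:   positioning information used to look up the reference map
    reference_maps: mapping from positioning information to a boolean reference mask
    """
    ref_mask = reference_maps[location_key]          # traffic marking reference map
    ref_area = float(ref_mask.sum())
    if ref_area == 0:
        return False                                 # no reference marking at this location
    area_ratio = float(pred_mask.sum()) / ref_area   # area of marking vs. reference
    return area_ratio < ratio_threshold              # below the threshold -> missing condition
```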

These implementations can accurately judge, according to the area ratio, whether the target traffic marking is missing.

The image recognition process using the image recognition model may include: acquiring a feature map of the image input into the image recognition model through the encoder, and carrying out pyramid pooling on the feature map; generating a feature encoding result of the encoder according to the pyramid pooling result; performing feature fusion on the feature encoding result and the feature map through the decoder; and obtaining a mask of the input image according to the feature fusion result, wherein the image recognition model comprises a target convolutional layer, and the target convolutional layer is used for performing depthwise separable convolution processing and dilated convolution processing.

With further reference to fig. 5, as an implementation of the method shown in fig. 2, the present disclosure provides an embodiment of an apparatus for training an image recognition model, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and besides the features described below, the embodiment of the apparatus may further include the same or corresponding features or effects as the embodiment of the method shown in fig. 2. The device can be applied to various electronic equipment.

As shown in fig. 5, the training apparatus 500 for an image recognition model of the present embodiment includes: an input unit 501, a generating unit 502, and a training unit 503. The input unit 501 is configured to input an annotation image into a first image recognition model and a real image into a second image recognition model, wherein the first image recognition model and the second image recognition model are the same image recognition model to be trained; the generating unit 502 is configured to generate a loss value of the first image recognition model based on a result of the target processing layer in the first image recognition model and a result of the target processing layer in the second image recognition model; and the training unit 503 is configured to train the first image recognition model based on the loss value, resulting in the trained first image recognition model.

In this embodiment, specific processes of the input unit 501, the generating unit 502, and the training unit 503 of the training apparatus 500 for image recognition models and technical effects thereof can refer to the related descriptions of step 201, step 202, and step 203 in the corresponding embodiment of fig. 2, respectively, and are not described herein again.

In some optional implementations of the embodiment, the generating unit is further configured to generate the loss value of the first image recognition model based on the result of the target processing layer in the first image recognition model and the result of the target processing layer in the second image recognition model as follows: inputting a result of the target processing layer in the first image recognition model and a result of the target processing layer in the second image recognition model into a preset cross-domain loss function to obtain a first loss value; and generating a loss value of the first image recognition model based on the first loss value, wherein the cross-domain loss function is generated based on the maximum mean discrepancy.

In some optional implementations of this embodiment, the training includes a plurality of training processes; each training process comprises the following steps: and updating the weight of the second image recognition model adopted in the previous training process according to the weight of the first image recognition model obtained in the previous training process to obtain the second image recognition model adopted in the current training process.

In some optional implementations of this embodiment, the target processing layer is an intermediate processing layer, and the target processing layer is included in the training structures of both the first image recognition model and the second image recognition model, and is a fully connected layer.

In some optional implementations of the embodiment, the generating unit is further configured to generate the loss value of the first image recognition model based on the first loss value as follows: generating a second loss value according to a cross entropy loss function and the output of the first image recognition model; generating a third loss value according to a Lovász loss function and the output of the first image recognition model; and generating a loss value of the first image recognition model according to the first loss value, the second loss value and the third loss value.

In some optional implementations of the embodiment, the output of the image recognition model includes a mask, the image recognition model is used for recognizing the target object in the image, and the mask is used for indicating the category and the position of the target object in the image input to the image recognition model.

In some optional implementations of the embodiment, the annotation information for annotating the image includes a mask, the mask includes masks for respective pixels of the image, the masks for the pixels include preset color information, and different color information indicates different types of traffic markings.

In some optional implementations of this embodiment, the image recognition model includes an encoder and a decoder, and the forward propagation process in training includes: acquiring a feature map of the image input into the image recognition model through the encoder, and carrying out pyramid pooling on the feature map; generating a feature encoding result of the encoder according to the pyramid pooling result; performing feature fusion on the feature encoding result and the feature map through the decoder; and obtaining a mask of the input image according to the feature fusion result. The image recognition model comprises a target convolutional layer, and the target convolutional layer is used for performing depthwise separable convolution processing and dilated convolution processing.

The present disclosure further provides an embodiment of an image recognition apparatus. The apparatus can be applied to various electronic devices.

The device adopts the trained image recognition model, the output of the image recognition model comprises a mask, the image recognition model is used for recognizing a target object in an image, and the target object is a traffic marking.

In some optional implementations of this embodiment, the apparatus further includes: an acquisition unit configured to acquire the positioning information of the target traffic marking indicated by the output mask; a reference determining unit configured to determine a traffic marking reference map corresponding to the positioning information in the traffic marking set; and an information determination unit configured to determine, based on the traffic marking reference map, missing condition information of the target traffic marking, wherein the missing condition information indicates whether the traffic marking is missing.

In some optional implementations of the embodiment, the information determining unit is further configured to determine the missing condition information of the target traffic marking according to the traffic marking reference map as follows: determining an area ratio between the target traffic marking and the traffic marking reference map; and determining whether the target traffic marking is missing or not according to the area ratio.

The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.

Fig. 6 is a block diagram of an electronic device for the training method of an image recognition model according to an embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.

As shown in fig. 6, the electronic apparatus includes: one or more processors 601, memory 602, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 6, one processor 601 is taken as an example.

The memory 602 is a non-transitory computer readable storage medium provided by the present disclosure. Wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of training an image recognition model provided by the present disclosure. A non-transitory computer-readable storage medium of the present disclosure stores computer instructions for causing a computer to perform a training method of an image recognition model provided by the present disclosure.

The memory 602, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the training method of the image recognition model in the embodiments of the present disclosure (e.g., the input unit 501, the generation unit 502, and the training unit 503 shown in fig. 5). The processor 601 executes various functional applications of the server and data processing by running non-transitory software programs, instructions and modules stored in the memory 602, namely, implements the training method of the image recognition model in the above method embodiment.

The memory 602 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created from use of the training electronic device of the image recognition model, and the like. Further, the memory 602 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 602 optionally includes memory located remotely from the processor 601, and these remote memories may be connected to the training electronics of the image recognition model via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The electronic device of the training method of the image recognition model may further include: an input device 603 and an output device 604. The processor 601, the memory 602, the input device 603 and the output device 604 may be connected by a bus or other means, and fig. 6 illustrates the connection by a bus as an example.

The input device 603 may receive input numeric or character information and generate key signal inputs related to user settings and function controls of the training electronics of the image recognition model, such as a touch screen, keypad, mouse, track pad, touch pad, pointer stick, one or more mouse buttons, track ball, joystick, or other input device. The output devices 604 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.

Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.

These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.

The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host; it is a host product in a cloud computing service system and overcomes the defects of difficult management and weak service scalability found in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system, or a server combined with a blockchain.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor including an input unit, a generation unit, and a training unit. The names of these units do not, in some cases, constitute a limitation of the units themselves; for example, the input unit may also be described as "a unit that inputs an annotation image into a first image recognition model and a real image into a second image recognition model".
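Purely as an illustration of that unit decomposition, the following is a minimal, hypothetical sketch of the three units in Python. The class names mirror the input, generation, and training units named above, but the signatures and the PyTorch-style optimizer and autograd are assumptions, not the disclosure's concrete implementation.

```python
# Hypothetical sketch of the unit decomposition named above; everything
# beyond the unit names themselves is an illustrative assumption.
class InputUnit:
    def __call__(self, first_model, second_model, annotation_img, real_img):
        # Input an annotation image into the first image recognition model
        # and a real image into the second image recognition model.
        return first_model(annotation_img), second_model(real_img)


class GenerationUnit:
    def __init__(self, loss_fn):
        self.loss_fn = loss_fn  # e.g. a distance between target-layer results

    def __call__(self, first_layer_result, second_layer_result):
        # Generate the loss value of the first model from the results of the
        # target processing layer in the two models.
        return self.loss_fn(first_layer_result, second_layer_result)


class TrainingUnit:
    def __init__(self, optimizer):
        self.optimizer = optimizer  # optimizer over the first model's weights

    def __call__(self, loss):
        # Train the first image recognition model based on the loss value.
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
```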

As another aspect, the present disclosure also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: input an annotation image into a first image recognition model and input a real image into a second image recognition model, wherein the first image recognition model and the second image recognition model are the same image recognition model to be trained; generate a loss value of the first image recognition model based on a result of a target processing layer in the first image recognition model and a result of the target processing layer in the second image recognition model; and train the first image recognition model based on the loss value to obtain the trained first image recognition model.
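As a concrete illustration of the three operations just listed, the following is a minimal sketch assuming a PyTorch-style model whose target processing layer can be observed through a forward hook. The pairing of the two models, the layer name, the treatment of the second model as a fixed reference within a step, and the `layer_loss` placeholder are assumptions, not the disclosure's implementation.

```python
# Minimal training-step sketch under the assumptions stated above.
import copy

import torch


def build_pair(model_to_train):
    # The first and second models are the same image recognition model to be trained.
    return model_to_train, copy.deepcopy(model_to_train)


def hook_target_layer(model, layer_name, store):
    # Capture the result of the target processing layer via a forward hook.
    layer = dict(model.named_modules())[layer_name]
    return layer.register_forward_hook(
        lambda module, inputs, output: store.update(result=output))


def train_step(first_model, second_model, annotation_img, real_img,
               first_store, second_store, layer_loss, optimizer):
    # Input the annotation image into the first model and the real image
    # into the second model; the hooks fill the two result stores.
    first_model(annotation_img)
    with torch.no_grad():
        second_model(real_img)
    # Generate the loss value of the first model from the two target-layer results.
    loss = layer_loss(first_store["result"], second_store["result"])
    # Train the first model based on that loss value.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In such a setup the hooks would be registered once before the training loop, for example `hook_target_layer(first_model, "fc", first_store)` (where "fc" is only an example layer name), and `train_step` would then be called once per batch.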

As another aspect, the present disclosure also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to use an image recognition model obtained by any of the above training methods, wherein the output of the image recognition model includes a mask, the image recognition model is used for recognizing a target object in an image, and the target object is a traffic marking.
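For orientation only, the following hypothetical inference sketch shows one way such a mask output could be materialized, with per-pixel color information indicating the traffic marking class; the color table, class indices, tensor shapes, and model interface are illustrative assumptions rather than the disclosure's format.

```python
# Hypothetical inference sketch under the assumptions stated above.
import torch

COLOR_TABLE = {          # illustrative: class index -> RGB color in the mask
    0: (0, 0, 0),        # background
    1: (255, 255, 255),  # example traffic marking class (e.g. a lane line)
    2: (255, 255, 0),    # another example traffic marking class
}


@torch.no_grad()
def predict_mask(model, image):
    # image: float tensor of shape (1, 3, H, W); returns an (H, W, 3) color mask.
    logits = model(image)               # assumed shape (1, num_classes, H, W)
    classes = logits.argmax(dim=1)[0]   # (H, W) per-pixel class index
    mask = torch.zeros(*classes.shape, 3, dtype=torch.uint8)
    for cls, color in COLOR_TABLE.items():
        mask[classes == cls] = torch.tensor(color, dtype=torch.uint8)
    return mask
```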

The foregoing description is only a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention referred to in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features; it should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
