Single-sample target detection method based on mutual global context attention mechanism
1. A single-sample target detection method based on a mutual global context attention mechanism, characterized by comprising the following steps:
S1: constructing a feature extraction module to obtain the features of the input query image and the features of the support image;
S2: constructing a global context module to obtain the global context features of the query image from the features of the query image, and the global context features of the support image from the features of the support image;
S3: constructing a feature migration module to obtain channel-level dependency information from the global context features, which enhances the feature information at the channel level; the channel-level dependency information of the support image is obtained from the global context features of the support image, and the channel-level dependency information of the query image is obtained from the global context features of the query image;
S4: constructing a fusion module to fuse the channel-level dependency information of the query image with the features of the support image, and the channel-level dependency information of the support image with the features of the query image;
S5: constructing a region-of-interest suggestion module to obtain regions of interest from the fused query image;
S6: constructing a category-agnostic classifier, concatenating the features of each region of interest with the features of the fused support image, and determining from the classification features whether the region of interest contains the target category; the model outputs the predicted position and category information of the target category in the query image, and during training the model is optimized for the single-sample scenario with a Loss function that computes the position loss and the classification loss.
2. The method of claim 1, wherein step S1 specifically comprises:
S11: acquiring the similarity between the category names in the COCO dataset and the category names in the ImageNet dataset according to the ImageNet-2012 dataset and the WordNet label information of the COCO dataset;
S12: removing the classes whose similarity is higher than 0.3 from the ImageNet-2012 dataset, so that the pre-trained model never sees the classes of the COCO dataset;
S13: training the feature extraction module ResNet-50 with the filtered dataset;
S14: inputting the query image and the support image into the feature extraction module ResNet-50 to obtain the features Q_j of the query image and the features S_i of the support image, respectively.
3. The method of claim 2, wherein step S2 specifically comprises:
S21: constructing a global context module comprising, in sequence, a 1×1 convolutional layer W_k and a softmax function, and acquiring through it the attention weight a_i of the support image and the attention weight a_j of the query image, respectively;
S22: performing matrix multiplication between the features S_i of the support image and the attention weight a_i of the support image to obtain the global context feature F_GC^S of the support image:
F_GC^S = S_i · a_i;
S23: performing matrix multiplication between the features Q_j of the query image and the attention weight a_j of the query image to obtain the global context feature F_GC^Q of the query image:
F_GC^Q = Q_j · a_j.
4. The method of claim 3, wherein step S3 specifically comprises:
S31: constructing a feature migration module comprising a query migration module and a support migration module; the query migration module comprises, in sequence, a 1×1 convolutional layer W_V1, a layer normalization function, a ReLU activation function, and a 1×1 convolutional layer W_V2; the support migration module comprises, in sequence, a 1×1 convolutional layer W_C1, a layer normalization function, a ReLU activation function, and a 1×1 convolutional layer W_C2;
S32: with ReLU denoting the ReLU activation function and LN the layer normalization function, the channel-level dependency D_Q of the migrated query image is:
D_Q = W_V2 · ReLU(LN(W_V1 · F_GC^Q)),
and the channel-level dependency D_S of the migrated support image is:
D_S = W_C2 · ReLU(LN(W_C1 · F_GC^S)).
5. The method of claim 1, wherein step S4 specifically comprises:
S41: constructing a feature fusion module;
S42: letting S be the features of the support image and D_Q the channel-level dependency information of the query image, fusing D_Q with S to obtain the fused support-image features S̃;
S43: letting Q be the features of the query image and D_S the channel-level dependency information of the support image, fusing D_S with Q to obtain the fused query-image features Q̃.
6. The method of claim 5, wherein step S5 specifically comprises: inputting the features of the fused query image into the region suggestion module RPN Head, which outputs a series of region-of-interest boxes and their corresponding confidences.
7. The method of claim 6, wherein step S6 specifically comprises:
S61: constructing a category-agnostic classifier comprising, in sequence, a first fully-connected layer, a ReLU activation function, and a second fully-connected layer; with N denoting the dimensionality of the image features output by the feature extraction module, the first fully-connected layer maps from 2N to 512 dimensions and the second fully-connected layer maps from 512 to 2 dimensions;
S62: letting R_M be the M-th region of interest of the query image, concatenating the features of the region of interest with the fused support-image features S̃; the vector F_C obtained after concatenation, carrying both the support-image and the query-image features, is:
F_C = [R_M, S̃];
this vector is input into the category-agnostic classifier to acquire the probability that the region of interest and the object in the support image belong to the same category, and the probability that the region of interest is background;
S63: letting FCC(F_C) be the output of the fully-connected layers, y_i the true label of the i-th sample, P_i the class score output by the model, and M = -0.3 a constant, the MarginRankingLoss distance-based ranking loss function L_MR is:
L_MR(FCC(F_C)) = max(0, -y_i · P_i + M);
letting L_CE be the cross-entropy loss function and L_Reg the bounding-box regression loss function, the model is optimized for the single-sample scenario during training with a Loss function computing the position loss and the classification loss:
Loss = L_CE + L_Reg + L_MR.
8. A computer storage medium, characterized in that it stores a computer program executable by a computer processor, the computer program performing the single-sample target detection method based on the mutual global context attention mechanism of any one of claims 1 to 7.
Background
Single-sample object detection (One-Shot Object Detection) is a special scenario of object detection. Object detection refers to determining, given an image, the positions of objects of a target class within it together with their class information. Single-sample object detection refers to finding the position of a target class in a target image and determining its class when only one sample of the new class is available. This single sample is generally referred to herein as the support image, and the target image is referred to as the query image.
At present, computer vision algorithms based on deep neural networks (DNNs) achieve the best results in fields such as image classification, target detection, and instance segmentation. However, obtaining a deep learning model with excellent performance requires a great deal of manpower and material resources to collect data, and a great deal of computing power for training iterations. In some cases, such as rare animal classification and industrial product defect detection, sufficient sample data cannot be obtained, making it difficult to apply deep learning-based methods.
Few-shot learning refers to training with only a few samples, and was proposed to solve machine learning problems in scenarios where samples are limited. Good progress has been made in few-shot image classification. Methods for few-shot image classification can be roughly divided into two types: the first is metric learning and the second is meta-learning. A paradigm of the metric-learning-based methods is to extract image features with a feature extractor, compute the distance between the features (or between the mapped vectors) with some metric, and judge from this distance whether the test image and the sample image belong to the same category. The idea of meta-learning is more involved: meta-learning tries to let the model learn how to learn. Specifically, the task is divided into many small few-shot tasks, and the model learns a learning path over these small tasks, so that at test time it can quickly reach a reasonably good result with only a few samples. Because target detection is more complex than image classification, few-shot target detection has received less attention and has less related work. Current achievements in few-shot target detection mainly focus on transfer learning, meta-learning, and metric learning.
Recently, Hao Chen proposed a regularization method to reduce overfitting of few-shot target detection models during transfer learning; such methods inevitably lose part of the recognition accuracy on seen classes while achieving recognition of new classes. The paradigm of metric-learning-based methods is to directly replace the classifier in a target detector with a few-shot image classification method, thereby achieving few-shot target detection. Ting-I Hsieh proposed the novel Co-Attention and Co-Excitation mechanism, which uses information from the support image to improve the model's recognition of unseen classes; however, the Non-local mechanism it uses does not achieve the expected effect and its computation cost is large.
Disclosure of Invention
The technical problem to be solved by the invention is to improve the accuracy of single-sample target detection; to this end, a single-sample target detection method based on a mutual global context attention mechanism is provided.
The technical scheme adopted by the invention to solve this technical problem is as follows: the single-sample target detection method based on the mutual global context attention mechanism comprises the following steps:
S1: constructing a feature extraction module to obtain the features of the input query image and the features of the support image;
S2: constructing a global context module to obtain the global context features of the query image from the features of the query image, and the global context features of the support image from the features of the support image;
S3: constructing a feature migration module to obtain channel-level dependency information from the global context features, which enhances the feature information at the channel level; the channel-level dependency information of the support image is obtained from the global context features of the support image, and the channel-level dependency information of the query image is obtained from the global context features of the query image;
S4: constructing a fusion module to fuse the channel-level dependency information of the query image with the features of the support image, and the channel-level dependency information of the support image with the features of the query image;
S5: constructing a region-of-interest suggestion module to obtain regions of interest from the fused query image;
S6: constructing a category-agnostic classifier, concatenating the features of each region of interest with the features of the fused support image, and determining from the classification features whether the region of interest contains the target category; the model outputs the predicted position and category information of the target category in the query image, and during training the model is optimized for the single-sample scenario with a Loss function that computes the position loss and the classification loss.
According to the above scheme, step S1 specifically comprises:
S11: acquiring the similarity between the category names in the COCO dataset and the category names in the ImageNet dataset according to the ImageNet-2012 dataset and the WordNet label information of the COCO dataset;
S12: removing the classes whose similarity is higher than 0.3 from the ImageNet-2012 dataset, so that the pre-trained model never sees the classes of the COCO dataset;
S13: training the feature extraction module ResNet-50 with the filtered dataset;
S14: inputting the query image and the support image into the feature extraction module ResNet-50 to obtain the features Q_j of the query image and the features S_i of the support image, respectively.
Further, step S2 specifically comprises:
S21: constructing a global context module comprising, in sequence, a 1×1 convolutional layer W_k and a softmax function, and acquiring through it the attention weight a_i of the support image and the attention weight a_j of the query image, respectively;
S22: performing matrix multiplication between the features S_i of the support image and the attention weight a_i of the support image to obtain the global context feature F_GC^S of the support image:
F_GC^S = S_i · a_i;
S23: performing matrix multiplication between the features Q_j of the query image and the attention weight a_j of the query image to obtain the global context feature F_GC^Q of the query image:
F_GC^Q = Q_j · a_j.
Further, step S3 specifically comprises:
S31: constructing a feature migration module comprising a query migration module and a support migration module; the query migration module comprises, in sequence, a 1×1 convolutional layer W_V1, a layer normalization function, a ReLU activation function, and a 1×1 convolutional layer W_V2; the support migration module comprises, in sequence, a 1×1 convolutional layer W_C1, a layer normalization function, a ReLU activation function, and a 1×1 convolutional layer W_C2;
S32: with ReLU denoting the ReLU activation function and LN the layer normalization function, the channel-level dependency D_Q of the migrated query image is:
D_Q = W_V2 · ReLU(LN(W_V1 · F_GC^Q)),
and the channel-level dependency D_S of the migrated support image is:
D_S = W_C2 · ReLU(LN(W_C1 · F_GC^S)).
According to the above scheme, step S4 specifically comprises:
S41: constructing a feature fusion module;
S42: letting S be the features of the support image and D_Q the channel-level dependency information of the query image, fusing D_Q with S to obtain the fused support-image features S̃;
S43: letting Q be the features of the query image and D_S the channel-level dependency information of the support image, fusing D_S with Q to obtain the fused query-image features Q̃.
Further, step S5 specifically comprises: inputting the features of the fused query image into the region suggestion module RPN Head, which outputs a series of region-of-interest boxes and their corresponding confidences.
Further, step S6 specifically comprises:
S61: constructing a category-agnostic classifier comprising, in sequence, a first fully-connected layer, a ReLU activation function, and a second fully-connected layer; with N denoting the dimensionality of the image features output by the feature extraction module, the first fully-connected layer maps from 2N to 512 dimensions and the second fully-connected layer maps from 512 to 2 dimensions;
S62: letting R_M be the M-th region of interest of the query image, concatenating the features of the region of interest with the fused support-image features S̃; the vector F_C obtained after concatenation, carrying both the support-image and the query-image features, is:
F_C = [R_M, S̃];
this vector is input into the category-agnostic classifier to acquire the probability that the region of interest and the object in the support image belong to the same category, and the probability that the region of interest is background;
S63: letting FCC(F_C) be the output of the fully-connected layers, y_i the true label of the i-th sample, P_i the class score output by the model, and M = -0.3 a constant, the MarginRankingLoss distance-based ranking loss function L_MR is:
L_MR(FCC(F_C)) = max(0, -y_i · P_i + M);
letting L_CE be the cross-entropy loss function and L_Reg the bounding-box regression loss function, the model is optimized for the single-sample scenario during training with a Loss function computing the position loss and the classification loss:
Loss = L_CE + L_Reg + L_MR.
A computer storage medium stores a computer program executable by a computer processor, the computer program performing the above single-sample target detection method based on the mutual global context attention mechanism.
The invention has the beneficial effects that:
1. The invention discloses a single-sample target detection method based on a mutual global context attention mechanism. It constructs a feature extraction module for extracting the feature information of the input images; a global context module for extracting the context features of the query image and the support image; a migration module for obtaining the channel-level dependency information of the support image and of the query image from those context features; a fusion module for fusing the channel-level dependency information of the support image with the features of the query image, and the channel-level dependency information of the query image with the features of the support image; a region suggestion module for generating the regions in which the target class may exist; and a fully-connected category-agnostic classifier that takes the features of the support image and the features of a region of interest of the query image and outputs the probability that they belong to the same class. Together these improve the accuracy of single-sample target detection.
2. The invention enables new classes to be classified without retraining the model.
3. The invention achieves a good detection effect under the single-sample condition.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Fig. 2 is a network configuration diagram of an embodiment of the present invention.
Fig. 3 is a network architecture diagram of an attention mechanism of an embodiment of the present invention.
Fig. 4 shows image-feature heat maps of an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Referring to fig. 1, a single sample target detection method based on a mutual global context attention mechanism according to an embodiment of the present invention includes the following steps:
S1: construct a feature extraction module and use it to acquire the features of the input query image and of the support image.
The similarity between the category names in the COCO dataset and those in the ImageNet dataset is acquired according to the ImageNet-2012 dataset and the WordNet label information of the COCO dataset. Classes whose similarity is higher than 0.3 are removed from the ImageNet-2012 dataset, so that the pre-trained model never sees the classes of the COCO dataset. The feature extraction module ResNet-50 is then trained with the filtered dataset.
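For illustration, a minimal sketch of this filtering step follows, assuming NLTK's WordNet interface with path similarity as the similarity measure; the patent specifies only the 0.3 threshold, not the exact metric, and the function names here are illustrative.

```python
# Sketch of the class-filtering step in S1. The similarity metric
# (WordNet path similarity) is an assumption; the patent only fixes
# the 0.3 removal threshold.
from nltk.corpus import wordnet as wn

def max_similarity(name_a: str, name_b: str) -> float:
    """Highest WordNet path similarity over all synset pairs of two class names."""
    best = 0.0
    for sa in wn.synsets(name_a.replace(" ", "_")):
        for sb in wn.synsets(name_b.replace(" ", "_")):
            sim = sa.path_similarity(sb)
            if sim is not None:
                best = max(best, sim)
    return best

def filter_imagenet_classes(imagenet_classes, coco_classes, threshold=0.3):
    """Keep only ImageNet classes whose similarity to every COCO class is <= threshold."""
    return [c for c in imagenet_classes
            if all(max_similarity(c, coco) <= threshold for coco in coco_classes)]
```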
S2: construct a global context module and use it to acquire the corresponding global context features from the features of the support image and of the query image generated in the previous step.
The same 1×1 convolutional layer W_k and a softmax function are used to acquire the attention weights of the support image and of the query image; the features of the support image are matrix-multiplied with the attention weight of the support image, and the features of the query image with the attention weight of the query image, yielding the global context features of both images. The global context features are expressed as:
F_GC^S = S_i · a_i,  F_GC^Q = Q_j · a_j,
where F_GC^S and F_GC^Q denote the global context features of the support image and of the query image, a_i and a_j denote the acquired attention weights, S_i denotes the support image feature acquired in step S1, and Q_j denotes the query image feature acquired in step S1.
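A minimal PyTorch sketch of such a global context module follows, consistent with the description above (a GCNet-style block: 1×1 convolution W_k, softmax over spatial positions, matrix multiplication with the feature map); class and variable names are illustrative.

```python
import torch
import torch.nn as nn

class GlobalContext(nn.Module):
    """Global context module of S2: a 1x1 conv W_k plus softmax produces
    spatial attention weights; matrix multiplication with the flattened
    feature map yields a per-channel global context vector F_GC."""
    def __init__(self, channels: int):
        super().__init__()
        self.w_k = nn.Conv2d(channels, 1, kernel_size=1)  # W_k
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        attn = self.softmax(self.w_k(x).view(b, 1, h * w))  # attention a: (B, 1, HW)
        feat = x.view(b, c, h * w)                          # (B, C, HW)
        context = torch.bmm(feat, attn.transpose(1, 2))     # (B, C, 1)
        return context.view(b, c, 1, 1)                     # F_GC
```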
S3: construct a feature migration module, apply it to the global context features of the support image and of the query image obtained in the previous step, and obtain the corresponding channel-level interdependency information.
The feature migration module acquires the channel-level dependency information contained in the global context features and enhances the feature information at the channel level. It consists of two 1×1 convolutional layers with a layer normalization function and a ReLU activation function inserted between them, and is expressed as:
D_Q = W_V2 · ReLU(LN(W_V1 · F_GC^Q)),
D_S = W_C2 · ReLU(LN(W_C1 · F_GC^S)),
where the W terms denote 1×1 convolutional layers (different indices denote layers with separate parameters), ReLU denotes the ReLU activation function, LN denotes the layer normalization function, D_Q and D_S denote the migrated global context channel-level dependencies of the query image and of the support image respectively, and F_GC^Q and F_GC^S denote the global context features of the query image and of the support image acquired in step S2.
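A corresponding sketch of one migration branch follows; the hidden (bottleneck) width is an assumption. One instance of this module would serve as the query branch (W_V1/W_V2) and a second instance as the support branch (W_C1/W_C2).

```python
import torch
import torch.nn as nn
from typing import Optional

class FeatureMigration(nn.Module):
    """Migration branch of S3: 1x1 conv -> LayerNorm -> ReLU -> 1x1 conv,
    applied to the (B, C, 1, 1) global context vector to extract the
    channel-level dependency D = W2 . ReLU(LN(W1 . F_GC))."""
    def __init__(self, channels: int, hidden: Optional[int] = None):
        super().__init__()
        hidden = hidden or channels                           # bottleneck width: assumption
        self.w1 = nn.Conv2d(channels, hidden, kernel_size=1)  # W_V1 / W_C1
        self.ln = nn.LayerNorm([hidden, 1, 1])
        self.relu = nn.ReLU(inplace=True)
        self.w2 = nn.Conv2d(hidden, channels, kernel_size=1)  # W_V2 / W_C2

    def forward(self, f_gc: torch.Tensor) -> torch.Tensor:
        return self.w2(self.relu(self.ln(self.w1(f_gc))))
```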
S4: construct a fusion module, fuse the channel-level dependency information of the query image acquired in the previous step with the support image features, and fuse the channel-level dependency information of the support image with the query image features.
The feature fusion module fuses the support image features S acquired in step S1 with the channel-level global context dependency D_Q of the query image acquired in step S3, and the query image features Q acquired in step S1 with the channel-level global context dependency D_S of the support image acquired in step S3, producing the fused support-image features S̃ and the fused query-image features Q̃.
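The patent's exact fusion formula is given only in the (omitted) figure; the sketch below assumes GCNet-style broadcast addition as the fusion operation, which should be treated as an assumption rather than the patented formula.

```python
import torch

def fuse(features: torch.Tensor, dependency: torch.Tensor) -> torch.Tensor:
    """Fuse a (B, C, H, W) feature map with a (B, C, 1, 1) channel-level
    dependency vector from the other image. Broadcast addition is an
    assumed fusion operation (as in GCNet); the patent figure may differ."""
    return features + dependency

# Mutual fusion: the query's dependency refines the support features and
# the support's dependency refines the query features, e.g.
# s_fused = fuse(s_feat, d_query); q_fused = fuse(q_feat, d_support)
```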
S5: construct a region suggestion module to acquire the regions of interest, i.e. the regions in which the target category may exist, from the fused query image.
The region suggestion module is an RPN Head: it takes the fused query image features as input and outputs a series of region-of-interest boxes and their corresponding confidences.
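As an illustration, torchvision's RPNHead can play the role of this module; the channel and anchor counts below are assumptions, and producing final RoI boxes additionally requires an anchor generator, box decoding, and NMS (torchvision's RegionProposalNetwork bundles these).

```python
import torch
from torchvision.models.detection.rpn import RPNHead

# Sketch of the S5 proposal step; in_channels=1024 and num_anchors=9
# are assumptions, not values from the patent.
rpn_head = RPNHead(in_channels=1024, num_anchors=9)
q_fused = torch.randn(1, 1024, 38, 50)         # stand-in fused query feature map
objectness, bbox_deltas = rpn_head([q_fused])  # per-level logits and box deltas
```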
S6: construct a classification module, concatenate the image features of the regions of interest acquired in the previous step with the fused support image features acquired in step S4, and classify the concatenated features to determine whether each region of interest is a region containing the target category. The model outputs the predicted position and category information of the target category in the query image, and a Loss function designed for the single-sample scenario is computed during training to optimize the model.
A category-agnostic classifier is constructed. The features R_M of each region of interest and the fused support-image features S̃ are concatenated to obtain a vector that carries both the support-image and the query-image features. This vector is input into the classifier to acquire the probability that the region of interest and the object in the support image belong to the same category, and the probability that the region of interest is background. The concatenated feature is expressed as:
F_C = [R_M, S̃],
where R_M denotes the M-th region of interest in the query image and S̃ denotes the fused support-image features.
The category-agnostic classifier consists of two fully-connected layers with a ReLU activation function between them; the first fully-connected layer maps from 2N to 512 dimensions and the second from 512 to 2 dimensions, where N denotes the dimensionality of the image features output by the feature extractor in step S1.
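A minimal sketch of this classifier follows, assuming the RoI and support features have already been pooled into N-dimensional vectors.

```python
import torch
import torch.nn as nn

class ClassAgnosticClassifier(nn.Module):
    """S6 classifier: FC(2N -> 512) -> ReLU -> FC(512 -> 2), applied to the
    concatenation F_C = [R_M, S_fused] of an RoI feature vector and the
    fused support feature vector."""
    def __init__(self, n: int):
        super().__init__()
        self.fc1 = nn.Linear(2 * n, 512)
        self.relu = nn.ReLU(inplace=True)
        self.fc2 = nn.Linear(512, 2)   # same-category vs. background

    def forward(self, roi_feat: torch.Tensor, support_feat: torch.Tensor) -> torch.Tensor:
        f_c = torch.cat([roi_feat, support_feat], dim=1)   # (B, 2N)
        return self.fc2(self.relu(self.fc1(f_c)))          # (B, 2) logits
```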
The Loss function used in training is expressed as:
Loss = L_CE + L_Reg + L_MR,
where the first two terms denote the cross-entropy loss function and the bounding-box regression loss function respectively, and L_MR denotes the MarginRankingLoss distance-based ranking loss function:
L_MR(FCC(F_C)) = max(0, -y_i · P_i + M),
where FCC(F_C) denotes the output of the fully-connected layers, y_i the true label of the i-th sample, P_i the class score output by the model, and M a constant taken as M = -0.3. This Loss function, designed for the single-sample scenario, is computed during training to optimize the model.
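A sketch of this Loss computation follows; batch averaging and the {-1, +1} label convention of margin ranking losses are assumptions not spelled out in the text.

```python
import torch

def margin_ranking_term(p: torch.Tensor, y: torch.Tensor, m: float = -0.3) -> torch.Tensor:
    """L_MR = max(0, -y_i * P_i + M) with M = -0.3; y_i in {-1, +1} is the
    true label and P_i the class score output by the model. Averaging over
    the batch is an assumption."""
    return torch.clamp(-y * p + m, min=0.0).mean()

# Total training loss of the patent: Loss = L_CE + L_Reg + L_MR, where the
# first two are the standard cross-entropy and box-regression losses, e.g.
# loss = l_ce + l_reg + margin_ranking_term(p, y)
```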
The embodiment of the present invention verifies the method under the single-sample condition using the PASCAL VOC dataset. The VOC categories are divided as follows: plant, sofa, tv, car, bottle, boat, chair, person, bus, train, horse, bike, dog, bird, mbike and table are used as training classes to train the model, while cow, sheep, cat and aero are used as test classes to evaluate the trained model. During testing, a support image and a target image that may contain the support image's class are input. The category labels and the final target boxes output by the model are compared with the ground-truth boxes, and AP is used as the evaluation criterion. Throughout this process the model never sees the test classes during training, and at test time only the single input support image contains the test class.
The model is trained with an SGD optimizer with momentum 0.9. The initial learning rate is set to 10^-1 and is decayed by a factor of 0.1 every 4 epochs. The model is trained on the PyTorch platform using two GTX 2080 graphics cards. Table 1 reports the experimental results of the model under the single-sample condition, evaluated by the AP standard provided by VOC.
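The training configuration described above corresponds to the following PyTorch sketch; `model`, the epoch count, and `train_one_epoch` are hypothetical placeholders.

```python
import torch
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR

model = torch.nn.Linear(8, 2)   # stand-in for the assembled detector
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)  # momentum 0.9, lr 1e-1
scheduler = StepLR(optimizer, step_size=4, gamma=0.1)            # x0.1 every 4 epochs

for epoch in range(16):          # epoch count is an assumption
    # train_one_epoch(model, optimizer)  # hypothetical per-epoch training loop
    scheduler.step()
```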
The single-sample target detection methods selected for comparison are SiamFC, SiamRPN, CompNet and OSOD. CompNet is based on Faster R-CNN and directly replaces the classifier in Faster R-CNN with a metric-based classifier. SiamFC and SiamRPN (the latter performing better than CompNet) were designed to solve the visual tracking problem rather than single-sample target detection. OSOD proposed an attention mechanism for the field of single-sample target detection. The present invention provides a novel attention mechanism that improves the detection accuracy under the single-sample condition. As shown in Fig. 4, the first row is the support image, the second row is the heat map of the query image without the attention mechanism of the present invention, and the third row is the feature map after the attention mechanism of the present invention is applied. It can be seen from Fig. 4 that, after applying the attention mechanism of the present invention, the attention is clearly focused on the region of the target category.
TABLE 1: Comparison of the present invention with four existing algorithms.
As can be seen from the experimental results in the table, the invention has obvious advantages over the other four methods.
The above embodiments are only used to illustrate the design idea and features of the present invention, and their purpose is to enable those skilled in the art to understand the content of the present invention and implement it accordingly; the protection scope of the present invention is not limited to the above embodiments. Therefore, all equivalent changes and modifications made in accordance with the principles and concepts disclosed herein fall within the protection scope of the present invention.