Semi-supervised three-dimensional shape recognition method based on consistency training

Document No. 8652 · Published 2021-09-17

1. The semi-supervised three-dimensional shape recognition method based on consistency training is characterized by comprising the following steps of:

A. preparing a three-dimensional shape data set comprising a labeled data set and an unlabeled data set;

B. adding a small perturbation to the unlabeled data to obtain a perturbed version of the unlabeled data set;

C. designing a consistency constraint branch that encourages the model to make consistent predictions on similar samples, improving the generalization capability of the model;

D. designing a pseudo label generation branch to generate pseudo labels for the unlabeled data, and proposing a consistency filtering mechanism to filter out pseudo labels the model is uncertain about, thereby expanding the labeled data set;

E. training the model by combining the labeled data and the unlabeled data to obtain a trained model;

F. carrying out three-dimensional shape recognition using the trained model, with the model's prediction taken as the final recognition result.

2. The semi-supervised three-dimensional shape recognition method based on consistency training as recited in claim 1, wherein in step A, the preparing of the three-dimensional shape data set further comprises the following sub-steps:

A1. preparing a labeled data set, denoted D_l = {(x_i, y_i) | i ∈ {1, ..., m}}, where x_i ∈ R^{N×F} represents a three-dimensional shape composed of N points with F-dimensional features, y_i ∈ {1, ..., C} represents the label of x_i, C represents the total number of shape categories contained in the data set, and m represents the number of labeled samples;

A2. preparing an unlabeled data set, denoted D_u = {x_j | j ∈ {1, ..., n}}, where x_j ∈ R^{N×F} represents a three-dimensional shape composed of N points with F-dimensional features, and n represents the number of unlabeled samples.

3. The semi-supervised three-dimensional shape recognition method based on consistency training as recited in claim 1, wherein in step B, the adding of a small perturbation to the unlabeled data to obtain a perturbed version of the unlabeled data set further comprises the following sub-steps:

B1. adding a small perturbation r to the xyz coordinates of the three-dimensional shape, deforming it slightly without changing its category semantics; because three-dimensional shapes differ in size, adding a perturbation of the same magnitude to all shapes would severely deform some of them, so the perturbation is scaled by the radius R of each shape to obtain the perturbed unlabeled shape x'_j, which is calculated as follows:

x'j=xj+R·r (1)。

4. The semi-supervised three-dimensional shape recognition method based on consistency training as recited in claim 1, wherein in step C, the designing of the consistency constraint branch to encourage the model to make consistent predictions on similar samples further comprises the following sub-steps:

C1. because the number of labeled samples is limited, a consistency constraint branch is designed to improve the generalization capability of the model; the branch requires the model to predict the same category for similar samples, thereby smoothing the model; for the original unlabeled point cloud data x_j and the perturbed unlabeled point cloud data x'_j, the predictions of the model should be consistent; the model predicts the original unlabeled data x_j to obtain a predicted distribution f(x_j), and predicts the perturbed unlabeled data x'_j to obtain a predicted distribution f(x'_j); the consistency constraint loss is calculated as follows:

l_con = (1/n) · Σ_j KL(f(x_j) ‖ f(x'_j))  (2)

wherein KL is the Kullback-Leibler divergence, used to measure the difference between the two predicted distributions.

5. The semi-supervised three-dimensional shape recognition method based on consistency training as recited in claim 1, wherein in step D, the designing of the pseudo label generation branch to generate pseudo labels for unlabeled data, with a consistency filtering mechanism proposed to filter out pseudo labels the model is uncertain about and thereby expand the labeled data set, further comprises the following sub-steps:

D1. using the current model to predict the unlabeled data x_j, obtaining f(x_j); the category with the highest probability in the predicted distribution is used as the pseudo label of the data, y_p = argmax(f(x_j));

D2. a consistency filtering mechanism is proposed to filter out pseudo labels the model is uncertain about: original point cloud data are added to a candidate set only when the model makes consistent predictions on the original and the perturbed version; the current model predicts the perturbed unlabeled data x'_j to obtain f(x'_j), and if argmax(f(x_j)) = argmax(f(x'_j)), the original data x_j and its pseudo label y_p are added to the candidate set;

D3. selecting from the candidate set the unlabeled data whose pseudo labels have confidence greater than a threshold, and adding them to the final pseudo label data set D_p;

D4. the data in the pseudo label data set D_p are used together with the labeled data to compute the supervision loss during training, which is calculated as follows:

l_sup = (1/m) · Σ_i CE(f(x_i), y_i) + β · (1/|D_p|) · Σ_j CE(f(x_j), y_p)  (3)

where CE denotes the cross-entropy loss and β is a hyperparameter representing the relative weight of the supervision loss on the pseudo label data.

6. The semi-supervised three-dimensional shape recognition method based on consistency training as recited in claim 1, wherein in step E, the training of the model by combining labeled data and unlabeled data to obtain a trained model further comprises the following sub-steps:

E1. the total loss function of the model is the sum of the consistency loss function and the supervision loss function, and the calculation method is as follows:

l_sum = l_sup + α · l_con  (4)

wherein α is a hyperparameter;

E2. and training by combining the consistency constraint branch and the pseudo label generation branch to obtain a trained model for three-dimensional shape recognition.

Background

Three-dimensional vision research plays an important role in applications such as autonomous driving, augmented reality, and robotics. With the rapid development of deep learning, researchers have proposed many methods for three-dimensional shape recognition tasks. Currently, three-dimensional shape recognition methods fall mainly into three types. The first is the multi-view based method, which projects a point cloud onto multiple two-dimensional views and then processes them directly with a classical two-dimensional convolutional neural network. Each projected view is processed independently by the network, and the features of all generated views are then fused by a view-pooling layer. The multi-view approach may lose critical information due to self-occlusion. The second is the voxel-based method, which voxelizes the point cloud into a regular three-dimensional grid and then processes it with three-dimensional convolution and pooling operations; this consumes significant time and space resources, and the sparsity of the three-dimensional grid also wastes resources. In recent years, much attention has been paid to point cloud based methods, which directly take the raw point cloud data as input. Among them, the method proposed by Qi, C. et al. (Qi, C., Hao Su, Kaichun Mo, et al. "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation." 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017): 77-85.) was a pioneering work in processing raw point cloud data directly; it encodes each point individually and finally uses global pooling to aggregate the feature information of all points, but it cannot capture the local details of a three-dimensional object.
Therefore, Qi, C. et al. (Qi, C., L. Yi, Hao Su, et al. "PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space." NIPS (2017).) proposed a hierarchical neural network to extract local features. Wang, Yue et al. (Wang, Yue, Yongbin Sun, Z. Liu, et al. "Dynamic Graph CNN for Learning on Point Clouds." ACM Transactions on Graphics (TOG) 38 (2019): 1-12.) proposed an edge convolution operation that dynamically updates the local groupings during convolution. The above methods achieve good performance, but they are all based on a fully supervised setup and require a large amount of labeled data.

The success of point cloud research is mainly attributable to powerful convolutional neural networks and large amounts of labeled point cloud data. While most methods focus on increasing the accuracy of the model itself, obtaining large-scale labeled data sets is also a difficult problem. Advances in depth sensors have made acquiring point cloud data more convenient and cheaper, but data labeling requires substantial manpower and strong domain expertise, so acquiring labeled point cloud data remains very expensive.

Semi-supervised learning addresses this problem by utilizing a small amount of labeled data and a large amount of unlabeled data. In recent years, semi-supervised learning has achieved great success in two-dimensional image processing, reaching performance comparable to supervised methods. However, semi-supervised methods for three-dimensional point cloud classification remain scarce. The method proposed by Song, Mofei et al. (Song, Mofei, Y. Liu and Xiao Fan Liu. "Semi-Supervised 3D Shape Recognition via Multimodal Deep Co-training." Computer Graphics Forum 39 (2020): n. pag.) was the first semi-supervised method for three-dimensional shape classification. It performs co-training with a multi-modal network and must simultaneously train two classification networks, one on point cloud data and one on multi-view data; it therefore requires two data representations for training, which makes acquiring the training set more difficult.

Disclosure of Invention

The invention aims to solve the problems in the prior art by providing a semi-supervised three-dimensional shape recognition method based on consistency training, which expands a limited labeled data set, trains a depth model by combining a consistency constraint branch and a pseudo label generation branch, and uses the trained model to classify three-dimensional shapes.

The invention comprises the following steps:

A. preparing a three-dimensional shape data set comprising a labeled data set and an unlabeled data set;

B. adding a small perturbation to the unlabeled data to obtain a perturbed version of the unlabeled data set;

C. designing a consistency constraint branch that encourages the model to make consistent predictions on similar samples, improving the generalization capability of the model;

D. designing a pseudo label generation branch to generate pseudo labels for the unlabeled data, and proposing a consistency filtering mechanism to filter out pseudo labels the model is uncertain about, thereby expanding the labeled data set;

E. training the model by combining the labeled data and the unlabeled data to obtain a trained model;

F. carrying out three-dimensional shape recognition using the trained model, with the model's prediction taken as the final recognition result.

In step A, the preparing of the three-dimensional shape data set further comprises the following sub-steps:

A1. preparing a labeled data set, denoted D_l = {(x_i, y_i) | i ∈ {1, ..., m}}, where x_i ∈ R^{N×F} represents a three-dimensional shape composed of N points with F-dimensional features, y_i ∈ {1, ..., C} represents the label of x_i, C represents the total number of shape categories contained in the data set, and m represents the number of labeled samples;

A2. preparing an unlabeled data set, denoted D_u = {x_j | j ∈ {1, ..., n}}, where x_j ∈ R^{N×F} represents a three-dimensional shape composed of N points with F-dimensional features, and n represents the number of unlabeled samples.

In step B, the adding of a small perturbation to the unlabeled data to obtain a perturbed version of the unlabeled data set further comprises the following sub-steps:

B1. adding a small perturbation r to the xyz coordinates of the three-dimensional shape, deforming it slightly without changing its category semantics; because three-dimensional shapes differ in size, adding a perturbation of the same magnitude to all shapes would severely deform some of them, so the perturbation is scaled by the radius R of each shape to obtain the perturbed unlabeled shape x'_j, which is calculated as follows:

x'_j = x_j + R · r  (1)

In step C, the designing of the consistency constraint branch to encourage the model to make consistent predictions on similar samples further comprises the following sub-steps:

C1. because the number of labeled samples is limited, a consistency constraint branch is designed to improve the generalization capability of the model; the branch requires the model to predict the same category for similar samples, thereby smoothing the model; for the original unlabeled point cloud data x_j and the perturbed unlabeled point cloud data x'_j, the predictions of the model should be consistent; the model predicts the original unlabeled data x_j to obtain a predicted distribution f(x_j), and predicts the perturbed unlabeled data x'_j to obtain a predicted distribution f(x'_j); the consistency constraint loss is calculated as follows:

l_con = (1/n) · Σ_j KL(f(x_j) ‖ f(x'_j))  (2)

wherein KL is the Kullback-Leibler divergence, used to measure the difference between the two predicted distributions.

In step D, the designing of the pseudo label generation branch to generate pseudo labels for unlabeled data, with a consistency filtering mechanism proposed to filter out pseudo labels the model is uncertain about and thereby expand the labeled data set, further comprises the following sub-steps:

D1. using the current model to predict the unlabeled data x_j, obtaining f(x_j); the category with the highest probability in the predicted distribution is used as the pseudo label of the data, y_p = argmax(f(x_j));

D2. a consistency filtering mechanism is proposed to filter out pseudo labels the model is uncertain about: original point cloud data are added to a candidate set only when the model makes consistent predictions on the original and the perturbed version; the current model predicts the perturbed unlabeled data x'_j to obtain f(x'_j), and if argmax(f(x_j)) = argmax(f(x'_j)), the original data x_j and its pseudo label y_p are added to the candidate set;

D3. selecting from the candidate set the unlabeled data whose pseudo labels have confidence greater than a threshold, and adding them to the final pseudo label data set D_p;

D4. the data in the pseudo label data set D_p are used together with the labeled data to compute the supervision loss during training, which is calculated as follows:

l_sup = (1/m) · Σ_i CE(f(x_i), y_i) + β · (1/|D_p|) · Σ_j CE(f(x_j), y_p)  (3)

where CE denotes the cross-entropy loss and β is a hyperparameter representing the relative weight of the supervision loss on the pseudo label data.

In step E, the training of the model by combining the labeled data and the unlabeled data to obtain the trained model further comprises the following sub-steps:

E1. the total loss function of the model is the sum of the consistency loss function and the supervision loss function, and the calculation method is as follows:

l_sum = l_sup + α · l_con  (4)

wherein α is a hyperparameter;

E2. and training by combining the consistency constraint branch and the pseudo label generation branch to obtain a trained model for three-dimensional shape recognition.

The method establishes a depth model comprising a consistency constraint branch and a pseudo label generation branch. First, a three-dimensional shape data set is prepared, including a labeled data set and an unlabeled data set, and a small perturbation is added to the unlabeled data to obtain a perturbed version of the unlabeled data set. The designed consistency constraint branch improves the generalization capability of the model. The designed pseudo label generation branch generates pseudo labels for the unlabeled data, and a consistency filtering mechanism filters out pseudo labels the model is uncertain about, thereby expanding the labeled data set. The depth model is trained by combining the consistency constraint branch and the pseudo label generation branch, and the trained model performs three-dimensional shape classification.

Drawings

Fig. 1 is a schematic diagram of a semi-supervised three-dimensional shape recognition framework according to an embodiment of the present invention.

Fig. 2 compares the semi-supervised method of the present invention with the supervised method for labeled data of different scales on the three-dimensional shape data set ModelNet40.

Detailed Description

The method of the present invention is described in detail with reference to the accompanying drawings and embodiments, which illustrate the technical solution of the present invention; the invention is not limited to the following embodiments.

The method first prepares a labeled three-dimensional shape data set and an unlabeled three-dimensional shape data set, and adds a small perturbation to the unlabeled data to obtain a perturbed version of the unlabeled three-dimensional shape data set. The consistency constraint branch encourages the model to make consistent predictions on the original unlabeled shape and its perturbed version, thereby improving the generalization capability of the model. The pseudo label generation branch generates pseudo labels for the unlabeled data, and a consistency filtering mechanism is proposed to filter out pseudo labels the model is uncertain about, thereby expanding the limited labeled data set. The model is trained by combining the labeled data and the unlabeled data to obtain a trained model for three-dimensional shape recognition.

Referring to fig. 1 and 2, an implementation of an embodiment of the present invention includes the steps of:

1. three-dimensional shape data sets, including labeled and unlabeled data sets, are prepared.

A. The three-dimensional shape benchmark data set ModelNet40 (Wu, Zhirong, Shuran Song, A. Khosla, et al. "3D ShapeNets: A Deep Representation for Volumetric Shapes." 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015): 1912-1920.) is used. ModelNet40 contains 12311 shapes in 40 classes, of which 9843 shapes are used for training and 2468 shapes are used for testing.

B. 10% of the data and their labels are randomly sampled from the training set as labeled data, denoted D_l = {(x_i, y_i) | i ∈ {1, ..., m}}, where x_i ∈ R^{1024×3} represents a three-dimensional shape consisting of 1024 points with only xyz coordinate information, y_i ∈ {1, ..., C} represents the label of x_i, C represents the total number of shape categories contained in the data set, and m represents the number of labeled samples.

C. All data in the training set are used as unlabeled data, denoted D_u = {x_j | j ∈ {1, ..., n}}, where x_j ∈ R^{1024×3} represents a three-dimensional shape consisting of 1024 points with only xyz coordinate information, and n represents the number of unlabeled samples.
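Steps B and C above (a random 10% labeled sample plus the full training set as the unlabeled pool) can be sketched in a few lines of numpy. The function name, the fixed seed, and the toy arrays below are illustrative assumptions, not part of the patent text:

```python
import numpy as np

def make_semi_supervised_split(points, labels, labeled_fraction=0.1, seed=0):
    """Split a training set into a small labeled subset and the full unlabeled pool.

    points: (n, N, 3) array of point clouds; labels: (n,) array of class ids.
    Following the embodiment, D_l is a random 10% sample (labels kept) and
    D_u is the entire training set with labels discarded.
    """
    rng = np.random.default_rng(seed)
    n = len(points)
    m = max(1, int(round(labeled_fraction * n)))
    labeled_idx = rng.choice(n, size=m, replace=False)
    D_l = (points[labeled_idx], labels[labeled_idx])  # labeled data
    D_u = points                                      # all shapes, labels dropped
    return D_l, D_u

# toy example: 50 "shapes" of 1024 points each, 40 classes
pts = np.zeros((50, 1024, 3))
lbl = np.arange(50) % 40
(Dl_x, Dl_y), Du = make_semi_supervised_split(pts, lbl)
```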

2. And adding small disturbance to the unlabeled data to obtain a perturbed version of the unlabeled data set.

A. A virtual adversarial perturbation (Miyato, Takeru, S. Maeda, Masanori Koyama and S. Ishii. "Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning." IEEE Transactions on Pattern Analysis and Machine Intelligence 41 (2019): 1979-1993.) is used as the added perturbation r.

B. The small perturbation r is added to the unlabeled three-dimensional point cloud data x_j, slightly deforming the shape in its xyz coordinates without changing its category semantics. Because three-dimensional shapes differ in size, adding a virtual adversarial perturbation of the same magnitude to all shapes could change the category semantics of some of them, so the perturbation is scaled by the radius R of each shape, finally obtaining the perturbed unlabeled point cloud data x'_j, which is calculated as follows:

x'_j = x_j + R · r  (1)
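Equation (1) can be sketched as follows. Defining the radius R as the maximum distance of any point from the shape's centroid is an assumption for illustration; the patent text does not specify how R is computed:

```python
import numpy as np

def perturb_shape(x, r):
    """Apply a small perturbation r scaled by the shape radius R, as in eq. (1).

    x: (N, 3) point cloud; r: (N, 3) small perturbation.
    R is taken here as the maximum distance of any point from the centroid,
    so larger shapes receive proportionally larger (but still small) offsets.
    """
    centroid = x.mean(axis=0)
    R = np.linalg.norm(x - centroid, axis=1).max()  # assumed radius definition
    return x + R * r

# toy shape: three points whose centroid is the origin, so R = 1
x = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
x_pert = perturb_shape(x, np.full((3, 3), 0.1))
```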

3. and designing a consistency constraint branch.

A. Because the number of labeled samples is limited, training directly on the labeled data alone easily overfits the model. Therefore, a consistency constraint branch is designed to improve the generalization capability of the model. The branch requires the model to predict similar samples into the same category, smoothing the model. For unlabeled data x_j, the model predicts f(x_j); for the perturbed unlabeled data x'_j, the model predicts f(x'_j). The consistency loss function is calculated as follows:

l_con = (1/n) · Σ_j KL(f(x_j) ‖ f(x'_j))  (2)

where KL is the Kullback-Leibler divergence.
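A minimal numpy sketch of the consistency constraint branch, computing the KL divergence between the predicted distributions on the original and perturbed shapes. Applying a softmax to raw logits and averaging over the batch are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_orig, logits_pert, eps=1e-12):
    """KL(f(x_j) || f(x'_j)), averaged over a batch of unlabeled shapes."""
    p = softmax(logits_orig)  # prediction on the original shape
    q = softmax(logits_pert)  # prediction on the perturbed shape
    kl = np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)
    return kl.mean()

# identical predictions give zero loss; diverging predictions give a positive loss
logits = np.array([[1.0, 2.0, 3.0], [0.5, 0.5, 0.0]])
loss_same = consistency_loss(logits, logits)
loss_diff = consistency_loss(logits, logits + np.array([2.0, 0.0, 0.0]))
```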

4. And designing a pseudo label generation branch to generate pseudo labels for label-free data, and providing a consistency filtering mechanism to filter the pseudo labels with uncertain models so as to realize the expansion of the labeled data set.

A. The current model predicts the unlabeled data x_j, obtaining f(x_j); the category with the highest probability in the predicted distribution is taken as the pseudo label of the data, y_p = argmax(f(x_j)).

B. Because the performance of the model is poor at the beginning, many pseudo labels are generated incorrectly, and directly training on a large number of false labels would amount to training on noise. Therefore, a consistency filtering mechanism is proposed to filter out pseudo labels the model is uncertain about: original point cloud data are added to the candidate set only when the model makes consistent predictions on the original and the perturbed version. The current model predicts the perturbed unlabeled data x'_j to obtain f(x'_j); if argmax(f(x_j)) = argmax(f(x'_j)), the original data x_j and its pseudo label y_p are added to the candidate set.

C. Then, the unlabeled data whose pseudo labels have confidence greater than a threshold are selected from the candidate set and added to the final pseudo label data set D_p.

D. The data in the pseudo label data set D_p are used together with the labeled data to compute the supervision loss during training, which is calculated as follows:

l_sup = (1/m) · Σ_i CE(f(x_i), y_i) + β · (1/|D_p|) · Σ_j CE(f(x_j), y_p)  (3)

where CE denotes the cross-entropy loss and β is a hyperparameter representing the relative weight of the supervision loss on the pseudo label data.
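Steps A-C of the pseudo label branch (argmax pseudo labels, the consistency filter, and the confidence threshold) might look like the following in numpy. The function name and the threshold value tau are illustrative assumptions:

```python
import numpy as np

def select_pseudo_labels(probs_orig, probs_pert, tau=0.8):
    """Pseudo label generation with the consistency filter.

    probs_orig / probs_pert: (n, C) predicted distributions f(x_j) and f(x'_j).
    A shape is kept only if (i) the argmax agrees on the original and the
    perturbed version (consistency filter) and (ii) the confidence of the
    pseudo label exceeds the threshold tau.
    Returns the indices of the kept shapes and their pseudo labels y_p.
    """
    y_p = probs_orig.argmax(axis=1)
    consistent = y_p == probs_pert.argmax(axis=1)  # step B: consistency filter
    confident = probs_orig.max(axis=1) > tau       # step C: confidence threshold
    keep = np.where(consistent & confident)[0]
    return keep, y_p[keep]

# shape 0: consistent and confident; shape 1: low confidence; shape 2: inconsistent
probs_orig = np.array([[0.9, 0.1], [0.6, 0.4], [0.9, 0.1]])
probs_pert = np.array([[0.8, 0.2], [0.7, 0.3], [0.2, 0.8]])
keep, y_p = select_pseudo_labels(probs_orig, probs_pert, tau=0.8)
```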

5. The model is trained in conjunction with labeled and unlabeled data.

A. The total loss function of the model is the sum of the consistency loss function and the supervision loss function, and the calculation method is as follows:

l_sum = l_sup + α · l_con  (4)

where α is a hyperparameter.

B. And training by combining the consistency constraint branch and the pseudo label generation branch to obtain a trained model for three-dimensional shape recognition.
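Combining the two branches, the total loss of equation (4) can be sketched as below, assuming the supervised term is a cross-entropy over labeled and pseudo-labeled predictions (weighted by β) and the consistency term is the batch-averaged KL divergence; all helper names are illustrative:

```python
import numpy as np

def cross_entropy(probs, labels, eps=1e-12):
    """Mean negative log-probability of the true class."""
    return -np.log(probs[np.arange(len(labels)), labels] + eps).mean()

def total_loss(probs_labeled, y, probs_pseudo, y_p, probs_u, probs_u_pert,
               alpha=1.0, beta=1.0, eps=1e-12):
    """l_sum = l_sup + alpha * l_con, as in equation (4).

    The supervised loss mixes labeled and pseudo-labeled data (weighted by beta);
    the consistency term is the KL divergence between the predicted distributions
    on the original and perturbed unlabeled shapes.
    """
    l_sup = cross_entropy(probs_labeled, y) + beta * cross_entropy(probs_pseudo, y_p)
    kl = np.sum(probs_u * (np.log(probs_u + eps) - np.log(probs_u_pert + eps)), axis=1)
    return l_sup + alpha * kl.mean()

# perfect predictions and identical unlabeled distributions give (near-)zero loss
probs_l = np.array([[1.0, 0.0]])
probs_u = np.array([[0.5, 0.5]])
loss = total_loss(probs_l, np.array([0]), probs_l, np.array([0]), probs_u, probs_u)
```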

Table 1 shows the results of the semi-supervised method proposed by the present invention compared with other methods on the three-dimensional shape data set ModelNet40. The method of the present invention achieves higher accuracy than the other methods.

TABLE 1

In table 1, other methods are as follows:

OctNet corresponds to the method proposed by Riegler, G. et al. (Riegler, G., Ali Osman Ulusoy and Andreas Geiger. "OctNet: Learning Deep 3D Representations at High Resolutions." 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017): 6620-6629.)

MVCNN corresponds to the method proposed by Su, Hang, et al. (Su, Hang, Subhransu Maji, E. Kalogerakis, et al. "Multi-view Convolutional Neural Networks for 3D Shape Recognition." 2015 IEEE International Conference on Computer Vision (ICCV) (2015): 945-953.)

PointNet corresponds to the method proposed by Qi, C. et al. (Qi, C., Hao Su, Kaichun Mo, et al. "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation." 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017): 77-85.)

PointNet++ corresponds to the method proposed by Qi, C. et al. (Qi, C., L. Yi, Hao Su, et al. "PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space." NIPS (2017).)

DGCNN corresponds to the method proposed by Wang, Yue et al. (Wang, Yue, Yongbin Sun, Z. Liu, et al. "Dynamic Graph CNN for Learning on Point Clouds." ACM Transactions on Graphics (TOG) 38 (2019): 1-12.)

FoldingNet corresponds to the method proposed by Yang, Y. et al. (Yang, Y., Chen Feng, Y. Shen, et al. "FoldingNet: Point Cloud Auto-Encoder via Deep Grid Deformation." 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (2018): 206-215.)

PointGLR corresponds to the method proposed by Rao, Yongming et al. (Rao, Yongming, Jiwen Lu and J. Zhou. "Global-Local Bidirectional Reasoning for Unsupervised Representation Learning of 3D Point Clouds." 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2020): 5375-5384.)

MDC corresponds to the method proposed by Song, Mofei et al. (Song, Mofei, Y. Liu and Xiao Fan Liu. "Semi-Supervised 3D Shape Recognition via Multimodal Deep Co-training." Computer Graphics Forum 39 (2020): n. pag.)

The present invention requires only a single data representation, the point cloud. To reduce annotation cost, only 10% of the labeled data is used. To avoid overfitting on the limited labeled data, the invention proposes a consistency constraint branch to improve the generalization capability of the model. In addition, pseudo labels are generated for the unlabeled data to augment the existing labeled data. Under the combined effect of the consistency constraint and the pseudo labels, the unlabeled point cloud data is better utilized, and the requirement of the classification model for labeled data is effectively reduced.
