PolSAR image classification method based on multi-model joint learning network
1. A PolSAR image classification method based on a multi-model joint learning network is characterized by comprising the following steps:
(1) inputting a PolSAR image to be classified, and expanding the label matrix of the PolSAR image with a semi-supervised fuzzy clustering algorithm to obtain pseudo class labels for the unlabeled pixels in the PolSAR image to be classified; the expanded label matrix is denoted l_p;
(2) extracting vectors from the coherency matrix T_i of each pixel in the PolSAR image, i.e. first extracting the real and imaginary parts of the upper triangular elements of T_i to form a 9-dimensional vector x_i1, then extracting the upper triangular elements of T_i to form a 6-dimensional vector x_i2; then normalizing each dimension of x_i1 and x_i2 with a z-score function to obtain a normalized 9-dimensional vector x'_i1 and a normalized 6-dimensional vector x'_i2;
(3) applying the same sliding window operation to x'_i1, x'_i2 and l_p to obtain three corresponding data sets s_1, s_2 and s_l; the three data sets are then shuffled randomly in the same manner, the first 5% of the data are selected from s_1 and s_2 to obtain training sample sets s_1^T and s_2^T, and the first 5% of the data are selected from s_l to obtain the label matrix s_l^T corresponding to s_1^T and s_2^T;
(4) randomly initializing a 9-dimensional convolution kernel and a 6-dimensional convolution kernel with a Gaussian distribution of mean 0 and standard deviation 0.02, and constructing a full convolution neural network FCN model and a complex-valued full convolution neural network CVFCN model from the initialized 9-dimensional and 6-dimensional convolution kernels respectively, wherein the FCN model and the CVFCN model each comprise 7 convolution layers, and a pooling layer, a Relu activation function layer and a Batch normalization layer are cascaded in sequence behind each of the first four convolution layers;
(5) constructing dilated convolution layers, namely cascading in sequence dilated convolutions with dilation factors of 1, 2 and 3, and replacing the convolution layers in the first four layers of the CVFCN with the dilated convolution layers to form a complex-valued stacked dilated full convolution neural network CVSDFCN model;
(6) taking the FCN model, the CVFCN model and the CVSDFCN model as sub-models of a joint learning network; inputting the 9-dimensional training sample set s_1^T and its corresponding label matrix s_l^T into the FCN model, and inputting the 6-dimensional training sample set s_2^T and its corresponding label matrix s_l^T into both the CVFCN and CVSDFCN models; performing feature learning on the three sub-models with forward propagation and backward propagation algorithms to obtain three different expected result matrices x_a, x_b, x_c;
(7) fusing the three obtained expected result matrices x_a, x_b, x_c to obtain a 3-dimensional matrix M, initializing a 3×1 weight N, and multiplying M by N to obtain the final classification result.
2. The method of claim 1, wherein the label matrix of the PolSAR image is expanded with the semi-supervised fuzzy clustering algorithm in (1) as follows:
(1a) setting parameters: the classification number is C, and the maximum iteration number is 50;
(1b) randomly selecting 1% of pixels from the labeled pixels as supervision information, and obtaining a supervised membership matrix of the PolSAR image from the supervision information,
where the supervised membership is given for the pixels carrying supervision information and is zero for the remaining pixels outside the supervision information;
(1c) constructing an intra-class compactness objective function through maximum entropy regularization, and introducing the supervision information into the objective function to obtain the objective function J after the supervision information is introduced:
where d(x_i, v_j) denotes the Wishart distance between the ith pixel x_i and the jth cluster center v_j, the parameter λ denotes the fuzziness factor, u_ij denotes the membership degree of x_i to v_j, and ū_ij denotes the supervised membership degree of x_i to v_j, with i ∈ {1, ..., N}, j ∈ {1, ..., C}, and λ > 0;
(1d) taking the partial derivatives of formula 2) with respect to the membership degree u_ij and the cluster center v_j respectively, to obtain the update formulas of u_ij and v_j:
(1e) updating the membership degree u_ij and the cluster center v_j according to formula 3) and formula 4) respectively; when the number of iterations reaches 50, the finally updated u'_ij and v'_j are obtained;
(1f) clustering all pixels according to the finally updated u'_ij and v'_j to obtain a result matrix l';
(1g) according to the result matrix l', setting pseudo class labels for the unlabeled pixels in the manual label matrix l to obtain the expanded label matrix l_p.
3. The method of claim 1, wherein the coherency matrix T_i and the 9-dimensional vector x_i1 in (2) are expressed as follows:
x_i1 = [T_i^11, T_i^22, T_i^33, Re(T_i^12), Re(T_i^13), Re(T_i^23), Im(T_i^12), Im(T_i^13), Im(T_i^23)],
where Re(·) denotes the real part of a complex number and Im(·) denotes its imaginary part.
4. The method of claim 1, wherein the 6-dimensional vector x_i2 in (2) is expressed as follows:
x_i2 = [T_i^11, T_i^12, T_i^13, T_i^22, T_i^23, T_i^33],
where T_i^11, T_i^22, T_i^33 denote the main diagonal elements of the coherency matrix T_i, and T_i^12, T_i^13, T_i^23 denote its upper off-diagonal elements.
5. The method of claim 1, wherein the sliding window operation in (3) is formulated as follows:
Num = (ceil((H-L)/S)+1)·(ceil((W-L)/S)+1)
where ceil denotes the round-up function, Num is the number of windows, H and W denote the height and width of the input PolSAR image respectively, L is the size of the sliding window, and S denotes the sliding step.
6. The method according to claim 1, wherein the full convolution neural network (FCN) model in (6) has the following structure:
9-dimensional convolution layer → first pooling layer → first Relu activation function layer → first Batch normalization layer → 60-dimensional convolution layer → second pooling layer → second Relu activation function layer → second Batch normalization layer → 120-dimensional convolution layer → third pooling layer → third Relu activation function layer → third Batch normalization layer → 240-dimensional convolution layer → fourth pooling layer → fourth Relu activation function layer → fourth Batch normalization layer → 240-dimensional convolution layer → 1024-dimensional convolution layer;
the size of the convolution kernel in each convolution layer is 3 x 3, and the step size of each pooling layer is 2.
7. The method according to claim 1, wherein the CVFCN model in (6) has the following structure:
6-dimensional convolution layer → 1st pooling layer → 1st Relu activation function layer → 1st Batch normalization layer → 60-dimensional convolution layer → 2nd pooling layer → 2nd Relu activation function layer → 2nd Batch normalization layer → 120-dimensional convolution layer → 3rd pooling layer → 3rd Relu activation function layer → 3rd Batch normalization layer → 240-dimensional convolution layer → 4th pooling layer → 4th Relu activation function layer → 4th Batch normalization layer → 240-dimensional convolution layer → 1024-dimensional convolution layer;
the size of the convolution kernel in each convolution layer is 3 x 3, and the step size of each pooling layer is 2.
8. The method according to claim 1, wherein the three different expected result matrices x_a, x_b, x_c in (7) are fused in a Stacking manner to obtain a 3-dimensional matrix M, expressed as M(i, j, 3):
where i, j index the ith row and jth column of the three different expected result matrices x_a, x_b, x_c.
Background
The polarimetric synthetic aperture radar (PolSAR) adopts a multi-frequency, multi-channel imaging mode, can monitor the ground under almost all weather conditions, day and night, and has the advantages of strong penetrating power and high resolution. Under various transmit-receive combinations, PolSAR can comprehensively describe the scattering characteristics of ground targets and can more accurately invert parameters such as the physical characteristics, geometric characteristics and dielectric properties of a target. As one of the key technologies of PolSAR image interpretation, PolSAR image classification has been a research hot spot in recent years. PolSAR image classification assigns each pixel to a different class, such as farmland, grassland, city or river, according to its polarization information and spatial location. Generally, pixels belonging to the same terrain have similar physical characteristics and spatial continuity. Following the trend of PolSAR research in recent years, classification methods for PolSAR images can be divided into three major categories: methods based on target decomposition, methods based on statistical analysis, and methods based on machine learning, the last of which has been the focus of recent research. Machine learning, which uses example data and past experience to enable computers to learn or to simulate human behavior, has developed into a research hot spot in the field of artificial intelligence, attracting the attention of more and more researchers. Deep learning, as an important branch of machine learning, provides a powerful framework: it can automatically extract deeper features through multi-layer representation learning and discover complex structures in high-dimensional data, which has further promoted the development of PolSAR image classification.
When Chen et al. applied the convolutional neural network CNN to PolSAR image classification in 2018, two problems arose. The first is that the importance of phase information in the PolSAR image is ignored, although phase information plays an important role in PolSAR classification performance. The second is that for PolSAR image classification, CNN uses the neighborhood of each pixel as input for model training and prediction, and therefore suffers from repeated computation and heavy memory occupation. To address the first problem, Zhang et al. in 2017 made full use of the amplitude and phase of the PolSAR image, extended each element of the CNN to the complex field, and proposed a PolSAR image classification method based on a complex-valued convolutional neural network, but this method still did not remove the drawbacks of repeated computation and memory occupation. To address the second problem, in 2018 Li et al. applied an end-to-end, pixel-to-pixel dense classification network, the full convolution neural network FCN, to PolSAR image classification and proposed a PolSAR image classification method based on sparse coding and a sliding-window full convolution neural network; however, this method does not consider the phase information of the PolSAR image, so its classification accuracy is poor. Therefore, in 2019 Cao et al. extended each element of the full convolution neural network to the complex field and proposed the complex-valued full convolution neural network CVFCN, but this method still loses too much detail information of the PolSAR image through continuous downsampling, so the classification result map is not fine enough. In addition, the above methods extract only single features and cannot sufficiently mine the multi-scale features of the PolSAR image, so their classification performance remains unsatisfactory.
Disclosure of Invention
The invention aims to provide a PolSAR image classification method based on a multi-model joint learning network aiming at the defects of the prior art, so that different features extracted by a plurality of models are fused by using amplitude information and phase information of the PolSAR image, and the classification precision of the PolSAR image is improved.
In order to achieve the purpose, the technical scheme of the invention comprises the following steps:
1. a PolSAR image classification method based on a multi-model joint learning network is characterized by comprising the following steps:
(1) inputting a PolSAR image to be classified, and expanding the label matrix of the PolSAR image with a semi-supervised fuzzy clustering algorithm to obtain pseudo class labels for the unlabeled pixels in the PolSAR image to be classified; the expanded label matrix is denoted l_p;
(2) extracting vectors from the coherency matrix T_i of each pixel in the PolSAR image, i.e. first extracting the real and imaginary parts of the upper triangular elements of T_i to form a 9-dimensional vector x_i1, then extracting the upper triangular elements of T_i to form a 6-dimensional vector x_i2; then normalizing each dimension of x_i1 and x_i2 with a z-score function to obtain a normalized 9-dimensional vector x'_i1 and a normalized 6-dimensional vector x'_i2;
(3) applying the same sliding window operation to x'_i1, x'_i2 and l_p to obtain three corresponding data sets s_1, s_2 and s_l; the three data sets are then shuffled randomly in the same manner, the first 5% of the data are selected from s_1 and s_2 to obtain training sample sets s_1^T and s_2^T, and the first 5% of the data are selected from s_l to obtain the label matrix s_l^T corresponding to s_1^T and s_2^T;
(4) randomly initializing a 9-dimensional convolution kernel and a 6-dimensional convolution kernel with a Gaussian distribution of mean 0 and standard deviation 0.02, and constructing a full convolution neural network FCN model and a complex-valued full convolution neural network CVFCN model from the initialized 9-dimensional and 6-dimensional convolution kernels respectively, wherein the FCN model and the CVFCN model each comprise 7 convolution layers, and a pooling layer, a Relu activation function layer and a Batch normalization layer are cascaded in sequence behind each of the first four convolution layers;
(5) constructing dilated convolution layers, namely cascading in sequence dilated convolutions with dilation factors of 1, 2 and 3, and replacing the convolution layers in the first four layers of the CVFCN with the dilated convolution layers to form a complex-valued stacked dilated full convolution neural network CVSDFCN model;
(6) taking the FCN model, the CVFCN model and the CVSDFCN model as sub-models of a joint learning network; inputting the 9-dimensional training sample set s_1^T and its corresponding label matrix s_l^T into the FCN model, and inputting the 6-dimensional training sample set s_2^T and its corresponding label matrix s_l^T into both the CVFCN and CVSDFCN models; performing feature learning on the three sub-models with forward propagation and backward propagation algorithms to obtain three different expected result matrices x_a, x_b, x_c;
(7) fusing the three obtained expected result matrices x_a, x_b, x_c to obtain a 3-dimensional matrix M, initializing a 3×1 weight N, and multiplying M by N to obtain the final classification result.
Compared with the prior art, the invention has the following advantages:
First, the invention introduces semi-supervised fuzzy clustering into the preprocessing of PolSAR images to obtain pseudo class labels for unlabeled pixels, thereby expanding the set of labeled samples.
Second, the invention constructs dilated convolution layers to extract the multi-scale features of the PolSAR image, thereby improving the classification accuracy of the PolSAR image.
Third, the invention fuses three independent models, the full convolution neural network FCN, the complex-valued full convolution neural network CVFCN and the complex-valued stacked dilated full convolution neural network CVSDFCN, and can thus obtain better classification results than any single model.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is a comparison graph of the classification results of PolSAR images of the Weihe River region using the present invention and existing methods;
FIG. 3 is a comparison graph of the classification results of PolSAR images of the German ESAR area using the present invention and existing methods;
FIG. 4 is a comparison graph of the classification results of PolSAR images of the san Francisco region of the United States using the present invention and existing methods.
Detailed Description
The following detailed description of the embodiments and effects of the invention is provided in conjunction with the accompanying drawings:
referring to fig. 1, the implementation steps of this example include the following:
step 1: inputting PolSAR images X to be classified and the corresponding artificial marking matrix l, and performing data preprocessing.
1.1) carrying out label expansion on the PolSAR image by using semi-supervised fuzzy clustering:
1.1.1) setting parameters: the classification number is C, and the maximum iteration number is 50;
1.1.2) randomly selecting 1% of pixels from the marked pixels as supervision information, and obtaining a supervision membership matrix of the PolSAR image according to the supervision information
where ū_ij denotes the supervised membership degree of the ith pixel x_i to the jth cluster center v_j, taking its given value for pixels carrying supervision information and 0 for the remaining pixels, and N is the total number of pixels of the input PolSAR image X;
1.1.3) constructing an intra-class compactness objective function through maximum entropy regularization, and introducing the supervision information into the objective function to obtain the objective function J after the supervision information is introduced:
where d(x_i, v_j) denotes the Wishart distance between the ith pixel x_i and the jth cluster center v_j, λ denotes the fuzziness factor, u_ij denotes the membership degree of the ith pixel x_i to the jth cluster center v_j, and ū_ij denotes the supervised membership degree of x_i to v_j, with i ∈ {1, ..., N}, j ∈ {1, ..., C}, and λ set to 2;
1.1.4) taking the partial derivatives of formula 2) with respect to the membership degree u_ij and the cluster center v_j respectively, to obtain the update formulas of u_ij and v_j:
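The update formulas referenced here appeared as images in the source. Under the standard maximum-entropy derivation of fuzzy clustering (a reconstruction, so the original's semi-supervised coupling term may differ), the updates take the form:

```latex
u_{ij} = \frac{\exp\!\left(-d(x_i, v_j)/\lambda\right)}
              {\sum_{k=1}^{C} \exp\!\left(-d(x_i, v_k)/\lambda\right)}, \qquad
v_j = \frac{\sum_{i=1}^{N} u_{ij}\, x_i}{\sum_{i=1}^{N} u_{ij}}
```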
1.1.5) updating the membership degree u_ij and the cluster center v_j according to formula 3) and formula 4) respectively, until the number of iterations reaches 50, obtaining the finally updated membership degree u'_ij and cluster center v'_j;
1.1.6) clustering the input PolSAR image X according to the finally updated membership degree u'_ij and cluster center v'_j to obtain a result matrix l';
1.1.7) according to the result matrix l', setting pseudo class labels for the unlabeled pixels in the manual label matrix l to obtain the expanded label matrix l_p;
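The clustering loop of 1.1.2) to 1.1.6) can be sketched numerically as below, under simplifying assumptions: Euclidean distance stands in for the Wishart distance, supervision is injected by fixing the membership rows of the supervised pixels to their one-hot labels, and the function name and signature are illustrative rather than taken from the source.

```python
import numpy as np

def semi_supervised_fuzzy_clustering(X, C, sup_idx, sup_lab,
                                     lam=2.0, iters=50, v0=None):
    """Sketch of the semi-supervised maximum-entropy clustering of step 1.1.

    X: (N, d) pixel features; C: number of classes; sup_idx/sup_lab: indices
    and class labels of the 1% supervised pixels; lam: fuzziness factor.
    Euclidean distance is used here in place of the Wishart distance."""
    v = X[:C].astype(float) if v0 is None else v0.astype(float)
    for _ in range(iters):
        d = ((X[:, None, :] - v[None, :, :]) ** 2).sum(-1)  # (N, C) distances
        u = np.exp(-d / lam)
        u /= u.sum(1, keepdims=True)        # membership update (formula 3)
        u[sup_idx] = np.eye(C)[sup_lab]     # inject the supervision information
        v = (u.T @ X) / u.sum(0)[:, None]   # cluster-center update (formula 4)
    return u.argmax(1), v                   # pseudo labels l' and centers
```

With two well-separated groups of pixels and one supervised pixel per group, the remaining pixels inherit the supervised group's label, which is how the clustering result l' supplies pseudo labels for the unlabeled pixels.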
1.2) normalizing the input PolSAR image X:
1.2.1) the PolSAR image X is represented by a 3×3 coherency matrix T_i per pixel; the real and imaginary parts of the upper triangular elements of T_i are extracted to form a 9-dimensional vector x_i1:
x_i1 = [T_i^11, T_i^22, T_i^33, Re(T_i^12), Re(T_i^13), Re(T_i^23), Im(T_i^12), Im(T_i^13), Im(T_i^23)],
where Re(·) denotes the real part of a complex number and Im(·) denotes its imaginary part;
1.2.2) the upper triangular elements of T_i are extracted to form a 6-dimensional vector x_i2:
x_i2 = [T_i^11, T_i^12, T_i^13, T_i^22, T_i^23, T_i^33],
where T_i^11, T_i^22, T_i^33 denote the main diagonal elements of the coherency matrix T_i, and T_i^12, T_i^13, T_i^23 denote its upper off-diagonal elements;
1.2.3) each dimension of x_i1 and x_i2 is normalized with z-score to obtain the normalized 9-dimensional vector x'_i1 and 6-dimensional vector x'_i2;
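The vector construction and normalization of 1.2.1) to 1.2.3) can be sketched as follows; the helper names are illustrative, not from the source.

```python
import numpy as np

def coherency_vectors(T):
    """Build the 9-D real vector x_i1 and the 6-D complex vector x_i2 from a
    3x3 Hermitian coherency matrix T_i, as in steps 1.2.1)-1.2.2)."""
    x1 = np.array([T[0, 0].real, T[1, 1].real, T[2, 2].real,   # T11, T22, T33
                   T[0, 1].real, T[0, 2].real, T[1, 2].real,   # Re of T12, T13, T23
                   T[0, 1].imag, T[0, 2].imag, T[1, 2].imag])  # Im of T12, T13, T23
    x2 = np.array([T[0, 0], T[0, 1], T[0, 2],                  # upper triangular
                   T[1, 1], T[1, 2], T[2, 2]])                 # elements of T_i
    return x1, x2

def zscore(a, axis=0):
    """Per-dimension z-score normalization of step 1.2.3)."""
    return (a - a.mean(axis=axis, keepdims=True)) / a.std(axis=axis, keepdims=True)
```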
Step 2: selecting a training sample set from the normalized 9-dimensional vector x'_i1, the 6-dimensional vector x'_i2 and the expanded label matrix l_p.
2.1) applying the same sliding window operation to the normalized 9-dimensional vector x'_i1, the 6-dimensional vector x'_i2 and the expanded label matrix l_p to obtain three corresponding data sets s_1, s_2 and s_l; the formula of the sliding window operation is as follows:
Num = (ceil((H-L)/S)+1)·(ceil((W-L)/S)+1)
where ceil denotes the round-up function, Num is the number of windows, H and W denote the height and width of the input PolSAR image X respectively, L is the size of the sliding window, and S denotes the sliding step; in this example, without limitation, L = 128 and S = 32;
2.2) the data in the three data sets s_1, s_2 and s_l are randomly shuffled in the same manner;
2.3) the first 5% of the data are selected from s_1 and s_2 to obtain training sample sets s_1^T and s_2^T, and the first 5% of the data are selected from s_l to obtain the label matrix s_l^T corresponding to s_1^T and s_2^T.
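The window count of 2.1) and the 5% split of 2.2) to 2.3) can be sketched as below; `num_windows` assumes the first factor of the formula uses H-L (mirroring the W-L of the second factor), and the split keeps the sample and label sets aligned by shuffling both with one permutation.

```python
import math
import random

def num_windows(H, W, L=128, S=32):
    """Number of sliding windows over an H x W image (step 2.1)."""
    return (math.ceil((H - L) / S) + 1) * (math.ceil((W - L) / S) + 1)

def select_training_split(samples, labels, frac=0.05, seed=0):
    """Steps 2.2)-2.3): shuffle both patch sets with the same permutation
    and keep the first 5% together with the matching label patches."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)            # one permutation for both sets
    keep = idx[:max(1, int(frac * len(idx)))]
    return [samples[i] for i in keep], [labels[i] for i in keep]
```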
Step 3: constructing the full convolution neural network FCN model, the complex-valued full convolution neural network CVFCN model and the complex-valued stacked dilated full convolution neural network CVSDFCN model respectively.
3.1) constructing a full convolution neural network FCN model:
3.1.1) setting the hyper-parameters of the FCN model: learning rate 10^-3, batch size 32;
3.1.2) randomly initializing a 9-dimensional convolution kernel by utilizing Gaussian distribution with the mean value of 0 and the standard deviation of 0.02;
3.1.3) constructing the full convolution neural network FCN model according to the initialized 9-dimensional convolution kernel; this FCN model includes 7 convolution layers, and a pooling layer, a Relu activation function layer and a Batch normalization layer are cascaded in sequence behind each of the first four convolution layers; the specific structure is as follows:
9-dimensional convolution layer → first pooling layer → first Relu activation function layer → first Batch normalization layer → 60-dimensional convolution layer → second pooling layer → second Relu activation function layer → second Batch normalization layer → 120-dimensional convolution layer → third pooling layer → third Relu activation function layer → third Batch normalization layer → 240-dimensional convolution layer → fourth pooling layer → fourth Relu activation function layer → fourth Batch normalization layer → 240-dimensional convolution layer → 1024-dimensional convolution layer;
the size of the convolution kernel in each convolution layer is 3 × 3, and the step size of each pooling layer is 2;
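A quick sanity check on the geometry of this structure: with the four stride-2 pooling layers, a 128×128 input patch (the window size from step 2.1) is reduced to 8×8 before the final convolution layers; the channel list below simply transcribes the widths printed above.

```python
# Convolution widths as printed for the FCN structure above
FCN_CHANNELS = [9, 60, 120, 240, 240, 1024]

def pooled_size(n, pools=4, stride=2):
    """Spatial size of the feature map after the four stride-2 pooling layers."""
    for _ in range(pools):
        n //= stride
    return n
```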
3.2) constructing a complex value full convolution neural network CVFCN model:
3.2.1) setting the hyper-parameters of the CVFCN model: learning rate 10^-3, batch size 32;
3.2.2) randomly initializing a 6-dimensional convolution kernel by utilizing Gaussian distribution with the mean value of 0 and the standard deviation of 0.02;
3.2.3) constructing the complex-valued full convolution neural network CVFCN model according to the initialized 6-dimensional convolution kernel; the CVFCN model includes 7 convolution layers, and a pooling layer, a Relu activation function layer and a Batch normalization layer are cascaded in sequence behind each of the first four convolution layers; the specific structure is as follows:
6-dimensional convolution layer → 1st pooling layer → 1st Relu activation function layer → 1st Batch normalization layer → 60-dimensional convolution layer → 2nd pooling layer → 2nd Relu activation function layer → 2nd Batch normalization layer → 120-dimensional convolution layer → 3rd pooling layer → 3rd Relu activation function layer → 3rd Batch normalization layer → 240-dimensional convolution layer → 4th pooling layer → 4th Relu activation function layer → 4th Batch normalization layer → 240-dimensional convolution layer → 1024-dimensional convolution layer;
the size of the convolution kernel in each convolution layer is 3 × 3, and the step size of each pooling layer is 2;
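The essential difference from the real-valued FCN is that convolution is carried out in complex arithmetic, so the phase of the coherency-matrix entries survives into the feature maps. A minimal single-channel sketch (a plain 'valid' correlation, with no framework assumed):

```python
import numpy as np

def complex_conv2d(x, k):
    """'Valid' 2-D correlation in complex arithmetic, the core operation of a
    complex-valued convolution layer: amplitude and phase both enter the sum."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1), dtype=complex)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out
```

Because both amplitude and phase enter the product, two inputs with equal amplitude but different phase yield different feature maps, which a real-valued convolution on amplitudes alone cannot distinguish.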
3.3) constructing the complex-valued stacked dilated full convolution neural network CVSDFCN model:
3.3.1) setting the hyper-parameters of the CVSDFCN model: learning rate 10^-3, batch size 32;
3.3.2) constructing the dilated convolution layer, namely cascading in sequence dilated convolutions with dilation factors of 1, 2 and 3 respectively to obtain the dilated convolution layer;
3.3.3) replacing the convolution layers in the first four layers of the CVFCN with dilated convolution layers to form the CVSDFCN model, whose specific structure is as follows:
6-dimensional dilated convolution layer → 1st pooling layer → 1st Relu activation function layer → 1st Batch normalization layer → 60-dimensional dilated convolution layer → 2nd pooling layer → 2nd Relu activation function layer → 2nd Batch normalization layer → 120-dimensional dilated convolution layer → 3rd pooling layer → 3rd Relu activation function layer → 3rd Batch normalization layer → 240-dimensional dilated convolution layer → 4th pooling layer → 4th Relu activation function layer → 4th Batch normalization layer → 240-dimensional convolution layer → 1024-dimensional convolution layer;
the size of the convolution kernel in each dilated convolution layer is 3 × 3, and the step size of each pooling layer is 2.
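One way to see why stacking dilation factors 1, 2 and 3 extracts multi-scale context: each 3×3 convolution with dilation d widens the receptive field by (3-1)·d, so the three-layer stack sees a 13×13 neighborhood at the cost of three small kernels. A sketch of that bookkeeping:

```python
def stacked_receptive_field(kernel=3, dilations=(1, 2, 3)):
    """Receptive field of sequentially cascaded dilated convolutions
    (stride 1): each layer adds (kernel - 1) * dilation to the field."""
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf
```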
Step 4: obtaining the final classification result with the three models.
4.1) inputting the 9-dimensional training sample set s_1^T and its corresponding label matrix s_l^T into the full convolution neural network FCN model;
4.2) inputting the 6-dimensional training sample set s_2^T and its corresponding label matrix s_l^T into both the complex-valued full convolution neural network CVFCN model and the complex-valued stacked dilated full convolution neural network CVSDFCN model;
4.3) performing feature learning on the three models (the full convolution neural network FCN, the complex-valued full convolution neural network CVFCN and the complex-valued stacked dilated full convolution neural network CVSDFCN) with forward propagation and backward propagation algorithms, to obtain three different expected result matrices x_a, x_b, x_c, where x_a denotes the expected result matrix of the FCN model, x_b that of the CVFCN model, and x_c that of the CVSDFCN model;
4.4) fusing the three obtained expected result matrices x_a, x_b, x_c with the Stacking method to obtain a 3-dimensional matrix M, expressed as M(i, j, 3):
where i, j index the ith row and jth column of the three different expected result matrices x_a, x_b, x_c;
4.5) initializing a 3×1 weight N, and multiplying the 3-dimensional matrix M by the weight N to obtain the final classification result.
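Steps 4.4) and 4.5) can be sketched as follows, assuming the three result matrices share one spatial shape; the uniform initial weight is an illustrative choice, not prescribed by the source.

```python
import numpy as np

def stacking_fusion(xa, xb, xc, N=None):
    """Stack the three expected result matrices into M(i, j, 3) and apply a
    3x1 weight N to obtain the fused classification score (steps 4.4-4.5)."""
    M = np.stack([xa, xb, xc], axis=-1)      # 3-dimensional matrix M, (H, W, 3)
    if N is None:
        N = np.full((3, 1), 1.0 / 3.0)       # illustrative initial 3x1 weight
    return (M @ N)[..., 0]                   # weighted fusion, shape (H, W)
```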
The technical effects of the invention are further explained by combining simulation experiments as follows:
1. simulation conditions are as follows:
The simulation experiments were performed on a computer with an Intel(R) Core(TM) i9-9900K 3.60GHz CPU and 32 GB memory, using TensorFlow 1.13.1.
2. Simulation content:
Simulation 1: PolSAR images of the Weihe River region are selected and classified with the method of the invention and with the existing SVM, Wishart, Bagging, CNN, FCN and CVFCN methods respectively; the results are shown in FIG. 2, wherein:
2(a) is the manual label map of the PolSAR image of the Weihe River region;
2(b) is the classification result map of the PolSAR image of the Weihe River region using the existing SVM method;
2(c) is the classification result map of the PolSAR image of the Weihe River region using the existing Wishart method;
2(d) is the classification result map of the PolSAR image of the Weihe River region using the existing Bagging method;
2(e) is the classification result map of the PolSAR image of the Weihe River region using the existing CNN method;
2(f) is the classification result map of the PolSAR image of the Weihe River region using the existing FCN method;
2(g) is the classification result map of the PolSAR image of the Weihe River region using the existing CVFCN method;
2(h) is the classification result map of the PolSAR image of the Weihe River region using the present invention;
As can be seen from fig. 2, the classification results obtained with SVM, Wishart and Bagging contain a large number of misclassified pixels and many isolated pixel points; although the classification result of the CNN model is more continuous than those three algorithms and isolated pixel points are clearly reduced, many erroneous pixels remain, for example in the grassland objects outside the elliptical frame; the result map of the FCN model is clearer on the whole than the preceding methods, but many erroneous pixels remain in the classification of river features; the CVFCN performs better than the FCN in the elliptically highlighted waters; compared with the other methods, the classification result map of the invention is smoother, each type of ground object of this data set can be clearly distinguished, and the improvement is especially prominent in the areas framed by ellipses and squares.
Simulation 2, selecting PolSAR images in the German ESAR area, and classifying the PolSAR images respectively by using the method of the invention and the existing SVM method, Wishart method, Bagging method, CNN method, FCN method and CVFCN method, wherein the results are shown in FIG. 3, wherein:
3(a) is an artificial labeling map of PolSAR images in the German ESAR area;
3(b) is a classification result graph of the German ESAR area by using the existing SVM method;
3(c) is a classification result graph of German ESAR areas by using the existing Wishart method;
3(d) a classification result graph of PolSAR images in the Germany ESAR area by using the existing Bagging method;
3(e) a classification result graph of PolSAR images in the Germany ESAR area by using the existing CNN method;
3(f) is a classification result graph of PolSAR images in the Germany ESAR area by using the existing FCN method;
3(g) is a classification result graph of PolSAR images in the Germany ESAR area by using the existing CVFCN method;
3(h) is a classification result graph of PolSAR images in the German ESAR area by using the invention;
As can be seen from fig. 3, the classification result maps of the SVM, Wishart and Bagging algorithms are very severely mixed among the three land covers of the building area, the open area and the forest area; the classification result map of the CNN is clearer than the first three as a whole, but many misclassified pixels remain, for example in the area framed by the ellipse, where many pixels of the building area are misclassified as open area and forest area; the classification result map of the FCN model is smoother on the whole than the first four algorithms, most visibly in the classification of continuous regions, but for the building area, such as the region framed by the rectangle, many erroneous pixels remain; the classification result of the CVFCN model is superior to that of the FCN model; and the classification result map of the invention is closer to the manual label map and smoother than those of the other algorithms.
Simulation 3: PolSAR images of the san Francisco region of the United States are selected and classified with the method of the invention and with the existing SVM, Wishart, Bagging, CNN, FCN and CVFCN methods respectively; the results are shown in FIG. 4, wherein:
4(a) is the manual label map of the PolSAR image of the san Francisco region of the United States;
4(b) is the classification result map of the san Francisco region using the existing SVM method;
4(c) is the classification result map of the san Francisco region using the existing Wishart method;
4(d) is the classification result map of the PolSAR image of the san Francisco region using the existing Bagging method;
4(e) is the classification result map of the PolSAR image of the san Francisco region using the existing CNN method;
4(f) is the classification result map of the PolSAR image of the san Francisco region using the existing FCN method;
4(g) is the classification result map of the PolSAR image of the san Francisco region using the existing CVFCN method;
4(h) is the classification result map of the PolSAR image of the san Francisco region using the present invention;
As can be seen from fig. 4, the misclassification in the result maps of SVM and Wishart is very serious; for example, in the area enclosed by the rectangle, many pixels of the developed urban land cover are misclassified as plant cover; the Bagging result map has many misclassified pixels in the low-density urban and high-density urban land covers; the outline of the CNN result map is clearer than the first three algorithms, but many erroneous pixels remain in the high-density urban and developed urban land covers; the FCN result map classifies the developed urban land cover poorly; the CVFCN classifies low-density urban areas better than the preceding algorithms; and the classification result map of the invention is clearer and smoother in continuous areas and closer to the manual label map.