Dual-polarization channel fusion ship size estimation method based on CNN


1. A CNN-based dual-polarization channel fusion ship target size estimation method, used for SAR image interpretation and for SAR image ship target identification and classification, characterized by comprising the following steps:

step 1, acquiring an experimental data set comprising training sample data and test sample data, wherein each sample in the experimental data set comprises images of the two polarization channels VH and VV together with size truth values provided by the ship's automatic identification system (AIS); the size truth values comprise a length truth value and a width truth value;

step 2, constructing a convolutional-neural-network-based ship target size estimation network framework Ψ;

step 3, inputting the training sample data into the constructed ship target size estimation network framework Ψ based on the convolutional neural network for training to obtain a trained convolutional neural network framework Ψ';

step 4, inputting the test sample data into the trained convolutional neural network framework Ψ' to obtain size estimation results for the test samples.

2. The CNN-based dual-polarization channel fusion ship target size estimation method according to claim 1, wherein step 2 comprises the following substeps:

substep 2.1, setting N convolutional layers, a first max-pooling layer P1, and G fully connected layers; wherein the N convolutional layers are the first convolutional layer L1, the second convolutional layer L2, ..., and the Nth convolutional layer LN; the G fully connected layers are the first fully connected layer Fc1, the second fully connected layer Fc2, ..., and the Gth fully connected layer FcG;

substep 2.2, arranging the N convolutional layers, the first max-pooling layer P1, and the G fully connected layers in sequence to form the convolutional-neural-network-based ship target size estimation network framework Ψ.

3. The CNN-based dual-polarization channel fusion ship target size estimation method according to claim 2, wherein in substep 2.1, the N convolutional layers are seventeen convolutional layers and the G fully connected layers are four fully connected layers.

4. The CNN-based dual-polarization channel fusion ship target size estimation method according to claim 2, wherein in substep 2.2, the convolutional-neural-network-based ship target size estimation network framework Ψ consists of the first convolutional layer L1, the first max-pooling layer P1, the second convolutional layer L2, ..., the Nth convolutional layer LN, the first fully connected layer Fc1, the second fully connected layer Fc2, ..., and the Gth fully connected layer FcG connected in series in that order.

5. The CNN-based dual-polarization channel fusion ship target size estimation method according to claim 4, wherein in substep 2.2, each fully connected layer applies a nonlinear mapping using the ReLU activation function, whose mathematical expression f(x) is:

f(x)=max(0,x)

where x represents the input to the ReLU activation function, and the output f(x) is the maximum of x and 0.

6. The CNN-based dual-polarization channel fusion ship target size estimation method according to claim 1, wherein step 3 specifically comprises the following substeps:

substep 3.1, inputting the training sample data into the constructed convolutional-neural-network-based ship target size estimation network framework Ψ and calculating the loss function loss of the network output layer:

loss = (y_true − y_prediction)²

where y_true is the true size of the sample ship target and y_prediction is the estimated size of the sample ship target;

substep 3.2, training the network by the back-propagation algorithm and momentum stochastic gradient descent, and updating the parameters of each layer of the network, wherein the parameter update formulas are:

v_(i+1) = 0.9·v_i − 0.0005·ε·ω_i − ε·∂L/∂ω|_(ω=ω_i)

ω_(i+1) = ω_i + v_(i+1)

where v_i is the velocity parameter at the ith iteration, 0.9 is the momentum parameter, 0.0005 is the weight decay coefficient, i is the iteration index, ε is the learning rate, ω_i is the weight parameter at the ith iteration, and L is the loss function, i.e., the loss function loss of the network output layer;

substep 3.3, repeating substep 3.2, iterating and updating the parameters until the loss function loss converges, to obtain the trained convolutional neural network framework Ψ'.

7. The CNN-based dual-polarization channel fusion ship target size estimation method according to claim 1, wherein in step 1, M classes of ship targets are selected from the public OpenSARShip data set as the experimental data set.

8. The CNN-based dual-polarization channel fusion ship target size estimation method according to claim 7, wherein the M classes of ship targets are five classes: tanker, cargo ship, bulk carrier, general cargo ship, and container ship.

9. The CNN-based dual-polarization channel fusion ship target size estimation method according to claim 1, wherein in step 1, the ratio of training sample data to test sample data is 7:3.

Background

Synthetic aperture radar (SAR) can observe the marine environment continuously, over large areas, and in all weather, unaffected by factors such as cloud cover, climate, and time of day. It has become an effective means of modern ocean management, is widely applied in fields such as military reconnaissance and disaster prediction, and has broad research and application prospects. In recent years, building on maritime target monitoring, research in fields such as ship detection and classification, iceberg detection, wind inversion, oil spill detection, sea ice monitoring, and ship wake detection has been carried out and has demonstrated its practical value. Ship target size estimation is a key foundation of ship target detection and classification, and fine geometric parameter estimation is likewise key to SAR image interpretation, so the topic is of significant research interest.

Much work has been done on ship target size estimation. However, owing to real-world constraints it is difficult to obtain large numbers of high-resolution SAR images, and in low- and medium-resolution SAR images the detail of a ship target is not rich enough, which degrades the accuracy of target size estimation. To address this problem, Bjorn Tings et al. proposed a dynamically adaptable ship parameter estimation method that optimizes the algorithm parameters with cross entropy and multiple linear regression; it achieves high estimation accuracy on TerraSAR-X data, but the algorithm is strongly affected by environmental factors such as sea clutter.

In addition, for low- and medium-resolution SAR images, Lui Bedini et al. proposed a ship target size extraction method based on SAR images. Machine learning has also begun to be applied to ship target size estimation, with good results. Boying Li et al., in the paper "Ship Size Extraction for Sentinel-1 Images Based on Dual-Polarization Fusion and Nonlinear Regression: Push Error Under One Pixel" (B. Li, B. Liu, W. Guo, Z. Zhang, and W. Yu, IEEE Transactions on Geoscience and Remote Sensing, 2018, Vol. 56(8): 4887-4905), proposed a size estimation method based on dual-polarization fusion and nonlinear regression, which preprocesses the ship images and applies gradient boosting decision tree (GBDT) regression with dual-polarization fusion to reduce the estimation error. However, the algorithm model depends on hand-designed features, requires a large amount of prior knowledge, and its results are not robust.

Disclosure of Invention

The invention aims to provide a CNN (convolutional neural network)-based dual-polarization channel fusion ship target size estimation method that remedies the shortcomings of existing SAR ship target size estimation methods, improving target size estimation performance, and hence estimation accuracy, at medium and low resolution.

The technical idea of the invention is as follows: first, training samples are input into a convolutional-neural-network (CNN)-based ship target size estimation network framework for training, yielding a trained convolutional neural network framework; test samples are then input into the trained framework to obtain the ship target size estimation results.

A CNN-based dual-polarization channel fusion ship target size estimation method, used for SAR image interpretation and for SAR image ship target identification and classification, comprises the following steps:

step 1, acquiring an experimental data set comprising training sample data and test sample data, wherein each sample in the experimental data set comprises images of the two polarization channels VH and VV together with size truth values provided by the ship's automatic identification system (AIS); the size truth values comprise a length truth value and a width truth value;

step 2, constructing a convolutional-neural-network-based ship target size estimation network framework Ψ;

step 3, inputting the training sample data into the constructed ship target size estimation network framework Ψ based on the convolutional neural network for training to obtain a trained convolutional neural network framework Ψ';

step 4, inputting the test sample data into the trained convolutional neural network framework Ψ' to obtain size estimation results for the test samples.

The technical solution of the invention has the following features and further improvements:

(1) In step 1, M classes of ship targets are selected from the public OpenSARShip data set as the experimental data set; the M classes of ship targets are five classes: tanker, cargo ship, bulk carrier, general cargo ship, and container ship.

(2) In step 1, the ratio of training sample data to test sample data is 7:3.

(3) Step 2 comprises the following substeps:

substep 2.1, setting N convolutional layers, a first max-pooling layer P1, and G fully connected layers; wherein the N convolutional layers are the first convolutional layer L1, the second convolutional layer L2, ..., and the Nth convolutional layer LN; the G fully connected layers are the first fully connected layer Fc1, the second fully connected layer Fc2, ..., and the Gth fully connected layer FcG;

substep 2.2, arranging the N convolutional layers, the first max-pooling layer P1, and the G fully connected layers in sequence to form the convolutional-neural-network-based ship target size estimation network framework Ψ.

(4) In substep 2.1, the N convolutional layers are seventeen convolutional layers and the G fully connected layers are four fully connected layers.

(5) In substep 2.2, the convolutional-neural-network-based ship target size estimation network framework Ψ consists of the first convolutional layer L1, the first max-pooling layer P1, the second convolutional layer L2, ..., the Nth convolutional layer LN, the first fully connected layer Fc1, the second fully connected layer Fc2, ..., and the Gth fully connected layer FcG connected in series in that order.

(6) In substep 2.2, each fully connected layer applies a nonlinear mapping using the ReLU activation function, whose mathematical expression f(x) is:

f(x)=max(0,x)

where x represents the input to the ReLU activation function, and the output f(x) is the maximum of x and 0.

(7) Step 3 specifically comprises the following substeps:

substep 3.1, inputting the training sample data into the constructed convolutional-neural-network-based ship target size estimation network framework Ψ and calculating the loss function loss of the network output layer:

loss = (y_true − y_prediction)²

where y_true is the true size of the sample ship target and y_prediction is the estimated size of the sample ship target;

substep 3.2, training the network by the back-propagation algorithm and momentum stochastic gradient descent, and updating the parameters of each layer of the network, wherein the parameter update formulas are:

v_(i+1) = 0.9·v_i − 0.0005·ε·ω_i − ε·∂L/∂ω|_(ω=ω_i)

ω_(i+1) = ω_i + v_(i+1)

where v_i is the velocity parameter at the ith iteration, 0.9 is the momentum parameter, 0.0005 is the weight decay coefficient, i is the iteration index, ε is the learning rate, ω_i is the weight parameter at the ith iteration, and L is the loss function, i.e., the loss function loss of the network output layer;

substep 3.3, repeating substep 3.2, iterating and updating the parameters until the loss function loss converges, to obtain the trained convolutional neural network framework Ψ'.
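For concreteness, one iteration of the above update rule can be traced numerically. The following minimal Python sketch uses the stated momentum and weight decay values, but the learning rate, weight, and gradient values are illustrative only and not from the invention:

```python
# One illustrative iteration of the momentum update for a single scalar
# weight; momentum 0.9 and weight decay 0.0005 as stated in the text,
# all other numbers are made up for illustration.
eps = 0.01          # learning rate epsilon (illustrative)
w, v = 0.5, 0.0     # current weight omega_i and velocity v_i
grad = 0.2          # dL/dw evaluated at omega_i (illustrative)

v = 0.9 * v - 0.0005 * eps * w - eps * grad   # v_{i+1}
w = w + v                                     # omega_{i+1}
print(w, v)         # 0.4979975 -0.0020025
```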

Compared with the prior art, the invention has the following advantages:

1) The method estimates the size of ship targets in SAR images with a convolutional neural network; it needs no hand-designed features extracted from the SAR image, which greatly reduces manual effort while improving the robustness of the size estimation algorithm.

2) The invention draws on the idea of the feature pyramid network, fusing multi-scale features of the SAR ship target; at the same time, the dual-polarization channel images of the SAR ship sample are fed to the network as a two-channel input, and the strong feature extraction capability of the convolutional network fuses the dual-polarization channel information, further improving the accuracy of ship target size estimation.

Drawings

FIG. 1 is a flow chart of an implementation of the CNN-based dual-polarization channel fusion ship target size estimation method of the present invention;

FIG. 2 shows VV-channel and VH-channel images of some of the samples used in the present invention, where FIG. 2(a) shows VH polarization channel images and FIG. 2(b) shows VV polarization channel images;

FIG. 3 is a frame diagram of a convolutional neural network-based ship target size estimation network constructed according to the present invention;

FIG. 4 visualizes the comparison between the target sizes estimated by the present invention and the true target sizes; FIG. 4(a) compares the estimated target lengths with the true target lengths; FIG. 4(b) compares the estimated target widths with the true target widths.

Detailed Description

The embodiments and effects of the present invention will be described in detail below with reference to the accompanying drawings:

referring to fig. 1, the implementation steps of the CNN-based dual-polarized channel fusion ship target size estimation method of the present invention are as follows:

step 1, obtaining a training sample and a testing sample.

Five classes of ship targets are selected from the public OpenSARShip data set as the experimental data set; 70% of the data set is randomly selected as training sample data and the remaining 30% as test sample data. Each sample contains images of the two polarization channels VH and VV and the size truth values (length and width) provided by the ship's automatic identification system (AIS).

FIG. 2 shows the dual-polarization channel images of two samples in the present invention; FIG. 2(a) shows the VH polarization channel images and FIG. 2(b) the VV polarization channel images. As can be seen from FIG. 2, the VV channel tends to have a higher signal-to-noise ratio and is better suited to maritime target detection, while the VH channel is less affected by the marine environment and responds better to volume scattering than the VV channel. Under different environments the information in the two polarization channels is complementary, and fully exploiting the advantages of both improves the accuracy of ship target size estimation.
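The sample format and the 7:3 split described above admit a compact implementation. The following PyTorch sketch is an illustrative assumption rather than part of the original disclosure: the class name, constructor arguments, and in-memory layout are hypothetical, and only the two-channel VH/VV stacking and the 70%/30% random split follow the text.

```python
import torch
from torch.utils.data import Dataset, random_split

class ShipSampleDataset(Dataset):
    """Hypothetical loader: each sample pairs a VH and a VV chip (128x128)
    with the AIS length/width truth values, stacked as a 2-channel input."""

    def __init__(self, vh_chips, vv_chips, lengths, widths):
        # vh_chips, vv_chips: float tensors of shape (num_samples, 128, 128)
        # lengths, widths: float tensors of shape (num_samples,), in meters
        self.vh = vh_chips
        self.vv = vv_chips
        self.targets = torch.stack([lengths, widths], dim=1)  # (N, 2)

    def __len__(self):
        return len(self.vh)

    def __getitem__(self, idx):
        # Stack VH and VV as the two input channels: shape (2, 128, 128).
        x = torch.stack([self.vh[idx], self.vv[idx]], dim=0)
        return x, self.targets[idx]

def split_7_3(dataset):
    # 7:3 random split into training and test subsets, as in step 1.
    n_train = int(0.7 * len(dataset))
    return random_split(dataset, [n_train, len(dataset) - n_train])
```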

Step 2, constructing the convolutional-neural-network-based ship target size estimation network framework Ψ and its parameters.

Specifically, step 2 comprises the following substeps:

substep 2.1, referring to fig. 3, constructing a network framework Ψ for estimating the size of the ship target based on the convolutional neural network, specifically as follows:

1) Seventeen convolutional layers are set, namely the first convolutional layer L1, the second convolutional layer L2, the third convolutional layer L3, the fourth convolutional layer L4, the fifth convolutional layer L5, the sixth convolutional layer L6, the seventh convolutional layer L7, the eighth convolutional layer L8, the ninth convolutional layer L9, the tenth convolutional layer L10, the eleventh convolutional layer L11, the twelfth convolutional layer L12, the thirteenth convolutional layer L13, the fourteenth convolutional layer L14, the fifteenth convolutional layer L15, the sixteenth convolutional layer L16, and the seventeenth convolutional layer L17.

2) One max-pooling layer is set, namely the first max-pooling layer P1.

3) Four fully connected layers are set, namely the first fully connected layer Fc1, the second fully connected layer Fc2, the third fully connected layer Fc3, and the fourth fully connected layer Fc4.

4) The seventeen convolutional layers of 1), the max-pooling layer of 2), and the four fully connected layers of 3) are arranged in sequence, i.e., the first convolutional layer L1, the first max-pooling layer P1, the second convolutional layer L2, the third convolutional layer L3, ..., the seventeenth convolutional layer L17, the first fully connected layer Fc1, the second fully connected layer Fc2, the third fully connected layer Fc3, and the fourth fully connected layer Fc4 are connected in series in that order to form the convolutional-neural-network-based ship target size estimation network framework Ψ.

Substep 2.2, setting the parameters of each layer of the network framework Ψ:

The first convolutional layer L1 takes input image data x1 of size 128×128×2; its convolution kernel K1 has a window size of 7×7, a sliding stride of 2, and a padding parameter P of 3; it convolves the input data and outputs 64 feature maps Y1 of size 64×64×64;

The first max-pooling layer P1 takes Y1 as input; its padding parameter P is 1, and its pooling kernel U1 has a window size of 3×3 and a sliding stride of 2; it down-samples the input data and outputs a feature map Y2 of size 32×32×64;

The second convolutional layer L2 takes Y2 as input; its convolution kernel K2 has a window size of 3×3, a sliding stride of 1, and a padding parameter P of 1; it convolves the input data and outputs 64 feature maps Y3 of size 32×32×64;

The third convolutional layer L3 takes Y3 as input; its convolution kernel K3 has a window size of 3×3, a sliding stride of 1, and a padding parameter P of 1; it convolves the input data and outputs 64 feature maps Y4 of size 32×32×64;

The fourth convolutional layer L4 takes Y2+Y4 as input, i.e., the pixel values of the feature maps of Y2 and Y4 are added on corresponding channels to form the input of the fourth convolutional layer L4; its convolution kernel K4 has a window size of 3×3, a sliding stride of 1, and a padding parameter P of 1; it convolves the input data and outputs 64 feature maps Y5 of size 32×32×64;

The fifth convolutional layer L5 takes Y5 as input; its convolution kernel K5 has a window size of 3×3, a sliding stride of 1, and a padding parameter P of 1; it convolves the input data and outputs 64 feature maps Y6 of size 32×32×64;

The sixth convolutional layer L6 takes Y4+Y6 as input, i.e., the pixel values of the feature maps of Y4 and Y6 are added on corresponding channels to form the input of the sixth convolutional layer L6; its convolution kernel K6 has a window size of 3×3, a sliding stride of 2, and a padding parameter P of 1; it convolves the input data and outputs 128 feature maps Y7 of size 16×16×128;

The seventh convolutional layer L7 takes Y7 as input; its convolution kernel K7 has a window size of 3×3, a sliding stride of 1, and a padding parameter P of 1; it convolves the input data and outputs 128 feature maps Y8 of size 16×16×128;

The eighth convolutional layer L8 takes Y′6+Y8 as input, where Y′6 corresponds to the dashed path in FIG. 3: Y′6 is obtained by convolving Y6 with a 1×1 kernel with 128 channels, a sliding stride of 2, and a padding parameter P of 0, giving an output of size 16×16×128; Y′6+Y8 means that the pixel values of the feature maps of Y′6 and Y8 are added on corresponding channels, and the result is taken as the input of the eighth convolutional layer L8; its convolution kernel K8 has a window size of 3×3, a sliding stride of 1, and a padding parameter P of 1; it convolves the input data and outputs 128 feature maps Y9 of size 16×16×128;

The ninth convolutional layer L9 takes Y9 as input; its convolution kernel K9 has a window size of 3×3, a sliding stride of 1, and a padding parameter P of 1; it convolves the input data and outputs 128 feature maps Y10 of size 16×16×128;

The tenth convolutional layer L10 takes Y8+Y10 as input, i.e., the pixel values of the feature maps of Y8 and Y10 are added on corresponding channels to form the input of the tenth convolutional layer L10; its convolution kernel K10 has a window size of 3×3, a sliding stride of 2, and a padding parameter P of 1; it convolves the input data and outputs 256 feature maps Y11 of size 8×8×256;

The eleventh convolutional layer L11 takes Y11 as input; its convolution kernel K11 has a window size of 3×3, a sliding stride of 1, and a padding parameter P of 1; it convolves the input data and outputs 256 feature maps Y12 of size 8×8×256;

The twelfth convolutional layer L12 takes Y′10+Y12 as input, where Y′10 is obtained by convolving Y10 with a 1×1 kernel with 256 channels, a sliding stride of 2, and a padding parameter P of 0, giving an output of size 8×8×256; Y′10+Y12 means that the pixel values of the feature maps of Y′10 and Y12 are added on corresponding channels, and the result is taken as the input of the twelfth convolutional layer L12; its convolution kernel K12 has a window size of 3×3, a sliding stride of 1, and a padding parameter P of 1; it convolves the input data and outputs 256 feature maps Y13 of size 8×8×256;

The thirteenth convolutional layer L13 takes Y13 as input; its convolution kernel K13 has a window size of 3×3, a sliding stride of 1, and a padding parameter P of 1; it convolves the input data and outputs 256 feature maps Y14 of size 8×8×256;

The fourteenth convolutional layer L14 takes Y12+Y14 as input, i.e., the pixel values of the feature maps of Y12 and Y14 are added on corresponding channels to form the input of the fourteenth convolutional layer L14; its convolution kernel K14 has a window size of 3×3, a sliding stride of 2, and a padding parameter P of 1; it convolves the input data and outputs 512 feature maps Y15 of size 4×4×512;

The fifteenth convolutional layer L15 takes Y15 as input; its convolution kernel K15 has a window size of 3×3, a sliding stride of 1, and a padding parameter P of 1; it convolves the input data and outputs 512 feature maps Y16 of size 4×4×512;

The sixteenth convolutional layer L16 takes Y′14+Y16 as input, where Y′14 is obtained by convolving Y14 with a 1×1 kernel with 512 channels, a sliding stride of 2, and a padding parameter P of 0, giving an output of size 4×4×512; Y′14+Y16 means that the pixel values of the feature maps of Y′14 and Y16 are added on corresponding channels, and the result is taken as the input of the sixteenth convolutional layer L16; its convolution kernel K16 has a window size of 3×3, a sliding stride of 1, and a padding parameter P of 1; it convolves the input data and outputs 512 feature maps Y17 of size 4×4×512;

The seventeenth convolutional layer L17 takes Y17 as input; its convolution kernel K17 has a window size of 3×3, a sliding stride of 1, and a padding parameter P of 1; it convolves the input data and outputs 512 feature maps Y18 of size 4×4×512;

The first fully connected layer Fc1 has 2048 neurons; its input is the feature fusion vector Of, a row vector formed from Y6, Y10, Y14, and Y18: Y6, Y10, Y14, and Y18 are flattened into row vectors of sizes 1×65536, 1×32768, 1×16384, and 1×8192, respectively, and concatenated into Of of size 1×122880. A nonlinear mapping is applied via the ReLU activation function, and a 2048-dimensional row vector X1 is output as the input of the second fully connected layer, where the mathematical expression f(x) of the ReLU activation function is:

f(x)=max(0,x)

In the above equation, x represents the input to the ReLU activation function, and the output f(x) is the maximum of x and 0.

The second fully connected layer Fc2 has 1024 neurons; it takes the 2048-dimensional row vector X1 output by the first fully connected layer Fc1, applies a nonlinear mapping via the ReLU activation function, and outputs a 1024-dimensional row vector X2 as the input of the third fully connected layer Fc3;

The third fully connected layer Fc3 has 256 neurons; it takes the 1024-dimensional row vector X2 output by the second fully connected layer Fc2, applies a nonlinear mapping via the ReLU activation function, and outputs a 256-dimensional row vector X3 as the input of the output layer;

The fourth fully connected layer Fc4 has 2 neurons; it takes the 256-dimensional row vector X3 output by the third fully connected layer Fc3, applies a nonlinear mapping via the ReLU activation function, and outputs a 2-dimensional row vector X4 as the ship target size estimation result.
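The layer specification of substeps 2.1 and 2.2 maps onto a residual-style CNN. The following PyTorch sketch reconstructs the framework Ψ with the kernel sizes, strides, paddings, channel counts, 1×1 shortcut projections, and Y6/Y10/Y14/Y18 feature fusion stated above. It is a sketch under stated assumptions, not the authoritative implementation: the text specifies ReLU only for the fully connected layers, so the activations after the convolutional layers are an assumption, and all module names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv3(cin, cout, stride=1):
    # 3x3 convolution with padding 1, as used by L2-L17.
    return nn.Conv2d(cin, cout, kernel_size=3, stride=stride, padding=1)

class ShipSizeNet(nn.Module):
    """Sketch of the framework Psi: 17 conv layers, 1 max-pool,
    multi-scale fusion of Y6/Y10/Y14/Y18, and 4 fully connected layers."""

    def __init__(self):
        super().__init__()
        self.l1 = nn.Conv2d(2, 64, 7, stride=2, padding=3)   # -> 64x64x64
        self.p1 = nn.MaxPool2d(3, stride=2, padding=1)       # -> 32x32x64
        self.l2, self.l3 = conv3(64, 64), conv3(64, 64)
        self.l4, self.l5 = conv3(64, 64), conv3(64, 64)
        self.l6 = conv3(64, 128, stride=2)                   # -> 16x16x128
        self.l7 = conv3(128, 128)
        self.proj6 = nn.Conv2d(64, 128, 1, stride=2)         # Y6 -> Y'6
        self.l8, self.l9 = conv3(128, 128), conv3(128, 128)
        self.l10 = conv3(128, 256, stride=2)                 # -> 8x8x256
        self.l11 = conv3(256, 256)
        self.proj10 = nn.Conv2d(128, 256, 1, stride=2)       # Y10 -> Y'10
        self.l12, self.l13 = conv3(256, 256), conv3(256, 256)
        self.l14 = conv3(256, 512, stride=2)                 # -> 4x4x512
        self.l15 = conv3(512, 512)
        self.proj14 = nn.Conv2d(256, 512, 1, stride=2)       # Y14 -> Y'14
        self.l16, self.l17 = conv3(512, 512), conv3(512, 512)
        # Fusion vector Of: 65536 + 32768 + 16384 + 8192 = 122880.
        self.fc1 = nn.Linear(122880, 2048)
        self.fc2 = nn.Linear(2048, 1024)
        self.fc3 = nn.Linear(1024, 256)
        self.fc4 = nn.Linear(256, 2)  # [length, width]

    def forward(self, x):                        # x: (B, 2, 128, 128)
        a = F.relu  # assumption: ReLU after conv layers too (the text names it only for FC layers)
        y1 = a(self.l1(x))
        y2 = self.p1(y1)
        y3 = a(self.l2(y2)); y4 = a(self.l3(y3))
        y5 = a(self.l4(y2 + y4)); y6 = a(self.l5(y5))
        y7 = a(self.l6(y4 + y6)); y8 = a(self.l7(y7))
        y9 = a(self.l8(self.proj6(y6) + y8)); y10 = a(self.l9(y9))
        y11 = a(self.l10(y8 + y10)); y12 = a(self.l11(y11))
        y13 = a(self.l12(self.proj10(y10) + y12)); y14 = a(self.l13(y13))
        y15 = a(self.l14(y12 + y14)); y16 = a(self.l15(y15))
        y17 = a(self.l16(self.proj14(y14) + y16)); y18 = a(self.l17(y17))
        # Multi-scale fusion: flatten and concatenate Y6, Y10, Y14, Y18.
        of = torch.cat([t.flatten(1) for t in (y6, y10, y14, y18)], dim=1)
        h = F.relu(self.fc1(of))
        h = F.relu(self.fc2(h))
        h = F.relu(self.fc3(h))
        return F.relu(self.fc4(h))  # the text applies ReLU at the output as well
```

A quick shape check: `ShipSizeNet()(torch.randn(1, 2, 128, 128))` returns a tensor of shape (1, 2), i.e., the estimated length and width.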

Step 3, inputting the training sample data into the constructed convolutional-neural-network-based ship target size estimation network framework Ψ for training to obtain the trained convolutional neural network framework Ψ'.

Specifically, step 3 comprises the following substeps:

Substep 3.1, inputting the training sample data into the constructed convolutional-neural-network-based ship target size estimation network framework Ψ and calculating the loss function loss of the network output layer:

loss = (y_true − y_prediction)²

where y_true is the true size of the sample ship target and y_prediction is the estimated size of the sample ship target.

Substep 3.2, training the network by the back-propagation algorithm and momentum stochastic gradient descent, and updating the parameters of each layer of the network, where the parameter update formulas are:

v_(i+1) = 0.9·v_i − 0.0005·ε·ω_i − ε·∂L/∂ω|_(ω=ω_i)

ω_(i+1) = ω_i + v_(i+1)

where v_i is the velocity parameter at the ith iteration, 0.9 is the momentum parameter, 0.0005 is the weight decay coefficient, i is the iteration index, ε is the learning rate, ω_i is the weight parameter at the ith iteration, and L is the loss function, i.e., the loss function loss of the network output layer.

In the network, the weights are randomly initialized from a Gaussian distribution with mean 0 and variance 0.01, and the initial velocity v is set to 0.

Substep 3.3, repeating substep 3.2, iterating and updating the parameters until the loss function loss converges, to obtain the trained convolutional neural network framework Ψ'.
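Substeps 3.1 to 3.3 amount to a standard momentum-SGD regression loop. A minimal sketch follows, assuming the hypothetical `ShipSizeNet` module above and a `train_loader` yielding (image, size) batches; the learning rate value and epoch count are illustrative, not from the text. Note that `torch.optim.SGD`'s momentum bookkeeping differs slightly in form from the update rule above while implementing the same momentum-plus-weight-decay scheme.

```python
import torch
import torch.nn as nn

def train(model, train_loader, epochs=50, lr=0.01):
    # Gaussian initialization: mean 0, variance 0.01 (i.e., std 0.1) per the text.
    for m in model.modules():
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            nn.init.normal_(m.weight, mean=0.0, std=0.1)
            nn.init.zeros_(m.bias)
    # Momentum SGD with momentum 0.9 and weight decay 0.0005, as stated.
    opt = torch.optim.SGD(model.parameters(), lr=lr,
                          momentum=0.9, weight_decay=0.0005)
    mse = nn.MSELoss()  # loss = (y_true - y_prediction)^2, averaged over the batch
    for epoch in range(epochs):  # iterate until the loss converges (substep 3.3)
        for x, y_true in train_loader:
            opt.zero_grad()
            loss = mse(model(x), y_true)
            loss.backward()   # back-propagation (substep 3.2)
            opt.step()        # parameter update
    return model
```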

Step 4, inputting the test sample data into the trained convolutional neural network framework Ψ' to obtain the size estimation results of the test samples.
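Step 4 is a single forward pass over the test set. A short sketch, assuming the hypothetical components above:

```python
import torch

@torch.no_grad()
def evaluate(model, test_loader):
    # Forward pass only: collect size estimates for every test sample.
    model.eval()
    preds, truths = [], []
    for x, y_true in test_loader:
        preds.append(model(x))
        truths.append(y_true)
    return torch.cat(preds), torch.cat(truths)
```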

The effects of the present invention can be further illustrated by the following experimental data:

Experimental conditions:

1) experimental data:

The data used in the experiments are from the OpenSARShip data set compiled by Shanghai Jiao Tong University, which can be downloaded from the university's OpenSAR platform; the download website is https://OpenSAR. The experimental data come from the VV and VH polarization channels in interferometric wide (IW) swath mode, with a resolution of 20 m × 20 m and a pixel spacing of 10 m. The size truth values of the ship images used in the experiments are provided by the automatic identification system (AIS), and the data set contains five classes of ship targets: Tanker, Cargo ship, Bulk carrier, General Cargo, and Container ship.

The selected data set contains 2467 samples, i.e., 4934 target images in total; each sample contains the images of the two polarization channels VV and VH, each of size 128×128. Of these, 70% are randomly selected as training samples (1727 samples, 3454 target images) and 30% as test samples (740 samples, 1480 target images).

2) Evaluation criteria

To quantitatively evaluate the performance of the method in estimating ship length and width, the experiments use two indices, the mean absolute error (MAE) and the mean absolute percentage error (MAPE), computed as follows:

MAE = (1/n) · Σ_j |y_j − y′_j|

MAPE = (1/n) · Σ_j (|y_j − y′_j| / y_j) × 100%

In the above formulas, j indexes the samples, n is the number of test samples, y_j denotes the true size of the ship target in the jth sample, and y′_j denotes the estimated size of the ship target in the jth sample.

Experiment one: the method of the present invention is compared with the existing method on the experimental data set described above. To verify the size extraction performance of the invention, its size extraction results are compared with those of the other method; the comparison results are shown in Table 1.

TABLE 1 comparison of the dimensional estimation accuracy of the inventive method with that of the prior art

In Table 1, the existing method is the ship size estimation method based on dual-polarization fusion and nonlinear regression, which mainly comprises two stages, image preprocessing and nonlinear regression; the reproduction used in the experiment may differ in detail from the original method. As Table 1 shows, the size estimation errors of the present method are smaller than those of the existing method, so the method achieves better target size estimation performance and higher robustness.

Experiment two: the experimental data are tested with the method of the invention, and the comparison between the estimated and true target sizes is visualized; the results are shown in FIG. 4, where:

FIG. 4(a) compares the target lengths estimated by the invention with the true target lengths; FIG. 4(b) compares the estimated target widths with the true target widths. As can be seen from FIG. 4, the target sizes estimated by the invention correlate strongly with the true target sizes.

The foregoing description is only an example of the present invention and is not intended to limit the invention; it will be apparent to those skilled in the art that various changes and modifications in form and detail may be made without departing from the spirit and scope of the invention.
