Dual-polarization SAR small ship detection method based on enhanced feature pyramid

Document No. 8342 · Published 2021-09-17

1. A dual-polarization SAR small ship detection method based on an enhanced feature pyramid, characterized by comprising the following steps:

removing pure background images that contain no ship target from the dual-polarization SAR data set, and performing data enhancement and normalization on the images;

normalizing each pixel to between 0 and 1 while preserving the original proportions, as follows:

P'xy = Pxy / 255

where Pxy denotes the gray value of the pixel at coordinate (x, y) in the image;

constructing a dual-polarized channel self-adaptive fusion module;

the dual-polarized channel adaptive fusion module learns two weight parameters for the VH and VV channels during training, where λ1 is the weight learned for the VH channel, λ2 is the weight learned for the VV channel, and Iadapt is the final input, with λ1 + λ2 = 1;

Iadapt=λ1×VH+λ2×VV

in the overall structure of the dual-polarization adaptive channel fusion module, Softmax is used to normalize λ1 and λ2 to between 0 and 1 so that λ1 + λ2 = 1; Softmax maps the outputs of multiple neurons into the interval (0, 1), which can be viewed as probabilities; given an array V with Vi denoting its i-th element, the Softmax value Si of that element is computed as:

Si = exp(Vi) / Σj exp(Vj)

where j runs over all elements;

constructing an attention enhancement type low-level feature pyramid module;

the attention-enhanced feature pyramid module in the overall structure of the small ship detection network comprises a low-level feature pyramid structure, a spatial attention module and a channel attention module;

to reduce computation and enlarge the receptive field, a feature extraction network usually performs 32× downsampling, but after 32× downsampling the features of a small ship become few or even disappear; the CNN feature extraction module therefore performs only 16× downsampling to retain more features of the small ship; the feature pyramid module fuses low-level and high-level information and prediction is performed on the large-scale feature map obtained from the final fusion, so as to accurately locate the center position of the small ship;

S=Conv[Maxpool3(x),Maxpool5(x),Maxpool7(x)]

C=Sigmoid(Avepool3(x)+Maxpool3(x))×x

S is the output of spatial attention, C is the output of channel attention, and x is the input feature; Conv denotes a convolution operation; Maxpool3, Maxpool5 and Maxpool7 denote max pooling with kernels 3, 5 and 7 respectively; Avepool3 denotes average pooling with kernel 3; the Sigmoid function limits the output to between 0 and 1; [ , ] denotes feature concatenation;

the spatial attention module learns the salient features of the ship target by max pooling the feature map at the different scales 3, 5 and 7, enhancing the keypoint information of small targets on the large-scale feature map;

step (4): constructing the loss function;

let the input image be I ∈ R^(W×H×1), where W is the image width and H is the image height; the keypoint heatmap output by the network is Ŷ ∈ [0,1]^((W/R)×(H/R)×C), where R is the output stride relative to the original image and C is the number of categories; since the small ship is the only target here, C is set to 1;

the core loss formula is as follows:

Lk = −(1/N) Σxyc (1 − Ŷxyc)^α log(Ŷxyc),                if Yxyc = 1
Lk = −(1/N) Σxyc (1 − Yxyc)^β (Ŷxyc)^α log(1 − Ŷxyc),   otherwise

similar in form to Focal loss, α and β are hyper-parameters, and N denotes the number of image keypoints;

for the otherwise case, Yxyc is computed with a Gaussian kernel, as shown below:

Yxyc = exp(−((x − x̄)² + (y − ȳ)²) / (2σ²))

where x and y are the pixel coordinates, x̄ and ȳ are their means (the keypoint center), and σ² is the variance; Ŷxyc denotes the predicted value of a keypoint, N is the number of keypoints in the feature map, and the two hyper-parameters α and β balance the loss influence of hard samples: by reducing the contribution of easy samples to the loss and increasing the influence of hard samples, targets are better distinguished from the background.

2. The enhanced-feature-pyramid-based dual-polarization SAR small ship detection method of claim 1, characterized in that: in step (4), an offset value and an offset loss are introduced; let the offset value output by the backbone network be Ô; this offset value is trained with an L1 loss:

Loffset = (1/N) Σp | Ôp̃ − (p/R − p̃) |

where p denotes the target box center point, R denotes the downsampling factor 4, p̃ = ⌊p/R⌋, and (p/R − p̃) is the offset value;

assume the kth target, class CkIs represented byThen the center point coordinate position thereof isThe length and width of the target are Training the length and width is the L1Loss function:

whereinIs the result of the network output;

the overall loss function combines the above three terms with different weights:

Ldet = Lk + λsize·Lsize + λoffset·Loffset

where λsize = 0.1 and λoffset = 1.

Background

Synthetic Aperture Radar (SAR) is a high-resolution imaging radar based on active microwave sensing. Compared with passive optical remote sensing, SAR works in all weather and is unaffected by cloud cover, so it is widely applied in fields such as environmental and terrain survey, military reconnaissance, and ocean monitoring; ship target detection in SAR images is an important technical means of ocean monitoring.

Traditional ship target detection mostly distinguishes ships from the sea surface background in large-scene SAR images by thresholding or manual feature extraction. Thresholding requires setting different thresholds for the background in different scenes and thus lacks generality, while manual feature extraction is operationally complex, requires prior knowledge, and demands considerable expertise. With the continuing development of computer vision, deep learning has been applied to target detection: a deep-learning-based detector autonomously learns the main features of ship targets from manually labeled SAR ship data and updates the model parameters for detection. Owing to its simple operation and strong model robustness, deep learning is widely used for SAR ship target detection.

The scattering intensity of small ships in SAR images is usually weak and they occupy few pixels. Since deep networks contain downsampling modules, the small ship features extracted by deep-learning-based methods are limited, and for some ships the occupied pixels are so few that the features vanish during downsampling. With insufficient features, ship targets are easily disturbed by background clutter and strong near-shore targets during detection, causing many missed detections. The proposed method adaptively fuses the two channels of dual-polarization SAR data to enrich small ship features, and introduces spatial and channel attention mechanisms to build an attention-enhanced low-level feature pyramid for feature enhancement and feature screening of small ships, thereby alleviating missed detections of small ship targets and improving detection performance.

Disclosure of Invention

The main aim of the invention is to solve the missed detection of small ship targets in SAR images caused by their small scale and weak scattering intensity, and to provide a SAR small ship target detection method based on dual-polarization adaptive channel fusion and an attention-enhanced low-level feature pyramid. The invention is mainly applied to images collected by Sentinel-1, with small ship target detection as the main task.

The technical scheme of the invention specifically comprises the following contents:

1. Dual-polarization adaptive channel fusion: a polarized SAR image contains two polarization channels, VH and VV. Feeding both channels into the feature extraction network simultaneously does not achieve the best result and introduces feature redundancy, while manually setting fusion parameters requires strong prior knowledge. Dual-polarization adaptive channel fusion automatically learns the fusion coefficients of VH and VV from the feedback of the detection loss, and the fused channel fed to the feature extraction network is obtained by coefficient weighting.

2. Low-level feature pyramid: multi-scale feature maps can be divided by depth into low-level and high-level maps; low-level maps usually carry localization information, high-level maps carry semantic information, and a feature pyramid fuses the two effectively. Since small ship targets are small and their features become few or even disappear during deep feature extraction, a low-level feature pyramid obtained by reducing the number of downsampling steps in the deep network, with prediction on a large-scale feature map, alleviates this problem.

3. Spatial and channel attention mechanisms: attention is a resource allocation mechanism that assigns resources according to the importance attached to the target of interest; in a deep neural network the allocated resource is weight, and increasing the weights of the target of interest improves detection. Spatial attention extracts the salient features of small ship targets over the spatial extent, while channel attention screens out the feature channels useful for small ship detection through channel dimension expansion and reduction, cutting feature redundancy.

The SAR small ship detection method based on the dual-polarization adaptive channel fusion and the attention enhancement type low-level feature pyramid comprises the following steps:

and (1) removing a pure background image which does not contain a ship target in the dual-polarization SAR data set, and performing data enhancement and normalization on the image.

To balance positive and negative samples and reduce training time, pure background slices are removed from the labeled data: the original data comprise 9000 slices in total, and 1859 slices remain after background removal, each 800 × 800 pixels. To make the trained deep model robust, data enhancement is applied: random cropping, rotation and flipping improve the rotation invariance of the model, and added salt-and-pepper noise improves its anti-interference capability.

Data normalization benefits deep-network learning. Each pixel of the VH and VV single-channel input images takes values from 0 to 255; to reduce the learning burden of the deep network, each pixel is normalized to between 0 and 1 while preserving the original proportions:

P'xy = Pxy / 255

where Pxy denotes the gray value of the pixel at coordinate (x, y) in the image.
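As a minimal sketch (assuming the slices are loaded as 8-bit grayscale NumPy arrays; the loader itself is not specified by the text), the normalization is a single division:

```python
import numpy as np

def normalize_slice(img: np.ndarray) -> np.ndarray:
    """Scale 0-255 gray values to [0, 1], preserving the original proportions."""
    return img.astype(np.float32) / 255.0

vh = np.array([[0, 128], [255, 64]], dtype=np.uint8)
norm = normalize_slice(vh)
```

The same transform is applied independently to the VH and VV slices before fusion.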

And (2) constructing a dual-polarized channel self-adaptive fusion module.

Dual-polarized SAR data has spatial resolution similar to single-polarization data but adds the information of one polarization dimension. In contrast to traditional manual setting of dual-polarization channel fusion parameters, the dual-polarization channel adaptive fusion module learns two weight parameters for the VH and VV channels during training, as shown in formula (1), where λ1 is the weight learned for the VH channel, λ2 is the weight for the VV channel, and Iadapt is the final input, with λ1 + λ2 = 1.

Iadapt=λ1×VH+λ2×VV    (1)

The overall structure of the dual-polarization adaptive channel fusion module is shown in fig. 1, in which Softmax normalizes λ1 and λ2 to between 0 and 1 so that λ1 + λ2 = 1. Softmax maps the outputs of multiple neurons into the interval (0, 1), which can be viewed as probabilities; given an array V with Vi denoting its i-th element, the Softmax value Si of that element is computed as:

Si = exp(Vi) / Σj exp(Vj)

where j runs over all elements.
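A minimal NumPy sketch of formula (1) with Softmax-normalized weights follows; in the actual network the two raw weights would be learnable parameters updated by the detection loss, which is outside the scope of this illustration:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - np.max(v))  # subtract the max for numerical stability
    return e / e.sum()

def fuse(vh, vv, raw_weights):
    """I_adapt = λ1*VH + λ2*VV, with (λ1, λ2) = softmax(raw_weights)."""
    lam = softmax(np.asarray(raw_weights, dtype=np.float64))
    return lam[0] * vh + lam[1] * vv, lam

vh = np.ones((4, 4)) * 0.2
vv = np.ones((4, 4)) * 0.8
fused, lam = fuse(vh, vv, [0.0, 0.0])  # equal raw weights give λ1 = λ2 = 0.5
```

By construction the Softmax guarantees λ1 + λ2 = 1 regardless of the raw weight values.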

And (3) constructing an attention enhancement type low-level feature pyramid module.

The overall structure of the small ship detection network is shown in fig. 2; its lower-right part is the attention-enhanced feature pyramid module, which comprises a low-level feature pyramid structure, a spatial attention module and a channel attention module.

To reduce computation and enlarge the receptive field, a feature extraction network usually performs 32× downsampling; however, after 32× downsampling the corresponding features of a small ship in the feature map are few or even disappear. Considering this, the CNN feature extraction module performs only 16× downsampling to retain more small ship features, and to locate the center of the small ship accurately, the feature pyramid module fuses low-level and high-level information and prediction is made on the large-scale feature map obtained from the final fusion.

The attention mechanism is a resource allocation mechanism, and can allocate resources according to different importance levels set for an interested target, in a deep neural network, the resources to be allocated are weights, and the target detection effect can be improved by increasing the weights of the interested target.

S=Conv[Maxpool3(x),Maxpool5(x),Maxpool7(x)]

C=Sigmoid(Avepool3(x)+Maxpool3(x))×x

Combining fig. 2 and the above formula, S is the output of spatial attention, C is the output of channel attention, x is the input feature, Conv represents the convolution operation, Maxpool3, Maxpool5, Maxpool7 represent the maximum pooling with kernels 3, 5, 7, respectively, Avepool3 represents the average pooling with kernel 3, the sigmoid function is used to limit the output between 0 and 1, [, ] represents the feature concatenation.

The spatial attention module learns the salient features of the ship target by max pooling the feature map at the different scales 3, 5 and 7, enhancing the keypoint information of small targets on the large-scale feature map. The channel attention module extracts salient information about small ship targets on different channels through max pooling and average pooling, and screens from the many channels of the concatenated feature map the features useful for detecting the corresponding targets; while reducing computation, it better distinguishes small ship targets from background clutter and other strongly scattering non-ship targets.
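The two attention formulas above can be sketched per 2-D channel in NumPy; this is a simplified illustration, not the network's implementation: the 1×1 convolution applied after the spatial concatenation is omitted, and border windows of the average pool include the zero padding:

```python
import numpy as np

def pool2d(x, k, mode="max"):
    """k x k pooling with stride 1 and same padding on a 2-D map."""
    p = k // 2
    pad_val = -np.inf if mode == "max" else 0.0
    xp = np.pad(x, p, constant_values=pad_val)
    h, w = x.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            win = xp[i:i + k, j:j + k]
            out[i, j] = win.max() if mode == "max" else win.mean()
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    """C = Sigmoid(Avepool3(x) + Maxpool3(x)) * x on one channel."""
    return sigmoid(pool2d(x, 3, "avg") + pool2d(x, 3, "max")) * x

def spatial_attention_stack(x):
    """Multi-scale max pooling (kernels 3, 5, 7); their concatenation feeds a conv."""
    return np.stack([pool2d(x, 3), pool2d(x, 5), pool2d(x, 7)])

x = np.random.default_rng(0).random((8, 8))
c = channel_attention(x)
s = spatial_attention_stack(x)
```

Since Sigmoid outputs values below 1, the channel attention acts as a soft gate that rescales (never amplifies) each non-negative input location.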

Step (4) construction of loss function

The loss function (loss function) is used to measure the degree of inconsistency between the predicted value f (x) and the true value Y of the model, and is a non-negative real value function, usually expressed by L (Y, f (x)), and the smaller the loss function is, the better the robustness of the model is. The loss function is a core part of the empirical risk function and is also an important component of the structural risk function. The structural risk function of the model includes an empirical risk term and a regularization term.

Let the input image be I ∈ R^(W×H×1), where W is the image width and H is the image height. The keypoint heatmap output by the network is Ŷ ∈ [0,1]^((W/R)×(H/R)×C), where R is the output stride relative to the original image and C is the number of classes; since the small ship is the only target here, C is set to 1.

The core loss formula is as follows:

Lk = −(1/N) Σxyc (1 − Ŷxyc)^α log(Ŷxyc),                if Yxyc = 1
Lk = −(1/N) Σxyc (1 − Yxyc)^β (Ŷxyc)^α log(1 − Ŷxyc),   otherwise

Similar in form to Focal loss, α and β are hyper-parameters, and N represents the number of image keypoints.

When Yxyc = 1, for an easy sample the predicted value Ŷxyc is close to 1, so (1 − Ŷxyc)^α is small and the loss is small, which acts as a correction.

For a hard sample, the predicted value Ŷxyc is close to 0, so (1 − Ŷxyc)^α is larger, which effectively increases its training weight.

Otherwise, Yxyc is computed with a Gaussian kernel, as shown in the following equation:

Yxyc = exp(−((x − x̄)² + (y − ȳ)²) / (2σ²))

where x and y are the pixel coordinates, x̄ and ȳ are their means (the keypoint center), and σ² is the variance. Ŷxyc denotes the predicted value of a keypoint, N is the number of keypoints in the feature map, and the two hyper-parameters α and β balance the loss influence of hard samples: reducing the contribution of easy samples to the loss while increasing the influence of hard samples distinguishes targets from the background better.

FIG. 3 is a simple Gaussian kernel diagram with Yxyc on the ordinate, divided into region A (closer to the center point, with values between 0 and 1) and region B (far from the center point, with values close to 0). In region A, near the Gaussian kernel center, Yxyc decays slowly from 1 toward 0. In region B, Ŷxyc should be 0; if the predicted value is relatively large, say close to 1, then (Ŷxyc)^α increases the weight and hence the penalty. If the predicted value is close to 0, (Ŷxyc)^α is small, so its loss proportion is reduced. As for (1 − Yxyc)^β, its value is larger in region B, which attenuates the loss proportion of the other negative samples around the center point.
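The Gaussian ground truth and the penalty-reduced focal loss described above can be sketched as follows; this is an illustrative NumPy version (CenterNet-style, with the object-size-adaptive σ replaced by a fixed value), not the patent's implementation:

```python
import numpy as np

def gaussian_heatmap(h, w, cx, cy, sigma):
    """Y_xy = exp(-((x - cx)^2 + (y - cy)^2) / (2 sigma^2)), peak 1 at the center."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def keypoint_focal_loss(pred, gt, alpha=2, beta=4, eps=1e-6):
    """Penalty-reduced focal loss over one heatmap (alpha=2, beta=4 as in the text)."""
    pred = np.clip(pred, eps, 1 - eps)
    pos = gt == 1
    n = max(pos.sum(), 1)  # N = number of keypoints
    pos_loss = ((1 - pred[pos]) ** alpha * np.log(pred[pos])).sum()
    neg_loss = ((1 - gt[~pos]) ** beta * pred[~pos] ** alpha
                * np.log(1 - pred[~pos])).sum()
    return -(pos_loss + neg_loss) / n

gt = gaussian_heatmap(16, 16, 8, 8, sigma=2.0)
good = keypoint_focal_loss(gt, gt)      # near-perfect prediction
bad = keypoint_focal_loss(1 - gt, gt)   # inverted prediction
```

A near-perfect prediction yields a much smaller loss than an inverted one, matching the region A/B discussion above.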

The spatial resolution of the feature map extracted by the attention-enhanced low-level feature pyramid becomes one quarter of the original input image. Each pixel on the output feature map corresponds to a 4 × 4 region of the original image, which introduces a large error, so an offset value and an offset loss are introduced. Let the offset value output by the backbone network be Ô; this offset value is trained with an L1 loss:

Loffset = (1/N) Σp | Ôp̃ − (p/R − p̃) |

where p represents the target box center point, R represents the downsampling factor 4, p̃ = ⌊p/R⌋, and (p/R − p̃) is the offset value.

Assume the k-th target of class ck is represented by the box (x1(k), y1(k), x2(k), y2(k)); its center point is pk = ((x1(k)+x2(k))/2, (y1(k)+y2(k))/2) and the target size is sk = (x2(k) − x1(k), y2(k) − y1(k)). The size is trained with an L1 loss:

Lsize = (1/N) Σk | Ŝpk − sk |

where Ŝpk is the network output.

The overall loss function combines the above three terms with different weights:

Ldet = Lk + λsize·Lsize + λoffset·Loffset

where λsize = 0.1 and λoffset = 1.
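A small numeric sketch of the weighted combination (with placeholder predictions and ground truths; the keypoint loss value is assumed given) makes the weighting concrete:

```python
import numpy as np

def l1_mean(pred, target):
    return np.abs(pred - target).mean()

def total_loss(lk, pred_size, gt_size, pred_off, gt_off,
               lambda_size=0.1, lambda_offset=1.0):
    """L_det = L_k + λ_size * L_size + λ_offset * L_offset."""
    l_size = l1_mean(pred_size, gt_size)
    l_off = l1_mean(pred_off, gt_off)
    return lk + lambda_size * l_size + lambda_offset * l_off

# One hypothetical target: size off by 2 px in each dimension, offset off by 0.05.
ld = total_loss(0.5,
                np.array([[10.0, 6.0]]), np.array([[12.0, 8.0]]),
                np.array([[0.3, 0.2]]), np.array([[0.25, 0.25]]))
```

With λsize = 0.1 the size error contributes 0.2 and the offset error 0.05, so the size term is deliberately down-weighted relative to the keypoint and offset terms.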

Compared with the prior art, dual-polarization adaptive channel fusion makes full use of both channels of the dual-polarization SAR image: different weights obtained by parameter learning fuse the information in the two channels useful for small ship detection, reducing the missed detection rate by 13.08% relative to inputting the VH channel alone. Reducing the number of downsampling steps to build a low-level feature pyramid yields higher-resolution features of small ship targets; introducing spatial attention extracts their salient features over the spatial extent, and introducing channel attention reduces feature redundancy and screens out the feature channels useful for small ship detection. The low-level feature pyramid, the spatial attention mechanism and the channel attention mechanism together form the attention-enhanced low-level feature pyramid module.

Drawings

Fig. 1 is a structural diagram of a dual-polarization adaptive channel fusion module.

Fig. 2 is an overall configuration diagram of the target detection of the small ship.

FIG. 3 is a schematic representation of a Gaussian kernel distribution.

FIG. 4 is a flow chart of network model parameter training.

FIG. 5 is a comparison of partial detection results of different algorithms.

Detailed Description

The following describes the implementation process and experimental results of the present invention with reference to the accompanying drawings.

The sample data used in the implementation is the LS-SSDD-v1.0 data set, obtained by labeling 15 large-scene Sentinel-1 images released by the University of Electronic Science and Technology of China in 2020. It provides the two polarization modes VV and VH; each large image is 24000 × 16000 pixels at 5 × 20 m resolution, and 2358 ship targets in total were labeled with Labelimg software using AIS data and Google Earth as references. Each target box marked in the data set covers fewer than 2342 pixels, so all of them are small targets in large-scene images. Sequential cropping with an 800 × 800 sliding window yields 9000 ship slices, including a large number of pure background slices without ship targets. The data set is characterized by large scenes, small targets and rich backgrounds, and this SAR data set contains only small ship targets.

The specific implementation steps are as follows:

step 1, pretreatment of the Sentinel-1 data.

To balance positive and negative samples and reduce training time, pure background slices are removed: background removal on the first 6000 images of the original data slices yields 1123 slices as the training set, and on the last 3000 images yields 736 slices as the test set, giving 1859 background-removed slices in total. The pixel value of every pixel in the data set images is then divided by 255 to normalize the images to between 0 and 1.

And 2, setting specific network parameters.

1) Feature extraction network structure parameters

The feature extraction module is modeled on Resnet-18 with some detail adjustments to suit the data. For a grayscale image with input dimension 800 × 800 × 1, 64 convolution filters of size 7 × 7 are first used to increase the number of channels and perform filtering. Zero padding around the feature map before the convolution keeps the output feature map size unchanged while increasing the number of channels.

The middle part of the network is formed by stacking 3 residual convolution modules with similar structures, each module comprises 2 residual structures, and the output dimension after passing through each module is shown in table 1.

Table 1 setting of feature extraction network parameters

2) Characteristic pyramid module structure parameter

The feature pyramid module comprises feature fusion and an attention mechanism. The feature fusion module concatenates two groups of feature maps of the same spatial size along the channel dimension; after concatenation the spatial size is unchanged and the channel dimension is the sum of the two groups' original channel dimensions. Fusing a high-level feature map with a low-level one requires upsampling the high-level map by a factor of 2, leaving its channel dimension unchanged.

In the attention mechanism, spatial attention includes 3 Maxpooling modules with pooling scales set to 3, 5 and 7. Channel attention includes one Maxpooling module and one Avepooling module, both with scale set to 3.

3) Parameter setting of loss function and optimizer

The optimizer is stochastic gradient descent (SGD) with learning rate lr = 0.001, momentum 0.9, and weight decay 5e-4. During training, the learning rate is halved after every 100 fully trained epochs, and α and β in the classification loss are set to 2 and 4 respectively, following the parameter settings of Focal loss.

4) Other parameters

The number of training epochs is set to 250 and each picture is 800 × 800; to prevent excessive video memory usage, the batch size is set to 4.

Step 3, training the network model

After the data and the network structure are prepared in the preceding steps, training of the network model can begin. Data are fed to the network in batches, and the samples within a single batch are computed and propagated through the network in parallel. Training on one batch constitutes one iteration, and one epoch is completed when all training data have passed through the network once. Before training, the maximum number of epochs is set; after each epoch the current model parameters are tested once on the validation set, the validation accuracy is recorded, and the current network model is saved whenever a better validation result appears.

The training process of the network model parameters is shown in the flow of fig. 4, and the specific steps are as follows:

(1) parameters of the network are initialized.

(2) A round of iteration begins.

(3) Training data is shuffled and divided into N batches of size M (the data amount does not exceed M × N).

(4) Inputting a batch of data into the network, obtaining an output result through forward calculation of the network, and obtaining the loss of the iteration through a total loss function.

(5) Back-propagating the loss through each layer of the network yields the gradients of that layer's weights W and biases b via the chain rule.

(6) And finally, updating the network parameters through the SGD optimization function.

(7) And (5) returning to the step (4) to perform iteration of the next batch until all batches are completely calculated, namely completing one round of iteration.

(8) The current model is tested on the validation set and the result is recorded.

(9) And (5) returning to the step (2) until the set maximum number of training rounds is reached.
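The nine steps above can be sketched as a minimal training loop; this toy NumPy example fits a single weight to y = 2x by SGD purely to illustrate the control flow (the real network, loss and optimizer state are of course far larger):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for (image, label) pairs.
xs = rng.random(64)
ys = 2.0 * xs

w, lr, batch_size, epochs = 0.0, 0.5, 8, 20      # (1) initialize parameters
for epoch in range(epochs):                      # (2)/(9) loop over epochs
    order = rng.permutation(len(xs))             # (3) shuffle, then split into batches
    for start in range(0, len(xs), batch_size):
        idx = order[start:start + batch_size]
        xb, yb = xs[idx], ys[idx]
        pred = w * xb                            # (4) forward pass and loss
        loss = ((pred - yb) ** 2).mean()
        grad = (2 * (pred - yb) * xb).mean()     # (5) backward pass (chain rule)
        w -= lr * grad                           # (6) SGD parameter update
    # (7) is the inner loop itself; (8) validation and checkpointing would go here

final_loss = ((w * xs - ys) ** 2).mean()
```

After 20 epochs the weight converges to 2 and the training loss is effectively zero.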

Step 4, result prediction and verification

With reference to the overall structure of the small ship detection network in fig. 2, the result prediction process is as follows:
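The original pseudo code is not reproduced in this text. As a hedged sketch of the keypoint-style decoding that the network structure implies (peak extraction on the heatmap via 3 × 3 local-maximum suppression, then offset and size recovery at stride 4), one plausible NumPy implementation is:

```python
import numpy as np

def decode_detections(heatmap, offsets, sizes, stride=4, thresh=0.3):
    """Keep heatmap peaks (3x3 local maxima above thresh) and rebuild boxes."""
    h, w = heatmap.shape
    padded = np.pad(heatmap, 1, constant_values=-np.inf)
    boxes = []
    for y in range(h):
        for x in range(w):
            score = heatmap[y, x]
            win = padded[y:y + 3, x:x + 3]
            if score >= thresh and score == win.max():  # 3x3 max-pool NMS
                dx, dy = offsets[:, y, x]               # sub-pixel center offset
                bw, bh = sizes[:, y, x]                 # predicted width/height
                cx, cy = (x + dx) * stride, (y + dy) * stride
                boxes.append((cx - bw / 2, cy - bh / 2,
                              cx + bw / 2, cy + bh / 2, score))
    return boxes

# One synthetic peak at heatmap cell (y=3, x=5) with offset 0.5 and size 16.
hm = np.zeros((8, 8)); hm[3, 5] = 0.9
off = np.full((2, 8, 8), 0.5)
sz = np.full((2, 8, 8), 16.0)
dets = decode_detections(hm, off, sz)
```

The threshold, offset layout and box format here are assumptions for illustration, not values taken from the patent.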

To verify the performance of the method on the SAR small ship target data set, results are validated on the LS-SSDD-v1.0 data set and compared with other common algorithms; in addition, ablation experiments are performed on the dual-polarization adaptive channel fusion module and the enhanced feature pyramid module.

TABLE 2 comparison of the results of the method of the present invention with other commonly used algorithms

Method            Accuracy   Recall    [email protected]   Test set detection time
Cascade R-CNN     57.68%     77.04%    69.90%    72.63 s
YOLOv4            70.69%     70.70%    70.70%    36.44 s
CenterNet         69.26%     59.85%    59.85%    46.49 s
Proposed method   67.92%     80.40%    71.76%    31.68 s

The experimental results of the invention and several other detection methods are shown in table 2, where time is the total time to detect the 736 image slices (800 × 800 pixels each). Partial results are shown in fig. 5. The proposed method outperforms the others on the recall, AP and time indices; its accuracy is lower than YOLOv4 and CenterNet, but this is acceptable because the recall index matters more than the other indices in small ship target detection.

TABLE 3 Dual-polarization adaptive channel fusion module ablation experiment

In table 3, λ1 = 0, λ2 = 1 represents single VV channel input, and λ1 = 1, λ2 = 0 represents single VH channel input. The table shows that, compared with single-channel SAR input and manually set fusion parameters, adaptive fusion effectively improves the recall index and reduces missed detections of small ship targets.

Table 4 attention-enhanced low-level feature pyramid module ablation experiment

In table 4, compared with the ordinary feature pyramid, the low-level feature pyramid retains more small ship features and effectively improves the accuracy, recall and AP indices. Adding the attention mechanism on top of the feature pyramid improves recall and AP but slightly lowers accuracy; combining the attention mechanism with the low-level feature pyramid effectively improves accuracy, recall and AP together, reducing both missed detections and false alarms and further demonstrating the effectiveness of the method.
