Joint modulation type identification and parameter estimation method for cognitive radar signals

Document No. 6590 | Published: 2021-09-17

1. The joint modulation type identification and parameter estimation method for the cognitive radar signal is characterized by comprising the following steps:

s1, constructing a data set for training:

s11, when the input signal used for training is a waveform signal, each waveform signal is described by a set number of defining parameters; each defining parameter takes its modulation type from a fixed-size set of modulation types, so that a waveform signal sample is described by the combination of modulation types over its defining parameters; a plurality of waveform signal samples with different modulation type combinations are obtained, and the defining parameters and modulation type combinations describing each signal are labeled, forming the training data set;

s12, when the input signal used for training is a PDW signal, each PDW signal is described by defining parameters, and each pulse in the PDW signal is described by an M-dimensional vector giving the specific values of the M defining parameters for that pulse, where M is the number of defining parameters; each defining parameter takes its modulation type from a fixed-size set of modulation types, so that a PDW signal sample is described by the combination of modulation types over its defining parameters; a plurality of PDW signal samples with different modulation type combinations are obtained, and the defining parameters and modulation type combinations describing each signal are labeled, forming the training data set;

s2, constructing a deep multitask neural network JMRPE-Net, which comprises a three-layer cascaded convolutional neural network (CNN) followed by two task-specific layers connected in parallel behind the CNN, defined as the first task-specific layer and the second task-specific layer; each task-specific layer comprises an attention mechanism layer, two cascaded bidirectional long short-term memory (bi-LSTM) network layers, and a fully connected layer, connected in series in that order, and a softmax layer follows the fully connected layer of the first task-specific layer;

inputting the data set obtained in step S11 or S12 into JMRPE-Net, wherein the softmax layer outputs a probability distribution sequence over modulation types, realizing the modulation type identification task, and the fully connected layer of the second task-specific layer outputs a defining parameter estimation sequence, completing the defining parameter identification task, thereby training the deep multitask neural network JMRPE-Net;

s3, inputting the cognitive radar pulse sequence signals to be identified, i.e., waveform signals or PDW signals, into the correspondingly trained deep multitask neural network JMRPE-Net to obtain the modulation type of each signal to be identified and the modulation parameters corresponding to that modulation type.

2. The method of joint modulation type identification and parameter estimation of cognitive radar signals of claim 1, wherein, in training the deep multitask neural network JMRPE-Net, the loss function is:

L = Σ_{i=1}^{2K} ω_i L_i

where 2K is the total number of tasks, comprising K modulation type identification tasks and K modulation parameter estimation tasks; ω_i is the weight of the i-th task, and L_i is the loss value of the i-th task.

3. The method of joint modulation type identification and parameter estimation of cognitive radar signals of claim 2, wherein, for the modulation type identification task, the loss function is:

L_c = −(1/n) Σ_{i=1}^{n} Σ_{j=1}^{m} y_{ij} log ŷ_{ij}

where n is the number of training samples, m is the number of modulation type classes, y_{ij} is the true category label, and ŷ_{ij} is the predicted category label.

4. A method for joint modulation type identification and parameter estimation of cognitive radar signals according to claim 2 or 3, wherein, for the modulation parameter estimation task, the loss function is:

L_p = (1/n) Σ_{i=1}^{n} Σ_{j=1}^{m} (p_{ij} − p̂_{ij})²

where n is the number of training samples, m is the number of modulation parameters, p_{ij} is the true parameter value, and p̂_{ij} is the predicted parameter value.

5. The method of joint modulation type identification and parameter estimation of cognitive radar signals according to claim 1, 2 or 3, wherein the shared feature extraction layer is used to extract features common to the modulation type identification task and the defining parameter identification task.

6. The joint modulation type identification and parameter estimation method for cognitive radar signals according to claim 1, 2 or 3, wherein the shared feature extraction layer comprises multiple convolution-pooling layers for extracting spatial features and locally shift-invariant features.

7. The joint modulation type recognition and parameter estimation method for cognitive radar signals according to claim 1, 2 or 3, wherein the training data set obtained in step S1 is normalized and then fed into the JMRPE-Net for training.

8. The method as claimed in claim 7, wherein the cognitive radar pulse sequence signal to be identified is subjected to the same normalization processing and then input into the network for identification.

9. A method for joint modulation type identification and parameter estimation of cognitive radar signals according to claim 1, 2 or 3, characterized in that said defined parameters comprise one or more of pulse repetition interval PRI, radio frequency carrier frequency RF, pulse width PW, intra-pulse modulation waveform MOP.

Background

Cognitive radar is a complex sensor with a variety of dynamically changing operating modes, widely used in fields such as reconnaissance and target tracking. A cognitive radar perceives its environment through prior knowledge and interactive learning, and on that basis adjusts its transmit and receive loops in real time to adapt to changes in the environment and the target, fully exploiting the radar's performance potential to meet preset performance targets. The cognitive radar's perception-action cycle (PAC) gives it remarkable flexibility in optimizing inter-pulse and intra-pulse modulation types and modulation parameters. Specifically, by continuously sensing the environment and optimizing a preset objective function, the cognitive radar tunes the corresponding control parameters in real time, such as pulse repetition interval (PRI), radio frequency (RF), pulse width (PW), and modulation on pulse (MOP). This poses significant challenges to conventional electronic support systems and radar warning receivers in three respects: 1) the received signal reflects a complex cognitive radar operating mode formed by jointly optimizing multiple control parameters (e.g., PRI, RF, and PW); 2) each control parameter may carry complex modulation; and 3) by optimizing specific modulation parameter values within a given parameter space, the cognitive radar can flexibly realize fine-grained operating modes that share a modulation type but differ in modulation parameters.

Early research on automatic modulation type identification addressed inter-pulse and intra-pulse modulation by identifying the modulation type of each control parameter in the pulse sequence separately, and on that basis tackled the identification of complex sequences containing multiple parameters. These methods fall into two categories: the first sets up a separate network for each control parameter to identify its modulation type; the second treats each combination of modulation types across control parameters as a distinct class and uses a multi-class classifier, handling the parameters as a whole.

In a practical system, the first category, although it can perform well on a single control-parameter identification task, struggles to mine the correlations that commonly exist among the control parameters of a cognitive radar. The second category only maintains performance for a small number of classes: as the number of control parameters and their modulation types grows, the number of combined classes grows exponentially, degrading overall model performance. Moreover, models used in traditional methods assume ideal data, whereas the pulse sequence received by a real system is typically affected by three typical non-ideal factors: parameter measurement errors, pulse loss, and spurious-pulse interference. These factors reduce the ability of traditional methods to identify multifunction radar operating modes.
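To make the three non-ideal factors concrete, the sketch below corrupts an ideal time-of-arrival (TOA) pulse train with measurement jitter, random pulse loss, and spurious pulses. This is an illustrative simulation only; the function name `corrupt_pdw` and all rates are hypothetical, not part of the patent.

```python
import numpy as np

def corrupt_pdw(toa, drop_ratio=0.1, spurious_ratio=0.1, jitter_std=0.01, seed=0):
    """Apply the three non-ideal factors to a pulse time-of-arrival sequence:
    measurement jitter, missing pulses, and spurious pulses (all hypothetical rates)."""
    rng = np.random.default_rng(seed)
    toa = np.asarray(toa, dtype=float)
    # 1) parameter measurement error: additive Gaussian jitter on each TOA
    noisy = toa + rng.normal(0.0, jitter_std, size=toa.shape)
    # 2) pulse loss: randomly drop a fraction of the pulses
    kept = noisy[rng.random(noisy.shape) >= drop_ratio]
    # 3) spurious-pulse interference: insert uniformly distributed false TOAs
    n_spur = int(round(spurious_ratio * len(toa)))
    spurious = rng.uniform(noisy.min(), noisy.max(), size=n_spur)
    return np.sort(np.concatenate([kept, spurious]))

# Example: a constant-PRI pulse train (PRI = 1.0, arbitrary units)
clean = np.arange(100) * 1.0
dirty = corrupt_pdw(clean)
```

Varying `drop_ratio` and `spurious_ratio` gives the non-ideal regimes under which a robust identifier must still work.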

As for modulation parameter estimation, estimating the modulation parameters of a cognitive radar operating mode is an important step toward revealing the radar's objective-function optimization process, yet related methods have received comparatively little research attention.

A deep learning model is a neural network with multiple nonlinear mapping layers; it can abstract the input sequence layer by layer, extract features, mine deeper latent patterns, and is robust to noise. Within deep learning, multi-task learning deep neural networks are widely used across many fields and can solve several tasks simultaneously: the shared-layer architecture greatly reduces memory footprint and computational load, and the tasks share information during training, improving network performance. Multi-task learning is therefore naturally suited to identifying and estimating the parameters of cognitive radar operating modes defined by multiple parameters.

Disclosure of Invention

The invention provides a joint modulation type identification and parameter estimation method for cognitive radar signals. For operating modes defined by combinations of modulation types over different modulation parameters, the method can, in parallel, identify the modulation types and estimate the modulation parameters of received cognitive radar pulse signals carrying multiple control parameters and rich modulation types.

The joint modulation type identification and parameter estimation method for the cognitive radar signal comprises the following steps:

s1, constructing a data set for training:

s11, when the input signal used for training is a waveform signal, each waveform signal is described by a set number of defining parameters; each defining parameter takes its modulation type from a fixed-size set of modulation types, so that a waveform signal sample is described by the combination of modulation types over its defining parameters; a plurality of waveform signal samples with different modulation type combinations are obtained, and the defining parameters and modulation type combinations describing each signal are labeled, forming the training data set;

s12, when the input signal used for training is a PDW signal, each PDW signal is described by defining parameters, and each pulse in the PDW signal is described by an M-dimensional vector giving the specific values of the M defining parameters for that pulse, where M is the number of defining parameters; each defining parameter takes its modulation type from a fixed-size set of modulation types, so that a PDW signal sample is described by the combination of modulation types over its defining parameters; a plurality of PDW signal samples with different modulation type combinations are obtained, and the defining parameters and modulation type combinations describing each signal are labeled, forming the training data set;

s2, constructing a deep multitask neural network JMRPE-Net, which comprises a three-layer cascaded convolutional neural network (CNN) followed by two task-specific layers connected in parallel behind the CNN, defined as the first task-specific layer and the second task-specific layer; each task-specific layer comprises an attention mechanism layer, two cascaded bidirectional long short-term memory (bi-LSTM) network layers, and a fully connected layer, connected in series in that order, and a softmax layer follows the fully connected layer of the first task-specific layer;

inputting the data set obtained in step S11 or S12 into JMRPE-Net, wherein the softmax layer outputs a probability distribution sequence over modulation types, realizing the modulation type identification task, and the fully connected layer of the second task-specific layer outputs a defining parameter estimation sequence, completing the defining parameter identification task, thereby training the deep multitask neural network JMRPE-Net;

s3, inputting the cognitive radar pulse sequence signals to be identified, i.e., waveform signals or PDW signals, into the correspondingly trained deep multitask neural network JMRPE-Net to obtain the modulation type of each signal to be identified and the modulation parameters corresponding to that modulation type.

Preferably, in training the deep multitask neural network JMRPE-Net, the loss function is:

L = Σ_{i=1}^{2K} ω_i L_i

where 2K is the total number of tasks, comprising K modulation type identification tasks and K modulation parameter estimation tasks; ω_i is the weight of the i-th task, and L_i is the loss value of the i-th task.

Preferably, for the modulation type identification task, the loss function is:

L_c = −(1/n) Σ_{i=1}^{n} Σ_{j=1}^{m} y_{ij} log ŷ_{ij}

where n is the number of training samples, m is the number of modulation type classes, y_{ij} is the true category label, and ŷ_{ij} is the predicted category label.

Preferably, for the modulation parameter estimation task, the loss function is:

L_p = (1/n) Σ_{i=1}^{n} Σ_{j=1}^{m} (p_{ij} − p̂_{ij})²

where n is the number of training samples, m is the number of modulation parameters, p_{ij} is the true parameter value, and p̂_{ij} is the predicted parameter value.

Preferably, the shared feature extraction layer is used to extract features common to the modulation type identification task and the defining parameter identification task.

Preferably, the shared feature extraction layer comprises multiple convolution-pooling layers for extracting spatial features and locally shift-invariant features.

Preferably, after the training data set is obtained in step S1, the training data set is normalized and then sent to the JMRPE-Net for training.

Preferably, the cognitive radar pulse sequence signal to be identified is subjected to the normalization processing and then is input into the network for identification.

Preferably, the defining parameters include one or more of a pulse repetition interval PRI, a radio frequency carrier frequency RF, a pulse width PW, and an intra-pulse modulation waveform MOP.

The invention has the beneficial effects that:

the invention provides a joint modulation type identification and parameter estimation method for cognitive radar signals which, for operating modes defined by combinations of modulation types over different control parameters, can in parallel identify the modulation types and estimate the modulation parameters of received cognitive radar pulse signals carrying multiple control parameters and rich modulation types; specifically:

(1) the method combines a convolutional neural network with a recurrent neural network and exploits the automatic feature learning and representation capability of deep networks; it can effectively extract inter-pulse and intra-pulse spatial and temporal features, and can complete automatic modulation type identification and modulation parameter estimation even under severe non-ideal conditions;

(2) the method can learn end-to-end from the waveform signal: on one hand, it avoids losing important information carried by the raw IF waveform during PDW parameter measurement; on the other hand, it helps discover the temporal relationships between consecutive pulses, extract inter-pulse and intra-pulse modulation parameters, and further mine correlations among different control parameters;

(3) the proposed JMRPE-Net is not limited to IF-form input; it can perform the same modulation type identification and modulation parameter estimation tasks on input signals in PDW form;

(4) the joint modulation type identification and modulation parameter estimation method for cognitive radar signal sequences can provide technical support for subsequent analysis and reasoning about cognitive radar operating modes.

Drawings

Fig. 1 is an example diagram of an intermediate frequency sampled signal in a simulated cognitive radar operating mode according to the present invention.

Fig. 2 is a network framework diagram of joint modulation type identification and parameter estimation constructed by the present invention.

Fig. 3 is a network hierarchy diagram of joint modulation type identification and parameter estimation constructed by the present invention.

Fig. 4 is a diagram of an example of joint modulation type identification and parameter estimation network testing.

Fig. 5 shows example modulation types and modulation parameters of the typical cognitive radar mode defining parameters.

Fig. 6 is an exemplary diagram of an MOP modulation method in different scenarios.

Detailed Description

The invention provides a method for joint modulation type identification and modulation parameter estimation for cognitive radar signal sequences.

A joint modulation type identification and parameter estimation method for cognitive radar signals comprises the following steps:

s1, constructing a data set for training:

S11, if the input signal is a waveform signal, it is described as follows:

The input pulse signal sample is a waveform signal at a specific signal-to-noise ratio, described by M-dimensional defining parameters such as PRI, RF, PW, and MOP (for simplicity, these four defining parameters are used as the running example below). Each defining parameter takes its modulation type from a fixed-size set of modulation types, so an input signal sample is described by the combination of modulation types over its defining parameters. Each input sample is a pulse sequence; see fig. 1.

S12, if the input signal is a PDW signal, it is described as follows:

The input pulse signal sample is a PDW sequence, described by M-dimensional defining parameters such as PRI, RF, PW, and MOP (again, these four are used as the example below). In the PDW sequence, each pulse is described by an M-dimensional vector giving the specific values of the M defining parameters for that pulse; a PDW sequence containing L pulses is thus an M × L matrix. As before, each defining parameter takes its modulation type from a fixed-size set of modulation types, so a PDW sample is described by the combination of modulation types over its defining parameters. Multiple PDW samples with different modulation type combinations are obtained and labeled with the defining parameters and modulation type combinations that describe them, forming the training data set.

S2, constructing and training a joint modulation type identification and modulation parameter estimation deep multitask neural network (JMRPE-Net). JMRPE-Net comprises a three-layer cascaded convolutional neural network (CNN) as the shared feature extraction module, followed by two parallel task-specific layers, each consisting of an attention mechanism (Attention) layer, two cascaded bidirectional long short-term memory (bi-LSTM) layers, and a fully connected layer. Specifically:

s21, CNN feature sharing extraction layer

Denote the signal input to the CNN as x. The shared feature extraction layer comprises three convolution-pooling layers that extract spatial features and locally shift-invariant features. Each convolutional layer has 64 filters with a kernel size of 2 × 3 (if the input is a PDW sequence, the kernel size must be adjusted to the PDW dimension M) and uses a ReLU activation function. The first two convolutional layers are each followed by a 1 × 2 pooling layer that reduces the feature dimension and gives the extracted features local deformation invariance; the third convolutional layer is followed by a 2 × 2 pooling layer that further reduces the feature dimension, after which the features are flattened into a row vector. The output of x after the CNN is denoted A.
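As a sanity check on the layer dimensions above, the small helper below traces the feature-map shape through the three conv-pool stages. It assumes the 2 × 3 convolutions use 'same' padding (the text does not state the padding), so only the 1 × 2, 1 × 2, and 2 × 2 pooling layers shrink the map; the channel count is 64 throughout.

```python
def conv_pool_shapes(h, w):
    """Trace the feature-map (height, width) through the three conv-pool
    stages described above, assuming 'same'-padded 2x3 convolutions so
    that only the pooling layers change the spatial size."""
    shapes = []
    for ph, pw in [(1, 2), (1, 2), (2, 2)]:
        h, w = h // ph, w // pw   # non-overlapping pooling
        shapes.append((h, w))
    return shapes

# e.g. a 2 x 1024 real/imaginary waveform sample (cf. P_i in R^{2 x n})
shapes = conv_pool_shapes(2, 1024)
```

Under this assumption a 2 × 1024 input shrinks to a 1 × 128 map, which is then flattened into the row vector A.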

S22 Attention layer of task specific layer

A is then input into each task-specific layer, first through an attention mechanism layer implemented as a fully connected layer with a nonlinear sigmoid excitation function. Let L denote the temporal dimension of the feature and a_t the feature at time t. The attention weight at time t is:

α_t = sigmoid(f(a_t))

where:

f(a) = aᵀ W_attention

and W_attention is a trainable parameter matrix.

The attention feature vector is then computed as:

Ã_t = α_t · a_t, t = 1, …, L
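A minimal NumPy sketch of this attention layer, assuming a scalar score per time step (so W_attention is taken as a vector here, an assumption) and the sigmoid excitation described above:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention(A, W):
    """Attention layer as sketched above: score f(a_t) = a_t^T W_attention,
    weight alpha_t = sigmoid(f(a_t)), attended feature alpha_t * a_t.
    W is a (D,) vector here so each time step gets one scalar weight."""
    alpha = sigmoid(A @ W)          # (L,) weights, one per time step
    return alpha[:, None] * A, alpha

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 4))         # L = 8 time steps, D = 4 features
A_att, alpha = attention(A, rng.normal(size=4))
```

The attended sequence A_att keeps the time dimension, which is what the downstream bi-LSTM layers consume.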

s23, the first bi-LSTM layer of the task specific layer

The attention feature vector Ã is then fed into the two bi-LSTM layers. For the first bi-LSTM layer, the forward hidden layer H_f and the backward hidden layer H_b are computed as:

h_t^f = LSTM(Ã_t, h_{t−1}^f), t = 1, …, L
h_t^b = LSTM(Ã_t, h_{t+1}^b), t = L, …, 1

where L is the length of the attention feature vector extracted by the attention layer and Ã_t is the feature at time t;

s24, the second bi-LSTM layer of the task specific layer

From the hidden layer H = [H_f, H_b] of the first bi-LSTM layer, the forward hidden layer H̄_f and backward hidden layer H̄_b of the current bi-LSTM layer are computed. The t-th element of the forward hidden layer H̄_f is:

h̄_t^f = LSTM(H_t, h̄_{t−1}^f)

and the t-th element of the backward hidden layer H̄_b is:

h̄_t^b = LSTM(H_t, h̄_{t+1}^b)

s25 full connection layer of task specific layer

From the vector H̄ = [H̄_f, H̄_b] produced by the second bi-LSTM layer, the fully connected layer computes the output vector O, where o_t is the output at time t. For the modulation type identification task, the softmax layer converts O into a probability distribution sequence over modulation types Ŷ = (ŷ_1, …, ŷ_M), where ŷ_m is the output class probability for the m-th modulation type. For the modulation parameter estimation task, the fully connected layer outputs the modulation parameter estimation sequence P̂ = (p̂_1, …, p̂_N), where p̂_n is the n-th parameter to be estimated.

S3, the complex radar waveform signals to be detected are arranged into the data-set sample format and input into the JMRPE-Net for joint modulation type identification and modulation parameter estimation, yielding the modulation type identification results and the modulation parameter estimates.

Example:

s1, firstly, generating a sequence sample data set for model training by using the recorded data or the simulation data:

s11, according to corresponding domain expert knowledge, data sets D1 are generated by cleaning, extraction, or simulation at 8 different signal-to-noise ratios. The data set contains 8 subsets corresponding to the SNRs [−10, −6, −2, 0, 2, 6, 10, 50] dB, where the 50 dB subset is regarded as the noise-free case. Each training sample in a data subset corresponds to one modulation type combination; the sample is an IF-band signal, with inter-pulse modulation defined by the three mode defining parameters PRI, RF, and PW, and intra-pulse waveform modulation defined by the MOP parameter. For example, P_i = (p_1, p_2, …, p_n) ∈ R^{2×n} is the i-th sample in data set D1, where n is the sequence length and each sample point contains a real and an imaginary component;

and S12, according to corresponding domain expert knowledge, data sets D2 are generated by simulation for 4 different MOP scenarios. D2 consists of 4 data subsets, defined as 4 MOP scenarios, with the number of MOP modulation types increasing from scenario 1 to scenario 4: scenario 1 contains only one MOP modulation type, linear frequency modulation (LFM); scenario 2 adds Costas frequency coding; scenario 3 adds Frank polyphase code modulation; and scenario 4 adds nonlinear frequency modulation (NLFM). As the number of modulation type combinations grows, so does the complexity of the task. This data set can be obtained by filtering a data subset of D1 at a specific signal-to-noise ratio according to the MOP modulation type;

s2, performing fixed normalization on the pulse sequence data set generated in S1:

s′(t) = s(t) / max_t |s(t)|

where s(t) is the original signal and s′(t) is the normalized signal;
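A minimal sketch of this normalization step, assuming it divides by the peak absolute amplitude (the exact formula of the "fixed normalization" is not spelled out in the text, so this is one plausible variant):

```python
import numpy as np

def normalize(s):
    """Assumed fixed normalization: scale the signal by its peak absolute
    amplitude so that the normalized signal s'(t) lies in [-1, 1]."""
    s = np.asarray(s, dtype=float)
    peak = np.max(np.abs(s))
    return s / peak if peak > 0 else s

x = np.array([0.0, 2.0, -4.0, 1.0])
x_norm = normalize(x)
```

Applying one fixed scaling to the whole sequence (rather than per-pulse scaling) preserves relative amplitude information across pulses.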

and S3, constructing and training the joint modulation type identification and modulation parameter estimation deep multitask neural network JMRPE-Net. JMRPE-Net comprises a three-layer cascaded CNN as the shared feature extraction module, followed by two parallel task-specific layers, each containing an Attention layer, two cascaded bi-LSTM layers, and a fully connected layer. The CNN shared feature extraction layer extracts spatial features and locally shift-invariant features, the Attention layer applies task-specific attention weights to the extracted shared features, and the bi-LSTM layers extract temporal features.

S31, the convolution output of the h-th CNN layer is computed as:

a^h = ReLU(a^{h−1} * k^h + b^h)

the output of the (h+1)-th, pooling, layer as:

a^{h+1} = λ^{h+1} · down_d(a^h)

and the output of the (h+2)-th convolutional layer in the same way as the h-th; in matrix form the convolution is written with the kernel rotated by 180°.

where k denotes the convolution kernel, k^rot denotes k rotated by 180°, d denotes the pooling step, and λ^{h+1} denotes the weight of each feature element. The output after the CNN is denoted A.

S32, A is then input into each task-specific layer, first through the attention mechanism layer, implemented as a fully connected layer with a nonlinear sigmoid excitation function. With L the temporal dimension of the feature and a_t the feature at time t, the attention weight at time t is:

α_t = sigmoid(f(a_t))

where

f(a) = aᵀ W_attention

and W_attention is a trainable parameter matrix.

The attention feature vector is then computed as:

Ã_t = α_t · a_t, t = 1, …, L

s33, the attention feature vector Ã is then fed into the two bi-LSTM layers. The forward and backward hidden layers H_f and H_b of the first bi-LSTM layer are computed as:

h_t^f = LSTM(Ã_t, h_{t−1}^f), h_t^b = LSTM(Ã_t, h_{t+1}^b)

where L is the length of the feature sequence output by the preceding attention layer and Ã_t is the feature at time t;

the LSTM of the above formula represents the LSTM function, which is represented by the following formulas

f_t = σ(W_f p_t + R_f h_{t−1} + b_f)

i_t = σ(W_i p_t + R_i h_{t−1} + b_i)

a_t = tanh(W_a p_t + R_a h_{t−1} + b_a)

o_t = σ(W_o p_t + R_o h_{t−1} + b_o)

c_t = c_{t−1} × f_t + a_t × i_t

h_t = tanh(c_t) × o_t

where f_t, i_t, and o_t are the forget, input, and output gates, a_t is the candidate input, W_f, W_i, W_a, W_o are the corresponding input weight matrices, R_f, R_i, R_a, R_o are the corresponding recurrent weights, b_f, b_i, b_a, b_o are the corresponding bias terms, × denotes element-wise multiplication, and σ and tanh are the sigmoid and hyperbolic tangent activation functions, respectively. Concatenating the forward and backward hidden layers yields the hidden layer of the bi-LSTM layer, H = [H_f, H_b].
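The gate equations above can be checked with a direct NumPy implementation of a single LSTM step; the parameter shapes and random values below are illustrative only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(p_t, h_prev, c_prev, W, R, b):
    """One LSTM step implementing the gate equations above.
    W, R, b each hold the four gates' parameters in the order (f, i, a, o)."""
    Wf, Wi, Wa, Wo = W
    Rf, Ri, Ra, Ro = R
    bf, bi, ba, bo = b
    f_t = sigmoid(Wf @ p_t + Rf @ h_prev + bf)   # forget gate
    i_t = sigmoid(Wi @ p_t + Ri @ h_prev + bi)   # input gate
    a_t = np.tanh(Wa @ p_t + Ra @ h_prev + ba)   # candidate input
    o_t = sigmoid(Wo @ p_t + Ro @ h_prev + bo)   # output gate
    c_t = c_prev * f_t + a_t * i_t               # cell state update
    h_t = np.tanh(c_t) * o_t                     # hidden state
    return h_t, c_t

rng = np.random.default_rng(1)
d_in, d_h = 3, 4                                 # illustrative sizes
W = [rng.normal(size=(d_h, d_in)) for _ in range(4)]
R = [rng.normal(size=(d_h, d_h)) for _ in range(4)]
b = [np.zeros(d_h) for _ in range(4)]
h, c = lstm_step(rng.normal(size=d_in), np.zeros(d_h), np.zeros(d_h), W, R, b)
```

Running the step forward over t = 1…L and backward over t = L…1 and concatenating the two hidden sequences gives the bi-LSTM layer H = [H_f, H_b].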

S34, the forward and backward hidden layers of the second bi-LSTM layer are computed from the hidden layer H = [H_f, H_b] of the first bi-LSTM layer, analogously to S33:

h̄_t^f = LSTM(H_t, h̄_{t−1}^f), h̄_t^b = LSTM(H_t, h̄_{t+1}^b)

Concatenating the forward and backward hidden layers yields the vector of this bi-LSTM layer, H̄ = [H̄_f, H̄_b].

S35, from the vector H̄ = [H̄_f, H̄_b] of the second bi-LSTM layer, the fully connected layer computes the output vector O, where o_t is the output at time t. For the modulation type identification task, a probability distribution sequence over modulation types is obtained through the softmax layer: Ŷ = (ŷ_1, …, ŷ_M), where ŷ_m is the output class probability for the m-th modulation type. For the modulation parameter estimation task, the modulation parameter estimation sequence P̂ = (p̂_1, …, p̂_N) is obtained through the fully connected layer, where p̂_n is the n-th parameter to be estimated.

For a modulation type tag sequence Y = (y_1, y_2, …, y_M) and a modulation parameter tag sequence P = (p_1, p_2, …, p_N), JMRPE-Net outputs the corresponding modulation type identification result Ŷ and modulation parameter estimation result P̂. The loss function for the input waveform signal is defined as:

L = Σ_{i=1}^{2K} ω_i L_i

where 2K is the total number of tasks over the K mode parameters, comprising K modulation type identification tasks and K modulation parameter estimation tasks; ω_i is the weight of the i-th task, and L_i is the loss value of the i-th task.
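The weighted multitask loss L = Σ ω_i L_i reduces to a single weighted sum over the 2K per-task losses; a minimal sketch (the equal weights below are illustrative, not prescribed by the text):

```python
import numpy as np

def multitask_loss(losses, weights):
    """Weighted sum over the 2K task losses: L = sum_i omega_i * L_i."""
    losses = np.asarray(losses, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(np.sum(weights * losses))

# e.g. K = 4 identification tasks + 4 estimation tasks with equal weights
total = multitask_loss([1.0] * 8, [0.125] * 8)
```

Choosing the weights ω_i balances the classification and regression tasks, whose loss scales generally differ.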

For the modulation type identification task, the loss function is:

L_c = −(1/n) Σ_{i=1}^{n} Σ_{j=1}^{m} y_{ij} log ŷ_{ij}

where n is the number of training samples, m is the number of classes, y_{ij} is the true category label, and ŷ_{ij} is the predicted category label;

for the modulation parameter estimation task, the loss function is:

wherein n represents the number of training samples, m represents the number of classes,for the value of the true parameter(s),is a predicted parameter value;
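The two per-task losses, reconstructed here as mean cross-entropy and mean squared error, can be sketched as follows (the averaging convention over n samples is an assumption; the text does not fix it):

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean categorical cross-entropy over n samples and m classes."""
    y_pred = np.clip(y_pred, eps, 1.0)   # avoid log(0)
    return float(-np.mean(np.sum(y_true * np.log(y_pred), axis=1)))

def mse(p_true, p_pred):
    """Mean squared error over the true and predicted parameter values."""
    return float(np.mean((np.asarray(p_true) - np.asarray(p_pred)) ** 2))

y_true = np.array([[1.0, 0.0], [0.0, 1.0]])
y_pred = np.array([[0.9, 0.1], [0.2, 0.8]])
ce = cross_entropy(y_true, y_pred)
```

Cross-entropy feeds the K identification tasks and MSE the K estimation tasks; both enter the weighted sum L = Σ ω_i L_i above.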

s4, the IF waveform signals of the cognitive radar pulse sequence are normalized and input into the deep multitask neural network JMRPE-Net trained in step S3, yielding the modulation types and modulation parameters of the four mode defining parameters PRI, RF, PW, and MOP for each sample. The IF waveform signal of the cognitive radar pulse sequence under test must have the same format as the training data; the modulation type combination of the mode defining parameters may differ between test samples and training samples.

The modulation types and modulation parameters of the four typical mode defining parameters, pulse repetition interval (PRI), radio frequency carrier frequency (RF), pulse width (PW), and intra-pulse modulation waveform (MOP), are shown in FIG. 6; the resulting intermediate frequency sampled signal in a cognitive radar operating mode is shown in FIG. 1.

The specific pulse level identification method is as follows:

s1, generating a sequence sample data set for model training by using the recorded data or the simulation data;

s2, performing fixed normalization processing on the pulse sequence data set generated in S1;

s3, training JMRPE-Net by using a training data set, wherein the network structure of the JMRPE-Net is shown in FIG. 4, and the network training of the JMRPE-Net is divided into 5 steps;

s31, extracting spatial features and local displacement invariant features by the CNN layer to serve as shared features;

s32, the Attention layer of the task specific layer calculates Attention weight for the specific taskObtaining attention feature vector

S33, calculating time sequence characteristics of the first bi-LSTM layer of the task specific layer

S34, calculating time sequence characteristics of the second bi-LSTM layer of the task specific layer

S34 calculation output vector of full connection layer of task specific layer

S35, calculating loss function value

After the complex long radar pulse sequence under test shown in fig. 1 is normalized, the JMRPE-Net trained in step S3 yields the modulation type identification results and modulation parameter estimates, output as shown in fig. 5.

In summary, the above is only an example implementation of the invention based on selected fixed modulation types and modulation parameters, and is not intended to limit the scope of protection. The key points of the invention are: the cognitive radar operating mode defined by combinations of modulation types and modulation parameters over multiple PDW parameters; the task-specific attention mechanism; and the multitask neural network model based on the CNN-LSTM architecture. Methods for joint modulation type identification and modulation parameter estimation of cognitive radar signals obtained by corresponding modification, substitution, or improvement within these design principles all fall within the scope of protection of the invention.
