Active defense method for DDoS (distributed denial of service) attack in sensing edge cloud based on flow weight control

Document No.: 7250    Publication date: 2021-09-17

1. A method for actively defending against DDoS attacks inside a sensing edge cloud based on traffic weight control, characterized by comprising the following steps:

(1) in a defense period t, for each cooperative defense edge node i to be decided and the set of other defense collaborators {-i}, adopting a dynamic stochastic game model to obtain the traffic weight w_i*(t) of the cooperative defense edge node that minimizes the cost function in the Nash equilibrium state, and calculating the optimal control strategy from the current traffic weight of the cooperative defense edge node; the control strategy is the set {w_i(t), w_-i(t)} of all defense collaborators' traffic weights over the attack duration [0, T];

The cost function takes into account the traffic state and the task offloading threshold when the edge node is under internal DDoS attack;

(2) according to the optimal control strategy obtained in step (1), reconfiguring the traffic weight at the cooperative defense edge node end, so that the traffic weight of the cooperative defense edge node reaches the Nash equilibrium state.

2. The active defense method for DDoS attacks inside a sensing edge cloud based on traffic weight control as claimed in claim 1, characterized in that the dynamic stochastic game G_s is written as:

wherein the game participants comprise the cooperative defense edge node i, the other defense collaborators -i, and all sensing device nodes that may be DDoS attackers; the size of this set is the number of all game participants;

W(t) is the traffic weight space, W(t) = {{w_o(t)}, {w_i(t), w_-i(t)}}, where w_o(t) ∈ W_o and w_i(t), w_-i(t) ∈ W_i; w_o(t) is the communication frequency, i.e. traffic weight, on the defender's connection with attacker o, namely the traffic weight taken by the internal DDoS attacker o, bounded by the maximum traffic weight allowed for attacker o; w_i(t) is the communication frequency, i.e. traffic weight, on the connection between cooperative defense edge node i and the sensing device nodes, and w_-i(t) is the communication frequency, i.e. traffic weight, on the connections of the other defense collaborators -i with the sensing device nodes, namely the traffic weights taken by the cooperative defense edge nodes, bounded by the maximum traffic weight allowed for the defenders;

S(t) is the state space, S(t) = {θ_o(t), θ_i(t)}, o ∈ N, i ∈ M, where N represents the number of internal DDoS attackers and M represents the number of cooperative defense edge nodes; θ_o(t) is the traffic state of an internal DDoS attacker, and θ_i(t) is the traffic state observed by defense collaborator i; θ_o(t) = q_o(t)w_o(t), where q_o(t) represents the attack rate of the internal DDoS attacker and w_o(t) is the communication frequency, i.e. traffic weight, on the connection with attacker o; θ_i(t) = q_o(t)w_o(t) + Σ_j q_j(t)w_j(t), where q_o(t)w_o(t) is the traffic from internal DDoS attacker o, Σ_j q_j(t)w_j(t) is the sum of the traffic from the other sensing devices, q_j(t) is the transmission rate of another sensing device j, and w_j(t) is the communication frequency, i.e. traffic weight, on the connection with sensing device j.

J(t) is the cost function; a quadratically increasing function is adopted as the cost function J(t), as follows:

wherein q_th is the computing-task offloading threshold; if the computing-task offloading amount of a sensing device exceeds this threshold, the sensing device is regarded as hijacked and becomes an internal DDoS attacker, interfering with the normal computing-task offloading of legitimate sensing devices; θ_i(t) is the traffic state observed by defense collaborator i, and σ²(t) is the variance of the internal DDoS attack rate.

The optimal control strategy is the set of all defense collaborators' traffic weights that minimizes the average cost function over the attack duration [0, T]; namely:

wherein η_T is the cost at the terminal time T.
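The traffic states in claim 2 can be sketched numerically. The following is an illustrative computation assuming θ_o(t) = q_o(t)w_o(t) and θ_i(t) = q_o(t)w_o(t) + Σ_j q_j(t)w_j(t), as the prose of the claim describes; all rates and weights are invented example values.

```python
# Illustrative computation of the traffic states in claim 2. The forms
# theta_o = q_o*w_o and theta_i = q_o*w_o + sum_j q_j*w_j follow the claim's
# prose; the numeric rates and weights here are made-up examples.

def attacker_traffic_state(q_o, w_o):
    """theta_o(t): the attack rate scaled by the attacker connection's traffic weight."""
    return q_o * w_o

def defender_traffic_state(q_o, w_o, device_rates, device_weights):
    """theta_i(t): attacker traffic plus the summed traffic of legitimate devices."""
    legit = sum(q_j * w_j for q_j, w_j in zip(device_rates, device_weights))
    return attacker_traffic_state(q_o, w_o) + legit

# Example: one attacker at rate 5.0 with weight 0.8, two legitimate devices.
theta_o = attacker_traffic_state(5.0, 0.8)
theta_i = defender_traffic_state(5.0, 0.8, [1.0, 2.0], [0.5, 0.4])
```

Lowering the weights w_j(t) or w_o(t) directly lowers the observed traffic state, which is the lever the traffic weight control strategy exploits.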

3. The active defense method for DDoS attacks inside a sensing edge cloud based on traffic weight control as claimed in claim 1, characterized in that for the dynamic stochastic game G_s, the value function u(t, s(t)) at time t and state s(t) is defined as follows:

under the Nash equilibrium state, the optimal control strategy is as follows:

wherein w_i*(t) and w_-i*(t) are respectively the traffic weight configuration action values of defense collaborator i and the other defense collaborators -i when the value function satisfies the Nash equilibrium condition; the Nash equilibrium condition of the game model G_s is:

wherein w_i*(t) is the optimal traffic weight taken by cooperative defense edge node i, w_-i*(t) is the optimal traffic weight taken by the other cooperative defense edge nodes -i, and u(T) is the value of the value function at time T.

At this time:

4. The active defense method for DDoS attacks inside a sensing edge cloud based on traffic weight control as claimed in claim 1, characterized in that a mean field game is adopted to approximately solve the dynamic stochastic game model, and the control strategy obtained when the task-offloading reward R(t) in the Nash equilibrium state of the mean field game is largest, i.e. the cost function J(t) is smallest, is taken as the optimal control strategy.

5. The active defense method for DDoS attacks inside a sensing edge cloud based on traffic weight control as claimed in claim 4, characterized in that the mean field game model is (u(t, s_m(t)), v(t, s)), wherein u(t, s_m(t)) is the value function of cooperative defense edge node i and v(t, s) is the probability distribution of the traffic weights of all cooperative defense edge nodes, expressed as:

wherein h represents the number of network nodes in the high-density offloading connections of sensing edge cloud tasks; s_m(t) = [s_i(t), s_-i(t)] is the traffic state observed by all cooperative defense edge nodes, s_i(t) is the traffic state observed by cooperative defense edge node i, and s_-i(t) is the traffic state observed by the other defense collaborators -i; I is an indicator function whose value is 1 when the traffic state s_m(t) observed by all cooperative defense edge nodes equals the traffic state s, and 0 otherwise; the traffic state s is a settable parameter;

The mean field game Nash equilibrium state is reached when the traffic weight w*(t) of the cooperative defense edge nodes satisfies the following condition:

J(w*(t)) ≤ J(w_i(t), w_-i(t)),

At this point, the probability distribution of the traffic weights of all cooperative defense edge nodes reaches the optimum v*(t, s), and the cost function is minimized.

For the mean field game (u(t, s_m(t)), v(t, s)), the value function u(t, s_m(t)) is:

wherein R(t) is the reward function, calculated as follows:

wherein ω is a penalty factor, i.e. the loss of the traffic sum over the attack duration when the defenders' cooperative action is not allowed; Δh_i(t) = h_i(t) − h_i(t−1), Δw_i(t) = w_i(t) − w_i(t−1); ξ_t is, in a system with M edge nodes, the fairness factor of each edge node's traffic allocation under the traffic weight configuration strategy, calculated as follows:

wherein x_i = h_i(t)/q_i(t), h_i(t) is the receiving rate of the cooperative defense edge node, and q_i(t) = q_o(t) represents the internal DDoS attack rate.
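The fairness-factor equation itself is missing from this extracted text. A standard choice for a traffic-allocation fairness factor over M nodes with per-node ratios x_i = h_i(t)/q_i(t) is Jain's fairness index, used in the sketch below as an assumption rather than the claimed formula.

```python
# Hedged sketch of a fairness factor xi_t over per-node ratios x_i = h_i/q_i.
# The claim's equation is absent, so Jain's fairness index is assumed:
# values range from 1/M (one node receives everything) to 1.0 (perfectly even).

def jain_fairness(x):
    """Jain's index: (sum x)^2 / (M * sum x^2) for M per-node allocation ratios."""
    m = len(x)
    return sum(x) ** 2 / (m * sum(v * v for v in x))

xi_equal = jain_fairness([0.5, 0.5, 0.5])   # all nodes treated equally
xi_skewed = jain_fairness([1.0, 0.0, 0.0])  # one node receives all traffic
```

A reward term weighted by this factor discourages weight configurations that starve some edge nodes while overloading others.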

6. The active defense method for DDoS attacks inside a sensing edge cloud based on traffic weight control as claimed in claim 5, characterized in that the solution of the minimized-cost-function HJB equation of the cooperative defense edge nodes is adopted as the value function u(t, s_m(t)) at the optimal traffic weight w*(t), and an FPK equation is adopted to calculate the probability distribution v*(t, s) of the optimal traffic weight at w*(t).

7. The active defense method for DDoS attacks inside the sensing edge cloud based on traffic weight control as claimed in claim 6, wherein the minimized cost function HJB equation of the cooperative defense edge node is:

8. The active defense method for DDoS attacks inside a sensing edge cloud based on traffic weight control as claimed in claim 6, wherein the FPK equation for calculating the probability distribution v*(t, s) of the optimal traffic weight at w*(t) is:

9. The active defense method for DDoS attacks inside a sensing edge cloud based on traffic weight control as claimed in claim 6, characterized in that a model-free reinforcement learning value-function update is adopted to solve the HJB equation and obtain the optimal weight; preferably, a reinforcement learning Q function is adopted to update the value function and solve the HJB equation, specifically as follows:

The reinforcement learning sample is D_e1 = (s_m(t), w_-i(t), R_i(t), s_m(t+1)), wherein s_m(t) = [s_i(t), s_-i(t)] is the traffic state observed by all cooperative defense edge nodes, w_-i(t) is the communication frequency, i.e. traffic weight, on the connections of the other defense collaborators -i with the sensing device nodes, R_i(t) is the value of the reward function of cooperative defense edge node i, i.e. the reward obtained, and s_m(t+1) is the traffic state observed by all cooperative defense edge nodes in the next decision period.

The parameterized Q-value update function of the traffic weight of cooperative defense edge node i under the reinforcement learning Q function is:

wherein α represents the learning rate and R_i(t) is the reward function,

wherein the average traffic weight of the cooperative defense edge nodes is used; M_i represents the size of the set of cooperative defenders other than defender i; the probability distribution of the traffic weight control strategy of cooperative defense edge node i is used, and the probability distribution of the traffic weight control strategies of the cooperative defense edge nodes other than node i is represented as follows:

and is calculated from the average action value at the previous moment;

wherein β is a temperature hyper-parameter controlling the exploration rate;

the loss function is:

wherein the target mean-field Q value is estimated by the target network and adjusted by the target network parameters; γ is the discount factor; the mean-field Q function value of state s_m(t) is obtained using an evaluation network and adjusted by the evaluation network parameters;

the gradient of the training of the reinforcement learning Q function is as follows:

The convergence condition of the reinforcement learning Q function is that the feedback Nash equilibrium condition of the mean field game is reached;

The traffic weight of the cooperative defense edge node when the reinforcement learning Q function converges is taken as the optimal traffic weight of the cooperative defense edge node.
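The temperature hyper-parameter β in claim 9 suggests a Boltzmann (softmax) action-selection policy over the mean-field Q values of the candidate traffic weights. The claimed equation is absent from this text, so the sketch below uses the standard softmax form as an assumption; the Q values are illustrative.

```python
import math

# Hedged sketch of a Boltzmann (softmax) policy with temperature
# hyper-parameter beta, as suggested by claim 9: the probability of each
# candidate traffic weight grows exponentially with its mean-field Q value.
# The exact claimed equation is missing, so this standard form is assumed.

def boltzmann_policy(q_values, beta):
    """Map mean-field Q values for candidate traffic weights to selection probabilities."""
    exps = [math.exp(beta * q) for q in q_values]
    total = sum(exps)
    return [e / total for e in exps]

probs = boltzmann_policy([1.0, 2.0, 0.5], beta=1.0)
```

Raising β concentrates probability on the highest-valued weight (more exploitation); lowering it flattens the distribution (more exploration).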

10. The active defense method for DDoS attacks inside a sensing edge cloud based on traffic weight control as claimed in claim 9, characterized in that the average action value of the other collaborators is adopted to approximate the communication frequency, i.e. the traffic weight w_-i(t), on the connections of the other defense collaborators -i with the sensing device nodes, specifically as follows:

The reinforcement learning sample is (s_m(t), w_vec, R_vec, s_m(t+1)), wherein s_m(t) = [s_i(t), s_-i(t)] is the traffic state observed by all cooperative defense edge nodes; w_vec = [w_1(t), ..., w_M(t)] is the traffic weight vector of all cooperative defense edge nodes, whose mean is the average traffic weight of the cooperative defense edge nodes; R_vec = [R_1(t), ..., R_M(t)], where R_i(t) is the value of the reward function of cooperative defense edge node i, i.e. the reward obtained; and s_m(t+1) is the traffic state observed by all cooperative defense edge nodes in the next decision period.

The target network parameters of the parameterized Q value of the traffic weight of cooperative defense edge node i under the reinforcement learning Q function are updated as:

wherein α represents the learning rate, and the evaluation network parameters and target network parameters have preset initial values; the evaluation network parameters are updated using a stochastic gradient descent method, and the target network parameters are updated from the evaluation network parameters;

The loss function is:

wherein the target mean-field Q value is estimated by the target network from the target network parameters, and γ is the discount factor;

the gradient of the training of the reinforcement learning Q function is as follows:

The convergence condition of the reinforcement learning Q function is that the feedback Nash equilibrium condition of the mean field game is reached; the traffic weight of the cooperative defense edge node at that moment is taken as the optimal traffic weight of the cooperative defense edge node.
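Claim 10 approximates the other defenders' actions by their average traffic weight and updates target network parameters from the evaluation network parameters. The claimed update equations are absent from this text; the sketch below therefore assumes the common soft-update rule θ_target ← τ·θ_eval + (1−τ)·θ_target, with τ and all numeric values illustrative.

```python
# Hedged sketch of two operations described in claim 10, with assumed forms:
# (1) mean-field approximation of the other defenders' actions by the average
#     of the traffic weight vector w_vec, and
# (2) a soft update of target-network parameters toward the evaluation
#     network: theta_target <- tau*theta_eval + (1-tau)*theta_target.
# tau and all parameter vectors here are illustrative, not from the source.

def mean_traffic_weight(w_vec):
    """Mean-field approximation: average of all defenders' traffic weights."""
    return sum(w_vec) / len(w_vec)

def soft_update(theta_eval, theta_target, tau=0.01):
    """Move target-network parameters a small step toward the evaluation network."""
    return [tau * e + (1.0 - tau) * t for e, t in zip(theta_eval, theta_target)]

w_bar = mean_traffic_weight([0.2, 0.4, 0.6])
new_target = soft_update([1.0, 1.0], [0.0, 0.0], tau=0.1)
```

The slow-moving target network stabilizes the temporal-difference targets while each defender only needs the scalar mean of the others' actions rather than the full joint action.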

Background

High-density offloading connections for computing tasks in a sensing edge cloud network allow the computing tasks of sensing devices to be offloaded to edge nodes with high reliability and low latency, effectively improving the throughput and distributed processing capacity of the edge network. However, malicious nodes inside the sensing edge cloud network can launch DDoS attacks over these high-interaction-frequency, high-density offloading connections, causing the computing-task offloading of sensing devices to fail.

To provide cross-domain services, sensing edge cloud technology uniformly connects various kinds of sensing devices. Although the service scope of the sensing edge cloud keeps growing, its security problems are increasingly severe. Because the computing capacity of sensing devices is limited, complex protection mechanisms are difficult to deploy on them; sensing edge cloud networks therefore generally adopt lightweight security protocols with lower protection levels, leaving sensing devices vulnerable to attack. Once controlled by a malicious attacker, a sensing device becomes an internal DDoS attacker and, without any warning sign, launches DDoS attacks on edge nodes through the high-density task-offloading connections during computing-task offloading, preventing legitimate sensing devices from offloading their computing tasks to the edge nodes. Because an internal DDoS attacker is a hidden attacker parasitic in the sensing edge cloud network, it is difficult for an intrusion detection system to discover it in time. Meanwhile, an internal DDoS attacker launches traffic attacks on edge nodes through multiple connections simultaneously, which makes simultaneous defense at multiple edge nodes difficult. In traditional network environments the DDoS defense problem has been widely researched; however, due to the uncertainty and dynamics of internal DDoS attack traffic, these methods cannot be directly applied to the active defense of internal DDoS attacks in the high-density computing-task offloading connections of sensing devices in a sensing edge cloud environment.
Jia et al. propose an edge-centered DDoS attack defense mechanism, which is mainly used for detecting, identifying, and classifying DDoS attacks and is not a powerful DDoS attack mitigation and suppression mechanism ("FlowGuard: An Intelligent Edge Defense Mechanism Against IoT DDoS Attacks," in IEEE Internet of Things Journal). Li et al. propose a dynamic container-quantity adjustment technique that allocates resources to maximize the service quality of a cloud environment under low-rate DDoS attacks, but do not develop a corresponding solution for DDoS attacks in the high-density computing-task offloading connections of a sensing edge cloud environment ("Exploring New Opportunities to Defeat Low-Rate DDoS Attack in Container-Based Cloud Environment," in IEEE Transactions on Parallel and Distributed Systems, vol. 31, no. 3, pp. 695-706, 1 March 2020). For the problem of virus propagation over complex network connections, Huang et al. propose a differential game model to develop a network-connection-weight adaptation mechanism against virus propagation, but the computational complexity of the mechanism is high ("A Differential Game Approach to Decentralized Virus-Resistant Weight Adaptation Policy Over Complex Networks," in IEEE Transactions on Control of Network Systems, vol. 7, no. 2, pp. 944-955, June 2020). Simpson et al. mitigate DDoS attacks by directly controlling host traffic, where each defender adopts its own strategy to reduce the load traffic on the path from source to target node, but cooperative control strategies among multiple defenders are not considered ("Per-Host DDoS Mitigation by Direct-Control Reinforcement Learning," in IEEE Transactions on Network and Service Management, vol. 17, no. 1, pp. 103-117, March 2020). These research schemes also suffer from the following deficiencies:

(1) The proposed methods give limited consideration to the uncertain state of internal DDoS attack traffic, and do not consider the influence of internal DDoS attack traffic on multiple edge nodes when multiple tasks are offloaded to different edge nodes simultaneously. Therefore, when a defender faces uncertain internal DDoS attack traffic, realizing a traffic weight control strategy is difficult.

(2) Although existing solutions have proposed traffic control methods based on reinforcement learning, they do not consider controlling the internal DDoS attack traffic in high-density computing-task offloading connections through traffic weight control without affecting the normal computing-task offloading amount.

(3) Traditional DDoS defense methods focus on DDoS attack detection for cloud computing or wireless sensor network environments, and do not consider using traffic weight control to actively defend against internal DDoS attacks in a sensing edge cloud environment. In particular, when an internal DDoS attacker simultaneously attacks multiple edge nodes performing distributed task processing, no corresponding edge-node-centered active defense method has yet been proposed.

Disclosure of Invention

To remedy the deficiencies of the above methods, the present invention provides a method for low-complexity active cooperative defense by edge nodes against DDoS attacks from the sensing device side, taking into account the uncertainty and dynamics of internal DDoS attack traffic when an internal DDoS attacker simultaneously attacks the distributed-processing edge nodes in the high-density computing-task offloading connections of sensing devices in a sensing edge cloud environment. To achieve the above object, according to one aspect of the present invention, there is provided an active defense method for DDoS attacks inside a sensing edge cloud based on traffic weight control, comprising the following steps:

(1) in a defense period t, for each cooperative defense edge node i to be decided and the set of other defense collaborators {-i}, adopting a dynamic stochastic game model to obtain the traffic weight w_i*(t) of the cooperative defense edge node that minimizes the cost function in the Nash equilibrium state, and calculating the optimal control strategy from the current traffic weight of the cooperative defense edge node; the control strategy is the set {w_i(t), w_-i(t)} of all defense collaborators' traffic weights over the attack duration [0, T];

The cost function takes into account the traffic state and the task offloading threshold when the edge node is under internal DDoS attack;

(2) according to the optimal control strategy obtained in step (1), reconfiguring the traffic weight at the cooperative defense edge node end, so that the traffic weight of the cooperative defense edge node reaches the Nash equilibrium state.

Preferably, in the active defense method for DDoS attacks inside a sensing edge cloud based on traffic weight control, the dynamic stochastic game G_s is written as:

wherein the game participants comprise the cooperative defense edge node i, the other defense collaborators -i, and all sensing device nodes that may be DDoS attackers; the size of this set is the number of all game participants;

W(t) is the traffic weight space, W(t) = {{w_o(t)}, {w_i(t), w_-i(t)}}, where w_o(t) ∈ W_o and w_i(t), w_-i(t) ∈ W_i; w_o(t) is the communication frequency, i.e. traffic weight, on the defender's connection with attacker o, namely the traffic weight taken by the internal DDoS attacker o, bounded by the maximum traffic weight allowed for attacker o; w_i(t) is the communication frequency, i.e. traffic weight, on the connection between cooperative defense edge node i and the sensing device nodes, and w_-i(t) is the communication frequency, i.e. traffic weight, on the connections of the other defense collaborators -i with the sensing device nodes, namely the traffic weights taken by the cooperative defense edge nodes, bounded by the maximum traffic weight allowed for the defenders;

S(t) is the state space, S(t) = {θ_o(t), θ_i(t)}, o ∈ N, i ∈ M, where N represents the number of internal DDoS attackers and M represents the number of cooperative defense edge nodes; θ_o(t) is the traffic state of an internal DDoS attacker, and θ_i(t) is the traffic state observed by defense collaborator i; θ_o(t) = q_o(t)w_o(t), where q_o(t) represents the attack rate of the internal DDoS attacker and w_o(t) is the communication frequency, i.e. traffic weight, on the connection with attacker o; θ_i(t) = q_o(t)w_o(t) + Σ_j q_j(t)w_j(t), where q_o(t)w_o(t) is the traffic from internal DDoS attacker o, Σ_j q_j(t)w_j(t) is the sum of the traffic from the other sensing devices, q_j(t) is the transmission rate of another sensing device j, and w_j(t) is the communication frequency, i.e. traffic weight, on the connection with sensing device j.

J(t) is the cost function; a quadratically increasing function is adopted as the cost function J(t), as follows:

wherein q_th is the computing-task offloading threshold; if the computing-task offloading amount of a sensing device exceeds this threshold, the sensing device is regarded as hijacked and becomes an internal DDoS attacker, interfering with the normal computing-task offloading of legitimate sensing devices; θ_i(t) is the traffic state observed by defense collaborator i, and σ²(t) is the variance of the internal DDoS attack rate.

The optimal control strategy is the set of all defense collaborators' traffic weights that minimizes the average cost function over the attack duration [0, T]; namely:

wherein η_T is the cost at the terminal time T.
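The quadratic cost equation is missing from this extracted text. A minimal quadratically increasing sketch consistent with the surrounding prose, penalizing the gap between the observed traffic state θ_i(t) and the offloading threshold q_th plus the attack-rate variance σ²(t), is given below as an assumption, not the claimed formula.

```python
# Hedged sketch of a quadratically increasing cost function consistent with
# the prose above: cost grows with the squared deviation of the observed
# traffic state theta_i(t) from the offloading threshold q_th, plus the
# internal DDoS attack-rate variance sigma^2(t). This exact form is assumed.

def quadratic_cost(theta_i, q_th, sigma_sq):
    """Illustrative cost: quadratic in the traffic deviation, linear in variance."""
    return (theta_i - q_th) ** 2 + sigma_sq

cost_near = quadratic_cost(5.0, 4.0, 0.5)   # traffic slightly above threshold
cost_far = quadratic_cost(6.0, 4.0, 0.5)    # traffic further above threshold
```

Because the penalty is quadratic, large traffic excursions caused by attack bursts dominate the cost, which is what pushes the equilibrium strategy to throttle suspicious connections first.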

Preferably, in the active defense method for DDoS attacks inside a sensing edge cloud based on traffic weight control, for the dynamic stochastic game G_s the value function u(t, S(t)) at time t and state S(t) is defined as follows:

under the Nash equilibrium state, the optimal control strategy is as follows:

wherein w_i*(t) and w_-i*(t) are respectively the traffic weight configuration action values of defense collaborator i and the other defense collaborators -i when the value function satisfies the Nash equilibrium condition; the Nash equilibrium condition of the game model G_s is:

wherein w_i*(t) is the optimal traffic weight taken by cooperative defense edge node i, w_-i*(t) is the optimal traffic weight taken by the other cooperative defense edge nodes -i, and u(T) is the value of the value function at time T.

At this time:

Preferably, the active defense method for DDoS attacks inside a sensing edge cloud based on traffic weight control adopts a mean field game to approximately solve the dynamic stochastic game model, and the control strategy obtained when the task-offloading reward R(t) in the Nash equilibrium state of the mean field game is largest, i.e. the cost function J(t) is smallest, is taken as the optimal control strategy.

Preferably, in the active defense method for DDoS attacks inside a sensing edge cloud based on traffic weight control, the mean field game model is (u(t, s_m(t)), v(t, s)), wherein u(t, s_m(t)) is the value function of cooperative defense edge node i and v(t, s) is the probability distribution of the traffic weights of all cooperative defense edge nodes, expressed as follows:

wherein h represents the number of network nodes in the high-density offloading connections of sensing edge cloud tasks; s_m(t) = [s_i(t), s_-i(t)] is the traffic state observed by all cooperative defense edge nodes, s_i(t) is the traffic state observed by cooperative defense edge node i, and s_-i(t) is the traffic state observed by the other defense collaborators -i; I is an indicator function whose value is 1 when the traffic state s_m(t) observed by all cooperative defense edge nodes equals the traffic state s, and 0 otherwise; the traffic state s is a settable parameter;

The mean field game Nash equilibrium state is reached when the traffic weight w*(t) of the cooperative defense edge nodes satisfies the following condition:

At this point, the probability distribution of the traffic weights of all cooperative defense edge nodes reaches the optimum v*(t, s), and the cost function is minimized.

For the mean field game (u(t, s_m(t)), v(t, s)), the value function u(t, s_m(t)) is:

wherein R(t) is the reward function, calculated as follows:

wherein ω is a penalty factor, i.e. the loss of the traffic sum over the attack duration when the defenders' cooperative action is not allowed; Δh_i(t) = h_i(t) − h_i(t−1), Δw_i(t) = w_i(t) − w_i(t−1); ξ_t is, in a system with M edge nodes, the fairness factor of each edge node's traffic allocation under the traffic weight reconfiguration strategy, calculated as follows:

wherein x_i = h_i(t)/q_i(t), h_i(t) is the receiving rate of the cooperative defense edge node, and q_i(t) = q_o(t) represents the internal DDoS attack rate.
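The indicator function described above implies an empirical form for the mean-field distribution v(t, s): the fraction of the M cooperative defense edge nodes whose observed traffic state equals the settable state s. The claimed equation is absent from this text, so the empirical-average form v = (1/M)·Σ_i I(s_i(t) = s) in the sketch below is an assumption.

```python
# Hedged sketch of the mean-field distribution v(t, s) implied by the
# indicator function above: the fraction of cooperative defense edge nodes
# whose observed traffic state equals s. The empirical-average form
# v = (1/M) * sum_i I(s_i == s) is assumed; the states are illustrative.

def mean_field_distribution(observed_states, s):
    """Fraction of edge nodes currently observing traffic state s."""
    m = len(observed_states)
    return sum(1 for s_i in observed_states if s_i == s) / m

v_high = mean_field_distribution(["high", "high", "low", "high"], "high")
```

This distribution is what lets each defender reason about the population of other defenders through a single summary statistic instead of tracking every opponent individually.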

Preferably, the active defense method for DDoS attacks inside a sensing edge cloud based on traffic weight control adopts the solution of the minimized-cost-function HJB equation of the cooperative defense edge nodes as the value function u(t, s_m(t)) at the optimal traffic weight w*(t), and adopts an FPK equation to calculate the probability distribution v*(t, s) of the optimal traffic weight at w*(t).

Preferably, in the active defense method for DDoS attack inside a sensing edge cloud based on traffic weight control, a minimized cost function HJB equation of the cooperative defense edge node is as follows:

Preferably, in the active defense method for DDoS attacks inside a sensing edge cloud based on traffic weight control, the FPK equation for calculating the probability distribution v*(t, s) of the optimal traffic weight at w*(t) is:

Preferably, the active defense method for DDoS attacks inside a sensing edge cloud based on traffic weight control adopts a model-free reinforcement learning value-function update to solve the HJB equation and obtain the optimal weight; preferably, a reinforcement learning Q function is adopted to update the value function and solve the HJB equation, specifically as follows:

The reinforcement learning sample is D_e1 = (s_m(t), w_-i(t), R_i(t), s_m(t+1)), wherein s_m(t) = [s_i(t), s_-i(t)] is the traffic state observed by all cooperative defense edge nodes, w_-i(t) is the communication frequency, i.e. traffic weight, on the connections of the other defense collaborators -i with the sensing device nodes, R_i(t) is the value of the reward function of cooperative defense edge node i, i.e. the reward obtained, and s_m(t+1) is the traffic state observed by all cooperative defense edge nodes in the next decision period.

The parameterized Q-value update function of the traffic weight of cooperative defense edge node i under the reinforcement learning Q function is:

wherein α represents the learning rate and R_i(t) is the reward function,

wherein the average traffic weight of the cooperative defense edge nodes is used; M_i represents the size of the set of cooperative defenders other than defender i; the probability distribution of the traffic weight control strategy of cooperative defense edge node i is used, and the probability distribution of the traffic weight control strategies of the cooperative defense edge nodes other than node i is represented as follows:

and is calculated from the average action value at the previous moment;

wherein β is a temperature hyper-parameter controlling the exploration rate;

the loss function is:

wherein the target mean-field Q value is estimated by the target network and adjusted by the target network parameters; γ is the discount factor; the mean-field Q function value of state s_m(t) is obtained using an evaluation network and adjusted by the evaluation network parameters;

the gradient of the training of the reinforcement learning Q function is as follows:

The convergence condition of the reinforcement learning Q function is that the feedback Nash equilibrium condition of the mean field game is reached;

The traffic weight of the cooperative defense edge node when the reinforcement learning Q function converges is taken as the optimal traffic weight of the cooperative defense edge node.
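In the simplest tabular case, the parameterized Q-value update described in this block reduces to the standard temporal-difference form with learning rate α and discount factor γ. The sketch below shows that reduced form as an assumption; the text's neural-network version replaces the table with an evaluation network trained by gradient descent on the squared TD error.

```python
# Hedged tabular sketch of the Q update in the block above, with assumed
# standard form: td_target = R + gamma * Q_target(next state) and
# Q <- Q + alpha * (td_target - Q). alpha and gamma values are illustrative.

def q_update(q_value, reward, next_target_q, alpha=0.1, gamma=0.9):
    """One temporal-difference step of the Q value toward the mean-field target."""
    td_target = reward + gamma * next_target_q   # bootstrapped target from the target network
    return q_value + alpha * (td_target - q_value)

q_new = q_update(q_value=1.0, reward=0.5, next_target_q=2.0, alpha=0.1, gamma=0.9)
```

Convergence of these updates, per the claim, corresponds to reaching the feedback Nash equilibrium of the mean field game.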

Preferably, the active defense method for DDoS attacks inside a sensing edge cloud based on traffic weight control adopts the average action value of the other collaborators to approximate the communication frequency, i.e. the traffic weight w_-i(t), on the connections of the other defense collaborators -i with the sensing device nodes, specifically as follows:

The reinforcement learning sample is (s_m(t), w_vec, R_vec, s_m(t+1)), wherein s_m(t) = [s_i(t), s_-i(t)] is the traffic state observed by all cooperative defense edge nodes; w_vec = [w_1(t), ..., w_M(t)] is the traffic weight vector of all cooperative defense edge nodes, whose mean is the average traffic weight of the cooperative defense edge nodes; R_vec = [R_1(t), ..., R_M(t)], where R_i(t) is the value of the reward function of cooperative defense edge node i, i.e. the reward obtained; and s_m(t+1) is the traffic state observed by all cooperative defense edge nodes in the next decision period.

The target network parameters of the parameterized Q value of the traffic weight of cooperative defense edge node i under the reinforcement learning Q function are updated as:

wherein α represents the learning rate, and the evaluation network parameters and target network parameters have preset initial values; the evaluation network parameters are updated using a stochastic gradient descent method, and the target network parameters are updated from the evaluation network parameters;

The loss function is:

wherein the target mean-field Q value is estimated by the target network from the target network parameters, and γ is the discount factor;

the gradient of the training of the reinforcement learning Q function is as follows:

The convergence condition of the reinforcement learning Q function is that the feedback Nash equilibrium condition of the mean field game is reached; the traffic weight of the cooperative defense edge node at that moment is taken as the optimal traffic weight of the cooperative defense edge node.

In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:

(1) The invention considers the uncertainty and dynamics of the traffic in high-density computing-task offloading connections caused by internal DDoS attacks, models them with an Ornstein-Uhlenbeck dynamic equation, and uses a dynamic stochastic game (DSG) to capture the interaction between internal DDoS attackers and edge nodes.

(2) To reduce computational complexity, the DSG is converted into a mean field game to solve the active cooperative defense problem with many game participants, and HJB and FPK equations are provided to optimize the traffic weight control strategy.

(3) To efficiently solve the HJB equation and obtain the traffic weight control strategy of the active cooperative defense edge nodes, the invention provides a mean-field-based reinforcement learning algorithm for the cooperative defense edge nodes, yielding an internal DDoS attack traffic weight control method. The method integrates reinforcement learning with the MFG equations, providing a new solution for slowing and suppressing internal DDoS attack traffic in high-density computing task offload connections.

Drawings

Fig. 1 is a schematic diagram of an active defense method for DDoS attack inside a sensing edge cloud based on traffic weight control according to an embodiment of the present invention;

FIG. 2 is a workflow of reinforcement learning for each defender provided by the present invention;

fig. 3 is a schematic diagram illustrating an application effect of the DDoS attack active defense method in the sensing edge cloud based on the traffic weight provided by the present invention; fig. 3(a) shows a scenario of an attack on a high-density offload connection by an internal DDoS attacker before active defense, and fig. 3(b) shows an attack flow of an internal DDoS attack after active defense.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.

The invention provides an active defense method for DDoS attack inside a sensing edge cloud based on traffic weight control, comprising the following steps:

(1) In a defense period t, for each cooperative defense edge node i to be decided and the set of other defense collaborators {-i}, a dynamic random game model is adopted to obtain the traffic weight of the cooperative defense edge node at which the cost function is minimal in the Nash equilibrium state, and the optimal control strategy is calculated from the current traffic weight of the cooperative defense edge node. The control strategy is the set of all defense collaborator traffic weights {wi(t), w-i(t)} within the attack duration [0, T];

The dynamic random game Gs is written as:

where the game participants comprise, among all sensing device nodes, the cooperative defense edge node i, the other defense collaborators -i, and possibly DDoS attackers; the accompanying term denotes the number of all game participants.

W(t) is the traffic weight space, W(t) = {{wo(t)}, {wi(t), w-i(t)}}, where wo(t) ∈ Wo and wi(t), w-i(t) ∈ Wi; wo(t) is the communication frequency, i.e. the traffic weight, on the connection between the defender and attacker o, that is, the traffic weight taken by the internal DDoS attacker o, and womax is the maximum traffic weight allowed for attacker o; wi(t) is the communication frequency, i.e. the traffic weight, on the connection between cooperative defense edge node i and the sensing device nodes, and w-i(t) is the communication frequency, i.e. the traffic weight, on the connections between the other defense collaborators -i and the sensing device nodes, that is, the traffic weights taken by the cooperative defense edge nodes, with wimax the maximum traffic weight allowed for the defenders;

S(t) is the state space, S(t) = {θo(t), θi(t)}, o ∈ N, i ∈ M, where N is the number of internal DDoS attackers, M is the number of cooperative defense edge nodes, θo(t) is the traffic state of an internal DDoS attacker, and θi(t) is the traffic state observed by defense collaborator i; qo(t) is the attack rate of an internal DDoS attacker, and wo(t) is the communication frequency, i.e. the traffic weight, on the connection with attacker o; in the expression for θi(t), qo(t)wo(t) is the traffic from the internal DDoS attacker o and the summation term is the sum of the traffic from the other sensing devices, where qj(t) is the transmission rate of the other sensing device j and wj(t) is the communication frequency, i.e. the traffic weight, on the connection with the other sensing device j.

J(t) is the cost function; considering the traffic state and the task offload threshold when an edge node is under internal DDoS attack, the invention adopts a quadratically increasing function as the cost function J(t), as follows:

where qth is the task offload threshold: if the computing task offload of a sensing device exceeds this threshold, the sensing device has been hijacked and has become an internal DDoS attacker, interfering with the normal computing task offload process of legitimate sensing devices; θi(t) is the traffic state observed by defense collaborator i, and σ2(t) is the variance of the internal DDoS attack rate.

The optimal control strategy is the set of all defense collaborator traffic weights that minimizes the average cost function over the attack duration [0, T]; namely:

where ηT is the cost at time T.

For the dynamic random game Gs, the value function u(t, s(t)) at time t and state s(t) is defined as follows:

under the Nash equilibrium state, the optimal control strategy is as follows:

where the two action values are the traffic weight configuration action values of defense collaborator i and the other defense collaborators -i when the value function satisfies the Nash equilibrium condition; the Nash equilibrium condition of the game model Gs is:

where wi*(t) is the optimal traffic weight taken by cooperative defense edge node i, w-i*(t) is the optimal traffic weight taken by the other cooperative defense edge nodes -i, and u(T) is the value function value at time T.

At this time:

preferably, a dynamic random game model is approximately solved by adopting a mean field game, and a control strategy for obtaining the maximum profit R (t) of the task unloading capacity in the Nash equilibrium state of the mean field game, namely the minimum cost function J (t)As an optimal control strategy. Specifically, the method comprises the following steps:

the mean field game model (u (t, s)m(t)), v (t, s)), wherein u (t, s)m(t)) is a value function of the cooperative defense edge node i, and v (t, s) is a probability distribution of traffic weights for all cooperative defense edge nodes, expressed as:

where h is the number of network nodes in the high-density offload connections of the sensing edge cloud tasks; sm(t)=[si(t),s-i(t)] is the traffic state observed by all cooperative defense edge nodes, si(t) the traffic state observed by cooperative defense edge node i, and s-i(t) the traffic state observed by the other defense collaborators -i; I is an indicator function whose value is 1 when the traffic state sm(t) observed by the cooperative defense edge nodes equals the traffic state s, and 0 otherwise; the traffic state s is a settable parameter;

the average field game Nash equilibrium state is that the flow weight of the edge node is defended in a cooperative wayThe following conditions are satisfied:

at the moment, the probability distribution of the flow weight of all cooperative defense edge nodes reaches the optimal v*(t, s) and minimizes the cost function.

For the mean field game (u(t, sm(t)), v(t, s)), its value function u(t, sm(t)) is:

where R(t) is the reward function, calculated as follows:

where ω is a penalty factor, i.e. the loss of the traffic sum over the attack duration when the defenders' cooperative action is not permitted; Δhi(t)=hi(t)-hi(t-1), Δwi(t)=wi(t)-wi(t-1); ξt is, in a system with M edge nodes, the fairness factor of the traffic distribution of each edge node under the traffic weight reconfiguration strategy, calculated as follows:

where xi=hi(t)/qi(t), hi(t) is the receiving rate of cooperative defense edge node i, and qi(t)=qo(t) is the internal DDoS attack rate.
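The fairness factor described above, over the per-node ratios xi = hi(t)/qi(t), matches the form of Jain's fairness index; the exact stripped formula is not recoverable, so the sketch below assumes that standard form ((Σx)²/(M·Σx²), which equals 1 for a perfectly even distribution):

```python
def jain_fairness(x):
    """Jain's fairness index: (sum x)^2 / (M * sum x^2), in (0, 1].
    Assumed reconstruction of the fairness factor xi_t; x holds the
    per-node ratios x_i = h_i(t) / q_i(t)."""
    m = len(x)
    return sum(x) ** 2 / (m * sum(v * v for v in x))

# illustrative receiving rates h_i(t) and attack rates q_i(t) for M = 4 nodes
receive_rates = [3.0, 3.0, 3.0, 3.0]
attack_rates = [6.0, 6.0, 6.0, 6.0]
x = [h / q for h, q in zip(receive_rates, attack_rates)]
print(jain_fairness(x))  # 1.0 when traffic is perfectly evenly distributed
```

Uneven reconfigured traffic lowers the index toward 1/M, which is how the reward function can penalize unfair weight reconfiguration across the M edge nodes.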

According to optimal control theory and the Bellman optimality principle, the value function u(t, sm(t)) at the optimal traffic weight w*(t) is obtained by solving the HJB equation that minimizes the cost function of the cooperative defense edge nodes, and the probability distribution v*(t, s) of the optimal traffic weight at w*(t) is calculated with the FPK equation;

The HJB equation minimizing the cost function of the cooperative defense edge nodes is:

the method for calculating the optimal flow weight w*Probability distribution v of optimal traffic weight at (t)*The FPK equation for (t, s) is:

preferably, a model-free reinforcement learning update value function is adopted, and an HJB equation is solved to obtain the optimal weight; preferably, a reinforcement learning Q function is adopted to carry out an update value function, and an HJB equation is solved, specifically as follows:

The reinforcement learning sample is: De1=(sm(t), w-i(t), Ri(t), sm(t+1)), where sm(t)=[si(t),s-i(t)] is the traffic state observed by all cooperative defense edge nodes, w-i(t) is the communication frequency, i.e. the traffic weight, on the connections between the other defense collaborators -i and the sensing device nodes, Ri(t) is the value of the reward function of cooperative defense edge node i, i.e. the reward obtained, and sm(t+1) is the traffic state observed by all cooperative defense edge nodes in the next decision period.

The parameterized (traffic-weight) Q value update function of cooperative defense edge node i for the reinforcement learning Q function is:

where α denotes the learning rate and Ri(t) is the reward function.

where w̄i(t) is the average traffic weight of the cooperative defense edge nodes, Mi denotes the size of the set of cooperative defenders other than defender i, and the strategy distribution term denotes the probability distribution of the traffic weight control strategy of cooperative defense edge node i. The probability distribution of the traffic weight control strategies of the cooperative defense edge nodes other than node i is:

calculated from the average action value at the previous moment;

where β is a settable constant representing the exploration-rate temperature hyperparameter.
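A strategy distribution derived from Q values with a temperature constant β is typically a Boltzmann (softmax) policy; the sketch below assumes that form (the candidate Q values are illustrative, not from the source):

```python
import math

def boltzmann_policy(q_values, beta):
    """Softmax over Q values with temperature hyperparameter beta.
    Larger beta concentrates probability on the traffic weight with
    the highest Q value; smaller beta explores more uniformly."""
    logits = [beta * q for q in q_values]
    m = max(logits)                       # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Q values of three candidate traffic weight actions (illustrative numbers)
probs = boltzmann_policy([1.0, 2.0, 0.5], beta=1.0)
print(probs)  # highest probability lands on the action with the largest Q value
```

Raising β sharpens the distribution, which is the usual way an exploration temperature trades off exploration against exploitation during the mean-field Q learning iterations.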

The loss function is:

where yi is the target mean field Q value, estimated by the target network and adjusted by the target network parameters; γ is the discount factor; the mean field Q function value of state sm(t) is obtained with the evaluation network, whose parameters serve as the network adjustment parameters.

The gradient of the training of the reinforcement learning Q function is as follows:

The convergence condition of the reinforcement learning Q function is: the feedback Nash equilibrium condition of the mean field game is reached.

The traffic weight of the cooperative defense edge node when the reinforcement learning Q function converges is taken as the optimal cooperative defense edge node traffic weight.

Preferably, the average traffic weight of the other collaborators is used to approximate the communication frequency, i.e. the traffic weight w-i(t), on the connections between the other defense collaborators -i and the sensing device nodes, as follows:

The reinforcement learning sample is: (sm(t), wvec, w̄vec, Rvec, sm(t+1)), where sm(t)=[si(t),s-i(t)] is the traffic state observed by all cooperative defense edge nodes; wvec=[w1(t),...,wM(t)] is the traffic weight vector of all cooperative defense edge nodes; w̄vec is the vector of average traffic weights, whose element w̄i(t) is the average traffic weight of the cooperative defense edge nodes; Rvec=[R1(t),...,RM(t)], where Ri(t) is the value of the reward function of cooperative defense edge node i, i.e. the reward obtained; and sm(t+1) is the traffic state observed by all cooperative defense edge nodes in the next decision period.

The target network parameters of the parameterized (traffic-weight) Q value of cooperative defense edge node i for the reinforcement learning Q function are updated as:

Where α denotes the learning rate, one set of parameters belongs to the evaluation network and the other to the target network, and the initial values of both are preset; the evaluation network parameters are updated with a stochastic gradient descent method, and the target network parameters are then updated from the evaluation network parameters, as shown in fig. 2.

The loss function is:

where yi is the target mean field Q value, estimated by the target network from the target network parameters, and γ is the discount factor;

the gradient of the training of the reinforcement learning Q function is as follows:

The convergence condition of the reinforcement learning Q function is: the feedback Nash equilibrium condition of the mean field game is reached, and the traffic weight of the cooperative defense edge node at that moment is taken as the optimal cooperative defense edge node traffic weight.
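A minimal numeric sketch of one training step as described above: a TD target yi = Ri + γ·(target-network Q at the next state and next mean weight), a squared loss against the evaluation network, and a soft blend of evaluation parameters into target parameters. Scalars and lists stand in for the two networks, and the soft-update rate tau is an assumed hyperparameter not named in the source:

```python
def td_target(reward, gamma, target_q_next):
    """Target mean-field Q value y_i = R_i + gamma * Q_target(next state, next mean weight)."""
    return reward + gamma * target_q_next

def mse_loss(y_target, q_eval):
    """Squared TD error: the loss minimized by gradient descent on the evaluation network."""
    return (y_target - q_eval) ** 2

def soft_update(eval_params, target_params, tau=0.01):
    """Update target parameters from evaluation parameters (assumed Polyak-style blend)."""
    return [tau * e + (1 - tau) * t for e, t in zip(eval_params, target_params)]

y = td_target(reward=1.0, gamma=0.9, target_q_next=2.0)   # 1.0 + 0.9*2.0 = 2.8
loss = mse_loss(y, q_eval=2.5)                            # squared TD error, about 0.09
new_target = soft_update([1.0, 0.0], [0.0, 0.0])          # target drifts slowly toward eval
print(y, loss, new_target)
```

The slow target update is what keeps yi quasi-stationary between gradient steps, the usual stabilization trick in Q learning with function approximation.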

The feedback Nash equilibrium of the mean field cooperative game with M defenders is a joint traffic weight configuration strategy action value, and the traffic weight configuration strategy satisfies the following condition:

(2) According to the optimal control strategy obtained in step (1), the traffic weight at the cooperative defense edge node end is reconfigured so that the cooperative defense edge node traffic weight reaches the Nash equilibrium state.

The invention slows or suppresses internal DDoS attack traffic by controlling the traffic weights of the high-density computing task offload connections of the sensing devices, thereby maximizing the average computing task offload amount of the sensing devices. In the sensing edge cloud network, to obtain the active defense strategy with optimal traffic control for the defenders, the invention models the uncertain DDoS attacks that internal malicious nodes launch against multiple edge nodes over the high-density computing task offload connections as a dynamic random game (DSG); to solve the dynamic random game in which multiple defenders participate, the invention converts the DSG into a mean field game (MFG). The Hamilton-Jacobi-Bellman (HJB) and Fokker-Planck-Kolmogorov (FPK) equations are constructed with the mean field method to obtain the optimal solutions. Because solving the HJB and FPK equations for the traffic weight control strategies of multiple defenders has high complexity and time cost, the invention provides an active traffic weight control algorithm based on mean field reinforcement learning, minimizing the complexity of solving the multi-defender traffic weight control strategies.

The following are examples:

An active defense method against internal DDoS attack based on traffic weight control comprises the following steps:

(1) In a defense period t, for each cooperative defense edge node i to be decided and the set of other defense collaborators {-i}, a dynamic random game model is adopted to obtain the traffic weight of the cooperative defense edge node at which the cost function is minimal in the Nash equilibrium state, and the control strategy is calculated from the current traffic weight of the cooperative defense edge node. The control strategy is the set of all defense collaborator traffic weights {wi(t), w-i(t)} within the attack duration [0, T];

The dynamic random game Gs is written as:

where the game participants comprise, among all sensing device nodes, the cooperative defense edge node i, the other defense collaborators -i, and possibly DDoS attackers; the accompanying term denotes the number of all game participants.

W(t) is the traffic weight space, W(t) = {{wo(t)}, {wi(t), w-i(t)}}, where wo(t) ∈ Wo and wi(t), w-i(t) ∈ Wi; wo(t) is the communication frequency, i.e. the traffic weight, on the connection between the defender and attacker o, that is, the traffic weight taken by the internal DDoS attacker o, and womax is the maximum traffic weight allowed for attacker o; wi(t) is the communication frequency, i.e. the traffic weight, on the connection between cooperative defense edge node i and the sensing device nodes, and w-i(t) is the communication frequency, i.e. the traffic weight, on the connections between the other defense collaborators -i and the sensing device nodes, that is, the traffic weights taken by the cooperative defense edge nodes, with wimax the maximum traffic weight allowed for the defenders;

S(t) is the state space, S(t) = {θo(t), θi(t)}, o ∈ N, i ∈ M, where N is the number of internal DDoS attackers and M is the number of cooperative defense edge nodes; θo(t) is the traffic state of an internal DDoS attacker, and θi(t) is the traffic state observed by defense collaborator i; qo(t) is the attack rate of an internal DDoS attacker, and wo(t) is the communication frequency, i.e. the traffic weight, on the connection with attacker o; in the expression for θi(t), qo(t)wo(t) is the traffic from the internal DDoS attacker o and the summation term is the sum of the traffic from the other sensing devices, where qj(t) is the transmission rate of the other sensing device j and wj(t) is the communication frequency, i.e. the traffic weight, on the connection with the other sensing device j.
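The observed traffic state combines attacker traffic with legitimate device traffic as rate-times-weight products; a sketch with illustrative rates and weights:

```python
def observed_traffic_state(q_o, w_o, device_rates, device_weights):
    """theta_i(t) = q_o(t)*w_o(t) + sum_j q_j(t)*w_j(t): traffic from the
    internal DDoS attacker plus traffic from the other sensing devices."""
    attacker_traffic = q_o * w_o
    legitimate_traffic = sum(q * w for q, w in zip(device_rates, device_weights))
    return attacker_traffic + legitimate_traffic

# illustrative values: attack rate 10.0 at weight 0.5, two legitimate devices
theta = observed_traffic_state(q_o=10.0, w_o=0.5,
                               device_rates=[1.0, 2.0], device_weights=[0.4, 0.3])
print(theta)  # 10*0.5 + 1*0.4 + 2*0.3 = 6.0
```

Lowering the weight on the attacker's connection shrinks the dominant first term, which is exactly the lever the traffic weight control strategy manipulates.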

During the computing task offload process of the sensing devices, internal DDoS attack severely reduces the task offload amount in the sensing edge cloud network. Under internal DDoS attack, the computing task offload amount is related to the receiving rate and the traffic weight of the edge node. Therefore, the invention provides an internal DDoS attack perception model to analyze the computing task offload traffic and improve the average computing task offload amount of the system. In addition, game theory provides an ideal framework for handling attack-defense interaction problems with multiple game participants. Thus, the internal DDoS attackers and the edge nodes are taken as the game participants of the game framework, with N attackers and M defenders.

(1) The state equation of the attack traffic initiated by the internal DDoS attacker o to the M cooperative defense edge nodes is as follows:

where o ∈ [1, N], qo(t) is the attack rate of the internal DDoS attacker, and wo(t) is the communication frequency on each connection, referred to here as the weight.

(2) The edge nodes act as defenders and cooperatively control the traffic weights to defend against the internal DDoS attack; the traffic state equation observed by each defender i is:

where i ∈ [1, M], j ∈ [1, N-1] and j ≠ i; the first term represents the traffic from the internal DDoS attacker o, and the second term represents the traffic from the other sensing devices; qj(t) is the transmission rate of the other sensing device j, and wj(t) is the weight of the communication connection with the other sensing device j.

To actively defend against internal DDoS attackers, the action that cooperative defender i can take is to set the task offload connection weight, up to a maximum of wimax; the action taken by the internal DDoS attacker is to set the task offload connection weight, up to a maximum of womax, where wimax and womax are the maximum task offload connection weights allowed for the defender and the internal DDoS attacker, respectively. The credibility state of the sensing edge cloud computing task offload traffic is determined by the traffic weight values of the weight control strategy, corresponding respectively to wi(t), w-i(t) ∈ Wi and wo(t) ∈ Wo, where w-i(t) is the traffic weight taken by the cooperative defenders other than defender i. Further, the invention considers the dynamics and uncertainty of the task offload traffic of internal DDoS attackers and edge nodes in the sensing edge cloud network; therefore, an Ornstein-Uhlenbeck dynamic equation is adopted to model the dynamic change of the internal DDoS attack traffic state:

where μ and σ represent the mean and variance of the internal DDoS attack rate, respectively. B(t) is a standard Brownian motion function, τ denotes the number of time intervals, εi is a random value drawn from the standard normal distribution, and Δt is the variance of the Brownian motion variation. B(t) describes the uncertainty of the dynamic change of the internal DDoS attack rate. All internal DDoS attack rate dynamic equations are assumed to use the same μ and σ values. At a fixed time t, the traffic weight is unchanged and constant, so the dynamic change equation of the internal DDoS attack traffic state is:

the dynamic change equation of the internal DDoS attack flow state is obtained as follows:

similarly, the flow dynamics equation of the cooperative defense edge node is:

J(t) is the cost function; considering the traffic state and the task offload threshold when an edge node is under internal DDoS attack, the invention adopts a quadratically increasing function as the cost function J(t), as follows:

where qth is the task offload threshold: if the computing task offload of a sensing device exceeds this threshold, the sensing device has been hijacked and has become an internal DDoS attacker, interfering with the normal computing task offload process of legitimate sensing devices; θi(t) is the traffic state observed by defense collaborator i, and σ2(t) is the variance of the internal DDoS attack rate.

In the sensing edge cloud network, the traffic weight control strategies of the defense collaborators and the internal DDoS attack behaviors are both related to the computing task offload amount. The computing task offload amount generated by the internal DDoS attacker o is:

for cooperative defender i (edge node) and other cooperative defenders-i, the calculated task offload amounts received for each are:

the invention uses the same task offload amount threshold qthTo measure the computational task offloading behavior of the sensing device. If the calculated task unloading amount of the sensing equipment exceeds the threshold value, the sensing equipment is hijacked and becomes an internal DDoS attacker, and the normal calculation task unloading process of the legal sensing equipment is interfered. Because the flow weight of an internal DDoS attacker cannot be controlled and a cooperative defender can only control the flow weight of an edge node end, the invention designs an active flow weight control strategy taking the edge node as a center and only considers phii(t),φ-i(t)≥qthThe case (1). At this time, the condition is satisfied:

defining functions

To minimize the internal DDoS attack traffic, the invention designs a cost function that combines the traffic state observed by the edge node with the traffic threshold of the internal DDoS attacker. When the computing task offload of a sensing device exceeds the threshold, an internal DDoS attack occurs, and the defenders minimize the cost function by cooperatively adjusting the traffic weights. The cost function is expressed as follows:

in order to conveniently analyze the dynamic property of internal DDoS attack flow, J (t) is more than 0, and a secondary increasing function is used as a cost function, so that the cost function can reduce the damage degree of the internal DDoS attack on a task unloading process by controlling the flow weight.

The optimal control strategy is the set of all defense collaborator action values that minimizes the average cost function over the attack duration [0, T]; namely:

where ηT is the cost at time T. Over the internal DDoS attack duration [0, T], each defender (edge node) decides its optimal strategy to minimize the cost function value.

The dynamic random game model describes the attack action space of the internal DDoS attacker and the action space of the defenders, which facilitates the design of a distributed active defense algorithm with multi-edge-node cooperation. In addition, the game model considers the dynamic randomness of the internal DDoS attack traffic state, and the influence of the attack characteristics on the optimal strategy solution is incorporated into the cost function. The invention characterizes these influences with a value function.

For the dynamic random game Gs, the value function u(t, s(t)) at time t and state s(t) is defined as follows:

where u(t, s(t)) is the value function at time t and state s(t). According to the Bellman optimality principle, the final optimal strategy depends on the result of the preceding optimal strategies. It can thus be derived that, for the attack duration t ∈ [0, T], if the value function of the final optimal strategy is reached, then w*(t → T) is the optimal task offload traffic weight.

Under the Nash equilibrium state, the optimal control strategy is as follows:

where the two action values are the traffic weight configuration action values of defense collaborator i and the other defense collaborators -i when the value function satisfies the Nash equilibrium condition; the Nash equilibrium condition of the game model Gs is:

where wi*(t) is the optimal traffic weight taken by cooperative defense edge node i, w-i*(t) is the optimal traffic weight taken by the other cooperative defense edge nodes -i, and u(T) is the value function value at time T.

At this time:

At the optimal traffic weights wi*(t) and w-i*(t), the normal task offload traffic and the suppressed DDoS attack traffic reach a balanced state, and the cost function is minimal. However, since the number h of network nodes in the high-density offload connections of the sensing edge cloud tasks is huge, obtaining the Nash equilibrium solution directly is very difficult. The invention therefore converts the dynamic random game (DSG) into a mean field game (MFG) to solve it, which enables each cooperative defense edge node, when facing an internal DDoS attack over high-density connections, to optimize its weight configuration strategy based on its self-observed traffic state.

Preferably, the dynamic random game model is solved approximately with a mean field game; the control strategy that maximizes the task offload capacity profit R(t), i.e. minimizes the cost function J(t), in the Nash equilibrium state of the mean field game is taken as the optimal control strategy.

The mean field game is a special differential game in which each game participant interacts with a large number of other game participants. The invention mainly addresses cooperative defense among multiple edge nodes, so the mean field game here is a mean field cooperative game model, expressed as the pair (u(t, sm(t)), v(t, s)), where u(t, sm(t)) is the value function of cooperative defense edge node i and v(t, s) is the probability distribution of the traffic weights of all cooperative defense edge nodes, expressed as:

where h is the number of network nodes in the high-density offload connections of the sensing edge cloud tasks; sm(t)=[si(t),s-i(t)] is the traffic state observed by all cooperative defense edge nodes, si(t) the traffic state observed by cooperative defense edge node i, and s-i(t) the traffic state observed by the other defense collaborators -i; I is an indicator function whose value is 1 when the traffic state sm(t) observed by the cooperative defense edge nodes equals the traffic state s, and 0 otherwise; the traffic state s is a settable parameter;

when being attacked by internal DDoS, the flow state s observed by all cooperative defense edge nodes is givenm(t)=[si(t),s-i(t)]The mean field of the cooperative defense edge nodes is the probability distribution of the traffic weights of all the cooperative defense edge nodes. And for a given moment t, calculating the probability distribution of the task unloading flow state on the cooperative defense edge node set when the average field represents the attack of the internal DDoS. And the cooperative defense edge nodes update the value functions in the process of executing the distributed flow weight configuration strategy action.

In the cooperative defense process, the traffic weight configuration policy action of the cooperative defense edge node i will affect the traffic weight configuration policy action of other cooperative defense edge nodes, and the traffic state change of the cooperative defense edge node i is represented as:

dsi(t) = wo(t)dqo(t) + σ2(t)dB(t)

the traffic state change of the other cooperative defense edge nodes-i is represented as:

ds-i(t) = w-i(t)ω-i(t)dt + σ2(t)dB(t)

where B(t) is a standard Brownian motion function, τ denotes the number of time intervals, εi is a random value drawn from the standard normal distribution, and Δt is the variance of the Brownian motion variation.

For the mean field game (u(t, sm(t)), v(t, s)), its value function u(t, sm(t)) is:

where R(t) is the reward function, calculated as follows:

where ω is a penalty factor, i.e. the loss of the traffic sum over the attack duration when the defenders' cooperative action is not permitted; Δhi(t)=hi(t)-hi(t-1), Δwi(t)=wi(t)-wi(t-1); ξt is, in a system with M edge nodes, the fairness factor of the traffic distribution of each edge node under the traffic weight reconfiguration strategy, calculated as follows:

where xi=hi(t)/qi(t), hi(t) is the receiving rate of cooperative defense edge node i, and qi(t)=qo(t) is the internal DDoS attack rate.

Mean field cooperative gaming is a dynamic optimization process. Over the internal DDoS attack duration t ∈ [0, T], each cooperative defense edge node optimizes its traffic weight to maximize its task offload capacity revenue Ri(t); the solution of the mean field cooperative game is a cooperative feedback Nash equilibrium, where the feedback takes the form of rewards. Therefore:

the average field game Nash equilibrium state is that the flow weight of the edge node is defended in a cooperative waySatisfies the followingConditions are as follows:

at the moment, the probability distribution of the flow weight of all cooperative defense edge nodes reaches the optimal v*(t, s) and minimizes the cost function. And the cooperative defense nodes inhibit the DDoS attack flow at the Nash equilibrium point, and simultaneously ensure the normal task unloading flow and the equilibrium of the inhibited DDoS attack flow by maximizing the profit.

When the mean field game reaches the feedback Nash equilibrium, the defender obtains the optimal strategy action value w*(t), the distribution of the edge node traffic state reaches the optimum v*(t, s), and the following is satisfied:

For rational cooperative defense nodes, once the equilibrium traffic weight control strategy action value w*(t) is adopted, no other strategy is adopted, and the probability distribution of the corresponding edge node traffic state is v*(t, s).

The method uses stochastic partial differential equations to obtain the solution of the feedback Nash equilibrium strategy of the mean field cooperative game: the cooperative defense nodes observe the traffic state sm(t) at any time t and all traffic states over the internal DDoS attack duration t ∈ [0, T], and find the optimal traffic weight w*(t) to slow or suppress the internal DDoS attack traffic.

According to optimal control theory and the Bellman optimality principle, the value function u(t, sm(t)) at the optimal traffic weight w*(t) is obtained by solving the HJB equation that minimizes the cost function of the cooperative defense edge nodes, and the probability distribution v*(t, s) of the optimal traffic weight at w*(t) is calculated with the FPK equation;

The minimized cost function HJB equation of the cooperative defense edge node is as follows:

If an optimal solution of the above equation exists, the value function u(t, s_m(t)) can be obtained from the HJB equation, where the traffic state in the value function corresponds to the optimal traffic weight w*(t) of the cooperative defense edge node.
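The patent's specific HJB equation is not reproducible from this text; for orientation only, a generic stochastic-control HJB equation for a running cost c, drift f, and diffusion coefficient σ (our notation, not the patent's) has the form:

```latex
-\frac{\partial u(t,s)}{\partial t}
  = \min_{w_i(t)\in W_i}\left\{ c\big(s, w_i(t)\big)
  + f\big(s, w_i(t)\big)\,\frac{\partial u(t,s)}{\partial s}
  + \frac{\sigma^{2}}{2}\,\frac{\partial^{2} u(t,s)}{\partial s^{2}} \right\}
```

The minimizer over the admissible traffic weights W_i at each state gives the optimal feedback control.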

The FPK equation for computing the probability distribution v*(t, s) of the traffic state at the optimal traffic weight w*(t) is:

The key to solving the HJB and FPK equations is to obtain the initial probability distribution v_0(t, s) and to update the value function u(t, s_m(t)) according to the Bellman principle, so as to obtain the optimal traffic-weight control policy action value w*(t). The whole solving process is computationally expensive.
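For orientation only, a generic FPK (Fokker-Planck-Kolmogorov) equation transporting the state density v(t, s) under the optimal control, in the same assumed notation (drift f, diffusion coefficient σ; not the patent's exact equation), reads:

```latex
\frac{\partial v(t,s)}{\partial t}
  = -\frac{\partial}{\partial s}\Big[ f\big(s, w^{*}(t)\big)\, v(t,s) \Big]
  + \frac{\sigma^{2}}{2}\,\frac{\partial^{2} v(t,s)}{\partial s^{2}}
```

The HJB equation is solved backward in time for u while the FPK equation is solved forward in time for v, which is what makes the coupled system costly to solve directly.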

Given the initial-state probability distribution v_0(t, s), the optimal traffic-weight control policy action value w*(t) is solved by updating the value function u(t, s_m(t)). Under an internal DDoS attack, the final traffic state of a cooperative defense edge node is defined as the sum of its traffic r(t) over the attack duration.

Preferably, model-free reinforcement learning is used to update the value function and solve the HJB equation for the optimal weight; more specifically, a reinforcement-learning Q function is used to update the value function and solve the HJB equation, as follows:

The reinforcement learning sample is: D_e1 = (s_m(t), w_{-i}(t), R_i(t), s_m(t+1)), where s_m(t) = [s_i(t), s_{-i}(t)] is the traffic state observed by all cooperative defense edge nodes, w_{-i}(t) is the communication frequency, i.e. traffic weight, on the connections between the other defense collaborators -i and the sensing device nodes, R_i(t) is the value of the reward function of cooperative defense edge node i, i.e. the reward obtained, and s_m(t+1) is the traffic state observed by all cooperative defense edge nodes in the next decision period.
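As a minimal illustration of this sample structure and the experience pool used later in the algorithm (the names `Experience` and `ReplayBuffer` are ours, not the patent's), a sketch in Python:

```python
import random
from collections import deque, namedtuple

# One reinforcement-learning sample D_e1 = (s_m(t), w_{-i}(t), R_i(t), s_m(t+1)).
Experience = namedtuple("Experience", ["s_m", "w_minus_i", "R_i", "s_m_next"])

class ReplayBuffer:
    """Fixed-size experience pool with uniform random sampling."""
    def __init__(self, capacity=10000):
        self.pool = deque(maxlen=capacity)

    def store(self, exp):
        self.pool.append(exp)

    def sample(self, k):
        # uniform sampling without replacement, as in standard off-policy RL
        return random.sample(self.pool, k)

# Usage: store one observed transition and draw a training batch.
buf = ReplayBuffer()
buf.store(Experience(s_m=[0.4, 0.6], w_minus_i=[0.3], R_i=1.2, s_m_next=[0.5, 0.5]))
batch = buf.sample(1)
```

A bounded `deque` keeps the pool at fixed capacity by discarding the oldest transitions, which is the usual replay-buffer design choice.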

The traffic-weight-parameterized Q value and update function of the reinforcement-learning Q function for cooperative defense edge node i are as follows:

where α denotes the learning rate and R_i(t) is the reward function,

where the average traffic weight of the cooperative defense edge nodes is taken over the set of cooperative defenders other than defender i, whose size is denoted M_i; π_i denotes the probability distribution of the traffic-weight control policy of cooperative defense edge node i, and the probability distribution of the traffic-weight control policies of the cooperative defense edge nodes other than node i is represented by:

which is calculated from the average action value at the previous moment;

where β is a settable constant representing the exploration-rate temperature hyperparameter.
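A temperature-controlled policy of this kind is conventionally a Boltzmann (softmax) distribution over Q values; the sketch below shows that standard form under our own naming, without claiming it matches the patent's exact equation:

```python
import math

def boltzmann_policy(q_values, beta):
    """Map the Q values of each candidate traffic weight to a probability
    distribution. beta is the exploration-rate temperature hyperparameter:
    larger beta concentrates probability on high-Q actions, smaller beta
    explores more uniformly."""
    m = max(q_values)  # subtract the max for numerical stability
    exps = [math.exp(beta * (q - m)) for q in q_values]
    z = sum(exps)
    return [e / z for e in exps]

# Three candidate traffic weights with Q values 1.0, 2.0, 0.5.
probs = boltzmann_policy([1.0, 2.0, 0.5], beta=1.0)
```

The highest-Q action receives the largest probability mass, while every action keeps nonzero probability, which preserves exploration.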

The loss function is:
L(\theta_i) = \frac{1}{2}\left(y_i - Q_{\theta_i}\big(s_m(t), w_i(t), \bar{w}_{-i}(t)\big)\right)^{2}
where y_i is the target mean-field Q value, estimated by the target network and adjusted through the target network parameters; γ is the discount factor; the mean-field Q function value of state s_m(t) is obtained using an evaluation network with its own adjustable parameters.
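A minimal numeric sketch of the TD target and squared loss described above (our own toy values; the patent's target and evaluation networks are replaced here by plain numbers):

```python
def mf_q_loss(R_i, gamma, target_q_next, eval_q_current):
    """Squared TD error: the target y_i = R_i + gamma * (target-network
    mean-field Q value of the next state) is compared with the evaluation
    network's mean-field Q value for the current state."""
    y_i = R_i + gamma * target_q_next
    return (y_i - eval_q_current) ** 2

# y_i = 1.0 + 0.9 * 2.0 = 2.8; loss = (2.8 - 2.5)^2 ≈ 0.09
loss = mf_q_loss(R_i=1.0, gamma=0.9, target_q_next=2.0, eval_q_current=2.5)
```

Using a separate, slowly-updated target network to form y_i is the standard device for stabilizing this kind of bootstrapped loss.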

The training gradient of the reinforcement-learning Q function is:
\nabla_{\theta_i} L(\theta_i) = -\left(y_i - Q_{\theta_i}\big(s_m(t), w_i(t), \bar{w}_{-i}(t)\big)\right)\nabla_{\theta_i} Q_{\theta_i}\big(s_m(t), w_i(t), \bar{w}_{-i}(t)\big)
The convergence condition of the reinforcement-learning Q function is that the mean-field game feedback Nash equilibrium is reached.

The traffic weight of the cooperative defense edge nodes at the moment the reinforcement-learning Q function converges is taken as the optimal cooperative defense edge node traffic weight.

Based on the mean-field game value function, the value function in the HJB equation can be approximated by reinforcement learning while the optimal traffic weight is obtained. In the invention, M cooperative defense edge nodes are considered to take defense actions cooperatively, and they need to estimate the action value of the joint defense strategy. To this end, traditional reinforcement learning is extended to mean-field multi-player reinforcement learning, and the Q function in reinforcement learning is used to approximate the value function in the HJB equation. The Q function parameterized by the traffic state and the traffic weights of the cooperative defense edge nodes is:

where M(i) denotes the set of cooperative defense edge nodes other than node i, with size M_i = |M(i)|. The average action value of the traffic-weight control strategy is computed over the set M(i), and the Q function parameterized by the traffic state and the traffic weights of the cooperative defense edge nodes can be approximated as:

Owing to the mean-field approximation, the Q function of the mean-field cooperative game between cooperative defense edge nodes is simplified to:

Thus, the mean-field multi-player reinforcement learning problem is converted into solving for the optimal strategy of cooperative defender i, which is related to the average action value of the other cooperating defenders, and

where the average action value at the previous moment is used: the traffic weight w_{-i}(t) of the other cooperating defenders is decided by their policy, which is influenced by the average traffic weight at the previous moment. The policy is then updated according to the average traffic weight, and the relation between the policy and the average action is:
\pi_i\big(w_i(t)\,\big|\,s_m(t), \bar{w}_{-i}(t)\big) = \frac{\exp\!\big(\beta\, Q_i(s_m(t), w_i(t), \bar{w}_{-i}(t))\big)}{\sum_{w' \in W_i} \exp\!\big(\beta\, Q_i(s_m(t), w', \bar{w}_{-i}(t))\big)}
where β denotes the exploration-rate temperature hyperparameter, a settable constant.
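The averaging over the neighbor set M(i) used throughout this section can be sketched as follows (the helper name is ours; this simply excludes node i's own weight and averages the rest):

```python
def mean_action(weights, i):
    """Average traffic weight over the set M(i) of cooperative defense
    edge nodes other than node i; the set size is M_i = len(weights) - 1."""
    others = [w for j, w in enumerate(weights) if j != i]
    return sum(others) / len(others)

# Node 0's mean-field view of nodes 1 and 2 (weights 0.4 and 0.6).
w_bar = mean_action([0.2, 0.4, 0.6], i=0)
```

Replacing the joint action of all other defenders by this single scalar is exactly what collapses the joint Q function to the mean-field form above.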

Preferably, the average action value of the other collaborators is used to approximate the communication frequency, i.e. the traffic weight w_{-i}(t), on the connections between the other defense collaborators -i and the sensing device nodes, as follows:

The reinforcement learning sample is (s_m(t), w_vec, the average traffic weight, R_vec, s_m(t+1)), where s_m(t) = [s_i(t), s_{-i}(t)] is the traffic state observed by all cooperative defense edge nodes, w_vec = [w_1(t), ..., w_M(t)] is the traffic-weight vector of all cooperative defense edge nodes, the average traffic weight is taken over the cooperative defense edge nodes, R_vec = [R_1(t), ..., R_M(t)], R_i(t) is the value of the reward function of cooperative defense edge node i, i.e. the reward obtained, and s_m(t+1) is the traffic state observed by all cooperative defense edge nodes in the next decision period.

The target network parameters of the traffic-weight-parameterized Q value of the reinforcement-learning Q function for cooperative defense edge node i are updated to:

where α denotes the learning rate; the evaluation network parameters and the target network parameters are initialized to preset values. The evaluation network parameters are updated by stochastic gradient descent, and the target network parameters are then updated from them, as shown in Fig. 2.
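The described update order (SGD on the evaluation network, then moving the target network toward it with rate α) is commonly realized as a soft update; a sketch under that assumption, with parameters reduced to plain lists of numbers:

```python
def soft_update(target_params, eval_params, alpha):
    """Move each target-network parameter a fraction alpha of the way
    toward the corresponding evaluation-network parameter."""
    return [(1 - alpha) * t + alpha * e
            for t, e in zip(target_params, eval_params)]

# With alpha = 0.1, the target tracks the evaluation network slowly.
new_target = soft_update([0.0, 1.0], [1.0, 1.0], alpha=0.1)
```

A small α keeps the target network nearly stationary between updates, which stabilizes the TD targets used in the loss.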

The loss function is:
L(\theta_i) = \frac{1}{2}\left(y_i - Q_{\theta_i}\big(s_m(t), w_i(t), \bar{w}_{-i}(t)\big)\right)^{2}
where y_i is the target mean-field Q value, estimated by the target network and adjusted through the target network parameters; γ is the discount factor; the mean-field Q function value of state s_m(t) is obtained using the evaluation network and adjusted through the evaluation network parameters;

The training gradient of the reinforcement-learning Q function is:
\nabla_{\theta_i} L(\theta_i) = -\left(y_i - Q_{\theta_i}\big(s_m(t), w_i(t), \bar{w}_{-i}(t)\big)\right)\nabla_{\theta_i} Q_{\theta_i}\big(s_m(t), w_i(t), \bar{w}_{-i}(t)\big)
The convergence condition of the reinforcement-learning Q function is that the feedback Nash equilibrium of the mean-field game is reached; the traffic weight of the cooperative defense edge nodes at that moment is taken as the optimal cooperative defense edge node traffic weight.

The feedback Nash equilibrium of the mean-field cooperative game with M defenders is a joint traffic-weight configuration strategy action value, and the traffic-weight configuration strategy satisfies the following condition:

Specifically, the procedure can be represented as follows, as shown in Fig. 3:

Step 1: Initialize the evaluation network parameters, the target network parameters, and the average action value of the other cooperative defenders; mark the state as not yet at Nash equilibrium, i.e. Flag = 1.

Step 2: While Flag = 1 do

Step 3: For i = 1 to N do

① For each defender i, sample the traffic weight w_i(t); using the current average traffic weight, compute the policy as follows:

② For each defender i, compute the new average action value as follows:

③ For each defender, take the joint traffic-weight control reinforcement-learning action value w_vec = [w_1(t), ..., w_M(t)], and observe its reward R_vec = [R_1(t), ..., R_M(t)] and the next traffic state s_m(t+1).

④ Store the sample in the experience pool D, where:

End for

Step 4: For i = 1 to M do

① Sample k experiences from the experience pool D.

② Sample the corresponding previous-moment quantities from the experience pool, and

③ Set:

④ Update the evaluation network parameters by minimizing the loss function:

⑤ For each defender, update the target network parameters using the learning rate α:

End for

Step 5: When the feedback Nash equilibrium condition is reached, training ends and Flag = 0; otherwise, continue executing Step 4.

End while

Step 6: Output the optimal action value corresponding to each defender's state s_m(t).
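Steps 1 through 6 above can be skeletonized as a single training loop; in the sketch below, all function names are ours, and `env_step`, `q_update`, and `nash_reached` are toy stand-ins for the sampling, network-update, and equilibrium-check substeps of Steps 3 to 5:

```python
def train_mean_field_q(num_defenders, env_step, q_update, nash_reached,
                       max_iters=1000):
    """Skeleton of Steps 1-6: iterate until the feedback Nash equilibrium
    condition holds (Step 5), then return the final joint action (Step 6)."""
    w_vec = [0.5] * num_defenders            # Step 1: initialization
    flag = 1                                 # Flag = 1: not yet at equilibrium
    iters = 0
    while flag == 1 and iters < max_iters:   # Step 2
        w_vec, rewards = env_step(w_vec)     # Step 3: act, observe rewards
        q_update(w_vec, rewards)             # Step 4: update Q networks
        if nash_reached(w_vec):              # Step 5: equilibrium check
            flag = 0
        iters += 1
    return w_vec                             # Step 6: optimal action values

# Toy stand-ins: traffic weights are pulled halfway toward a fixed point
# each round, so the "equilibrium" check converges quickly.
TARGET = 0.3
def env_step(w_vec):
    new_w = [w + 0.5 * (TARGET - w) for w in w_vec]
    return new_w, [0.0] * len(new_w)
def q_update(w_vec, rewards):
    pass  # network updates omitted in this sketch
def nash_reached(w_vec):
    return all(abs(w - TARGET) < 1e-3 for w in w_vec)

w_star = train_mean_field_q(2, env_step, q_update, nash_reached)
```

The `max_iters` guard is a practical safety bound; the patent's own termination criterion is purely the equilibrium condition.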

(2) According to the optimal control strategy obtained in step (1), the traffic weights on the cooperative defense edge node side are reconfigured so that the traffic weights of the cooperative defense edge nodes reach the Nash equilibrium state.

The edge sensing systems before and after defense using the present invention are shown in fig. 3(a) and 3(b), respectively.

It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.
