Intelligent offloading optimization method and system based on federated learning

Document No.: 8677    Publication date: 2021-09-17

1. An intelligent offloading optimization method based on federated learning, characterized by comprising the following steps:

learning the characteristics of computing tasks by adopting a multi-layer perceptron model, classifying the tasks by type, and training by adopting a federated learning training model;

calculating an execution score for each type of task, and taking the top N tasks ranked by execution score as a set, N being a positive integer;

calculating the total service cost of the system; when the CPU frequency of the local device is less than a given proportion of the edge server frequency and the current task is not in the set, the task is executed locally; when the CPU frequency of the local device is less than the given proportion of the edge server frequency and the current task is in the set, the task is executed at the edge server; otherwise, the task is executed at the cloud server;

when the total service cost in time slot t is lower than that in the previous time slot t-1, updating the offloading policy x_i(t), where x_i(t) denotes the working state of computing node i in time slot t, and i ∈ {e, l, c} denotes the edge server, the local device and the cloud server respectively; x_i(t) = 1 denotes that device i executes the computing task, i.e., the computing task is offloaded to device i for execution; x_i(t) = 0 denotes that device i does not execute the computing task.

2. The intelligent offloading optimization method based on federated learning of claim 1, wherein learning the characteristics of computing tasks with a multi-layer perceptron model, classifying the tasks by type, and training with a federated learning training model specifically comprises the following steps:

initializing parameters: initializing, for each time slot t, the CPU frequencies of the local device and the edge server as well as the total CPU frequencies of the local devices and the edge servers;

initializing the model: after determining the structure of the multi-layer perceptron training model, initializing the model parameters, namely the local device model parameters, the edge server model parameters, the cloud server model parameters and the feature vector; the rectified linear unit (ReLU) is selected as the activation function;

hidden-layer processing and model testing: the output of the l-th hidden layer is h_l = ReLU(W_l h_{l-1} + b_l) (with h_0 taken as the input feature vector), where W_l is the weight, b_l is the bias, and L is the number of hidden layers;

testing the trained model with a loss function, where the per-sample loss is ℓ(y_n, ŷ_n), in which n indexes the samples, M is the total number of samples, y_n denotes the actual label and ŷ_n denotes the predicted output;

the cost function is the average of the loss function over all training samples and is expressed as:

C = (1/M) Σ_{n=1}^{M} ℓ(y_n, ŷ_n),

where n indexes the samples, M is the total number of samples, y_n denotes the actual label and ŷ_n denotes the predicted output.

3. The intelligent offloading optimization method based on federated learning of claim 2, wherein a federated learning training model is employed for task training, specifically comprising the following steps:

using the model parameters w as the information communicated between the Internet of Things devices, the edge servers and the cloud server, where w denotes the model parameters;

when each round of training starts, P local Internet of things devices are randomly selected to participate in the round of training, wherein P is a positive integer;

for the P devices, each device downloads model parameters from the edge server and initializes the model parameters, updates the parameters after training and carries out edge aggregation, namely all the devices upload the model parameters to the edge server;

and updating the parameters of the edge servers and carrying out global aggregation, namely uploading the model parameters of all the edge servers to a global cloud server and updating the parameters of the cloud server.

4. The intelligent offloading optimization method based on federated learning of claim 1, wherein classifying the tasks specifically comprises:

tasks of low complexity that are sensitive to delay are executed on the local Internet of Things device;

tasks of high complexity that are sensitive to delay are executed on the edge server;

tasks of high complexity that are insensitive to delay are executed on the cloud server;

tasks that cannot be executed on the local Internet of Things device and are insensitive to delay are executed on the edge server.

5. The intelligent offloading optimization method based on federated learning as claimed in claim 3, wherein calculating the execution score of each type of task and taking the top N tasks ranked by execution score as a set specifically comprises:

the probability that a task is suitable for execution on the edge server is p, so the probability that it is not suitable for execution on the edge server is 1 − p, and the ratio of the two is expressed as p/(1 − p);

the execution score is computed from this ratio by the scoring formula, in which a and b are constants.

6. An intelligent offloading optimization system based on federated learning, comprising:

a task classification unit: configured to learn the characteristics of computing tasks by adopting a multi-layer perceptron model, classify the tasks by type, and train by adopting a federated learning training model;

a score calculation unit: configured to calculate an execution score for each type of task and take the top N tasks ranked by execution score as a set, N being a positive integer;

a cost calculation unit: configured to calculate the total service cost of the system; when the CPU frequency of the local device is less than a given proportion of the edge server frequency and the current task is not in the set, the task is executed locally; when the CPU frequency of the local device is less than the given proportion of the edge server frequency and the current task is in the set, the task is executed at the edge server; otherwise, the task is executed at the cloud server;

an offloading policy update unit: configured to update the offloading policy x_i(t) when the total service cost in time slot t is lower than that in the previous time slot t-1, where x_i(t) denotes the working state of computing node i in time slot t, and i ∈ {e, l, c} denotes the edge server, the local device and the cloud server respectively; x_i(t) = 1 denotes that device i executes the computing task, i.e., the computing task is offloaded to device i for execution; x_i(t) = 0 denotes that device i does not execute the computing task.

7. The intelligent offloading optimization system based on federated learning of claim 6, wherein the task classification unit is configured to learn the characteristics of computing tasks by adopting a multi-layer perceptron model, classify the tasks by type, and train by adopting a federated learning training model, specifically comprising the following steps:

initializing parameters: initializing, for each time slot t, the CPU frequencies of the local device and the edge server as well as the total CPU frequencies of the local devices and the edge servers;

initializing the model: after determining the structure of the multi-layer perceptron training model, initializing the model parameters, namely the local device model parameters, the edge server model parameters, the cloud server model parameters and the feature vector; the rectified linear unit (ReLU) is selected as the activation function;

hidden-layer processing and model testing: the output of the l-th hidden layer is h_l = ReLU(W_l h_{l-1} + b_l) (with h_0 taken as the input feature vector), where W_l is the weight, b_l is the bias, and L is the number of hidden layers;

testing the trained model with a loss function, where the per-sample loss is ℓ(y_n, ŷ_n), in which n indexes the samples, M is the total number of samples, y_n denotes the actual label and ŷ_n denotes the predicted output;

the cost function is the average of the loss function over all training samples and is expressed as:

C = (1/M) Σ_{n=1}^{M} ℓ(y_n, ŷ_n),

where n indexes the samples, M is the total number of samples, y_n denotes the actual label and ŷ_n denotes the predicted output.

8. The intelligent offloading optimization system based on federated learning of claim 7, wherein a federated learning training model is used for task training, specifically comprising the following steps:

using the model parameters w as the information communicated between the Internet of Things devices, the edge servers and the cloud server, where w denotes the model parameters;

when each round of training starts, P local Internet of things devices are randomly selected to participate in the round of training, wherein P is a positive integer;

for the P devices, each device downloads model parameters from the edge server and initializes the model parameters, updates the parameters after training and carries out edge aggregation, namely all the devices upload the model parameters to the edge server;

and updating the parameters of the edge servers and carrying out global aggregation, namely uploading the model parameters of all the edge servers to a global cloud server and updating the parameters of the cloud server.

9. The intelligent offloading optimization system based on federated learning of claim 6, wherein classifying the tasks specifically comprises:

tasks of low complexity that are sensitive to delay are executed on the local Internet of Things device;

tasks of high complexity that are sensitive to delay are executed on the edge server;

tasks of high complexity that are insensitive to delay are executed on the cloud server;

tasks that cannot be executed on the local Internet of Things device and are insensitive to delay are executed on the edge server.

10. The system of claim 8, wherein the score calculation unit is configured to calculate the execution score of each type of task and take the top N tasks ranked by execution score as a set, specifically comprising:

the probability that a task is suitable for execution on the edge server is p, so the probability that it is not suitable for execution on the edge server is 1 − p, and the ratio of the two is expressed as p/(1 − p);

the execution score is computed from this ratio by the scoring formula, in which a and b are constants.

Background

Internet of Things (IoT) devices generate large amounts of data during operation. If all of this data is offloaded to the cloud server, the cloud server may become overloaded, bandwidth consumption during transmission is high, and security problems may arise. At the same time, IoT devices are limited by their size, so their computing capability is weak and insufficient to support the computation of complex tasks. Mobile Edge Computing (MEC) has become a viable solution that supports the offloading of complex computing tasks or applications by providing computing resources to connected devices. MEC can effectively mitigate the problems of insufficient IoT-device computing capability, high delay, and the data security risks of offloading to the cloud server. However, privacy issues and conflicts of interest exist among multiple MEC participants (e.g., different IoT devices and edge/cloud servers), and establishing trust between these participants and implementing a federated multi-participant computation offloading scheme remains a challenge.

To cope with the dynamic MEC environment, machine-learning-based computation offloading has become a viable solution. In existing work on Deep Reinforcement Learning (DRL), the computation offloading policy depends heavily on the state space of the problem. Moreover, the computation offloading policy is usually very complex, and pure Q-learning is not suitable for solving the computation offloading optimization problem. While searching for the optimal offloading policy, the DRL agent learns the policy by taking actions, and the whole process is time-consuming and occupies a large amount of system resources. Privacy is also a key issue in machine learning, especially among the different mobile IoT device providers in MEC; integrating the data of multiple providers while protecting the private data of all participants is a great challenge. Because of these problems, Federated Learning (FL) for the industrial Internet of Things has attracted extensive attention in both academia and industry. As a new type of distributed machine learning, federated learning trains on each participant's data locally, and the global model is updated and aggregated through cloud/edge servers.

Existing solutions to the computation offloading problem fall into two categories: computation offloading schemes based on traditional heuristic algorithms, and online-learning computation offloading schemes based on machine learning. A challenge for heuristic schemes is that they rely on many assumptions; they work well in specific scenarios, but their portability and robustness are poor. In the MEC and 5G era, wireless communication environments and computing tasks have become more complex, and designing a computation offloading optimization algorithm that effectively improves system efficiency and meets system requirements is very challenging. Machine-learning-based computation offloading schemes can learn future trends from data and can therefore effectively derive offloading policies in complex systems. However, in highly dynamic real-time systems, intelligent offloading decisions and the protection of private data are particularly critical.

Disclosure of Invention

The invention mainly aims to overcome the defects in the prior art, and provides an intelligent offloading optimization method based on federated learning.

The invention adopts the following technical scheme:

an intelligent offloading optimization method based on federated learning comprises the following steps:

learning the characteristics of computing tasks by adopting a multi-layer perceptron model, classifying the tasks by type, and training by adopting a federated learning training model;

calculating an execution score for each type of task, and taking the top N tasks ranked by execution score as a set;

calculating the total service cost of the system; when the CPU frequency of the local device is less than a given proportion of the edge server frequency and the current task is not in the set, the task is executed locally; when the CPU frequency of the local device is less than the given proportion of the edge server frequency and the current task is in the set, the task is executed at the edge server; otherwise, the task is executed at the cloud server;

when the total service cost in time slot t is lower than that in the previous time slot t-1, updating the offloading policy x_i(t), where x_i(t) denotes the working state of computing node i in time slot t, and i ∈ {e, l, c} denotes the edge server, the local device and the cloud server respectively; x_i(t) = 1 denotes that device i executes the computing task, i.e., the computing task is offloaded to device i for execution; x_i(t) = 0 denotes that device i does not execute the computing task.

Specifically, a multi-layer perceptron model is adopted to learn the characteristics of computing tasks, the tasks are classified by type, and a federated learning training model is adopted for training; this specifically comprises the following steps:

initializing parameters: initializing, for each time slot t, the CPU frequencies of the local device and the edge server as well as the total CPU frequencies of the local devices and the edge servers;

initializing the model: after determining the structure of the multi-layer perceptron training model, initializing the model parameters, namely the local device model parameters, the edge server model parameters, the cloud server model parameters and the feature vector; the rectified linear unit (ReLU) is selected as the activation function;

hidden-layer processing and model testing: the output of the l-th hidden layer is h_l = ReLU(W_l h_{l-1} + b_l) (with h_0 taken as the input feature vector), where W_l is the weight and b_l is the bias;

testing the trained model with a loss function, where the per-sample loss is ℓ(y_n, ŷ_n), in which n indexes the samples, M is the total number of samples, y_n denotes the actual label and ŷ_n denotes the predicted output;

the cost function is the average of the loss function over all training samples and is expressed as:

C = (1/M) Σ_{n=1}^{M} ℓ(y_n, ŷ_n),

where n indexes the samples, M is the total number of samples, y_n denotes the actual label and ŷ_n denotes the predicted output.

Specifically, a federated learning training model is adopted for task training; this specifically comprises the following steps:

using the model parameters w as the information communicated between the Internet of Things devices, the edge servers and the cloud server, where w denotes the model parameters;

when each round of training starts, P local Internet of things devices are randomly selected to participate in the round of training, wherein P is a positive integer;

for the P devices, each device downloads model parameters from the edge server and initializes the model parameters, updates the parameters after training and carries out edge aggregation, namely all the devices upload the model parameters to the edge server;

and updating the parameters of the edge servers and carrying out global aggregation, namely uploading the model parameters of all the edge servers to a global cloud server and updating the parameters of the cloud server.

Specifically, classifying the task specifically includes:

tasks of low complexity that are sensitive to delay are executed on the local Internet of Things device;

tasks of high complexity that are sensitive to delay are executed on the edge server;

tasks of high complexity that are insensitive to delay are executed on the cloud server;

tasks that cannot be executed on the local Internet of Things device and are insensitive to delay are executed on the edge server.

Specifically, calculating the execution score of each type of task and taking the top N tasks ranked by execution score as a set specifically comprises the following steps:

the probability that a task is suitable for execution on the edge server is p, so the probability that it is not suitable for execution on the edge server is 1 − p, and the ratio of the two is expressed as p/(1 − p);

the execution score is computed from this ratio by the scoring formula, in which a and b are constants.

Another aspect of the embodiments of the present invention provides an intelligent offloading optimization system based on federated learning, including:

a task classification unit: configured to learn the characteristics of computing tasks by adopting a multi-layer perceptron model, classify the tasks by type, and train by adopting a federated learning training model;

a score calculation unit: configured to calculate an execution score for each type of task and take the top N tasks ranked by execution score as a set;

a cost calculation unit: configured to calculate the total service cost of the system; when the CPU frequency of the local device is less than a given proportion of the edge server frequency and the current task is not in the set, the task is executed locally; when the CPU frequency of the local device is less than the given proportion of the edge server frequency and the current task is in the set, the task is executed at the edge server; otherwise, the task is executed at the cloud server;

an offloading policy update unit: configured to update the offloading policy x_i(t) when the total service cost in time slot t is lower than that in the previous time slot t-1, where x_i(t) denotes the working state of computing node i in time slot t, and i ∈ {e, l, c} denotes the edge server, the local device and the cloud server respectively; x_i(t) = 1 denotes that device i executes the computing task, i.e., the computing task is offloaded to device i for execution; x_i(t) = 0 denotes that device i does not execute the computing task.

Specifically, the task classification unit adopts a multi-layer perceptron model to learn the characteristics of computing tasks, classifies the tasks by type, and trains with a federated learning training model; this specifically comprises the following steps:

initializing parameters: initializing, for each time slot t, the CPU frequencies of the local device and the edge server as well as the total CPU frequencies of the local devices and the edge servers;

initializing the model: after determining the structure of the multi-layer perceptron training model, initializing the model parameters, namely the local device model parameters, the edge server model parameters, the cloud server model parameters and the feature vector; the rectified linear unit (ReLU) is selected as the activation function;

hidden-layer processing and model testing: the output of the l-th hidden layer is h_l = ReLU(W_l h_{l-1} + b_l) (with h_0 taken as the input feature vector), where W_l is the weight and b_l is the bias;

testing the trained model with a loss function, where the per-sample loss is ℓ(y_n, ŷ_n), in which n indexes the samples, M is the total number of samples, y_n denotes the actual label and ŷ_n denotes the predicted output;

the cost function is the average of the loss function over all training samples and is expressed as:

C = (1/M) Σ_{n=1}^{M} ℓ(y_n, ŷ_n),

where n indexes the samples, M is the total number of samples, y_n denotes the actual label and ŷ_n denotes the predicted output.

Specifically, a federated learning training model is adopted for task training; this specifically comprises the following steps:

using the model parameters w as the information communicated between the Internet of Things devices, the edge servers and the cloud server, where w denotes the model parameters;

when each round of training starts, P local Internet of things devices are randomly selected to participate in the round of training, wherein P is a positive integer;

for the P devices, each device downloads model parameters from the edge server and initializes the model parameters, updates the parameters after training and carries out edge aggregation, namely all the devices upload the model parameters to the edge server;

and updating the parameters of the edge servers and carrying out global aggregation, namely uploading the model parameters of all the edge servers to a global cloud server and updating the parameters of the cloud server.

Specifically, classifying the task specifically includes:

tasks of low complexity that are sensitive to delay are executed on the local Internet of Things device;

tasks of high complexity that are sensitive to delay are executed on the edge server;

tasks of high complexity that are insensitive to delay are executed on the cloud server;

tasks that cannot be executed on the local Internet of Things device and are insensitive to delay are executed on the edge server.

Specifically, the score calculation unit calculates the execution score of each type of task and takes the top N tasks ranked by execution score as a set, specifically comprising the following steps:

the probability that a task is suitable for execution on the edge server is p, so the probability that it is not suitable for execution on the edge server is 1 − p, and the ratio of the two is expressed as p/(1 − p);

the execution score is computed from this ratio by the scoring formula, in which a and b are constants.

As can be seen from the above description of the present invention, compared with the prior art, the present invention has the following advantages:

(1) The invention provides an intelligent offloading optimization method based on federated learning. To optimize the computation offloading policy, a multi-layer perceptron model is used to learn task characteristics and the tasks are classified, so as to identify the tasks that are more suitable for offloading to the edge server or the cloud server; to protect the private data of different Internet of Things devices, federated learning is used to train the model, which avoids data leakage during transmission of the data to the server.

(2) In order to control the service cost of the whole system, different weight factors are set for the key factors in the offloading process, and these weight factors are set by the system administrator according to the actual application scenario.

Drawings

FIG. 1 is a schematic flow chart of an embodiment of the present invention.

Fig. 2 is a structural diagram of a system for computation offloading optimization through federated learning according to an embodiment of the present invention.

Detailed Description

The invention is further described below by means of specific embodiments.

By studying the computation offloading optimization problem in mobile edge computing, the invention provides an intelligent offloading optimization method based on federated learning that also protects users' private data. A multi-layer perceptron model is used to learn the characteristics of computing tasks (task size, task computational complexity, delay sensitivity, etc.), and the tasks of Internet of Things devices are divided into four types. Then, to address the problem that model training accesses the private data of Internet of Things devices, a federated learning framework based on edge computing is adopted to train the task-feature extraction model. On the system cost side, considering that different application scenarios have different requirements on delay, energy consumption and training time, a scheme is designed in which the system administrator can adaptively adjust the weights of the cost terms. The method controls the total service cost while protecting privacy and improves system performance.

FIG. 1 is a flowchart of an intelligent offload optimization method based on federated learning according to an embodiment of the present invention; the method specifically comprises the following steps:

s101: adopting a multilayer perceptron model to learn and calculate task characteristics and classify tasks, and adopting a federal learning training model;

specifically, a multilayer perceptron model is adopted to learn and calculate task characteristics and classify tasks, and a federal learning training model is adopted; the method specifically comprises the following steps:

initializing parameters: initializing, for each time slot t, the CPU frequencies of the local device and the edge server as well as the total CPU frequencies of the local devices and the edge servers;

initializing the model: after determining the structure of the multi-layer perceptron training model, initializing the model parameters, namely the local device model parameters, the edge server model parameters, the cloud server model parameters and the feature vector; the rectified linear unit (ReLU) is selected as the activation function;

hidden-layer processing and model testing: if the weight between the input layer and the first hidden layer is W_1 and the bias is b_1, the output of the first hidden layer is h_1 = ReLU(W_1 x + b_1), where x is the input feature vector; by analogy, the output of the l-th hidden layer is h_l = ReLU(W_l h_{l-1} + b_l). The trained model is tested with a loss function, where the per-sample loss is ℓ(y_n, ŷ_n), in which n indexes the samples, M is the total number of samples, y_n denotes the actual label and ŷ_n denotes the predicted output.

The cost function is the average of the loss function over all training samples and is expressed as:

C = (1/M) Σ_{n=1}^{M} ℓ(y_n, ŷ_n),

where n indexes the samples, M is the total number of samples, y_n denotes the actual label and ŷ_n denotes the predicted output.
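To make the forward pass and the averaged cost concrete, the following is a minimal NumPy sketch. The layer sizes, the sigmoid output layer, and the binary cross-entropy form of the per-sample loss are assumptions made for illustration, since the embodiment does not fix the exact loss.

```python
import numpy as np

def relu(z):
    # Rectified linear unit, the activation function used in the embodiment
    return np.maximum(0.0, z)

def mlp_forward(x, weights, biases):
    """Forward pass: h_l = ReLU(W_l h_{l-1} + b_l) for each hidden layer,
    followed by an (assumed) sigmoid output layer producing a prediction."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(W @ h + b)
    logits = weights[-1] @ h + biases[-1]
    return 1.0 / (1.0 + np.exp(-logits))   # assumed sigmoid output

def cost(y_true, y_pred, eps=1e-9):
    """Cost = average of the per-sample losses over all M training samples.
    The per-sample loss is assumed to be binary cross-entropy."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1.0 - eps)
    losses = -(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))
    return losses.mean()
```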

Federated learning based training: the model parameters w are used as the information communicated among the Internet of Things devices, the edge servers and the cloud server. At the start of each training round, P local Internet of Things devices are randomly selected to participate in that round. Each of the P devices downloads the model parameters from the edge server, initializes its local model with them, and updates the parameters through local training. After a given number of local training rounds, edge aggregation is performed, i.e., all devices upload their model parameters to the edge server and the edge server parameters are updated. After a further given number of edge aggregations, global aggregation is performed, i.e., all edge servers upload their model parameters to the global cloud server and the cloud server parameters are updated.
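A minimal sketch of this device → edge → cloud training loop is given below. The weighted-averaging (FedAvg-style) aggregation rule, the placeholder local update, and the function names are assumptions made for illustration; the embodiment only states that parameters are uploaded and updated.

```python
import numpy as np

def local_update(params, local_data, lr=0.01):
    # Placeholder for on-device training; a real implementation would run
    # several gradient steps on the device's private data.
    grads = np.zeros_like(params)          # hypothetical gradient
    return params - lr * grads

def aggregate(param_list, sample_counts):
    """Weighted average of parameter vectors (FedAvg-style, assumed here)."""
    w = np.asarray(sample_counts, dtype=float)
    w /= w.sum()
    return sum(wi * p for wi, p in zip(w, param_list))

def federated_round(cloud_params, edges, P, local_rounds=1):
    """One global round: each edge randomly selects P devices, the devices
    train locally, each edge aggregates its devices, the cloud aggregates
    the edges."""
    rng = np.random.default_rng()
    edge_models, edge_sizes = [], []
    for devices in edges:                   # devices: list of (data, n_samples)
        chosen = rng.choice(len(devices), size=min(P, len(devices)), replace=False)
        device_models, device_sizes = [], []
        for idx in chosen:
            data, n = devices[idx]
            params = cloud_params.copy()    # download the current model
            for _ in range(local_rounds):
                params = local_update(params, data)
            device_models.append(params)
            device_sizes.append(n)
        edge_models.append(aggregate(device_models, device_sizes))  # edge aggregation
        edge_sizes.append(sum(device_sizes))
    return aggregate(edge_models, edge_sizes)                       # global aggregation
```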

classifying the tasks, specifically comprising:

tasks of low complexity that are sensitive to delay are executed on the local Internet of Things device;

tasks of high complexity that are sensitive to delay are executed on the edge server;

tasks of high complexity that are insensitive to delay are executed on the cloud server;

tasks that cannot be executed on the local Internet of Things device and are insensitive to delay are executed on the edge server (a decision-rule sketch of these four types follows this list).
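As a sketch of the rule above, the four task types can be mapped to an execution location with a simple decision function. The boolean feature names and the fallback branch (for a low-complexity, delay-insensitive task, which the embodiment does not specify) are assumptions for illustration.

```python
def classify_task(complexity_high, delay_sensitive, locally_executable=True):
    """Map predicted task attributes to an execution location, following the
    four task types described in the embodiment."""
    if not locally_executable and not delay_sensitive:
        return "edge"    # type 4: cannot run locally, delay-insensitive -> edge server
    if delay_sensitive:
        # type 1: low complexity -> local IoT device; type 2: high complexity -> edge server
        return "edge" if complexity_high else "local"
    if complexity_high:
        return "cloud"   # type 3: high complexity, delay-insensitive -> cloud server
    return "local"       # unspecified case (low complexity, delay-insensitive): assumed local

# Example: a high-complexity, delay-insensitive task goes to the cloud server
assert classify_task(complexity_high=True, delay_sensitive=False) == "cloud"
```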

S102: calculating the execution scores of all classified tasks and taking the top N ranked tasks as a set;

calculating the execution scores of all classified tasks and taking the top N ranked tasks as a set specifically comprises the following steps:

the probability that a task is suitable for execution on the edge server is p, so the probability that it is not suitable for execution on the edge server is 1 − p, and the ratio of the two is expressed as p/(1 − p);

the execution score is computed from this ratio by the scoring formula, in which a and b are constants.
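The exact scoring formula is not reproduced in this text; it is only stated that the score is built from the ratio p/(1 − p) and two constants a and b. The log-odds form below is therefore an assumption, shown only to make the scoring and top-N selection concrete.

```python
import math

def execution_score(p, a=1.0, b=0.0, eps=1e-9):
    """Assumed score: an affine function of log(p / (1 - p)), where p is the
    probability that a task is suitable for edge execution."""
    p = min(max(p, eps), 1.0 - eps)
    return a * math.log(p / (1.0 - p)) + b

def top_n_tasks(task_probs, n):
    """Return the ids of the N highest-scoring tasks as the candidate set."""
    ranked = sorted(task_probs, key=lambda t: execution_score(task_probs[t]), reverse=True)
    return set(ranked[:n])

# Example: four tasks with edge-suitability probabilities, keep the top 2
candidate_set = top_n_tasks({"t1": 0.9, "t2": 0.4, "t3": 0.75, "t4": 0.1}, n=2)
# candidate_set == {"t1", "t3"}
```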

S103: calculating the total service cost of the system; when the CPU frequency of the local device is less than a given proportion of the edge server frequency and the current task is not in the set, the task is executed locally; when the CPU frequency of the local device is less than the given proportion of the edge server frequency and the current task is in the set, the task is executed at the edge server; otherwise, the task is executed at the cloud server;

specifically, calculating the total service cost of the system includes:

the total service cost of the system consists of the delay, the system energy consumption and the training time cost;

the delay includes the transmission delay, the computation delay and the waiting delay, where T^tr denotes the transmission delay, T^comp the computation delay, T^wait the waiting delay, T^train the training time, E the system energy consumption, E_max the maximum energy consumption limit of the device, and ω1, ω2, ω3 the respective weights;

the transmission delay T^tr is expressed in terms of the transmission delay of each Internet of Things device to the computing nodes, i.e., the edge server, the local device and the cloud server;

the computation delay T^comp is expressed in terms of f_i(t), the computing power of computing node i in time slot t, and c_i(t), the computation demand that determines the computation delay; x_i(t) denotes the working state of computing node i in time slot t: x_i(t) = 1 denotes that device i executes the computing task, i.e., the computing task is offloaded to device i for execution, and x_i(t) = 0 denotes that device i does not execute the computing task;

the waiting delay T^wait is expressed in terms of the waiting delay at the local device and at the edge server, where W_l(t) and W_e(t) respectively denote the waiting time of the tasks remaining in the queues of the local Internet of Things device and of the edge server, and d_i and f_i denote the task size and the CPU frequency of computing node i;

the energy consumption E of the system is expressed in terms of the computation energy consumption E^comp and the transmission energy consumption E^tr;

T^train denotes the training time.
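Because the embodiment's cost expressions are not reproduced above, the following is one plausible instantiation written with the symbols just introduced. The exact grouping of the weighted terms, the normalization of the energy by E_max, and the form of the computation delay are assumptions.

```latex
% One assumed form of the per-slot total service cost (not the patent's exact formula)
\[
C(t) = \omega_1 \bigl( T^{\mathrm{tr}}(t) + T^{\mathrm{comp}}(t) + T^{\mathrm{wait}}(t) \bigr)
     + \omega_2 \, \frac{E(t)}{E_{\max}}
     + \omega_3 \, T^{\mathrm{train}}(t),
\qquad
E(t) = E^{\mathrm{comp}}(t) + E^{\mathrm{tr}}(t),
\qquad
T^{\mathrm{comp}}(t) = \sum_{i \in \{l,\, e,\, c\}} x_i(t) \, \frac{c_i(t)}{f_i(t)} .
\]
```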

S104: when the total service cost in time slot t is lower than that in the previous time slot t-1, updating the offloading policy x_i(t), where x_i(t) denotes the working state of computing node i in time slot t, and i ∈ {e, l, c} denotes the edge server, the local device and the cloud server respectively; x_i(t) = 1 denotes that device i executes the computing task, i.e., the computing task is offloaded to device i for execution; x_i(t) = 0 denotes that device i does not execute the computing task.
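A sketch of the S103/S104 decision and update logic is given below. The proportion threshold alpha, the dictionary representation of the policy x_i(t), and the function names are assumptions made for illustration.

```python
def choose_location(f_local, f_edge, task_id, candidate_set, alpha=0.5):
    """S103: pick the execution location from the local and edge CPU frequencies,
    an assumed proportion threshold alpha, and the top-N candidate set."""
    if f_local < alpha * f_edge and task_id not in candidate_set:
        return "local"
    if f_local < alpha * f_edge and task_id in candidate_set:
        return "edge"
    return "cloud"

def update_policy(policy, task_id, location, cost_now, cost_prev):
    """S104: update the offloading policy x_i(t) only when the total service cost
    in the current time slot is lower than in the previous one."""
    if cost_now < cost_prev:
        # x_i(t) = 1 for the chosen node, 0 for the other nodes
        policy[task_id] = {node: int(node == location) for node in ("local", "edge", "cloud")}
    return policy

# Example usage with hypothetical values
policy = {}
loc = choose_location(f_local=1.0e9, f_edge=3.0e9, task_id="t1", candidate_set={"t1", "t3"})
policy = update_policy(policy, "t1", loc, cost_now=4.2, cost_prev=5.0)
```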

Referring to fig. 2, another embodiment of the present invention provides an intelligent offloading optimization system based on federated learning, including:

the task classification unit 201: adopting a multi-layer perceptron model to learn the characteristics of computing tasks and classify the tasks, and adopting a federated learning training model;

Specifically, the task classification unit 201 adopts a multi-layer perceptron model to learn the characteristics of computing tasks and classify the tasks, and adopts a federated learning training model; this specifically comprises the following steps:

initializing parameters: initializing, for each time slot t, the CPU frequencies of the local device and the edge server as well as the total CPU frequencies of the local devices and the edge servers;

initializing the model: after determining the structure of the multi-layer perceptron training model, initializing the model parameters, namely the local device model parameters, the edge server model parameters, the cloud server model parameters and the feature vector; the rectified linear unit (ReLU) is selected as the activation function;

hidden-layer processing and model testing: if the weight between the input layer and the first hidden layer is W_1 and the bias is b_1, the output of the first hidden layer is h_1 = ReLU(W_1 x + b_1), where x is the input feature vector; by analogy, the output of the l-th hidden layer is h_l = ReLU(W_l h_{l-1} + b_l). The trained model is tested with a loss function, where the per-sample loss is ℓ(y_n, ŷ_n), in which n indexes the samples, M is the total number of samples, y_n denotes the actual label and ŷ_n denotes the predicted output.

The cost function is the average of the loss function over all training samples and is expressed as:

C = (1/M) Σ_{n=1}^{M} ℓ(y_n, ŷ_n),

where n indexes the samples, M is the total number of samples, y_n denotes the actual label and ŷ_n denotes the predicted output.

Federated learning based training: the model parameters w are used as the information communicated among the Internet of Things devices, the edge servers and the cloud server. At the start of each training round, P local Internet of Things devices are randomly selected to participate in that round. Each of the P devices downloads the model parameters from the edge server, initializes its local model with them, and updates the parameters through local training. After a given number of local training rounds, edge aggregation is performed, i.e., all devices upload their model parameters to the edge server and the edge server parameters are updated. After a further given number of edge aggregations, global aggregation is performed, i.e., all edge servers upload their model parameters to the global cloud server and the cloud server parameters are updated.

specifically, classifying the task specifically includes:

tasks of low complexity that are sensitive to delay are executed on the local Internet of Things device;

tasks of high complexity that are sensitive to delay are executed on the edge server;

tasks of high complexity that are insensitive to delay are executed on the cloud server;

tasks that cannot be executed on the local Internet of Things device and are insensitive to delay are executed on the edge server.

The score calculation unit 202: calculating the execution scores of all classified tasks and taking the top N ranked tasks as a set.

Specifically, the score calculation unit 202 calculates the execution scores of all classified tasks and takes the top N ranked tasks as a set, as described for S102 above.

The cost calculation unit 203: calculating the total service cost of the system; when the CPU frequency of the local device is less than a given proportion of the edge server frequency and the current task is not in the set, the task is executed locally; when the CPU frequency of the local device is less than the given proportion of the edge server frequency and the current task is in the set, the task is executed at the edge server; otherwise, the task is executed at the cloud server;

Specifically, the cost calculation unit calculates the total service cost of the system, which specifically includes:

the total service cost of the system consists of the delay, the system energy consumption and the training time cost;

the delay includes the transmission delay, the computation delay and the waiting delay, where T^tr denotes the transmission delay, T^comp the computation delay, T^wait the waiting delay, T^train the training time, E the system energy consumption, E_max the maximum energy consumption limit of the device, and ω1, ω2, ω3 the respective weights;

the transmission delay T^tr is expressed in terms of the transmission delay of each Internet of Things device to the computing nodes, i.e., the edge server, the local device and the cloud server;

the computation delay T^comp is expressed in terms of f_i(t), the computing power of computing node i in time slot t, and c_i(t), the computation demand that determines the computation delay; x_i(t) denotes the working state of computing node i in time slot t: x_i(t) = 1 denotes that device i executes the computing task, i.e., the computing task is offloaded to device i for execution, and x_i(t) = 0 denotes that device i does not execute the computing task;

the waiting delay T^wait is expressed in terms of the waiting delay at the local device and at the edge server, where W_l(t) and W_e(t) respectively denote the waiting time of the tasks remaining in the queues of the local Internet of Things device and of the edge server, and d_i and f_i denote the task size and the CPU frequency of computing node i;

the energy consumption E of the system is expressed in terms of the computation energy consumption E^comp and the transmission energy consumption E^tr;

T^train denotes the training time.

The offloading policy update unit 204: when the total service cost in time slot t is lower than that in the previous time slot t-1, updating the offloading policy x_i(t), where x_i(t) denotes the working state of computing node i in time slot t, and i ∈ {e, l, c} denotes the edge server, the local device and the cloud server respectively; x_i(t) = 1 denotes that device i executes the computing task, i.e., the computing task is offloaded to device i for execution; x_i(t) = 0 denotes that device i does not execute the computing task.

The invention provides an intelligent offloading optimization method based on federated learning. To optimize the computation offloading policy, a multi-layer perceptron model is used to learn task characteristics and the tasks are classified, so as to identify the tasks that are more suitable for offloading to the edge server or the cloud server; to protect the private data of different Internet of Things devices, federated learning is used to train the model, which avoids data leakage during transmission of the data to the server.

The above description is only an embodiment of the present invention, but the design concept of the present invention is not limited thereto; any insubstantial modification made using this design concept shall constitute an infringement of the protection scope of the present invention.
