Cross-sample federal classification modeling method and device, storage medium and electronic equipment
2. A cross-sample federal classification modeling method based on neural networks and knowledge distillation, the method comprising:
obtaining label standard information formulated for a federal classification modeling task, and performing structure customization and parameter initialization processing on a local neural network model of the federal classification modeling task according to the label standard information and local training data;
training the local neural network model to obtain a prediction label vector of the local training data, and performing knowledge distillation processing on the prediction label vector to obtain a soft label vector of each local category;
acquiring federal modeling parameters corresponding to the federal classification modeling task, and sending the soft label vector of each local category to a coordinator according to the federal modeling parameters;
and receiving a federal label vector returned by the coordinator according to the soft label vector, and continuing to train the local neural network model according to the federal label vector and the local training data to complete training and obtain a federal classification modeling model.
2. The method of claim 1, wherein training the local neural network model to obtain the predicted label vector of the local training data comprises:
acquiring original training data for training the federal classification modeling task, and performing label alignment processing and data filtering processing on the original training data to obtain local target training data;
and training the local neural network model by using the local target training data to obtain a prediction label vector of the local target training data.
3. The method of claim 2, wherein training the local neural network model with the local target training data to obtain the predictive label vector of the local target training data comprises:
performing data division on the local target training data to obtain a local training data set, and performing data division on the local training data set to obtain a plurality of groups of data to be trained;
and performing iterative training on the local neural network model by using the multiple groups of data to be trained to obtain a predicted label vector of the local target training data.
4. The cross-sample federal classification modeling method based on neural networks and knowledge distillation as claimed in claim 1, wherein performing knowledge distillation processing on the prediction label vector to obtain the soft label vector of each local category comprises:
acquiring temperature parameters related to knowledge distillation, and performing knowledge distillation calculation on the prediction label vector of the local training data and the temperature parameters to obtain a distillation vector of the prediction label of the local training data;
and carrying out average calculation on the distillation vectors of the prediction labels of the local training data of the same category to obtain the soft label vector of each local category.
5. The method of claim 1, wherein the federal modeling parameters comprise a number of federal training rounds, and continuing to train the local neural network model according to the federal label vector and the local training data to complete training and obtain a federal classification modeling model comprises:
obtaining label data corresponding to the local training data, and performing loss calculation on the predicted label vector and the label data to obtain a first loss value;
performing loss calculation on the predicted label vector and the federal label vector to obtain a second loss value, and updating the local neural network model according to the first loss value and the second loss value;
and continuing to train the updated local neural network model until the number of training times of the neural network model reaches the number of federal training rounds, to obtain a trained federal classification modeling model.
6. The cross-sample federal classification modeling method based on neural networks and knowledge distillation as claimed in claim 1, wherein the federal modeling parameters include a communication frequency condition;
the step of sending the soft label vector of each local category to a coordinator according to the federal modeling parameters comprises:
when the training process of the local neural network model meets the communication frequency condition, sending the soft label vector of each local category to the coordinator.
7. The cross-sample federal classification modeling method based on neural networks and knowledge distillation as claimed in any one of claims 1-6, wherein performing structure customization and parameter initialization processing on the local neural network model according to the label standard information and local training data comprises:
determining standard structure information of the local neural network model, and performing structure customization on the local neural network model according to the standard structure information to obtain network structure information of the local neural network model;
and performing parameter initialization processing on the local neural network model.
8. A cross-sample federal classification modeling device based on neural networks and knowledge distillation, comprising:
the model definition module is configured to acquire label standard information formulated for a federal classification modeling task, and perform structure customization and parameter initialization processing on a local neural network model of the federal classification modeling task according to the label standard information and local training data;
the model training module is configured to train the local neural network model to obtain a prediction label vector of the local training data, and perform knowledge distillation processing on the prediction label vector to obtain a soft label vector of each local category;
the vector sending module is configured to obtain the federal modeling parameters corresponding to the federal classification modeling task and send the soft label vector of each local category to a coordinator according to the federal modeling parameters;
and the training completion module is configured to receive the federal label vector returned by the coordinator according to the soft label vector, and continue to train the local neural network model according to the federal label vector and the local training data so as to complete training and obtain a federal classification modeling model.
9. A computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the neural network and knowledge distillation based cross-sample federal classification modeling method of any of claims 1-7.
10. An electronic device, comprising:
a processor;
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the neural network and knowledge distillation based cross-sample federal classification modeling method of any of claims 1-7 via execution of the executable instructions.
Background
With the deepening of related research and the development of computing devices, artificial neural networks have been widely applied in the field of artificial intelligence. To ensure that a trained artificial neural network performs well, a large amount of training data is usually required.
However, in some scenarios the training data is scattered across different organizations or institutions, and data privacy concerns make it impossible to meet the training requirements of the artificial neural network by sharing data. Even if the artificial neural network could be trained on shared data, the requirements on communication and transmission complexity of the training data would be difficult to meet, and the training effect of the artificial neural network could not be guaranteed.
In view of the above, the invention provides a cross-sample federal classification modeling method and device based on a neural network and knowledge distillation.
Disclosure of Invention
The invention aims to provide a cross-sample federal classification modeling method and device based on a neural network and knowledge distillation, a computer readable storage medium and electronic equipment, which enable an artificial neural network to be trained jointly while ensuring the data privacy of all federal participants, and at the same time guarantee the training effect of the artificial neural network while meeting the requirements on communication and transmission complexity of training data.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of embodiments of the present invention, there is provided a method of cross-sample federal classification modeling based on neural networks and knowledge distillation, the method comprising:
obtaining label standard information formulated for a federal classification modeling task, and performing structure customization and parameter initialization processing on a local neural network model of the federal classification modeling task according to the label standard information and local training data;
training the local neural network model to obtain a prediction label vector of the local training data, and performing knowledge distillation processing on the prediction label vector to obtain a soft label vector of each local category;
acquiring federal modeling parameters corresponding to the federal classification modeling task, and sending the soft label vector of each local category to a coordinator according to the federal modeling parameters;
and receiving a federal label vector returned by the coordinator according to the soft label vector, and continuing to train the local neural network model according to the federal label vector and the local training data to complete training and obtain a federal classification modeling model.
In one exemplary embodiment of the present invention,
the training the local neural network model to obtain the predictive label vector of the local training data includes:
acquiring original training data for training the federal classification modeling task, and performing label alignment processing and data filtering processing on the original training data to obtain local target training data;
and training the local neural network model by using the local target training data to obtain a prediction label vector of the local target training data.
In an exemplary embodiment of the present invention, the training the local neural network model by using the local target training data to obtain a predictive label vector of the local target training data includes:
performing data division on the local target training data to obtain a local training data set, and performing data division on the local training data set to obtain a plurality of groups of data to be trained;
and performing iterative training on the local neural network model by using the multiple groups of data to be trained to obtain a predicted label vector of the local target training data.
In one exemplary embodiment of the present invention,
the knowledge distillation processing is carried out on the prediction label vector to obtain the local soft label vector of each category, and the method comprises the following steps:
acquiring temperature parameters related to knowledge distillation, and performing knowledge distillation calculation on the prediction label vector of the local training data and the temperature parameters to obtain a distillation vector of the prediction label of the local training data;
and carrying out average calculation on the distillation vectors of the prediction labels of the local training data of the same category to obtain the soft label vector of each local category.
In one exemplary embodiment of the present invention,
the federal modeling parameters comprise federal training rounds, and the local neural network model is continuously trained according to the federal label vector and the local training data to finish training to obtain a federal classification modeling model, which comprises the following steps:
obtaining label data corresponding to the local training data, and performing loss calculation on the predicted label vector and the label data to obtain a first loss value;
performing loss calculation on the predicted label vector and the federal label vector to obtain a second loss value, and updating the local neural network model according to the first loss value and the second loss value;
and continuing to train the updated local neural network model until the number of training times of the neural network model reaches the number of federal training rounds, to obtain a trained federal classification modeling model.
In one exemplary embodiment of the present invention,
the federal modeling parameters include communication frequency conditions;
the step of sending the local soft label vector of each category to a coordinator according to the federal modeling parameters comprises the following steps:
and when the training process of the local neural network model meets the communication frequency condition, sending the local soft label vector of each category to a coordinator.
In one exemplary embodiment of the present invention,
the performing structure customization processing and parameter initialization processing on the local neural network model according to the label standard information and the local training data comprises:
determining standard structure information of the local neural network model, and performing structure customization on the local neural network model according to the standard structure information to obtain network structure information of the local neural network model;
and performing parameter initialization processing on the local neural network model.
According to a second aspect of an embodiment of the present invention, there is provided a federal classification modeling apparatus based on neural networks and knowledge distillation, the apparatus including:
the model definition module is configured to acquire label standard information formulated for a federal classification modeling task, and perform structure customization and parameter initialization processing on a local neural network model of the federal classification modeling task according to the label standard information and local training data;
the model training module is configured to train the local neural network model to obtain a prediction label vector of the local training data, and perform knowledge distillation processing on the prediction label vector to obtain a soft label vector of each local category;
the vector sending module is configured to obtain the federal modeling parameters corresponding to the federal classification modeling task and send the soft label vector of each local category to a coordinator according to the federal modeling parameters;
and the training completion module is configured to receive the federal label vector returned by the coordinator according to the soft label vector, and continue to train the local neural network model according to the federal label vector and the local training data so as to complete training and obtain a federal classification modeling model.
According to a third aspect of embodiments of the present invention, there is provided an electronic apparatus including: a processor and a memory; wherein the memory has stored thereon computer readable instructions that, when executed by the processor, implement a method for cross-sample federal classification modeling based on neural networks and knowledge distillation in any of the exemplary embodiments described above.
According to a fourth aspect of an embodiment of the present invention, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method for cross-sample federal classification modeling based on neural networks and knowledge distillation in any of the exemplary embodiments described above.
According to the technical scheme, the method, the device, the computer storage medium and the electronic equipment for cross-sample federal classification modeling based on the neural network and knowledge distillation in the exemplary embodiments of the disclosure have at least the following advantages and positive effects:
In the method and the device provided by the exemplary embodiments of the disclosure, the local neural network model is subjected to structure customization and parameter initialization processing, which effectively prevents over-fitting or under-fitting of the local neural network model. Furthermore, while the local neural network model is trained locally, knowledge distillation processing is performed on the prediction label vectors, so the model learns the classification capability produced by the training data of the other participants' neural network models, and a better training effect is achieved while the required computing resources are greatly reduced. In addition, during training only the soft label vectors and the federal label vectors need to be transmitted to and from the coordinator, which reduces the communication and transmission costs. In this way, the artificial neural network is trained with shared knowledge, and its training effect is guaranteed while the requirements on communication and transmission complexity of the training data are met.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
FIG. 1 schematically illustrates a flow diagram of a cross-sample federal classification modeling method based on neural networks and knowledge distillation in an exemplary embodiment of the disclosure;
FIG. 2 schematically illustrates a node schematic of a neural network model in an exemplary embodiment of the disclosure;
FIG. 3 schematically illustrates a node schematic of a complex-structured neural network model in an exemplary embodiment of the present disclosure;
FIG. 4 schematically illustrates a structural schematic of a convolutional neural network in an exemplary embodiment of the present disclosure;
FIG. 5 is a schematic flow chart illustrating a method for performing structure customization and parameter initialization processing by the participants of the federated classification modeling task in an exemplary embodiment of the present disclosure;
FIG. 6 schematically illustrates a flow chart of a method of training a local neural network model in an exemplary embodiment of the present disclosure;
FIG. 7 schematically illustrates a flow chart of a method of training a local neural network model using target training data in an exemplary embodiment of the present disclosure;
FIG. 8 schematically illustrates a flow diagram of a method of knowledge distillation processing in an exemplary embodiment of the disclosure;
FIG. 9 is a schematic flow chart diagram illustrating a method for continued training of a local neural network model by a participant of each federated modeling task in an exemplary embodiment of the present disclosure;
FIG. 10 is a schematic flow diagram illustrating a neural network and knowledge distillation based cross-sample federal classification modeling method in an application scenario in an exemplary embodiment of the present disclosure;
fig. 11 is a schematic diagram illustrating a network structure of a participant a in an application scenario in an exemplary embodiment of the present disclosure;
fig. 12 is a schematic structural diagram schematically illustrating a network structure of a participant B in an application scenario in an exemplary embodiment of the present disclosure;
fig. 13 is a schematic structural diagram schematically illustrating a network structure of a participant C under an application scenario in an exemplary embodiment of the present disclosure;
FIG. 14 schematically illustrates a structural schematic diagram of a cross-sample federal classification modeling apparatus based on neural networks and knowledge distillation in an exemplary embodiment of the disclosure;
FIG. 15 schematically illustrates a structural schematic diagram of an electronic device for implementing cross-sample federal classification modeling based on neural networks and knowledge distillation in an exemplary embodiment of the disclosure;
FIG. 16 schematically illustrates a structural schematic diagram of a computer-readable storage medium for implementing cross-sample federal classification modeling based on neural networks and knowledge distillation in exemplary embodiments of the disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
The terms "a," "an," "the," and "said" are used in this specification to denote the presence of one or more elements/components/parts/etc.; the terms "comprising" and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. other than the listed elements/components/etc.; the terms "first" and "second", etc. are used merely as labels, and are not limiting on the number of their objects.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities.
In order to solve the problems in the related art, the disclosure provides a cross-sample federal classification modeling method based on a neural network and knowledge distillation. Fig. 1 shows a flow chart of the cross-sample federal classification modeling method based on neural networks and knowledge distillation, and as shown in fig. 1, the method at least comprises the following steps:
and S110, obtaining label standard information to be formulated by the Federal classification modeling task, and performing structure self-definition and parameter initialization processing on a local neural network model of the Federal classification modeling task according to the label standard information and local training data.
And S120, training the local neural network model to obtain a prediction label vector of local training data, and performing knowledge distillation processing on the prediction label vector to obtain a soft label vector of each local category.
And S130, acquiring federal modeling parameters corresponding to the federal classification modeling task, and sending the soft label vector of each local category to a coordinator according to the federal modeling parameters.
And S140, receiving the federal label vector returned by the coordinator according to the soft label vector, and continuously training the local neural network model according to the federal label vector and local training data to finish training to obtain the federal classification modeling model.
And repeating the iteration steps S120-S140 to finish training to obtain the federal classification modeling model.
In the exemplary embodiment of the disclosure, each participant can first perform structure customization and parameter initialization processing on its own neural network model, which effectively prevents over-fitting and under-fitting of the model. Furthermore, the data transmission and communication between a participant and the coordinator comprise only the soft label vectors and the federal label vectors, which greatly reduces communication and transmission costs; no extra encryption and decryption calculation is required for the communication content, which speeds up the training and communication of the neural network model while safeguarding the data security of the participants. In addition, the communication content does not need to be stored centrally, which reduces the demand for storage resources. The neural network model is then trained continuously according to the federal label vector, with the other participants serving as teacher networks for each participant, so that even a participant with little local training data or a simple network structure can learn the classification capability of the teacher networks, and the resulting neural network model performs better than one trained independently with local data alone. Finally, each participant only needs to train locally, which reduces the amount of computing resources required.
The following describes each step of the training method of the neural network model in detail.
In step S110, label standard information formulated for the federal classification modeling task is obtained, and structure customization and parameter initialization processing are performed on the local neural network model of the federal classification modeling task according to the label standard information and the local training data.
In an exemplary embodiment of the present disclosure, an Artificial Neural Network, which may be referred to simply as a neural network, is a mathematical or computational model that mimics the structure and function of a biological neural network. A neural network is formed by interconnecting nodes, where each node mimics a biological neuron, receives one or more inputs, and produces one output.
Fig. 2 shows a schematic node diagram of a neural network model which, as shown in fig. 2, consists of a node with 3 inputs and 1 output.
Furthermore, a plurality of nodes can form complex neural networks with different structures depending on how they are connected. Fig. 3 shows a node schematic diagram of a neural network model with a complex structure which, as shown in fig. 3, is composed of 10 nodes arranged in an input layer with 4 inputs, 2 intermediate layers and 1 output layer.
With the deepening of deep learning research and the development of computing devices, the Convolutional Neural Network is widely applied in computer vision and other fields. A convolutional neural network is a neural network that contains convolutional computation and has a deep structure.
A typical convolutional neural network contains convolutional layers, pooling layers, and fully-connected layers. The convolutional layer is generally responsible for extracting local features in an image, the pooling layer is used for reducing parameter magnitude, and the fully-connected layer is similar to a neural network layer and outputs a desired result.
Fig. 4 shows a schematic structural diagram of a convolutional neural network, and as shown in fig. 4, the neural network model includes 1 convolutional layer, 1 pooling layer, and 2 fully-connected layers.
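For illustration only, a structure of the kind shown in fig. 4 could be written as follows. This is a minimal sketch assuming the PyTorch library, 3-channel 32x32 input images and 3 output classes; none of these choices is prescribed by the present disclosure.

    import torch
    import torch.nn as nn

    class SimpleCNN(nn.Module):
        """Sketch of the FIG. 4 structure: 1 convolutional layer, 1 pooling layer, 2 fully-connected layers."""
        def __init__(self, num_classes=3):  # num_classes follows the label standard information
            super().__init__()
            self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # convolutional layer: extracts local features
            self.pool = nn.MaxPool2d(2)                             # pooling layer: reduces the parameter magnitude
            self.fc1 = nn.Linear(16 * 16 * 16, 64)                  # fully-connected layers produce the output
            self.fc2 = nn.Linear(64, num_classes)

        def forward(self, x):                                       # x: [batch, 3, 32, 32]
            x = self.pool(torch.relu(self.conv(x)))
            x = x.flatten(start_dim=1)
            x = torch.relu(self.fc1(x))
            return self.fc2(x)                                      # logits; a Softmax layer turns these into class probabilities

The final layer has num_classes outputs, so that a Softmax layer can yield one probability per classification category.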
The label standard formulated for the federal classification modeling task may be established by the initiator of the federal classification modeling task. In the federal classification modeling task, the participants are all the data providers that have requested to join the task; the initiator is one of the participants, and only one initiator exists in one federal classification modeling task; the coordinator is negotiated and confirmed by all the participants before the federal classification modeling task is initiated, and can be one of the participants or an organization or institution other than the participants, and only one coordinator exists in one federal classification modeling task.
Therefore, the label standard information is information of the label standard formulated by the initiator. The label standard information defines the classification categories and the uniform label of each category in the federal classification modeling task. The initiator can send a federal modeling request to the other participants of the federal classification modeling task and send the label standard information to the participants that agree to join, so that all the participants that agree to take part in the federal classification modeling task perform labeling and training according to the label standard formulated by the initiator.
In an alternative embodiment, fig. 5 is a flowchart illustrating a method for performing structure customization and parameter initialization processing by a participant of the federal classification modeling task, and as shown in fig. 5, the method at least includes the following steps: in step S510, standard structure information of the neural network model is determined, and the local neural network model is structurally customized according to the standard structure information to obtain the network structure information of the local neural network model. Each participant can customize the structure of its local neural network model according to the amount of its local data and the label standard information.
Structure customization of the local neural network defines the number of nodes in each layer, the number of layers, and the like according to the standard structure information, provided that the necessary neural network model structure is included; the constraint to be followed during the definition is that the number of nodes in the final output layer must equal the number of classification categories defined in the label standard information.
When a participant has more local data and the label standard information defines more classification categories, a relatively complex neural network model can be defined to obtain a neural network model with higher precision; when a participant has less local data, a simple neural network model can be defined to prevent the model from over-fitting.
For example, the network structure information of the neural network model obtained by structure customization may be that the neural network model includes 2 convolutional layers, 2 pooling layers, and 2 fully-connected layers.
In step S520, a parameter initialization process is performed on the local neural network model.
After the network structure information is obtained, parameter initialization processing can be performed on the network structure. The parameter initialization processing initializes the weights of each layer of the local neural network model, and each participant can select an initialization manner according to its own neural network model.
In the exemplary embodiment, through the structure customization processing and the parameter initialization processing, each participant can configure its model autonomously according to its own conditions, which prevents over-fitting or under-fitting of the model, allows a neural network model with higher precision to be defined, and guarantees the subsequent training effect.
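As an illustrative sketch of the parameter initialization processing, the weights of each layer of a customized model might be initialized as follows. PyTorch is assumed, and Xavier initialization is only one example of a scheme a participant might select.

    import torch.nn as nn

    def init_parameters(model: nn.Module) -> None:
        """Initialize the weights of every layer; each participant may pick its own scheme."""
        for layer in model.modules():
            if isinstance(layer, (nn.Conv2d, nn.Linear)):
                nn.init.xavier_uniform_(layer.weight)  # example scheme only
                if layer.bias is not None:
                    nn.init.zeros_(layer.bias)

    # usage (hypothetical): model = SimpleCNN(num_classes=3); init_parameters(model)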
In step S120, the local neural network model is trained to obtain a predicted label vector of the local training data, and knowledge distillation processing is performed on the predicted label vector to obtain a soft label vector of each local category.
In an exemplary embodiment of the present disclosure, the local neural network model of each participant of the federal classification modeling task is trained primarily by computing the error between the network output and the real labels of the input data and back-propagating the error to iteratively update the parameters within the network, so that the network output is as close as possible to the real labels of the input data. In the prediction process, when unlabeled data is input into the trained network structure, the neural network outputs the predicted label values of the data.
In an alternative embodiment, fig. 6 shows a flow chart of a method for training a local neural network model, as shown in fig. 6, the method at least includes the following steps: in step S610, original training data for training the federal classification modeling task is obtained, and label alignment processing and data filtering processing are performed on the original training data to obtain local target training data.
The raw training data is data that is stored locally at each participant and that is consistent with this federal classification modeling task. In order to avoid processing irrelevant data, the original training data may be subjected to label alignment processing and data filtering processing.
Specifically, a participant may label its original training data according to the current label standard, with the labeling manner specified by that label standard.
In addition, the raw data may include data of other categories that are not involved in this federal classification modeling task. When the original training data provided by a participant contains training data outside the label standard, such data can be deleted from that participant's original data to obtain the target training data, i.e., meaningless training data is filtered out. When none of a participant's original training data falls into the categories defined by the label standard, that participant is an invalid participant of this federal classification modeling task and needs to quit the task, so as to prevent invalid participants from taking part in this federal classification modeling task.
In step S620, the local neural network model is trained using the local target training data, so as to obtain a prediction label vector of the local training data.
In an alternative embodiment, fig. 7 is a flow chart illustrating a method for training a local neural network model using target training data, as shown in fig. 7, the method at least includes the following steps: in step S710, data division is performed on the local target training data to obtain a local training data set, and data division is performed on the local training data set to obtain a plurality of groups of data to be trained.
Each participant can perform data division on owned target training data to obtain two parts, namely a training data set and a verification data set. For example, the target training data may be divided into a training data set and a validation data set on a 4:1 scale.
Furthermore, the training data set can be further divided to obtain a plurality of groups of data to be trained. Specifically, the whole training set is divided into several batches, and the batch size (the number of samples in each batch) can be set by each participant.
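A minimal sketch of the 4:1 data division and batch division described above is given below; the PyTorch utilities and the default batch size of 32 are illustrative assumptions rather than requirements of the present disclosure.

    import torch
    from torch.utils.data import TensorDataset, DataLoader, random_split

    def make_loaders(features: torch.Tensor, labels: torch.Tensor, batch_size: int = 32):
        """Split the local target training data 4:1 into training and validation sets,
        then divide the training set into batches (groups of data to be trained)."""
        dataset = TensorDataset(features, labels)
        n_train = int(len(dataset) * 0.8)  # 4:1 ratio
        train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])
        train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
        val_loader = DataLoader(val_set, batch_size=batch_size)
        return train_loader, val_loader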
In step S720, iterative training is performed on the local neural network model by using multiple sets of data to be trained, so as to obtain a predicted label vector of the local target training data.
After obtaining a plurality of sets of data to be trained, the data to be trained can be input into the neural network model, and a prediction label vector of the corresponding data is obtained.
In the exemplary embodiment, the training of the neural network model can be realized through the target training data, the data of the participators participate in the model training locally, information such as data and labels does not need to be leaked to any other organization or mechanism, and the data security and privacy information of each participator is guaranteed.
After the predicted label vector of the local training data is obtained, knowledge distillation treatment can be carried out on the predicted label vector to obtain a local soft label vector.
In the present exemplary embodiment, fig. 8 shows a schematic flow diagram of a method of knowledge distillation processing, which, as shown in fig. 8, comprises at least the following steps: in step S810, a temperature parameter related to knowledge distillation is obtained, and knowledge distillation calculation is performed on the prediction label vector of the local training data and the temperature parameter to obtain a distillation vector of the prediction label of the local training data.
The temperature parameter is a parameter for subsequent knowledge distillation calculation and can be set by self.
Knowledge distillation refers to using the soft labels (soft targets) output by a Teacher Network as part of the error calculation of a Student Network to guide the training of the student network, so that a student network with a simple, small structure can approach the capability of a complex, large teacher network and achieve the same or comparable results, thereby realizing knowledge transfer and compression of the complex model.
The soft label comprises probability data obtained after the teacher network processes the input sample data, and is prediction data obtained by the teacher network according to the sample data. In knowledge distillation, the prediction data can be used for training of student networks, and the nature of the prediction data is close to that of the label data, but different from that of the real label, so that the prediction data is called a soft label.
In a neural network structure, a Softmax layer is usually used to make the output result be expressed in the form of probability, for example, the output vector of the neural network is [3,0, -3], and each element in the output vector is substituted into a Softmax calculation formula:
q_i = exp(Z_i) / Σ_j exp(Z_j)    (1)
the conversion value obtained by the calculation using the formula (1) is about [0.95,0.0476,0.0024], i.e., the probability that the vector belongs to the first class, the second class, and the third class is 0.95,0.0476, and 0.0024, respectively.
Further, in the case of setting the temperature parameter, the knowledge distillation calculation of the output vector and the temperature parameter may be performed according to equation (2):
q_i = exp(Z_i / T) / Σ_j exp(Z_j / T)    (2)
wherein T is the temperature parameter and q_i is the sample distillation vector. The sample distillation vector has a higher entropy, provides more information per sample and a smaller gradient variance, and is therefore easier to learn from than the raw teacher output and gives higher learning efficiency.
Because the values of the sample distillation vector lie between 0 and 1, a larger value of T produces a more even distribution over that range, i.e., a softer sample distillation vector, and the softer the vector, the better the teacher network can guide the student network. This is because a perturbation is introduced into the transfer learning process, which makes the student network's learning more effective and its generalization capability stronger, and it is a strategy for suppressing over-fitting.
For example, when T is 3, the transformed sample distillation vector is about [0.663, 0.246, 0.091], which is a "softer" probability distribution than [0.95, 0.0476, 0.0024]. Therefore, during student network training, the teacher network can use a higher temperature T to output a softer distribution, and the output of the student model is made to approximate it, so that the knowledge in the teacher network is extracted; this is what is called knowledge distillation.
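The calculations of formulas (1) and (2) can be reproduced with a few lines of Python; this sketch is for illustration only, and the printed values match the worked example above up to rounding.

    import numpy as np

    def distillation_softmax(logits, T=1.0):
        """Formula (2): q_i = exp(Z_i / T) / sum_j exp(Z_j / T); T = 1 reduces to formula (1)."""
        z = np.asarray(logits, dtype=float) / T
        e = np.exp(z - z.max())  # subtract the max for numerical stability
        return e / e.sum()

    print(distillation_softmax([3, 0, -3]))       # ~[0.95, 0.048, 0.002]
    print(distillation_softmax([3, 0, -3], T=3))  # ~[0.66, 0.25, 0.09], the "softer" distribution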
In step S820, an average value of the distillation vectors of the prediction labels of the local training data of the same category is calculated, so as to obtain a local soft label vector of each category.
After the sample distillation vectors are obtained, the distillation vectors of the sample prediction labels of the target training data having the same real label category can be averaged to obtain the soft label vector of that category. The average calculation may be an ordinary arithmetic mean, a weighted average, or another calculation chosen according to the actual situation, which is not particularly limited in this exemplary embodiment.
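A sketch of the per-category averaging is given below, using a simple arithmetic mean (a weighted average would be handled analogously); the variable names are chosen here for illustration.

    import numpy as np

    def category_soft_labels(distill_vectors: np.ndarray, true_labels: np.ndarray) -> dict:
        """Average the distillation vectors of all local samples sharing the same real label,
        yielding one soft label vector per local category."""
        return {int(c): distill_vectors[true_labels == c].mean(axis=0)
                for c in np.unique(true_labels)}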
In the exemplary embodiment, by means of the knowledge distillation calculation, each participant is regarded as a teacher network for the other participants and guides their model training in the subsequent training process. Even a participant with little training data or a simple network structure can learn the classification capability of the other participants' teacher networks, so its model performs better than one trained with the current participant's data alone. In addition, when a participant's training data does not cover all categories, for example when a certain category of data is missing, the participant's model can still learn the classification capability of the missing category and complete the training, so the trained network model is complete, accurate and effective.
In step S130, federal modeling parameters corresponding to the federal classification modeling task are obtained, and the local soft label vector of each category is sent to the coordinator according to the federal modeling parameters.
In the exemplary embodiment of the disclosure, the knowledge federation aims to let each participant exchange the "knowledge" in its data without the data leaving the local site, so as to establish a model that makes full use of each participant's data and achieve the goal that "the data is usable but invisible, and the knowledge is shared".
Wherein "knowledge" may be understood as information transferred between the server or terminal of the participant and the terminal or server of the coordinator. For example, "knowledge" in the participant's terminal or server may be extracted or calculated from sample data locally.
According to the data distribution characteristics of each participant, the knowledge federation can be divided into a cross-feature federation, a cross-sample federation and a compound federation. The cross-sample federation means that data of each training party has the same characteristics, but samples of all the parties participating in modeling are independent, and each training party has label data corresponding to the sample of each training party.
The purpose of cross-sample federated modeling is to fully utilize sample and label data of all training parties under the condition that the data is not out of the local, and obtain a federated model which has better effect than the model trained by only using local data.
In order to realize the knowledge federation among all the participants, the initiator can determine parameters related to the knowledge federation at this time, namely, the federation modeling parameters. The federal modeling parameters may include federal training rounds and communication frequency conditions.
The number of federal training rounds is the number of epochs. One epoch is the process in which the complete data set passes through the neural network once forward and once backward, that is, all training samples have completed one forward propagation and one backward propagation in the neural network. Put simply, an epoch is one pass of training over all training samples.
The communication frequency conditions dictate the frequency and conditions of communication between the knowledge distillation process and the various parties. For example, the communication frequency condition may be that knowledge distillation processing and communication are performed once per Epoch.
In an alternative embodiment, the federal modeling parameters include communication frequency conditions. And when the training process of the local neural network model meets the communication frequency condition, sending the local soft label vector of each category to the coordinator.
Each participant sends the local soft label vector to the coordinator for communication, so that the local soft label vector can be sent only when the training process of the neural network model meets the communication frequency condition.
After receiving the soft label vectors sent by each participant according to the communication frequency condition, the coordinator can calculate, for each participant, a federal label vector from the soft label vectors. The federal label vector of a participant is, for each category, the mean of the corresponding category soft label vectors provided by the other participants, excluding that participant itself.
Further, the coordinator sends the federal label vector of each participant back to the corresponding participant.
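For illustration, the coordinator-side aggregation could be sketched as follows, under the assumption stated above that the federal label vector returned to a participant is, for each category, the mean of the soft label vectors provided by the other participants; the data structures and names are hypothetical.

    import numpy as np

    def aggregate_federal_labels(soft_labels_by_party: dict) -> dict:
        """soft_labels_by_party: {party_id: {category: soft_label_vector}}.
        Returns, for every party and category, the mean of the other parties' soft label vectors."""
        federal = {}
        for party in soft_labels_by_party:
            others = [p for p in soft_labels_by_party if p != party]
            federal[party] = {}
            categories = set().union(*(soft_labels_by_party[p].keys() for p in others))
            for c in categories:
                vecs = [soft_labels_by_party[p][c] for p in others if c in soft_labels_by_party[p]]
                if vecs:
                    federal[party][c] = np.mean(vecs, axis=0)
        return federal

Note that a category missing at one participant (as for participant C in the scenario below) is still covered by the other participants' soft label vectors.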
In step S140, the federal label vector returned by the coordinator according to the soft label vector is received, and the local neural network model continues to be trained according to the federal label vector and the local training data, so as to complete training and obtain the federal classification modeling model.
In an exemplary embodiment of the present disclosure, after the coordinator calculates the federal label vector corresponding to each participant, the federal label vector may be returned to that participant, and the participant thus receives the corresponding federal label vector for subsequent training.
In an alternative embodiment, the federated modeling parameters include a federated training round number. Fig. 9 is a flow chart illustrating a method for continuing training of the local neural network model by the participants of each federal classification modeling task, as shown in fig. 9, the method at least includes the following steps: in step S910, label data corresponding to the local training data is obtained, and a loss calculation is performed on the predicted label vector and the label data to obtain a first loss value.
It is worth noting that training on one batch is one iteration. In each iteration, the parameters of the neural network model are updated according to the error between the output vectors and the real label data, so that the output vectors of the neural network model are as close as possible to the real labels of the input data, and a trained neural network model is thereby obtained.
Therefore, each federal classification modeling participant acquires label data corresponding to local target training data, and the label data is a real label of the target training data.
Further, the first loss value between the output vector and the label data may be calculated according to equation (3):
H(p, q) = -Σ_i p_i log(q_i)    (3)
Equation (3) is the cross-entropy loss function. Cross Entropy is an important concept in Shannon information theory and is mainly used to measure the difference between two probability distributions. In information theory, the cross entropy of two probability distributions p and q, where p denotes the true distribution and q denotes a non-true (estimated) distribution, represents the average number of bits required to identify an event from the same event set when coding according to q instead of p.
Cross entropy can be used as a loss function in machine learning, where p represents the distribution of the real labels and q represents the distribution of the labels predicted by the trained model, and the similarity of p and q can be measured by the cross-entropy loss function. Cross entropy as a loss function also has the advantage that, when the sigmoid function is used, it avoids the problem of the mean square error loss function learning more slowly as the gradient shrinks, because the rate of the update is controlled by the output error.
In addition, the loss calculation between the output vector and the label data to obtain the first loss value may be implemented using formula (3), or other calculation methods may be used, which is not particularly limited in this exemplary embodiment.
In step S920, loss calculation is performed on the predicted label vector and the federal label vector to obtain a second loss value, and the local neural network model is updated according to the first loss value and the second loss value.
Specifically, the second loss value may be obtained by performing loss calculation on the output vector and the federal label vector; the calculation may follow formula (3) or adopt another manner, which is not particularly limited in this exemplary embodiment.
Furthermore, each federal classification modeling participant uses the first loss value and the second loss value as the loss error for updating the local neural network model and updates the parameters of the local neural network model accordingly.
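As an illustrative sketch of steps S910 and S920 (PyTorch assumed), one update step combining the two loss values might look like the following; the weighting coefficient alpha and the use of the distillation temperature T for the second loss are assumptions not fixed by the present disclosure, and federal_soft is assumed to map each category present in the batch to its federal label vector.

    import torch
    import torch.nn.functional as F

    def federated_update(model, optimizer, x, y, federal_soft, T=3.0, alpha=0.5):
        """One iteration: first loss against the real labels, second loss against the
        federal label vectors of the corresponding categories, then back-propagation."""
        logits = model(x)
        loss_hard = F.cross_entropy(logits, y)                  # first loss value, cf. formula (3)
        soft_targets = torch.stack([federal_soft[int(c)] for c in y])
        log_q = F.log_softmax(logits / T, dim=1)
        loss_soft = -(soft_targets * log_q).sum(dim=1).mean()   # second loss value (cross entropy with soft targets)
        loss = alpha * loss_hard + (1 - alpha) * loss_soft      # combined loss error; weighting is an assumption
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()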
In step S930, the updated local neural network model continues to be trained until the number of times of training of the neural network model reaches the number of federal training rounds, so as to obtain a trained federal classification modeling model.
After the update of the neural network model is finished, the iterative training of the neural network model is finished. Also, the number of training times of the neural network model may be increased by one.
Further, each federal classification modeling participant can continue to train the updated local neural network model. As in the first round of training of the local neural network model, knowledge distillation processing is performed on the prediction label vectors obtained in the continued training, communication with the coordinator is carried out, the federal label vectors returned by the coordinator are updated, and the neural network model is updated again accordingly.
In the process of continuously counting the training times of the neural network model, when the training times of the neural network model reach the number of federal training rounds, the completion of the federal classification modeling task at this time is indicated, namely the iterative training of the neural network model is completed, and each participant can obtain the trained neural network model and quit the training.
In the exemplary embodiment, the neural network model is trained iteratively by updating it with the loss values, and the training of multiple neural network models is completed at the same time, so the training mode is more flexible; the multiple neural network models influence one another through the federal label vectors, and the modeling effect is better.
The neural network and knowledge distillation based cross-sample federal classification modeling method in the disclosed embodiments is described in detail below in conjunction with an application scenario.
Fig. 10 shows a flow diagram of a cross-sample federal modeling method based on neural network and knowledge distillation in an application scenario, as shown in fig. 10, in step S1010, an initiator initiates a task and formulates a label standard.
The initiator of the federal classification modeling task formulates the label standard for the task. In the federal classification modeling task, the participants are all the data providers that have requested to join the task; the initiator is one of the participants, and only one initiator exists in one federal classification modeling task; the coordinator is negotiated and confirmed by all the participants before the federal classification modeling task is initiated, and can be one of the participants or an organization or institution other than the participants, and only one coordinator exists in one federal classification modeling task.
For example, institution A has images of three categories: dogs, cats and cattle. The sample size for dogs and for cattle is 2,000 each, while the sample size for cats is 20,000, so there are far more cat samples than dog or cattle samples. To train an image classification model for the three animals, a label standard first needs to be established, as shown in Table 1:
TABLE 1

Category    Label
Dog         0
Cat         1
Cattle      2
And obtaining label standard information after the label standard is established. The label standard information defines classification categories in the federal classification modeling task at this time to include three types, namely dog, cat and cattle, wherein the label of the dog is 0, the label of the cat is 1, and the label of the cattle is 2.
Further, institution A sends a federal modeling request to institutions B, C, D and E; institutions B, C and D agree to join the federal modeling while institution E refuses, so institution A sends the label standard information to institutions B, C and D, which have agreed to join the federal modeling.
In step S1020, the participants perform data filtering and label alignment.
Institutions A, B, C and D collect the images of dogs, cats and cattle defined in the label standard from their own data, and the sample size (number of images) statistics of each party are shown in Table 2:
TABLE 2
In addition, when the original training data provided by a participant includes training data outside the label standard, such data can be deleted from that participant's original data to obtain the target training data; that is, the original training data is filtered to remove meaningless training data so that data of other categories does not take part in this federal classification modeling task. When none of a participant's original training data falls into the categories defined by the label standard, that participant is an invalid participant of this federal classification modeling task and needs to quit the task.
Since institution D does not have any image data of dogs, cats or cattle, it exits as a participant. However, institutions A, B and C agree to have institution D act as the coordinator, and institution D accepts and joins this federal classification modeling task as the coordinator, so in this training process A, B and C are the participants, A is the task initiator, and D is the coordinator. Thereafter, institutions A and B label the images of dogs, cats and cattle in their own data as 0, 1 and 2, respectively, according to the label standard, and institution C labels its data of cats and cattle as 1 and 2, respectively, according to the label standard.
In step S1030, the participants define the neural network structure and the parameter initialization manner.
The participant institutions A, B and C each perform structure self-definition, customizing their neural network structures according to their own data amount and classification task.
Fig. 11 shows a schematic structural diagram of the network structure of participant A in this application scenario. As shown in fig. 11, participant A's neural network model consists of 1 convolutional layer, 1 pooling layer and 2 fully-connected layers.
Fig. 12 shows a schematic structural diagram of the network structure of participant B in this application scenario. As shown in fig. 12, participant B's neural network model consists of 2 convolutional layers, 2 pooling layers and 3 fully-connected layers.
Fig. 13 shows a schematic structural diagram of the network structure of participant C in this application scenario. As shown in fig. 13, participant C's neural network model consists of 2 convolutional layers, 2 pooling layers and 2 fully-connected layers.
In addition, the output layers of the network structures shown in fig. 11, 12, and 13 each have 3 nodes, and the output values represent the probabilities that the image samples are a dog, a cat, and a cow, respectively.
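The self-defined structure of fig. 11 could, for instance, be expressed as the following PyTorch sketch; the channel counts and the 32x32 RGB input size are illustrative assumptions, only the layer counts and the 3-node output follow the scenario:

```python
import torch.nn as nn
import torch.nn.functional as F

class ParticipantANet(nn.Module):
    """Participant A's self-defined structure from fig. 11: 1 convolutional
    layer, 1 pooling layer, 2 fully-connected layers and a 3-node output
    (dog / cat / cow). Channel counts and input size are assumptions."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # 1 convolutional layer
        self.pool = nn.MaxPool2d(2)                             # 1 pooling layer
        self.fc1 = nn.Linear(16 * 16 * 16, 64)                  # fully-connected layer 1
        self.fc2 = nn.Linear(64, num_classes)                   # output layer: 3 nodes

    def forward(self, x):
        x = self.pool(F.relu(self.conv(x)))   # 32x32 -> 16x16 feature maps
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)                    # raw logits, one per category
```

Participants B and C would define the deeper structures of figs. 12 and 13 independently in the same way; only the three-node output layer is common to all parties.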
After the network structure information is obtained, parameter initialization can be performed. Parameter initialization sets the weights of each layer of the neural network model, and each participant can select an initialization scheme appropriate to its own neural network model.
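A minimal sketch of this initialization step follows; Xavier-uniform initialization is only one possible choice, since the scheme leaves the initialization manner to each participant, and `ParticipantANet` refers to the illustrative structure sketched above:

```python
import torch.nn as nn

def init_weights(model):
    """Example parameter initialization. Each participant may select its own
    scheme; Xavier-uniform weights with zero biases is just one possibility."""
    for layer in model.modules():
        if isinstance(layer, (nn.Conv2d, nn.Linear)):
            nn.init.xavier_uniform_(layer.weight)
            nn.init.zeros_(layer.bias)

model_a = ParticipantANet()   # the structure sketched above for participant A
init_weights(model_a)
```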
In step S1040, each participant trains locally according to the real labels and distills the per-category soft labels.
Participant institutions A, B and C each divide their own data at a 4:1 ratio (of course, the specific ratio may be set according to the actual situation and is not limited here) into a training data set and a validation data set. The per-category sample counts of A, B and C in the training and validation sets are shown in table 3:
TABLE 3
Participants A, B and C each divide their training data sets into batches; for example, A, B and C define training batch sizes of 32, 256 and 128, respectively.
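A minimal sketch of the 4:1 split and the batch division follows; the use of `random_split` and `DataLoader` is an illustrative assumption about tooling, not part of the scheme:

```python
from torch.utils.data import DataLoader, random_split

def prepare_loaders(local_dataset, batch_size):
    """Divide local data 4:1 into a training set and a validation set, then
    divide the training set into batches (e.g. batch size 32 for A, 256 for B,
    128 for C). The 4:1 ratio follows the scenario; other ratios are possible."""
    n_train = int(0.8 * len(local_dataset))
    train_set, val_set = random_split(local_dataset, [n_train, len(local_dataset) - n_train])
    train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=batch_size)
    return train_loader, val_loader
```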
The initiator A then determines the federal modeling parameters, for example a number of federal training rounds of 10 epochs for each participant and a soft label communication frequency (i.e. the communication frequency condition) of one soft label distillation and communication per epoch. Initiator A then sends these parameters to participants B and C and to coordinator D.
Then, participants A, B and C start iterative training of their local neural network models on their own training data sets: the training loss of each iteration is calculated from the local model's output vector and the training sample's real label, the loss is back-propagated, and the local model parameters are updated.
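A hedged sketch of one such local training epoch follows (cross-entropy against the real labels, back-propagation, parameter update); the optimizer choice is left to each participant and is an assumption here:

```python
import torch

def local_epoch(model, train_loader, optimizer):
    """One epoch of purely local training: compute the loss between the model
    output and the real labels, back-propagate it, update the local parameters."""
    criterion = torch.nn.CrossEntropyLoss()
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        logits = model(images)
        loss = criterion(logits, labels)   # training loss of this iteration
        loss.backward()                    # back-propagation
        optimizer.step()                   # update of the local model parameters
```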
After the first round of training, knowledge distillation is performed on the output vectors of the first round and the corresponding temperature parameter according to formula (2) to obtain the sample distillation vectors.
Then, the sample distillation vectors labeled 0 are averaged to obtain the soft label vector of the dog samples, the sample distillation vectors labeled 1 are averaged to obtain the soft label vector of the cat samples, and similarly the sample distillation vectors labeled 2 are averaged to obtain the soft label vector of the cow samples.
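A sketch of the distillation and per-category averaging follows; formula (2) is not reproduced here, and the usual temperature-scaled softmax is assumed in its place:

```python
import torch

def distill(logits, temperature):
    """Sample distillation vector. The patent's formula (2) is not reproduced
    here; the usual temperature-scaled softmax softmax(z / T) is assumed."""
    return torch.softmax(logits / temperature, dim=-1)

def class_soft_labels(all_logits, all_labels, temperature, num_classes=3):
    """Average the distillation vectors of samples sharing the same real label
    to obtain one soft label vector per local category. Categories absent
    locally (e.g. participant C has no dog samples) are simply skipped."""
    soft = distill(all_logits, temperature)
    return {c: soft[all_labels == c].mean(dim=0)
            for c in range(num_classes)
            if (all_labels == c).any()}
```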
For example, participant A's calculated soft label vectors for the dog, cat and cow categories are [0.6, 0.3, 0.1], [0.4, 0.5, 0.1] and [0.2, 0.2, 0.6], respectively. Participants A, B and C each send their per-category soft label vectors to coordinator D.
In step S1050, it is determined whether all participants have finished training.
After each round of training, the training end condition of each federal classification modeling participant is judged.
In step S1060, the coordinator computes the federal soft labels.
When the judgment shows that not all participants have finished training, the coordinator calculates the federal label vectors for each participant.
After receiving the per-category soft label vectors sent by participants A, B and C in the first round, institution D calculates the federal label vectors of participants A, B and C and sends each participant its corresponding federal label vectors.
For example, participant C's federal cat label vector [0.35, 0.55, 0.1] is the average of the cat soft label vectors [0.4, 0.5, 0.1] and [0.3, 0.6, 0.1] sent by participants A and B.
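The coordinator-side computation can be sketched as below; the rule of averaging the other participants' soft label vectors (excluding the receiving party's own) matches the worked example above, but is otherwise an assumption of this sketch, and numpy is used only for illustration:

```python
import numpy as np

def federal_label_vectors(soft_labels_by_party, receiver):
    """Coordinator-side computation: the federal label vector returned to
    `receiver` is, per category, the average of the soft label vectors
    reported by the *other* participants."""
    others = [p for p in soft_labels_by_party if p != receiver]
    categories = set()
    for p in others:
        categories.update(soft_labels_by_party[p])
    federal = {}
    for category in categories:
        vectors = [soft_labels_by_party[p][category]
                   for p in others if category in soft_labels_by_party[p]]
        federal[category] = np.mean(vectors, axis=0)
    return federal

# e.g. C's federal cat vector = mean of A's [0.4, 0.5, 0.1] and B's [0.3, 0.6, 0.1]
#      = [0.35, 0.55, 0.1]
```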
In step S1070, each participant trains locally according to the real labels and the federal soft labels, and distills the per-category soft labels.
Participants A, B and C begin the next round of training based on the federal label vectors sent by coordinator D and their local data. The parameters of the neural network model are updated according to the loss value of each batch, which comprises a first loss value between the output vector and the real label and a second loss value between the output vector and the federal label vector.
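A hedged sketch of such a combined batch loss follows; the KL-divergence form of the second loss and the weighting factor `alpha` are illustrative assumptions, since the scheme only states that both loss values are used to update the model:

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, labels, federal_vectors, temperature, alpha=0.5):
    """Batch loss for the later rounds: a first loss value between the output
    and the real labels plus a second loss value between the distilled output
    and the federal label vector of each sample's category."""
    first = F.cross_entropy(logits, labels)                      # first loss value
    targets = torch.stack([federal_vectors[int(y)] for y in labels])
    log_soft = F.log_softmax(logits / temperature, dim=-1)
    second = F.kl_div(log_soft, targets, reduction="batchmean")  # second loss value
    return first + alpha * second
```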
After the next round of training is completed, the soft label of each category of samples is re-distilled and the updated soft labels are sent to institution D.
Steps S1050 to S1070 are repeated until all participants have completed 10 rounds of training, i.e. training ends once the number of federal training rounds is reached, and institutions A, B and C each obtain a trained neural network model.
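Putting the pieces together, a high-level, participant-side sketch of the 10-round loop might look as follows; the two callables stand in for the unspecified communication with coordinator D, and `combined_loss` and `class_soft_labels` are the earlier sketches:

```python
import torch
import torch.nn.functional as F

def run_participant(model, train_loader, optimizer, temperature,
                    send_soft_labels, get_federal_vectors, rounds=10):
    """Participant-side view of steps S1040-S1070 under the assumptions above."""
    federal_vectors = None
    for _ in range(rounds):                                  # 10 federal training rounds
        for images, labels in train_loader:
            optimizer.zero_grad()
            logits = model(images)
            if federal_vectors is None:                      # first round: real labels only
                loss = F.cross_entropy(logits, labels)
            else:                                            # later rounds: first + second loss
                loss = combined_loss(logits, labels, federal_vectors, temperature)
            loss.backward()
            optimizer.step()
        with torch.no_grad():                                # re-distil per-category soft labels
            all_logits = torch.cat([model(x) for x, _ in train_loader])
            all_labels = torch.cat([y for _, y in train_loader])
        soft = class_soft_labels(all_logits, all_labels, temperature)
        send_soft_labels(soft)                               # send soft labels to coordinator D
        federal_vectors = get_federal_vectors()              # receive this party's federal vectors
```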
In terms of privacy protection, compared with centralized machine learning, in this cross-sample federal classification modeling method based on neural network models and knowledge distillation the participants' data take part in training of the neural network models locally; the participants' data and label information need not be disclosed to other institutions or organizations; only the per-category soft label vectors are transmitted to the coordinator, and since each soft label vector is an average, the real labels of the participants' training data are not disclosed. In short, no data need to be shared among the participants and the coordinator, and the data security of every participant is ensured.
In terms of resource requirements, compared with centralized machine learning, each participant trains only on its local data during model training, so each participant's computing-resource requirement is far lower than that of centralized training. Moreover, each participant only iteratively updates its model according to its own training data and federal label vectors; the training data of all participants need not be stored centrally, so the storage-resource requirement is lower than that of centralized storage. Furthermore, the training data of each participant need not be transmitted to other institutions or organizations; each participant and the coordinator exchange only the per-category soft label vectors, the communication content is small, and the communication and transmission costs are greatly reduced.
In terms of modeling effect, each participant can define the complexity of its neural network model according to its own amount of training data, which effectively prevents model over-fitting and under-fitting. In addition, each participant treats the other participants as teacher networks; when its local training data are scarce or its network structure is simple, it can learn the classification capability of the teacher networks, so the performance of its neural network model is better than that of a model trained only on local data. Moreover, when a participant's sample categories are incomplete, for example when a certain category of samples is missing, the participant can still learn the classification capability for the missing category, giving better completeness.
In terms of flexibility and operability, the participants need not define the same neural network structure, so the scheme is more flexible than other federal parameter aggregation schemes. In addition, during the federal classification modeling task, a participant can apply to join or quit the federal modeling process at any time.
In terms of training rate and communication rate, compared with other federal parameter aggregation schemes, in this application scenario each participant only needs to transmit the per-category soft label vectors and receive the per-category federal label vectors after each round of training, and the communication content requires no extra encryption and decryption calculations, so the training rate and communication rate of the neural network models are greatly increased.
By training the artificial neural networks with the shared label information of this scheme, the training effect of the artificial neural networks can be guaranteed while the communication and transmission complexity requirements for the training data are met.
Furthermore, in exemplary embodiments of the present disclosure, a cross-sample federal classification modeling apparatus based on neural networks and knowledge distillation is also provided. Fig. 14 shows a schematic structural diagram of this apparatus. As shown in fig. 14, the cross-sample federal classification modeling apparatus 1400 based on neural networks and knowledge distillation may include: a model definition module 1410, a model training module 1420, a vector sending module 1430, and a training completion module 1440.
Wherein:
the model definition module 1410 is configured to acquire label standard information to be formulated by a federal classification modeling task, and perform structure self-definition and parameter initialization processing on a local neural network model of the federal classification modeling task according to the label standard information and local training data;
the model training module 1420 is configured to train the local neural network model to obtain a prediction label vector of the local training data, and perform knowledge distillation processing on the prediction label vector to obtain a soft label vector of each local category;
the vector sending module 1430 is configured to obtain federal modeling parameters corresponding to the federal classification modeling task and send the local soft label vector of each category to a coordinator according to the federal modeling parameters;
and a training completion module 1440 configured to receive the federal label vector returned by the coordinator according to the soft label vector, and continue to train the local neural network model according to the federal label vector and the local training data, so as to complete training and obtain a federal classification modeling model.
In an exemplary embodiment of the present invention, the training the local neural network model to obtain the predictive label vector of the local training data includes:
acquiring original training data for training the federal classification modeling task, and performing label alignment processing and data filtering processing on the original training data to obtain local target training data;
and training the local neural network model by using the local target training data to obtain a prediction label vector of the local target training data.
In an exemplary embodiment of the present invention, the training the local neural network model by using the local target training data to obtain a predictive label vector of the local target training data includes:
performing data division on the local target training data to obtain a local training data set, and performing data division on the local training data set to obtain a plurality of groups of data to be trained;
and performing iterative training on the local neural network model by using the multiple groups of data to be trained to obtain a predicted label vector of the local target training data.
In an exemplary embodiment of the present invention, the performing knowledge distillation processing on the prediction label vector to obtain the soft label vector of each local category comprises:
acquiring temperature parameters related to knowledge distillation, and performing knowledge distillation calculation on the prediction label vector of the local training data and the temperature parameters to obtain a distillation vector of the prediction label of the local training data;
and carrying out average calculation on the distillation vectors of the prediction labels of the local training data of the same category to obtain the soft label vector of each local category.
In an exemplary embodiment of the present invention, the federal modeling parameters comprise a number of federal training rounds, and the continuing to train the local neural network model according to the federal label vector and the local training data to complete training and obtain a federal classification modeling model comprises:
obtaining label data corresponding to the local training data, and performing loss calculation on the predicted label vector and the label data to obtain a first loss value;
performing loss calculation on the predicted tag vector and the federal tag vector to obtain a second loss value, and updating the local neural network model according to the first loss value and the second loss value;
and continuing training the updated local neural network model until the training times of the neural network model reach the federal training rounds, and obtaining a trained federal classified modeling model.
In an exemplary embodiment of the present invention, the federal modeling parameters include a communication frequency condition, and the sending the soft label vector of each local category to a coordinator according to the federal modeling parameters comprises:
and when the training process of the local neural network model meets the communication frequency condition, sending the local soft label vector of each category to a coordinator.
In an exemplary embodiment of the present invention, the performing structure self-definition processing and parameter initialization processing on the local neural network model according to the label standard information and the local training data comprises:
determining standard structure information of the local neural network model, and performing structure self-definition on the local neural network model according to the standard structure information to obtain network structure information of the local neural network model;
and performing parameter initialization processing on the local neural network model.
The specific details of the above-mentioned cross-sample federal classification modeling apparatus 1400 based on neural networks and knowledge distillation have been described in detail in the training method of the corresponding neural network model, and therefore are not described herein again.
It should be noted that although several modules or units of a cross-sample federal classification modeling apparatus 1400 based on neural networks and knowledge distillation are mentioned in the above detailed description, such partitioning is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
In addition, in an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
An electronic device 1500 according to such an embodiment of the invention is described below with reference to fig. 15. The electronic device 1500 shown in fig. 15 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 15, electronic device 1500 is in the form of a general purpose computing device. Components of electronic device 1500 may include, but are not limited to: the at least one processing unit 1510, the at least one storage unit 1520, a bus 1530 connecting different system components (including the storage unit 1520 and the processing unit 1510), and a display unit 1540.
Wherein the storage unit stores program code that is executable by the processing unit 1510 to cause the processing unit 1510 to perform steps according to various exemplary embodiments of the present invention as described in the above section "exemplary methods" of the present specification.
The storage unit 1520 may include readable media in the form of volatile storage units, such as a random access memory unit (RAM)1521 and/or a cache memory unit 1522, and may further include a read-only memory unit (ROM) 1523.
The storage unit 1520 may also include a program/utility 1524 having a set (at least one) of program modules 1525, such program modules 1525 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 1530 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
The electronic device 1500 can also communicate with one or more external devices 1700 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1500, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 1500 to communicate with one or more other computing devices. Such communication may occur via the input/output (I/O) interface 1550. Also, the electronic device 1500 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 1560. As shown, the network adapter 1560 communicates with the other modules of the electronic device 1500 via the bus 1530. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 1500, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above-mentioned "exemplary methods" section of the present description, when said program product is run on the terminal device.
Referring to fig. 16, a program product 1600 for implementing the above method according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard and, in the present document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.