Model compression method and device

Document No. 8726 · Published 2021-09-17

1. A method of model compression, comprising:

acquiring a network model to be compressed and a target compression ratio;

determining a node compression mode according to the target compression ratio, and compressing the network model to be compressed based on the node compression mode and the target compression ratio to obtain a network model to be assigned;

determining a weight value corresponding to each node to be assigned in the network model to be assigned according to the network model to be compressed and the network model to be assigned;

and determining a target compression network model based on the weight values and the network model to be assigned.

2. The method of claim 1, wherein the node compression mode comprises an intra-layer node deletion mode;

the step of compressing the network model to be compressed based on the node compression mode and the target compression ratio to obtain the network model to be assigned comprises the following steps:

for each layer to be compressed of the network model to be compressed, determining the number of layer nodes of the layer to be compressed;

determining the number of reserved nodes according to the number of layer nodes and the target compression ratio;

and constructing a network model to be assigned according to the number of the reserved nodes corresponding to each layer to be compressed.

3. The method of claim 1, wherein the node compression mode comprises an intra-layer node and layer deletion mode;

the step of compressing the network model to be compressed based on the node compression mode and the target compression ratio to obtain the network model to be assigned comprises the following steps:

determining at least one layer to be deleted according to the network model to be compressed and the target compression ratio, and deleting each layer to be deleted from the network model to be compressed to obtain a network model to be processed;

determining the proportion of nodes to be reserved according to the network model to be compressed, the network model to be processed and the target compression ratio;

determining the number of layer nodes of each layer to be processed of the network model to be processed;

determining the number of reserved nodes according to the number of the layer nodes and the proportion of the nodes to be reserved;

and constructing a network model to be assigned according to the number of the reserved nodes corresponding to each layer to be processed.

4. The method according to claim 1, wherein the determining, according to the network model to be compressed and the network model to be assigned, a weight value corresponding to each node to be assigned in the network model to be assigned comprises:

determining each node to be processed in the network model to be compressed corresponding to each node to be assigned in the network model to be assigned according to the network model to be compressed and the network model to be assigned;

and determining the weight value of each node to be processed as the weight value of each node to be assigned.

5. The method according to claim 4, wherein the determining, according to the to-be-compressed network model and the to-be-assigned network model, each to-be-processed node in the to-be-compressed network model corresponding to each to-be-assigned node in the to-be-assigned network model includes:

for each layer to be assigned in the network model to be assigned, determining a layer to be compressed in the network model to be compressed corresponding to the layer to be assigned;

and determining the nodes to be processed in the layer to be compressed, which correspond to the nodes to be assigned in the layer to be assigned, according to the number of the nodes to be assigned in the layer to be assigned and the weight values of the nodes to be processed in the layer to be compressed.

6. The method according to claim 1, wherein the determining a node compression mode according to the target compression ratio comprises:

if the target compression ratio is smaller than or equal to a preset compression ratio, determining that the node compression mode is an intra-layer node deletion mode;

and if the target compression ratio is larger than the preset compression ratio, determining that the node compression mode is an intra-layer node and layer deletion mode.

7. The method according to claim 1, wherein the network model to be compressed comprises a convolutional layer and a batch normalization layer corresponding to the convolutional layer;

determining a weight value corresponding to each node to be assigned in the network model to be assigned according to the network model to be compressed and the network model to be assigned, including:

and determining the weight value of each to-be-assigned node in the to-be-assigned layer in the to-be-assigned network model corresponding to the batch normalization layer and the weight value of each to-be-assigned node in the to-be-assigned layer in the to-be-assigned network model corresponding to the convolution layer according to the weight value corresponding to each to-be-processed node in the batch normalization layer.

8. The method according to claim 7, wherein determining the weight value of each node to be assigned in the layer to be assigned in the network model to be assigned corresponding to the batch normalization layer and the weight value of each node to be assigned in the layer to be assigned in the network model to be assigned corresponding to the convolution layer according to the weight values corresponding to the nodes to be processed in the batch normalization layer comprises:

determining the number of nodes to be assigned in the to-be-assigned layer in the to-be-assigned network model corresponding to the batch normalization layer as a first node number;

sorting the weight values corresponding to the nodes to be processed in the batch normalization layer from large to small, and determining the first node number of weight values at the front of the sorting as first node weight values;

taking each first node weight value as the weight value of each node to be assigned in the layer to be assigned in the network model to be assigned corresponding to the batch normalization layer;

and determining each node to be processed in the convolutional layer according to the node to be processed corresponding to each first node weight value, and taking the weight value of each node to be processed as the weight value of each node to be assigned in the layer to be assigned in the network model to be assigned corresponding to the convolutional layer.

9. The method of claim 1, further comprising, after the determining the target compressed network model:

and training the target compression network model on a training set, reducing the training error of the network through back propagation, updating the weight value of each node to be fine-tuned in the target compression network model, and determining the target adjusted compression network model.

10. A model compression apparatus, comprising:

a to-be-compressed network model obtaining module, configured to obtain a to-be-compressed network model and a target compression ratio;

the to-be-assigned network model determining module is used for determining a node compression mode according to the target compression ratio, and compressing the to-be-compressed network model based on the node compression mode and the target compression ratio to obtain the to-be-assigned network model;

the weight value determining module is used for determining weight values corresponding to the nodes to be assigned in the network model to be assigned according to the network model to be compressed and the network model to be assigned;

and the target compression network model determining module is used for determining a target compression network model based on the weight values and the network model to be assigned.

Background

With the popularization of deep learning, various deep learning models are widely applied in the fields of computer vision, speech recognition, natural language processing and the like.

However, as the network scale of the deep learning model is larger and larger, the model is more and more complex, the storage of the deep learning model occupies a larger space, and the consumed computing resource is too large and the time is too long when the operation is performed. Moreover, when the deep learning models are transplanted and deployed, the speed is too slow, and even the deep learning models cannot be transplanted and deployed.

At present, model compression is often customized to a specific model: there is no unified, generalized compression algorithm that covers fields such as vision, speech, and encoders while keeping algorithm development and model compression from interfering with each other.

Disclosure of Invention

Embodiments of the invention provide a model compression method and a model compression apparatus, so as to compress a deep learning model and save the resources occupied by the model.

In a first aspect, an embodiment of the present invention provides a model compression method, where the method includes:

acquiring a network model to be compressed and a target compression ratio;

determining a node compression mode according to the target compression ratio, and compressing the network model to be compressed based on the node compression mode and the target compression ratio to obtain a network model to be assigned;

determining a weight value corresponding to each node to be assigned in the network model to be assigned according to the network model to be compressed and the network model to be assigned;

and determining a target compression network model based on the weight values and the network model to be assigned.

In a second aspect, an embodiment of the present invention further provides a model compression apparatus, where the apparatus includes:

a to-be-compressed network model obtaining module, configured to obtain a to-be-compressed network model and a target compression ratio;

the to-be-assigned network model determining module is used for determining a node compression mode according to the target compression ratio, and compressing the to-be-compressed network model based on the node compression mode and the target compression ratio to obtain the to-be-assigned network model;

the weight value determining module is used for determining weight values corresponding to the nodes to be assigned in the network model to be assigned according to the network model to be compressed and the network model to be assigned;

and the target compression network model determining module is used for determining a target compression network model based on the weight values and the network model to be assigned.

In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:

one or more processors;

a storage device for storing one or more programs,

when the one or more programs are executed by the one or more processors, the one or more processors implement the model compression method according to any one of the embodiments of the present invention.

In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the model compression method according to any one of the embodiments of the present invention.

According to the technical scheme of the embodiments of the invention, a network model to be compressed and a target compression ratio are obtained, and a node compression mode is determined according to the target compression ratio so that the compression result matches the target compression ratio. The network model to be compressed is compressed based on the node compression mode and the target compression ratio to obtain a network model to be assigned; the weight value corresponding to each node to be assigned in the network model to be assigned is determined according to the network model to be compressed and the network model to be assigned; and a target compression network model is determined based on the weight values and the network model to be assigned. This solves the problems that a deep learning model occupies a large storage space and consumes excessive resources in use, realizes compression of the deep learning model, and achieves the technical effect of saving the resources occupied by the model.

Drawings

In order to more clearly illustrate the technical solutions of the exemplary embodiments of the present invention, a brief description is given below of the drawings used in describing the embodiments. It should be clear that the described figures are only views of some of the embodiments of the invention to be described, not all, and that for a person skilled in the art, other figures can be derived from these figures without inventive effort.

Fig. 1 is a schematic flowchart of a model compression method according to an embodiment of the present invention;

fig. 2 is a schematic flowchart of a model compression method according to a second embodiment of the present invention;

fig. 3 is a schematic flowchart of a model compression method according to a third embodiment of the present invention;

fig. 4 is a schematic structural diagram of a model compressing apparatus according to a fourth embodiment of the present invention;

fig. 5 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.

Detailed Description

The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.

Example one

Fig. 1 is a schematic flowchart of a model compression method according to an embodiment of the present invention. The embodiment is applicable to compressing a deep learning model when the model is built and stored. The method may be executed by a model compression apparatus, which may be implemented in software and/or hardware; the hardware may be an electronic device, optionally a mobile terminal or the like.

As shown in fig. 1, the method of this embodiment specifically includes the following steps:

and S110, acquiring a network model to be compressed and a target compression ratio.

The network model to be compressed can be a complete deep learning network model, and the target compression ratio can be a compression ratio set according to requirements or a compression ratio input by a user.

Specifically, after completing the construction of the deep learning network model, the user may use the complete deep learning network model as the network model to be compressed and set a target compression ratio. If the user does not set a target compression ratio, a default compression ratio may be used, such as 30%. When it is detected that the user has finished inputting the network model to be compressed and the target compression ratio, both can be obtained for model compression.

S120, determining a node compression mode according to the target compression ratio, and compressing the network model to be compressed based on the node compression mode and the target compression ratio to obtain the network model to be assigned.

The node compression mode is the manner in which nodes are deleted during subsequent model compression; it may delete nodes within each layer of the network model to be compressed, and may also delete whole layers. The network model to be assigned is a network model without weight values, constructed from the structure that remains after part of the nodes are deleted from the network model to be compressed.

Specifically, the node compression mode used to compress the network model to be compressed can be determined from the target compression ratio. The network model to be compressed is then compressed in that mode to obtain a network model structure without weight values that meets the target compression ratio; this compressed structure serves as the network model to be assigned.

Optionally, the node compression method may be determined by:

if the target compression ratio is smaller than or equal to the preset compression ratio, determining that the node compression mode is an intra-layer node deletion mode; and if the target compression ratio is larger than the preset compression ratio, determining that the node compression mode is an in-layer node and layer deletion mode.

The preset compression ratio is the discrimination value between the two node compression modes. The intra-layer node deletion mode deletes nodes within each layer of the network model to be compressed. The intra-layer node and layer deletion mode deletes some whole layers of the network model to be compressed as well as nodes within the remaining layers.

Specifically, if the target compression ratio is less than or equal to the preset compression ratio, few nodes need to be deleted, and node compression can be performed by deleting nodes within layers; the node compression mode is therefore determined to be the intra-layer node deletion mode. If the target compression ratio is larger than the preset compression ratio, many nodes need to be deleted, and deleting nodes only within layers would seriously harm the model's effect. In that case some model layers can be deleted in their entirety, so that the compressed model meets the target compression ratio while the loss of model precision is kept small; the node compression mode is therefore determined to be the intra-layer node and layer deletion mode.
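For illustration, the mode decision can be captured in a few lines. The following is a minimal sketch; the 30% preset ratio and the mode names are assumptions of this sketch, not values fixed by the disclosure.

```python
# Minimal sketch of the node compression mode decision.
PRESET_RATIO = 0.30  # hypothetical discrimination value between the two modes

def choose_compression_mode(target_ratio: float) -> str:
    """Pick the node compression mode from the target compression ratio."""
    if target_ratio <= PRESET_RATIO:
        return "intra_layer_node_deletion"
    return "intra_layer_node_and_layer_deletion"
```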

Taking the two node compression modes as examples, the following describes how the network model to be compressed is compressed based on the node compression mode and the target compression ratio to obtain the network model to be assigned:

1. The node compression mode is the intra-layer node deletion mode.

Step one, aiming at each layer to be compressed of the network model to be compressed, determining the number of layer nodes of the layer to be compressed.

Wherein, the layer to be compressed can be each layer structure in the network model to be compressed.

Specifically, each layer to be compressed of the network model to be compressed can be obtained according to the network model to be compressed, and the number of layer nodes can be calculated for each layer to be compressed.

And step two, determining the number of reserved nodes according to the number of layer nodes and the target compression ratio.

The number of the reserved nodes may be the number of the remaining nodes in each layer to be compressed after compression.

Specifically, the product of the number of layer nodes and the target compression ratio gives the number of nodes to delete in each layer to be compressed; the nodes left over are the reserved nodes, so the number of reserved nodes is the number of layer nodes multiplied by (1 − target compression ratio).

Illustratively, the number of layer nodes of one to-be-compressed layer of a certain to-be-compressed network model is 100, and the target compression ratio is 20%, then it may be determined that the number of nodes to be deleted is 20, and the number of remaining nodes is 80, that is, the number of reserved nodes is 80.

And step three, constructing a network model to be assigned according to the number of the reserved nodes corresponding to each layer to be compressed.

Specifically, after the number of the reserved nodes corresponding to each layer to be compressed is determined, a new network model structure may be constructed according to the number of the reserved nodes, and the new network model structure is used as the network model to be assigned.
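As a hedged illustration of steps one to three, the sketch below keeps (1 − target ratio) of the nodes in each layer and rebuilds a weight-free model. A plain fully connected stack stands in for whatever architecture is actually compressed, and the rounding rule is an assumption of this sketch.

```python
import torch.nn as nn

def build_model_to_be_assigned(layer_sizes, target_ratio):
    """Keep (1 - target_ratio) of the nodes per layer and rebuild the model.

    layer_sizes: node count of each layer to be compressed, e.g. [100, 100, 100].
    Returns the reserved node counts and a newly initialized model whose
    weights are to be assigned later.
    """
    kept = [max(1, round(n * (1 - target_ratio))) for n in layer_sizes]
    layers = []
    for in_features, out_features in zip(kept[:-1], kept[1:]):
        layers.append(nn.Linear(in_features, out_features))
    return kept, nn.Sequential(*layers)

# e.g. layers of 100 nodes at a 20% target ratio each reserve 80 nodes
kept, model = build_model_to_be_assigned([100, 100, 100], 0.20)
```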

2. The node compression mode is the intra-layer node and layer deletion mode.

Step one, determining at least one layer to be deleted according to the network model to be compressed and the target compression ratio, and deleting each layer to be deleted from the network model to be compressed to obtain the network model to be processed.

The layer to be deleted may be a layer to be compressed, which needs to be deleted by all nodes.

Specifically, the number of layers to be compressed that need to be deleted at present may be determined according to the target compression ratio. Furthermore, the specific position of the layer to be deleted in the network model to be compressed can be determined, and the network model to be processed is further constructed.

Illustratively, a certain network model to be compressed includes 100 layers to be compressed, the target compression ratio is 50%, and the preset compression ratio is 30%. In this case, roughly 30% of the nodes can be removed by intra-layer node deletion, so the remaining roughly 20% must be removed by deleting whole layers. The number of layers to be deleted can be roughly estimated as the product of 20% and the number of layers to be compressed, i.e., 20 layers are initially marked as layers to be deleted. These layers may be selected uniformly from among the layers to be compressed, or in any other selection manner. Further, it is checked whether the total number of nodes in the layers to be deleted falls within the required range: if it exceeds 20% of all nodes and is at most 50%, the next step is performed; if it does not reach 20%, one more layer to be deleted is added and the total is recalculated, until the total exceeds 20%; if it exceeds 50%, one layer to be deleted is removed and the total is recalculated, until the total is at most 50%.
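A sketch of the layer selection just described, under stated assumptions: layers are picked at even spacing, and instead of adding and removing layers one at a time, the candidate count is simply swept until the deleted node share lands in the (20%, 50%] window implied by the example.

```python
def select_layers_to_delete(layer_sizes, target_ratio=0.5, preset_ratio=0.3):
    """Return indices of layers to delete so that their node total falls in
    (target_ratio - preset_ratio, target_ratio] of all nodes."""
    total = sum(layer_sizes)
    lower = (target_ratio - preset_ratio) * total  # e.g. 20% of all nodes
    upper = target_ratio * total                   # e.g. 50% of all nodes
    n = len(layer_sizes)
    for k in range(1, n):                          # sweep the deleted-layer count
        idx = [i * n // k for i in range(k)]       # evenly spaced candidates
        deleted = sum(layer_sizes[i] for i in idx)
        if lower < deleted <= upper:
            return idx
    raise ValueError("no layer count satisfies the node-share window")

# 100 layers of 10 nodes each: 21 layers (210 nodes, 21%) end up deleted
print(len(select_layers_to_delete([10] * 100)))
```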

And step two, determining the proportion of the nodes to be reserved according to the network model to be compressed, the network model to be processed and the target compression ratio.

The ratio of the nodes to be reserved may be the ratio of the nodes that can be reserved in the remaining layers to be processed.

Specifically, the total number of nodes in the network model to be processed may be determined, the total number of nodes in the network model to be compressed may also be determined, and at this time, the number of nodes that still need to be deleted may be determined. According to the number of nodes which need to be deleted, the proportion of the reserved nodes in the current network model to be processed can be determined.

Illustratively, the to-be-compressed network model includes 1000 nodes, the to-be-processed network model includes 700 nodes, and the target compression ratio is 50%. In this case, 500 nodes should remain in total, so 200 of the 700 remaining nodes still need to be deleted. Accordingly, the proportion of nodes to be reserved is 500/700 ≈ 71.43% (equivalently, 1 − 200/700).

It should be noted that, this example only lists an algorithm for calculating the ratio of the nodes to be reserved, and the ratio of the nodes to be reserved can also be obtained through other calculation methods, which is not specifically limited in this embodiment.
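One possible formula for the proportion of nodes to be reserved, matching the example above, is sketched below; as the note says, other formulas are equally admissible.

```python
def nodes_to_keep_ratio(n_uncompressed, n_after_layer_deletion, target_ratio):
    """One possible formula: share of nodes kept in each remaining layer."""
    n_target = n_uncompressed * (1 - target_ratio)   # e.g. 1000 * 50% = 500
    return n_target / n_after_layer_deletion         # e.g. 500 / 700 ~= 71.43%

assert abs(nodes_to_keep_ratio(1000, 700, 0.5) - 5 / 7) < 1e-9
```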

And step three, determining the number of layer nodes of the layer to be processed aiming at each layer to be processed of the network model to be processed.

Specifically, after the proportion of the nodes to be reserved is determined, the number of layer nodes included in each layer to be processed of the network model to be processed may be determined.

And step four, determining the number of reserved nodes according to the number of layer nodes and the proportion of the nodes to be reserved.

Specifically, the number of reserved nodes can be determined by multiplying the number of layer nodes by the ratio of nodes to be reserved. Through the method, the number of the reserved nodes corresponding to each layer to be processed can be determined.

And step five, constructing a network model to be assigned according to the number of the reserved nodes corresponding to each layer to be processed.

Specifically, after the number of the reserved nodes corresponding to each layer to be processed is determined, each layer structure can be reconstructed, and the layer structures are connected to construct a to-be-assigned network model not containing a weight value.

S130, determining a weight value corresponding to each node to be assigned in the network model to be assigned according to the network model to be compressed and the network model to be assigned.

The nodes to be assigned may be nodes to which no weight value is assigned in the network model to be assigned. The network model to be assigned may be composed of a plurality of layers to be assigned, and each layer to be assigned may be composed of a plurality of nodes to be assigned.

Specifically, according to the network model to be compressed and the network model to be assigned, the layer to be compressed in the network model to be compressed corresponding to each layer to be assigned in the network model to be assigned can be determined, that is, the original layer structure corresponding to each layer to be assigned in the network model to be assigned is determined. Because each node in the network model to be compressed has a weight value, the weight value of each node to be assigned in the network model to be assigned can be determined according to the weight values.

Taking any layer to be assigned in the network model to be assigned as an example, the layer to be compressed in the network model to be compressed corresponding to that layer is determined first. The number of layer nodes of the layer to be assigned should be less than or equal to the number of layer nodes of the layer to be compressed. The nodes of the layer to be compressed can therefore be sorted by weight value from large to small, and the top weight values are selected, as many as there are layer nodes in the layer to be assigned. These weight values are then assigned to the nodes to be assigned in the layer to be assigned, following their order in the layer to be compressed.
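A minimal sketch of this top-k selection follows. Treating each node's "weight value" as a single scalar (for example, a norm of its weights) is an assumption of the sketch.

```python
import torch

def pick_weights_for_layer(node_weights: torch.Tensor, k: int):
    """Select the k largest per-node weight values of a layer to be compressed,
    returned in their original in-layer order for assignment.

    node_weights: one scalar per node (a simplifying assumption)."""
    _, indices = torch.topk(node_weights, k)
    order = torch.sort(indices).values          # restore the in-layer order
    return order, node_weights[order]

order, values = pick_weights_for_layer(torch.tensor([0.1, 0.9, 0.4, 0.7]), 3)
# order -> tensor([1, 2, 3]); values -> tensor([0.9000, 0.4000, 0.7000])
```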

S140, determining a target compression network model based on the weight values and the network model to be assigned.

The target compression network model may be a network model obtained after compression.

Specifically, each weight value is assigned to each node to be assigned in the network model to be assigned, so that a compression network model with the weight value can be obtained, and the compression network model with the weight value is used as a target compression network model.

According to the technical scheme of the embodiments of the invention, a network model to be compressed and a target compression ratio are obtained, and a node compression mode is determined according to the target compression ratio so that the compression result matches the target compression ratio. The network model to be compressed is compressed based on the node compression mode and the target compression ratio to obtain a network model to be assigned; the weight value corresponding to each node to be assigned in the network model to be assigned is determined according to the network model to be compressed and the network model to be assigned; and a target compression network model is determined based on the weight values and the network model to be assigned. This solves the problems that a deep learning model occupies a large storage space and consumes excessive resources in use, realizes compression of the deep learning model, and achieves the technical effect of saving the resources occupied by the model.

Example two

Fig. 2 is a schematic flowchart of a model compression method according to a second embodiment of the present invention. On the basis of the above embodiment, this embodiment refines the technical solution for determining the weight values. Terms identical or corresponding to those in the above embodiment are not explained again here.

As shown in fig. 2, the method of this embodiment specifically includes the following steps:

s210, obtaining a network model to be compressed and a target compression ratio.

S220, determining a node compression mode according to the target compression ratio, and compressing the network model to be compressed based on the node compression mode and the target compression ratio to obtain the network model to be assigned.

And S230, determining each node to be processed in the network model to be compressed corresponding to each node to be assigned in the network model to be assigned according to the network model to be compressed and the network model to be assigned.

The nodes to be processed may be nodes in a network model to be compressed.

Specifically, each node corresponding to each node to be assigned in the network model to be assigned may be determined from the nodes to be processed in the network model to be compressed, so as to be used for subsequently assigning a weight value.

Optionally, each node to be processed in the network model to be compressed corresponding to each node to be assigned in the network model to be assigned may be determined based on the following steps:

step one, aiming at each layer to be assigned in the network model to be assigned, determining a layer to be compressed in the network model to be compressed corresponding to the layer to be assigned.

The layer to be assigned may be each layer structure in the network model to be assigned.

Specifically, according to the network model to be compressed and the network model to be assigned, the layer to be compressed in the network model to be compressed corresponding to each layer to be assigned in the network model to be assigned can be determined, that is, the original layer structure corresponding to each layer to be assigned in the network model to be assigned is determined.

And step two, determining the nodes to be processed in the layer to be compressed corresponding to the nodes to be assigned in the layer to be assigned according to the number of the nodes to be assigned in the layer to be assigned and the weight values of the nodes to be processed in the layer to be compressed.

Specifically, each node to be processed in the layer to be compressed has a corresponding weight value, and these weight values are to be assigned to the corresponding nodes to be assigned. First, the number of nodes to be assigned in the layer to be assigned is determined, i.e., the number of weight values required. Then that many weight values are selected from the layer to be compressed corresponding to the layer to be assigned: the weight values in the layer to be compressed may be sorted from large to small, and the top weight values, as many as there are nodes to be assigned, are taken. The nodes to be processed carrying the selected weight values are taken as the nodes to be processed corresponding to the respective nodes to be assigned.

And S240, determining the weight value of each node to be processed as the weight value of each node to be assigned.

Specifically, after the corresponding relationship between the node to be processed and the node to be assigned is determined, the weight value of the node to be processed may be determined as the weight value of the node to be assigned according to the corresponding relationship.

In the deep learning network model, a convolutional layer and a Batch Normalization layer (BN) corresponding to the convolutional layer are usually included, and at this time, the assignment process of the weight value can be simplified. The method can be specifically completed by the following steps:

and determining the weight value of each node to be assigned in the layer to be assigned in the network model to be assigned corresponding to the batch normalization layer and the weight value of each node to be assigned in the layer to be assigned in the network model to be assigned corresponding to the convolution layer according to the weight value corresponding to each node to be processed in the batch normalization layer.

Wherein, the batch normalization layer and the convolution layer have corresponding relation.

Specifically, since determining the weight value of the batch normalization layer is simpler and more convenient than determining the weight value of the convolution layer, each node to be processed in the batch normalization layer may correspond to each node to be processed in the convolution layer according to the correspondence, that is, the node to be processed in the batch normalization layer may be used as an index, and the node to be processed in the convolution layer corresponding to the batch normalization layer may be determined according to the index. Furthermore, the determined weight value in the batch normalization layer may be used as the weight value of each node to be assigned in the layer to be assigned in the network model to be assigned corresponding to the batch normalization layer. And determining corresponding weight values in the convolution layer according to the determined weight values in the batch normalization layer as indexes, and taking the weight values as the weight values of each node to be assigned in the layer to be assigned in the network model to be assigned corresponding to the convolution layer.

It should be noted that, for the determined weight value of each node to be assigned, quantization processing may be performed, so as to facilitate storage and subsequent calculation. For example, the weight value before quantization is 32-bit floating point data, and the weight value after quantization is quantized data with a fixed bit width.
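As one hedged illustration of such quantization, the sketch below maps 32-bit floats to signed integers with a symmetric scale; the 8-bit width and the symmetric scheme are assumptions, since the text only asks for a fixed bit width.

```python
import numpy as np

def quantize_symmetric(weights: np.ndarray, bits: int = 8):
    """Quantize float32 weights to signed integers of a fixed bit width."""
    qmax = 2 ** (bits - 1) - 1                    # 127 for 8 bits
    max_abs = float(np.abs(weights).max())
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int32)
    return q, scale                               # dequantize as q * scale

w = np.random.randn(64).astype(np.float32)
q, scale = quantize_symmetric(w)
assert np.max(np.abs(q * scale - w)) <= scale     # quantization error bound
```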

Optionally, the weight value of each to-be-assigned node in the to-be-assigned layer in the to-be-assigned network model may be determined through the following steps:

the method comprises the steps of firstly, determining the number of nodes to be assigned in a layer to be assigned in a network model to be assigned corresponding to a batch normalization layer as a first node number.

The first node number may be the number of nodes to be assigned in the layer to be assigned corresponding to the batch normalization layer, and since the convolution layer corresponds to the batch normalization layer, the first node number is also the number of nodes to be assigned in the layer to be assigned corresponding to the convolution layer.

Specifically, the to-be-assigned layer corresponding to the batch normalization layer may be determined, and then, the number of the to-be-assigned nodes in the to-be-assigned layer is determined to be the first node number, that is, the number of the weighted values to be determined.

And step two, sorting the weighted values corresponding to the nodes to be processed in the batch normalization layer from large to small, and determining the weighted value of the first node quantity at the front of the sorting as the first node weighted value.

The first node weight value may be a weight value selected from the weight values in the batch normalization layer and used for assigning to the layer to be assigned.

Specifically, the weighted values corresponding to the nodes to be processed in the batch normalization layer may be sorted from large to small, and the weighted value of the first node number before the sorting is selected as the first node weighted value to assign the layer to be assigned corresponding to the batch normalization layer.

And step three, taking the weight value of each first node as the weight value of each node to be assigned in the layer to be assigned in the network model to be assigned corresponding to the batch normalization layer.

Specifically, the nodes to be processed corresponding to the weighted values of the first nodes are determined, and the weighted values of the sorted first nodes are determined as the weighted values of the nodes to be assigned in the layer to be assigned corresponding to the batch normalization layer according to the arrangement sequence of the nodes to be processed.

And step four, determining each node to be processed in the convolutional layer according to the node to be processed corresponding to the weight value of each first node, and taking the weight value of each node to be processed as the weight value of each node to be assigned in the layer to be assigned in the network model to be assigned corresponding to the convolutional layer.

Specifically, the nodes to be processed of the first number of nodes in the batch normalization layer can be determined according to the first node weight values, and then, the corresponding nodes to be processed in the convolution layer can be determined according to the nodes to be processed. The weight values of the nodes to be processed in the convolutional layer are considered to be the weight values of the nodes to be assigned in the layers to be assigned in the network model to be assigned corresponding to the convolutional layer.
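The BN-as-index procedure of steps one to four can be sketched with PyTorch modules. Taking the magnitude of the BN scale factor as the per-node weight value follows common channel-pruning practice and is an assumption here.

```python
import torch
import torch.nn as nn

def select_by_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d, k: int):
    """Use the k largest BN scale factors as the index of the conv output
    channels whose weights are carried over to the model to be assigned."""
    _, idx = torch.topk(bn.weight.detach().abs(), k)   # first node weight values
    idx = torch.sort(idx).values                       # keep original channel order
    conv_w = conv.weight.detach()[idx]                 # shape [k, C_in, kH, kW]
    bn_w, bn_b = bn.weight.detach()[idx], bn.bias.detach()[idx]
    return idx, conv_w, bn_w, bn_b

conv, bn = nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8)
idx, conv_w, bn_w, bn_b = select_by_bn(conv, bn, k=4)
```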

And S250, determining a target compression network model based on the weight values and the network model to be assigned.

On the basis of the foregoing embodiments, in order to bring the effect of the compressed network model close to that of the network model to be compressed, the target compression network model may be fine-tuned. Specifically: the target compression network model is trained on a training set, the training error of the network is reduced through back propagation, the weight value of each node to be fine-tuned in the target compression network model is updated, and the target adjusted compression network model is determined.

The training set may be a predetermined data set, or the training data set used when the network model to be compressed was established. The nodes to be fine-tuned are nodes in the target compression network model. The target adjusted compression network model is the model obtained after network fine-tuning.

Specifically, since part of the nodes are deleted during node compression, the accuracy of the target compression network model is reduced. To compensate, the target compression network model can be fine-tuned: training is continued on a training set, the training error of the network model is further reduced through back propagation, the weight values of the nodes to be fine-tuned in the target compression network model are updated, and the learning rate may be adjusted, among other measures.
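A minimal fine-tuning loop matching this description might look as follows; the loss function, optimizer, and hyperparameters are illustrative assumptions.

```python
import torch

def fine_tune(model, train_loader, epochs=3, lr=1e-4):
    """Continue training the target compression network model on a training
    set, reducing the training error through back propagation."""
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for inputs, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), labels)
            loss.backward()              # back-propagate the training error
            optimizer.step()             # update the nodes to be fine-tuned
    return model
```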

It should be noted that a model accuracy loss threshold may be preset to ensure that the accuracy of the target adjusted compressed network model after fine tuning is within an acceptable range.

In the technical scheme of this embodiment, a network model to be compressed and a target compression ratio are obtained, and a node compression mode is determined according to the target compression ratio so that the compression result matches the target compression ratio. The network model to be compressed is compressed based on the node compression mode and the target compression ratio to obtain a network model to be assigned; each node to be processed in the network model to be compressed corresponding to each node to be assigned is determined according to the two models; and the weight value of each node to be processed is determined as the weight value of the corresponding node to be assigned, so that the network model to be assigned is assigned accurately. A target compression network model is then determined based on the weight values and the network model to be assigned. This solves the problems that a deep learning model occupies a large storage space, consumes resources in use, and loses effectiveness after compression; it realizes compression of the deep learning model while preserving model precision, and achieves the technical effect of saving the resources occupied by the model.

EXAMPLE III

As an alternative implementation of the above embodiments, fig. 3 is a schematic flow chart of a model compression method provided in a third embodiment of the present invention. The same or corresponding terms as those in the above embodiments are not explained in detail herein.

As shown in fig. 3, the method of this embodiment specifically includes the following steps:

1. An uncompressed model (the network model to be compressed) is obtained.

Specifically, an uncompressed model based on the PyTorch framework (an open-source Python machine learning library) may be obtained. If the uncompressed model was built with another framework, it can be converted using tools such as Open Neural Network Exchange (ONNX).

2. A compression ratio (the target compression ratio) is set, and the number of layers and the number of nodes of the uncompressed model in the transverse and longitudinal directions are calculated.

Specifically, the compression ratio may be preset, and the number of layers of the model and the number of nodes in each layer may be determined according to the uncompressed model.
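A hedged sketch of this inventory for a PyTorch model: it counts linear and convolutional layers and treats a "node" as an output feature or channel, which is an assumption of the sketch.

```python
import torch.nn as nn

def count_layers_and_nodes(model: nn.Module):
    """Number of layers and per-layer node counts of an uncompressed model."""
    nodes_per_layer = []
    for module in model.modules():
        if isinstance(module, nn.Linear):
            nodes_per_layer.append(module.out_features)
        elif isinstance(module, nn.Conv2d):
            nodes_per_layer.append(module.out_channels)
    return len(nodes_per_layer), nodes_per_layer

n_layers, nodes = count_layers_and_nodes(
    nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 10))
)
# n_layers -> 2, nodes -> [16, 10]
```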

3. The node weights retained in each layer are calculated and assigned to a new model (the network model to be assigned).

Specifically, the nodes that can be reserved in the uncompressed model are determined according to the node compression mode, and the reserved nodes form a new model. The weights of the reserved nodes are then assigned to the new model.

Optionally, the weights may be quantized during the assignment process.

Optionally, when calculating the node weights retained in each layer, the BN scale factor may be used as the selection criterion for the corresponding convolutional layer.

It should be noted that the calculation process of the BN layer is as follows:

$$\hat{z} = \frac{z_{in} - \mu_B}{\sqrt{\sigma_B^2 + \varepsilon}}, \qquad z_{out} = \gamma \hat{z} + \beta$$

where $\mu_B$ and $\sigma_B^2$ are the mean and variance of the activation values, $\varepsilon$ is a small constant, $z_{in}$ is the input value of the BN layer, and $\hat{z}$ is the normalized result; $\gamma$ and $\beta$ are trainable affine transformation parameters (scale and shift), and $z_{out}$ is the output value of the BN layer.

It can be understood that normalization constrains the data to zero mean and unit variance, changing its distribution characteristics; the two reconstruction parameters $\gamma$ and $\beta$ are therefore introduced so that the original distribution can be reconstructed through the learned parameters.

Specifically, the weights of the BN layer of the uncompressed model are sorted from large to small, and as many of the top weights as there are nodes to be assigned are recorded. The nodes corresponding to these weights serve as indices into the BN layer, and from these indices the corresponding weights in the convolutional layer can be obtained.

4. The compression model is output.

Specifically, the new model obtained by assignment may be output as a compression model.

5. The compression model is fine-tuned.

Specifically, the compression model may be finely adjusted to obtain an optimum effect, thereby obtaining a compression model that is put to practical use.

6. The practical compression model is output.

Through the above steps, model compression is achieved. Table 1 compares the size, recall, and inference speed of a model under different compression ratios. It can be seen that the model compression algorithm provided by the embodiment of the invention maintains a high recall rate even at a high compression ratio, saving both storage space and computation time.

TABLE 1

| Compression Ratio | Size | Recall | Inference Speed |
| --- | --- | --- | --- |
| 0 | 246.4M | 98.98% | 230ms~250ms |
| 25% | 139.5M | 98.54% | 170ms~190ms |
| 50% | 57.6M | 97.7% | 100ms~130ms |

In the technical scheme of this embodiment, an uncompressed model is obtained; a compression ratio is set; the number of layers and the number of nodes of the uncompressed model in the transverse and longitudinal directions are calculated; and the node weights retained in each layer are calculated and assigned to a new model, thereby compressing the uncompressed model. The compression model is then output and fine-tuned, and the practical compression model is output. This solves the problems that a deep learning model is too slow and insufficiently accurate in actual deployment, realizes a general model compression algorithm, and achieves the technical effects of promoting model miniaturization and improving model deployment capability.

Example four

Fig. 4 is a schematic structural diagram of a model compression apparatus according to a fourth embodiment of the present invention. The apparatus includes: a to-be-compressed network model obtaining module 410, a to-be-assigned network model determining module 420, a weight value determining module 430, and a target compression network model determining module 440.

The module 410 for obtaining a network model to be compressed is used for obtaining a network model to be compressed and a target compression ratio; the to-be-assigned network model determining module 420 is configured to determine a node compression mode according to the target compression ratio, and compress the to-be-compressed network model based on the node compression mode and the target compression ratio to obtain the to-be-assigned network model; a weight value determining module 430, configured to determine, according to the to-be-compressed network model and the to-be-assigned network model, a weight value corresponding to each to-be-assigned node in the to-be-assigned network model; and a target compression network model determining module 440, configured to determine a target compression network model based on the weight values and the to-be-assigned network model.

Optionally, the node compression mode includes an intra-layer node deletion mode; the to-be-assigned network model determining module 420 is further configured to determine, for each to-be-compressed layer of the to-be-compressed network model, the number of layer nodes of the to-be-compressed layer; determining the number of reserved nodes according to the number of layer nodes and the target compression ratio; and constructing a network model to be assigned according to the number of the reserved nodes corresponding to each layer to be compressed.

Optionally, the node compression mode includes an intra-layer node and a layer deletion mode; the to-be-assigned network model determining module 420 is further configured to determine at least one to-be-deleted layer according to the to-be-compressed network model and the target compression ratio, and delete each to-be-deleted layer from the to-be-compressed network model to obtain a to-be-processed network model; determining the proportion of nodes to be reserved according to the network model to be compressed, the network model to be processed and the target compression ratio; determining the number of layer nodes of each layer to be processed of the network model to be processed; determining the number of reserved nodes according to the number of the layer nodes and the proportion of the nodes to be reserved; and constructing a network model to be assigned according to the number of the reserved nodes corresponding to each layer to be processed.

Optionally, the weight value determining module 430 is further configured to determine, according to the to-be-compressed network model and the to-be-assigned network model, each to-be-processed node in the to-be-compressed network model corresponding to each to-be-assigned node in the to-be-assigned network model; and determining the weight value of each node to be processed as the weight value of each node to be assigned.

Optionally, the weight value determining module 430 is further configured to determine, for each to-be-assigned layer in the to-be-assigned network model, a to-be-compressed layer in the to-be-compressed network model corresponding to the to-be-assigned layer; and determining the nodes to be processed in the layer to be compressed, which correspond to the nodes to be assigned in the layer to be assigned, according to the number of the nodes to be assigned in the layer to be assigned and the weight values of the nodes to be processed in the layer to be compressed.

Optionally, the to-be-assigned network model determining module 420 is further configured to determine that the node compression mode is an intra-layer node deletion mode if the target compression ratio is less than or equal to a preset compression ratio; and if the target compression ratio is larger than the preset compression ratio, determining that the node compression mode is an intra-layer node and layer deletion mode.

Optionally, the network model to be compressed includes a convolutional layer and a batch normalization layer corresponding to the convolutional layer; the weight value determining module 430 is further configured to determine, according to the weight value corresponding to each node to be processed in the batch normalization layer, the weight value of each node to be assigned in the to-be-assigned layer in the to-be-assigned network model corresponding to the batch normalization layer and the weight value of each node to be assigned in the to-be-assigned layer in the to-be-assigned network model corresponding to the convolution layer.

Optionally, the weight value determining module 430 is further configured to determine that the number of nodes to be assigned in the to-be-assigned layer in the to-be-assigned network model corresponding to the batch normalization layer is a first number of nodes; sorting the weighted values corresponding to the nodes to be processed in the batch normalization layer from large to small, and determining the weighted value of the first node quantity at the front of the sorting as a first node weighted value; taking the weighted value of each first node as the weighted value of each node to be assigned in the layer to be assigned in the network model to be assigned corresponding to the batch normalization layer; and determining each node to be processed in the convolutional layer according to the node to be processed corresponding to each first node weight value, and taking the weight value of each node to be processed as the weight value of each node to be assigned in the layer to be assigned in the network model to be assigned corresponding to the convolutional layer.

Optionally, the apparatus further comprises: and the fine tuning module is used for training the target compression network model on a training set after the target compression network model is determined, reducing the training error of the network through back propagation, updating the weight value of each node to be fine tuned in the target compression network model, and determining the target adjusted compression network model.

According to the technical scheme of the embodiments of the invention, a network model to be compressed and a target compression ratio are obtained, and a node compression mode is determined according to the target compression ratio so that the compression result matches the target compression ratio. The network model to be compressed is compressed based on the node compression mode and the target compression ratio to obtain a network model to be assigned; the weight value corresponding to each node to be assigned in the network model to be assigned is determined according to the network model to be compressed and the network model to be assigned; and a target compression network model is determined based on the weight values and the network model to be assigned. This solves the problems that a deep learning model occupies a large storage space and consumes excessive resources in use, realizes compression of the deep learning model, and achieves the technical effect of saving the resources occupied by the model.

The model compression device provided by the embodiment of the invention can execute the model compression method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.

It should be noted that, the units and modules included in the apparatus are merely divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the embodiment of the invention.

EXAMPLE five

Fig. 5 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention. FIG. 5 illustrates a block diagram of an exemplary electronic device 50 suitable for use in implementing embodiments of the present invention. The electronic device 50 shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiment of the present invention.

As shown in fig. 5, electronic device 50 is embodied in the form of a general purpose computing device. The components of the electronic device 50 may include, but are not limited to: one or more processors or processing units 501, a system memory 502, and a bus 503 that couples the various system components (including the system memory 502 and the processing unit 501).

Bus 503 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.

Electronic device 50 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 50 and includes both volatile and nonvolatile media, removable and non-removable media.

The system memory 502 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)504 and/or cache memory 505. The electronic device 50 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 506 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to the bus 503 by one or more data media interfaces. System memory 502 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.

A program/utility 508 having a set (at least one) of program modules 507 may be stored in, for example, the system memory 502. Such program modules 507 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 507 generally perform the functions and/or methods of the embodiments of the present invention described herein.

The electronic device 50 may also communicate with one or more external devices 509 (e.g., a keyboard, a pointing device, a display 510, etc.), with one or more devices that enable a user to interact with the electronic device 50, and/or with any device (e.g., a network card, a modem, etc.) that enables the electronic device 50 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 511. Also, the electronic device 50 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 512. As shown, the network adapter 512 communicates with the other modules of the electronic device 50 over the bus 503. It should be appreciated that although not shown in Fig. 5, other hardware and/or software modules may be used in conjunction with the electronic device 50, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.

The processing unit 501 executes various functional applications and performs data processing by running programs stored in the system memory 502, for example, implementing the model compression method provided by the embodiments of the present invention.

EXAMPLE six

An embodiment of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a model compression method, the method including:

acquiring a network model to be compressed and a target compression ratio;

determining a node compression mode according to the target compression ratio, and compressing the network model to be compressed based on the node compression mode and the target compression ratio to obtain a network model to be assigned;

determining a weight value corresponding to each node to be assigned in the network model to be assigned according to the network model to be compressed and the network model to be assigned;

and determining a target compression network model based on the weight values and the network model to be assigned.
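As a minimal sketch of these steps, the following Python example applies the intra-layer node deletion mode to a stack of fully connected layers: the number of reserved nodes per layer is derived from the layer size and the target compression ratio, and the retained nodes keep the weight values of their corresponding nodes in the original model. Selecting nodes by L2 norm is an illustrative assumption; the embodiments do not fix a selection criterion.

```python
import numpy as np

def compress_model(weights, target_ratio):
    """Sketch of intra-layer node deletion with weight re-assignment.

    `weights`: one 2-D array per fully connected layer, shaped
    (nodes_in_layer, inputs_to_layer). `target_ratio`: fraction of
    nodes to retain per layer (one reading of the claimed ratio).
    """
    compressed = []
    kept = None  # indices of the nodes retained in the previous layer
    for w in weights:
        if kept is not None:
            # Drop the input columns fed by deleted upstream nodes.
            w = w[:, kept]
        # Number of reserved nodes from the layer size and target ratio.
        n_keep = max(1, int(round(w.shape[0] * target_ratio)))
        # Illustrative criterion: keep the nodes with the largest norms.
        kept = np.sort(np.argsort(np.linalg.norm(w, axis=1))[-n_keep:])
        # Weight values of the retained nodes are copied unchanged.
        compressed.append(w[kept])
    return compressed

# Example: compress an 8-node layer and a 3-node layer at ratio 0.5.
rng = np.random.default_rng(0)
original = [rng.standard_normal((8, 4)), rng.standard_normal((3, 8))]
print([w.shape for w in compress_model(original, 0.5)])  # [(4, 4), (2, 4)]
```

The returned arrays correspond to the network model to be assigned with its weight values already determined, mirroring the final step above.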

Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.

A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.

Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.

Computer program code for carrying out operations of embodiments of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).

It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements and substitutions may be made without departing from the scope of the invention. Therefore, although the present invention has been described in some detail by the above embodiments, it is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the invention, the scope of which is determined by the scope of the appended claims.
