Conditional computation method based on unit importance


1. A conditional computation method based on unit importance, characterized by comprising the following steps:

S1: pre-training a backbone residual network M, wherein the backbone residual network M comprises n residual units;

S2: constructing a gating network G for the pre-trained backbone residual network M;

S3: calculating the importance of each residual unit in the backbone residual network M to each input image;

S4: forming input-label pairs from each input image and the importance of each residual unit for that image, constructing a data set, fixing the backbone residual network M, and training the gating network G on the data set;

S5: after the gating network G is trained, fixing the gating network G and fine-tuning the backbone residual network M to adapt it to dynamic clipping;

S6: repeating steps S3-S5 until the clipping rate and the accuracy of the model meet preset conditions.

2. The conditional computation method based on unit importance according to claim 1, wherein

the importance of each residual unit in the backbone residual network M to each input image in step S3 is calculated by the following formula:

imp(x,i)=loss(M-Block[i],x)-loss(M,x)

where x is the input image, M-Block[i] is the sub-network formed by the remaining n-1 residual units when the i-th residual unit of M is clipped, loss is the loss function of the given current task, and imp(x,i) is the importance of the i-th residual unit of M to the input x.

3. The conditional computation method based on unit importance according to claim 2, wherein

in step S4, the importance is used as the reward label, the output G(x) of the gating network G is used as the predicted value of each gate, the predicted value of each gate is converted into an opening probability by a Sigmoid function, and the gating network G is then trained with a reinforcement-learning-style algorithm.

4. The conditional computation method based on unit importance according to claim 3, wherein

the objective function in step S4 is calculated by the following formula,

wherein G(x) is the predicted value of each gate, and the training uses gradient ascent to maximize the objective function.

5. The conditional computation method based on unit importance according to claim 1, wherein

in step S5, when the backbone residual network M is fine-tuned, each input image passes through only a specific subset of the n residual units, and for a given input image the fine-tuning of the backbone residual network M is performed only on the residual units in that specific subset.

6. The conditional computation method based on unit importance according to claim 1, wherein

the gating network constructed in step S2 is a ResNet8 convolutional neural network, a neural network with an LSTM recurrent neural network as its main body, or n independent MLPs, each corresponding to one of the residual units.

Background

At present, deep learning model compression mainly comprises clipping, quantization, knowledge distillation, and the like. By granularity, clipping can be divided into neuron-level clipping, filter-level clipping, and even residual-unit-level clipping; considering the actual inference acceleration achievable on general-purpose processors in practical application scenarios, filter-level or residual-unit-level clipping is usually adopted. A common clipping scheme designs an importance evaluation index for filters or residual units, measures the importance of each clipping candidate, and clips the least important ones until the computational complexity of the model meets the requirement.

Conditional computation is a novel means of deep learning model compression. It exploits the facts that different filters or residual units extract different features and that different input images have different characteristics, in order to decide a suitable computation path individually for each input image. Existing conditional computation methods mainly operate at residual-unit granularity: a small gating network is usually trained by reinforcement learning to predict the opening and closing of each residual unit from the input or an intermediate feature map.

However, most existing conditional computation methods adopt reinforcement learning, construct the reinforcement learning reward from the classification cross-entropy loss and the clipping rate, and return that reward to all gating outputs for training. This makes the search space of the gating network very large, so good dynamic clipping is difficult to achieve with a data set of limited capacity.

Disclosure of Invention

The invention provides a conditional computation method based on unit importance, which adopts the following technical scheme:

a condition calculation method based on unit importance degree comprises the following steps:

S1: pre-training a backbone residual network M, wherein the backbone residual network M comprises n residual units;

S2: constructing a gating network G for the pre-trained backbone residual network M;

S3: calculating the importance of each residual unit in the backbone residual network M to each input image;

S4: forming input-label pairs from each input image and the importance of each residual unit for that image, constructing a data set, fixing the backbone residual network M, and training the gating network G on the data set;

S5: after the gating network G is trained, fixing the gating network G and fine-tuning the backbone residual network M to adapt it to dynamic clipping;

S6: repeating steps S3-S5 until the clipping rate and the accuracy of the model meet preset conditions.

Further, the importance of each residual unit in the backbone residual network M to each input image in step S3 is calculated by the following formula:

imp(x,i)=loss(M-Block[i],x)-loss(M,x)

wherein x is the input image, M-Block[i] is the sub-network formed by the remaining n-1 residual units when the i-th residual unit of M is clipped, loss is the loss function of the given current task, and imp(x,i) is the importance of the i-th residual unit of M to the input x.
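As an illustration, this per-image ablation can be sketched in a few lines of PyTorch-style Python; the helper name unit_importance and the enabled flag on each residual unit are assumptions for this sketch, not part of the patent:

```python
import torch
import torch.nn.functional as F

def unit_importance(model, blocks, x, y):
    """imp(x, i) = loss(M - Block[i], x) - loss(M, x): ablate one residual
    unit at a time and measure the loss increase. `blocks` are the n residual
    units of the backbone M; each is assumed to expose a boolean `enabled`
    flag that its forward pass checks (False = shortcut only)."""
    model.eval()
    with torch.no_grad():
        base_loss = F.cross_entropy(model(x), y)        # loss(M, x)
        imps = []
        for block in blocks:
            block.enabled = False                        # clip the i-th unit
            clipped = F.cross_entropy(model(x), y)       # loss(M - Block[i], x)
            block.enabled = True                         # restore the unit
            imps.append((clipped - base_loss).item())
    return imps                                          # one imp(x, i) per unit
```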

Further, in step S4, the importance is used as the reward label, the output G(x) of the gating network G is used as the predicted value of each gate, the predicted value of each gate is converted into an opening probability by a Sigmoid function, and the gating network G is then trained with a reinforcement-learning-style algorithm.

Further, the objective function in step S4 is calculated by the following formula,

wherein G(x) is the predicted value of each gate, and the training uses gradient ascent to maximize the objective function.
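As a hedged illustration, one objective consistent with this description — the importance imp(x,i) as the per-gate reward and the Sigmoid σ(G(x)_i) as the opening probability — is the reward-weighted log-likelihood, maximized by gradient ascent:

J(x) = Σ_{i=1..n} imp(x,i) · log(σ(G(x)_i))

This is an assumed REINFORCE-style reconstruction, not the patent's exact formula.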

Further, when the backbone residual network M is fine-tuned in step S5, each input image passes through only a specific subset of the n residual units, and for a given input image, the fine-tuning of the backbone residual network M is performed only on the residual units in that specific subset.

Further, the gating network constructed in step S2 is a ResNet8 convolutional neural network, a neural network with an LSTM recurrent neural network as its main body, or n independent MLPs, each corresponding to one residual unit.

The method has the advantage that the backbone residual network M is first pre-trained, and a gating network G is then constructed for the pre-trained backbone residual network M to predict the importance and the opening and closing of all residual units in the backbone residual network M. To train the gating network G, the importance of each residual unit in the backbone residual network M to each input image in the training set is calculated, and a data set is constructed for training the gating network G, so that the gating network G can predict the importance of different residual units from the input image and intermediate feature maps. Residual units with low importance for, or even useless or harmful to, the current input can therefore be dynamically clipped for different inputs at the inference stage, realizing both model clipping and accuracy improvement.

Drawings

FIG. 1 is a schematic diagram of the conditional computation method based on unit importance according to the present invention.

Detailed Description

The invention is described in detail below with reference to the figures and the embodiments.

Fig. 1 shows the conditional computation method based on unit importance of the present invention, which mainly comprises the following steps.

Step S1: pre-train a backbone residual network M, wherein the backbone residual network M comprises n residual units.

Step S2: construct a gating network G for the pre-trained backbone residual network M. The gating network G controls the opening and closing of the n residual units in the backbone residual network M. If a residual unit is open, it is computed normally during forward inference; if a residual unit is closed, only its shortcut connection is used during forward inference, and the unit is clipped without performing any computation.

Step S3: select a number of input images and calculate the importance of each residual unit in the backbone residual network M to each input image.

Step S4: form input-label pairs from each input image and the importance of each residual unit for that image, construct a data set, fix the backbone residual network M, and train the gating network G on the data set.

Step S5: after the gating network G is trained, fix the gating network G and fine-tune the backbone residual network M to adapt it to dynamic clipping.

Step S6: repeat steps S3-S5 until the clipping rate and the accuracy of the model meet preset conditions.

Through the above steps, the backbone residual network M is first pre-trained, and the gating network G is then constructed for the pre-trained backbone residual network M to predict the importance and the opening and closing of all residual units in the backbone residual network M. To train the gating network G, the importance of each residual unit in the backbone residual network M to each input image in the training set is calculated, and a data set is constructed for training the gating network G, so that the gating network G can predict the importance of different residual units from the input image and intermediate feature maps. Residual units with low importance for, or even useless or harmful to, the current input can therefore be dynamically clipped for different inputs at the inference stage, realizing both model clipping and accuracy improvement.

As a preferred embodiment, in step S3, the importance of each residual unit in the backbone residual network M to each input image is calculated by the following formula:

imp(x,i)=loss(M-Block[i],x)-loss(M,x)

where x is the input image, M-Block[i] is the sub-network formed by the remaining n-1 residual units when the i-th residual unit of M is clipped, loss is the loss function of the given current task, and imp(x,i) is the importance of the i-th residual unit of M to the input x.

As a preferred embodiment, in step S4, the importance is used as the reward label, the output G(x) of the gating network G is used as the predicted value of each gate, the predicted value of each gate is converted into an opening probability by a Sigmoid function, and the gating network G is then trained with a reinforcement-learning-style algorithm.
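A minimal PyTorch-style sketch of one such training step, assuming the reward-weighted log-likelihood objective reconstructed above (the names gating_train_step and optimizer are illustrative):

```python
import torch

def gating_train_step(G, optimizer, x, importance):
    """One step of step S4: `x` is a batch of input images, `importance`
    a (batch, n) tensor of imp(x, i) labels used as per-gate rewards."""
    logits = G(x)                          # predicted value of each gate
    open_prob = torch.sigmoid(logits)      # Sigmoid -> opening probabilities
    # reward-weighted log-likelihood of opening each gate (assumed objective)
    objective = (importance * torch.log(open_prob + 1e-8)).sum(dim=1).mean()
    optimizer.zero_grad()
    (-objective).backward()                # gradient ascent via descent on -J
    optimizer.step()
    return objective.item()
```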

As a preferred embodiment, the objective function in step S4 is calculated by the following formula,

wherein G(x) is the predicted value of each gate, and the training uses gradient ascent to maximize the objective function.

As a preferred embodiment, when the backbone residual network M is fine-tuned in step S5, each input image passes through only a specific subset of the n residual units, and for a given input image, the fine-tuning of the backbone residual network M is performed only on the residual units in that specific subset.

Specifically, clipping at residual-unit granularity disturbs the data distribution statistics collected by the BN layers during pre-training of the backbone residual network M, including running_mean, running_var, and the like. Before the gating network G is formally applied for dynamic clipping, we therefore need to fix the gating network G and perform dynamic clipping under its guidance, so that each input image x passes through only a specific subset of the n residual units. For example, for an input x0, suppose that under the guidance of the gating network G the 3rd and 6th residual units are clipped; the subset of residual units that x0 passes through is then U = {Block[1], Block[2], Block[4], Block[5], Block[7], ..., Block[n]}. Throughout the fine-tuning of step S5, the image x0 performs inference using only the residual units in U, and for the image x0, the fine-tuning of the backbone residual network is performed only on the residual units in U.
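A minimal sketch of such subset-restricted inference and fine-tuning, assuming each block computes only its residual branch (the shortcut is the identity added in the loop); the class name and the per-image gate mask are illustrative:

```python
import torch
import torch.nn as nn

class GatedResidualStage(nn.Module):
    """Runs the backbone's residual units under a gate mask (step S5 sketch).
    A closed unit is bypassed through its shortcut only, so an image is
    inferred, and the backbone fine-tuned, only on the units in its subset U."""
    def __init__(self, blocks):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)

    def forward(self, x, gates):
        # gates: one boolean per residual unit, e.g. the mask the frozen
        # gating network G predicts for this image (True = unit is open)
        for block, is_open in zip(self.blocks, gates):
            if is_open:
                x = x + block(x)   # residual branch plus identity shortcut
            # else: shortcut only -- the unit is clipped and gets no gradient
        return x
```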

As a preferred embodiment, the gating network constructed in step S2 is a convolutional neural network, specifically ResNet8. The gating network G is independent of the backbone residual network M; it directly receives the input image as its network input and outputs all gating predictions from a fully connected layer.

A gating network of the convolutional neural network type allows all gating predictions to be obtained at once before the backbone residual network runs, which makes it convenient to decide on unit clipping in advance; moreover, the cost of the gating network does not grow with the capacity of the backbone network.
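A minimal sketch of a convolutional gating network of this type (a small stand-in rather than the exact ResNet8 of the embodiment; all layer sizes are assumptions):

```python
import torch.nn as nn

class ConvGate(nn.Module):
    """Small CNN gating network: takes the input image directly and emits
    the predictions of all n gates at once from one fully connected layer."""
    def __init__(self, n_units, width=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, width, 3, stride=2, padding=1),
            nn.BatchNorm2d(width), nn.ReLU(),
            nn.Conv2d(width, width, 3, stride=2, padding=1),
            nn.BatchNorm2d(width), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(width, n_units)  # all gate predictions at once

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))  # G(x), one value per unit
```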

When a convolutional neural network type gating network is employed, since the predictions of all gates can be obtained in advance, a greedy method can be used directly: find the one or several residual units with the lowest importance and clip them. A threshold method may also be employed: set a threshold α, clip all units with -G(x) > α, and keep those with -G(x) < α; alternatively, compute Softmax(-G(x)) first, set the threshold α on it, clip the units with Softmax(-G(x)) > α, and keep those with Softmax(-G(x)) < α.
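Both decision rules can be stated compactly; the function below is a sketch with illustrative names and hyper-parameters (alpha, k):

```python
import torch

def select_clipped_units(gate_logits, method="threshold", alpha=0.0, k=1):
    """Decide which residual units to clip from the gate predictions G(x).
    greedy: clip the k units with the lowest predicted importance;
    threshold: clip every unit with -G(x) > alpha (keep -G(x) < alpha);
    softmax-threshold: apply the threshold to Softmax(-G(x)) instead."""
    neg = -gate_logits                       # low gate value = low importance
    if method == "greedy":
        return torch.topk(neg, k).indices.tolist()
    if method == "softmax-threshold":
        neg = torch.softmax(neg, dim=0)
    return (neg > alpha).nonzero().flatten().tolist()
```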

As another alternative embodiment, the gating network constructed in step S2 is a neural network with a recurrent neural network as its main body; in a preferred embodiment, the recurrent neural network is an LSTM. Using a recurrent neural network such as an LSTM as the gating network, the input feature maps of each residual unit in the backbone residual network are dimension-reduced and formed into a sequence that is fed into the gating network, and the gating network predicts the gate corresponding to each residual unit in the sequence one by one.
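A simplified sketch of this recurrent variant, which here takes the already dimension-reduced per-unit descriptors as one sequence (in the method itself the prediction is interleaved with the backbone's forward pass; names and sizes are assumptions):

```python
import torch
import torch.nn as nn

class LSTMGate(nn.Module):
    """Recurrent gating network sketch: consumes the dimension-reduced input
    descriptor of each residual unit in order and predicts the gate of each
    unit one by one from the running hidden state."""
    def __init__(self, feat_dim=64, hidden=32):
        super().__init__()
        self.cell = nn.LSTMCell(feat_dim, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, z_seq):
        # z_seq: (batch, n, feat_dim), one descriptor per residual unit
        h = z_seq.new_zeros(z_seq.size(0), self.cell.hidden_size)
        c = torch.zeros_like(h)
        logits = []
        for t in range(z_seq.size(1)):
            h, c = self.cell(z_seq[:, t], (h, c))
            logits.append(self.head(h))          # gate logit for unit t
        return torch.cat(logits, dim=1)          # (batch, n) gate values G(x)
```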

Using a recurrent neural network type gating network allows the sequence information of all shallower residual units to be used jointly when predicting the gate of the next residual unit.

As another alternative embodiment, the gating network constructed in step S2 is n independent MLPs (multilayer perceptrons), each MLP corresponding to one residual unit. With a gating network of the MLP type, each unit is assigned an independent gating unit, so that training of the gating network is easier and more stable.

When a recurrent neural network type or MLP type gating network is adopted, the predictions of all gates cannot be obtained in advance, so the dynamic clipping decisions must be made during the forward inference of the backbone residual network M itself, and only the threshold method can be used. In scenarios where accuracy is critical and the computation overhead constraint is loose, it is also possible to first run one forward inference of the backbone residual network M to collect the predictions of all gates, then guide dynamic clipping with the greedy method and run forward inference of the backbone residual network M once more.

The foregoing illustrates and describes the principles, general features, and advantages of the present invention. It should be understood by those skilled in the art that the above embodiments do not limit the present invention in any way, and all technical solutions obtained by using equivalent alternatives or equivalent variations fall within the scope of the present invention.
