Multilayer image classification method based on delay mechanism
1. A multi-layer image classification method based on a delay mechanism is characterized by comprising the following steps:
S1, constructing an image classification model;
S2, training the image classification model by adopting an image set to obtain a trained image classification model;
S3, classifying images by adopting the trained image classification model to obtain the image categories;
the image classification model comprises a feature extraction unit, a pulse delay coding unit and a multilayer classifier which are connected in sequence; the feature extraction unit is used for extracting features of an image to obtain feature image data; the pulse delay coding unit is used for coding the feature image data to obtain an excitation pulse time sequence; the multilayer classifier is used for processing the excitation pulse time sequence to obtain the category of the image.
2. The multilayer image classification method based on the delay mechanism as claimed in claim 1, wherein the pulse delay coding unit codes the feature image data according to the formula:
t_i = t_max - ln(a * x_i + 1)
wherein t_i is the excitation pulse time point corresponding to the ith pixel, t_max is the size of the coding time window, a is the coding parameter, and x_i is the pixel value of the ith pixel in the feature image data.
3. The multilayer image classification method based on the delay mechanism as claimed in claim 1, wherein the multilayer classifier comprises: an input layer, a hidden layer and an output layer;
the training method of the multilayer classifier comprises the following steps:
A1, inputting the excitation pulse time sequence into the multilayer classifier, determining the target firing time of each neuron of each layer, and recording the firing time of each non-firing neuron as -1;
A2, selecting the firing neurons on the output layer or the hidden layer whose firing time is not -1;
A3, calculating the learning parameters of all firing neurons of the output layer and of the hidden layer according to the target firing time of each neuron of each layer;
A4, adjusting the delays of all firing neurons between the output layer and the hidden layer according to the learning parameters of all firing neurons of the output layer and of the hidden layer, and adjusting the delays of all firing neurons between the input layer and the hidden layer;
A5, recalculating the learning parameters of all firing neurons of the output layer and of the hidden layer according to the delay-adjusted multilayer classifier;
A6, adjusting the synaptic weights of all firing neurons between the output layer and the hidden layer according to the learning parameters calculated in step A5, and adjusting the synaptic weights of all firing neurons between the input layer and the hidden layer;
A7, obtaining the actual firing times of the current output layer from the weight-adjusted multilayer classifier, and calculating the mean square error between the actual firing times and the target firing times;
A8, judging whether the mean square error is less than a set error threshold; if so, obtaining the trained multilayer classifier; if not, jumping to step A1.
4. The multilayer image classification method based on the delay mechanism as claimed in claim 3, wherein the learning parameter δ_i of all firing neurons of the hidden layer in step A3 and the learning parameter δ_j of all firing neurons of the output layer are calculated by the following formulas:
wherein Γ_i is the set of all neurons connected to firing neuron i, w_{ij}^l is the lth synaptic weight between the ith firing neuron and the jth firing neuron, neurons on the input layer are labeled with h, firing neurons on the hidden layer are labeled with i, firing neurons on the output layer are labeled with j, ∂ denotes the partial derivative operation, τ is the membrane time constant of the neuron, t_j^target is the target firing time of the jth firing neuron, t_i^target is the target firing time of the ith firing neuron, t_i is the time at which the membrane voltage of the ith firing neuron first exceeds the threshold, d_j^l is the delay of the lth synaptic weight of the firing neurons between the hidden layer and the output layer, d^l is the delay of the lth synaptic weight of the firing neurons between the input layer and the hidden layer, and w_{hi}^l is the lth synaptic weight between the ith firing neuron and the hth neuron.
5. The multilayer image classification method based on the delay mechanism as claimed in claim 4, wherein the delays of all firing neurons between the output layer and the hidden layer are adjusted in step A4, wherein the increment of the adjustment is calculated by the following formula:
the delays of all firing neurons between the input layer and the hidden layer are adjusted, wherein the increment of the adjustment Δd_h is calculated by the following formula:
wherein δ_j is the learning parameter of the jth firing neuron on the output layer, δ_i is the learning parameter of the ith firing neuron on the hidden layer, w_{ij}^l is the lth synaptic weight between the ith firing neuron and the jth firing neuron, Γ_j is the set of all neurons connected to firing neuron j, Γ_i is the set of all neurons connected to firing neuron i, firing neurons on the hidden layer are labeled with i, firing neurons on the output layer are labeled with j, w_{hi}^l is the lth synaptic weight between the ith firing neuron and the hth neuron, t_h is the time at which the membrane voltage of the hth neuron first exceeds the threshold, t is the time, d_j^l is the delay of the lth synaptic weight of the firing neurons between the hidden layer and the output layer, d^l is the delay of the lth synaptic weight of the firing neurons between the input layer and the hidden layer, and t_i is the time at which the membrane voltage of the ith firing neuron first exceeds the threshold.
6. The multi-layer image classification method based on the delay mechanism as claimed in claim 5, wherein the synaptic weights of all firing neurons between the output layer and the hidden layer are adjusted in step A6, wherein the increment of the adjustment is calculated by the formula:
adjusting synaptic weights of all firing neurons between the input layer and the hidden layer, wherein the increment of adjustment is calculated by:
wherein η is the learning rate, t_j^target is the target firing time of the jth firing neuron, and t_i^target is the target firing time of the ith firing neuron.
Background
The topology of the SpikeProp algorithm is that of a multilayer feedforward neural network. The neurons are arranged in layers. The first layer, which converts an incoming stimulation signal into pulse (spike) signals, corresponds to the input layer. The last layer is the output layer, which emits the pulses produced after the stimulation has been integrated. The intermediate layers between the encoding neuron layer and the output layer consist of learning neurons and correspond to the hidden layers of a conventional neural network. There may be n hidden layers, where n is a positive integer greater than or equal to 1, forming a deep spiking neural network. Neurons in adjacent layers are connected by synapses, and the number of synapses between two neurons is not unique. Signals transmitted between neurons are processed and integrated at the synapses using delay times and connection weights. This mechanism enables the input signal of a spiking neuron to exert a longer-lasting effect on the postsynaptic neuron.
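The synapse model described above — several terminals per neuron pair, each with its own weight and delay — can be illustrated with a short sketch. The kernel ε(t) = (t/τ)·e^(1−t/τ) is the standard SpikeProp spike-response function; all function names and parameter values below are illustrative assumptions, not taken from the patent.

```python
import math

def epsilon(t, tau=7.0):
    """Standard SpikeProp spike-response kernel; peaks at t == tau."""
    return (t / tau) * math.exp(1.0 - t / tau) if t > 0 else 0.0

def membrane_potential(t, pre_spike_times, weights, delays, tau=7.0):
    """Membrane potential of a post-synaptic neuron at time t.

    Each pre-synaptic neuron connects through several synapses, each
    with its own weight and delay, so a single input spike keeps
    influencing the post-synaptic neuron over an extended window.
    """
    u = 0.0
    for t_pre, w_list, d_list in zip(pre_spike_times, weights, delays):
        for w, d in zip(w_list, d_list):
            u += w * epsilon(t - t_pre - d, tau)
    return u

# One input spike at t = 0 reaching the neuron through three synapses
# with delays 1, 4 and 8 (arbitrary units):
u = membrane_potential(10.0, [0.0], [[0.5, 0.5, 0.5]], [[1.0, 4.0, 8.0]])
```

The neuron fires when this potential first crosses its threshold; because the delayed terms peak at different times, the delays shape when that crossing happens, which is exactly the degree of freedom the method below learns.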
In the learning process, the SpikeProp algorithm introduces a delay mechanism, but this delay mechanism is used only to distinguish the different synaptic signals between two neurons.
Disclosure of Invention
Aiming at the above defects in the prior art, the multilayer image classification method based on the delay mechanism provided by the invention solves the problem that the delay mechanism of the SpikeProp algorithm is used only to distinguish different synaptic signals between two neurons.
In order to achieve the above purpose, the invention adopts the following technical scheme: a multilayer image classification method based on a delay mechanism, comprising the following steps:
S1, constructing an image classification model;
S2, training the image classification model by adopting an image set to obtain a trained image classification model;
S3, classifying images by adopting the trained image classification model to obtain the image categories;
the image classification model comprises a feature extraction unit, a pulse delay coding unit and a multilayer classifier which are sequentially connected; the feature extraction unit is used for extracting features of the image to obtain feature image data; the pulse delay coding unit is used for coding the characteristic image data to obtain an excitation pulse time sequence; the multi-layer classifier is used for processing the excitation pulse time sequence to obtain the category of the image.
Further, the pulse delay encoding unit encodes the feature image data according to the formula:
t_i = t_max - ln(a * x_i + 1)
wherein t_i is the excitation pulse time point corresponding to the ith pixel, t_max is the size of the coding time window, a is the coding parameter, and x_i is the pixel value of the ith pixel in the feature image data.
Further, the multi-layered classifier includes: an input layer, a hidden layer and an output layer;
the training method of the multilayer classifier comprises the following steps:
A1, inputting the excitation pulse time sequence into the multilayer classifier, determining the target firing time of each neuron of each layer, and recording the firing time of each non-firing neuron as -1;
A2, selecting the firing neurons on the output layer or the hidden layer whose firing time is not -1;
A3, calculating the learning parameters of all firing neurons of the output layer and of the hidden layer according to the target firing time of each neuron of each layer;
A4, adjusting the delays of all firing neurons between the output layer and the hidden layer according to the learning parameters of all firing neurons of the output layer and of the hidden layer, and adjusting the delays of all firing neurons between the input layer and the hidden layer;
A5, recalculating the learning parameters of all firing neurons of the output layer and of the hidden layer according to the delay-adjusted multilayer classifier;
A6, adjusting the synaptic weights of all firing neurons between the output layer and the hidden layer according to the learning parameters calculated in step A5, and adjusting the synaptic weights of all firing neurons between the input layer and the hidden layer;
A7, obtaining the actual firing times of the current output layer from the weight-adjusted multilayer classifier, and calculating the mean square error between the actual firing times and the target firing times;
A8, judging whether the mean square error is less than a set error threshold; if so, obtaining the trained multilayer classifier; if not, jumping to step A1.
Further, the learning parameter δ_i of all firing neurons of the hidden layer in step A3 and the learning parameter δ_j of all firing neurons of the output layer are calculated by the following formulas:
wherein Γ_i is the set of all neurons connected to firing neuron i, w_{ij}^l is the lth synaptic weight between the ith firing neuron and the jth firing neuron, neurons on the input layer are labeled with h, firing neurons on the hidden layer are labeled with i, firing neurons on the output layer are labeled with j, ∂ denotes the partial derivative operation, τ is the membrane time constant of the neuron, t_j^target is the target firing time of the jth firing neuron, t_i^target is the target firing time of the ith firing neuron, t_i is the time at which the membrane voltage of the ith firing neuron first exceeds the threshold, d_j^l is the delay of the lth synaptic weight of the firing neurons between the hidden layer and the output layer, d^l is the delay of the lth synaptic weight of the firing neurons between the input layer and the hidden layer, and w_{hi}^l is the lth synaptic weight between the ith firing neuron and the hth neuron.
Further, the delays of all firing neurons between the output layer and the hidden layer are adjusted in step A4, wherein the increment of the adjustment is calculated by the following formula:
the delays of all firing neurons between the input layer and the hidden layer are adjusted, wherein the increment of the adjustment Δd_h is calculated by the following formula:
wherein δ_j is the learning parameter of the jth firing neuron on the output layer, δ_i is the learning parameter of the ith firing neuron on the hidden layer, w_{ij}^l is the lth synaptic weight between the ith firing neuron and the jth firing neuron, Γ_j is the set of all neurons connected to firing neuron j, Γ_i is the set of all neurons connected to firing neuron i, firing neurons on the hidden layer are labeled with i, firing neurons on the output layer are labeled with j, w_{hi}^l is the lth synaptic weight between the ith firing neuron and the hth neuron, t_h is the time at which the membrane voltage of the hth neuron first exceeds the threshold, t is the time, d_j^l is the delay of the lth synaptic weight of the firing neurons between the hidden layer and the output layer, d^l is the delay of the lth synaptic weight of the firing neurons between the input layer and the hidden layer, and t_i is the time at which the membrane voltage of the ith firing neuron first exceeds the threshold.
Further, in step a6, synaptic weights of all firing neurons between the output layer and the hidden layer are adjusted, wherein the increment of the adjustment is calculated as:
adjusting synaptic weights of all firing neurons between the input layer and the hidden layer, wherein the increment of adjustment is calculated by:
wherein η is the learning rate, t_j^target is the target firing time of the jth firing neuron, and t_i^target is the target firing time of the ith firing neuron.
In conclusion, the beneficial effects of the invention are as follows:
(1) The multilayer classifier optimizes the delay mechanism of the SpikeProp algorithm by using the delay time as an additional adjustable parameter besides the weights, so that learning no longer depends on the weights alone, which improves the robustness of the algorithm. Meanwhile, since the delay serves as an auxiliary adjustment variable for the weights and its change trend follows that of the weights, the model converges to the target firing times more quickly, improving image classification efficiency.
(2) The multilayer classifier provided by the invention optimizes the learning mechanism of the SpikeProp algorithm through a back-propagation algorithm based on the delay mechanism, which improves the classification efficiency of the model and gives the whole model a certain anti-noise capability, making it less susceptible to interference from external information.
Drawings
Fig. 1 is a flowchart of a multi-layer image classification method based on a delay mechanism.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding by those skilled in the art, but it should be understood that the invention is not limited to the scope of these embodiments. To those skilled in the art, various changes are apparent within the spirit and scope of the invention as defined in the appended claims, and all matter produced using the inventive concept is protected.
As shown in fig. 1, a multi-layer image classification method based on a delay mechanism includes the following steps:
S1, constructing an image classification model;
S2, training the image classification model by adopting an image set to obtain a trained image classification model;
S3, classifying images by adopting the trained image classification model to obtain the image categories;
the image classification model comprises a feature extraction unit, a pulse delay coding unit and a multilayer classifier which are sequentially connected; the feature extraction unit is used for extracting features of the image to obtain feature image data; the pulse delay coding unit is used for coding the characteristic image data to obtain an excitation pulse time sequence; the multi-layer classifier is used for processing the excitation pulse time sequence to obtain the category of the image.
The pulse delay coding unit codes the feature image data according to the formula:
t_i = t_max - ln(a * x_i + 1)
wherein t_i is the excitation pulse time point corresponding to the ith pixel, t_max is the size of the coding time window, a is the coding parameter, and x_i is the pixel value of the ith pixel in the feature image data.
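The coding step can be sketched as follows; the function name and the default parameter values are illustrative assumptions, not taken from the patent.

```python
import math

def delay_encode(pixels, t_max=4.0, a=1.0):
    """Pulse delay coding: t_i = t_max - ln(a * x_i + 1).

    A pixel value of 0 spikes exactly at t_max; larger feature
    values spike earlier, so salient pixels fire first.
    """
    return [t_max - math.log(a * x + 1.0) for x in pixels]

# x = e - 1 gives ln(a*x + 1) = 1, i.e. a spike one unit before t_max.
times = delay_encode([0.0, math.e - 1.0])
```

With this scheme the whole feature map is turned into a single volley of spike times inside the coding time window, which the multilayer classifier then consumes as the excitation pulse time sequence.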
The multi-layer classifier includes: an input layer, a hidden layer and an output layer;
the training method of the multilayer classifier comprises the following steps:
A1, inputting the excitation pulse time sequence into the multilayer classifier, determining the target firing time of each neuron of each layer, and recording the firing time of each non-firing neuron as -1;
A2, selecting the firing neurons on the output layer or the hidden layer whose firing time is not -1;
A3, calculating the learning parameters of all firing neurons of the output layer and of the hidden layer according to the target firing time of each neuron of each layer;
In step A3, the learning parameter δ_i of all firing neurons of the hidden layer and the learning parameter δ_j of all firing neurons of the output layer are calculated by the following formulas:
wherein Γ_i is the set of all neurons connected to firing neuron i, w_{ij}^l is the lth synaptic weight between the ith firing neuron and the jth firing neuron, neurons on the input layer are labeled with h, firing neurons on the hidden layer are labeled with i, firing neurons on the output layer are labeled with j, ∂ denotes the partial derivative operation, τ is the membrane time constant of the neuron, t_j^target is the target firing time of the jth firing neuron, t_i^target is the target firing time of the ith firing neuron, t_i is the time at which the membrane voltage of the ith firing neuron first exceeds the threshold, d_j^l is the delay of the lth synaptic weight of the firing neurons between the hidden layer and the output layer, d^l is the delay of the lth synaptic weight of the firing neurons between the input layer and the hidden layer, and w_{hi}^l is the lth synaptic weight between the ith firing neuron and the hth neuron.
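The learning-parameter formulas themselves appear as images in the original publication and did not survive extraction. As a hedged reconstruction consistent with the symbol definitions above, the standard SpikeProp error terms (with ε(t) = (t/τ)e^{1−t/τ} the spike-response kernel) take the form:

```latex
% Reconstruction based on the standard SpikeProp derivation; the patent's
% exact formulas are not reproduced in the source text.
\delta_j = \frac{t_j^{target} - t_j}
  {\sum_{i \in \Gamma_j} \sum_{l} w_{ij}^{l}\,
   \frac{\partial \varepsilon\left(t_j - t_i - d_j^{l}\right)}{\partial t_j}}
\qquad
\delta_i = \frac{\sum_{j} \delta_j \sum_{l} w_{ij}^{l}\,
   \frac{\partial \varepsilon\left(t_j - t_i - d_j^{l}\right)}{\partial t_j}}
  {\sum_{h \in \Gamma_i} \sum_{l} w_{hi}^{l}\,
   \frac{\partial \varepsilon\left(t_i - t_h - d^{l}\right)}{\partial t_i}}
```

Under this reading, δ_j measures how far the output neuron's actual firing time is from its target, normalized by the sensitivity of that firing time to the membrane potential, and δ_i back-propagates the output errors through the hidden layer.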
A4, adjusting the delay of all firing neurons between the output layer and the hidden layer according to the learning parameters of all firing neurons of the output layer and the learning parameters of all firing neurons of the hidden layer, and adjusting the delay of all firing neurons between the input layer and the hidden layer;
In step A4, the delays of all firing neurons between the output layer and the hidden layer are adjusted, wherein the increment of the adjustment is calculated by the following formula:
the delays of all firing neurons between the input layer and the hidden layer are adjusted, wherein the increment of the adjustment Δd_h is calculated by the following formula:
wherein δ_j is the learning parameter of the jth firing neuron on the output layer, δ_i is the learning parameter of the ith firing neuron on the hidden layer, w_{ij}^l is the lth synaptic weight between the ith firing neuron and the jth firing neuron, Γ_j is the set of all neurons connected to firing neuron j, Γ_i is the set of all neurons connected to firing neuron i, firing neurons on the hidden layer are labeled with i, firing neurons on the output layer are labeled with j, w_{hi}^l is the lth synaptic weight between the ith firing neuron and the hth neuron, t_h is the time at which the membrane voltage of the hth neuron first exceeds the threshold, t is the time, d_j^l is the delay of the lth synaptic weight of the firing neurons between the hidden layer and the output layer, d^l is the delay of the lth synaptic weight of the firing neurons between the input layer and the hidden layer, and t_i is the time at which the membrane voltage of the ith firing neuron first exceeds the threshold.
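The delay-increment formulas are likewise missing from the extracted text. A plausible gradient-descent delay update in the spirit of delay-learning extensions of SpikeProp — an assumption, not the patent's exact expressions, and with reconstructed symbols Δd_j^l for the hidden-to-output increment — is:

```latex
% Hedged sketch of a gradient-descent delay update; the original images
% with the exact increments are not present in the source text.
\Delta d_j^{l} = -\eta\, \delta_j\, w_{ij}^{l}\,
  \frac{\partial \varepsilon\left(t_j - t_i - d_j^{l}\right)}{\partial d_j^{l}}
\qquad
\Delta d_h = -\eta\, \delta_i\, w_{hi}^{l}\,
  \frac{\partial \varepsilon\left(t_i - t_h - d^{l}\right)}{\partial d^{l}}
```

where ε(t) = (t/τ)e^{1−t/τ} is the spike-response kernel, so each delay is nudged in the direction that moves the post-synaptic firing time toward its target.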
A5, calculating the learning parameters of all the firing neurons of the output layer and the learning parameters of all the firing neurons of the hidden layer again according to the multi-layer classifier after delay adjustment;
a6, adjusting the synaptic weights of all the firing neurons between the output layer and the hidden layer according to the learning parameters calculated in the step A5, and adjusting the synaptic weights of all the firing neurons between the input layer and the hidden layer;
In step A6, the synaptic weights of all firing neurons between the output layer and the hidden layer are adjusted, wherein the increment of the adjustment is calculated by the following formula:
the synaptic weights of all firing neurons between the input layer and the hidden layer are adjusted, wherein the increment of the adjustment is calculated by the following formula:
wherein η is the learning rate, t_j^target is the target firing time of the jth firing neuron, and t_i^target is the target firing time of the ith firing neuron.
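The weight-increment formulas are also rendered as images in the original. In standard SpikeProp the weight increment is the learning rate times the error term times the post-synaptic potential evaluated at the firing time; a hedged reconstruction consistent with the definitions above is:

```latex
% Standard SpikeProp weight update, offered as a reconstruction; the
% patent's exact expressions are not reproduced in the source text.
\Delta w_{ij}^{l} = -\eta\, \delta_j\, \varepsilon\left(t_j - t_i - d_j^{l}\right)
\qquad
\Delta w_{hi}^{l} = -\eta\, \delta_i\, \varepsilon\left(t_i - t_h - d^{l}\right)
```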
A7, acquiring the actual ignition time of the current output layer according to the multi-layer classifier after weight adjustment, and calculating the mean square error between the actual ignition time and the target ignition time;
a8, judging whether the mean square error is less than a set error threshold value, if so, obtaining the trained multi-layer classifier, and if not, jumping to the step A1.
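Steps A1-A8 can be sketched as an outer training loop. Everything below is a hypothetical sketch: the classifier interface (forward, learning_params, adjust_delays, adjust_weights) is assumed rather than taken from the patent, and the stub merely mimics convergence so that the control flow (delay adjustment, then weight adjustment, then the MSE check) is visible.

```python
class StubClassifier:
    """Hypothetical stand-in for the multilayer spiking classifier.

    Only the interface assumed by the training loop matters here; a real
    implementation would hold weights, delays and spiking-neuron state.
    """
    def __init__(self, outputs):
        self.outputs = list(outputs)      # pretend output firing times
    def forward(self, spike_trains):
        return list(self.outputs)
    def learning_params(self):
        return [], []                     # (delta_out, delta_hidden)
    def adjust_delays(self, delta_out, delta_hidden):
        pass                              # step A4 in a real model
    def adjust_weights(self, delta_out, delta_hidden):
        # Step A6 in a real model; here we just move the outputs
        # halfway toward zero to mimic convergence.
        self.outputs = [o * 0.5 for o in self.outputs]

def train(classifier, spike_trains, targets, err_threshold=1e-3, max_epochs=1000):
    """Outer loop of steps A1-A8 from the training method."""
    for _ in range(max_epochs):
        classifier.forward(spike_trains)                          # A1-A2
        classifier.adjust_delays(*classifier.learning_params())   # A3-A4
        classifier.adjust_weights(*classifier.learning_params())  # A5-A6
        actual = classifier.forward(spike_trains)                 # A7
        fired = [(a, t) for a, t in zip(actual, targets) if a != -1]
        mse = sum((a - t) ** 2 for a, t in fired) / max(len(fired), 1)
        if mse < err_threshold:                                   # A8
            break
    return classifier

trained = train(StubClassifier([4.0, 2.0]), spike_trains=None, targets=[0.0, 0.0])
```

Note that non-firing neurons (firing time -1) are excluded from the mean-square-error computation, mirroring step A2's restriction to firing neurons.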