Image classification method based on DW-Tempotron algorithm

Document No. 8614 · Published 2021-09-17

1. An image classification method based on a DW-Tempotron algorithm is characterized by comprising the following steps:

s1, constructing a Tempotron algorithm;

s2, applying delay to synapses in the Tempotron algorithm to obtain a DW-Tempotron algorithm; the LIF model in the DW-Tempotron algorithm is as follows:

V(t) = Σ_i ω_i Σ_{t_i} K(t − t_i − d_i) + V_rest

K(t − t_i − d_i) = V_0 [exp(−(t − t_i − d_i)/τ) − exp(−(t − t_i − d_i)/τ_s)]

wherein V(t) is the membrane voltage of the postsynaptic neuron of the LIF model; ω_i is the weight of the ith synapse; K(·) is a kernel function representing the contribution of each afferent spike to the membrane voltage of the postsynaptic neuron; t is the current time; t_i is the spike input time of the ith presynaptic neuron; d_i is the applied delay; V_rest is the resting potential; τ is the postsynaptic membrane integration time constant; τ_s is the synaptic current decay time constant; V_0 is a factor that normalizes the membrane voltage;

S3, initializing the parameters of the DW-Tempotron algorithm; the initialized parameters comprise the input pulse times, synaptic weights, firing threshold, delays, learning rate, postsynaptic membrane integration time constant, synaptic current decay time constant, and iteration number;

S4, respectively acquiring the membrane voltage of the postsynaptic neuron of the LIF model in the current DW-Tempotron algorithm under the + mode and under the − mode;

S5, judging whether the membrane voltage under the current + mode is smaller than the firing threshold; if so, updating the delay, recalculating the membrane voltage under the + mode, and entering step S6; otherwise, entering step S7;

S6, judging whether the membrane voltage under the current + mode is smaller than the firing threshold; if so, updating the synaptic weights and returning to step S5; otherwise, entering step S7;

S7, judging whether the membrane voltage under the current − mode is larger than the firing threshold; if so, updating the delay, recalculating the membrane voltage under the − mode, and entering step S8; otherwise, ending the current training round and entering step S9;

S8, judging whether the membrane voltage under the current − mode is larger than the firing threshold; if so, updating the synaptic weights and returning to step S7; otherwise, ending the current training round and entering step S9;

S9, judging whether the number of training rounds has reached the set iteration number; if so, classifying images with the current DW-Tempotron algorithm; otherwise, returning to step S4.

2. The image classification method based on the DW-Tempotron algorithm according to claim 1, wherein the specific method for updating the delay in step S5 is as follows:

according to the formula:

d_i′ = d_i + Δd_i

Δd_i = t_max − t_i − d_i − Φ

obtaining the currently updated delay d_i′; wherein d_i represents the delay before the current update; Δd_i is the current delay adjustment value; t_max is the time at which the membrane voltage of the postsynaptic neuron under the current mode reaches its maximum; t_i is the spike input time of the current presynaptic neuron; Φ = ττ_s(ln τ − ln τ_s)/(τ − τ_s) is the time at which the kernel function K(·) reaches its peak; ω_i is the current synaptic weight.

3. The DW-Tempotron algorithm-based image classification method according to claim 1, wherein the specific method for updating the synaptic weights in step S6 is as follows:

according to the formula:

Δω_i = λ Σ_{t_i} K(t_max^− − t_i − d_i), when the maximum membrane voltage needs to be raised; Δω_i = −λ Σ_{t_i} K(t_max^+ − t_i − d_i), when the maximum membrane voltage needs to be lowered

obtaining the synaptic weight update value Δω_i in the DW-Tempotron algorithm, and adding the update value Δω_i to the current weight to complete the update of the synaptic weight; wherein λ is a constant greater than 0; t_max^+ and t_max^− are times at which the membrane voltage of the postsynaptic neuron reaches its maximum: the membrane voltage corresponding to t_max^+ is greater than the firing threshold, and the membrane voltage corresponding to t_max^− is less than the firing threshold.

4. The image classification method based on the DW-Tempotron algorithm according to claim 1, wherein the specific method for updating the delay in step S7 is as follows:

according to the formula:

d_i′ = d_i + Δd_i

Δd_i = t_max − t_i − d_i − Φ

obtaining the currently updated delay d_i′; wherein d_i represents the delay before the current update; Δd_i is the current delay adjustment value; t_max is the time at which the membrane voltage of the postsynaptic neuron under the current mode reaches its maximum; t_i is the spike input time of the current presynaptic neuron; Φ = ττ_s(ln τ − ln τ_s)/(τ − τ_s) is the time at which the kernel function K(·) reaches its peak; ω_i is the current synaptic weight.

5. The DW-Tempotron algorithm-based image classification method according to claim 1, wherein the specific method for updating the synaptic weights in step S8 is as follows:

according to the formula:

Δω_i = λ Σ_{t_i} K(t_max^− − t_i − d_i), when the maximum membrane voltage needs to be raised; Δω_i = −λ Σ_{t_i} K(t_max^+ − t_i − d_i), when the maximum membrane voltage needs to be lowered

obtaining the synaptic weight update value Δω_i in the DW-Tempotron algorithm, and adding the update value Δω_i to the current weight to complete the update of the synaptic weight; wherein λ is a constant greater than 0; t_max^+ and t_max^− are times at which the membrane voltage of the postsynaptic neuron reaches its maximum: the membrane voltage corresponding to t_max^+ is greater than the firing threshold, and the membrane voltage corresponding to t_max^− is less than the firing threshold.

6. The image classification method based on the DW-Tempotron algorithm according to claim 1, characterized in that the loss function of the DW-Tempotron algorithm is:

C = V(t_max^+) − (Thr − η_2), when the postsynaptic neuron fires on a − mode pattern; C = (Thr + η_1) − V(t_max^−), when the postsynaptic neuron fails to fire on a + mode pattern

wherein C represents the loss value; t_max^+ is the time at which the membrane voltage of the postsynaptic neuron reaches its maximum and the corresponding membrane voltage V(t_max^+) is greater than the firing threshold; Thr is the firing threshold; η_2 is a constant greater than 0; t_max^− is the time at which the membrane voltage of the postsynaptic neuron reaches its maximum and the corresponding membrane voltage V(t_max^−) is less than the firing threshold; η_1 is a constant greater than 0.

Background

In terms of supervised learning, spiking neural network learning algorithms can be divided into two categories according to the learning mechanism: membrane-voltage-driven and spike-driven. Membrane-voltage-driven algorithms adjust the weights using, as the error, the difference between the actual membrane voltage and the target membrane voltage at a given moment; similarly, spike-driven algorithms adjust the weights using, as the error, the time difference between the actual spike firing time and the target spike firing time. The most classical of the membrane-voltage-driven algorithms is the Tempotron learning algorithm, a two-layer network learning algorithm that uses gradient descent to push the maximum membrane voltage of a neuron that should fire above the threshold, and the maximum membrane voltage of a neuron that should not fire below the threshold.

In addition, noise resistance, i.e. robustness, is also crucial to an algorithm, and good robustness is an important prerequisite for an algorithm to find wide application. Noise is everywhere in our environment, and the brain is no exception; yet noise does not greatly affect the brain, which can still process information accurately and efficiently, although science cannot yet fully explain this property. When spiking neural networks are applied in practical scenarios, they are inevitably affected by noise; however, most current learning algorithms are trained and learned in an ideal noise-free environment without taking noise into consideration, which is a limitation of spiking neural networks at this early stage of their development.

One current solution is to add noise to the training data and then train the neurons to improve noise immunity. However, this method only performs well against the noise used in training and offers little resistance to other kinds of noise. Recently, a new plasticity rule called membrane-potential-dependent plasticity has been proposed, which allows the membrane voltage to recover from noise encountered either during or after training, but this mechanism is inefficient, and the membrane voltage recovery effect is not ideal and needs further improvement.

Disclosure of Invention

Aiming at the above defects in the prior art, the image classification method based on the DW-Tempotron algorithm provided by the invention solves the problem of the low learning efficiency of the existing Tempotron algorithm in image classification.

In order to achieve the purpose of the invention, the invention adopts the technical scheme that:

an image classification method based on a DW-Tempotron algorithm is provided, which comprises the following steps:

s1, constructing a Tempotron algorithm;

s2, applying delay to synapses in the Tempotron algorithm to obtain a DW-Tempotron algorithm; the LIF model in the DW-Tempotron algorithm is as follows:

V(t) = Σ_i ω_i Σ_{t_i} K(t − t_i − d_i) + V_rest

K(t − t_i − d_i) = V_0 [exp(−(t − t_i − d_i)/τ) − exp(−(t − t_i − d_i)/τ_s)]

wherein V(t) is the membrane voltage of the postsynaptic neuron of the LIF model; ω_i is the weight of the ith synapse; K(·) is a kernel function representing the contribution of each afferent spike to the membrane voltage of the postsynaptic neuron; t is the current time; t_i is the spike input time of the ith presynaptic neuron; d_i is the applied delay; V_rest is the resting potential; τ is the postsynaptic membrane integration time constant; τ_s is the synaptic current decay time constant; V_0 is a factor that normalizes the membrane voltage;

S3, initializing the parameters of the DW-Tempotron algorithm; the initialized parameters comprise the input pulse times, synaptic weights, firing threshold, delays, learning rate, postsynaptic membrane integration time constant, synaptic current decay time constant, and iteration number;

S4, respectively acquiring the membrane voltage of the postsynaptic neuron of the LIF model in the current DW-Tempotron algorithm under the + mode and under the − mode;

S5, judging whether the membrane voltage under the current + mode is smaller than the firing threshold; if so, updating the delay, recalculating the membrane voltage under the + mode, and entering step S6; otherwise, entering step S7;

S6, judging whether the membrane voltage under the current + mode is smaller than the firing threshold; if so, updating the synaptic weights and returning to step S5; otherwise, entering step S7;

S7, judging whether the membrane voltage under the current − mode is larger than the firing threshold; if so, updating the delay, recalculating the membrane voltage under the − mode, and entering step S8; otherwise, ending the current training round and entering step S9;

S8, judging whether the membrane voltage under the current − mode is larger than the firing threshold; if so, updating the synaptic weights and returning to step S7; otherwise, ending the current training round and entering step S9;

S9, judging whether the number of training rounds has reached the set iteration number; if so, classifying images with the current DW-Tempotron algorithm; otherwise, returning to step S4.

Further, the specific method for updating the delay in step S5 is as follows:

according to the formula:

d_i′ = d_i + Δd_i

Δd_i = t_max − t_i − d_i − Φ

obtaining the currently updated delay d_i′; wherein d_i represents the delay before the current update; Δd_i is the current delay adjustment value; t_max is the time at which the membrane voltage of the postsynaptic neuron under the current mode reaches its maximum; t_i is the spike input time of the current presynaptic neuron; Φ = ττ_s(ln τ − ln τ_s)/(τ − τ_s) is the time at which the kernel function K(·) reaches its peak; ω_i is the current synaptic weight.

Further, the specific method for updating the synaptic weight in step S6 is as follows:

according to the formula:

Δω_i = λ Σ_{t_i} K(t_max^− − t_i − d_i), when the maximum membrane voltage needs to be raised; Δω_i = −λ Σ_{t_i} K(t_max^+ − t_i − d_i), when the maximum membrane voltage needs to be lowered

obtaining the synaptic weight update value Δω_i in the DW-Tempotron algorithm, and adding the update value Δω_i to the current weight to complete the update of the synaptic weight; wherein λ is a constant greater than 0; t_max^+ and t_max^− are times at which the membrane voltage of the postsynaptic neuron reaches its maximum: the membrane voltage corresponding to t_max^+ is greater than the firing threshold, and the membrane voltage corresponding to t_max^− is less than the firing threshold.

Further, the specific method for updating the delay in step S7 is as follows:

according to the formula:

d_i′ = d_i + Δd_i

Δd_i = t_max − t_i − d_i − Φ

obtaining the currently updated delay d_i′; wherein d_i represents the delay before the current update; Δd_i is the current delay adjustment value; t_max is the time at which the membrane voltage of the postsynaptic neuron under the current mode reaches its maximum; t_i is the spike input time of the current presynaptic neuron; Φ = ττ_s(ln τ − ln τ_s)/(τ − τ_s) is the time at which the kernel function K(·) reaches its peak; ω_i is the current synaptic weight.

Further, the specific method for updating the synaptic weight in step S8 is as follows:

according to the formula:

Δω_i = λ Σ_{t_i} K(t_max^− − t_i − d_i), when the maximum membrane voltage needs to be raised; Δω_i = −λ Σ_{t_i} K(t_max^+ − t_i − d_i), when the maximum membrane voltage needs to be lowered

obtaining the synaptic weight update value Δω_i in the DW-Tempotron algorithm, and adding the update value Δω_i to the current weight to complete the update of the synaptic weight; wherein λ is a constant greater than 0; t_max^+ and t_max^− are times at which the membrane voltage of the postsynaptic neuron reaches its maximum: the membrane voltage corresponding to t_max^+ is greater than the firing threshold, and the membrane voltage corresponding to t_max^− is less than the firing threshold.

Further, the loss function of the DW-Tempotron algorithm is:

C = V(t_max^+) − (Thr − η_2), when the postsynaptic neuron fires on a − mode pattern; C = (Thr + η_1) − V(t_max^−), when the postsynaptic neuron fails to fire on a + mode pattern

wherein C represents the loss value; t_max^+ is the time at which the membrane voltage of the postsynaptic neuron reaches its maximum and the corresponding membrane voltage V(t_max^+) is greater than the firing threshold; Thr is the firing threshold; η_2 is a constant greater than 0; t_max^− is the time at which the membrane voltage of the postsynaptic neuron reaches its maximum and the corresponding membrane voltage V(t_max^−) is less than the firing threshold; η_1 is a constant greater than 0.

The invention has the beneficial effects that: the DW-Tempotron algorithm adjusts the timing of the input pulses, thereby directly influencing the firing time of the postsynaptic neuron and improving learning efficiency.

Drawings

FIG. 1 is a schematic flow diagram of the present invention;

FIG. 2 is a DW-Tempotron input pulse profile;

FIG. 3 shows the result of neuron classification under initial conditions;

FIG. 4 is the neuron classification results after completion of Tempotron training;

FIG. 5 is the classification result of the neurons after training when the DW-Tempotron adopts the original loss function;

FIG. 6 is an iterative adjustment process for Tempotron weights;

FIG. 7 is a weight iterative adjustment process when the DW-Tempotron adopts the original loss function;

FIG. 8 is the change in weights before and after Tempotron training;

FIG. 9 is the change in weights before and after training when the DW-Tempotron adopts the original loss function;

FIG. 10 is a graph of the change in DW-Tempotron synaptic delay before and after training using the original loss function;

FIG. 11 shows the DWDT-Tempotron accuracy.

Detailed Description

The following description of the embodiments of the present invention is provided to facilitate understanding by those skilled in the art, but it should be understood that the invention is not limited to the scope of these embodiments. To those skilled in the art, various changes are apparent within the spirit and scope of the invention as defined by the appended claims, and all inventions making use of the inventive concept are protected.

As shown in fig. 1, the image classification method based on the DW-Tempotron algorithm includes the following steps:

s1, constructing a Tempotron algorithm;

s2, applying delay to synapses in the Tempotron algorithm to obtain a DW-Tempotron algorithm; the LIF model in the DW-Tempotron algorithm is as follows:

V(t) = Σ_i ω_i Σ_{t_i} K(t − t_i − d_i) + V_rest

K(t − t_i − d_i) = V_0 [exp(−(t − t_i − d_i)/τ) − exp(−(t − t_i − d_i)/τ_s)]

wherein V(t) is the membrane voltage of the postsynaptic neuron of the LIF model; ω_i is the weight of the ith synapse; K(·) is a kernel function representing the contribution of each afferent spike to the membrane voltage of the postsynaptic neuron; t is the current time; t_i is the spike input time of the ith presynaptic neuron; d_i is the applied delay; V_rest is the resting potential; τ is the postsynaptic membrane integration time constant; τ_s is the synaptic current decay time constant; V_0 is a factor that normalizes the membrane voltage;
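As a concrete illustration, the delayed LIF membrane voltage above can be sketched as follows; the time constants, resting potential, and peak normalization are illustrative assumptions, not the patent's exact settings:

```python
import numpy as np

# Illustrative time constants (ms) and resting potential; V0 is chosen
# so that the kernel's peak value is normalized to 1.
TAU, TAU_S = 15.0, 3.75
V_REST = 0.0
T_PEAK = TAU * TAU_S * np.log(TAU / TAU_S) / (TAU - TAU_S)  # kernel peak time
V0 = 1.0 / (np.exp(-T_PEAK / TAU) - np.exp(-T_PEAK / TAU_S))

def kernel(s):
    """PSP kernel K(s) = V0*(exp(-s/tau) - exp(-s/tau_s)) for s >= 0, else 0."""
    s = np.asarray(s, dtype=float)
    return np.where(s >= 0, V0 * (np.exp(-s / TAU) - np.exp(-s / TAU_S)), 0.0)

def membrane_voltage(t, spikes, weights, delays):
    """V(t) = sum_i w_i * sum_{t_i} K(t - t_i - d_i) + V_rest.
    spikes is a list of per-synapse spike-time arrays."""
    v = V_REST
    for t_i, w_i, d_i in zip(spikes, weights, delays):
        v += w_i * np.sum(kernel(t - np.asarray(t_i) - d_i))
    return v
```

A spike at time 5 ms through a synapse of weight 0.5 and zero delay contributes exactly 0.5 to V at time 5 + T_PEAK, since the kernel peak is normalized to 1.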

S3, initializing the parameters of the DW-Tempotron algorithm; the initialized parameters comprise the input pulse times, synaptic weights, firing threshold, delays, learning rate, postsynaptic membrane integration time constant, synaptic current decay time constant, and iteration number;

S4, respectively acquiring the membrane voltage of the postsynaptic neuron of the LIF model in the current DW-Tempotron algorithm under the + mode and under the − mode;

S5, judging whether the membrane voltage under the current + mode is smaller than the firing threshold; if so, updating the delay, recalculating the membrane voltage under the + mode, and entering step S6; otherwise, entering step S7;

S6, judging whether the membrane voltage under the current + mode is smaller than the firing threshold; if so, updating the synaptic weights and returning to step S5; otherwise, entering step S7;

S7, judging whether the membrane voltage under the current − mode is larger than the firing threshold; if so, updating the delay, recalculating the membrane voltage under the − mode, and entering step S8; otherwise, ending the current training round and entering step S9;

S8, judging whether the membrane voltage under the current − mode is larger than the firing threshold; if so, updating the synaptic weights and returning to step S7; otherwise, ending the current training round and entering step S9;

S9, judging whether the number of training rounds has reached the set iteration number; if so, classifying images with the current DW-Tempotron algorithm; otherwise, returning to step S4.

The specific method of updating the delay in step S5 is as follows: according to the formula:

d_i′ = d_i + Δd_i

Δd_i = t_max − t_i − d_i − Φ

obtaining the currently updated delay d_i′; wherein d_i represents the delay before the current update; Δd_i is the current delay adjustment value; t_max is the time at which the membrane voltage of the postsynaptic neuron under the current mode reaches its maximum; t_i is the spike input time of the current presynaptic neuron; Φ = ττ_s(ln τ − ln τ_s)/(τ − τ_s) is the time at which the kernel function K(·) reaches its peak; ω_i is the current synaptic weight.

The specific method for updating the synaptic weight in step S6 is as follows: according to the formula:

Δω_i = λ Σ_{t_i} K(t_max^− − t_i − d_i), when the maximum membrane voltage needs to be raised; Δω_i = −λ Σ_{t_i} K(t_max^+ − t_i − d_i), when the maximum membrane voltage needs to be lowered

obtaining the synaptic weight update value Δω_i in the DW-Tempotron algorithm, and adding the update value Δω_i to the current weight to complete the update of the synaptic weight; wherein λ is a constant greater than 0; t_max^+ and t_max^− are times at which the membrane voltage of the postsynaptic neuron reaches its maximum: the membrane voltage corresponding to t_max^+ is greater than the firing threshold, and the membrane voltage corresponding to t_max^− is less than the firing threshold.
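The weight update above follows the standard Tempotron gradient rule applied to delayed spike times: each weight is nudged by the kernel value its delayed spikes contribute at the voltage-peak time. The following sketch assumes that reading, with an illustrative learning rate `lam`:

```python
import numpy as np

# Kernel with peak normalized to 1 (illustrative time constants).
TAU, TAU_S = 15.0, 3.75
T_PEAK = TAU * TAU_S * np.log(TAU / TAU_S) / (TAU - TAU_S)
V0 = 1.0 / (np.exp(-T_PEAK / TAU) - np.exp(-T_PEAK / TAU_S))

def kernel(s):
    s = np.asarray(s, dtype=float)
    return np.where(s > 0, V0 * (np.exp(-s / TAU) - np.exp(-s / TAU_S)), 0.0)

def weight_update(spikes, delays, t_max, sign, lam=0.01):
    """Per-synapse delta-w: sign * lam * sum K(t_max - t_i - d_i).
    sign=+1 raises the peak voltage (missed + pattern), sign=-1 lowers
    it (wrongly firing - pattern). Spikes after t_max contribute 0."""
    return [sign * lam * float(np.sum(kernel(t_max - np.asarray(t) - d)))
            for t, d in zip(spikes, delays)]
```

Only spikes whose delayed arrival precedes t_max receive a non-zero update, since the kernel is causal.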

The specific method of updating the delay in step S7 is as follows: according to the formula:

d_i′ = d_i + Δd_i

Δd_i = t_max − t_i − d_i − Φ

obtaining the currently updated delay d_i′; wherein d_i represents the delay before the current update; Δd_i is the current delay adjustment value; t_max is the time at which the membrane voltage of the postsynaptic neuron under the current mode reaches its maximum; t_i is the spike input time of the current presynaptic neuron; Φ = ττ_s(ln τ − ln τ_s)/(τ − τ_s) is the time at which the kernel function K(·) reaches its peak; ω_i is the current synaptic weight.

The specific method for updating the synaptic weight in step S8 is as follows: according to the formula:

Δω_i = λ Σ_{t_i} K(t_max^− − t_i − d_i), when the maximum membrane voltage needs to be raised; Δω_i = −λ Σ_{t_i} K(t_max^+ − t_i − d_i), when the maximum membrane voltage needs to be lowered

obtaining the synaptic weight update value Δω_i in the DW-Tempotron algorithm, and adding the update value Δω_i to the current weight to complete the update of the synaptic weight; wherein λ is a constant greater than 0; t_max^+ and t_max^− are times at which the membrane voltage of the postsynaptic neuron reaches its maximum: the membrane voltage corresponding to t_max^+ is greater than the firing threshold, and the membrane voltage corresponding to t_max^− is less than the firing threshold.

In one embodiment of the invention, the DW-Tempotron algorithm uses the original loss function of the Tempotron algorithm:

the continuous LIF neuron model used by Tempotron is characterized in that a gradient descent method is conveniently used, and the original loss function is expanded to the synaptic weight omegaiThe offset derivative is calculated to obtain the adjustment rule of synaptic weight.

Random experiments are adopted to verify the learning ability of the DW-Tempotron algorithm. We set up 8 presynaptic neurons that randomly deliver + mode and − mode pulse signals to the postsynaptic neuron in the specific sequence shown in FIG. 2, with black denoting + mode signals and gray denoting − mode signals. We set the training time to 300 ms; the synaptic weights are initialized from a Gaussian distribution with mean and variance both 0.1; the delays are initialized from a uniform distribution over the interval [0, 5] ms; and the firing threshold is set to 1.
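The initialization just described can be sketched as follows. The spike-pattern generation (one random spike time per afferent) is an assumption for illustration; the experiment's actual spike layout is the fixed sequence of FIG. 2:

```python
import numpy as np

rng = np.random.default_rng(0)               # fixed seed for reproducibility
N, T = 8, 300.0                              # afferents, training window (ms)
weights = rng.normal(0.1, np.sqrt(0.1), N)   # Gaussian: mean 0.1, variance 0.1
delays = rng.uniform(0.0, 5.0, N)            # uniform on [0, 5] ms
threshold = 1.0                              # firing threshold

# Hypothetical + and - mode patterns: one random spike per afferent.
plus_pattern = [rng.uniform(0.0, T, 1) for _ in range(N)]
minus_pattern = [rng.uniform(0.0, T, 1) for _ in range(N)]
```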

After we feed all input spikes with the initial weights into the postsynaptic neuron, the membrane voltages are as shown in FIG. 3. As can be seen from FIG. 3, the membrane voltage reaches its maximum at 250 ms in the + mode but never exceeds the firing threshold; in the − mode, the membrane voltage reaches its maximum at 111 ms and exceeds the threshold, firing the postsynaptic neuron, which then no longer receives input spikes, so the membrane voltage gradually decays to 0. Obviously, the classification result at this point is incorrect.

Afterwards, we train the neurons with the Tempotron and DW-Tempotron algorithms respectively, setting the number of training iterations to 100. After training is completed, the membrane voltage of the postsynaptic neuron is shown in FIGS. 4 and 5. At this point, the membrane voltage smoothly exceeds the threshold in the + mode and fires the neuron, while the membrane voltage in the − mode stays below the threshold throughout; both algorithms allow the neuron to correctly classify the + and − modes.

In the following, we analyze how each parameter of the two algorithms varies during training. In terms of synaptic weights, FIGS. 6 and 7 show how the synaptic weights of the two algorithms vary with the number of training iterations. As can be seen from the figures, the synaptic weights of Tempotron are still not completely stable after 40 training iterations; owing to the added delay weights, DW-Tempotron correspondingly reduces the adjustment of the synaptic weights and reaches a stable state after about 28 training iterations, thereby improving learning efficiency.

Next, we analyze the difference between the initial weights and the weights after training. FIG. 8 depicts the distribution of the initial weights and the weights after Tempotron training; FIG. 9 shows the distribution of the weights after DW-Tempotron training against the initial weights. In general, the synaptic weights of DW-Tempotron change with a smaller amplitude than those of Tempotron, which is precisely the point of the delay weights introduced in the present invention. As for synaptic delays, FIG. 10 compares the initial delays with the delays after training. The delay of each synapse can be adjusted at most once; it can be seen that of the 8 synapses, 3 synaptic delays were adjusted, while the rest remain in their initial, undelayed state.

In another embodiment of the present invention, the original loss function of Tempotron in the DW-Tempotron algorithm is updated by using a dynamic threshold, and the new loss function is obtained as follows:

C = V(t_max^+) − (Thr − η_2), when the postsynaptic neuron fires on a − mode pattern; C = (Thr + η_1) − V(t_max^−), when the postsynaptic neuron fails to fire on a + mode pattern

wherein C represents the loss value; t_max^+ is the time at which the membrane voltage of the postsynaptic neuron reaches its maximum and the corresponding membrane voltage V(t_max^+) is greater than the firing threshold; Thr is the firing threshold; η_2 is a constant greater than 0; t_max^− is the time at which the membrane voltage of the postsynaptic neuron reaches its maximum and the corresponding membrane voltage V(t_max^−) is less than the firing threshold; η_1 is a constant greater than 0. The synapse adjustment rules of the DW-Tempotron algorithm do not need to be changed. The adjusted loss function keeps the membrane voltage of the postsynaptic neuron well below the original threshold in the − mode and pushes it sufficiently above the original threshold in the + mode, thereby improving the noise resistance of the algorithm.
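The dynamic-threshold idea can be sketched as follows. The placement of η_1 and η_2 follows one plausible reading of the claim (the published formula is not fully legible in this copy): a failed + pattern is penalized until its peak voltage clears Thr + η_1, and a wrongly firing − pattern until it drops below Thr − η_2. The η values here are illustrative:

```python
def dynamic_threshold_loss(v_max, mode, thr=1.0, eta1=0.2, eta2=0.2):
    """Loss C for one pattern given its peak membrane voltage v_max.
    + patterns must fire comfortably above thr; - patterns must stay
    comfortably below it. Zero loss once the margin is satisfied."""
    if mode == '+' and v_max < thr + eta1:   # should fire well above Thr
        return (thr + eta1) - v_max
    if mode == '-' and v_max > thr - eta2:   # should stay well below Thr
        return v_max - (thr - eta2)
    return 0.0
```

The margins η_1 and η_2 are what give the trained neuron headroom against noise: a small voltage perturbation can no longer flip the firing decision.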

The DW-Tempotron algorithm whose loss function is updated with the dynamic threshold is named the DWDT-Tempotron algorithm. Random experiments were used to verify the learning ability of the DWDT-Tempotron algorithm. We set up 8 presynaptic neurons that randomly deliver + mode and − mode pulse signals to the postsynaptic neuron. We set the training time to 300 ms; the synaptic weights are initialized from a Gaussian distribution with mean and variance both 0.1; the delays are initialized from a uniform distribution over the interval [0, 5] ms; and the firing threshold is set to 1. During training, the accuracy of DWDT-Tempotron is also tested with a randomly drawn test set. FIG. 11 shows how the test-set accuracy varies over the course of training. Before training, the accuracy is very low; it then rises gradually with iterative training, and after 80 training iterations it approaches 1, which shows that the neuron can correctly classify most of the test data.

The original intention of using the dynamic threshold in this embodiment is to improve the anti-noise capability of the algorithm and enhance robustness. Here we verify this anti-noise capability: we train postsynaptic neurons with Tempotron and DWDT-Tempotron respectively and then compare their outputs under noise interference. The noise in spiking neural networks is mainly background membrane voltage noise and pulse perturbation noise, and we verify the resistance of the dynamic threshold to each in turn.

For background membrane voltage noise, the experimental conditions were set as follows: 10 presynaptic neurons, with input pulse sequences generated from a Poisson distribution at a rate of 5 Hz. The synaptic weights are initialized uniformly in the interval [0, 0.1], the synaptic delays uniformly in the interval [0, 2] ms, and the pulse sequence length is varied from 300 ms to 900 ms. We first train the neurons with the two algorithms respectively, then inject Gaussian white noise voltage into the neurons to simulate a noisy environment, with the noise mean 0 and the variance σ gradually increased from 0.03 mV to 0.33 mV. We test the neurons with data randomly drawn from UCI and record the classification accuracy. Table 1 shows the test results for different pulse sequence lengths with the other parameters fixed, and Table 2 shows the test results for different input pulse frequencies with the other parameters fixed.
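The membrane-voltage-noise test can be sketched as follows: Gaussian white noise is added to a clean membrane trace and the fire/no-fire decision is re-checked. The classifier and the traces here are stand-ins (the experiment's real traces come from the trained neurons), and `sigma` is the noise standard deviation, whereas the source sweeps the variance:

```python
import numpy as np

def classify_under_noise(v_trace, thr, sigma, rng):
    """True if the noisy membrane trace crosses the firing threshold."""
    noisy = v_trace + rng.normal(0.0, sigma, size=v_trace.shape)
    return bool(np.max(noisy) > thr)

def accuracy_under_noise(traces, labels, thr, sigma, seed=0):
    """Fraction of traces whose fire/no-fire decision matches its label."""
    rng = np.random.default_rng(seed)
    hits = sum(classify_under_noise(v, thr, sigma, rng) == y
               for v, y in zip(traces, labels))
    return hits / len(labels)
```

Sweeping `sigma` upward and re-measuring the accuracy reproduces the shape of the experiment summarized in Tables 1 and 2.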

Table 1: accuracy of Tempotron and DWDT-Tempotron under membrane voltage noise for different pulse sequence lengths

Table 2: accuracy of Tempotron and DWDT-Tempotron under membrane voltage noise for different input pulse frequencies

From the above two tables, it can be seen that the accuracy of Tempotron decreases significantly as the noise intensity increases, whereas increasing the noise intensity has little effect on the accuracy of DWDT-Tempotron. This shows that DWDT-Tempotron is more robust than Tempotron.

Next we analyze the effect of pulse perturbation noise. For the Tempotron and DWDT-Tempotron algorithms, the experimental conditions were the same as for membrane voltage noise. We again trained the neurons with the two algorithms separately and then added pulse perturbation noise to the input pulses, with the noise mean 0 and the variance gradually increased from 0.3 mV to 3.3 mV; we tested with the same data and recorded the classification accuracy. Table 3 shows the test results for different pulse sequence lengths with the other parameters fixed, and Table 4 shows the test results for different input pulse frequencies with the other parameters fixed.

Table 3: accuracy of Tempotron and DWDT-Tempotron under pulse perturbation noise for different pulse sequence lengths

Table 4: accuracy of Tempotron and DWDT-Tempotron under pulse perturbation noise for different input pulse frequencies

From the above two tables, it can be seen that pulse perturbation noise has a large influence on the accuracy of both Tempotron and DWDT-Tempotron. However, as the noise intensity increases, the accuracy of the DWDT-Tempotron algorithm decreases by a smaller amount than that of Tempotron. Therefore, under pulse perturbation noise the DWDT-Tempotron algorithm likewise has better anti-noise capability.

In conclusion, compared with the traditional Tempotron algorithm, the DW-Tempotron algorithm provided by the invention has higher learning efficiency and accuracy.
