All-digital spiking neural network hardware system and method based on the STDP rule


1. An all-digital spiking neural network hardware system based on the STDP rule, characterized in that: the spiking neural network system comprises an input layer neuron module (1), a plasticity learning module (2), a data line control module (3), a synapse array module (4), an output layer neuron module (5) and an experiment report module (6);

the output end of the input layer neuron module (1) is connected with the input end of the data line control module (3) and the input end of the plasticity learning module (2), the output end of the plasticity learning module (2) is connected with the input end of the data line control module (3), the output end of the data line control module (3) is connected with the input end of the synapse array module (4), the output end of the synapse array module (4) is connected with the input end of the output layer neuron module (5), and the output end of the output layer neuron module (5) is connected with the input end of the experiment report module (6);

the plasticity learning module comprises a synapse address registering module (21), a plurality of synapse calculating modules (22), a synapse weight register (26) and a plurality of post-synaptic pulse generating modules (27); the output end of the input layer neuron module (1) inputs the presynaptic pulse (12) to each synapse calculation module (22); each synapse calculating module (22) calculates an updated synapse weight and inputs the synapse weight to a synapse weight register (26) through a second synapse weight connection line (25), and inputs the synapse weight to each post-synaptic pulse generating module (27) through a first synapse weight connection line (23); the output end of the post-synaptic pulse generation module (27) inputs post-synaptic pulses to each synapse calculation module (22) through a post-synaptic pulse connection line (24); the synapse weight register (26) is connected with the synapse array module (4) through a synapse weight output port (40); the output end of the synapse address registering module (21) is connected with the synapse array module (4);

each synapse calculation module (22) comprises a state machine and an adder (57); the state machine reads a synaptic weight from the synapse array module (4) and inputs it to the post-synaptic pulse generation module (27); the post-synaptic pulse generation module determines the generation time of the post-synaptic pulse (24) according to the magnitude of the synaptic weight, thereby adjusting the time interval between the pre-synaptic and post-synaptic pulses, and transmits a synaptic weight increment to the adder (57) according to this time interval, so as to complete the update of the synaptic weight; the updated synaptic weight is finally transmitted to the synapse array module (4) for storage.

2. An all-digital spiking neural network hardware system based on the STDP rule according to claim 1, wherein: the input layer neuron module comprises an input neuron counter (9), an image pixel point counter (10) and an input neuron (11); the input end of the input neuron (11) is connected with an image information input connection line (7), an input neuron serial number connection line (8) and the input neuron counter (9); the identification-phase and initialization-phase write enable output port (13), the identification-phase and initialization-phase write address output port (14), the identification-phase and initialization-phase learning enable output port (15) and the identification-phase and initialization-phase read-write address output port (16) of the image pixel point counter (10) are connected with the data line control module (3); the input layer neuron module compresses 196 input layer neurons into 28 composite input layer neurons, corresponding to 28 groups of synapse calculation modules while effectively reducing the amount of digital logic used.

3. An all-digital spiking neural network hardware system based on the STDP rule according to claim 1, wherein: the data line control module (3) comprises a data selector (35); in the training phase, the synapse weight input port (28) of the data selector (35), together with a write enable signal input port (29), a write address input port (30), a write enable input port (31) and a write address input port (32), respectively controls the reading and writing of synaptic weights in the learning phase and the reading of synapse addresses; the write enable decision port (33) determines the output of the write enable output port (41) and decides which column of the synapse array to train; in the identification phase, the identification-phase and initialization-phase write enable input port (13), the identification-phase and initialization-phase write address input port (14), the identification-phase and initialization-phase learning enable input port (15) and the identification-phase and initialization-phase read-write address input port (16) respectively control the reading and writing of synaptic weights in the identification phase; the initialization weight input port (34) controls the input of synaptic weights in the initialization phase.

4. An all-digital spiking neural network hardware system based on the STDP rule according to claim 1, wherein: the output layer neuron module (5) comprises a state machine, a synapse weight register (43) and a lateral inhibition signal register (44); the state machine jumps between states according to a learning enable signal (18) and a lateral inhibition signal (47), and synaptic weights (42) are input into the synapse weight register (43) from the synapse array module (4); in the training phase, when the accumulated value of the synaptic weights exceeds a threshold, the lateral inhibition signal register (44) issues lateral inhibition from a lateral inhibition signal output port (45) to inhibit the output layer neurons of the other columns; in the identification phase, the accumulated synaptic weights are input to the experiment report module (6) from a synaptic weight output port (46).

5. An all-digital spiking neural network hardware system based on the STDP rule according to claim 1, wherein: the experiment report module (6) comprises a code conversion module (49), a label number register (50), a membrane potential register (51), a maximum membrane potential register (52), a membrane potential judger (53), a correct image counter (54) and an accuracy calculator (55); in the training phase, a lateral inhibition input port (47) inputs the inhibition signal of each output layer neuron (5), the code conversion module (49) converts the input inhibition signal into a 4-bit binary code, and at the same time a label input port (48) matches the correct label with the converted 4-bit binary code and registers it in the label number register (50); in the identification phase, the accumulated synaptic weights of each output layer neuron are input through a weight input port (47), the membrane potential register (51) selects the maximum membrane potential and inputs it into the maximum membrane potential register (52), the membrane potential judger (53) judges whether the label of the identified image is consistent with the label of the learned image, the correct image counter (54) counts the correctly identified images, the result is calculated by the accuracy calculator (55), and the result is output through an accuracy output port (56).

6. An all-digital spiking neural network hardware system based on the STDP rule according to claim 1, wherein: the state jump mode of the synapse calculation module is specifically as follows: state S0 is the initialization stage of iterative learning, in which the synaptic weights are initialized by generating them from pseudo-random numbers; state S2 is entered when the learning enable terminal is at high level and a post-synaptic pulse is present under condition (2), and state S1 is entered when the learning enable terminal is at high level and a pre-synaptic pulse is present under condition (3);

state S1 is the synaptic weight enhancement state, in which the synaptic weight is increased; state S0 is entered under condition (1), that is, when the learning enable terminal is at low level or the counter value reaches 4; state S2 is the synaptic weight weakening state, in which the synaptic weight is reduced; state S0 is entered under condition (1), when the learning enable terminal is at low level or the counter value reaches 4;

condition (1): the learning enable terminal is at low level or the counter value reaches 4;

condition (2): the learning enable terminal is at high level and a post-synaptic pulse is present;

condition (3): the learning enable terminal is at high level and a pre-synaptic pulse is present;

state S0: an initialization stage of iterative learning;

state S1: a synaptic weight enhancement state;

state S2: a synaptic weight weakening state;

state S3: an unlearned initialization state.

7. An all-digital spiking neural network hardware system based on the STDP rule according to claim 1, wherein: the state jump mode of the output layer neuron is specifically as follows:

state S3, the unlearned initialization state: state S4 is entered when the learning enable terminal is at high level under condition (5);

state S4, the competitive learning state, in which the output layer neurons compete to decide which column of the synapse array module learns: state S5 is entered when the lateral inhibition signal (62) is high under condition (7), and state S6 is entered when the lateral inhibition signal input (56) is high under condition (6);

state S5, the learning competition success state, which is mainly the learning phase: state S7 is entered when the learning enable terminal is at low level under condition (4);

state S6, the learning competition failure state: state S3 is entered when the learning enable terminal is at low level under condition (4);

state S7, the identification phase: state S8 is entered when the learning enable terminal is at high level under condition (5);

state S8, the non-interference stage, in which a neuron that has learned successfully does not interfere with the neurons that have not learned: state S4 is entered when the learning enable terminal is at low level under condition (4);

condition (4): the learning enable terminal is at low level;

condition (5): the learning enable terminal is at high level;

condition (6): the lateral inhibition signal input terminal is at high level;

condition (7): the lateral inhibition signal is at high level;

state S3: an unlearned initialization state;

state S4: a competitive learning state;

state S5: the learning competition is successful;

state S6: failure of the learning competition;

state S7: a recognition stage;

state S8: a non-interference stage.

8. An all-digital spiking neural network hardware system based on the STDP rule according to claim 1, wherein: the synapse array module is composed of 10 groups of BRAMs in the FPGA, and the write enable signals of the BRAM groups are independent of one another; in the initialization phase, randomized synaptic weights are written into all 10 groups of synapse arrays; in the training phase, the iterative update of the synaptic weights only concerns the column of the synapse array that wins the current competition: the write enable of that BRAM module is at high level, while the write enables of the other BRAM modules are at low level.

9. The implementation method of the all-digital spiking neural network hardware system based on the STDP rule of claim 1, wherein: the method comprises an initialization phase, a training phase and an identification phase; in the initialization phase, pseudo-random numbers are written into the synapse array module as the initial synaptic weights; in the training phase, the pixel information of an image is input serially to the input layer neuron module, which first judges whether an input pixel exceeds a threshold and, if so, generates a pulse square wave, namely a pre-synaptic pulse; the pre-synaptic pulse is input into the plasticity learning module, which reads the corresponding synaptic weight from the synapse array module, determines how quickly the post-synaptic pulse is emitted according to the magnitude of the synaptic weight so as to determine the size of the weight increment, and writes the updated synaptic weight back into the synapse array module; the output layer neuron module sums the synaptic weights input from the synapse array module and registers the sum in a membrane potential register; when its sum is the first to exceed the set membrane potential threshold, it outputs a lateral inhibition signal to inhibit the output layer neurons of the other columns, realizing a WTA (winner-take-all) mechanism and the learning of one class of images by one column of the synapse array module; in the identification phase, the pixels of the image to be identified are input sequentially to the input layer neurons, the pixels exceeding the threshold generate pulses, the synaptic weights are summed, the maximum membrane potential among the output layer neurons is found, and its label is compared with the label of the image to be identified to judge whether the identification is correct.

Technical Field

Neuroscience research has progressed significantly in recent years, revealing new properties of biological neurons. Biological neurons exhibit dynamic and spatiotemporal behaviour when processing information, and are notably efficient and low-power when processing real-time information. As society becomes increasingly information-driven, the demand for real-time data processing keeps growing, so software models and hardware architectures that emulate the way neurons work have become a research focus. Currently, the TrueNorth chip of IBM, the Loihi chip of Intel, the Tianjic chip of Tsinghua University and the Darwin chip of Zhejiang University are very successful SNN hardware architectures. However, these architectures do not apply the STDP rule systematically to the training phase of an SNN. This document proposes a fully digital SNN system that organically combines a rationality analysis of the STDP rule and achieves recognition of the MNIST data set by means of online training.

Disclosure of Invention

Aiming at the defects of the prior art, the invention provides an all-digital spiking neural network hardware system and method based on the STDP rule.

The all-digital spiking neural network hardware system based on the STDP rule comprises an input layer neuron module, a plasticity learning module, a data line control module, a synapse array module, an output layer neuron module and an experiment report module;

the output end of the input layer neuron module is connected with the input end of the data line control module and the input end of the plasticity learning module, the output end of the plasticity learning module is connected with the input end of the data line control module, the output end of the data line control module is connected with the input end of the synapse array module, the output end of the synapse array module is connected with the input end of the output layer neuron module, and the output end of the output layer neuron module is connected with the input end of the experiment report module;

the plasticity learning module comprises a synapse address registering module, a plurality of synapse calculating modules, a synapse weight register and a plurality of post-synaptic pulse generating modules; the output end of the input layer neuron module inputs presynaptic pulses to each synapse calculation module; each synapse calculation module calculates updated synapse weight, inputs the updated synapse weight to a synapse weight register through a second synapse weight connecting line, and inputs the updated synapse weight to each post-synaptic pulse generation module through a first synapse weight connecting line; the output end of the post-synaptic pulse generation module inputs post-synaptic pulses to each synapse calculation module through a post-synaptic pulse connection line; the synapse weight register is connected with the synapse array module through a synapse weight output port; the output end of the synapse address registering module is connected with the synapse array module;

each synapse calculation module comprises a state machine and an adder; the state machine reads a synaptic weight from the synapse array module and inputs it to the post-synaptic pulse generation module; the post-synaptic pulse generation module determines the generation time of the post-synaptic pulse according to the magnitude of the synaptic weight, thereby adjusting the time interval between the pre-synaptic and post-synaptic pulses, and transmits a synaptic weight increment to the adder according to this time interval to complete the update of the synaptic weight; the updated synaptic weight is finally transmitted to the synapse array module for storage.
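For illustration only, the following Python sketch models the behaviour just described: the pre/post pulse interval selects a signed weight increment, and a saturating adder applies it. The 8-bit weight range, the 4-step window (echoing the counter limit of 4 used by the state machine below) and the unit step size are assumptions of the sketch, not values fixed by the text.

```python
# Behavioural sketch of one synapse calculation module (not the RTL itself).
# Constants (8-bit weights, 4-step STDP window, unit step) are assumptions.

W_MIN, W_MAX = 0, 255     # assumed 8-bit weight range
WINDOW = 4                # assumed STDP window, matching the counter limit of 4

def weight_delta(t_pre: int, t_post: int) -> int:
    """Map the pre/post pulse interval to a signed weight increment."""
    dt = t_post - t_pre
    if abs(dt) > WINDOW:
        return 0                          # outside the STDP window: no change
    if dt >= 0:
        return WINDOW - dt                # pre before post: potentiate (state S1)
    return -(WINDOW + dt)                 # post before pre: depress (state S2)

def update_weight(weight: int, t_pre: int, t_post: int) -> int:
    """Adder-style update with saturation, standing in for the adder (57)."""
    return max(W_MIN, min(W_MAX, weight + weight_delta(t_pre, t_post)))

# Example: a pre-synaptic pulse at t=2 followed by a post-synaptic pulse at t=3
# strengthens the synapse; the reverse order weakens it.
assert update_weight(100, 2, 3) > 100
assert update_weight(100, 3, 2) < 100
```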

Preferably, the input layer neuron module comprises an input neuron counter, an image pixel point counter and an input neuron; the input end of the input neuron is connected with an image information input connection line, an input neuron serial number connection line and the input neuron counter, and the identification-phase and initialization-phase write enable output port, the identification-phase and initialization-phase write address output port, the identification-phase and initialization-phase learning enable output port and the identification-phase and initialization-phase read-write address output port of the image pixel point counter are connected with the data line control module; the input layer neuron module compresses 196 input layer neurons into 28 composite input layer neurons, corresponding to 28 groups of synapse calculation modules while effectively reducing the amount of digital logic used.
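As a rough software analogue of this input layer, the sketch below serializes the image pixels and emits a pre-synaptic pulse for every pixel whose value exceeds a threshold. The threshold, the 8-bit pixel format and the simple 7-to-1 time multiplexing used to model the 196-to-28 compression are all assumptions made for the example.

```python
# Sketch of the input layer neuron behaviour (assumed threshold and pixel format).
from typing import List

PIXEL_THRESHOLD = 128                 # assumed threshold for an 8-bit pixel
NEURONS = 28                          # 196 pixels shared among 28 composite neurons
PIXELS_PER_NEURON = 196 // NEURONS    # 7 pixels multiplexed onto each neuron

def presynaptic_pulses(pixels: List[int]) -> List[List[int]]:
    """Return, per composite neuron, a 0/1 pulse train over its time slots."""
    pulses = [[0] * PIXELS_PER_NEURON for _ in range(NEURONS)]
    for index, value in enumerate(pixels[:NEURONS * PIXELS_PER_NEURON]):
        neuron, slot = index % NEURONS, index // NEURONS
        if value > PIXEL_THRESHOLD:   # pixel above threshold -> square pulse
            pulses[neuron][slot] = 1
    return pulses
```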

Preferably, the data line control module comprises a data selector. In the training phase, the synapse weight input port of the data selector, together with the write enable signal input port, the write address input port, the write enable input port and the write address input port, controls the reading and writing of synaptic weights in the learning phase and the reading of synapse addresses. The write enable decision port determines the output of the write enable output port and decides which column of the synapse array to train. In the identification phase, the identification-phase and initialization-phase write enable input port, the identification-phase and initialization-phase write address input port, the identification-phase and initialization-phase learning enable input port and the identification-phase and initialization-phase read-write address input port respectively control the reading and writing of synaptic weights in the identification phase. The initialization weight input port controls the input of synaptic weights in the initialization phase.
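In effect, the data line control module behaves like a phase-dependent multiplexer on the synapse array's control lines. The sketch below captures only that selection logic; the phase names and the signal bundle are illustrative, not the actual port list of the design.

```python
# Sketch of the data line control module as a phase-dependent multiplexer.
from dataclasses import dataclass

@dataclass
class BramWrite:
    write_enable: bool
    address: int
    data: int

def select_array_inputs(phase: str,
                        init_bus: BramWrite,
                        train_bus: BramWrite,
                        recog_bus: BramWrite) -> BramWrite:
    """Choose which control bundle drives the synapse array in each phase."""
    if phase == "init":          # initialization: pseudo-random weights written
        return init_bus
    if phase == "train":         # training: plasticity module writes updates
        return train_bus
    return recog_bus             # identification: read-only access for summation
```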

Preferably, the output layer neuron module comprises a state machine, a synapse weight register and a lateral inhibition signal register; in the training phase, when the accumulated value of the synaptic weights exceeds a threshold, the lateral inhibition signal register issues lateral inhibition from the lateral inhibition signal output port to inhibit the output layer neurons of the other columns. In the identification phase, the accumulated synaptic weights are input to the experiment report module from the synaptic weight output port.
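A minimal software model of this winner-take-all behaviour: each output neuron accumulates incoming synaptic weights as a membrane potential, and the first neuron to cross the threshold raises lateral inhibition that stops the others. The threshold value and the per-step evaluation order are assumptions of the sketch.

```python
# Sketch of the output layer WTA mechanism (assumed threshold and step order).
from typing import List, Optional

THRESHOLD = 1000   # assumed membrane potential threshold

def winner_take_all(weight_streams: List[List[int]]) -> Optional[int]:
    """Accumulate weights per neuron; the first to cross THRESHOLD inhibits the rest."""
    if not weight_streams:
        return None
    potentials = [0] * len(weight_streams)
    steps = max(len(s) for s in weight_streams)
    for t in range(steps):
        for neuron, stream in enumerate(weight_streams):
            if t < len(stream):
                potentials[neuron] += stream[t]
            if potentials[neuron] > THRESHOLD:
                return neuron        # lateral inhibition: all other columns stop
    return None                      # no neuron fired within the input window
```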

Preferably, the experiment report module comprises a code conversion module, a label number register, a membrane potential register, a maximum membrane potential register, a membrane potential judger, a correct image counter and an accuracy calculator. In the training phase, the inhibition signal of each output layer neuron is input through the lateral inhibition input port, the code conversion module converts the input inhibition signal into a 4-bit binary code, and at the same time the label input port matches the correct label with the converted 4-bit binary code and registers it in the label number register. In the identification phase, the accumulated synaptic weights of each output layer neuron are input through the weight input port, the membrane potential register selects the maximum membrane potential and inputs it into the maximum membrane potential register, the membrane potential judger judges whether the label of the identified image is consistent with the label of the learned image, the correct image counter counts the correctly identified images, the result is calculated by the accuracy calculator, and the result is output through the accuracy output port.
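The identification-phase bookkeeping of the experiment report module can be modelled as below: pick the output neuron with the largest accumulated membrane potential, compare its learned label with the true label, and keep a running accuracy. The explicit neuron-to-label mapping passed in here stands in for the label number register filled during training; it is an assumption of the sketch.

```python
# Sketch of the experiment report module's identification-phase bookkeeping.
from typing import Dict, List, Tuple

def classify(potentials: List[int], neuron_to_label: Dict[int, int]) -> int:
    """Label of the output neuron with the maximum membrane potential."""
    winner = max(range(len(potentials)), key=lambda n: potentials[n])
    return neuron_to_label[winner]

def accuracy(results: List[Tuple[List[int], int]],
             neuron_to_label: Dict[int, int]) -> float:
    """Fraction of test images whose predicted label matches the true label."""
    correct = sum(1 for potentials, true_label in results
                  if classify(potentials, neuron_to_label) == true_label)
    return correct / len(results) if results else 0.0
```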

Preferably, the state jump mode of the synapse calculation module is specifically as follows: state S0 is the initialization stage of iterative learning, in which the synaptic weights are initialized by generating them from pseudo-random numbers; state S2 is entered when the learning enable terminal is at high level and a post-synaptic pulse is present under condition (2), and state S1 is entered when the learning enable terminal is at high level and a pre-synaptic pulse is present under condition (3);

state S1 is the synaptic weight enhancement state, in which the synaptic weight is increased; state S0 is entered under condition (1), that is, when the learning enable terminal is at low level or the counter value reaches 4; state S2 is the synaptic weight weakening state, in which the synaptic weight is reduced; state S0 is entered under condition (1), when the learning enable terminal is at low level or the counter value reaches 4;

condition (1): the learning enable terminal is at low level or the counter value reaches 4;

condition (2): the learning enable terminal is at high level and a post-synaptic pulse is present;

condition (3): the learning enable terminal is at high level and a pre-synaptic pulse is present;

state S0: an initialization stage of iterative learning;

state S1: a synaptic weight enhancement state;

state S2: a synaptic weight weakening state;

state S3: an unlearned initialization state.

Preferably, the state jump mode of the output layer neuron is specifically as follows:

State S3 is the unlearned initialization state; state S4 is entered when the learning enable terminal is at high level under condition (5).

State S4 is the competitive learning state, in which the output layer neurons compete to decide which column of the synapse array module learns; state S5 is entered when the lateral inhibition signal 62 is high under condition (7), and state S6 is entered when the lateral inhibition signal input 56 is high under condition (6).

State S5 is the learning competition success state, which is mainly the learning phase; state S7 is entered when the learning enable terminal is at low level under condition (4).

State S6 is the learning competition failure state; state S3 is entered when the learning enable terminal is at low level under condition (4).

State S7 is the identification phase; state S8 is entered when the learning enable terminal is at high level under condition (5).

State S8 is the non-interference stage, in which a neuron that has learned successfully does not interfere with the neurons that have not learned; state S4 is entered when the learning enable terminal is at low level under condition (4).

condition (4): the learning enable terminal is at low level;

condition (5): the learning enable terminal is at high level;

condition (6): the lateral inhibition signal input terminal is at high level;

condition (7): the lateral inhibition signal is at high level;

state S3: an unlearned initialization state;

state S4: a competitive learning state;

state S5: the learning competition is successful;

state S6: failure of the learning competition;

state S7: a recognition stage;

state S8: a non-interference stage.

Preferably, the synapse array module is composed of 10 groups of BRAMs in the FPGA, and the write enable signals of the BRAM groups are independent of one another. In the initialization phase, randomized synaptic weights are written into all 10 groups of synapse arrays. In the training phase, the iterative update of the synaptic weights only concerns the column of the synapse array that wins the current competition: the write enable of that BRAM module is at high level, while the write enables of the other BRAM modules are at low level.
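The synapse array can be modelled as ten independently write-enabled memory banks. The sketch below fills every bank with pseudo-random weights at initialization and only accepts writes into a bank whose write enable is asserted, mirroring the one-column-per-class update; the bank depth, the 8-bit weight width and the use of Python's random module in place of a hardware pseudo-random generator are assumptions.

```python
# Sketch of the 10-bank synapse array with independent write enables.
import random

BANKS, DEPTH, W_MAX = 10, 784, 255   # assumed depth and 8-bit weight width

class SynapseArray:
    def __init__(self, seed: int = 0) -> None:
        rng = random.Random(seed)                     # stands in for the hardware PRNG
        self.banks = [[rng.randint(0, W_MAX) for _ in range(DEPTH)]
                      for _ in range(BANKS)]
        self.write_enable = [False] * BANKS           # one enable per bank (column)

    def write(self, bank: int, address: int, weight: int) -> None:
        """Only the competition-winning column has its write enable high."""
        if self.write_enable[bank]:
            self.banks[bank][address] = weight

    def read(self, bank: int, address: int) -> int:
        return self.banks[bank][address]
```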

Preferably, the method comprises an initialization phase, a training phase and a recognition phase. In the initialization phase, pseudo-random numbers are written into the synapse array module as the initial synaptic weights. In the training phase, the pixel information of an image is input serially to the input layer neuron module, which first judges whether an input pixel exceeds a threshold and, if so, generates a pulse square wave, namely a pre-synaptic pulse. The pre-synaptic pulse is input to the plasticity learning module, which reads the corresponding synaptic weight from the synapse array module, determines how quickly the post-synaptic pulse is emitted according to the magnitude of the synaptic weight so as to determine the size of the weight increment, and writes the updated synaptic weight back into the synapse array module. The output layer neuron module sums the synaptic weights input from the synapse array module and registers the sum in a membrane potential register; when its sum is the first to exceed the set membrane potential threshold, it outputs a lateral inhibition signal to inhibit the output layer neurons of the other columns, realizing a WTA mechanism and the learning of one class of images by one column of the synapse array module. In the recognition phase, the pixels of the image to be recognized are input sequentially to the input layer neurons, the pixels exceeding the threshold generate pulses, the synaptic weights are summed, the maximum membrane potential among the output layer neurons is found, and its label is compared with the label of the image to be recognized to judge whether the recognition is correct.
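Putting the three phases together, the following self-contained outline runs initialization, a training pass with the WTA rule and a recognition pass. It is only a behavioural sketch: the timing-dependent weight update is collapsed into a unit increment for the winning column, and all constants (196 pixels, 10 classes, thresholds) are assumptions or simplifications, not values taken from the design.

```python
# End-to-end behavioural outline of the method (all constants assumed).
import random

CLASSES, PIXELS = 10, 196
PIXEL_THRESHOLD, W_MAX = 128, 255

def train(images, labels, seed=0):
    """Initialization plus a simplified training pass (unit potentiation only)."""
    rng = random.Random(seed)
    weights = [[rng.randint(0, W_MAX) for _ in range(PIXELS)] for _ in range(CLASSES)]
    label_of = {}
    for image, label in zip(images, labels):
        pulses = [i for i, p in enumerate(image) if p > PIXEL_THRESHOLD]
        potentials = [sum(weights[c][i] for i in pulses) for c in range(CLASSES)]
        winner = max(range(CLASSES), key=lambda c: potentials[c])   # WTA
        label_of[winner] = label
        for i in pulses:                  # potentiate only the winning column
            weights[winner][i] = min(W_MAX, weights[winner][i] + 1)
    return weights, label_of

def recognize(weights, label_of, image):
    """Sum weights of the pulsed pixels and return the winning column's label."""
    pulses = [i for i, p in enumerate(image) if p > PIXEL_THRESHOLD]
    potentials = [sum(weights[c][i] for i in pulses) for c in range(CLASSES)]
    winner = max(range(CLASSES), key=lambda c: potentials[c])
    return label_of.get(winner)
```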

Compared with the prior art, the invention has the following effects: the system effectively applies the WTA mechanism and the STDP rule to a spiking neural network; each part of the system can be realized in hardware, yielding a spiking neural network hardware system that processes data in parallel; in image, voice and text recognition, the architecture judges the learned model through the membrane potentials output by the output layer neurons. The spiking neural network architecture of this design is realized on an FPGA (field-programmable gate array), which offers high flexibility and reusability.

Drawings

FIG. 1 is a schematic diagram of a spiking neural network architecture;

FIG. 2 is a schematic diagram of an input layer neuron module;

FIG. 3 is a schematic diagram of the plasticity learning module;

FIG. 4 is a schematic diagram of a data line control module;

FIG. 5 is a schematic diagram of a synapse array module;

FIG. 6 is a schematic diagram of an output layer neuron module;

FIG. 7 is a schematic diagram of an experiment report module;

FIG. 8 is a schematic diagram of a synapse calculation module;

FIG. 9 is a state jump diagram for a synapse calculation module;

FIG. 10 is a state-jump diagram for output layer neurons.

Detailed description of the invention

The all-digital spiking neural network hardware system based on the STDP rule comprises an input layer neuron module 1, a plasticity learning module 2, a data line control module 3, a synapse array module 4, an output layer neuron module 5 and an experiment report module 6;

the output end of the input layer neuron module 1 is connected with the input end of the data line control module 3 and the input end of the plasticity learning module 2, the output end of the plasticity learning module 2 is connected with the input end of the data line control module 3, the output end of the data line control module 3 is connected with the input end of the synapse array module 4, the output end of the synapse array module 4 is connected with the input end of the output layer neuron module 5, and the output end of the output layer neuron module 5 is connected with the input end of the experiment report module 6;

the plasticity learning module comprises a synapse address registering module 21, a plurality of synapse calculating modules 22, a synapse weight register 26 and a plurality of post-synaptic pulse generating modules 27; the output of the input layer neuron module 1 inputs the pre-synaptic pulse 12 to each of the synaptic computation modules 22; each synapse calculating module 22 calculates updated synapse weights and inputs the updated synapse weights to a synapse weight register 26 through a second synapse weight connection line 25, and inputs the updated synapse weights to each post-synaptic pulse generating module 27 through a first synapse weight connection line 23; the output terminal of the post-synaptic pulse generating module 27 inputs the post-synaptic pulse to each synapse calculating module 22 through the post-synaptic pulse connecting line 24; the synapse weight register 26 is connected with the synapse array module 4 through a synapse weight output port 40; the output end of the synapse address registering module 21 is connected with the synapse array module 4;

each synapse calculation module 22 comprises a state machine and an adder 57; the state machine reads a synaptic weight 40 from the synapse array module 4 and inputs it to the post-synaptic pulse generation module 27; the post-synaptic pulse generation module determines the generation time of the post-synaptic pulse 24 according to the magnitude of the synaptic weight, thereby adjusting the time interval between the pre-synaptic and post-synaptic pulses, and transmits a synaptic weight increment to the adder 57 according to this time interval to complete the update of the synaptic weight; the updated synaptic weight is finally transmitted to the synapse array module 4 for storage.

The input layer neuron module comprises an input neuron counter 9, an image pixel point counter 10 and an input neuron 11; the input end of the input neuron 11 is connected with an image information input connection line 7, an input neuron serial number connection line 8 and the input neuron counter 9, and the identification-phase and initialization-phase write enable output port 13, the identification-phase and initialization-phase write address output port 14, the identification-phase and initialization-phase learning enable output port 15 and the identification-phase and initialization-phase read-write address output port 16 of the image pixel point counter 10 are connected with the data line control module 3; the input layer neuron module compresses 196 input layer neurons into 28 composite input layer neurons, corresponding to 28 groups of synapse calculation modules while effectively reducing the amount of digital logic used.

The data line control module 3 comprises a data selector 35. In the training phase, the synapse weight input port 28 of the data selector 35 controls, through the write enable signal input port 29, the write address input port 30, the write enable input port 31 and the write address input port 32, the reading and writing of synaptic weights in the learning phase and the reading of synapse addresses. The write enable decision port 33 determines the output of the write enable output port 41 and decides which column of the synapse array to train. In the recognition phase, the recognition-phase and initialization-phase write enable input port 13, the recognition-phase and initialization-phase write address input port 14, the recognition-phase and initialization-phase learning enable input port 15 and the recognition-phase and initialization-phase read-write address input port 16 respectively control the reading and writing of synaptic weights in the recognition phase. The initialization weight input port 34 controls the input of synaptic weights during the initialization phase.

The output layer neuron module 5 comprises a state machine, a synapse weight register 43 and a lateral inhibition signal register 44; in the training phase, when the accumulated value of the synaptic weights exceeds a threshold, the lateral inhibition signal register 44 issues lateral inhibition from a lateral inhibition signal output port 45 to inhibit the output layer neurons of the other columns. In the recognition phase, the accumulated synaptic weights are input to the experiment report module 6 from the synaptic weight output port 46.

The experiment report module 6 comprises a code conversion module 49, a label number register 50, a membrane potential register 51, a maximum membrane potential register 52, a membrane potential judger 53, a correct image counter 54 and an accuracy calculator 55. In the training phase, the lateral inhibition input port 47 inputs the inhibition signal of each output layer neuron 5, the code conversion module 49 converts the input inhibition signal into a 4-bit binary code, and at the same time the label input port 48 matches the correct label with the converted 4-bit binary code and registers it in the label number register 50. In the recognition phase, the accumulated synaptic weights of each output layer neuron are input through the weight input port 47, the membrane potential register 51 selects the maximum membrane potential and inputs it into the maximum membrane potential register 52, the membrane potential judger 53 judges whether the label of the recognized image is consistent with the label of the learned image, the correct image counter 54 counts the correctly recognized images, the result is calculated by the accuracy calculator 55, and the result is output by the accuracy output port 56.

The state jump mode of the synapse calculation module is specifically as follows: state S0 is the initialization stage of iterative learning, in which the synaptic weights are initialized by generating them from pseudo-random numbers; state S2 is entered when the learning enable terminal is at high level and a post-synaptic pulse is present under condition (2), and state S1 is entered when the learning enable terminal is at high level and a pre-synaptic pulse is present under condition (3);

state S1 is the synaptic weight enhancement state, in which the synaptic weight is increased; state S0 is entered under condition (1), that is, when the learning enable terminal is at low level or the counter value reaches 4; state S2 is the synaptic weight weakening state, in which the synaptic weight is reduced; state S0 is entered under condition (1), when the learning enable terminal is at low level or the counter value reaches 4;

condition (1): the learning enable terminal is at low level or the counter value reaches 4;

condition (2): the learning enable terminal is at high level and a post-synaptic pulse is present;

condition (3): the learning enable terminal is at high level and a pre-synaptic pulse is present;

state S0: an initialization stage of iterative learning;

state S1: a synaptic weight enhancement state;

state S2: a synaptic weight weakening state;

state S3: an unlearned initialization state.

The state jump mode of the output layer neuron is specifically as follows:

State S3 is the unlearned initialization state; state S4 is entered when the learning enable terminal is at high level under condition (5).

State S4 is the competitive learning state, in which the output layer neurons compete to decide which column of the synapse array module learns; state S5 is entered when the lateral inhibition signal 62 is high under condition (7), and state S6 is entered when the lateral inhibition signal input 56 is high under condition (6).

State S5 is the learning competition success state, which is mainly the learning phase; state S7 is entered when the learning enable terminal is at low level under condition (4).

State S6 is the learning competition failure state; state S3 is entered when the learning enable terminal is at low level under condition (4).

State S7 is the recognition phase; state S8 is entered when the learning enable terminal is at high level under condition (5).

State S8 is the non-interference stage, in which a neuron that has learned successfully does not interfere with the neurons that have not learned; state S4 is entered when the learning enable terminal is at low level under condition (4).

condition (4): the learning enable terminal is at low level;

condition (5): the learning enable terminal is at high level;

condition (6): the lateral inhibition signal input terminal is at high level;

condition (7): the lateral inhibition signal is at high level;

state S3: an unlearned initialization state;

state S4: a competitive learning state;

state S5: the learning competition is successful;

state S6: failure of the learning competition;

state S7: a recognition stage;

state S8: a non-interference stage.

The synapse array module is composed of 10 groups of BRAMs in the FPGA, and the write enable signals of the BRAM groups are independent of one another. In the initialization phase, randomized synaptic weights are written into all 10 groups of synapse arrays. In the training phase, the iterative update of the synaptic weights only concerns the column of the synapse array that wins the current competition: the write enable of that BRAM module is at high level, while the write enables of the other BRAM modules are at low level.

The implementation method of the all-digital spiking neural network hardware system based on the STDP rule specifically comprises an initialization phase, a training phase and a recognition phase. In the initialization phase, pseudo-random numbers are written into the synapse array module as the initial synaptic weights. In the training phase, the pixel information of an image is input serially to the input layer neuron module, which first judges whether an input pixel exceeds a threshold and, if so, generates a pulse square wave, namely a pre-synaptic pulse. The pre-synaptic pulse is input to the plasticity learning module, which reads the corresponding synaptic weight from the synapse array module, determines how quickly the post-synaptic pulse is emitted according to the magnitude of the synaptic weight so as to determine the size of the weight increment, and writes the updated synaptic weight back into the synapse array module. The output layer neuron module sums the synaptic weights input from the synapse array module and registers the sum in a membrane potential register; when its sum is the first to exceed the set membrane potential threshold, it outputs a lateral inhibition signal to inhibit the output layer neurons of the other columns, realizing a WTA mechanism and the learning of one class of images by one column of the synapse array module. In the recognition phase, the pixels of the image to be recognized are input sequentially to the input layer neurons, the pixels exceeding the threshold generate pulses, the synaptic weights are summed, the maximum membrane potential among the output layer neurons is found, and its label is compared with the label of the image to be recognized to judge whether the recognition is correct.

The first embodiment is as follows:

Fig. 1 shows an all-digital spiking neural network hardware system based on the STDP rule, which includes an input layer neuron module 1, a plasticity learning module 2, a data line control module 3, a synapse array module 4, an output layer neuron module 5, and an experiment report module 6.

In the system, an input layer neuron module 1 converts input image information into presynaptic pulses, the presynaptic pulses are input into a plasticity learning module 2, the plasticity learning module 2 updates synaptic weights according to time intervals of the presynaptic pulses and the postsynaptic pulses, the updated synaptic weights are stored in a synaptic array module 4, an output layer neuron module 5 outputs transverse inhibition signals, and finally an experiment report module 6 calculates the identification rate.

Fig. 2 shows the input layer neuron module 1 according to the first embodiment, wherein image information is input from the image information input terminal 17 and routed via the image information input connection line 7 into the pre-synaptic pulse generation module 10, which generates pre-synaptic pulses 12 for the plasticity learning module 2. The control information of the input neuron counter 9 is input in turn to the input neuron 11 and the image pixel point counter 10, which drives a write enable output port 13 for the identification and initialization stages, a write address output port 14 for the identification and initialization stages, a learning enable output port 15 for the identification and initialization stages, and a read-write address output port 16 that controls reading and writing of the synaptic weights in the initialization and identification stages.

As shown in fig. 3, in the plasticity learning module 2 of the first embodiment, the pre-synaptic pulse 12 is input from the input layer neuron module 1 into the synapse calculation module 22; the synapse calculation module 22 calculates the updated synaptic weight, inputs it into the synapse weight register 26, and stores it in the synapse array module 4; the synapse array module 4 inputs the synaptic weights into the synapse calculation module 22 through the synaptic weight input port, and the read-write address input port is connected with the synapse weight registering module.

FIG. 4 shows an embodiment of the data line control module 3, which controls the reading and writing of synaptic weights during the training and identification phases. In the training phase, the write enable signal output port 29, the write address output port 30, the write enable output port 31 and the write address output port 32 respectively control the reading and writing of the synaptic weights in the learning phase and the reading of the synapse addresses. The write enable decision port 33 determines the output of the write enable output port 41 and decides which column of the synapse array to train. In the identification phase, the identification-phase and initialization-phase write enable output port 13, the identification-phase and initialization-phase write address output port 14, the identification-phase and initialization-phase learning enable output port 15 and the identification-phase and initialization-phase read-write address output port 16 respectively control the reading and writing of the synaptic weights in the identification phase. The initialization weight input port 34 controls the input of synaptic weights during the initialization phase.

Fig. 5 shows the synapse array module 4 according to the first embodiment; a read enable output port 36, a read address output port 37, a write address input port 39 and a synapse weight output port 40 respectively control the reading of the synaptic weights stored in the synapse array module 4. During training, the write enable signal 41 independently controls the writing of data into each column of the synapse array module 4, and for each picture class only one column of the synapse storage module can be written.

In the first embodiment, as shown in fig. 6, the output layer neuron module 5 performs state jumps according to the learning enable signal 18 and the lateral inhibition signal 47; the synaptic weights 42 are input from the synapse array module 4 into the synapse weight register 43, and during the training phase, when the accumulated value of the synaptic weights exceeds the threshold, lateral inhibition is issued from the lateral inhibition signal register 44 to inhibit the output layer neurons of the other columns. In the identification phase, the accumulated synaptic weights are input to the experiment report module 6 from the synaptic weight output port 46.

As shown in fig. 7, in the experiment report module 6 of the first embodiment, in the training stage the lateral inhibition input port 47 inputs the inhibition signal of each output layer neuron 5, the code conversion module 49 converts the input inhibition signal into a 4-bit binary code, and the label input port 48 matches the correct label with the converted 4-bit binary code and registers it in the label number register 50. In the identification stage, the accumulated synaptic weights of each output layer neuron are input through the weight input port 47, the membrane potential register 51 selects the maximum membrane potential and inputs it into the maximum membrane potential register 52, the membrane potential judger 53 judges whether the label of the identified image is consistent with the label of the learned image, the correct image counter 54 counts the correctly identified images, the result is calculated by the accuracy calculator 55, and the result is output from the accuracy output port 56.

In the first embodiment, as shown in fig. 8, in the synapse calculation module 22, during the training phase a pre-synaptic pulse 12 is input from the input layer neuron module 1 to the state machine; the state machine reads the synaptic weight 40 from the synapse array module 4 and inputs it to the post-synaptic pulse generation module 27, which determines the time at which the post-synaptic pulse 24 is generated according to the magnitude of the synaptic weight, thereby adjusting the time interval between the pre-synaptic and post-synaptic pulses; according to this time interval, a synaptic weight increment is transmitted to the adder 57 to complete the update of the synaptic weight, and the updated weight is finally transmitted to the synapse array module 4 for storage.

FIG. 9 is a state jump diagram of the synapse calculation module in the first embodiment.

State S0, an initialization phase of iterative learning, where initialization of synaptic weights is mainly performed, and synaptic weights are generated by applying pseudo-random numbers. The state S2 is entered with the learning enable terminal at high level and with a post-synaptic pulse under condition (2), and the state S1 is entered with the learning enable terminal at high level and with a pre-synaptic pulse under condition (3).

State S1 is the synaptic weight enhancement state, which mainly increases the synaptic weight; state S0 is entered under condition (1), when the learning enable terminal is at low level or the counter value reaches 4.

State S2 is the synaptic weight weakening state, which mainly decreases the synaptic weight; state S0 is entered under condition (1), when the learning enable terminal is at low level or the counter value reaches 4.
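The state jumps of FIG. 9 can be transcribed into a small Python state machine as follows; the counter limit of 4 and conditions (1)-(3) come from the text, while the unit weight step and the 8-bit saturation are assumptions of the sketch.

```python
# Sketch of the synapse calculation state machine of FIG. 9 (S0 / S1 / S2).

S0, S1, S2 = "S0", "S1", "S2"    # init, weight enhancement, weight weakening

def next_state(state, learn_en, pre_pulse, post_pulse, counter):
    if state == S0:
        if learn_en and post_pulse:      # condition (2): go to weakening
            return S2
        if learn_en and pre_pulse:       # condition (3): go to enhancement
            return S1
        return S0
    # S1 and S2 both fall back to S0 under condition (1)
    if not learn_en or counter == 4:     # condition (1): enable low or counter == 4
        return S0
    return state

def apply_state(state, weight):
    """Weight action in each state (unit step and 8-bit range are assumptions)."""
    if state == S1:
        return min(255, weight + 1)
    if state == S2:
        return max(0, weight - 1)
    return weight
```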

FIG. 10 is a diagram illustrating state jumps for neurons in the output layer according to one embodiment.

State S3, initialization state not learned, enters state S4 with the learn enable high at condition (5).

State S4 is a Competition learning state, in which the synaptic array module in the row is mainly contended for learning, the lateral inhibit signal 62 goes high under condition (7) to state S5, and the lateral inhibit signal 56 goes high under condition (6) to state S6.

State S5 learning race success state, which is mainly the learning phase, enters state S7 with the learning enable terminal low under condition (4).

State S6 learning race fail state, learning enable is low on condition (4) and state S3 is entered.

State S7 recognition phase, learning Enable high at Condition (5) enters State S8.

In the state S8, in the non-interference stage, the neuron that has successfully learned does not interfere with the non-learned neuron, and enters the state S4 under the condition (4) that the learning enable terminal is at a low level.
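The output layer neuron transitions of FIG. 10 can likewise be expressed as a transition function; only the conditions named in the text (learning enable, the inhibition signal this neuron raises, and the inhibition signal it receives) are used, and the signal names are illustrative.

```python
# Sketch of the output layer neuron state machine of FIG. 10 (S3 .. S8).

S3, S4, S5, S6, S7, S8 = "S3", "S4", "S5", "S6", "S7", "S8"

def next_state(state, learn_en, inhibit_out, inhibit_in):
    """inhibit_out: this neuron fired first; inhibit_in: another column fired first."""
    if state == S3 and learn_en:                 # condition (5)
        return S4
    if state == S4:
        if inhibit_out:                          # condition (7): won the competition
            return S5
        if inhibit_in:                           # condition (6): lost the competition
            return S6
        return S4
    if state == S5 and not learn_en:             # condition (4)
        return S7
    if state == S6 and not learn_en:             # condition (4)
        return S3
    if state == S7 and learn_en:                 # condition (5)
        return S8
    if state == S8 and not learn_en:             # condition (4)
        return S4
    return state
```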
