Method for eliminating data copies, neural network processor, and electronic product
1. A neural network processor comprising at least a first neuron cluster, a second neuron cluster, and a third neuron cluster, each of which comprises one or more neurons, each neuron being a circuit that mimics a biological neuron and each neuron cluster being a collection of circuits that mimic biological neurons, wherein:
the neural network processor further comprises a first lookup table and a second lookup table;
the first lookup table is configured to: map an address of an activated neuron in the first neuron cluster to an address of a corresponding neuron in the second neuron cluster when a neuron in the first neuron cluster is activated to output;
the second lookup table is configured to: obtain, according to the neuron address output by the first lookup table, weight data with which the neuron pointed to by that address projects to a target neuron in the third neuron cluster;
an output of the activated neuron in the first neuron cluster is projected to the target neuron in the third neuron cluster, weighted according to the obtained weight data.
2. The neural network processor of claim 1, wherein:
the second lookup table is further configured to: obtain a target neuron address for projection to the third neuron cluster according to the neuron address output by the first lookup table.
3. The neural network processor of claim 2, wherein:
the neural network processor is a spiking neural network processor, and a neuron being activated to output means that the neuron emits a pulse.
4. The neural network processor of any one of claims 1 to 3, wherein:
the first lookup table is further configured to: when a neuron in the second neuron cluster is activated to output, map an address of the activated neuron in the second neuron cluster to the address of that same neuron in the second neuron cluster.
5. The neural network processor of claim 4, wherein:
the output of the first neuron cluster is also projected to the second neuron cluster; or
the output of the first neuron cluster is not projected to the second neuron cluster.
6. The neural network processor of any one of claims 1-3 or 5, wherein:
the first and second neuron clusters comprise equal numbers of neurons, and the weight matrix of the first neuron cluster projected to the third neuron cluster is numerically equal to the weight matrix of the second neuron cluster projected to the third neuron cluster.
7. The neural network processor of any one of claims 1-3 or 5, wherein:
the second lookup table records only the addresses of the target neurons in the third neuron cluster onto which the second neuron cluster projects, and the weight data projected to those target neurons.
8. A neural network processor comprising at least a first neuron cluster, a second neuron cluster, and a third neuron cluster, each of which comprises one or more neurons, each neuron being a circuit that mimics a biological neuron and each neuron cluster being a collection of circuits that mimic biological neurons, wherein:
weight data is physically distributed and stored in a plurality of weight data storage devices in the neural network processor;
the neural network processor further comprises a third lookup table and a fourth lookup table;
the third lookup table is configured to: map an address of an activated neuron in the first neuron cluster to an address of a corresponding neuron in the second neuron cluster when a neuron in the first neuron cluster is activated to output;
the fourth lookup table is configured to: obtain a target neuron address for projection to the third neuron cluster according to the neuron address output by the third lookup table;
obtain, at least according to the obtained target neuron address, the weight data storage device for the target neuron in the third neuron cluster and the weight data stored therein that is projected to the target neuron;
an output of the activated neuron in the first neuron cluster is projected to the target neuron in the third neuron cluster, weighted according to the obtained weight data.
9. The neural network processor of claim 8, wherein:
the third lookup table is further configured to: when a neuron in the second neuron cluster is activated to output, map an address of the activated neuron in the second neuron cluster to the address of that same neuron in the second neuron cluster.
10. The neural network processor of claim 8, wherein:
the output of the first neuron cluster is also projected to the second neuron cluster; or
the output of the first neuron cluster is not projected to the second neuron cluster.
11. The neural network processor of any one of claims 8-9, wherein:
the first and second neuron clusters comprise equal numbers of neurons, and the weight matrix of the first neuron cluster projected to the third neuron cluster is numerically equal to the weight matrix of the second neuron cluster projected to the third neuron cluster.
12. The neural network processor of any one of claims 8-10, wherein:
the neural network processor is a spiking neural network processor, and a neuron being activated to output means that the neuron emits a pulse.
13. A neural network processor comprising at least a first neuron cluster and a second neuron cluster, each of which comprises one or more neurons, each neuron being a circuit that emulates a biological neuron and each neuron cluster being a collection of circuits that emulate biological neurons, wherein:
the neural network processor further includes: a first alias address storage, the first neuron cluster associated with the first alias address storage;
when a neuron in the first neuron cluster is activated to output, looking up a corresponding alias address of the neuron in the first alias address storage, the alias address being an address of a neuron in the second neuron cluster;
registering the output of the activated neuron in the first neuron cluster with that neuron in the second neuron cluster;
outputting at least the registered output of the activated neuron in the first neuron cluster when the neuron in the second neuron cluster is activated to output.
14. The neural network processor of claim 13, wherein:
the neural network processor further comprises a third neuron cluster, and the third neuron cluster comprises one or more neurons; the third neuron cluster is a collection of circuits that mimic biological neurons;
a neuron in the second neuron cluster being activated to output specifically means:
the neuron in the second neuron cluster projects an output to a neuron in the third neuron cluster.
15. The neural network processor of claim 13 or 14, wherein:
the outputting of at least the registered output of the activated neurons in the first neuron cluster specifically includes:
outputting the registered output of the activated neurons in the first neuron cluster; and,
outputting an output fired based on the inputs of the neurons in the second neuron cluster.
16. The neural network processor of claim 13, wherein:
the number of alias addresses stored in the first alias address storage is equal to the number of neurons in the first neuron cluster, or is a positive integer multiple, greater than or equal to 2, of the number of neurons in the first neuron cluster.
17. The neural network processor of claim 14, wherein:
the output of the first neuron cluster is projected to the second neuron cluster; the output of the second neuron cluster is projected onto the third neuron cluster.
18. The neural network processor of claim 13, 14 or 16, wherein:
the output of the first neuron cluster is not projected to the second neuron cluster.
19. The neural network processor of claim 16, wherein:
when the number of alias addresses stored in the first alias address storage is a positive integer multiple, greater than or equal to 2, of the number of neurons in the first neuron cluster:
when a neuron in the first neuron cluster is activated to output, all alias addresses corresponding to that neuron in the first alias address storage are looked up, the output of the activated neuron in the first neuron cluster is registered with the neurons pointed to by all of those alias addresses, and when a neuron pointed to by an alias address is activated to output, at least the registered output of the activated neuron in the first neuron cluster is output.
20. The neural network processor of claim 14, 16 or 19, wherein:
the second neuron cluster is associated with a second alias address storage, which stores the addresses of the neurons with which outputs of activated neurons in the second neuron cluster need to be registered.
21. The neural network processor of claim 20, wherein:
the second alias address storage stores addresses of neurons in a third cluster of neurons.
22. The neural network processor of claim 13, 14, 16, 19 or 21, wherein:
the first neuron cluster and the second neuron cluster each include only one neuron.
23. The neural network processor of claim 13, 14, 16, 19 or 21, wherein:
for any neuron with index i, the index j of its alias neuron satisfies j > i, where i and j are positive integers and the indices are assigned according to the projection order of the neurons;
the neural network processor is a spiking neural network processor, and a neuron being activated to output means that the neuron emits a pulse.
24. A neural network processor comprising at least first and second neurons and a third neuron, the first and second neurons and the third neuron each being circuitry that mimics a biological neuron, characterized by:
the neural network processor further includes: a first alias address store, the first neuron associated with the first alias address store;
when the first neuron is activated to output, looking up a corresponding alias address of the first neuron in the first alias address store, the alias address being an address of the second neuron;
registering the output of the activated first neuron with the second neuron;
outputting at least the registered output of the activated first neuron when the second neuron projects an output to the third neuron.
25. The neural network processor of claim 24, wherein:
the first neuron also projects an output to a second neuron; or the first neuron does not project output to the second neuron.
26. A neural network processor comprising at least a first neuron cluster, a second neuron cluster, and a third neuron cluster, each of which comprises one or more neurons, each neuron being a circuit that mimics a biological neuron and each neuron cluster being a collection of circuits that mimic biological neurons, wherein:
the first neuron cluster is aliased to a second neuron cluster; the output of the first neuron cluster is projected to the second neuron cluster;
the output of the second neuron cluster is projected onto the third neuron cluster.
27. The neural network processor of claim 26, wherein:
the alias of the second neuron cluster is the third neuron cluster; the output of the third neuron cluster is projected onto a fourth neuron cluster.
28. An electronic product comprising a first interface module, a second interface module, a processing module, and a response module, wherein: the electronic product further comprises a neural network processor as claimed in any one of claims 1-27; the neural network processor is coupled with the processing module through the first interface module, and the processing module is coupled with the response module through the second interface module;
the neural network processor identifies an input environment signal and transmits an identification result to the processing module through the first interface module, and the processing module generates a control instruction according to the identification result and transmits the control instruction to the response module through the second interface module.
29. A method of eliminating a data copy, for use in a neural network processor comprising at least a first neuron, a second neuron, and a third neuron, each being circuitry that emulates a biological neuron, the method comprising:
the neural network processor further includes: a first alias address store, the first neuron associated with the first alias address store;
when the first neuron is activated to output, looking up a corresponding alias address of the first neuron in the first alias address store, the alias address being an address of the second neuron;
registering the output of the activated first neuron with the second neuron;
outputting at least the registered output of the activated first neuron when the second neuron projects an output to the third neuron.
Background
In spiking neural network (SNN) structures, it often happens that the weight data of two neuron clusters (or two neurons) are identical. A neuron is a circuit that mimics a biological neuron, and a neuron cluster is a collection of such circuits. For a spiking neural network processor (also called a neuromorphic processor or a brain-like chip), whose defining feature is ultra-low power consumption, storing such duplicate data means that a larger memory space has to be designed, which in turn increases the power consumption of the chip.
Referring to fig. 1, a schematic diagram of the weights connecting (or projecting) two neuron clusters A and B to neuron cluster C is shown. The connection weight between neuron cluster A and neuron cluster C is W1_AC, and the connection weight between neuron cluster B and neuron cluster C is W2_BC (the indices 1 and 2 indicate data stored in different memory spaces). In a typical neuromorphic system, the network is implemented by initializing two sets of weight data representing the corresponding neuron connections.
Referring to fig. 2, a diagram of two neuron clusters A and B connected to neuron cluster C with equal weights in the prior art is shown. In some spiking neural network structures, neuron clusters A and B are both connected to neuron cluster C, and the connection weight W1_AC between neuron clusters A and C is numerically equal to the connection weight W2_BC between neuron clusters B and C (this is what the present invention means by "equal weights"); one purpose of this is to reduce the complexity of the training process by reducing the number of distinct weights. Although the stored values are equal, the storage location of the connection weight W1_AC is different from that of W2_BC, and W1_AC can also be written as W1_BC (an alternative notation for W1_AC, in which the identical subscript BC indicates that the values are equal). The prior art therefore wastes storage space and leaves room for optimization.
Referring to fig. 3, a schematic diagram of an implementation of equal-weight connections on a standard processor platform (a non-spiking, von Neumann architecture processor such as a CPU or GPU) is shown. To implement the network structure of fig. 2 on a standard processor platform, the outputs of neuron clusters A and B are first summed, and a single connection weight W_BC is then used to implement the weighted connection, as illustrated in the sketch below.
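The sketch below (illustrative sizes and random data; the names and values are assumptions, not part of the prior-art figure) shows why a single stored copy of W_BC suffices on such a platform: summing the cluster outputs before the matrix multiply is numerically equivalent to applying the same weights to each cluster separately.

```python
# A minimal sketch of the fig. 3 approach on a standard processor; sizes and
# values are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
a_out = rng.random(5)           # output vector of neuron cluster A (5 neurons)
b_out = rng.random(5)           # output vector of neuron cluster B (5 neurons)
w_bc = rng.random((5, 3))       # single shared weight matrix projecting onto cluster C

c_in_shared = (a_out + b_out) @ w_bc            # sum first, weight once
c_in_duplicated = a_out @ w_bc + b_out @ w_bc   # store and apply the weights twice
assert np.allclose(c_in_shared, c_in_duplicated)
```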
Referring to fig. 4, a diagram of a residual connection structure that is common in the prior art is shown. A residual connection means that the input and output of a certain module are added together and taken as the final output, which is very common when constructing spiking neural networks. The connection weight between neuron clusters A and B is W_AB, and the connection weight between neuron clusters B and C is W_BC. Adding the input and output of neuron cluster B means that neuron cluster A is also connected to neuron cluster C through the weight matrix W_BC. Thus, numerically equal weight data W_BC exists both on the A-to-C connection and on the B-to-C connection, which results in the waste of storage space described for fig. 2.
Referring to fig. 5, a schematic diagram of a prior-art structure in which neuron clusters establish connections through a LUT (Look-Up Table) is shown. The first neuron set 501 includes a number of neurons, which may be divided into a plurality of clusters, for example neuron clusters A, B, and C. This division may be physical or logical. For a connection between two neurons belonging to two neuron clusters, a mapping from the source neuron to the target neuron is established through the sixth LUT 503, and the weight data corresponding to the connection between the two neurons is then read based on this mapping. This solution is adequate for the situation shown in fig. 1. For the cases shown in figs. 2 and 4, however, the common practice in the art is to copy the same weight data and store the copy in another storage space, which creates a potential waste of memory space.
Referring to fig. 6, a diagram illustrating a prior-art scheme in which weight values are stored locally, adjacent to the neurons, is shown; it is an alternative to fig. 5. In this solution, all weight data are stored adjacent to the corresponding neuron (or neuron cluster), as in prior art 1: WO2016/174113A1. The weight data for the neuron clusters A, B, and C located in the second neuron set 601 in fig. 6 are stored locally for each cluster. In this way, all connection weight data connected (or projected) to a certain neuron are adjacent to that neuron. Note that "adjacent" here is a relative concept compared with traditional centralized storage, similar to the concept of distributed storage. After a certain source neuron in neuron cluster A, B, or C emits a pulse or a pulse sequence, the target neuron of that source neuron is found in the seventh LUT 603, the weight value corresponding to the source neuron is read from the weight storage space adjacent to the target neuron, and the output pulse sequence of the source neuron is projected to the target neuron based on that weight value.
It can be seen from the above description that, for situations in spiking neural network architectures where identical weight values occupy different storage spaces, and especially when such network structures occur in large numbers, how to optimize the storage space, improve its utilization, and reduce the number of memory cells in the designed chip, thereby reducing the power consumption of the chip, is a technical problem that urgently needs to be solved.
Further, the inventors found that, although the neuron model and the information processing mode of a conventional artificial neural network differ from those of a spiking neural network, the scale of parameters such as weights in artificial neural networks keeps growing, for example into the hundreds of millions or even billions, so how to relieve the pressure on weight storage space and improve model efficiency is likewise a problem to be solved.
Disclosure of Invention
In order to improve the utilization efficiency of the chip storage space of a spiking neural network processor, reduce the chip's storage space requirement, and reduce the chip's power consumption, the invention is realized by the following technical solutions:
a neural network processor comprising at least first and second neuron clusters and a third neuron cluster, each comprising one or more neurons, the neural network processor further comprising first and second lookup tables; the first lookup table is configured to: mapping an address of an activated neuron in the first neuron cluster to an address of a corresponding neuron in the second neuron cluster when the neuron in the first neuron cluster is activated to output; the second lookup table is configured to: obtaining weight data of the neuron pointed by the neuron address output by the first lookup table and projected to a target neuron in the third neuron cluster according to the neuron address output by the first lookup table; projecting an output of the activated neuron in the first neuron cluster to a target neuron in the third neuron cluster, weighted according to the obtained weight data.
The neuron is a circuit simulating a biological neuron, and the neuron cluster is a collection of circuits simulating biological neurons.
In one class of embodiments, the second lookup table is further configured to: obtain a target neuron address for projection to the third neuron cluster according to the neuron address output by the first lookup table.
In one class of embodiments, the neural network processor is a spiking neural network processor, and a neuron being activated to output means that the neuron emits a pulse.
In one class of embodiments, the first lookup table is further configured to: when a neuron in the second neuron cluster is activated to output, map an address of the activated neuron in the second neuron cluster to the address of that same neuron in the second neuron cluster.
In some embodiments, the output of the first neuron cluster is also projected to the second neuron cluster; or the output of the first neuron cluster is not projected to the second neuron cluster.
In some embodiments, the first and second neuron clusters comprise an equal number of neurons, and the weight matrix for the first neuron cluster to project to the third neuron cluster is numerically equal to the weight matrix for the second neuron cluster to project to the third neuron cluster.
In some embodiments, the second lookup table records only the addresses of the target neurons in the third neuron cluster onto which the second neuron cluster projects, and the weight data projected to those target neurons.
A neural network processor comprising at least a first and a second neuron cluster and a third neuron cluster, each comprising one or more neurons, with weight data physically distributed and stored in a plurality of weight data storage devices in the neural network processor; the neural network processor further comprises a third lookup table and a fourth lookup table; the third lookup table is configured to: map an address of an activated neuron in the first neuron cluster to an address of a corresponding neuron in the second neuron cluster when a neuron in the first neuron cluster is activated to output; the fourth lookup table is configured to: obtain a target neuron address for projection to the third neuron cluster according to the neuron address output by the third lookup table; obtain, at least according to the obtained target neuron address, the weight data storage device for the target neuron in the third neuron cluster and the weight data stored therein that is projected to the target neuron; an output of the activated neuron in the first neuron cluster is projected to the target neuron in the third neuron cluster, weighted according to the obtained weight data.
In one class of embodiments, the third lookup table is further configured to: when a neuron in the second neuron cluster is activated to output, map an address of the activated neuron in the second neuron cluster to the address of that same neuron in the second neuron cluster.
In some embodiments, the output of the first neuron cluster is also projected to the second neuron cluster; or the output of the first neuron cluster is not projected to the second neuron cluster.
In some embodiments, the first and second neuron clusters comprise an equal number of neurons, and the weight matrix for the first neuron cluster to project to the third neuron cluster is numerically equal to the weight matrix for the second neuron cluster to project to the third neuron cluster.
In one class of embodiments, the neural network processor is a spiking neural network processor, and a neuron being activated to output means that the neuron emits a pulse.
A neural network processor comprising at least a first neuron cluster and a second neuron cluster, each comprising one or more neurons, the neural network processor further comprising: a first alias address storage, the first neuron cluster being associated with the first alias address storage; when a neuron in the first neuron cluster is activated to output, looking up a corresponding alias address of the neuron in the first alias address storage, the alias address being an address of a neuron in the second neuron cluster; registering the output of the activated neuron in the first neuron cluster with that neuron in the second neuron cluster; outputting at least the registered output of the activated neuron in the first neuron cluster when the neuron in the second neuron cluster is activated to output.
In certain embodiments, the neural network processor further comprises a third neuron cluster, and the third neuron cluster comprises one or more neurons;
a neuron in the second neuron cluster being activated to output specifically means:
the neuron in the second neuron cluster projects an output to a neuron in the third neuron cluster.
In a certain class of embodiments, the outputting of at least the registered output of the activated neurons in the first neuron cluster specifically includes:
outputting the registered output of the activated neurons in the first neuron cluster; and,
outputting an output fired based on the inputs of the neurons in the second neuron cluster.
In some embodiments, the number of alias addresses stored in the first alias address storage is equal to the number of neurons in the first neuron cluster or a positive integer multiple thereof greater than or equal to 2.
In some embodiments, the output of the first neuron cluster is projected onto the second neuron cluster; the output of the second neuron cluster is projected onto the third neuron cluster.
In some embodiments, the output of the first neuron cluster is not projected onto the second neuron cluster.
In certain embodiments, when the number of alias addresses stored in the first alias address storage is a positive integer multiple of greater than or equal to 2 of the number of neurons in the first neuron cluster:
when a neuron in the first neuron cluster is activated to output, all alias addresses corresponding to that neuron in the first alias address storage are looked up, the output of the activated neuron in the first neuron cluster is registered with the neurons pointed to by all of those alias addresses, and when a neuron pointed to by an alias address is activated to output, at least the registered output of the activated neuron in the first neuron cluster is output.
In one class of embodiments, the second neuron cluster is associated with a second alias address storage, which stores the addresses of the neurons with which the output of activated neurons in the second neuron cluster needs to be registered.
In certain embodiments, the second alias address storage stores addresses of neurons in a third cluster of neurons.
In some embodiments, the first neuron cluster and the second neuron cluster each comprise only one neuron.
In some embodiments, for any neuron with index i, the index j of its alias neuron satisfies j > i, where i and j are positive integers and the indices are assigned according to the projection order of the neurons; the neural network processor is a spiking neural network processor, and a neuron being activated to output means that the neuron emits a pulse.
A neural network processor, the neural network processor comprising at least a first neuron, a second neuron, and a third neuron, the neural network processor further comprising: a first alias address store, the first neuron being associated with the first alias address store; when the first neuron is activated to output, looking up a corresponding alias address of the first neuron in the first alias address store, the alias address being an address of the second neuron; registering the output of the activated first neuron with the second neuron; outputting at least the registered output of the activated first neuron when the second neuron projects an output to the third neuron.
In some class of embodiments, the first neuron further projects an output to a second neuron; or the first neuron does not project output to the second neuron.
A neural network processor comprising at least a first and a second neuron cluster and a third neuron cluster, and the first and the second neuron cluster and the third neuron cluster each comprise one or more neurons, the first neuron cluster being aliased to the second neuron cluster; the output of the first neuron cluster is projected to the second neuron cluster; the output of the second neuron cluster is projected onto the third neuron cluster.
In some class of embodiments, the alias of the second neuron cluster is the third neuron cluster; the output of the third neuron cluster is projected onto a fourth neuron cluster.
An electronic product comprising first and second interface modules and a processing module, and a response module, the electronic product further comprising a neural network processor as claimed in any preceding claim; the neural network processor is coupled with the processing module through the first interface module, and the processing module is coupled with the response module through the second interface module; the neural network processor identifies an input environment signal and transmits an identification result to the processing module through the first interface module, and the processing module generates a control instruction according to the identification result and transmits the control instruction to the response module through the second interface module.
A method of eliminating a data copy, applied in a neural network processor, the neural network processor comprising at least a first neuron, a second neuron, and a third neuron, the neural network processor further comprising: a first alias address store, the first neuron being associated with the first alias address store; when the first neuron is activated to output, looking up a corresponding alias address of the first neuron in the first alias address store, the alias address being an address of the second neuron; registering the output of the activated first neuron with the second neuron; outputting at least the registered output of the activated first neuron when the second neuron projects an output to the third neuron.
The technical solutions, technical features, and technical means disclosed above may not be completely identical to, or consistent with, those described in the following detailed description. The technical features and technical means disclosed in this section may be reasonably combined with the technical features and technical means disclosed in the subsequent detailed description, thereby disclosing further technical solutions that are beneficial supplements to the detailed description. Likewise, some details in the drawings may not be explicitly described in the specification; however, if a person skilled in the art can deduce the technical meaning of those details from the description of other related text or drawings, common technical knowledge in the art, or other prior art (such as conference or journal articles), then the technical solutions, technical features, and technical means not explicitly described in this section also belong to the technical content disclosed by the present invention and, as stated above, may be used in combination to obtain corresponding new technical solutions. The technical solutions formed by combining technical features disclosed at any position of the present invention serve to support the generalization of the technical solutions, the amendment of the patent documents, and the disclosure of the technical solutions.
Drawings
FIG. 1 is a schematic diagram of the weights connecting two neuron clusters A and B to neuron cluster C;
FIG. 2 is a diagram of two neuron clusters A and B connected to neuron cluster C with equal weights in the prior art;
FIG. 3 is a schematic diagram of equal weight connection implemented under a standard processor platform;
FIG. 4 is a diagram of a residual concatenation structure as is common in the prior art;
FIG. 5 is a diagram illustrating a structure of a prior art connection established by LUT for a neuron cluster;
FIG. 6 is a diagram illustrating a prior art scheme for storing weight values locally in neighboring neurons;
FIG. 7 is a first class of embodiments of the present invention for eliminating weight storage redundancy by introducing additional LUTs;
FIG. 8 is a schematic representation of the organization of several clusters of neurons;
FIG. 9 is a schematic diagram of the mapping relationship of the first LUT;
FIG. 10 is a schematic diagram of the structure of the second LUT;
FIG. 11 is a schematic diagram of a prior art LUT structure;
FIG. 12 is a second class of embodiments of the present invention for eliminating weight storage redundancy by introducing additional LUTs;
FIG. 13 is a third class of embodiments of the present invention based on an Alias (Alias) mechanism;
FIG. 14 is a schematic diagram of transforming a residual connection network structure by an alias mechanism;
FIG. 15 is a schematic diagram of a transform deep multi-layer residual network by an aliasing mechanism;
FIG. 16 is a schematic representation of a certain neuron cluster as an alias of another neuron cluster;
FIG. 17 is a schematic diagram of information projection of neurons in a certain set of neurons;
FIG. 18 is a labeling rule for neurons and their alias neurons;
FIG. 19 is a scheme that enables determination of a destination address from a source address.
Detailed Description
Unless otherwise specified, the neuron of the present invention refers, at the chip level, to hardware that simulates the working principle of a biological neuron, such as a circuit; at the algorithm level, it refers to a biological neuron model simulated in software. The pulse referred to in the present invention is what the neuromorphic field calls a spike. When a neuron cluster contains more than one neuron, the connection weights described in the present invention are weight matrices; the case of a single neuron can be regarded as the special case of a weight matrix with a single numerical value. The following describes the operation of the spiking neural network processor in conjunction with the hardware structure of the chip. Since the various embodiments cannot be exhaustively listed, the numbers and specifications are merely examples, and those skilled in the art can easily conceive of adaptations and modifications in actual use. All LUTs of the present invention are essentially data mapping modules, i.e., given an input they directly yield the corresponding output, and any software or hardware capable of implementing such a mapping function belongs to the LUTs of the present invention. The "neuron to which the address of the neuron points" described at any position in the present invention refers to the neuron whose address is that address. A neuron as described herein is a circuit that mimics a biological neuron, and a neuron cluster is a collection of such circuits.
The neural network processor (NPU, also called a neural network accelerator) referred to at any position in the present invention includes at least spiking neural network processors and artificial neural network processors. The difference between the two is mainly as follows: the former transfers information via pulses or pulse sequences (in a digital chip, a pulse sequence can be represented by an integer, such as a 5-bit integer), while the latter usually uses floating-point values (integers are sometimes chosen to reduce computational cost); the former's neurons have a memory function, while the latter's neurons usually do not. Taking a spiking neural network processor as an example, the invention discloses a method and a hardware system that eliminate weight data copies by combining software and hardware. Because this solution does not change the neuron model itself and does not depend on the spiking neuron model itself, and because the inputs and outputs of neurons are similar, those skilled in the art can, based on this disclosure, reasonably derive or generalize the corresponding processing for more neural network processors (including artificial neural network processors). A neuron being activated to output means, in an artificial neural network, that an output is obtained from the activation function (typically a floating-point number or an integer); in an SNN, it means that a pulse or a pulse sequence is emitted, and a digital circuit implementation may output an integer (e.g., 5 bits).
Fig. 7 is a first class of embodiments of the present invention for eliminating weight storage redundancy by introducing additional LUTs. In this class of embodiments a new LUT 702 is introduced to reduce the memory usage requirements compared to the prior art shown in fig. 5.
The third neuron set 701 comprises a number of neurons (artificial neural network neurons or spiking neurons), such as 32, 1000, or 1,000,000; for example, more neurons are preferred for visual signals, whereas a smaller number of neurons may be selected for low-dimensional signals such as sound. The neurons may be physically or logically divided into a plurality of neuron clusters (also referred to as neuron layers); 3 neuron clusters A, B, and C are shown in the figure by way of example, and the number of neuron clusters may be smaller or larger. The neuron clusters may also be referred to as the first neuron cluster (neuron cluster A), the second neuron cluster (neuron cluster B), the third neuron cluster (neuron cluster C), and the fourth neuron cluster (neuron cluster D) described later. A neuron cluster is a collection comprising one or more neurons. All embodiments of the present invention (not limited to this first class of embodiments), although described in terms of neuron clusters, cover the special case where a neuron cluster has only one neuron, e.g., neuron clusters A, B, and C each having only one neuron.
As an example, FIG. 8 shows neurons A1-A5, B1-B5, and C1-C3 included in neuron clusters A, B, and C, respectively. Neuron clusters A and B have the same number of neurons, and both project to neuron cluster C, so their connection weights to neuron cluster C can be identical and have the potential to be shared.
Taking the scenario constructed in fig. 2 or fig. 4 as an example, the connection of neuron cluster A to C (also called a projection in the present invention) and the connection of neuron cluster B to C have the same weight values. Referring back to fig. 7, when a certain source neuron (generally, the pulse-emitting end) in neuron cluster A fires a pulse (generally several pulses, also referred to as a pulse sequence), that neuron in neuron cluster A is mapped to the corresponding neuron in neuron cluster B by the first LUT 702.
Referring to fig. 9, a diagram of the mapping relationship of the first LUT 702 according to one embodiment is shown. For neuron cluster A, A1 is mapped to B1, A2 is mapped to B2, ..., and A5 is mapped to B5; B1 is mapped to B1, B2 is mapped to B2, ..., and B5 is mapped to B5. With this mapping arrangement, any neuron in neuron cluster A is mapped to a corresponding neuron in neuron cluster B, and the neurons in neuron cluster B are mapped to themselves, so that neuron cluster A is connected to the neurons in neuron cluster C on behalf of neuron cluster B. Therefore, when a neuron in neuron cluster A fires a pulse, it establishes a synaptic connection with a neuron in neuron cluster C using exactly the weight data of the corresponding neuron in neuron cluster B, through the mapping of the first LUT 702; the neurons in neuron cluster B, being mapped to themselves, retain their original function, so their connections to neuron cluster C also keep their original function.
Thus, the second LUT 703 only needs to contain the connection weight data between neuron cluster B and neuron cluster C; combined with the mapping of the first LUT 702, the neurons in both neuron clusters A and B can obtain from the second LUT 703 the neurons projected to in neuron cluster C and the matching connection weights.
In fig. 10, a schematic diagram of the structure of the second LUT 703 is shown. The figure gives all the connection weights established between all neurons in neuron cluster B and all neurons in neuron cluster C. If there are M (≥ 1) neurons in neuron cluster B and N (≥ 1) neurons in neuron cluster C, there will be M × N entries in the LUT to record the weight data.
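A minimal software sketch of this double-LUT scheme is given below; the neuron labels, weight values, and the accumulation variable are illustrative assumptions, and a real processor would realize the tables in hardware. Only the B-to-C weights are stored; neuron cluster A reuses them through the first LUT.

```python
first_lut = {                       # fig. 9: cluster A maps onto cluster B, B maps to itself
    "A1": "B1", "A2": "B2", "A3": "B3", "A4": "B4", "A5": "B5",
    "B1": "B1", "B2": "B2", "B3": "B3", "B4": "B4", "B5": "B5",
}
second_lut = {                      # fig. 10: (B neuron, C neuron) -> weight, M x N entries
    ("B1", "C1"): 0.3, ("B1", "C2"): -0.1, ("B1", "C3"): 0.7,
    ("B2", "C1"): 0.2, ("B2", "C2"): 0.5,  ("B2", "C3"): -0.4,
    # entries for B3..B5 omitted for brevity
}
fan_in_to_c = {"C1": 0.0, "C2": 0.0, "C3": 0.0}

def project(source_neuron: str, spike_count: int) -> None:
    """Route one activated neuron's output to cluster C through both LUTs."""
    mapped = first_lut[source_neuron]                    # first LUT: alias address in B
    for (src, target), weight in second_lut.items():     # second LUT: B -> C weights only
        if src == mapped:
            fan_in_to_c[target] += weight * spike_count  # weighted projection onto C

project("A2", 3)   # a neuron in cluster A reuses B2's stored weights
project("B2", 1)   # neuron B2 itself uses the same single copy
```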
In other words, the present invention discloses: a neural network processor comprising at least first and second neuron clusters and a third neuron cluster, each comprising one or more neurons, the neural network processor further comprising first and second lookup tables; the first lookup table is configured to: map an address of an activated neuron in the first neuron cluster to an address of a corresponding neuron in the second neuron cluster when a neuron in the first neuron cluster is activated to output; the second lookup table is configured to: obtain, according to the neuron address output by the first lookup table, weight data with which the neuron pointed to by that address projects to a target neuron in the third neuron cluster; an output of the activated neuron in the first neuron cluster is projected to the target neuron in the third neuron cluster, weighted according to the obtained weight data.
In the present invention, the expression like "neuron to which a neuron address points" refers to a neuron whose address is the address of the neuron.
In contrast, fig. 11 shows a schematic diagram of the structure of the sixth LUT 503 in the prior art shown in fig. 5. Unlike fig. 10, it includes not only all the connection weight data of neuron cluster B connected to neuron cluster C, but also all the connection weight data of neuron cluster A connected to neuron cluster C; the two sets of weight data are equal in value, and the identical weight data stored there obviously wastes storage space.
Although figs. 10 and 11 show, by way of example, that a certain entry corresponds to a connection between two neurons (left column), this does not mean that the entries are required to store a mapping between a source neuron address and a target neuron address, that is, that an entry must record a source neuron address (or label) and a target neuron address (or label) at the same time; the mapping relationship may, for example, be established through mathematical logic. In the later description of fig. 19, a scheme is given in which an entry does not need to record a source neuron address (or label) and a target neuron address (or label) simultaneously.
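As one illustration only (an assumption for explanatory purposes, not the specific scheme of fig. 19), the sketch below shows how, in a dense, fully connected layout, the position of an entry in a flat weight table can itself encode the source and target addresses through simple arithmetic, so that neither address has to be stored in the entry.

```python
M, N = 5, 3                        # M source neurons in cluster B, N target neurons in cluster C

def entry_index(src: int, tgt: int) -> int:
    return src * N + tgt           # position of the weight in a flat table

def addresses_from_entry(idx: int) -> tuple[int, int]:
    return idx // N, idx % N       # recover (source, target) from the position alone

assert addresses_from_entry(entry_index(3, 2)) == (3, 2)
```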
Fig. 12 shows a second class of embodiments of the present invention, which eliminates weight storage redundancy by introducing an additional third LUT 122. The fourth neuron set 121 includes a number of neurons (artificial neural network neurons or spiking neurons) that can be physically or logically divided into a number of neuron clusters; 3 neuron clusters A, B, and C are shown by way of example, the number may be smaller or larger, and each neuron cluster includes one or more neurons.
This class of embodiments is an improvement over the prior art shown in fig. 6. As in fig. 6, the weight data projected (or connected) to a target neuron or target neuron cluster A, B, or C is stored adjacent to that target neuron or target neuron cluster. In other words, the weight data is physically distributed and stored in a plurality of weight data storage devices in the neural network processor.
Still taking the scenario constructed in fig. 2 or fig. 4 as an example, the connections of neuron clusters a to C have the same weight value as the connections of neuron clusters B to C.
The third LUT 122 establishes a mapping from the neurons in neuron cluster A to the neurons in neuron cluster B, and also maps the neurons in neuron cluster B to themselves. As with the mapping relationship in fig. 9, any neuron in neuron cluster A is mapped to the corresponding neuron in neuron cluster B, and a neuron in neuron cluster B is mapped to itself. Thus, neuron cluster A can, on behalf of neuron cluster B, look up the target neuron in neuron cluster C in the fourth LUT 123 and connect to that target neuron based on the corresponding weight data stored near the target neuron.
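A minimal sketch of this second class of embodiments follows; the cluster sizes, addresses, and weight values are illustrative assumptions. The third LUT supplies the alias mapping, the fourth LUT supplies the target addresses, and the weights live in per-target storage adjacent to each target neuron.

```python
third_lut = {"A1": "B1", "A2": "B2", "B1": "B1", "B2": "B2"}   # alias mapping (A -> B, B -> B)
fourth_lut = {"B1": ["C1", "C2"], "B2": ["C2", "C3"]}          # mapped source -> target addresses

local_weights = {                    # distributed storage: each target keeps its own fan-in weights
    "C1": {"B1": 0.4},
    "C2": {"B1": -0.2, "B2": 0.6},
    "C3": {"B2": 0.1},
}
fan_in_to_c = {"C1": 0.0, "C2": 0.0, "C3": 0.0}

def project(source_neuron: str, spike_count: int) -> None:
    mapped = third_lut[source_neuron]            # third LUT: alias address in cluster B
    for target in fourth_lut[mapped]:            # fourth LUT: target addresses in cluster C
        weight = local_weights[target][mapped]   # weight stored adjacent to the target neuron
        fan_in_to_c[target] += weight * spike_count

project("A1", 2)   # cluster A reuses B1's locally stored weights to reach C1 and C2
```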
In other words, the present invention discloses: a neural network processor comprising at least a first and a second neuron cluster and a third neuron cluster, each comprising one or more neurons, with weight data physically distributed and stored in a plurality of weight data storage devices in the neural network processor; the neural network processor further comprises a third lookup table and a fourth lookup table; the third lookup table is configured to: map an address of an activated neuron in the first neuron cluster to an address of a corresponding neuron in the second neuron cluster when a neuron in the first neuron cluster is activated to output; the fourth lookup table is configured to: obtain a target neuron address for projection to the third neuron cluster according to the neuron address output by the third lookup table; obtain, at least according to the obtained target neuron address, the weight data storage device for the target neuron in the third neuron cluster and the weight data stored therein that is projected to the target neuron; an output of the activated neuron in the first neuron cluster is projected to the target neuron in the third neuron cluster, weighted according to the obtained weight data.
FIG. 13 illustrates a third class of embodiments of the present invention, based on an alias mechanism. The fifth neuron set 131 includes a number of neurons (artificial neural network neurons or spiking neurons) that may be physically or logically divided into a plurality of neuron clusters A, B, and C. In this class of embodiments no new LUT needs to be introduced; instead, an alias technique is introduced for neuron cluster A and/or B and/or C: for example, the alias address storage 11 of neuron cluster A (first alias address storage), the alias address storage 12 of neuron cluster B (second alias address storage), and the alias address storage 13 of neuron cluster C (third alias address storage).
The alias mechanism is as follows: a certain neuron (or neuron cluster) is associated with an alias address storage device in which several addresses are stored. When that neuron (cluster) is to activate an output (for an SNN processor, to emit a pulse), the activated output should be registered with (or notified to) the neuron (cluster) corresponding to the alias address stored in the alias address storage associated with that neuron (cluster); when the neuron (cluster) corresponding to the alias address later activates an output, the registered output is emitted as part of the final output of the neuron (cluster) corresponding to the alias address. It is as if the first neuron cluster had fired (or activated an output) by virtue of its alias, the second neuron cluster.
Registration may, for example, be accomplished in a digital SNN processor by accumulating pulse counts. For example, each neuron has a corresponding pulse-count storage RAM, and registering an activation output may simply mean adding the activation output to the corresponding RAM. Of course, other implementations are possible, and the invention is not limited in this respect.
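A minimal sketch of such registration by pulse-count accumulation is shown below; the per-neuron counter dictionary stands in for the pulse-count RAM and is an illustrative assumption.

```python
registered_pulses = {"B1": 0, "B2": 0, "B3": 0, "B4": 0, "B5": 0}   # stand-in for per-neuron RAM

def register(alias_address: str, pulse_count: int) -> None:
    registered_pulses[alias_address] += pulse_count   # accumulate into the alias neuron's counter

register("B3", 5)                  # e.g. a cluster-A neuron fired 5 pulses; B3 is its alias
print(registered_pulses["B3"])     # 5, emitted later when B3 itself is activated to output
```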
Preferably, the number of alias addresses stored in the first alias address storage is equal to the number of neurons in the first neuron cluster or a positive integer multiple thereof greater than or equal to 2. If the size of the neuron cluster a is 5, the number of addresses stored by the alias address storage 11 of the neuron cluster a is also 5.
Still taking the scenario constructed in fig. 2 or fig. 4 as an example, the connections of neuron clusters a to C have the same weight value as the connections of neuron clusters B to C. Then, what is stored in the alias address storage 11 for neuron cluster a will be each neuron address in neuron cluster B, and the number will also be 5.
Further, the fifth LUT 133, like the second LUT 703, stores the address of the target neuron (referred to simply as the target address) and the weight data corresponding to that target neuron address. In other words, whether based on a neuron address in neuron cluster A or on the corresponding neuron address in neuron cluster B stored in the alias address storage 11 of neuron cluster A, the corresponding target neuron address and weight data can be obtained from the addresses of these firing (i.e., source) neurons.
Only the basic structure shown in fig. 2 or fig. 4 has been introduced here; the overall structure of an actual spiking neural network may be very complex. For example, a certain neuron cluster may have multiple groups of aliases, with the size of each group equal to the size of the neuron cluster. Thus, in the case of multiple groups of aliases, the alias address storage of the neuron cluster will store a number of addresses equal to an integer (the number of groups) times the number of neurons in the neuron cluster. In other words, although the present invention has been illustrated in the context of fig. 2 or fig. 4, this does not preclude the construction of more complex networks on that basis, and the present invention is not limited to a specific network structure model.
In other words, the present invention discloses: a neural network processor comprising at least a first neuron cluster and a second neuron cluster, each comprising one or more neurons, the neural network processor further comprising: a first alias address storage, the first neuron cluster being associated with the first alias address storage; when a neuron in the first neuron cluster is activated to output, looking up a corresponding alias address of the neuron in the first alias address storage, the alias address being an address of a neuron in the second neuron cluster; registering the output of the activated neuron in the first neuron cluster with that neuron in the second neuron cluster; outputting at least the registered output of the activated neuron in the first neuron cluster when the neuron in the second neuron cluster is activated to output.
Fig. 14 shows a schematic diagram of transforming a residual connection network structure by means of the alias mechanism. The left side of the figure is the residual connection network structure, and the right side is a schematic diagram of the network structure after the alias transformation. The output of neuron cluster A is weighted by the weight matrix W_AB and projected to neuron cluster B; the output of neuron cluster B is weighted by the weight matrix W_BC and projected to neuron cluster C. In addition, the output of neuron cluster A is also projected to neuron cluster C, and neuron clusters A and B have the same connection weights W_BC when projecting to neuron cluster C. The right side of the figure shows the network structure after the alias mechanism has been applied to the residual structure on the left. That is, neuron cluster A is associated with a first alias address storage in which the addresses (or labels) of the neurons in neuron cluster B (the alias neuron cluster B, denoted AliasB) are stored. After a neuron (or neurons) in neuron cluster A issues pulses, the addresses (or labels) of the corresponding neuron(s) in neuron cluster B stored for it in the alias address storage are found, and the pulses issued by the neuron(s) in neuron cluster A are registered with (or notified to, for example by setting the registered pulse count to a value of 5) the neuron(s) corresponding to those addresses; when the corresponding neuron(s) in neuron cluster B fire pulses, at least the pulses registered (or notified) on behalf of the neuron(s) in neuron cluster A are emitted (for example, the aforementioned 5 pulses). Furthermore, the corresponding neuron(s) in neuron cluster B may themselves trigger (or fire) a number of pulses due to their own input pulse sequences, for example a pulse value of 2, in which case the number of actually fired pulses may in some embodiments be 7 (= 2 + 5). It is of course also possible to determine the specific number of pulses from the self-fired pulses and the registered pulses in other ways, such as the sum of the two as in the foregoing example, a slight fluctuation above or below that sum (e.g., 6), or half of the sum rounded down (e.g., 3 = ⌊7/2⌋), and so on; the invention is not limited in this respect. For an artificial neural network processor, it may, for example, simply accumulate the floating-point numbers or integers of the activation outputs.
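The pulse-combination example above can be summarized in a short sketch; the choice of combination function is a design option, and the two modes shown are simply the variants mentioned in the text.

```python
def combined_output(self_fired: int, registered: int, mode: str = "sum") -> int:
    """Number of pulses actually emitted by the alias neuron."""
    if mode == "sum":
        return self_fired + registered          # 2 + 5 = 7 in the example above
    if mode == "half":
        return (self_fired + registered) // 2   # half of the sum, rounded down: 7 // 2 = 3
    raise ValueError(mode)

assert combined_output(2, 5) == 7
assert combined_output(2, 5, "half") == 3
```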
With this scheme, it is as if neuron cluster A had fired a pulse by virtue of its alias, neuron cluster B. The scheme does not require significant adjustments to the network architecture, which would introduce unnecessary network complexity. Therefore, through the alias mechanism, weight data copies can be eliminated very simply and conveniently, the utilization efficiency of the storage space is improved, and the chip's storage space requirement and power consumption are reduced. As mentioned before, each of the neuron clusters A, B, and C here may have only one neuron.
In other words, the present invention discloses: a neural network processor comprising at least a first and a second neuron cluster and a third neuron cluster, and the first and the second neuron cluster and the third neuron cluster each comprise one or more neurons, the first neuron cluster being aliased to the second neuron cluster; the output of the first neuron cluster is projected to the second neuron cluster; the output of the second neuron cluster is projected onto the third neuron cluster.
Fig. 15 is a schematic diagram of transforming a deep multi-layer residual network by means of the alias mechanism. As mentioned before, the structure of the network may be complex, and the figure shows part or all of a deep multi-layer residual network. Unlike fig. 14, this network structure has one more neuron cluster D (the fourth neuron cluster) on top of the original residual connection network structure, which again includes the special case of only one neuron. The input and output of neuron cluster C are both used as input to neuron cluster D, i.e., a second residual connection structure is formed. The connection between neuron cluster B and neuron cluster D and the connection between neuron cluster C and neuron cluster D share numerically common weight data W_CD; the specific structure can be seen on the left side of the figure. Neuron cluster A is connected to neuron clusters B, C, and D.
To further implement the second residual connection by means of the alias mechanism, neuron cluster B is associated with a second alias address storage. Stored in this second alias address storage are the neuron addresses in neuron cluster C (the alias neuron cluster C, denoted AliasC). The operating mechanism of the second alias address storage is similar to that of the first alias address storage and is not repeated here. Thus, a more complex network structure is realized through the alias mechanism.
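A minimal sketch of chaining two alias address stores for this deep residual structure follows; the cluster sizes, addresses, and pulse counts are illustrative assumptions.

```python
alias_store = {
    "A": {"A1": "B1", "A2": "B2"},   # first alias address storage: cluster A aliased to B
    "B": {"B1": "C1", "B2": "C2"},   # second alias address storage: cluster B aliased to C
}
registered = {}                       # pulses registered with alias neurons

def register_activation(cluster: str, neuron: str, pulses: int) -> None:
    """Register an activated neuron's pulses with its alias neuron, if it has one."""
    alias = alias_store.get(cluster, {}).get(neuron)
    if alias is not None:
        registered[alias] = registered.get(alias, 0) + pulses

register_activation("A", "A1", 3)    # A1 fires; its pulses are registered with B1
register_activation("B", "B1", 2)    # B1 fires; its pulses are registered with C1
print(registered)                    # {'B1': 3, 'C1': 2}
```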
Furthermore, with respect to FIGS. 14 and 15, although only neuron clusters A and B have each been described as being associated with one alias neuron cluster (neuron clusters B and C, respectively), for more complex network architectures such neuron clusters may also be associated with multiple (≥ 2) alias neuron clusters. For example, in addition to alias neuron cluster B, neuron cluster A may be associated with alias neuron cluster D.
Also taking the implementation of the deep multi-layer residual network as an example, in this third class of embodiments only one set of addresses (neuron cluster B) needs to be recorded for neuron cluster A, whereas in the first or second class of embodiments with the dual-LUT structure shown in Figs. 7 and 12, three sets of addresses need to be recorded: neuron cluster A to B, neuron cluster A to C, and neuron cluster A to D. This third class of embodiments therefore performs better, because the network can be implemented with less storage space or fewer components.
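As a rough back-of-the-envelope illustration only (the per-cluster size N and the one-entry-per-target-neuron cost are assumptions of this sketch, not figures given by the invention), the address-set saving for neuron cluster A can be counted as follows:

```python
N = 1024                      # assumed number of neurons per cluster (illustrative)
dual_lut_entries = 3 * N      # dual-LUT scheme records A->B, A->C and A->D address sets
alias_entries = 1 * N         # alias scheme records only A->B (its alias cluster)
print(dual_lut_entries, alias_entries, dual_lut_entries - alias_entries)
# 3072 1024 2048  -> roughly two address sets saved for neuron cluster A alone
```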
In other words, the present invention also discloses: the alias of the second neuron cluster is the third neuron cluster; the output of the third neuron cluster is projected onto a fourth neuron cluster.
Other, more complicated network structure models constructed on this basis are not enumerated exhaustively here; based on the foregoing technical teaching, it is clear to those skilled in the art how to apply the alias mechanism to eliminate duplicate weight data, and this is not described in detail here. The present invention is not limited to a specific network architecture.
Fig. 16 is a schematic diagram of one neuron cluster serving as the alias of another neuron cluster. Neuron cluster A is associated with the alias neuron cluster B, each having n (a positive integer) neurons (artificial neural network neurons or impulse neurons). For example, a neuron A_2 in neuron cluster A is associated with a neuron B_3 in neuron cluster B; the pulses issued by neuron A_2 are then registered with (or notified to) neuron B_3. When neuron B_3 issues pulses, it also issues (or outputs) the registered (or notified) pulses.
Fig. 17 is a schematic diagram of the information projection of neurons within a set of neurons. This figure illustrates the specific case in which each of the aforementioned neuron clusters contains only one neuron; each circle represents one neuron (an artificial neural network neuron or an impulse neuron), and four of them are labeled a, b, c and d respectively. The output of neuron a is projected to neuron b; the output of neuron b is projected to neuron c with weight W_bc; and, according to the requirements of the network architecture, the output of neuron a also needs to be projected to neuron c with the same weight as that of the projection from neuron b to neuron c, i.e. W_ac = W_bc. With the foregoing alias mechanism, in order to eliminate the storage space occupied by the weight data of the projection from neuron a to neuron c, the output of neuron a is no longer projected directly to neuron c; instead, when neuron a outputs a pulse (or pulse sequence), that pulse (or pulse sequence) is registered with (or notified to) its alias neuron b. When neuron b outputs a pulse, it outputs the registered pulse (or pulse sequence) in addition to the pulse (or pulse sequence) that it fires itself. The operating logic between neurons b, c and d is similar to that between neurons a, b and c and is not described here again. Therefore, as the complexity of the network structure increases, more storage space can be saved by the alias mechanism.
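A minimal sketch, assuming a simple pulse-count model, of how the single stored weight W_bc can serve both the b-to-c projection and the eliminated a-to-c projection; the variable names, the global-state bookkeeping and the accumulation into neuron c's input are illustrative assumptions, not the claimed implementation.

```python
W_bc = 0.5                 # the only stored weight; W_ac is never stored
alias_of_a = "b"           # neuron a's alias is neuron b

registered_at_b = 0.0      # pulses registered with neuron b by neuron a
c_input = 0.0              # accumulated weighted input of neuron c

def a_fires(pulses):
    # Neuron a no longer projects to c directly; it registers its output
    # with its alias neuron b instead.
    global registered_at_b
    registered_at_b += pulses

def b_fires(self_pulses):
    # Neuron b emits its own pulses plus the registered ones, weighted once
    # by W_bc, so the a->c contribution reuses the same stored weight.
    global registered_at_b, c_input
    c_input += W_bc * (self_pulses + registered_at_b)
    registered_at_b = 0.0

a_fires(5)
b_fires(2)
print(c_input)             # 3.5 == W_bc * (2 + 5): one weight serves both paths
```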
In other words, it discloses: a neural network processor comprising at least a first neuron, a second neuron and a third neuron, and further comprising a first alias address store with which the first neuron is associated, wherein: when the first neuron is activated to output, a corresponding alias address of the first neuron is looked up in the first alias address store, the alias address being the address of the second neuron; the activated output of the first neuron is registered with the second neuron; and, when the second neuron projects its output to the third neuron, at least the registered activated output of the first neuron is also output.
And: a method of eliminating a data copy, applied in a neural network processor comprising at least a first neuron, a second neuron and a third neuron, the neural network processor further comprising a first alias address store with which the first neuron is associated, the method comprising: when the first neuron is activated to output, looking up a corresponding alias address of the first neuron in the first alias address store, the alias address being the address of the second neuron; registering the activated output of the first neuron with the second neuron; and, when the second neuron projects its output to the third neuron, outputting at least the registered activated output of the first neuron.
Furthermore, to eliminate the infinite loop that the alias mechanism might otherwise introduce, an additional restriction is used: for any neuron with index i, the index j of its alias neuron satisfies j > i, where i and j are positive integers and the indices follow the projection order of the neurons. That is, the connections allowed to be established by aliasing form an off-diagonal triangular matrix (a strictly upper triangular pattern); reference may be made to Fig. 18, which illustrates this labeling rule for a neuron and its alias neurons, namely that the label of the alias neuron must be larger than the label of the neuron itself.
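For illustration only, a hypothetical validity check of this rule could look like the following sketch; the dictionary representation of the alias mapping is an assumption of the example, not a structure defined by the invention.

```python
def is_valid_alias_map(alias_map):
    """Check the j > i rule: every neuron's alias index must be strictly larger
    than its own index, so alias chains only move forward and cannot loop.
    alias_map: dict {source_index: alias_index} (illustrative structure)."""
    return all(j > i for i, j in alias_map.items())

print(is_valid_alias_map({1: 3, 2: 5, 3: 7}))   # True: all aliases point "forward"
print(is_valid_alias_map({1: 3, 3: 1}))         # False: 3 -> 1 could close a loop
```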
Fig. 19 illustrates a scheme for determining destination addresses from a source address. For example, a neuron group contains a total of 1000 neurons, numbered 1 to 1000. The first memory space RAM-1 stores, for each neuron, its number of fan-out (also called destination) addresses, e.g. 1, 3, 7, ..., for a total of 1000 entries. The second memory space RAM-2 stores the specific fan-out (or target) neuron addresses; specifically, if the average number of fan-out addresses per neuron is 32, the second memory space contains 32,000 entries, each entry holding one fan-out address. For example, the first fan-out address may be 15, the second fan-out address 8, the third fan-out address 2, and so on. After the source neuron address is obtained, the target neuron addresses or/and the weight data corresponding to that neuron's fan-out count recorded in the first memory space RAM-1 are read starting from the corresponding initial address position in the second memory space RAM-2. This is a scheme that obtains the target addresses or/and weight data corresponding to a source address through mathematical logic, rather than by explicitly describing the mapping relationship between source addresses and target addresses. The present invention does not particularly limit the method by which the mapping relationship between the source neuron address (source address for short), the target neuron address (target address for short) and the weight data is established.
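One plausible reading of this scheme, offered only as an illustrative sketch, computes each source neuron's start position in RAM-2 as the running sum of the fan-out counts in RAM-1; that prefix-sum base address, and the small example contents of the two memories, are assumptions of the sketch, since the text does not fix how the initial address position is derived.

```python
ram1 = [1, 3, 7]                       # fan-out counts per source neuron (illustrative)
ram2 = [15,                            # fan-out addresses of neuron 1 (1 entry)
        8, 2, 40,                      # fan-out addresses of neuron 2 (3 entries)
        5, 9, 11, 20, 21, 30, 31]      # fan-out addresses of neuron 3 (7 entries)

def fanout_addresses(src):
    """Return the target addresses of source neuron `src` (1-based), assuming
    RAM-2 stores each neuron's fan-out list back to back in neuron order."""
    base = sum(ram1[:src - 1])         # start position: prefix sum of earlier counts
    count = ram1[src - 1]              # how many entries belong to this neuron
    return ram2[base:base + count]

print(fanout_addresses(2))             # [8, 2, 40]
```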
This technical solution is very efficient for synchronous digital circuit design: within the computation of one time step, a complex network architecture operation can be completed by triggering the alias lookup only once.
In addition, the invention also discloses: an electronic product comprising a neural network processor as claimed in any one of the preceding claims. The neural network processor may process various environmental signals (visual, auditory and other signals), detect various events in these signals, such as events in an electrocardiogram signal, certain specific keywords, or certain specific gestures, and notify the next-level system to respond based on the detected events.
Specifically, the neural network processor is coupled to a processing module (such as an MCU) of the electronic product through an interface module (such as a wired communication interface circuit, or a Bluetooth, ZigBee, UWB or other wireless transmission module). The neural network processor recognizes the environmental signal and transmits the result to the processing module of the electronic product through the interface module, and the processing module controls a response module through another interface module based on the result fed back by the neural network processor. The response module may take various known forms of response, for example outputting information on a display screen, an alarm, a voice signal output, movement of mechanical equipment (such as in an intelligent curtain scenario), control of physical quantities such as the voltage and current of electrical equipment, switching (such as an intelligent lamp), and the like.
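As an illustrative sketch only of the event-to-response flow described above (the event names, the ResponseModule class and the dispatch table are assumptions of the example, not the claimed product design):

```python
class ResponseModule:
    """Stand-in for the product's response module (display, alarm, actuator, ...)."""
    def raise_alarm(self):        print("alarm!")
    def play_voice_prompt(self):  print("voice prompt")
    def move_curtain(self):       print("moving curtain")
    def ignore(self):             pass

def mcu_dispatch(event, response):
    # The processing module (e.g. an MCU) maps an event reported by the neural
    # network processor over an interface module to an action of the response module.
    actions = {
        "ecg_anomaly":      response.raise_alarm,
        "keyword_detected": response.play_voice_prompt,
        "gesture_detected": response.move_curtain,
    }
    actions.get(event, response.ignore)()

mcu_dispatch("gesture_detected", ResponseModule())   # -> moving curtain
```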
While the invention has been described with reference to specific features and embodiments thereof, various modifications and combinations may be made without departing from the invention. Accordingly, the specification and figures are to be regarded merely as illustrative of some embodiments of the invention defined by the appended claims, and are intended to cover any and all modifications, variations, combinations or equivalents that fall within the scope of the invention. Thus, although the present invention and its advantages have been described in detail, various changes, substitutions and alterations can be made herein without departing from the invention as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification.
As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
To achieve better technical results or to meet the needs of certain applications, a person skilled in the art may make further improvements to the technical solution on the basis of the present invention. However, as long as the technical features covered by the claims of the present invention are used, the resulting technical solution should, according to the "overall coverage principle", also fall within the protection scope of the present invention, even if the partial modification or design is itself inventive or/and constitutes an advance.
Several technical features mentioned in the appended claims may be replaced by alternative technical features, or the order of certain technical processes or the order of material organization may be rearranged. Those skilled in the art can easily conceive of such alternative means, or of changing the order of the technical processes and the material organization, and then adopt substantially the same means to solve substantially the same technical problems and achieve substantially the same technical effects; therefore, even if the means or/and the order are explicitly defined in the claims, such modifications, changes and substitutions shall fall within the protection scope of the claims according to the "doctrine of equivalents".
Where a claim recites an explicit numerical limitation, one skilled in the art would understand that other reasonable numerical values around the stated value may also apply to a particular embodiment. Such design solutions, which differ only in detail and do not depart from the inventive concept, also fall within the protection scope of the claims.
The method steps and elements described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or a combination of both; the steps and elements of the embodiments have been described above in general terms of their functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention as claimed.
Further, any module, component, or device executing instructions exemplified herein can include or otherwise have access to a non-transitory computer/processor readable storage medium or media for storing information, such as computer/processor readable instructions, data structures, program modules, and/or other data. Any such non-transitory computer/processor storage media may be part of or accessible or connectable to a device. Any application or module described herein may be implemented using computer/processor readable/executable instructions that may be stored or otherwise maintained by such non-transitory computer/processor readable storage media.
The reference numerals or signs in the specification are interpreted as follows:
A neuron cluster A (also called first neuron cluster)
B neuron cluster B (also called second neuron cluster)
C neuron cluster C (also called third neuron cluster)
D neuron cluster D (also called fourth neuron cluster)
W1_AC connection weight data between neuron cluster A and neuron cluster C stored at a first location
W1_BC connection weight data between neuron cluster B and neuron cluster C stored at a first location
W2_BC connection weight data between neuron cluster B and neuron cluster C stored at a second location
W_AB connection weight data between neuron cluster A and neuron cluster B
W_BC connection weight data between neuron cluster B and neuron cluster C
W_CD connection weight data between neuron cluster C and neuron cluster D
LUT lookup table
501 first set of neurons
503 sixth LUT (sixth lookup table)
601 second set of neurons
603 seventh LUT (seventh lookup table)
701 third set of neurons
702 first LUT (first lookup table)
703 second LUT (second lookup table)
A1-A5 5 neurons belonging to neuron cluster A and labeled A1-A5
B1-B5 5 neurons belonging to neuron cluster B and labeled B1-B5
C1-C3 3 neurons belonging to neuron cluster C and labeled C1-C3
121 fourth set of neurons
122 third LUT (third lookup table)
123 fourth LUT (fourth lookup table)
131 fifth set of neurons
133 fifth LUT (fifth lookup table)
11 first alias address storage device
12 second alias address storage
13 third alias address storage device
a neuron a
b neuron b
c neuron c
d neuron d
n is a positive integer
RAM-1 first memory space
RAM-2 second storage space
A0~An neurons with reference numbers A0~An
B0~Bn neurons with reference numbers B0~Bn