Hardware accelerator execution method, hardware accelerator and neural network device


1. A method performed by a hardware accelerator, comprising:

receiving an image to be classified;

performing, on the image to be classified, computation corresponding to a fully connected layer of a neural network to generate, as probability-related data, result values related to probabilities that the image to be classified is classified into respective classes;

loading a lookup table;

mapping each probability-related data value of the probability-related data to an index in the lookup table based on a probability-related data distribution of the probability-related data;

obtaining an output data value corresponding to the probability-related data value using the lookup table; and

determining a class to which the image to be classified belongs based on the output data value,

wherein the output data values are proportional to respective softmax values of the probability-related data values.

2. The method of claim 1, further comprising:

calculating a difference between the largest probability-related data value of the probability-related data and each probability-related data value.

3. The method of claim 2, wherein the step of mapping comprises:

mapping the difference directly to an index.

4. The method of claim 2, wherein the step of mapping comprises:

scaling the difference and mapping the scaled difference to an index.

5. The method of claim 1, wherein the lookup table stores therein information associated with a reciprocal of an exponential function corresponding to the index.

6. The method of claim 5, wherein the information associated with the reciprocal of the exponential function comprises information associated with an integer corresponding to the reciprocal of the exponential function based on a number of bits used for quantization of the output data value.

7. The method of claim 2, wherein the lookup table comprises a plurality of lookup tables, and

wherein the step of loading comprises:

determining an index range based on a maximum difference among the differences obtained by the calculation; and

loading one of the plurality of lookup tables based on the determined index range.

8. The method of claim 2, wherein the step of loading comprises:

determining an index range based on a maximum difference among the differences obtained by the calculation; and

loading a lookup table generated in real time based on the determined index range.

9. The method of claim 1, wherein obtaining an output data value comprises:

mapping a clock of a shift register to an index; and

obtaining the output data value using the shift register.

10. The method of any of claims 1 to 9, further comprising:

loading a compensation coefficient lookup table;

calculating a sum of output data values corresponding to the probability-related data values;

mapping the sum of the output data values to an index in the compensation coefficient lookup table;

obtaining a compensation coefficient corresponding to the probability-related data value using the compensation coefficient lookup table; and

obtaining a normalized output data value corresponding to the probability-related data value based on the compensation coefficient.

11. The method of any of claims 1 to 9, wherein the hardware accelerator is a neural processor.

12. The method of any of claims 1 to 9, wherein a central processor is configured to generate the lookup table.

13. A non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors, configure the one or more processors to perform the method of any one of claims 1-12.

14. A hardware accelerator, comprising:

one or more processors configured to: receive an image to be classified, generate, as probability-related data, result values related to probabilities that the image to be classified is classified into respective classes by performing computation corresponding to a fully connected layer of a neural network on the image to be classified, load a lookup table, map each probability-related data value to an index in the lookup table based on a probability-related data distribution of the probability-related data, obtain an output data value corresponding to the probability-related data value using the lookup table, and determine a class to which the image to be classified belongs based on the output data value,

wherein the output data values are proportional to respective softmax values of the probability-related data values.

15. The hardware accelerator of claim 14, wherein the one or more processors are configured to:

calculate a difference between the largest probability-related data value of the probability-related data and each probability-related data value.

16. The hardware accelerator of claim 15, wherein the one or more processors are configured to:

map the difference directly to an index.

17. The hardware accelerator of claim 15, wherein the one or more processors are configured to:

scale the difference and map the scaled difference to an index.

18. The hardware accelerator of claim 14, wherein the lookup table stores therein information associated with a reciprocal of an exponential function corresponding to the index.

19. The hardware accelerator of claim 18, wherein the information associated with the reciprocal of the exponential function comprises information associated with an integer corresponding to the reciprocal of the exponential function based on a number of bits used for quantization of the output data value.

20. The hardware accelerator of claim 15, wherein the lookup table comprises a plurality of lookup tables,

wherein the one or more processors are configured to:

determine an index range based on a maximum difference among the differences obtained by the calculation; and

load one of the plurality of lookup tables based on the determined index range.

21. The hardware accelerator of claim 15, wherein the one or more processors are configured to:

determine an index range based on a maximum difference among the differences obtained by the calculation; and

load a lookup table generated in real time based on the determined index range.

22. The hardware accelerator of claim 14, wherein the one or more processors are configured to:

map a clock of a shift register to an index; and

obtain the output data value using the shift register.

23. The hardware accelerator of any one of claims 14 to 22, wherein the one or more processors are configured to:

load a compensation coefficient lookup table;

calculate a sum of output data values corresponding to the probability-related data values;

map the sum of the output data values to an index in the compensation coefficient lookup table;

obtain a compensation coefficient corresponding to the probability-related data value using the compensation coefficient lookup table; and

obtain a normalized output data value corresponding to the probability-related data value based on the compensation coefficient.

24. The hardware accelerator of any of claims 14 to 22, wherein the hardware accelerator is a neural processor.

25. A neural network device, comprising:

a central processor configured to: generate a lookup table in which information associated with a reciprocal of an exponential function is stored; and

a neural processor configured to: receive an image to be classified, generate, as probability-related data, result values related to probabilities that the image to be classified is classified into respective classes by performing computation corresponding to a fully connected layer of a neural network on the image to be classified, load the lookup table and obtain an output data value corresponding to each probability-related data value, and determine a class to which the image to be classified belongs based on the output data value,

wherein the output data values are proportional to respective softmax values of the probability-related data values.

26. The apparatus of claim 25, wherein the neural processor is a hardware accelerator.

27. The apparatus of claim 25, wherein the central processor is further configured to generate a neural network for quantization of the deep neural network model.

28. The apparatus of claim 27, wherein the output data value is obtained as a result of calculating a probability that the image to be classified corresponds to each class.

29. The apparatus of claim 27, wherein the neural network used for quantization of the deep neural network model comprises a loss layer configured to compute loss as an objective function for learning.

30. The apparatus of claim 25, wherein the neural processor is further configured to: scale a difference between the largest of the probability-related data values and each probability-related data value, and map the scaled difference to an index of the lookup table.

31. A method of operating a hardware accelerator, comprising:

loading a lookup table from a host;

mapping each input data value of input data to an index in the lookup table based on an input data distribution of the input data; and

obtaining an output data value corresponding to the input data value using the lookup table,

wherein the output data value is proportional to a corresponding softmax value of the input data value.

32. A hardware accelerator, comprising:

one or more processors configured to: load a lookup table from a host, map each input data value of input data to an index in the lookup table based on an input data distribution of the input data, and obtain an output data value corresponding to the input data value using the lookup table,

wherein the output data value is proportional to a corresponding softmax value of the input data value.

33. A neural network device, comprising:

a central processor configured to: generate a lookup table in which information associated with a reciprocal of an exponential function is stored; and

a neural processor configured to: load the lookup table and obtain an output data value corresponding to an input data value,

wherein the output data value is proportional to a corresponding softmax value of the input data value.

Background

Neural networks may be implemented based on a computing architecture. Neural network processing devices may require a large computational effort to compute complex input data.

Disclosure of Invention

This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.

In one general aspect, a method performed by a hardware accelerator includes: receiving an image to be classified; performing, on the image to be classified, computation corresponding to a fully connected layer of a neural network to generate, as probability-related data, result values related to probabilities that the image to be classified is classified into respective classes; loading a lookup table; mapping each probability-related data value of the probability-related data to an index in the lookup table based on a probability-related data distribution of the probability-related data; obtaining an output data value corresponding to the probability-related data value using the lookup table; and determining a class to which the image to be classified belongs based on the output data values, wherein the output data values are proportional to respective softmax values of the probability-related data values.

The method may further comprise: calculating a difference between the largest probability-related data value of the probability-related data and each probability-related data value.

The step of mapping may comprise: mapping the difference directly to an index.

The step of mapping may comprise: scaling the difference and mapping the scaled difference to an index.

The lookup table may store therein information associated with a reciprocal of an exponential function corresponding to the index.

The information associated with the reciprocal of the exponential function may include information associated with an integer corresponding to the reciprocal of the exponential function based on the number of bits used for quantization of the output data value.

The lookup table may comprise a plurality of lookup tables, wherein the step of loading comprises: determining an index range based on a maximum difference among the differences obtained by the calculation; and loading one of the plurality of lookup tables based on the determined index range.

The step of loading may include: determining an index range based on a maximum difference among the differences obtained by the calculation; and loading a look-up table generated in real time based on the determined index range.

The step of obtaining the output data value may comprise: mapping a clock of the shift register to an index; and obtaining an output data value using the shift register.

The method may further comprise: loading a compensation coefficient lookup table; calculating a sum of output data values corresponding to the probability-related data values; mapping the sum of the output data values to an index in a compensation coefficient lookup table; obtaining a compensation coefficient corresponding to the probability-related data value using a compensation coefficient lookup table; and obtaining a normalized output data value corresponding to the probability-related data value based on the compensation coefficient.

The hardware accelerator may be a Neural Processor (NPU).

A Central Processing Unit (CPU) may be configured to generate a lookup table.

In another general aspect, a hardware accelerator includes: one or more processors configured to: receive an image to be classified, perform computation corresponding to a fully connected layer of a neural network on the image to be classified to generate, as probability-related data, result values related to probabilities that the image to be classified is classified into respective classes, load a lookup table, map each probability-related data value to an index in the lookup table based on a probability-related data distribution of the probability-related data, obtain an output data value corresponding to the probability-related data value using the lookup table, and determine a class to which the image to be classified belongs based on the output data value, wherein the output data value is proportional to a corresponding softmax value of the probability-related data value.

The one or more processors may be configured to: calculate a difference between the largest probability-related data value of the probability-related data and each probability-related data value.

The one or more processors may be configured to: map the difference directly to an index.

The one or more processors may be configured to: scale the difference and map the scaled difference to an index.

The lookup table may store therein information associated with a reciprocal of an exponential function corresponding to the index.

The information associated with the reciprocal of the exponential function may include information associated with an integer corresponding to the reciprocal of the exponential function based on the number of bits used for quantization of the output data value.

The lookup table may comprise a plurality of lookup tables, and the one or more processors may be configured to: determine an index range based on a maximum difference among the differences obtained by the calculation; and load one of the plurality of lookup tables based on the determined index range.

The one or more processors may be configured to: determine an index range based on a maximum difference among the differences obtained by the calculation; and load a lookup table generated in real time based on the determined index range.

The one or more processors may be configured to: map a clock of a shift register to an index; and obtain an output data value using the shift register.

The one or more processors may be configured to: load a compensation coefficient lookup table; calculate a sum of output data values corresponding to the probability-related data values; map the sum of the output data values to an index in the compensation coefficient lookup table; obtain a compensation coefficient corresponding to the probability-related data value using the compensation coefficient lookup table; and obtain a normalized output data value corresponding to the probability-related data value based on the compensation coefficient.

The hardware accelerator may be a Neural Processor (NPU).

In another general aspect, a neural network device includes: a Central Processing Unit (CPU) configured to: generate a lookup table in which information associated with a reciprocal of an exponential function is stored; and a Neural Processor (NPU) configured to: receive an image to be classified, perform computation corresponding to a fully connected layer of a neural network on the image to be classified to generate, as probability-related data, result values related to probabilities that the image to be classified is classified into respective classes, load the lookup table and obtain an output data value corresponding to each probability-related data value, and determine a class to which the image to be classified belongs based on the output data value, wherein the output data value is proportional to a corresponding softmax value of the probability-related data value.

The NPU may be a hardware accelerator.

The CPU may also be configured to generate a neural network for image classification.

The output data values may be obtained as a result of calculating a probability that the image to be classified corresponds to each class.

The neural network for image classification may include a loss layer configured to calculate loss as an objective function for learning.

The NPU may be further configured to: scale a difference between the largest of the probability-related data values and each probability-related data value, and map the scaled difference to an index of the lookup table.

In one general aspect, a method of operating a hardware accelerator comprises: loading a lookup table; mapping each input data value of input data to an index of a plurality of indexes in the lookup table based on an input data distribution of the input data; and obtaining an output data value corresponding to the input data value using the lookup table. The output data value is proportional to a corresponding softmax value of the input data value.

The method may further comprise: calculating a difference between the maximum input data value of the input data and each input data value.

The step of mapping may comprise: mapping the difference directly to an index.

The step of mapping may comprise: scaling the difference and mapping the scaled difference to an index.

The lookup table may store therein information associated with a reciprocal of an exponential function corresponding to the index.

The information associated with the reciprocal of the exponential function may include information associated with an integer corresponding to the reciprocal of the exponential function based on the number of bits used for quantization of the output data value.

The lookup table may comprise a plurality of lookup tables. The step of loading may include: determining an index range based on a maximum difference among the differences obtained by the calculation; and loading one of the plurality of lookup tables based on the determined index range.

The step of loading may include: determining an index range based on a maximum difference among the differences obtained by the calculation; and loading a look-up table generated in real time based on the determined index range.

The step of obtaining an output data value may comprise: mapping a clock of the shift register to an index; and obtaining an output data value using the shift register.

The method may further comprise: loading a compensation coefficient lookup table; calculating a sum of output data values corresponding to the input data values; mapping the sum of the output data values to an index in a compensation coefficient lookup table; obtaining a compensation coefficient corresponding to the input data value using a compensation coefficient lookup table; and obtaining a normalized output data value corresponding to the input data value based on the compensation coefficient.

A non-transitory computer-readable storage medium may store instructions that, when executed by one or more processors, configure the one or more processors to perform the method.

In another general aspect, a hardware accelerator includes: one or more processors configured to: load a lookup table, map each input data value to an index of the indices in the lookup table based on an input data distribution of the input data, and obtain an output data value corresponding to the input data value using the lookup table. The output data value is proportional to a corresponding softmax value of the input data value.

The one or more processors may be configured to: calculate a difference between the maximum input data value of the input data and each input data value.

The one or more processors may be configured to: map the difference directly to an index.

The one or more processors may be configured to: scale the difference and map the scaled difference to an index.

The lookup table may store therein information associated with a reciprocal of an exponential function corresponding to the index.

The information associated with the reciprocal of the exponential function may include information associated with an integer corresponding to the reciprocal of the exponential function based on the number of bits used for quantization of the output data value.

The lookup table may comprise a plurality of lookup tables. The one or more processors may be configured to: determine an index range based on a maximum difference among the differences obtained by the calculation; and load one of the plurality of lookup tables based on the determined index range.

The one or more processors may be configured to: determine an index range based on a maximum difference among the differences obtained by the calculation; and load a lookup table generated in real time based on the determined index range.

The one or more processors may be configured to: map a clock of a shift register to an index; and obtain an output data value using the shift register.

The one or more processors may be configured to: load a compensation coefficient lookup table; calculate a sum of output data values corresponding to the input data values; map the sum of the output data values to an index in the compensation coefficient lookup table; obtain a compensation coefficient corresponding to the input data value using the compensation coefficient lookup table; and obtain a normalized output data value corresponding to the input data value based on the compensation coefficient.

In another general aspect, a neural network device includes: a Central Processing Unit (CPU) configured to: generate a lookup table in which information associated with a reciprocal of an exponential function is stored; and a Neural Processor (NPU) configured to: load the lookup table and obtain an output data value corresponding to an input data value. The output data value is proportional to a corresponding softmax value of the input data value.

The NPU may be a hardware accelerator.

The CPU may also be configured to generate a neural network for classification of the input data values.

The output data value may be obtained as a result of calculating a probability that the input data value corresponds to each class.

The neural network for classification may include a loss layer configured to calculate a loss as an objective function for learning.

The NPU may be further configured to: scale a difference between the largest of the input data values and each input data value, and map the scaled difference to an index of the lookup table.

Other features and aspects will be apparent from the following detailed description, the accompanying drawings, and the claims.

Drawings

Fig. 1 shows an example of a neural network.

Fig. 2 shows an example of a hardware configuration of a neural network device.

Fig. 3 shows an example of a neural network for classification.

FIG. 4 shows a flow chart of an example of a softmax approximation method.

Fig. 5 shows a flow diagram of an example of a method of mapping each input data value to an index in a look-up table (LUT) based on a difference between a maximum input data value and each input data value.

Fig. 6 shows a flow chart of an example of a method of scaling the difference between the maximum input data value and each input data value and mapping the scaled difference to an index in a LUT.

Fig. 7 shows a flowchart of an example of a method of obtaining output data by polynomial approximation using a plurality of LUTs.

Fig. 8 shows a flowchart of an example of a method of obtaining output data using a shift register.

Fig. 9 shows a flowchart of an example of a method of generating a LUT.

Fig. 10 shows a flow chart of an example of a method of determining an effective quantization boundary.

Fig. 11 shows a flow chart of an example of a method of obtaining normalized output data.

Throughout the drawings and detailed description, the same drawing reference numerals will be understood to refer to the same elements, features and structures unless otherwise described or provided. The figures may not be to scale and the relative sizes, proportions and depictions of the elements in the figures may be exaggerated for clarity, illustration and convenience.

Detailed Description

The following detailed description is provided to assist the reader in obtaining a thorough understanding of the methods, devices, and/or systems described herein. However, various changes, modifications, and equivalents of the methods, apparatus, and/or systems described herein will be apparent after understanding the disclosure of the present application. For example, the order of operations described herein is merely an example and is not limited to the order of operations set forth herein, but may be changed as will become apparent after understanding the disclosure of the present application, except where operations must occur in a particular order. Furthermore, descriptions of features known after understanding the disclosure of the present application may be omitted for the sake of clarity and conciseness.

The features described herein may be implemented in different forms and should not be construed as limited to the examples described herein. Rather, the examples described herein have been provided to illustrate only some of the many possible ways to implement the methods, devices, and/or systems described herein that will be apparent after understanding the disclosure of this application.

Although terms such as "first," "second," and "third" may be used herein to describe various elements, components, regions, layers or sections, these elements, components, regions, layers or sections should not be limited by these terms. Rather, these terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section referred to in the examples described herein could also be referred to as a second element, component, region, layer or section without departing from the teachings of the examples.

Throughout the specification, when an element is described as being "connected to" or "coupled to" another element, the element may be directly "connected to" or "coupled to" the other element, or one or more other elements may be present therebetween. In contrast, when an element is referred to as being "directly connected to" or "directly coupled to" another element, there may be no other elements intervening therebetween. Likewise, similar expressions (e.g., "between … …" and "immediately between … …", "adjacent to … …" and "immediately adjacent") should also be interpreted in the same manner. As used herein, the term "and/or" includes any one of the associated listed items and any combination of any two or more.

The terminology used herein is for the purpose of describing various examples only and is not intended to be limiting of the disclosure. The singular is intended to include the plural unless the context clearly indicates otherwise. The terms "comprises," "comprising," and "having" specify the presence of stated features, quantities, operations, elements, components, and/or combinations thereof, but do not preclude the presence or addition of one or more other features, quantities, operations, components, elements, and/or combinations thereof.

Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs and are based on an understanding of the disclosure of this application. Unless explicitly defined as such herein, terms (such as those defined in general dictionaries) will be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the disclosure of the present application and will not be interpreted in an idealized or overly formal sense.

Further, in the description of the example embodiments, when it is considered that a detailed description of a structure or a function thus known after understanding the disclosure of the present application will lead to a vague explanation of the example embodiments, such description will be omitted.

The following example embodiments may be implemented in various forms of products, such as Personal Computers (PCs), laptop computers, tablet PCs, smart phones, Televisions (TVs), smart home appliances, smart vehicles, kiosks, and wearable devices. Hereinafter, example embodiments will be described in detail with reference to the accompanying drawings, and like reference numerals in the drawings denote like elements throughout.

A method is desired that efficiently processes the calculations or operations involved in a neural network, so that a large amount of input data can be analyzed and desired information extracted in real time using the neural network.

Fig. 1 shows an example of a neural network.

In the example of fig. 1, a neural network 10 is shown. The neural network 10 may have an architecture including an input layer, a hidden layer, and an output layer, and may be configured to perform a calculation or operation based on received input data (e.g., I1 and I2) and generate output data (e.g., O1 and O2) based on a result of performing the calculation. Here, it is noted that the use of the term "may" with respect to an example or embodiment (e.g., with respect to what an example or embodiment may include or may be implemented) indicates that there is at least one example or embodiment that includes or implements such a feature, and all examples and embodiments are not limited thereto.

The neural network 10 may be a Deep Neural Network (DNN) or an n-layer neural network that includes one or more hidden layers. For example, as shown in fig. 1, the neural network 10 may be a DNN that includes an input layer (layer 1), two hidden layers (layer 2 and layer 3), and an output layer (layer 4). The DNN may include, for example, a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), a Deep Belief Network (DBN), and a Restricted Boltzmann Machine (RBM), but examples are not limited thereto.

In examples where the neural network 10 has such a DNN architecture, the neural network 10 may include a greater number of layers that extract the available information, thereby processing more complex data sets than existing neural networks. Although the neural network 10 is illustrated as including four layers in fig. 1, examples are not limited thereto, and the neural network 10 may include a number of layers smaller or larger than the illustrated four layers. Further, the neural network 10 may include layers of various architectures that are different from the illustrated architecture. For example, the neural network 10 may be a DNN that includes convolutional layers, pooling layers, and fully-connected (FC) layers.

Each layer included in the neural network 10 may include a plurality of artificial nodes, each known by a term such as "neuron", "processing element (PE)", or "unit". For example, as shown in fig. 1, layer 1 may include two nodes and layer 2 may include three nodes. However, the number of nodes is provided as an example only, and each layer included in the neural network 10 may include various numbers of nodes.

Nodes included in layers included in the neural network 10 may be connected to each other to exchange data therebetween. For example, one node may receive data from other nodes to perform calculations on the received data and output the results of the calculations to the other nodes.

The output value of each node may be referred to herein as an activation. The activation may be an output value of one node and an input value of a node included in a subsequent layer. Each node may determine its own activation based on the activations and weights received from the nodes included in the previous layer. The weight may represent a parameter for calculating activation at each node and assigned to a connection relationship between the nodes.

Each node may be, for example, a hardware computing unit that receives input activations, performs a mapping from inputs to outputs, and outputs an activation. For example, when σ denotes an activation function, w_jk^i denotes a weight from the k-th node included in the (i-1)-th layer to the j-th node included in the i-th layer, b_j^i denotes a bias value of the j-th node included in the i-th layer, and a_j^i denotes the activation of the j-th node in the i-th layer, the activation a_j^i may be represented by Equation 1.

Equation 1:

a_j^i = σ( Σ_k ( w_jk^i × a_k^(i-1) ) + b_j^i )

For example, as shown in FIG. 1, the activation of the first node in the second layer (layer 2) may be represented as a_1^2. In this example, based on Equation 1 above, the activation may have the value a_1^2 = σ( w_11^2 × a_1^1 + w_12^2 × a_2^1 + b_1^2 ). Equation 1 is provided merely as an example to describe the activations and weights used for processing data in a neural network, and thus the example is not limited thereto. For example, the activation may be a value obtained by applying a rectified linear unit (ReLU) activation function to a weighted sum of the activations received from the previous layer.
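
As a brief illustrative sketch (not part of the original disclosure; the array values below are hypothetical), Equation 1 can be evaluated for one layer as follows, assuming a NumPy environment:

    import numpy as np

    def layer_activations(prev_acts, weights, biases, sigma=lambda v: np.maximum(v, 0.0)):
        # prev_acts: activations a^(i-1) of the previous layer, shape (K,)
        # weights:   w^i[j, k], shape (J, K); biases: b^i[j], shape (J,)
        # Equation 1: a^i[j] = sigma(sum_k w^i[j, k] * a^(i-1)[k] + b^i[j]); sigma is ReLU here
        return sigma(weights @ prev_acts + biases)

    a1 = np.array([0.5, 1.0])                            # layer-1 activations (assumed)
    w2 = np.array([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])  # three layer-2 nodes, two inputs each
    b2 = np.array([0.01, 0.02, 0.03])
    a2 = layer_activations(a1, w2, b2)                   # a2[0] corresponds to a_1^2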

As described above, in a neural network, many data sets can be exchanged between a plurality of interconnected nodes and undergo many calculation processes while passing through layers. Therefore, a method that can minimize the accuracy loss while reducing the amount of computation required to process complex input data is desired.

Fig. 2 shows an example of a hardware configuration of a neural network device.

In fig. 2, as a non-limiting example, the neural network device 200 may include a host 210, a hardware accelerator 230, and a memory 220.

The neural network device 200 may be a computing device having various processing functions, such as, for example, generating a neural network, training or learning a neural network, quantizing a floating-point type neural network to a fixed-point type neural network, and retraining a neural network. For example, the neural network device 200 may be or may be implemented by various types of devices (e.g., a PC, a remote server device, a mobile device, etc.).

The host 210 may perform an overall function for controlling the neural network device 200. For example, the host 210 may control the neural network device 200 as a whole by executing instructions stored in the memory 220 of the neural network device 200. The host 210 may be, or may be implemented by, for example, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or an Application Processor (AP) included in the neural network device 200, but examples are not limited thereto.

The host 210 may generate a neural network for classification and may train the neural network for classification. The neural network for classification may output a calculation result obtained by calculating to which class the input data belongs. For example, the neural network for classification may output, as a result value for each class, a calculation result obtained by calculating a probability that the input data corresponds to that class. The neural network used for classification may include a softmax layer and a loss layer. The softmax layer may convert the result values for each class into probability values, and the loss layer may calculate a loss as an objective function for training or learning.

The memory 220 may be hardware configured to store various data sets, and may store data sets processed or to be processed in the neural network device 200. Further, the memory 220 may store applications, drivers, etc. to be executed or driven by the neural network device 200. The memory 220 may be a Dynamic Random Access Memory (DRAM), but is not limited thereto. The memory 220 may include at least one of volatile memory and non-volatile memory.

The neural network device 200 may also include a hardware accelerator 230 for operating the neural network. Hardware accelerator 230 may be a module dedicated to operating a neural network and may include, for example, a Neural Processor (NPU), a Tensor Processing Unit (TPU), and a neural engine, although examples are not limited thereto.

Fig. 3 shows an example of a neural network for classification.

In fig. 3, a neural network 300 for classification may include a hidden layer 310, an FC layer 320, a softmax layer 330, and a loss layer 340. A portion of the hidden layer 310 may be an FC layer, and thus the FC layer 320 may be the last FC layer of the neural network 300. That is, the FC layer 320 may be the FC layer that appears last in order among the FC layers of the neural network 300.

When input data is input to the neural network 300, sequential calculation processing is performed through the hidden layer 310 and the FC layer 320, and a calculation result s corresponding to the probability that the input data is classified into each class may then be output from the FC layer 320. That is, the FC layer 320 may output, as the calculation result s (also referred to as probability-related data) for each class, a result value corresponding to the probability that the input data is classified into the respective class. In one example, when an image to be classified is input to the neural network 300, a computational process may be performed on the image to be classified through the hidden layer 310 and the FC layer 320 to generate result values related to the probabilities that the image is classified into the respective classes, which serve as the input data of the softmax layer 330 of the neural network 300. For example, the FC layer 320 may include nodes respectively corresponding to a plurality of classes, and each node of the FC layer 320 may output a result value corresponding to the probability that the input data is classified into the corresponding class. For example, in a case where the neural network classifies input data into five classes, the output value of each of the first to fifth nodes of the FC layer of the neural network may be a result value indicating the probability that the input data is classified into the first to fifth classes, respectively.

The FC layer 320 may output the calculation result s to the softmax layer 330, and the softmax layer 330 may convert the calculation result s into the probability value y. That is, the softmax layer 330 may generate the probability value y by normalizing the result values corresponding to the probabilities that the input data is classified into the respective classes. The softmax layer 330 may then output the probability value y to the loss layer 340, and the loss layer 340 may calculate a cross-entropy loss L of the calculation result s based on the probability value y. That is, the loss layer 340 may calculate the cross-entropy loss L indicating an error of the calculation result s.

For example, the softmax layer 330 may convert the calculation result s into the probability value y using the softmax operation shown in Equation 2, and the loss layer 340 may calculate the cross-entropy loss L of the calculation result s using Equation 3 below.

Equation 2:

y_i = exp(s_i) / Σ_(j=1..N_c) exp(s_j)

Equation 3:

L = -Σ_(i=1..N_c) t_i × log(y_i)

In Equations 2 and 3, s_i denotes the output value of the i-th node of the FC layer 320 (e.g., the result value for the i-th class), y_i denotes the output value of the i-th node of the softmax layer 330 (e.g., the probability value for the i-th class), N_c denotes the number of classes, and t_i denotes the ground truth (GT) label of the i-th class.

Subsequently, a back-propagation learning process may be performed. Through the loss layer 340, the softmax layer 330 may calculate the gradient of the cross-entropy loss L.

For example, the flexible maximum layer 330 may calculate the gradient of the cross-entropy loss L using equation 4 represented below (e.g.,)。

equation 4:

in equation 4, siAn output value representing the ith node of the FC layer 320 (e.g., a result value for the ith one of the classes). y isiAn output value representing the ith node of the flexible maximum layer 330 (e.g., a probability value for the ith class in the classes). N is a radical ofcIndicating the number of classes. t is tiGT tags representing class i.

Subsequently, a learning process based on the gradient of the cross entropy loss L may be performed in the FC layer 320. For example, the weights of the FC layer 320 may be updated according to a gradient descent algorithm. Further, a subsequent learning process may be performed in the hidden layer 310.
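
The following minimal sketch (illustrative only, with hypothetical values) mirrors Equations 2 to 4 in NumPy:

    import numpy as np

    def softmax(s):
        e = np.exp(s - np.max(s))        # subtract the maximum for numerical stability
        return e / np.sum(e)             # Equation 2

    def cross_entropy(y, t):
        return -np.sum(t * np.log(y))    # Equation 3

    s = np.array([2.0, 1.0, 0.1])        # hypothetical FC-layer outputs for three classes
    t = np.array([1.0, 0.0, 0.0])        # one-hot ground-truth label
    y = softmax(s)
    loss = cross_entropy(y, t)
    grad = y - t                         # Equation 4: dL/ds_i = y_i - t_i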

Referring back to FIG. 2, the hardware accelerator 230 may perform the softmax operation. Typical hardware accelerators do not perform the complex softmax operation but only matrix-vector multiplication. Thus, a typical neural network device may perform the matrix-vector multiplication on a hardware accelerator, move the result of the multiplication to a host, perform the softmax operation in the host, and then move the result of the softmax operation back to the hardware accelerator. This typical approach may complicate the development of software for data conversion and movement, which may adversely affect overall performance and power consumption.

For the softmax operation, a method in which a hardware accelerator performs the softmax operation has been proposed to prevent intervention of a host; however, this approach may require a divider, which may result in additional hardware cost.

According to an example embodiment, the hardware accelerator 230 may perform the softmax operation without such a divider. The hardware accelerator 230 may convert probability-related data to have positive values while preventing overflow, and read an appropriate value from a look-up table (LUT) by directly using a value obtained through the conversion as an index, thereby implementing and performing an efficient softmax approximation.

The hardware accelerator 230 may estimate the softmax value by the approximate operation represented by Equation 5 below, instead of the softmax operation represented by Equation 2 above.

Equation 5:

y = 1 / exp( max(x) - x )

In Equation 5, x denotes each input data value (i.e., each probability-related data value) of the softmax layer, and y denotes the output data of the softmax layer corresponding to each probability-related data value. max(x) denotes the maximum probability-related data value. Thus, max(x) - x denotes the difference between the maximum probability-related data value and each probability-related data value.

In Equation 2 above, the softmax value may be a probability value obtained by normalizing the exponential function value of each probability-related data value x. In Equation 5, however, the output data may be an exponential function value of each converted probability-related data value. Thus, the output data y in Equation 5 may be proportional to the softmax value of each probability-related data value x.

In Equation 5, the hardware accelerator 230 may convert the probability-related data into positive-valued data by the reversed calculation (e.g., max(x) - x, rather than x - max(x)), and perform the reciprocal conversion (e.g., 1/exp(x)) to compensate for this reversal.

Further, as described below, the hardware accelerator 230 may convert the probability-related data to have positive values while preventing overflow, and map the values obtained by the conversion to indices in the LUT. If the probability-related data were used without such a conversion, the range of the probability-related data would not be bounded, and it would therefore not be easy to construct a LUT corresponding to the probability-related data. Furthermore, if the probability-related data were converted to negative data by the forward calculation (e.g., x - max(x)), additional conversions or transformations (e.g., scaling and/or biasing) would be required to map the results to indices in the LUT, and thus additional overhead may occur.

According to example embodiments, because the softmax is computed using only a one-dimensional LUT, the size of the LUT may be reduced, the computational complexity may be greatly reduced without the need for dividers and/or multipliers, and the computation may be implemented in a hardware accelerator 230 (such as an NPU) without requiring large hardware resources.

FIG. 4 shows a flow chart of an example of a softmax approximation method. Operations 410 through 430, which will be described below with reference to FIG. 4, may be performed by a hardware accelerator (e.g., hardware accelerator 230 of FIG. 2). The hardware accelerator may be, or may be implemented by, one or more hardware modules, one or more software modules, or various combinations thereof.

In FIG. 4, in operation 410, the hardware accelerator loads the LUT. The hardware accelerator may load a LUT stored in the host, or a LUT generated in real time in the host. For example, the NPU may load a LUT generated in the CPU.

The LUT may store information associated with the reciprocal of an exponential function over a preset range. For example, the LUT may include information associated with values from 1/exp(0) to 1/exp(n), where n represents a positive integer. The LUT may indicate an approximation of the reciprocal of the exponential function, as shown in Table 1 below.

Table 1:

index (id) LUT(id)
0 1.000
1 0.368
2 0.135
3 0.050
4 0.018
5 0.007
6 0.002
7 0.001

Further, the information associated with the reciprocal of the exponential function may include information associated with integers corresponding to the reciprocal of the exponential function based on the number of bits used for quantization of the output data. For example, the output data may be quantized to [0, 255] using 8 bits, and the LUT may include information associated with integers within [0, 255] that are proportional to the reciprocal of the exponential function, as shown in Table 2 below.

Table 2:

index (id) LUT(id)
0 255
1 94
2 35
3 13
4 5
5 2
6 1
7 0

Similarly, the output data may be quantized to [0,15] using 4 bits, and the LUT may include information associated with integers within [0,15] that are proportional to the inverse of the exponential function, as shown in Table 3 below.

Table 3:

index (id) LUT(id)
0 15
1 6
2 2
3 1
4 0
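
The integer LUTs of Tables 2 and 3 can be reproduced with the short sketch below (illustrative only; the helper name make_lut is not from the original text):

    import math

    def make_lut(num_bits, max_index):
        # Quantize 1/exp(idx) to integers in [0, 2^num_bits - 1].
        full_scale = (1 << num_bits) - 1
        return [round(full_scale / math.exp(idx)) for idx in range(max_index + 1)]

    print(make_lut(8, 7))   # [255, 94, 35, 13, 5, 2, 1, 0], as in Table 2
    print(make_lut(4, 4))   # [15, 6, 2, 1, 0], as in Table 3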

In operation 420, the hardware accelerator maps each probability-related data value to an index of the LUT based on the probability-related data distribution. In one example, the probability-related data distribution may be the difference between the maximum probability-related data value and each probability-related data value, but examples are not limited thereto. Here, the probability-related data may be the data to be input to the softmax layer (e.g., the calculation result s described above with reference to fig. 3). The hardware accelerator may compute the difference between the maximum probability-related data value and each probability-related data value and map each probability-related data value to an index in the LUT based on the computed difference.

In one example, the hardware accelerator may map the difference between the largest probability-related data value and each probability-related data value directly to an index in the LUT. For example, where the probability-related data is 990, 991, 992, 990, 995, 993, and 997, the hardware accelerator may directly map, without processing, the difference (e.g., 7, 6, 5, 7, 2, 4, and 0) between the largest probability-related data value (e.g., 997) and each probability-related data value to an index in the LUT.

Further, in a non-limiting example, the host may store multiple LUTs, and the hardware accelerator may load one of the multiple LUTs. The hardware accelerator may determine LUT index ranges based on a maximum difference of the calculated maximum probability-related data value and a difference between each probability-related data value, and load one of the LUTs based on the determined index range.

For example, in the case where the LUTs indicated in table 2 and table 3 above are stored in the host and the probability-related data are 990, 991, 992, 990, 995, 993, and 997, the hardware accelerator may determine the index range as [0, 7] based on the largest difference 7 of the differences 7, 6, 5, 7, 2, 4, and 0 between the largest probability-related data value 997 and each probability-related data value, and then load the LUT of table 2 corresponding to the determined index range as the final LUT. In this example, when loading one of the LUTs, the hardware accelerator may load the LUT after calculating the difference between the largest probability-related data value and each probability-related data value.

In operation 430, the hardware accelerator obtains output data corresponding to each probability-related data value using the LUT. As described above with reference to fig. 3, the output data may be the data output from the softmax layer (e.g., relative probability values indicating the probabilities that the input data of the neural network is classified into the respective classes).

For example, in the case where the input data (or probability-related data) of the softmax layer is 990, 991, 992, 990, 995, 993, and 997, the hardware accelerator may output LUT(7), LUT(6), LUT(5), LUT(7), LUT(2), LUT(4), and LUT(0) as the output data corresponding to the respective probability-related data values. With the LUT of Table 1 above, the output data may be 0.001, 0.002, 0.007, 0.001, 0.135, 0.018, and 1.000. In this example, the input data of the neural network may be classified into the class corresponding to the maximum output data value (e.g., 1.000), that is, the seventh class in this example.
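
For illustration, a minimal sketch of operations 420 and 430 on the example data above (assuming the Table 1 LUT has already been loaded; approx_softmax is a hypothetical helper):

    def approx_softmax(prob_related, lut):
        # Map each value to an index via max(x) - x, then read the LUT (Equation 5).
        max_val = max(prob_related)
        last = len(lut) - 1
        return [lut[min(max_val - v, last)] for v in prob_related]

    lut = [1.000, 0.368, 0.135, 0.050, 0.018, 0.007, 0.002, 0.001]   # Table 1
    data = [990, 991, 992, 990, 995, 993, 997]
    out = approx_softmax(data, lut)    # [0.001, 0.002, 0.007, 0.001, 0.135, 0.018, 1.000]
    predicted = out.index(max(out))    # position 6, i.e. the class of the largest input value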

Here, 2n cycles may be sufficient for the hardware accelerator to perform the softmax operation, where n is the number of probability-related data values. For example, n cycles may be required to obtain the maximum probability-related data value, and n cycles may be required to load the LUT and obtain the output data. Hereinafter, detailed examples of mapping each probability-related data value to an index in the LUT based on the probability-related data distribution will be described with reference to fig. 5 to 7.

Fig. 5 shows a flow chart of an example of a method of mapping each probability-related data value to an index in a LUT based on the difference between the largest probability-related data value and each probability-related data value.

In FIG. 5, the hardware accelerator may load a LUT with index range [0, x_q] from the host (e.g., a CPU).

The hardware accelerator may extract the largest probability-related data value and calculate a difference between the extracted largest probability-related data value and each probability-related data value.

The hardware accelerator may directly map the difference between the largest probability-related data value and each probability-related data value to an index in the LUT.

The hardware accelerator may use the LUT to obtain output data corresponding to each probability-related data value. As described above, the output data may indicate relative probability values indicating the probability of the input data of the neural network being classified into each class.

The hardware accelerator may directly map the difference between the largest probability-related data value and each probability-related data value to an index in the LUT and obtain a relative probability value indicative of the probability that the input data of the neural network is classified into each class without a divider.

Fig. 6 shows a flow chart of an example of a method of scaling the difference between the largest probability related data value and each probability related data value and mapping the scaled difference to an index in a LUT.

In fig. 6, the hardware accelerator may scale the difference between the largest probability-related data value and each probability-related data value (e.g., x) and map the scaled difference to an index in the LUT.

For example, where the difference between the largest probability-related data value and each probability-related data value has a range of [0, 10] and the loaded LUT's index has a range of [0, 20], scaling the difference to fit within the index range may be more effective in improving the accuracy of the output data than mapping the difference directly to the index.

The hardware accelerator may extract the largest probability-related data value and calculate a difference between the largest probability-related data value and each probability-related data value.

The hardware accelerator may scale the difference between the largest probability-related data value and each probability-related data value and map the scaled difference to an index. As an example, in fig. 6, the difference between the largest probability-related data value and each probability-related data value may be scaled to fit into the index range based on the equation idx = a*x + b, where a and b are scaling coefficients, x is the difference, and idx is the scaled difference. In the foregoing example, based on the equation idx = 2x, the hardware accelerator may map the difference between the largest probability-related data value and each probability-related data value to an index.

The hardware accelerator may use the LUT (e.g., by y = LUT[idx]) to obtain output data corresponding to each probability-related data value.
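
A minimal sketch of the scaled mapping of FIG. 6 (illustrative only; the scaling coefficients a and b are assumed to be supplied by the host):

    def scaled_lookup(prob_related, lut, a, b):
        # idx = a * x + b, where x = max(data) - value, clamped to the LUT index range
        max_val = max(prob_related)
        last = len(lut) - 1
        return [lut[min(int(a * (max_val - v) + b), last)] for v in prob_related]

    # With differences in [0, 10] and a LUT indexed over [0, 20], a = 2 and b = 0
    # spread the differences over the full index range (idx = 2x).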

Fig. 7 shows a flowchart of an example of a method of obtaining output data by polynomial approximation using a plurality of LUTs.

In fig. 7, the hardware accelerator may obtain output data by polynomial approximation using a plurality of LUTs.

The hardware accelerator can flexibly control the approximation accuracy according to the selected polynomial. As an example, the hardware accelerator may load a plurality of LUTs (or a LUT family), e.g., LUT_a, LUT_b, LUT_c, and so on. For example, depending on the situation, the hardware accelerator may select between a first-order polynomial approximation method 710 (e.g., y = LUT_a[idx] * x + LUT_b[idx]) and a second-order polynomial approximation method 720 (e.g., y = LUT_a[idx] * x^2 + LUT_b[idx] * x + LUT_c[idx]) to flexibly control the approximation accuracy.
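
A sketch of the two approximation options (illustrative only; the contents of LUT_a, LUT_b, and LUT_c are whatever coefficient tables the host provides):

    def poly1_approx(x, idx, lut_a, lut_b):
        # First-order method 710: y = LUT_a[idx] * x + LUT_b[idx]
        return lut_a[idx] * x + lut_b[idx]

    def poly2_approx(x, idx, lut_a, lut_b, lut_c):
        # Second-order method 720: y = LUT_a[idx] * x^2 + LUT_b[idx] * x + LUT_c[idx]
        return lut_a[idx] * x * x + lut_b[idx] * x + lut_c[idx]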

Fig. 8 shows a flowchart of an example of a method of obtaining output data using a shift register.

In fig. 8, the hardware accelerator may map the clock of the shift register to an index and obtain the output data using the shift register.

The hardware accelerator may extract the largest probability-related data value and calculate a difference between the extracted largest probability-related data value and each probability-related data value.

The hardware accelerator may map the clock of the shift register to an index based on the calculated difference between the largest probability-related data value and each probability-related data value.

The hardware accelerator may use the shift register to obtain output data corresponding to the clock count that corresponds to the difference between the largest probability-related data value and each probability-related data value. For example, where a 4-bit shift register is used, the hardware accelerator may obtain the outputs indicated in Table 4 below; a brief illustrative sketch follows the table.

Table 4:

index (idx)    LUT(idx)
0              15 (binary 1111)
1              7  (binary 0111)
2              3  (binary 0011)
3              1  (binary 0001)
4              0  (binary 0000)
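Under the assumption that the register is loaded with all ones and shifted right once per clock (so that each clock roughly halves the value), the behavior of Table 4 can be sketched as follows:

    def shift_register_output(diff, width=4):
        # Load the register with all ones (binary 1111 for width 4) and shift
        # right once per clock; the number of clocks equals the difference
        # from the largest probability-related data value.
        value = (1 << width) - 1
        for _ in range(min(diff, width)):
            value >>= 1
        return value

    print([shift_register_output(d) for d in range(5)])  # [15, 7, 3, 1, 0]

This variant approximates the decaying exponential with powers of two, trading some accuracy for a very simple datapath.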

Fig. 9 shows a flowchart of an example of a method of generating a LUT.

In fig. 9, the host (e.g., CPU) may extract the largest probability-related data value and calculate the difference between the extracted largest probability-related data value and each probability-related data value. Alternatively, the host may receive the difference between the largest probability-related data value and each probability-related data value from a hardware accelerator (e.g., NPU) without directly calculating the difference.

The host may extract a maximum difference among differences obtained by calculating a difference between the maximum probability-related data value and each probability-related data value. For example, in the case where the probability-related data is 990, 991, 992, 990, 995, 993, and 997, the host may determine the largest difference among differences 7, 6, 5, 7, 2, 4, and 0 between the largest probability-related data value 997 and the respective probability-related data values as 7.

The host may generate the LUT based on the maximum difference and the number of bits used for quantization of the output data. In the foregoing example, the host may determine the LUT index range based on the maximum difference being 7 and, based on the number of bits used for quantization of the output data, generate information associated with integers that are proportional to the reciprocal of the exponential function corresponding to each index. For example, the host may generate the LUT indicated in table 2 above based on the maximum difference being 7 and the number of bits being 8 bits.
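A sketch of this host-side generation step, under the assumption that each entry is round((2^bits - 1) * exp(-index)); the concrete rounding and table contents may differ per implementation:

    import math

    def host_generate_lut(prob_data, out_bits=8):
        # Index range follows the maximum difference from the largest value;
        # each entry is an integer proportional to the reciprocal of the
        # exponential function, i.e. to exp(-index).
        max_diff = max(prob_data) - min(prob_data)
        scale = (1 << out_bits) - 1
        return [round(scale * math.exp(-d)) for d in range(max_diff + 1)]

    print(host_generate_lut([990, 991, 992, 990, 995, 993, 997], out_bits=8))
    # [255, 94, 35, 13, 5, 2, 1, 0] under these assumptions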

Fig. 10 shows a flow chart of an example of a method of determining an effective quantization boundary.

In fig. 10, the hardware accelerator may determine the effective quantization boundary as shown in equation 6.

Equation 6:

x_q = ln(2^w - 1)

In equation 6, w represents the number of bits used for quantization of the output data. For example, in the case of an int8 hardware accelerator, w may be determined to be 7; in the case of an int4 hardware accelerator, w may be determined to be 3; and in the case of a uint4 hardware accelerator, w may be determined to be 4.

The hardware accelerator may determine the quantization boundary based on equation 6 above and efficiently use the available quantization range based on the determined quantization boundary. That is, in such an example, the LUT may provide a higher level of accuracy with the same number of quantization bits. When the quantization boundary is determined based on equation 6 above, the contents of the LUT may need to be recalculated based on x_q.
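As a small numeric check of equation 6 (values rounded; w is the bit width discussed above):

    import math

    def quantization_boundary(w):
        # Equation 6: x_q = ln(2^w - 1). Beyond x_q, the quantized value of
        # (2^w - 1) * exp(-x) falls below 1 and rounds toward zero anyway.
        return math.log((1 << w) - 1)

    print(round(quantization_boundary(7), 3))  # int8 accelerator  -> 4.844
    print(round(quantization_boundary(3), 3))  # int4 accelerator  -> 1.946
    print(round(quantization_boundary(4), 3))  # uint4 accelerator -> 2.708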

Fig. 11 shows a flow chart of an example of a method of obtaining normalized output data.

The operation of the hardware accelerator described above with reference to fig. 1 to 10 is applicable to the example of fig. 11, and thus a more detailed and repetitive description will be omitted herein.

The hardware accelerator can estimate the flexibility maximum through the approximation represented by equation 5 above. However, in an example in which a complex task is performed using the approximate flexibility maximum of equation 5, the final accuracy of the task may drop, due to the approximation error, to a degree that undermines the normal performance of the task. The hardware accelerator may therefore obtain normalized output data by applying a compensation coefficient α to the approximate flexibility maximum. For example, the approximate flexibility maximum may be a scaled version of the actual flexibility maximum, so the hardware accelerator may obtain the normalized output data by rescaling the approximate flexibility maximum. The normalized output data can be obtained as shown in equation 7.

Equation 7:

y(x) = α * y~(x)

In equation 7, y~(x) denotes the flexibility maximum estimated by equation 5 above, α denotes the compensation coefficient, and y(x) denotes the normalized output data. The compensation coefficient α can be calculated as shown in equation 8.

Equation 8:

α = 1 / (y~(x_1) + y~(x_2) + ... + y~(x_N))

In equation 8, y~ represents the approximate flexibility maximum. The hardware accelerator may compute the sum of the approximate flexibility maxima (e.g., sum_rexp) and map the sum to an index of the compensation coefficient α (e.g., alpha_idx). The hardware accelerator may map the sum of the approximate flexibility maxima to the index of the compensation coefficient α using a preset function. For example, the hardware accelerator may map the sum of the approximate flexibility maxima to the index of the compensation coefficient α as shown in equation 9.

Equation 9:

alpha_idx=round(sum_rexp)

Equation 9 is provided merely as an example of mapping the sum of the approximate flexibility maxima to an index of the compensation coefficient α. As non-limiting examples, the mapping may be performed using various functions f(sum_rexp) (e.g., alpha_idx = round(sum_rexp) - 1, alpha_idx = ceil(sum_rexp) - 1, alpha_idx = floor(sum_rexp), etc.).

The hardware accelerator may obtain the compensation coefficient α corresponding to the index using a compensation coefficient LUT (e.g., LUT_expln) that includes information associated with the compensation coefficient α (e.g., alpha), and obtain normalized output data corresponding to the probability-related data by multiplying each approximate flexibility maximum by the compensation coefficient α.
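A minimal sketch of this compensation step, assuming α approximates 1/sum_rexp, the index mapping alpha_idx = round(sum_rexp) - 1, and a hypothetical LUT scale; the concrete contents and scales of the compensation coefficient LUTs shown in the tables below are implementation-specific:

    import numpy as np

    def normalized_output(approx, lut_alpha, alpha_scale):
        # 'approx' holds the approximate flexibility-maximum values y~(x_i).
        sum_rexp = float(np.sum(approx))
        alpha_idx = int(np.clip(round(sum_rexp) - 1, 0, len(lut_alpha) - 1))
        alpha = lut_alpha[alpha_idx] / alpha_scale  # dequantize the stored alpha
        return alpha * np.asarray(approx)

    # Hypothetical 16-entry compensation LUT: a quantized 1/(idx + 1) at scale 15.
    lut_alpha = [round(15 / (i + 1)) for i in range(16)]
    y = normalized_output([1.0, 0.6, 0.3, 0.1], lut_alpha, alpha_scale=15.0)
    print(y, y.sum())  # the compensated values sum to roughly 1, up to quantization error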

For example, a compensation coefficient LUT corresponding to 16 indices using signed 16-bit values may be indicated in table 5 below.

Table 5:

For example, a compensation coefficient LUT corresponding to 16 indices using unsigned 8-bit values may be indicated in table 6 below.

Table 6:

For example, a compensation coefficient LUT corresponding to 16 indices using signed 4-bit values may be indicated in table 7 below.

Table 7:

index (idx)    LUT_expln(idx)
0              15
1              8
2              5
3              4
4              3
5              3
6              2
7              2
8              2
9              2
10             1
11             1
12             1
13             1
14             1
15             0

For example, a compensation coefficient LUT corresponding to 7 indices using signed 2-bit values may be indicated in table 8 below.

Table 8:

index (idx)    LUT_expln(idx)
0              3
1              2
2              1
3              1
4              1
5              1
6              0

Further, to improve accuracy, the scaling coefficient of the compensation coefficient LUT may be adjusted. For example, in the case where the compensation coefficient LUT is scaled to LUT_expln[0, scale * sum_q], the index of the compensation coefficient α may be determined based on alpha_idx = round(scale * sum_rexp) - 1, where [0, sum_q] may represent the index range of the original compensation coefficient LUT and scale may represent the scaling coefficient.
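A one-function sketch of the scaled index computation (the names scale and sum_q, and the clamping to the LUT range, are assumptions of this illustration):

    def scaled_alpha_index(sum_rexp, scale, lut_len):
        # alpha_idx = round(scale * sum_rexp) - 1, clamped to the LUT range.
        idx = round(scale * sum_rexp) - 1
        return max(0, min(idx, lut_len - 1))

    print(scaled_alpha_index(sum_rexp=2.3, scale=4, lut_len=64))  # -> 8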

The neural network device, host, hardware accelerator, memory, neural network device 200, host 210, hardware accelerator 230, memory 220, and other devices, apparatuses, units, modules, and other components described herein with respect to fig. 1-11 are implemented by hardware components. Examples of hardware components that may be used to perform the operations described in this application include, where appropriate: a controller, a sensor, a generator, a driver, a memory, a comparator, an arithmetic logic unit, an adder, a subtractor, a multiplier, a divider, an integrator, and any other electronic component configured to perform the operations described herein. In other examples, one or more of the hardware components that perform the operations described in this application are implemented by computing hardware (e.g., by one or more processors or computers). A processor or computer may be implemented by one or more processing elements, such as an array of logic gates, a controller and arithmetic logic unit, a digital signal processor, a microcomputer, a programmable logic controller, a field programmable gate array, a programmable logic array, a microprocessor, or any other device or combination of devices that is configured to respond to and execute instructions in a defined manner to achieve a desired result. In one example, a processor or computer includes or is connected to one or more memories that store instructions or software for execution by the processor or computer. A hardware component implemented by a processor or a computer may execute instructions or software (such as an Operating System (OS) and one or more software applications running on the OS) for performing the operations described in this application. The hardware components may also access, manipulate, process, create, and store data in response to execution of instructions or software. For simplicity, the singular terms "processor" or "computer" may be used in the description of the examples described in this application, but in other examples, multiple processors or computers may be used, or a processor or computer may include multiple processing elements or multiple types of processing elements, or both. For example, a single hardware component or two or more hardware components may be implemented by a single processor, or two or more processors, or a processor and a controller. One or more hardware components may be implemented by one or more processors, or processors and controllers, and one or more other hardware components may be implemented by one or more other processors, or other processors and other controllers. One or more processors, or processors and controllers, may implement a single hardware component or two or more hardware components. The hardware components may have any one or more of different processing configurations, examples of which include: single processors, independent processors, parallel processors, Single Instruction Single Data (SISD) multiprocessing, Single Instruction Multiple Data (SIMD) multiprocessing, Multiple Instruction Single Data (MISD) multiprocessing, and Multiple Instruction Multiple Data (MIMD) multiprocessing.

The methods illustrated in fig. 1-11 to perform the operations described in this application are performed by computing hardware (e.g., by one or more processors or computers) implemented as executing instructions or software as described above to perform the operations described in this application as being performed by the methods. For example, a single operation or two or more operations may be performed by a single processor or two or more processors, or a processor and a controller. One or more operations may be performed by one or more processors, or processors and controllers, and one or more other operations may be performed by one or more other processors, or other processors and other controllers. One or more processors, or a processor and a controller, may perform a single operation or two or more operations.

Instructions or software for controlling a processor or computer to implement hardware components and perform methods as described above are written as computer programs, code segments, instructions, or any combination thereof, to individually or collectively instruct or configure the processor or computer to operate as a machine or special purpose computer to perform operations performed by hardware components and methods as described above. In one example, the instructions or software include machine code that is directly executed by a processor or computer (such as machine code generated by a compiler). In another example, the instructions or software comprise high-level code that is executed by a processor or computer using an interpreter. A programmer of ordinary skill in the art can readily write instructions or software based on the block diagrams and flow diagrams shown in the figures and the corresponding description in the specification, which disclose algorithms for performing the operations performed by the hardware components and methods described above.

Instructions or software for controlling a processor or computer to implement hardware components and perform methods as described above, as well as any associated data, data files, and data structures, are recorded, stored, or fixed in or on one or more non-transitory computer-readable storage media. Examples of non-transitory computer-readable storage media include: read-only memory (ROM), programmable read-only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), flash memory, non-volatile memory, CD-ROM, CD-R, CD+R, CD-RW, CD+RW, DVD-ROM, DVD-R, DVD+R, DVD-RW, DVD+RW, DVD-RAM, BD-ROM, BD-R, BD-R LTH, BD-RE, Blu-ray or optical disk storage, hard disk drive (HDD), solid state drive (SSD), card-type memory (such as a multimedia card or a mini-card (e.g., Secure Digital (SD) or eXtreme Digital (XD))), magnetic tape, floppy disk, magneto-optical data storage device, optical data storage device, and any other device configured to store and provide instructions or software and any associated data, data files, and data structures to a processor or computer in a non-transitory manner such that the processor or computer can execute the instructions.

Although the present disclosure includes specific examples, it will be apparent to those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the claims and their equivalents. The examples described herein are to be considered merely as illustrative and not restrictive. The description of features or aspects in each example will be considered applicable to similar features or aspects in other examples. Suitable results may be achieved if the described techniques are performed in a different order and/or if components in the described systems, architectures, devices, or circuits are combined in a different manner and/or replaced or supplemented by other components or their equivalents.

Therefore, the scope of the disclosure is defined not by the detailed description but by the claims and their equivalents, and all changes within the scope of the claims and their equivalents are to be construed as being included in the disclosure.
