Method, processor, device and readable storage medium for processing task


1. A method for processing a computing task by a heterogeneous multi-core processor that includes a general purpose processing core and a dedicated acceleration core, the method comprising:

for a predetermined type of computing task, allocating a plurality of instruction blocks in the computing task to the general purpose processing core and the dedicated acceleration core;

transmitting, by a control unit in the dedicated acceleration core, an instruction completion indication of a predetermined co-processing unit coupled thereto to at least one general purpose processing unit of the general purpose processing core through a signal path configured to couple the at least one general purpose processing unit to the control unit; and

if it is determined that the instruction completion indication is received, obtaining, by the general purpose processing core, data in a first on-chip cache in the dedicated acceleration core for completing the computing task through a data path configured to couple the general purpose processing core to the first on-chip cache.

2. The method of claim 1, wherein allocating the plurality of instruction blocks to the general purpose processing core and the dedicated acceleration core comprises:

allocating the plurality of instruction blocks to the general purpose processing core and the dedicated acceleration core based on a core identification.

3. The method of claim 1, wherein communicating an instruction completion indication to the general purpose processing core comprises:

replicating, by an aggregate distribution component within the general purpose processing core, the instruction completion indication and sending it to the at least one general purpose processing unit.

4. The method of claim 1, wherein acquiring the data comprises:

sending an access address for the data to a routing fabric component within the general purpose processing core;

determining, by the routing fabric component, whether to access a second on-chip cache within the general purpose processing core or the first on-chip cache based on the access address; and

if it is determined that the first on-chip cache is to be accessed, acquiring the data from the access address in the first on-chip cache.

5. The method of claim 1, further comprising:

in response to the at least one general purpose processing unit completing an instruction block operation, sending an instruction completion indication to the aggregate distribution component; and

aggregating, by the aggregate distribution component, the instruction completion indications for transmission to a predetermined co-processing unit.

6. A heterogeneous multi-core processor, comprising:

a dedicated acceleration core comprising a first on-chip cache and a control unit coupled to the first on-chip cache;

a general purpose processing core comprising a routing fabric component and at least one general purpose processing unit coupled to the routing fabric component;

a data path configured to couple the routing fabric component to the first on-chip cache to enable the at least one general purpose processing unit to access the first on-chip cache; and

a signal path configured to couple the at least one general purpose processing unit to the control unit for transmitting an instruction completion indication related to the access.

7. The processor of claim 6, the general purpose processing core further comprising:

an aggregate distribution component configured to aggregate instruction completion indications received from the at least one general purpose processing unit for transmission to the control unit, or to dispatch an instruction completion indication received from a predetermined co-processing unit to the at least one general purpose processing unit.

8. The processor of claim 6, the general purpose processing core further comprising: a second on-chip cache coupled to the routing fabric component, the routing fabric component being configured to access the first or second on-chip cache based on received address information.

9. The processor of claim 6, the dedicated acceleration core further comprising:

at least one co-processing unit; and

at least one direct memory access unit.

10. The processor of claim 6, further comprising:

a scheduler configured to:

acquire a computing task to be processed;

if it is determined that the type of the computing task is a predetermined type, determine a core group comprising an available dedicated acceleration core and an available general purpose processing core;

configure an operating mode of the cores of the core group to a predetermined mode; and

allocate a plurality of instruction blocks of the computing task to the available dedicated acceleration core and the available general purpose processing core for processing the computing task.

11. The processor of claim 10, wherein the available general purpose processing core is configured to:

wait for an instruction completion signal from a predetermined co-processing unit in the dedicated acceleration core; and

in response to receiving the instruction completion signal of the predetermined co-processing unit, execute an instruction block allocated to the general purpose processing core.

12. The processor of claim 11, wherein the general purpose processing core is further configured to:

if it is determined that execution of the instruction block in the general purpose processing core is complete, send an instruction completion signal to a target co-processing unit in the dedicated acceleration core.

13. The processor of claim 10, wherein the dedicated acceleration core is configured to:

if it is determined that execution of the instruction block in a co-processing unit is complete, send an instruction completion indication to the general purpose processing core; and

receive an instruction completion indication from the general purpose processing core, the instruction completion indication indicating to which co-processing units within the dedicated acceleration core instruction completion indications are to be sent.

14. An electronic device, comprising:

at least one heterogeneous multi-core processor according to claim 6; and

a memory communicatively coupled to the at least one heterogeneous multi-core processor; wherein

the memory stores instructions executable by the at least one heterogeneous multi-core processor, the instructions being executed by the at least one heterogeneous multi-core processor to enable the at least one heterogeneous multi-core processor to perform the method of any one of claims 1-5.

15. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-5.

16. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-5.

Background

Artificial Intelligence (AI) algorithms are widely applied in many important internet applications, such as speech recognition, image recognition, and natural language processing, and can achieve better accuracy and effect than traditional methods. The deep neural network is the most widely used class of AI algorithms; when it is applied, large-scale multiply-add computation must be performed on large amounts of data, making it a typical compute-intensive application.

In the last two decades, constraints such as semiconductor processes and heat dissipation have made it difficult to improve processor performance simply by raising the clock frequency, so most processors adopt multi-core technology to improve computing performance. Multi-core technology integrates several identical or different processor cores in one processor; when a task is processed, the computing task is distributed to the processing cores by scheduling and completed jointly, and the computing performance of the whole processor is improved through parallel computation across the cores. Many problems remain to be solved when processing AI algorithms with a multi-core processor.

Disclosure of Invention

The present disclosure provides a method of processing a task, a processor, a device and a readable storage medium.

According to a first aspect of the present disclosure, a method is provided for processing a computing task by a heterogeneous multi-core processor that includes a general purpose processing core and a dedicated acceleration core. The method comprises: for a predetermined type of computing task, allocating a plurality of instruction blocks in the computing task to the general purpose processing core and the dedicated acceleration core; transmitting, by a control unit in the dedicated acceleration core, an instruction completion indication of a predetermined co-processing unit coupled thereto to at least one general purpose processing unit of the general purpose processing core through a signal path configured to couple the at least one general purpose processing unit to the control unit; and if it is determined that the instruction completion indication is received, obtaining, by the general purpose processing core, data in a first on-chip cache in the dedicated acceleration core for completing the computing task through a data path configured to couple the general purpose processing core to the first on-chip cache.

According to a second aspect of the present disclosure, a heterogeneous multi-core processor is provided. The heterogeneous multi-core processor includes: a dedicated acceleration core comprising a first on-chip cache and a control unit coupled to the first on-chip cache; a general purpose processing core comprising a routing fabric component and at least one general purpose processing unit coupled to the routing fabric component; a data path configured to couple the routing fabric component to the first on-chip cache to enable the at least one general purpose processing unit to access the first on-chip cache; and a signal path configured to couple the at least one general purpose processing unit to the control unit for transmitting an instruction completion indication related to the access.

According to a third aspect of the present disclosure, an electronic device is provided. The electronic device comprises at least one heterogeneous multi-core processor according to the second aspect of the present disclosure; and a memory communicatively coupled to the at least one heterogeneous multi-core processor; wherein the memory stores instructions executable by the at least one heterogeneous multi-core processor, the instructions being executed by the at least one heterogeneous multi-core processor to enable the at least one heterogeneous multi-core processor to perform the method according to the first aspect of the present disclosure.

According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method according to the first aspect of the present disclosure.

According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method according to the first aspect of the present disclosure.

It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.

Drawings

The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:

FIG. 1 shows a schematic diagram of a heterogeneous multi-core processor 100 in which embodiments of the present disclosure can be implemented;

FIG. 2 illustrates a schematic diagram of an example 200 of a general purpose processing core and a dedicated acceleration core from the perspective of the general purpose processing core, according to some embodiments of the present disclosure;

FIG. 3 illustrates a schematic diagram of an example 300 of a general purpose processing core and a dedicated acceleration core from the perspective of the dedicated acceleration core, according to some embodiments of the present disclosure;

FIG. 4 illustrates a flow diagram of a method 400 for processing tasks, according to some embodiments of the present disclosure;

FIG. 5 illustrates a flow diagram of a method 500 for processing tasks, according to some embodiments of the present disclosure;

fig. 6 illustrates a block diagram of a device 600 capable of implementing multiple embodiments of the present disclosure.

Detailed Description

Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.

In describing embodiments of the present disclosure, the term "include" and its variants should be interpreted as open-ended, i.e., "including but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first," "second," and the like may refer to different or the same objects. Other explicit and implicit definitions may also appear below.

Hardware accelerators may be divided into homogeneous multi-core processors and heterogeneous multi-core processors according to the kind of computing resources they integrate. A homogeneous multi-core processor integrates multiple identical processing cores, all of which can undertake the same computing tasks; a heterogeneous multi-core processor integrates processing cores with different structures, each suited to processing different computing tasks.

Currently, AI technology is developing rapidly and its algorithms iterate quickly, so a neural network accelerator needs both high computational efficiency and strong flexibility to remain compatible with the computation of future network structures.

A heterogeneous multi-core neural network accelerator is characterized structurally by integrating a dedicated acceleration core and a general purpose processing core at the same time. The dedicated acceleration core integrates a large number of processing units such as multipliers and adders and has dedicated computation pipelines, so it can efficiently complete the computation of typical basic AI neural network structures such as convolution, matrix multiplication, and vector computation. The general purpose processing core is convenient to program and flexible to use, can realize various computing functions, and can adapt to the continuous evolution of AI algorithms. During computation, the neural network accelerator uses the dedicated acceleration core to process typical high-compute-density operators such as convolution and matrix multiplication to achieve higher processing performance, and uses the general purpose processing core to process the operators that the dedicated accelerator cannot handle, thereby avoiding the bandwidth bottleneck and resulting performance loss of transferring those operators back to the host.

In actual computation, the neural network accelerator issues different computing tasks to each processing core according to the characteristics of the tasks and the structure of the integrated processing cores, and the computation is completed in sequence. However, because of data dependencies, the computation can only be executed serially on the neural network accelerator, so its computing power is difficult to exploit fully; in addition, switching the processing core that executes the computation brings data migration overhead and corresponding switching latency.

In existing heterogeneous multi-core neural network accelerators, synchronization between processing cores is usually achieved by having a scheduling controller issue computing tasks to the processing cores one at a time. This approach introduces a fixed overhead whenever sub-computing tasks are switched, which affects the overall performance of the multi-core neural network accelerator. For example, suppose the scheduling controller receives two sub-computing tasks with a dependency relationship, a first sub-computing task and a second sub-computing task. To improve computational throughput, a processing core processes data internally in a pipelined manner. In this mode, the computation can be divided into three stages: inflow, pipelined processing, and outflow; only in the pipelined processing stage are all computing resources of the core active and the highest throughput reached. When sub-computing tasks are switched, the synchronization introduced by the scheduling controller interrupts the pipeline, so computing resources sit idle between the first and second sub-computing tasks. Since this idle time is essentially fixed while the total computation time shrinks as the accelerator's computing power grows, the proportion of this fixed overhead in the whole computing task becomes increasingly significant; for instance, with a fixed synchronization overhead t_s per switch and a pure computation time T per sub-task, the wasted fraction t_s/(T + t_s) grows as T shrinks.

Another problem with this approach is that overall memory access bandwidth has a large impact on computational performance. The clock frequency and performance of modern processors have increased faster than anticipated, but memory access speed has improved much more slowly; although caches and prefetching help reduce the average access time, they do not fundamentally solve the problem, and the gap between processor and memory keeps widening. In a heterogeneous multi-core architecture, each processing core generally integrates its own on-chip memory, which gives efficient access to internal data, but data exchange between processing cores is generally completed through a unified off-chip memory or on-chip cache. Data exchange between sub-computing tasks therefore often has to pass through a unified on-chip interconnect fabric, an on-chip cache, or off-chip memory. Given their limited bandwidth and large latency, and the large data volumes of neural network computing tasks, data access often becomes the system bottleneck and limits overall computing performance.

This approach also makes it difficult to reuse the computing and memory resources of different processing cores. Because of data dependencies within a computing task, different processing cores can rarely work at the same time; even if multithreading is used to reduce the impact of data dependencies, the time consumed by different processing cores within a computing task differs, so even a very carefully arranged task pipeline leaves some processing cores idle and wastes computing resources.

To address at least the above issues, an improved scheme for processing computing tasks with a heterogeneous multi-core processor is presented according to embodiments of the present disclosure. In this scheme, for a predetermined type of computing task, a plurality of instruction blocks in the computing task are distributed to a general purpose processing core and a dedicated acceleration core. The control unit in the dedicated acceleration core then transmits an instruction completion indication of a predetermined co-processing unit coupled thereto to at least one processing unit of the general purpose processing core via a signal path. If the general purpose processing core receives the instruction completion indication, it acquires data from a first on-chip cache in the dedicated acceleration core via a data path in order to complete the computing task. In this way, the overhead of switching computing tasks is avoided, the dependence on network bandwidth and storage bandwidth is reduced, the utilization of storage and computing resources is improved, and the computing performance of the accelerator is exploited to the greatest extent.

In the present disclosure, the term "general purpose processing core" refers to a processing core that is not designed for a specific computing task and can complete various computing tasks; it generally adopts a CPU-like architecture and offers strong flexibility. The term "dedicated acceleration core" refers to a processing core specially optimized for a specific task: it can efficiently process the corresponding computing tasks but cannot complete other types of computing tasks. The dedicated acceleration core integrates a Direct Memory Access (DMA) unit and several co-processing units, which respectively complete data access and various specially optimized computing tasks. The co-processing units maintain their respective execution orders through asynchronous cooperative instructions, so that the computation steps are completed in a pipelined manner, and the co-processing units inside the dedicated acceleration core exchange data through an internal on-chip cache.

FIG. 1 shows a schematic diagram of a heterogeneous multi-core processor 100 in which embodiments of the present disclosure can be implemented. Processor 100 includes a scheduling controller 101, a general purpose processing core 105, a dedicated acceleration core 106, and an on-chip cache 103, which communicate via an on-chip interconnect fabric 104. The processor 100 also includes an off-chip memory interface 102 for connecting to off-chip memory. The off-chip memory may be any suitable memory or memory module packaged with the processor 100 for the processor 100 to exchange data with external devices.

The scheduling controller 101 in the processor 100 controls the processing of computing tasks within the processor 100 and allocates computing tasks to the general purpose processing core 105 and the dedicated acceleration core 106. When executing a computing task, the scheduling controller acquires the computing task to be processed. If it determines that the type of the computing task is a predetermined type, such as a co-processing task type, the scheduling controller obtains from the processor 100 a core group that includes an available dedicated acceleration core and an available general purpose processing core. The scheduling controller then configures the operating mode of the cores in the core group to a predetermined mode, such as a cooperative mode, so that the core group processes the computing task cooperatively. In this way, different computing tasks can be processed cooperatively, the time for processing tasks is reduced, and storage and computing resources are saved.

The scheduling controller then allocates a plurality of instruction blocks of the computing task to the available dedicated acceleration core and the available general purpose processing core for processing the computing task.
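
To make this scheduling behaviour concrete, the following C++ sketch shows how a scheduler might form a cooperative core group and mark it for a predetermined-type task. It is illustrative only: the types Core, CoreGroup, and InstructionBlock and the selection policy are assumptions, not the structures used by the processor 100.

#include <optional>
#include <vector>

// Hypothetical core descriptor; the patent does not define this structure.
enum class CoreKind { GeneralPurpose, DedicatedAcceleration };
enum class CoreMode { Normal, Cooperative };

struct Core {
    int id;
    CoreKind kind;
    bool busy = false;
    CoreMode mode = CoreMode::Normal;
};

struct InstructionBlock {
    int targetCoreId;            // core identification used for allocation (claim 2)
    std::vector<unsigned> code;
};

struct CoreGroup { Core* accel; Core* general; };

// Form a core group of one available dedicated acceleration core and one
// available general purpose processing core, and switch both to the
// cooperative (predetermined) operating mode.
std::optional<CoreGroup> formCooperativeGroup(std::vector<Core>& cores) {
    Core* accel = nullptr;
    Core* general = nullptr;
    for (auto& c : cores) {
        if (c.busy) continue;
        if (c.kind == CoreKind::DedicatedAcceleration && !accel) accel = &c;
        if (c.kind == CoreKind::GeneralPurpose && !general) general = &c;
    }
    if (!accel || !general) return std::nullopt;
    accel->busy = general->busy = true;
    accel->mode = general->mode = CoreMode::Cooperative;
    return CoreGroup{accel, general};
}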

The on-chip cache 103 is used to store data accessed by the general purpose processing core 105 and the special purpose acceleration core 106. The processing of data within processor 100, particularly inter-core data, may be expedited by providing on-chip cache 103.

The general purpose processing core 105 includes at least one general purpose processing unit 107, a routing fabric component 108, and an on-chip cache 109. The at least one general purpose processing unit 107 is coupled to a routing fabric component 108. The routing fabric component 108 is coupled to an on-chip cache 109.

To enable the general purpose processing core 105 to access the on-chip cache 112 within the dedicated acceleration core 106, a data path is provided between the routing fabric component 108 and the on-chip cache 112. In some embodiments, the data path is configured as an AXI bus. In some embodiments, the data path is configured as an APB bus. In some embodiments, a custom bus structure may be provided as desired. The above examples are intended to be illustrative only and do not limit the disclosure.

With the data path in place, the routing fabric component 108 is also coupled to the on-chip cache 112; it determines whether to access the on-chip cache 109 or the on-chip cache 112 based on the memory address in the access instruction received from the general purpose processing unit 107. For convenience of description, the on-chip cache 112 is referred to as the first on-chip cache, and the on-chip cache 109 as the second on-chip cache. In this way, caches on different cores can be accessed as needed, which accelerates data processing.
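
As a rough illustration of this routing decision, the sketch below decodes an access address against two address windows, one for the second on-chip cache 109 and one for the first on-chip cache 112 reached over the added data path. The window bases, sizes, and names are assumptions made for illustration; the patent does not specify an address map.

#include <cstdint>

// Hypothetical address windows; real values would come from the SoC address map.
constexpr uint64_t kLocalCacheBase  = 0x00000000;  // second on-chip cache 109
constexpr uint64_t kLocalCacheSize  = 0x00100000;
constexpr uint64_t kRemoteCacheBase = 0x40000000;  // first on-chip cache 112 in the acceleration core
constexpr uint64_t kRemoteCacheSize = 0x00400000;

enum class CacheTarget { LocalSecondCache, RemoteFirstCache, Invalid };

// The routing fabric component selects the target cache from the access address.
CacheTarget route(uint64_t addr) {
    if (addr >= kLocalCacheBase && addr < kLocalCacheBase + kLocalCacheSize)
        return CacheTarget::LocalSecondCache;
    if (addr >= kRemoteCacheBase && addr < kRemoteCacheBase + kRemoteCacheSize)
        return CacheTarget::RemoteFirstCache;   // forwarded over the added data path
    return CacheTarget::Invalid;
}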

An aggregate distribution component 110 is also disposed within the general purpose processing core 105. The aggregate distribution component 110 connects the processing units within the general purpose processing core. It is also connected, via a signal path, to the control unit within the dedicated acceleration core 106, and is used to send an instruction completion indication to a co-processing unit within the dedicated acceleration core or to receive an instruction completion indication of a predetermined co-processing unit from the control unit. In this manner, the transfer of indications between the two processing cores can be accelerated. In some embodiments, the signal path is an AXI bus. In some embodiments, the signal path is a user-defined communication line, for example one comprising a single valid-signal line and an eight-bit data line. The above examples are intended to illustrate the present disclosure and do not limit it.

In operation, when another co-processing unit sends an instruction completion indication over the signal path, the aggregate distribution component 110 copies the indication into multiple copies and distributes them to all general purpose processing units of the general purpose processing core 105. When the general purpose processing units generate instruction completion indications, the aggregate distribution component 110 collects the indication from each processing unit, aggregates them, and sends the result to the control unit 118, which then forwards it to the corresponding co-processing unit.
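
The broadcast and aggregation behaviour of the aggregate distribution component 110 can be modelled roughly as follows; the class name AggregateDispatch, the CompletionIndication structure, and the reset policy are assumptions made only for this sketch.

#include <cstddef>
#include <vector>

// Illustrative model of an instruction completion indication.
struct CompletionIndication {
    int sourceUnit;                // unit that finished its instruction block
    std::vector<int> targetUnits;  // co-processing units to be notified (claim 13)
};

class AggregateDispatch {
public:
    explicit AggregateDispatch(std::size_t numGeneralUnits)
        : reported_(numGeneralUnits, false) {}

    // Downstream: replicate an indication from the control unit into one copy
    // per general purpose processing unit.
    std::vector<CompletionIndication>
    broadcast(const CompletionIndication& ind) const {
        return std::vector<CompletionIndication>(reported_.size(), ind);
    }

    // Upstream: collect per-unit completion reports; return true once every
    // general purpose processing unit has reported, i.e. the aggregated
    // indication may now be forwarded to the control unit 118.
    bool collect(std::size_t unitIndex) {
        reported_.at(unitIndex) = true;
        for (bool r : reported_) if (!r) return false;
        reported_.assign(reported_.size(), false);  // reset for the next block
        return true;
    }

private:
    std::vector<bool> reported_;
};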

To achieve the above, asynchronous cooperative instructions are added to the general purpose processing core 105 in the present disclosure. The asynchronous cooperative instructions comprise two instructions: a wait instruction and a completion instruction. The wait instruction comprises two main fields: field 1 indicates that the instruction type is a wait instruction, and field 2 indicates which co-processing units the awaited completion signals come from; when the general purpose processing core executes this instruction, it continues subsequent work only after all co-processing units designated in the instruction have issued completion signals. The completion instruction also comprises two main fields: field 1 indicates that the instruction type is a completion instruction, and field 2 indicates to which co-processing units completion signals need to be sent; when the general purpose processing core executes this instruction, it sends a completion signal to every co-processing unit indicated in the instruction. Through these instructions, cooperative task processing between cores of different structures can be realized.
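
One possible encoding of the two asynchronous cooperative instructions is sketched below. Only the existence of a type field (field 1) and a co-processing-unit field (field 2) comes from the description above; the 8-bit mask width and the constant names are assumptions.

#include <cstdint>

// Field 1: instruction type.
enum class CoopOp : uint8_t {
    Wait   = 0,  // block until the designated co-processing units have signalled completion
    Signal = 1   // send a completion signal to the designated co-processing units
};

// Field 2: one bit per co-processing unit of the dedicated acceleration core
// (bit i set -> unit i is designated). The 8-bit mask width is an assumption.
struct CoopInstruction {
    CoopOp  op;
    uint8_t unitMask;
};

// Example: wait for COP0 and COP1 (bits 0 and 1), then signal COP2 (bit 2).
constexpr CoopInstruction kWaitCop0Cop1 { CoopOp::Wait,   0b00000011 };
constexpr CoopInstruction kSignalCop2   { CoopOp::Signal, 0b00000100 };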

In FIG. 1, the dedicated acceleration core includes four co-processing units 111, 114, 115, and 117 and two DMA units 113 and 116. These are merely examples and do not limit the present disclosure; the dedicated acceleration core may include any suitable number of co-processing units and any number of DMA units, such as at least one co-processing unit and at least one DMA unit. In this way, the processing core can be made to meet different requirements.

The control unit 118 in the dedicated acceleration core 106 is configured to allocate instruction blocks of the computing task to the co-processing units; it may send a completion indication of a co-processing unit to the aggregate distribution component, and may send an instruction completion indication received from the aggregate distribution component to a predetermined co-processing unit, so as to realize cooperative processing of the task. The dedicated acceleration core is thus configured to send an instruction completion indication to the general purpose processing core if it determines that execution of an instruction block in a co-processing unit is complete, and to receive instruction completion indications from the general purpose processing core, each indication specifying to which co-processing units within the dedicated acceleration core it should be sent. In this way, cooperative processing of tasks can be realized.

An access interface is added to the internal on-chip cache 112 of the dedicated acceleration core 106 to form the data path described above: the on-chip cache 112 is connected to the routing fabric component through the data path, and the interface is exposed to the general purpose processing core 105 so as to connect the added access path.

To implement the signal path between the control unit 118 of the dedicated acceleration core and the aggregate distribution component 110, a co-processing unit arbitration interface is added at the control unit 118. The interface is connected to the general purpose processing core, or to the aggregate distribution component within it; through this interface the control unit can receive completion signals from, and send completion signals to, the general purpose processing core, thereby controlling the execution order of instructions.

The connection of the at least one general purpose processing unit 107 to the aggregate distribution component 110 is shown in FIG. 1 by way of example only. In some embodiments, where the at least one general purpose processing unit 107 is a single general purpose processing unit, no aggregate distribution component need be provided within the general purpose processing core 105.

FIG. 1 illustrates one general purpose processing core 105 and one dedicated acceleration core 106; these are examples only and do not limit the present disclosure. Any suitable number of general purpose processing cores and dedicated acceleration cores may be included within processor 100.

A data access path is added for the general purpose processing core to access the data stored inside the dedicated acceleration core. This path exposes the memory space inside the dedicated acceleration core to the general purpose processing core, allowing the general purpose processing core to access the data by means of a dedicated instruction or an address space.

Asynchronous cooperative instructions are added to the general purpose processing core. The asynchronous cooperative instructions comprise two instructions: a wait instruction and a completion instruction. The wait instruction comprises two main fields: field 1 indicates that the instruction type is a wait instruction, and field 2 indicates which co-processing units the awaited completion signals come from; when the general purpose processing core executes this instruction, it continues subsequent work only after all co-processing units designated in the instruction have issued completion signals. The completion instruction comprises two main fields: field 1 indicates that the instruction type is a completion instruction, and field 2 indicates to which co-processing units completion signals need to be sent; when the processing core executes this instruction, it sends a completion signal to every co-processing unit indicated in the instruction.

If the general purpose processing core has a multi-unit structure, an additional aggregate distribution module is needed. One side of the aggregate distribution module is connected to each processing unit inside the general purpose processing core, and the other side is connected to the other co-processing units. When another co-processing unit issues a completion signal during operation, the aggregate distribution module copies the signal into multiple copies and distributes them to all processing units of the general purpose processing core; when the general purpose processing core issues completion signals, the aggregate distribution module collects them from each processing unit and sends them to the corresponding co-processing unit after aggregation.

An access interface is added to the internal on-chip cache of the dedicated acceleration core. The interface is exposed to the general purpose processing core and connects to the added access path.

A co-processing unit arbitration interface is added to the control unit of the dedicated acceleration core. The interface is connected to the general purpose processing core, or to the aggregate distribution module within it; through this interface the control unit can receive completion signals from, and send completion signals to, the general purpose processing core, thereby controlling the execution order of instructions.

In this way, the overhead of switching computing tasks is avoided, the dependence on network bandwidth and storage bandwidth is reduced, the utilization of storage and computing resources is improved, and the computing performance of the accelerator is exploited to the greatest extent.

A schematic diagram of a heterogeneous multi-core processor 100 in which embodiments of the present disclosure can be implemented has been described above in connection with FIG. 1. Examples from the perspective of the general purpose processing core and from the perspective of the dedicated acceleration core are described below in conjunction with FIGS. 2 and 3. FIG. 2 depicts a schematic diagram of an example 200 of a general purpose processing core and a dedicated acceleration core from the perspective of the general purpose processing core, according to some embodiments of the present disclosure; FIG. 3 depicts a schematic diagram of an example 300 of a general purpose processing core and a dedicated acceleration core from the perspective of the dedicated acceleration core, according to some embodiments of the present disclosure.

FIG. 2 includes a general purpose processing core 201 and a dedicated acceleration core 202 coupled to each other. The general purpose processing core comprises at least one general purpose processing unit 203, a routing fabric component 204, an on-chip cache 205, and an aggregate distribution component 206; the dedicated acceleration core 202 includes an on-chip cache 207 and a control unit 208. The functions of these components are the same as those of the corresponding components in FIG. 1.

From the perspective of the general purpose processing core, the general purpose processing core 201 gains an additional block of memory space whose access performance is lower than that of its own on-chip cache 205 but higher than that of the off-chip memory and the shared on-chip cache of the heterogeneous multi-core processor. When a program is written, if source data comes from the dedicated acceleration core 202, a wait instruction is added first to instruct the general purpose processing core 201 to start working only after the designated co-processing unit has finished its work; if the computed result is to be used by a co-processing unit of the dedicated acceleration core, a completion instruction is added after the computation instructions to instruct the corresponding co-processing unit to start working.

The dedicated acceleration core 302 in FIG. 3 includes four co-processing units 303, 304, 305, and 309, a control unit 310, and two DMA units. From the perspective of the dedicated acceleration core, the general purpose processing core 301 as a whole acts as one co-processing unit of the dedicated acceleration core. The resources inside the general purpose processing core are not individually controlled by the scheduling module of the dedicated acceleration core, and the general purpose processing core 301 exchanges data with the other co-processing units through the on-chip cache inside the dedicated acceleration core 302.

In this way, the overhead of switching computing tasks is avoided, the dependence on network bandwidth and storage bandwidth is reduced, the utilization of storage and computing resources is improved, and the computing performance of the accelerator is exploited to the greatest extent.

The dedicated acceleration core and the general purpose processing core have been described above from different perspectives in connection with FIGS. 2 and 3. A flowchart of a method 400 for processing tasks according to some embodiments of the present disclosure is described below in conjunction with FIG. 4. Method 400 in FIG. 4 may be performed by processor 100 in FIG. 1 or by any suitable processor.

The method of FIG. 4 is performed by a heterogeneous multi-core processor that includes a general purpose processing core and a dedicated acceleration core.

At block 402, for a predetermined type of computing task, a plurality of instruction blocks in the computing task are allocated to the general purpose processing core and the dedicated acceleration core. For example, tasks are assigned to the general purpose processing core 105 and the dedicated acceleration core 106.

In some embodiments, upon receiving a computing task, a scheduling controller within the processor determines whether the computing task is a co-processing type of computing task. If it is, an available general purpose processing core and an available dedicated acceleration core that can be grouped together are selected from the general purpose processing cores and dedicated acceleration cores in the processor to execute the computing task, and the core group is marked with a predetermined operating mode, e.g., a cooperative operating mode. The above examples are intended to illustrate the present disclosure and do not limit it.

In some embodiments, the plurality of instruction blocks are allocated to the general purpose processing core and the dedicated acceleration core based on a core identification. In this way, instructions can be distributed to the processing cores quickly and accurately.
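
A minimal sketch of allocation by core identification is given below, assuming each compiled instruction block carries the identifier of the core meant to execute it; the TaggedBlock structure and the blocksFor helper are hypothetical.

#include <vector>

struct TaggedBlock {
    int coreId;                   // core identification assigned at compile/schedule time
    std::vector<unsigned> words;  // the instruction block itself
};

// Each core keeps only the blocks whose tag matches its own identifier
// (cf. block 510: the processing core distinguishes its own instructions by the tag).
std::vector<TaggedBlock> blocksFor(int myCoreId,
                                   const std::vector<TaggedBlock>& all) {
    std::vector<TaggedBlock> mine;
    for (const auto& b : all)
        if (b.coreId == myCoreId) mine.push_back(b);
    return mine;
}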

At block 404, a control unit in the dedicated acceleration core communicates an instruction completion indication of a predetermined co-processing unit coupled thereto to at least one processing unit of the general purpose processing core via a signal path configured to couple the at least one general purpose processing unit to the control unit.

In some embodiments, within the general purpose processing core, an aggregate distribution component copies the instruction completion indication and then sends it to the at least one general purpose processing unit. In this way, each processing unit can obtain the instruction completion indication.

In some embodiments, if the at least one general purpose processing unit is a single general purpose processing unit, that unit may be directly coupled to the control unit using the signal path. The above examples are intended to illustrate the present disclosure and do not limit it.

At block 406, if it is determined that the instruction completion indication is received, the general purpose processing core obtains data in a first on-chip cache in the dedicated acceleration core for completing the computing task via a data path configured to couple the general purpose processing core to the first on-chip cache.

In some embodiments, a general purpose processing unit sends an access address for the data to the routing fabric component within the general purpose processing core. The routing fabric component then determines, based on the access address, whether to access a second on-chip cache within the general purpose processing core or the first on-chip cache. If the address information indicates that the first on-chip cache is to be accessed, the data is acquired from the access address in the first on-chip cache. In this way, the first on-chip cache can be accessed accurately.

In some embodiments, in response to the at least one general purpose processing unit completing an instruction block operation, an instruction completion indication is sent to the aggregate distribution component, which aggregates the indications and transmits the result to a predetermined co-processing unit. In this way, correct transfer of instructions can be achieved.

In this way, the overhead of switching computing tasks is avoided, the dependence on network bandwidth and storage bandwidth is reduced, the utilization of storage and computing resources is improved, and the computing performance of the accelerator is exploited to the greatest extent.

A flowchart of a method 400 for processing tasks according to some embodiments of the present disclosure has been described above in connection with FIG. 4. A flowchart of a method 500 for processing tasks according to some embodiments of the present disclosure is described below in conjunction with FIG. 5. Method 500 in FIG. 5 may be performed by a computing device that includes the processor 100 of FIG. 1, or by any suitable computing device.

At block 501, the method 500 begins. At block 502, a user writes a program that may use multiple processing cores simultaneously within one sub-computing task and that directs their execution order through asynchronous cooperative instructions. In the traditional mode, each sub-computing task can only use one processing core; in the cooperative mode, multiple processing cores can be used simultaneously within one sub-computing task, and the working order of the different processing cores is indicated by the asynchronous cooperative instructions.

At block 503, compilation is completed using the software tool chain, and sub-computing tasks that use the cooperative working method are marked: for sub-computing tasks that use multiple processing cores, the compiled instructions are marked as cooperative-mode computing tasks. At block 504, the compiled program and data are sent to the heterogeneous multi-core neural network accelerator via the driver and runtime, and the scheduling controller is configured to begin the computation. At block 505, the scheduling controller determines from the mark whether the sub-computing task to be issued is a cooperative computing task; if not, block 506 is entered, otherwise block 507 is entered.

At block 506, the scheduling controller selects the required homogeneous processing core and issues the computing task. At block 507, the scheduling controller searches for and locks free processing cores that can be grouped together. At block 508, the operating mode of the locked processing cores is configured to the cooperative operating mode. Block 509 is then entered, where the computing task is issued to each locked processing core in turn. At block 510, each processing core identifies, based on the tag, the instructions it needs to execute and returns an interrupt to the scheduling controller after completing its computing task.

Then, at block 512, the scheduling controller collects the interrupts and marks the sub-computing task as complete when all the processing cores invoked by the sub-computing task have finished their computation. At block 513, the scheduling controller determines whether all computing tasks have been completed; if there are still computing tasks that have not been completed, the method returns to block 505. If all are complete, at block 514 the scheduling controller returns an interrupt to the host to conclude the computation.
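
The control flow of blocks 505-514 can be summarised in the following sketch. The SubTask structure and the issueTo, setCooperativeMode, waitForInterrupts, and interruptHost hooks are hypothetical stand-ins for the scheduling controller's hardware actions, not an actual driver API.

#include <cstdio>
#include <vector>

struct SubTask {
    bool cooperative;          // flag set by the tool chain (block 503)
    std::vector<int> coreIds;  // cores the sub-task will occupy
};

// Stand-ins for hardware actions of the scheduling controller; a real driver
// would program registers and wait on interrupt lines here.
void issueTo(int coreId, const SubTask&)         { std::printf("issue to core %d\n", coreId); }
void setCooperativeMode(const std::vector<int>&) { /* block 508: configure cooperative mode */ }
void waitForInterrupts(const std::vector<int>&)  { /* blocks 510-512: collect interrupts */ }
void interruptHost()                             { std::printf("all tasks done\n"); }

void runAll(const std::vector<SubTask>& tasks) {
    for (const auto& t : tasks) {                     // block 505: classify each sub-task
        if (!t.cooperative) {
            issueTo(t.coreIds.front(), t);            // block 506: single homogeneous core
        } else {
            setCooperativeMode(t.coreIds);            // blocks 507-508: lock group, set mode
            for (int id : t.coreIds) issueTo(id, t);  // block 509: issue to each locked core
        }
        waitForInterrupts(t.coreIds);                 // block 512: mark sub-task complete
    }
    interruptHost();                                  // block 514: return interrupt to host
}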

In this way, the overhead of switching computing tasks is avoided, the dependence on network bandwidth and storage bandwidth is reduced, the utilization of storage and computing resources is improved, and the computing performance of the accelerator is exploited to the greatest extent.

An example of processing a cooperative computing task using a general purpose processing core and a dedicated acceleration core is described below in conjunction with a piece of example pseudo code, in which wait_core is a wait instruction, signal_core is a completion instruction, and xx_run is the computation program executed on the corresponding co-processing unit. The code constructs a DMA0 -> COP0 -> COP1 -> GENERAL_CORE -> COP2 -> DMA1 computation pipeline, where DMA0 and DMA1 are direct memory access units in the dedicated acceleration core, COP0, COP1, and COP2 are co-processing units of the dedicated acceleration core, and GENERAL_CORE is the general purpose processing core.
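
The pseudo code itself does not survive in this copy of the text. A reconstruction consistent with the description above might look like the following, written in a C-like style: each unit executes its own short instruction stream, wait_core and signal_core stand for the asynchronous cooperative instructions, and the *_run calls stand for the compute programs. The exact form of the original listing is unknown, so this is an illustrative approximation only.

// Reconstructed sketch (not the original listing). The declarations below are
// placeholders for hardware-level operations on each unit.
enum Unit { DMA0, COP0, COP1, GENERAL_CORE, COP2, DMA1 };

void wait_core(Unit from);    // wait instruction: block until `from` signals completion
void signal_core(Unit to);    // completion instruction: tell `to` to start working
void dma0_run();   void cop0_run();   void cop1_run();
void general_core_run();      // operator executed on the general purpose core
void cop2_run();   void dma1_run();

// Each unit's instruction stream; together they form the
// DMA0 -> COP0 -> COP1 -> GENERAL_CORE -> COP2 -> DMA1 pipeline.
void dma0_program()         { dma0_run();              signal_core(COP0); }
void cop0_program()         { wait_core(DMA0);         cop0_run();          signal_core(COP1); }
void cop1_program()         { wait_core(COP0);         cop1_run();          signal_core(GENERAL_CORE); }
void general_core_program() { wait_core(COP1);         general_core_run();  signal_core(COP2); }
void cop2_program()         { wait_core(GENERAL_CORE); cop2_run();          signal_core(DMA1); }
void dma1_program()         { wait_core(COP2);         dma1_run(); }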

In the technical solution of the present disclosure, the acquisition, storage, and use of users' personal information comply with the provisions of relevant laws and regulations and do not violate public order or good morals.

The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.

FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure. Electronic device 600 may be a computing device that includes a heterogeneous multi-core processor. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.

As shown in FIG. 6, the device 600 includes a computing unit 601, which may be a heterogeneous multi-core processor, and which may perform various appropriate actions and processes in accordance with a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 can also be stored. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.

A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.

The computing unit 601 may be any of various general purpose and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 performs the various methods and processes described above, such as the methods 400 and 500. For example, in some embodiments, methods 400 and 500 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into RAM 603 and executed by the computing unit 601, one or more steps of the methods 400 and 500 described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the methods 400 and 500 in any other suitable manner (e.g., by way of firmware).

Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.

Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.

In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.

The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.

It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.

The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
