MAC (Media Access Control) layer scheduling method and terminal based on a 5G small base station

Document No. 7322 · Published 2021-09-17

1. A MAC layer scheduling method based on a 5G small base station, characterized by comprising the following steps:

generating a corresponding scheduling event according to the scheduling relationship between each uplink physical channel and each downlink physical channel in the MAC layer;

allocating scheduling events that can be processed concurrently to different threads for concurrent processing, allocating scheduling events that can only be processed serially to the same thread for serial processing, and binding each thread to a different CPU core;

and allocating the serially processed scheduling events to a plurality of independent threads for delayed processing.

2. The MAC layer scheduling method based on a 5G small base station as claimed in claim 1, wherein allocating the scheduling events that can be processed concurrently to different threads for concurrent processing comprises:

allocating the concurrently processable scheduling events to different threads;

and performing slot-based time synchronization on the different threads, and processing the synchronized threads concurrently.

3. The MAC layer scheduling method based on a 5G small base station as claimed in claim 1, wherein allocating the serially processed scheduling events to a plurality of independent threads for delayed processing comprises:

acquiring the processing time of each serially processed scheduling event;

dividing the serially processed thread, according to the processing time, into a plurality of serial sub-threads each with a processing time less than or equal to one slot;

and allocating the serial sub-threads obtained after the division to different CPU cores for delayed processing.

4. The MAC layer scheduling method based on a 5G small base station as claimed in claim 3, wherein allocating the serial sub-threads obtained after the division to different CPU cores for delayed processing comprises:

placing the plurality of serial sub-threads into corresponding independent threads for processing, and judging whether a serial sub-thread is first in the processing order; if so, receiving data from the air interface and the upper-layer service and processing the serial sub-thread, and if not, acquiring the data from a preset queue and processing the serial sub-thread;

and judging whether the serial sub-thread is last in the processing order; if so, sending the processing result through the air interface, and if not, storing the processing result in the preset queue.

5. The MAC layer scheduling method based on a 5G small base station as claimed in any one of claims 1 to 4, further comprising:

scheduling in advance the thread corresponding to the scheduling event that acquires the resources required for scheduling, and allocating the resources required for scheduling;

and, after the thread corresponding to that scheduling event has been scheduled in advance, scheduling the resource-acquiring scheduling event once in each slot.

6. A MAC layer scheduling terminal based on a 5G small base station, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the following steps when executing the computer program:

generating a corresponding scheduling event according to the scheduling relationship between each uplink physical channel and each downlink physical channel in the MAC layer;

allocating scheduling events that can be processed concurrently to different threads for concurrent processing, allocating scheduling events that can only be processed serially to the same thread for serial processing, and binding each thread to a different CPU core;

and allocating the serially processed scheduling events to a plurality of independent threads for delayed processing.

7. The MAC layer scheduling terminal based on a 5G small base station as claimed in claim 6, wherein allocating the scheduling events that can be processed concurrently to different threads for concurrent processing comprises:

allocating the concurrently processable scheduling events to different threads;

and performing slot-based time synchronization on the different threads, and processing the synchronized threads concurrently.

8. The MAC layer scheduling terminal based on a 5G small base station as claimed in claim 6, wherein allocating the serially processed scheduling events to a plurality of independent threads for delayed processing comprises:

acquiring the processing time of each serially processed scheduling event;

dividing the serially processed thread, according to the processing time, into a plurality of serial sub-threads each with a processing time less than or equal to one slot;

and allocating the serial sub-threads obtained after the division to different CPU cores for delayed processing.

9. The MAC layer scheduling terminal based on a 5G small base station as claimed in claim 8, wherein allocating the serial sub-threads obtained after the division to different CPU cores for delayed processing comprises:

placing the plurality of serial sub-threads into corresponding independent threads for processing, and judging whether a serial sub-thread is first in the processing order; if so, receiving data from the air interface and the upper-layer service and processing the serial sub-thread, and if not, acquiring the data from a preset queue and processing the serial sub-thread;

and judging whether the serial sub-thread is last in the processing order; if so, sending the processing result through the air interface, and if not, storing the processing result in the preset queue.

10. The MAC layer scheduling terminal based on a 5G small base station as claimed in any one of claims 6 to 9, further comprising:

scheduling in advance the thread corresponding to the scheduling event that acquires the resources required for scheduling, and allocating the resources required for scheduling;

and, after the thread corresponding to that scheduling event has been scheduled in advance, scheduling the resource-acquiring scheduling event once in each slot.

Background

With the increase in bandwidth, the amount of data processed by NR (New Radio) per TTI (Transmission Time Interval) is roughly ten times that of LTE (Long Term Evolution), while the scheduling time of each TTI is reduced from 1 ms to 1 slot, so a larger amount of data and scheduling must be handled in a shorter time.

However, since the CPU cores of an NR small base station have limited processing performance, the number of user equipments (UEs) that can be scheduled per TTI is small, and it is difficult to process a large amount of data in a short time.

Disclosure of Invention

The technical problem to be solved by the invention is to provide a MAC layer scheduling method and terminal based on a 5G small base station that increase the number of mobile devices a small base station can process in each scheduling interval and thereby increase the overall scheduling rate.

In order to solve the technical problems, the invention adopts the technical scheme that:

a MAC layer scheduling method based on a 5G small base station comprises the following steps:

generating a corresponding scheduling event according to a scheduling relation between each uplink physical channel and each downlink physical channel in an MAC layer;

distributing scheduling events which can be processed concurrently in different threads for concurrent processing, distributing scheduling events which can only be processed serially in the same thread for serial processing, and binding each thread in different CPU cores;

and distributing the scheduling events processed in series to a plurality of independent threads for delayed processing.

In order to solve the technical problem, the invention adopts another technical scheme as follows:

a MAC layer scheduling terminal based on a 5G small base station, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:

generating a corresponding scheduling event according to a scheduling relation between each uplink physical channel and each downlink physical channel in an MAC layer;

distributing scheduling events which can be processed concurrently in different threads for concurrent processing, distributing scheduling events which can only be processed serially in the same thread for serial processing, and binding each thread in different CPU cores;

and distributing the scheduling events processed in series to a plurality of independent threads for delayed processing.

The invention has the following beneficial effects: a corresponding scheduling event is generated according to the scheduling relationship between each uplink and downlink physical channel; scheduling events that can be processed concurrently are placed in different threads for concurrent processing, while scheduling events that can only be processed serially are placed in the same thread for serial processing. When the serially processed thread is delayed, the most time-consuming serial scheduling events, which cannot be processed concurrently, are distributed across several threads, dividing the original processing flow into parts of similar execution time in the manner of a multi-stage pipeline. Because several threads are added for delayed processing, the number of scheduled users within the same period increases correspondingly once a large number of user equipments have accessed the cell. Thus, without changing the processing capability of the CPU, a multi-thread, multi-core design increases the amount of data processed per unit time, increases the number of mobile devices the small base station processes in each scheduling interval, and increases the overall scheduling rate.

Drawings

Fig. 1 is a flowchart of a MAC layer scheduling method based on a 5G small base station according to an embodiment of the present invention;

fig. 2 is a schematic diagram of a MAC layer scheduling terminal based on a 5G small base station according to an embodiment of the present invention;

fig. 3 is a schematic diagram of MAC layer scheduling in a MAC layer scheduling method based on a 5G small base station according to an embodiment of the present invention;

fig. 4 is a schematic diagram of serially scheduling the MAC layer in time order in a MAC layer scheduling method based on a 5G small base station according to an embodiment of the present invention;

fig. 5 is a schematic diagram of the relationship between the abstract serial steps and the actual MAC layer flow in a MAC layer scheduling method based on a 5G small base station according to an embodiment of the present invention;

fig. 6 is a diagram of the single-thread model for MAC layer scheduling in the prior art;

fig. 7 is a schematic diagram of the two-stage thread model of a MAC layer scheduling method based on a 5G small base station according to an embodiment of the present invention.

Detailed Description

In order to explain the technical contents, objects, and effects of the present invention in detail, the following description is given with reference to the accompanying drawings and in combination with the embodiments.

Referring to fig. 1, an embodiment of the present invention provides a MAC layer scheduling method based on a 5G small base station, comprising:

generating a corresponding scheduling event according to a scheduling relation between each uplink physical channel and each downlink physical channel in an MAC layer;

distributing scheduling events which can be processed concurrently in different threads for concurrent processing, distributing scheduling events which can only be processed serially in the same thread for serial processing, and binding each thread in different CPU cores;

and distributing the scheduling events processed in series to a plurality of independent threads for delayed processing.

From the above description, the beneficial effects of the present invention are as follows: a corresponding scheduling event is generated according to the scheduling relationship between each uplink and downlink physical channel; scheduling events that can be processed concurrently are placed in different threads for concurrent processing, while scheduling events that can only be processed serially are placed in the same thread for serial processing. When the serially processed thread is delayed, the most time-consuming serial scheduling events, which cannot be processed concurrently, are distributed across several threads, dividing the original processing flow into parts of similar execution time in the manner of a multi-stage pipeline. Because several threads are added for delayed processing, the number of scheduled users within the same period increases correspondingly once a large number of user equipments have accessed the cell. Thus, without changing the processing capability of the CPU, a multi-thread, multi-core design increases the amount of data processed per unit time, increases the number of mobile devices the small base station processes in each scheduling interval, and increases the overall scheduling rate.

Further, allocating the scheduling events that can be processed concurrently to different threads for concurrent processing includes:

allocating the concurrently processable scheduling events to different threads;

and performing slot-based time synchronization on the different threads, and processing the synchronized threads concurrently.

As described above, the concurrently processable scheduling events are distributed across different threads and the threads are slot-synchronized, which guarantees that these events are processed concurrently within the same slot and improves the processing efficiency of the scheduling events.

Further, allocating the serially processed scheduling events to a plurality of independent threads for delayed processing includes:

acquiring the processing time of each serially processed scheduling event;

dividing the serially processed thread, according to the processing time, into a plurality of serial sub-threads each with a processing time less than or equal to one slot;

and allocating the serial sub-threads obtained after the division to different CPU cores for delayed processing.

As can be seen from the above description, dividing the serially processed thread, according to the processing time of its scheduling events, into serial sub-threads whose processing time is at most one slot ensures that serial processing completes within one slot and avoids missing the air-interface transmission opportunity because processing overruns a slot.

Further, allocating the serial sub-threads obtained after the division to different CPU cores for delayed processing includes:

placing the plurality of serial sub-threads into corresponding independent threads for processing, and judging whether a serial sub-thread is first in the processing order; if so, receiving data from the air interface and the upper-layer service and processing the serial sub-thread, and if not, acquiring the data from a preset queue and processing the serial sub-thread;

and judging whether the serial sub-thread is last in the processing order; if so, sending the processing result through the air interface, and if not, storing the processing result in the preset queue.

As can be seen from the above description, storing each sub-thread's processing result in the preset queue and letting the next sub-thread fetch its data from that queue implements pipelined scheduling, which reduces the processing time of each serial sub-thread, increases the number of user equipments that can be processed, and increases the overall scheduling rate.

Further, the following steps are also included:

scheduling in advance the thread corresponding to the scheduling event that acquires the resources required for scheduling, and allocating the resources required for scheduling;

and, after the thread corresponding to that scheduling event has been scheduled in advance, scheduling the resource-acquiring scheduling event once in each slot.

As can be seen from the above description, the thread corresponding to the scheduling event that acquires the resources required for scheduling is scheduled in advance; because these resources are periodic, fixed, or predictable, allocating them ahead of time shortens the subsequent scheduling time.

Referring to fig. 2, another embodiment of the present invention provides a MAC layer scheduling terminal based on a 5G small base station, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the following steps when executing the computer program:

generating a corresponding scheduling event according to the scheduling relationship between each uplink physical channel and each downlink physical channel in the MAC layer;

allocating scheduling events that can be processed concurrently to different threads for concurrent processing, allocating scheduling events that can only be processed serially to the same thread for serial processing, and binding each thread to a different CPU core;

and allocating the serially processed scheduling events to a plurality of independent threads for delayed processing.

As can be seen from the above description, a corresponding scheduling event is generated according to the scheduling relationship between each uplink and downlink physical channel; scheduling events that can be processed concurrently are placed in different threads for concurrent processing, while scheduling events that can only be processed serially are placed in the same thread for serial processing. When the serially processed thread is delayed, the most time-consuming serial scheduling events, which cannot be processed concurrently, are distributed across several threads, dividing the original processing flow into parts of similar execution time in the manner of a multi-stage pipeline. Because several threads are added for delayed processing, the number of scheduled users within the same period increases correspondingly once a large number of user equipments have accessed the cell. Thus, without changing the processing capability of the CPU, a multi-thread, multi-core design increases the amount of data processed per unit time, increases the number of mobile devices the small base station processes in each scheduling interval, and increases the overall scheduling rate.

Further, allocating the scheduling events that can be processed concurrently to different threads for concurrent processing includes:

allocating the concurrently processable scheduling events to different threads;

and performing slot-based time synchronization on the different threads, and processing the synchronized threads concurrently.

As described above, the concurrently processable scheduling events are distributed across different threads and the threads are slot-synchronized, which guarantees that these events are processed concurrently within the same slot and improves the processing efficiency of the scheduling events.

Further, allocating the serially processed scheduling events to a plurality of independent threads for delayed processing includes:

acquiring the processing time of each serially processed scheduling event;

dividing the serially processed thread, according to the processing time, into a plurality of serial sub-threads each with a processing time less than or equal to one slot;

and allocating the serial sub-threads obtained after the division to different CPU cores for delayed processing.

As can be seen from the above description, dividing the serially processed thread, according to the processing time of its scheduling events, into serial sub-threads whose processing time is at most one slot ensures that serial processing completes within one slot and avoids missing the air-interface transmission opportunity because processing overruns a slot.

Further, allocating the serial sub-threads obtained after the division to different CPU cores for delayed processing includes:

placing the plurality of serial sub-threads into corresponding independent threads for processing, and judging whether a serial sub-thread is first in the processing order; if so, receiving data from the air interface and the upper-layer service and processing the serial sub-thread, and if not, acquiring the data from a preset queue and processing the serial sub-thread;

and judging whether the serial sub-thread is last in the processing order; if so, sending the processing result through the air interface, and if not, storing the processing result in the preset queue.

As can be seen from the above description, storing each sub-thread's processing result in the preset queue and letting the next sub-thread fetch its data from that queue implements pipelined scheduling, which reduces the processing time of each serial sub-thread, increases the number of user equipments that can be processed, and increases the overall scheduling rate.

Further, the following steps are also included:

scheduling in advance the thread corresponding to the scheduling event that acquires the resources required for scheduling, and allocating the resources required for scheduling;

and, after the thread corresponding to that scheduling event has been scheduled in advance, scheduling the resource-acquiring scheduling event once in each slot.

As can be seen from the above description, the thread corresponding to the scheduling event that acquires the resources required for scheduling is scheduled in advance; because these resources are periodic, fixed, or predictable, allocating them ahead of time shortens the subsequent scheduling time.

The MAC layer scheduling method and terminal based on a 5G small base station according to the present invention are applicable to scheduling the MAC layer with multiple concurrent threads when the processing capability of a single CPU in an NR small base station is insufficient. They increase the number of user equipments scheduled in each transmission time interval and thereby the overall scheduling efficiency, as described in the following specific embodiments:

example one

Referring to fig. 1, a MAC layer scheduling method based on a 5G small base station includes the following steps:

and S1, generating a corresponding scheduling event according to the scheduling relation between each uplink physical channel and each downlink physical channel in the MAC layer.

Specifically, referring to fig. 3, the 5G base station has multiple kinds of scheduling information to process and generates a corresponding scheduling event according to the scheduling relationship between the uplink and downlink channels. In this embodiment, the UL-SCH carries uplink data and requires MAC decoding; the RACH (Random Access Channel) is the random access channel, for which Control Channel Elements (CCEs) and uplink RBs (Resource Blocks) must be allocated during scheduling; the CRC result belongs to the uplink HARQ (Hybrid Automatic Repeat reQuest) process, the BSR (Buffer Status Report) drives uplink transmissions, and the UE is scheduled for retransmission or new transmission according to the CRC together with the BSR or SR (Scheduling Request); the SRS (Sounding Reference Signal) information is used to determine the codebook or frequency-selective scheduling of the UE; the RLC SDU (Radio Link Control Service Data Unit) is downlink data delivered by the upper-layer service, HARQ feedback is reported on the PUCCH (Physical Uplink Control Channel), and both are used for downlink retransmission or new transmission of the UE. The CSI (Channel State Information) is used to determine the configuration of a downlink transmission, such as its MCS (Modulation and Coding Scheme), PMI (Precoding Matrix Indicator), and layer count.

Referring to fig. 4, in the prior art the MAC layer of a 5G small base station generally schedules the scheduling events serially in event order. This approach processes the whole flow in a single thread on a single CPU, so the time consumed by these flows must stay below one slot; if it exceeds one slot, the air-interface transmission opportunity is missed and data transmission fails. Since the CPU used in a small base station generally has relatively poor performance, the processing time of MAC scheduling is controlled by limiting the number of UEs processed in each slot.

S2, allocating the scheduling events that can be processed concurrently to different threads for concurrent processing, allocating the scheduling events that can only be processed serially to the same thread for serial processing, and binding each thread to a different CPU core.

Wherein allocating the scheduling events that can be processed concurrently to different threads for concurrent processing includes:

allocating the concurrently processable scheduling events to different threads;

and performing slot-based time synchronization on the different threads, and processing the synchronized threads concurrently.

Specifically, in this embodiment, the scheduling modules are allocated to different threads, each thread is bound to a different CPU core, and slot time synchronization between the threads is driven by the slot indication message of the PHY (physical layer).
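The core-binding and slot-synchronization step can be sketched as follows. This is a minimal, hypothetical Python model, not the patent's implementation: `os.sched_setaffinity` pins each worker thread to a core (a Linux-specific call), and a `threading.Barrier` stands in for the PHY slot-indication message; all function and variable names are illustrative.

```python
import os
import threading

def bind_to_core(core_id):
    # Pin the calling thread to one CPU core (Linux-only; no-op elsewhere).
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, {core_id % os.cpu_count()})

def run_slot_synchronized(workers, n_slots):
    # One scheduling module per thread; the barrier plays the role of the
    # PHY slot indication that keeps every thread on the same slot.
    barrier = threading.Barrier(len(workers))
    results = [[] for _ in workers]

    def loop(idx, work):
        bind_to_core(idx)
        for slot in range(n_slots):
            barrier.wait()               # all threads enter the slot together
            results[idx].append(work(slot))

    threads = [threading.Thread(target=loop, args=(i, w))
               for i, w in enumerate(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

In a real MAC scheduler each worker's per-slot work would additionally be bounded to finish within one slot; this sketch only shows the binding and synchronization structure.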

Wherein the method further includes: scheduling in advance the thread corresponding to the scheduling event that acquires the resources required for scheduling, and allocating those resources;

and, after that thread has been scheduled in advance, scheduling the resource-acquiring scheduling event once in each slot.

Specifically, the common channel information is periodic, fixed, or predictable, so it can be scheduled N slots in advance with its CCEs and RBs pre-allocated; in this embodiment the common channel is initially prepared 3 slots ahead, and thereafter scheduled once per slot.
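This advance scheduling can be sketched as the loop below: grants for the first three slots are pre-built, and each slot consumes one grant and tops the window up by one. `allocate_common_channel` and its fields are hypothetical stand-ins for the CCE/RB allocation, which the patent does not spell out.

```python
from collections import deque

LOOKAHEAD = 3  # slots prepared in advance, as in the embodiment

def allocate_common_channel(slot):
    # Hypothetical stand-in for allocating CCEs and RBs for one slot.
    return {"slot": slot, "cce": slot % 4, "rb": slot % 8}

def run_advance_scheduler(n_slots):
    # Pre-build grants for the first LOOKAHEAD slots...
    prepared = deque(allocate_common_channel(s) for s in range(LOOKAHEAD))
    sent = []
    for slot in range(n_slots):
        sent.append(prepared.popleft())  # consume the grant built in advance
        # ...then keep the window full by building one new grant per slot.
        prepared.append(allocate_common_channel(slot + LOOKAHEAD))
    return sent
```

The invariant is that every slot's grant was computed at least `LOOKAHEAD` slots earlier, so the per-slot critical path never includes the allocation work.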

Referring to fig. 5, in the present embodiment the uplink-decoded PUSCH is processed by an independent thread, while the remaining scheduling events are processed in serial steps 1 to N.

S3, allocating the serially processed scheduling events to a plurality of independent threads for delayed processing.

The serially processed scheduling events are divided into a plurality of steps, each step is allocated to a different thread for delayed processing, and the serial threads are each dispatched once per slot in their serial order. Placing the serial events in different threads of different slots in this way scales better and further improves the efficiency of concurrent processing. Since the common-channel processing thread is scheduled once in every slot, an uplink decoding step is added in each slot accordingly, so that the information carried on the common channel is decoded and acquired.

The scheduling events that can be processed concurrently are distributed across different threads, so the time consumed by each thread is low; as long as no thread's maximum time per slot exceeds one slot, the number of UEs processed per TTI can be increased.

Example two

This embodiment differs from the first embodiment in further defining how the serially processed scheduling events are allocated to a plurality of threads for delayed processing:

Specifically, allocating the serially processed scheduling events to a plurality of independent threads for delayed processing includes:

acquiring the processing time of each serially processed scheduling event;

dividing the serially processed thread, according to the processing time, into a plurality of serial sub-threads each with a processing time less than or equal to one slot;

and allocating the serial sub-threads obtained after the division to different CPU cores for delayed processing.

In this embodiment, the serially processed thread is divided, according to the processing time of its scheduling events, into a plurality of serial sub-threads each taking at most one slot. For example, the steps to be processed in each TTI of the base station are abstracted as serial steps 1 to N; because the total processing time of the N steps is long, they are divided into two parts of similar execution time, the first part comprising steps 1 to N1 and the second part comprising steps N2 to N, where N2 = N1 + 1, so that the total processing time of each part after the division is within one slot.
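One simple way to obtain such a split is greedy packing of consecutive steps under a per-part time budget, sketched below. The patent does not fix a splitting algorithm, so this function and its inputs are illustrative assumptions.

```python
def split_steps(step_times, slot_budget):
    # Greedily pack consecutive serial steps into parts whose total
    # processing time stays within one slot (slot_budget).
    # step_times: measured processing time of each serial step, in order.
    parts, current, total = [], [], 0.0
    for step, t in enumerate(step_times):
        if current and total + t > slot_budget:
            parts.append(current)        # this part is full; start a new one
            current, total = [], 0.0
        current.append(step)
        total += t
    if current:
        parts.append(current)
    return parts
```

Each returned part maps to one serial sub-thread; because steps stay consecutive, the parts can be chained through queues in their original order.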

Specifically, the plurality of serial sub-threads are placed on independent threads bound to independent CPU cores for processing. Whether a serial sub-thread is first in the processing order is judged; if so, data are received from the air interface and the upper-layer service and the sub-thread is processed, and if not, the data are fetched from a preset queue and the sub-thread is processed;

and whether the serial sub-thread is last in the processing order is judged; if so, the processing result is sent through the air interface, and if not, the processing result is stored in the preset queue.
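The input/output rule of one serial sub-thread can be written out directly: the first sub-thread reads from the air interface / upper layer, the last one sends over the air interface, and every other hand-off goes through the preset queue. The function signature below is an assumption made for illustration.

```python
import queue

def run_sub_thread(order, is_last, work, preset_q, receive_air, send_air):
    # First sub-thread (order 0): take input from the air interface /
    # upper-layer service; later sub-threads: take it from the preset queue.
    data = receive_air() if order == 0 else preset_q.get()
    result = work(data)
    # Last sub-thread: transmit over the air interface; otherwise store the
    # result in the preset queue for the next sub-thread to pick up.
    if is_last:
        send_air(result)
    else:
        preset_q.put(result)
```

Chaining two such calls through one `queue.Queue` reproduces the two-part split of this embodiment; extending to N parts only requires N - 1 queues.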

Referring to fig. 5, periodic and predictable channels such as the common channel are scheduled several slots in advance; uplink decoding occupies one thread per TTI; the remaining channels are handled in serial steps 1 to N.

Referring to fig. 6, before this embodiment is applied, each TTI receives, in a single thread and in time order, downlink data packets from the upper-layer service and uplink data packets from the physical layer, then performs all the serial steps, and after processing sends the related data to the physical layer, which transmits them over the air interface. Suppose only 2 UEs can be handled per TTI.

Referring to fig. 7, after this embodiment is implemented, the serial steps are split into two parts with similar execution times: the first part includes steps 1 to N1, and the second part includes steps N2 to N. Each user equipment is processed by two separate threads, and each thread is bound to a separate CPU core, so the two parts can run simultaneously at any time.

Specifically, in TTI 1, thread 2 schedules the first part for UEs 1-4;

in TTI 2, thread 1 schedules the second part for UEs 1-4, and thread 2 schedules the first part for UEs 5-8;

and so on: in each subsequent TTI, thread 1 schedules the second part of the previous TTI, and thread 2 schedules the first part of the current TTI.

That is, similar to a two-stage pipeline, the first part is processed by thread 2 and its result is placed in the queue, and the remaining second part is processed by thread 1 starting in the next TTI. In effect, one more concurrent thread is processing, so the number of UEs that can be processed in each TTI is doubled.
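The TTI-by-TTI schedule above can be written out as simple bookkeeping. This sketch assumes the example's numbers (2 UEs per TTI single-threaded, hence 4 UEs per TTI once pipelined); it only enumerates which thread runs which part in which TTI, with no real concurrency:

```python
def pipeline_schedule(num_ttis, ues_per_tti=4):
    """Enumerate (tti, thread, part, ue_range) for the two-stage pipeline."""
    schedule = []
    for tti in range(1, num_ttis + 1):
        first = (tti - 1) * ues_per_tti + 1
        # Thread 2 always runs part 1 for the current TTI's batch of UEs.
        schedule.append((tti, "thread2", "part1", (first, first + ues_per_tti - 1)))
        # From TTI 2 onward, thread 1 runs part 2 for the previous TTI's batch.
        if tti > 1:
            prev_first = (tti - 2) * ues_per_tti + 1
            schedule.append((tti, "thread1", "part2",
                             (prev_first, prev_first + ues_per_tti - 1)))
    return schedule

for entry in pipeline_schedule(3):
    print(entry)
```

Running it reproduces the example's pattern: in TTI 1 only thread 2 is busy (part 1, UEs 1-4); from TTI 2 onward both threads work every TTI, which is where the doubled throughput comes from.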

The above steps divide the serial steps into two parts processed by 2 threads, i.e., a two-stage pipeline. The serial steps can also be divided into N parts according to specific requirements and processed by N threads, expanding the scheme into an N-stage pipeline and thereby improving the overall performance of the system.

EXAMPLE III

Referring to fig. 2, a MAC layer scheduling terminal based on a 5G small cell includes a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor, when executing the computer program, implements the steps of the MAC layer scheduling method based on a 5G small cell according to the first embodiment or the second embodiment.

In summary, in the MAC layer scheduling method and terminal based on a 5G small cell described above, corresponding scheduling events are generated according to the scheduling relationship between the uplink and downlink physical channels; scheduling events that can be processed concurrently are assigned to different threads for concurrent processing, and scheduling events that can only be processed serially are assigned to the same thread for serial processing. When the serially processed thread is converted to pipeline processing, the most time-consuming serial scheduling events, which cannot be processed concurrently, are distributed across a plurality of threads. Although this introduces a delay, the amount of data processed per unit time can be increased by introducing multiple cores, without changing the processing capacity of any single CPU. The serially processed thread is divided, according to the processing time of the serially processed scheduling events, into a plurality of serial sub-threads each with a processing time less than or equal to one time slot; this ensures that serial processing completes within one time slot and avoids missing an air-interface transmission opportunity because the processing time exceeds one time slot. In this way, scheduling events are processed concurrently where possible, the scheduling events that can only be processed serially are distributed across CPU cores, and the existing serial-only MAC layer scheduling is processed on different CPU cores, so that the number of mobile devices processed per scheduling occasion of the small cell is increased and the overall scheduling rate is improved.

The above description is only an embodiment of the present invention and is not intended to limit the scope of the present invention; all equivalent changes made using the contents of the present specification and drawings, whether applied directly or indirectly in related technical fields, are included in the scope of the present invention.
