Acceleration capability matching method and device, equipment and storage medium

Document No.: 7332 | Publication date: 2021-09-17

1. A method for matching acceleration capabilities, the method comprising:

receiving an acceleration requirement independent of specific hardware types and characteristics;

searching a pre-established training model according to the acceleration requirement to obtain an acceleration hardware resource specification of a resource pool; wherein the training model is used to convert the acceleration requirements into an acceleration hardware resource specification for a resource pool;

and according to the real-time available state of the acceleration hardware resources in the resource pool and the specification of the acceleration hardware resources, performing resource scheduling on the acceleration hardware resources in the resource pool.

2. The method of claim 1, further comprising:

resolving the acceleration requirement into an acceleration requirement description by adopting a specific language;

analyzing the acceleration requirement description on an acceleration requirement dimension according to an acceleration requirement condition to obtain an acceleration requirement dimension list;

and searching a pre-established training model according to the acceleration requirement dimension list to obtain an acceleration hardware resource specification of the resource pool.

3. The method according to claim 1, wherein said performing resource scheduling on the acceleration hardware resource in the resource pool according to the real-time availability status of the acceleration hardware resource in the resource pool and the acceleration hardware resource specification comprises:

determining the acceleration hardware resource information meeting the acceleration hardware resource specification as alternative acceleration hardware resource information;

determining target acceleration hardware resource information according to the real-time available state of the acceleration hardware resource and the alternative acceleration hardware resource information;

and according to the target acceleration hardware resource information, carrying out resource scheduling on the acceleration hardware resources in the resource pool.

4. The method of claim 3, wherein determining target acceleration hardware resource information based on the real-time availability status of the acceleration hardware resource and the alternative acceleration hardware resource information comprises:

and deleting, from the alternative acceleration hardware resource information, the acceleration hardware resource information corresponding to acceleration hardware resources whose available state is occupied, to obtain the target acceleration hardware resource information.

5. The method of any one of claims 1 to 4, wherein forming the training model comprises:

introducing a plurality of acceleration requirements independent of specific hardware types and characteristics into a VNF requirement description file VNFD;

and establishing a mapping relation between each acceleration requirement and the acceleration hardware resource specification to obtain the training model.

6. The method of claim 5, wherein the mapping each acceleration requirement to an acceleration hardware resource specification to obtain the training model comprises:

analyzing each acceleration requirement description on an acceleration requirement dimension according to an acceleration requirement condition to obtain an acceleration requirement dimension list;

determining, on an acceleration requirement dimension in the acceleration requirement dimension list, the corresponding acceleration hardware resources in a resource pool as a candidate acceleration node set;

determining resource capacity information in the candidate acceleration node set as an acceleration hardware resource specification matched with the acceleration requirement;

and associating each acceleration requirement with the corresponding acceleration hardware resource specification to obtain the mapping relation.

7. The method according to claim 6, wherein the analyzing each acceleration requirement description on the acceleration requirement dimension according to the acceleration requirement condition to obtain the list of acceleration requirement dimensions includes:

analyzing each acceleration requirement description on an acceleration requirement dimension according to an acceleration requirement condition to obtain an acceleration requirement dimension list to be completed;

determining a limit value of resource capability information of the candidate accelerating node set;

and filling the limit value into the acceleration requirement dimension list to be completed.

8. The method according to claim 7, wherein the acceleration requirement dimension list to be completed includes a Key item and a Value item for characterizing resource capability information, wherein the Value item is the item to be completed; correspondingly, the filling the limit value into the acceleration requirement dimension list to be completed includes:

and filling the limit value into the corresponding Value item of the acceleration requirement dimension list to be completed, to obtain a completed acceleration requirement dimension list.

9. An apparatus for matching acceleration capabilities, the apparatus comprising:

the receiving module is used for receiving an acceleration requirement independent of specific hardware types and characteristics;

the searching module is used for searching a pre-established training model according to the acceleration requirement to obtain the acceleration hardware resource specification of the resource pool; wherein the training model is used to convert the acceleration requirements into an acceleration hardware resource specification for a resource pool;

and the scheduling module is used for performing resource scheduling on the acceleration hardware resources in the resource pool according to the real-time available state of the acceleration hardware resources in the resource pool and the acceleration hardware resource specification.

10. An acceleration capability matching apparatus comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 8 when executing the program.

11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.

Background

The Hardware Platform Awareness (HPA) technology enables awareness and precise scheduling of NFV hardware capabilities for specific service requirements against the background of generalized NFV hardware. In the prior art, the service requirement is converted into precise acceleration device information, and that information is described in the same way as the acceleration node devices themselves.

All requirements described via HPA are precise acceleration requirements, which places high demands on requirement providers and network element manufacturers: requirement providers must have a clear understanding and command of existing acceleration techniques. The result is a binding of VNF software applications to the underlying hardware platform, hindering the independent technical evolution of applications and platforms and thereby limiting technical innovation.

Disclosure of Invention

In view of the above, embodiments of the present application provide a matching method and apparatus for acceleration capability, a device, and a storage medium to solve at least one problem in the related art.

The technical scheme of the embodiment of the application is realized as follows:

in a first aspect, an embodiment of the present application provides a method for matching acceleration capability, where the method includes:

receiving an acceleration requirement independent of specific hardware types and characteristics;

searching a pre-established training model according to the acceleration requirement to obtain an acceleration hardware resource specification of a resource pool; wherein the training model is used to convert the acceleration requirements into an acceleration hardware resource specification for a resource pool;

and according to the real-time available state of the acceleration hardware resources in the resource pool and the specification of the acceleration hardware resources, performing resource scheduling on the acceleration hardware resources in the resource pool.

In a second aspect, an embodiment of the present application provides an apparatus for matching acceleration capability, where the apparatus includes:

the receiving module is used for receiving an acceleration requirement independent of specific hardware types and characteristics;

the searching module is used for searching a pre-established training model according to the acceleration requirement to obtain the acceleration hardware resource specification of the resource pool; wherein the training model is used to convert the acceleration requirements into an acceleration hardware resource specification for a resource pool;

and the scheduling module is used for performing resource scheduling on the acceleration hardware resources in the resource pool according to the real-time available state of the acceleration hardware resources in the resource pool and the acceleration hardware resource specification.

In a third aspect, an embodiment of the present application provides an acceleration capability matching apparatus, including a memory and a processor, where the memory stores a computer program executable on the processor, and the processor implements the steps in the above method when executing the program.

In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps in the method.

In the embodiment of the application, an acceleration requirement independent of specific hardware types and characteristics is received; a pre-established training model is searched according to the acceleration requirement to obtain an acceleration hardware resource specification of a resource pool, where the training model is used to convert the acceleration requirement into the acceleration hardware resource specification; and resource scheduling is performed on the acceleration hardware resources in the resource pool according to their real-time available state and the acceleration hardware resource specification. In this way, the acceleration hardware resource specification is derived from the acceleration requirement through a training model (for example, a mapping relationship) between the acceleration requirement issued in the design state and the acceleration hardware resource specification in the running state. This decouples virtual network function software that needs data-plane acceleration from the underlying hardware platform: the virtual network element, as a resource pool user, only needs to provide an abstract acceleration requirement without specifying a concrete acceleration hardware implementation scheme. At the same time, a matching capability from abstract requirements to concrete resources, based on static experience and dynamic matching, is established, so that fuzzy-to-concrete acceleration resource scheduling can be realized.

Drawings

Fig. 1 is a schematic flow chart illustrating an implementation process of a matching method for acceleration capability according to an embodiment of the present application;

FIG. 2 is a schematic flow chart illustrating an implementation of a method for forming a training model according to an embodiment of the present application;

FIG. 3 is a schematic diagram illustrating a configuration of a matching system for acceleration capability according to an embodiment of the present disclosure;

FIG. 4 is a diagram illustrating an implementation mechanism for searching for an accelerated resource of a resource pool according to an embodiment of the present application;

fig. 5 is a schematic structural diagram of a matching apparatus for acceleration capability according to an embodiment of the present application.

Detailed Description

Before the embodiments of the present application are described, the following terms are introduced:

HPA: hardware platform awareness;

NFV: network function virtualization. One interpretation is as follows: many types of network devices (e.g., servers, switches, and storage devices) are built into a data center network; virtual machines (VMs) are formed through information technology (IT) virtualization; and conventional communication services are then deployed onto these VMs. In the NFV architecture, specific physical devices such as servers, storage devices, and network devices sit at the bottom layer. NFV generally comprises computing virtualization, storage virtualization, and network virtualization; the computing acceleration, network acceleration, and storage acceleration discussed below in the embodiments of the present application correspond to these three types, respectively. Computing virtualization creates multiple virtual systems (virtual machines) on one server; storage virtualization virtualizes multiple storage devices into one logical storage device; network virtualization separates the control plane of the network device from the underlying hardware and installs the control plane on a server virtual machine. One standard NFV architecture includes the NFV infrastructure (NFVI), Management and Orchestration (MANO), and the virtual network function layer (VNFs).

VNF: a virtual network function;

VNFD: VNF requirements description file (topology description file);

SDC: service design and creation;

UUI: a service management human-computer interaction interface;

NS: a network service;

NSD: a network service template (network service topology description template); the NSD comprises the VNF templates and the virtual links VL between the VNFs;

VNFP: a VNF template package;

NFVO: a network function virtualization orchestrator;

VNFM: a VNF manager;

VIM: a virtualized infrastructure manager;

VM Flavor: specification of a virtual machine;

VL: virtual network connection.

schema: represents a collection of database objects, which may include tables, columns, data types, views, stored procedures, relationships, primary keys, foreign keys, etc.;

HPA demand capability: common examples include host CPU capability request, PCI SR-IOV, Non-Uniform Memory Access (NUMA) support, CPU pinning, huge-page support, Intelligent Platform Management Interface (IPMI) for monitoring, OpenVSwitch with the Data Plane Development Kit (OVS+DPDK), reliable data transfer (RDT) and the TCP protocol, GPU virtualization, and Field Programmable Gate Arrays (FPGAs) and other programmable devices.

The orchestrator serves as the brain of next-generation intelligent network operation. Through the four links of service design, service orchestration, service onboarding, and policy operation, it achieves automated management of the whole network life cycle, and it is the core component of new-generation network management.

Service design is completed in the design-state module: first, the virtual network function resource template (VNF package) provided by the equipment manufacturer is loaded into the design-state module, and then the network service model is designed by combining virtual network functions and specifying their network connection relationships. The completed network service description package, together with the virtual network function description packages required for its composition, is distributed to the runtime module for on-demand deployment in the production environment.

Network operation and maintenance personnel submit operation requests for the specified network service through the runtime interactive interface, completing life-cycle management operations such as instantiation, scaling in/out, and termination of the network service. The general procedure is as follows:

Step 1) In the design state, the system receives a manufacturer VNF package (denoted VNFP0; the package consists of a topology description file VNFD and other accompanying files, Artifacts) and loads it into the design state. According to service requirements, a network service designer uploads VNFs to generate the design-state VNF package (denoted VNFP1), and the service design module combines VNFs to form the network service template NSD (including the VNF templates and the virtual links VL between the VNFs), which is stored in the design-state template library together with the VNFD. The runtime module subscribes to NSD and VNFD templates from the design state and automatically synchronizes new templates.

Step 2) In the running production network, operation and maintenance personnel select the service to be instantiated from the service template list and enter instantiation parameters, initiating a service instantiation request to the NFVO; the request includes the service template ID and the instantiation parameters.

Step 3) After receiving an NS instantiation request (including NS-related instantiation parameters), the NFVO looks up the corresponding NS template, disassembles it to obtain the VNF template IDs and VL description information it contains, creates the corresponding VLs, and delegates the instantiation of each VNF to the VNFM for further execution. After receiving a VNF instantiation request (containing instantiation parameters) from the NFVO, the VNFM looks up the corresponding VNF template, disassembles it, and, according to the resource requirement description file in the VNF template, requests the VIM to create the corresponding virtual machines for instantiation.

In the above process, when calling the VIM's virtual-machine creation interface, the VNFM includes a VM Flavor parameter that specifies the virtual machine specification to be created (including general virtual hardware resource information and the HPA resource information bound to the actual hardware platform). This parameter can be computed in two ways: first, create directly according to the VM Flavor parameter specified in the VNF template; second, the NFVO matches the most suitable specification among the VM Flavors uniformly defined on the runtime platform according to the requirement information specified in the VNF template.

The HPA technology enables awareness and precise scheduling of NFV hardware capabilities for specific service requirements against the background of generalized NFV hardware. In the prior art, the service requirement is converted into precise acceleration device information, and that information is described in the same way as the acceleration node devices themselves.

All requirements described via HPA are precise acceleration requirements (also called HPA demand capabilities). This places high demands on requirement providers and network element manufacturers: requirement providers must have a clear understanding and command of existing acceleration technology. The result is a binding of VNF software applications to the underlying hardware platform, hindering the independent technical evolution of applications and platforms and thereby limiting technical innovation.

In order to decouple virtual network function software that needs data-plane acceleration from the underlying hardware platform, the present scheme provides a description of virtual network function data-plane acceleration requirements (acceleration service requirements), management of acceleration resources, and a method and system for matching requirements with resources.

The technical solution of the present application is further elaborated below with reference to the drawings and the embodiments.

This embodiment proposes an acceleration capability matching method applied to an acceleration capability matching device. The functions implemented by the method can be realized by a processor in the device calling program code; the program code can be stored in a computer storage medium, and the device includes at least a processor and a storage medium.

Fig. 1 is a schematic flow chart of an implementation process of a matching method of acceleration capability according to an embodiment of the present application, and as shown in fig. 1, the method includes:

step S101, receiving an acceleration requirement independent of specific hardware types and characteristics;

step S102, searching a pre-established training model according to the acceleration requirement to obtain an acceleration hardware resource specification of a resource pool; wherein the training model is used to convert the acceleration requirements into an acceleration hardware resource specification for a resource pool;

step S103, according to the real-time available state of the acceleration hardware resources in the resource pool and the specification of the acceleration hardware resources, performing resource scheduling on the acceleration hardware resources in the resource pool.

In this embodiment, the acceleration hardware resource specification is obtained from the acceleration requirement through a training model (for example, a mapping relationship) between the acceleration requirement issued in the design state and the acceleration hardware resource specification in the running state. This decouples virtual network function software that needs data-plane acceleration from the underlying hardware platform: the virtual network element, as a resource pool user, only needs to provide an abstract acceleration requirement without specifying a concrete acceleration hardware implementation scheme. At the same time, a matching capability from abstract requirements to concrete resources, based on static experience and dynamic matching, is established, so that fuzzy-to-concrete acceleration resource scheduling can be realized.
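The three steps S101 to S103 can be sketched as follows. This is a minimal illustrative sketch only, assuming the training model is represented as a plain dictionary from requirements to resource specifications; all names and data structures are hypothetical, not taken from the patent.

```python
# Hypothetical sketch of steps S101-S103: look up a pre-built mapping
# (the "training model") and schedule only resources that are currently idle.

def match_acceleration(requirement, model, pool_state):
    """requirement: an abstract, hardware-independent acceleration requirement (S101).
    model: dict mapping requirements to acceleration hardware resource specs (S102).
    pool_state: dict mapping resource id -> "idle" or "occupied" (S103)."""
    spec = model.get(requirement)  # S102: convert requirement to a resource spec
    if spec is None:
        return None                # no known mapping for this requirement
    # S103: schedule only resources that satisfy the spec and are idle right now
    return [rid for rid in spec["resources"] if pool_state.get(rid) == "idle"]

model = {("network", "bandwidth>=10G"): {"resources": ["fpga-1", "nic-2"]}}
state = {"fpga-1": "occupied", "nic-2": "idle"}
result = match_acceleration(("network", "bandwidth>=10G"), model, state)  # ["nic-2"]
```

The caller never names a concrete device; it only supplies the abstract requirement tuple, which is the decoupling the embodiment describes.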

In some embodiments, the method further comprises:

step S104, analyzing the acceleration requirement into an acceleration requirement description by adopting a specific language;

here, the specific language may be the TOSCA language; thus, the fuzzified acceleration requirement can be described using the TOSCA language.

Step S105, analyzing the acceleration requirement description on an acceleration requirement dimension according to an acceleration requirement condition to obtain an acceleration requirement dimension list;

in some embodiments, the acceleration requirement description may be automatically parsed by software to obtain a list of acceleration requirement dimensions. In other embodiments, the acceleration requirement description may also be parsed by using a network element vendor and an operator of the system through a predefined fuzzification acceleration requirement condition (i.e., the described NFVI _ requirements), so as to obtain an acceleration requirement dimension list.

Correspondingly, step S102 is: searching the pre-established training model according to the acceleration requirement dimension list to obtain the acceleration hardware resource specification of the resource pool.
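Steps S104 and S105 can be sketched as reducing a parsed requirement description to the dimensions named by the predefined requirement condition. A minimal sketch, assuming the description has already been parsed (e.g., from TOSCA) into a dictionary; all field names here are hypothetical.

```python
# Hypothetical sketch of steps S104-S105: filter a parsed acceleration
# requirement description down to an acceleration requirement dimension list,
# keeping only the dimensions the predefined requirement condition recognises.

def to_dimension_list(description, condition_keys):
    """description: parsed requirement description as a dict.
    condition_keys: dimensions named by the predefined requirement condition."""
    return [{"key": k, "value": description[k]}
            for k in condition_keys if k in description]

desc = {"acc_type": "network", "bandwidth": "10G",
        "latency": "1ms", "vendor_note": "ignored"}
dims = to_dimension_list(desc, ["acc_type", "bandwidth", "latency"])
# dims holds only the recognised dimensions; "vendor_note" is dropped
```

The resulting list is the structure that step S102 uses as the lookup key into the training model.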

In some embodiments, the performing resource scheduling on the acceleration hardware resource in the resource pool according to the real-time available state of the acceleration hardware resource in the resource pool and the acceleration hardware resource specification includes:

step A1, determining the acceleration hardware resource information meeting the acceleration hardware resource specification as alternative acceleration hardware resource information;

step A2, determining target acceleration hardware resource information according to the real-time available state of the acceleration hardware resource and the alternative acceleration hardware resource information;

here, the real-time available state includes idle and occupied, and idle hardware resources are selected from the candidate acceleration hardware resource information and determined as target acceleration hardware resource information.

Step A3, according to the target acceleration hardware resource information, performing resource scheduling on the acceleration hardware resources in the resource pool.

In some embodiments, the determining target acceleration hardware resource information according to the real-time availability status of the acceleration hardware resource and the alternative acceleration hardware resource information includes: deleting, from the alternative acceleration hardware resource information, the information corresponding to acceleration hardware resources whose available state is occupied, to obtain the target acceleration hardware resource information.
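Steps A1 to A3 amount to a two-stage filter followed by a pick. The following is an illustrative sketch under assumed data shapes (resource dictionaries with an `id` field; a spec expressed as a predicate); none of these names come from the patent.

```python
# Hypothetical sketch of steps A1-A3: keep candidates that meet the spec (A1),
# delete those whose real-time state is "occupied" (A2), then schedule one (A3).

def schedule(resources, meets_spec, states):
    candidates = [r for r in resources if meets_spec(r)]       # A1: match spec
    targets = [r for r in candidates
               if states.get(r["id"]) != "occupied"]           # A2: drop occupied
    return targets[0] if targets else None                     # A3: schedule

pool = [{"id": "gpu-1", "mem_gb": 16}, {"id": "gpu-2", "mem_gb": 32}]
chosen = schedule(pool, lambda r: r["mem_gb"] >= 16,
                  {"gpu-1": "occupied", "gpu-2": "idle"})      # picks gpu-2
```

Because occupancy is consulted after spec matching, a resource that satisfies the specification but is busy is never scheduled, exactly as claim 4 describes.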

An embodiment of the present application provides a method for forming a training model in the above embodiment shown in fig. 1, as shown in fig. 2, the method includes:

step 201, introducing multiple acceleration requirements independent of specific hardware types and characteristics into a VNF requirement description file VNFD;

step 202, analyzing each acceleration requirement description on an acceleration requirement dimension according to an acceleration requirement condition to obtain an acceleration requirement dimension list;

step 203, determining, on an acceleration requirement dimension in the acceleration requirement dimension list, the corresponding acceleration hardware resources in a resource pool as a candidate acceleration node set;

step 204, determining the resource capacity information in the candidate acceleration node set as an acceleration hardware resource specification matched with the acceleration requirement;

step 205, associating each acceleration requirement with a corresponding acceleration hardware resource specification to obtain the mapping relationship.

Here, steps 202 to 205 provide a method for establishing the mapping relationship between each acceleration requirement and the acceleration hardware resource specification, so as to obtain the training model.
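Steps 202 to 205 can be sketched as building a requirement-to-specification dictionary from the pool's node capabilities. This is a minimal sketch under assumed data shapes (requirements as dimension lists, nodes with a `capabilities` dict); all names are hypothetical.

```python
# Hypothetical sketch of steps 202-205: for each abstract requirement, find the
# pool nodes that cover all of its dimensions (step 203), read their capability
# information as the matching spec (step 204), and record the association
# requirement -> spec as the mapping relationship (step 205).

def build_model(requirements, pool_nodes):
    model = {}
    for req_name, dims in requirements.items():
        candidates = [n for n in pool_nodes
                      if set(dims) <= set(n["capabilities"])]      # step 203
        spec = {d: [n["capabilities"][d] for n in candidates]
                for d in dims}                                     # step 204
        model[req_name] = spec                                     # step 205
    return model

reqs = {"net-accel": ["bandwidth", "latency"]}
nodes = [{"id": "n1", "capabilities": {"bandwidth": "10G", "latency": "1ms"}},
         {"id": "n2", "capabilities": {"bandwidth": "40G"}}]
model = build_model(reqs, nodes)  # n2 lacks "latency", so only n1 qualifies
```

The resulting `model` dictionary plays the role of the training model that steps S102 and later look up at runtime.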

In some embodiments, the analyzing each acceleration requirement description in the acceleration requirement dimension according to the acceleration requirement condition to obtain the acceleration requirement dimension list includes:

step B1, analyzing each acceleration requirement description on an acceleration requirement dimension according to an acceleration requirement condition to obtain an acceleration requirement dimension list to be completed;

in the implementation process, NFVI (e.g., openstack, etc.) may be used to collect resource capability information of the acceleration node, and import the resource capability information into the content to be filled in the acceleration requirement dimension list to be completed. The list of the acceleration requirement dimensions to be completed is shown in table 0.1, where the TBA is contents to be filled, the bandwidth (bandwidth) in the table can be understood as a key item, and the contents filled in the TBA can be understood as a Value item and resource occupation collected by the acceleration node.

TABLE 0.1

Step B2, determining the limit value of the resource capacity information of the candidate accelerating node set;

here, the limit value includes an upper limit (ceiling) and/or a lower limit (floor).

And step B3, filling the limit value into the acceleration requirement dimension list to be completed.

In some embodiments, the to-be-completed acceleration requirement dimension list includes a Key item and a Value item for characterizing resource capability information, where the Value item is the item to be completed; correspondingly, the filling the limit value into the to-be-completed acceleration requirement dimension list includes: filling the limit value into the corresponding Value item of the list to obtain a completed acceleration requirement dimension list.

Here, after Table 0.1 is filled with the limit values, Table 0.2 is obtained, i.e., the completed acceleration requirement dimension list; the completed Table 0.2 can be regarded as the schema information of the acceleration-capability resource pool.

TABLE 0.2
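Steps B1 to B3 (completing Table 0.1 into Table 0.2) can be sketched as filling each TBA Value item with floor/ceiling limits derived from the candidate node set. A minimal sketch under assumed data shapes; the key names and sample values are hypothetical.

```python
# Hypothetical sketch of steps B1-B3: fill the to-be-completed ("TBA") Value
# items with limit values (floor and ceiling) computed from capability samples
# collected over the candidate acceleration node set.

def complete_dimension_list(dim_list, capability_samples):
    """dim_list: [{"key": k, "value": "TBA"}, ...]  (the Table 0.1 shape).
    capability_samples: {key: [numeric samples from candidate nodes]}."""
    completed = []
    for item in dim_list:
        samples = capability_samples.get(item["key"], [])
        value = ({"floor": min(samples), "ceiling": max(samples)}  # step B2
                 if samples else "TBA")        # leave unfillable items as-is
        completed.append({"key": item["key"], "value": value})     # step B3
    return completed

table_0_1 = [{"key": "bandwidth", "value": "TBA"}]
table_0_2 = complete_dimension_list(table_0_1, {"bandwidth": [10, 40, 25]})
```

The completed structure corresponds to Table 0.2 and can serve as the schema information of the acceleration-capability resource pool.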

In the embodiment of the present application, an abstract acceleration requirement that is independent of specific hardware types and characteristics is introduced into the VNF requirement description file (VNFD); in other embodiments, the acceleration service abstraction requirement is simply referred to as the acceleration requirement. The acceleration requirement includes the required acceleration service type, platform information, and performance indicators. In some embodiments, the types of acceleration requirements include computing acceleration, network acceleration, and storage acceleration, among others. In some embodiments, the performance indicator of the acceleration requirement may be a commonly used indicator from a performance indicator list, such as bandwidth or latency; it may also be an indicator related to the kind of acceleration requirement, for example, in computing acceleration, a processing-speed indicator of a processor such as a GPU or CPU, in addition to bandwidth and latency.

In the embodiment of the application, after the VNFP is uploaded in the design state and before the corresponding Network Service Package (NSP) and the VNFP are distributed to the running state, a process of modeling acceleration hardware resources and acceleration service indicators is introduced; the mapping relationship between the acceleration requirement and the running-state acceleration hardware resource specification is established and issued to the runtime module. The acceleration hardware resource specification includes the required acceleration hardware type and configuration parameters.

During instantiation of a VNF in the running state, the required acceleration requirement is converted into the acceleration hardware resource specification of the resource pool by referring to the mapping relationship established in the design state, and acceleration resource scheduling is performed in combination with the real-time available state of the acceleration resources. The available state is either occupied or idle.

The system work flow of the technical scheme provided by the embodiment of the application is divided into a static experience stage and a dynamic matching stage. Wherein:

In the static experience stage, through requirement-dimension design of the fuzzified acceleration requirement, presentation of the schema of the acceleration-capable resource pool, and multiple rounds of test training, an optimal model scheme for matching and scheduling acceleration resource devices is established; the model scheme (the established mapping relationship) may be a node selection scheme and an acceleration resource combination mode that satisfy a certain fuzzified acceleration requirement.

The dynamic matching stage describes how the system and the corresponding model are applied during actual operation. Steps 1 and 2 of this stage follow the fuzzified acceleration requirement input and requirement-dimension design of the static experience stage; in step 3, the resource pool is searched for acceleration resources based on the training model and the acceleration demand dimension list obtained in the static experience stage, realizing the search for the optimal acceleration node and the determination of the scheduling scheme. The subsequent steps progressively complete the scheduling of the optimal acceleration device resource.

The system provided by the embodiment of the present application includes two major parts, a design module and an operation module, as shown in fig. 3, wherein the design module 31 includes a service receiving module 311, a service description module 312, and a design storage module 313; the operation module 32 includes a resource capability collection module 321, an operation storage module 322, a capability matching module 323, and a scheduling module 324.

In terms of the system workflow, the method comprises a static experience process and a dynamic matching process. From the perspective of system composition, the design module 31 faces the acceleration requirement level: the service receiving module 311 receives the acceleration requirement uploaded by the client; the service description module 312 implements matching between an acceleration requirement and an acceleration requirement dimension list, where the acceleration requirement description includes the fuzzified acceleration type, acceleration platform, and acceleration device information; the design storage module 313 is configured to store the fuzzification-matched acceleration requirement dimensions and the resource capability schemas corresponding to them.

The operation module 32 faces the acceleration device scheduling layer and implements acceleration device scheduling: the resource capability collection module 321 periodically collects information of the acceleration nodes; the operation storage module 322 is bound to the design storage module 313 and contains both the fuzzification-matched service descriptions reported by the design storage module and the periodically collected acceleration device information; the capability matching module 323 queries the acceleration device information and matches it against the fuzzification-matched service description, completing the process from fuzzified service description to precise locking of the acceleration device; the scheduling module 324 completes the scheduling of the acceleration devices under the acceleration node.

An acceleration node may be a computing node, a storage node, a network node, and the like. Taking computing nodes as an example, each computing node may take one of the following forms: a cloud with a network topology, servers within a cloud, or other devices with computing capability. Thus the multiple computing nodes provided by the embodiments of the present application may be a combination of cloud services provided by enterprise A and cloud services provided by enterprise B, or a combination of several servers in a cloud service provided by enterprise C and other devices with computing capability, and so on.

The device information may include hardware information such as device identifiers, accelerator cards, GPUs, and CPUs.

From the viewpoint of the system workflow, the static experience link is completed in the design-state testing stage and comprises the following steps:

Step 11) fuzzified description of the acceleration requirements is performed in the testing stage.

The acceleration requirements fall into three categories: computational acceleration, network acceleration, and storage acceleration, wherein: computational acceleration includes, but is not limited to, IPsec encryption and decryption acceleration, transcoding hardware acceleration, NGFW acceleration, and the like; network acceleration includes, but is not limited to, load balancing, NAT, NFVI virtual network offloading, and the like; storage acceleration includes, but is not limited to, structured NVMe-based acceleration, high-performance persistent memory of computing nodes, and the like.

Step 12) for the fuzzified acceleration requirement description oriented to the user's perspective, a model obtained by training in the static experience stage is matched with the acceleration device information; the model is stored in the capability matching module and the operation storage module for use in the dynamic matching stage.

Step 13) the matched fuzzified service description is stored in the design storage module, completing the static experience process of the testing stage.

The other part of the system workflow is the dynamic matching link, which is completed in the actual operation stage and mainly comprises the following steps:

Step 21) a user uploads an acceleration requirement on a client, which is sent to the system through the service receiving module; the service receiving module converts the acceleration requirement into a fuzzified acceleration requirement description;

Step 22) the service description module receives the fuzzified acceleration requirement description from the service receiving module, searches the design storage module for the fuzzification-matched service description established in the static experience link, and maps it to specific target acceleration device information;

Step 23) the fuzzification-matched service description is bound to the target acceleration device information and stored in the design storage module;

Step 24) the resource capability collection module periodically reports the underlying hardware resources during operation and stores them in the operation storage module;

Step 25) the capability matching module matches the current service description and target acceleration device information in the design storage module against the acceleration device resources reported to the operation storage module, performing precise, optimal capability matching;

Step 26) the capability-matched acceleration requirement is realized, and the scheduling module completes the final acceleration device invocation and the realization of the acceleration scheme.
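The dynamic matching steps above can be sketched end to end as follows. All module names, function signatures, and data shapes are illustrative assumptions standing in for the system's modules, not its actual API:

```python
def to_fuzzified_description(raw_requirement):
    """Step 21 (sketch): the service receiving module converts the uploaded requirement."""
    return {"acc_type": raw_requirement["type"], "dims": raw_requirement["dims"]}

def lookup_target_device(description, design_store):
    """Step 22 (sketch): look up the fuzzification-matched service description."""
    return design_store.get(description["acc_type"])

def match_capability(target_device, reported_devices):
    """Step 25 (sketch): match target device info against reported resources."""
    return [d for d in reported_devices if d["model"] == target_device["model"]]

# Hypothetical data: a mapping built in the static stage, plus periodic reports.
design_store = {"ipsec": {"model": "crypto-card-x"}}
reported = [{"model": "crypto-card-x", "node": "B"},
            {"model": "gpu-y", "node": "A"}]

desc = to_fuzzified_description({"type": "ipsec",
                                 "dims": {"bandwidth_min_mbps": 1000}})
target = lookup_target_device(desc, design_store)
candidates = match_capability(target, reported)
print(candidates[0]["node"])  # B
```

The sketch deliberately omits steps 23, 24, and 26 (storage, periodic reporting, and scheduling), which only move the same data between modules.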

First, static experience stage:

taking IPsec encryption and decryption acceleration as an example, the technical solution provided in the embodiments of the present application specifically includes:

step 11, in the design module, network element manufacturers and operators using the system describe acceleration requirements of the fuzzification by using a Cloud application Topology deployment for Cloud Applications (TOSCA) language, derive a packet as a Cloud Service Archive (CSAR), upload the packet to a Service receiving module, and use encryption and decryption as an example, the description mode can be as shown in table 1.1:

TABLE 1.1

A generalized parsing of the template section of Table 1.1 is given in Table 1.2:

TABLE 1.2

Step 12, the service description module parses the previously uploaded fuzzified acceleration requirement description (i.e., the described NFVI_requirements) into acceleration requirement dimensions, feeds them back to the resource capability collection module as a list, and transmits them to the NFVI (e.g., OpenStack). The system acceleration capability ID value is the matching value corresponding to the acceleration type field; the system version number is a random value assigned automatically by the system; and "TBA" marks an item whose content is to be filled in, requiring the NFVI to collect resource capability information from the acceleration nodes to complete it automatically. The presentation of the acceleration demand dimension list in this example is shown in Table 1.3:
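A minimal sketch of this parsing step, producing a demand dimension list whose Value items are left as "TBA" for the NFVI to fill in. The field names, the capability-ID map, and the random version scheme are assumptions for illustration:

```python
import random

def build_dimension_list(nfvi_requirements, capability_id_map):
    """Parse a fuzzified requirement description into a demand dimension list."""
    dims = [{"key": field_name, "value": "TBA"}   # TBA: filled later by the NFVI
            for field_name in nfvi_requirements]
    return {
        # Matching value corresponding to the acceleration type field.
        "capability_id": capability_id_map[nfvi_requirements["acceleration_type"]],
        # System-assigned random version number.
        "version": random.randint(1, 10**6),
        "dimensions": dims,
    }

reqs = {"acceleration_type": "ipsec", "bandwidth": "1000M", "latency": "10ms"}
dim_list = build_dimension_list(reqs, {"ipsec": "ACC-SEC-001"})
print(dim_list["capability_id"])           # ACC-SEC-001
print(dim_list["dimensions"][0]["value"])  # TBA
```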

TABLE 1.3

A generalized parsing of this list (Table 1.3) is given in Table 1.4:

TABLE 1.4

Step 13, the NFVI (e.g., OpenStack) collects the resource capability information of the acceleration nodes, fills it into the to-be-filled items of the acceleration requirement dimension list (i.e., the TBA parts), and reports it to the operation storage module, embodying the schema of the acceleration capability resource pool and completing synchronization with the acceleration capability resource pool information of the design storage module.

The acceleration device information is ultimately locked to the scheduled acceleration hardware and devices; the resource capability collected here is only simple resource condition information, namely the current condition of each accelerator card in each acceleration node (see fig. 4), such as whether the accelerator card is occupied.

In implementation, the resource capability collection module may be the NFVI, such as OpenStack. In this example, the schema information of the acceleration capability resource pool includes, but is not limited to, the information in Table 1.5:

TABLE 1.5

A generalized parsing of the acceleration resource pool schema information is given in Table 1.6:

TABLE 1.6

The resource usage condition is used to confirm whether an acceleration node resource is occupied or idle; only when it is idle is scheduling to the corresponding acceleration node allowed.

Step 14, the capability matching module calls the operation storage module to search the list according to the acceleration requirement dimensions from the system design module, and searches for acceleration nodes through the operation storage module. Suppose there are two acceleration nodes A and B at this time, and node B contains acceleration device capability that meets the acceleration requirement dimensions; the scheduling module then selects node B for deployment.

Step 15, in the testing stage, the capability matching module may use Table 1.7 to complete the scheduling mechanism matching between the fuzzified acceleration demand dimensions and the hardware resource scheduling platform (for example, HPA in ONAP):

TABLE 1.7

Fuzzified acceleration demand dimension | HPA-attribute-key           | HPA-attribute-value
Fuzzified acceleration requirement 1    | Resource capability field 1 | Resource capability value 1
Fuzzified acceleration requirement 2    | Resource capability field 2 | Resource capability value 2

Step 16, in the testing stage, network acceleration, computational acceleration, and storage acceleration are set as coarse-grained categories for the different acceleration requirement types, and multiple rounds of test training may be performed based on the use case scope of the ETSI NFV-IFA 001 standard, wherein:

the standard use case scope includes: 1) computational acceleration, including but not limited to IPsec encryption and decryption acceleration, transcoding hardware acceleration, NGFW acceleration, and the like; 2) network acceleration, including but not limited to load balancing, NAT, NFVI virtual network offloading, and the like; 3) storage acceleration, including but not limited to structured NVMe-based acceleration, high-performance persistent memory of computing nodes, and the like.

Step 17, the model obtained through the multiple rounds of test training is stored in the capability matching module and the operation storage module for use in the dynamic matching stage.

Second, dynamic matching stage:

Also taking the Internet Protocol Security (IPsec) acceleration requirement as an example, the specific implementation steps of the dynamic matching stage are as follows:

Step 21, the service receiving module in the design module describes the fuzzified acceleration requirement using the TOSCA language (see Table 2.1) and exports a CSAR package:

TABLE 2.1

Step 22, the user parses the fuzzified acceleration requirement description into an acceleration requirement dimension list (see Table 2.2) and feeds it back to the resource capability collection module.

The fuzzified acceleration requirement description is in the form of a CSAR package. The user may be a network element manufacturer or operator using the system, such as operation and maintenance personnel or testers.

In implementation, the resource capability collection module may be connected to the client, and the user parses the fuzzified acceleration requirement description through the client. In another embodiment, a parsing module may be added to the system to parse the fuzzified acceleration requirement description into an acceleration requirement dimension list.

TABLE 2.2

Step 23, the capability matching module searches the resource pool based on the training model obtained in the static experience stage and the acceleration demand dimension list, and completes dimension matching with the original HPA mechanism.

The mechanism for searching the resource pool for acceleration resources is shown in fig. 4. Assuming three acceleration nodes exist at this time, they report their respective acceleration resource dimension information Acc1, Acc2, and Acc3. The capability matching module takes the acceleration requirement dimension list schema imported by the design module as the filtering condition and compares it against the acceleration dimension information Acc1, Acc2, and Acc3, thereby filtering out an updated acceleration dimension list, storing it in the operation storage module, and executing the specific call in the next step. Wherein:

the schema corresponding to the Acc1 includes:

12v CPUs;

24GB RAM;

PCIe GPU accelerator;

bandwidth_max:800M;

Delay_max:100ms;

the schema corresponding to the Acc2 includes:

8v CPUs;

24GB RAM;

SriovNIC Network;

bandwidth_max:2000M;

Delay_max:6ms;

the schema corresponding to the Acc3 includes:

8v CPUs;

24GB RAM;

SriovNIC Network;

bandwidth_max:1000M;

Delay_max:30ms;
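The filtering over the three node schemas above can be sketched as follows. The IPsec thresholds used here (bandwidth at least 1000M, delay at most 10 ms) are assumed for illustration; under them only Acc2 survives the filter:

```python
# Reported dimension information for the three acceleration nodes (from the
# schemas above; units: bandwidth in Mbit/s, delay in ms).
nodes = {
    "Acc1": {"vcpus": 12, "ram_gb": 24, "accel": "PCIe GPU",
             "bandwidth_max": 800,  "delay_max_ms": 100},
    "Acc2": {"vcpus": 8,  "ram_gb": 24, "accel": "SriovNIC",
             "bandwidth_max": 2000, "delay_max_ms": 6},
    "Acc3": {"vcpus": 8,  "ram_gb": 24, "accel": "SriovNIC",
             "bandwidth_max": 1000, "delay_max_ms": 30},
}

def filter_nodes(nodes, bandwidth_min, delay_max_ms):
    """Compare each node's reported dimensions against the demand thresholds."""
    return [name for name, schema in nodes.items()
            if schema["bandwidth_max"] >= bandwidth_min
            and schema["delay_max_ms"] <= delay_max_ms]

print(filter_nodes(nodes, bandwidth_min=1000, delay_max_ms=10))  # ['Acc2']
```

Acc1 fails on bandwidth (800 < 1000) and Acc3 fails on delay (30 > 10), leaving Acc2 as the filtered acceleration dimension list entry.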

Step 24, in the dynamic matching stage, the capability matching module uses the matched acceleration resource information to update the acceleration demand dimension list imported by the system into hardware dimension information of the HPA mechanism, completing the final acceleration node scheduling and deployment.

Taking the IPsec requirement of this example, the found matching relationship is updated into the list; see Table 2.3:

TABLE 2.3

Step 25, if no suitable acceleration node resource can be found during dynamic matching, the acceleration requirement dimension list is transmitted to the hardware provider in the manner of the static experience stage to complete a new round of resource pool data updating, and the acceleration capability resource pools of the operation storage module and the design storage module are expanded according to the updated schema.

Step 26, when the system in this scheme uses a specific acceleration node resource during acceleration node matching, the operation storage module marks the schema corresponding to that resource as "occupied", notifies the capability matching module to perform a new round of capability matching, and schedules the resource to an acceleration node whose schema shows "idle". The flow is consistent with the matching flow described above and is therefore not repeated; the corresponding occupied-state schema is shown in Table 2.4:

TABLE 2.4

Step 27, when a resource in the system has finished being used, the resource capability collection module notifies the operation storage module to set the schema state of the released resource to idle, so that it can be used for dynamic matching.
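The occupy-and-release lifecycle of steps 26 and 27 can be sketched with a plain dictionary standing in for the operation storage module; the function names are illustrative assumptions:

```python
# Schema states tracked by a stand-in for the operation storage module.
store = {"Acc2": "idle", "Acc3": "idle"}

def schedule(store, node):
    """Step 26 (sketch): scheduling a node marks its schema as occupied."""
    if store.get(node) != "idle":
        raise RuntimeError(f"{node} is not idle")
    store[node] = "occupied"

def release(store, node):
    """Step 27 (sketch): the resource capability collection module notifies
    the store to set the released resource's schema back to idle."""
    store[node] = "idle"

schedule(store, "Acc2")
print(store["Acc2"])  # occupied
release(store, "Acc2")
print(store["Acc2"])  # idle
```

While a node is occupied, a new round of capability matching would simply skip it, consistent with the filtering flow described above.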

The embodiment of the application provides a method and a system for fuzzified acceleration capability requirement description, capability matching, and resource scheduling independent of hardware platform information. The matching system based on fuzzified acceleration capability comprises a design module and an operation module; the design module comprises a service receiving module, a service description module, and a design storage module; the operation module comprises a resource capability collection module, an operation storage module, a capability matching module, and a scheduling module. In some embodiments, the service receiving module is configured to receive an acceleration requirement independent of specific hardware types and characteristics and parse it into an acceleration requirement description; the service description module is configured to parse the acceleration requirement description in the acceleration requirement dimension to obtain an acceleration requirement dimension list; the capability matching module is configured to search a training model according to the acceleration requirement dimension list to obtain an acceleration hardware resource specification of a resource pool, wherein the training model is used to convert acceleration requirements into the acceleration hardware resource specification of the resource pool, and to search for acceleration hardware resource information meeting the acceleration hardware resource specification to obtain candidate acceleration hardware resource information; the scheduling module is configured to screen the candidate hardware resource information according to the real-time available state of the acceleration resources to obtain target acceleration hardware resource information, and to schedule the acceleration hardware resources in the resource pool according to the target acceleration hardware resource information.

The system workflow designed by the embodiment of the application comprises a static experience process and a dynamic matching process: the static experience process completes the precise matching of the fuzzified service description in the testing stage, and the dynamic matching process realizes the final invocation of the acceleration device in the actual operation stage.

In some embodiments, the design state phase includes the steps of:

Step 31, introducing multiple acceleration requirement descriptions, each independent of specific hardware types and characteristics, into the VNF Descriptor (VNFD);

Step 32, parsing each acceleration requirement description in the acceleration requirement dimension according to the acceleration requirement conditions to obtain an acceleration requirement dimension list to be completed;

wherein the acceleration requirement dimension list to be completed comprises key items and Value items characterizing the acceleration requirement dimensions, the Value items being the items to be completed;

Step 33, determining, on the key items, the corresponding acceleration hardware resources in the resource pool as a candidate acceleration node set;

Step 34, determining the limit values of the resource capability information of the candidate acceleration node set;

Step 35, filling the limit values into the corresponding Value items of the acceleration demand dimension list to be completed to obtain a completed acceleration demand dimension list, and saving the completed list;

Step 36, determining the resource capability information in the candidate acceleration node set as the acceleration hardware resource specification matching the acceleration requirement;

Step 37, associating each acceleration requirement with the corresponding acceleration hardware resource specification to obtain the mapping relation.
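Steps 32 through 35 can be sketched as follows. The data shapes are illustrative assumptions, and taking the maximum over the candidate nodes as the "limit value" is an assumed convention for this sketch:

```python
def fill_limits(dim_list, node_capabilities):
    """Fill each TBA Value item with the limit value over the candidate node set."""
    completed = []
    for dim in dim_list:
        values = [caps[dim["key"]]
                  for caps in node_capabilities if dim["key"] in caps]
        completed.append({"key": dim["key"],
                          # Assumed convention: the limit is the maximum reported value.
                          "value": max(values) if values else "TBA"})
    return completed

# A to-be-completed dimension list (step 32) and the capability information
# of a hypothetical candidate acceleration node set (steps 33-34).
to_complete = [{"key": "bandwidth_max", "value": "TBA"},
               {"key": "delay_max_ms", "value": "TBA"}]
candidate_caps = [{"bandwidth_max": 2000, "delay_max_ms": 6},
                  {"bandwidth_max": 1000, "delay_max_ms": 30}]

print(fill_limits(to_complete, candidate_caps))
# [{'key': 'bandwidth_max', 'value': 2000}, {'key': 'delay_max_ms', 'value': 30}]
```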

Compared with the prior art, the embodiment of the application provides a matching method based on fuzzified acceleration capability: a user-oriented service description mode is realized, and the fuzzified service description is bridged to precise resource capability; the two processes of static experience and dynamic matching designed into the system workflow fully meet the design and operation scheduling requirements of the system; and the fuzzified requirement matching capability provided by the embodiment of the application also lays a foundation for subsequently introducing more user-oriented acceleration technologies (such as FPGAs).

Based on the foregoing embodiments, the present application provides an apparatus for matching acceleration capability. The apparatus includes modules, units included in the modules, and subunits included in the units, which may be implemented by a processor in an acceleration capability matching device (e.g., an orchestrator), or, of course, by specific logic circuitry. In implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.

Fig. 5 is a schematic structural diagram illustrating a matching apparatus for acceleration capability according to an embodiment of the present application, and as shown in fig. 5, the apparatus 500 includes:

a receiving module 501, configured to receive an acceleration requirement unrelated to a specific hardware type and a specific characteristic;

a searching module 502, configured to search a pre-established training model according to the acceleration requirement, so as to obtain an acceleration hardware resource specification of a resource pool; wherein the training model is used to convert the acceleration requirements into an acceleration hardware resource specification for a resource pool;

and a scheduling module 503, configured to perform resource scheduling on the acceleration hardware resource in the resource pool according to the real-time available state of the acceleration hardware resource in the resource pool and the specification of the acceleration hardware resource.

In some embodiments, the apparatus further comprises:

the first analysis module is used for analyzing the acceleration requirement into an acceleration requirement description by adopting a specific language;

the second analysis module is used for analyzing the acceleration demand description on an acceleration demand dimension according to an acceleration demand condition to obtain an acceleration demand dimension list;

correspondingly, the searching module is used for searching a pre-established training model according to the acceleration demand dimension list to obtain an acceleration hardware resource specification of the resource pool.

In some embodiments, the scheduling module comprises:

a first determining unit, configured to determine acceleration hardware resource information that meets the acceleration hardware resource specification as alternative acceleration hardware resource information;

a second determining unit, configured to determine target acceleration hardware resource information according to the real-time available state of the acceleration hardware resource and the candidate acceleration hardware resource information;

and the scheduling unit is used for scheduling the accelerated hardware resources in the resource pool according to the target accelerated hardware resource information.

In some embodiments, the second determining unit is configured to delete the acceleration hardware resource information corresponding to the acceleration hardware resource whose available state is occupied from the alternative acceleration hardware resource information, so as to obtain the target acceleration hardware resource information.
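A minimal sketch of the second determining unit's behavior, dropping candidates whose real-time available state is occupied; the function and field names are illustrative assumptions:

```python
def select_targets(candidates, availability):
    """Delete occupied resources from the candidate list to obtain targets."""
    return [c for c in candidates if availability.get(c["id"]) != "occupied"]

# Hypothetical candidate resource info and real-time availability states.
candidates = [{"id": "acc-1"}, {"id": "acc-2"}, {"id": "acc-3"}]
availability = {"acc-1": "occupied", "acc-2": "idle", "acc-3": "idle"}

print([c["id"] for c in select_targets(candidates, availability)])
# ['acc-2', 'acc-3']
```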

In some embodiments, the apparatus further comprises:

the system comprises an introducing module, a judging module and a judging module, wherein the introducing module is used for introducing a plurality of acceleration requirements which are irrelevant to specific hardware types and characteristics into a VNF requirement description file VNFD;

and the establishing module is used for establishing a mapping relation between each acceleration requirement and the acceleration hardware resource specification to obtain the training model.

In some embodiments, the establishing module comprises:

the first analysis unit is used for analyzing each acceleration requirement description on an acceleration requirement dimension according to an acceleration requirement condition to obtain an acceleration requirement dimension list;

a third determining unit, configured to determine, in an acceleration requirement dimension in the acceleration requirement dimension list, a corresponding acceleration hardware resource in a resource pool as a candidate acceleration node set;

a fourth determining unit, configured to determine resource capability information in the candidate acceleration node set as an acceleration hardware resource specification matching the acceleration requirement;

and the association unit is used for associating each acceleration requirement with the corresponding acceleration hardware resource specification to obtain the mapping relation.

In some embodiments, the first parsing unit includes:

the first parsing subunit is configured to parse each acceleration requirement description in the acceleration requirement dimension according to the acceleration requirement conditions to obtain an acceleration requirement dimension list to be completed;

a determining subunit, configured to determine a limit value of the resource capability information of the candidate acceleration node set;

and the filling subunit is configured to fill the limit values into the acceleration requirement dimension list to be completed.

In some embodiments, the acceleration requirement dimension list to be completed includes key items and Value items characterizing resource capability information, the Value items being the items to be completed; correspondingly, the filling subunit is configured to fill the limit values into the corresponding Value items of the acceleration demand dimension list to be completed, to obtain a completed acceleration demand dimension list.

The above description of the apparatus embodiments, similar to the above description of the method embodiments, has similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.

It should be noted that, in the embodiment of the present application, if the matching method of the acceleration capability is implemented in the form of a software functional module and is sold or used as a standalone product, it may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a matching device to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.

Correspondingly, the embodiment of the present application provides an acceleration capability matching device, which includes a memory and a processor, where the memory stores a computer program that can be executed on the processor, and the processor implements the steps in the above method when executing the program.

Correspondingly, the embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program realizes the steps of the above method when being executed by a processor.

Here, it should be noted that: the above description of the storage medium and device embodiments is similar to the description of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.

It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.

It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.

In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.

Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.

Alternatively, if the integrated units described above in the present application are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence, or the portions thereof that contribute to the related art, may be embodied in the form of a software product stored in a storage medium and including several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.

The above description covers only embodiments of the present application, but the scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive within the technical scope disclosed in the present application shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
