Method and device for upgrading computing node

1. A method of computing node upgrade, the method comprising:

acquiring an upgrade policy and tenant information;

determining M computing nodes to be upgraded from N computing nodes to be upgraded according to the upgrade policy and the tenant information, wherein N is a positive integer greater than or equal to 2, and M is a positive integer greater than or equal to 1 and less than N;

and performing an upgrade operation on the M computing nodes to be upgraded.

2. The method of claim 1, wherein the upgrade policy includes a first threshold and a second threshold, the first threshold and the second threshold being positive integers, the first threshold being less than the second threshold, and the second threshold being less than the number of VMs of the same tenant allocated across the N computing nodes to be upgraded, wherein the tenant information includes VM allocation information indicating the number of VMs of the same tenant allocated to each of the N computing nodes to be upgraded,

wherein determining the M computing nodes to be upgraded from the N computing nodes to be upgraded according to the upgrade policy and the tenant information comprises:

determining K candidate computing nodes from the N computing nodes to be upgraded according to the first threshold and the VM allocation information, wherein K is a positive integer which is less than or equal to N and greater than or equal to M, and the number of VMs allocated to the same tenant by each of the K candidate computing nodes is less than or equal to the first threshold;

and determining the M computing nodes to be upgraded from the K candidate computing nodes according to the second threshold, wherein the number of VMs allocated to the same tenant across the M computing nodes to be upgraded is less than or equal to the second threshold.

3. The method of claim 2, wherein the upgrade policy further includes a third threshold, the third threshold being a positive integer, the third threshold being less than or equal to a maximum level allocated to the tenant among the N computing nodes to be upgraded, wherein the tenant information further includes tenant level information indicating a level of the tenant included in each of the N computing nodes to be upgraded,

wherein determining the K candidate computing nodes from the N computing nodes to be upgraded according to the first threshold and the VM allocation information comprises:

determining Q candidate computing nodes from the N computing nodes to be upgraded according to the first threshold and the VM allocation information, wherein Q is a positive integer which is less than or equal to N and greater than or equal to M, and the number of VMs allocated to the same tenant by each of the Q candidate computing nodes is equal to the first threshold;

and determining the K candidate computing nodes from the Q candidate computing nodes according to the third threshold and the tenant level information, wherein K is a positive integer which is less than or equal to Q and greater than or equal to M, and the level of the tenant included in each computing node to be upgraded in the K candidate computing nodes is less than or equal to the third threshold.

4. The method of claim 2 or 3, wherein the upgrade policy further comprises a number of compute nodes to be upgraded, the number of nodes to be upgraded being equal to M.

5. The method according to any one of claims 2-4, wherein prior to determining the M computing nodes to be upgraded from the K candidate computing nodes according to the second threshold, the method further comprises:

sorting the K candidate computing nodes by the number of tenant VMs included in each of the K candidate computing nodes to obtain a ranked list;

wherein determining the M computing nodes to be upgraded from the K candidate computing nodes according to the second threshold comprises:

sequentially determining, in the order of the ranked list, the M computing nodes to be upgraded from the K candidate computing nodes, wherein the number of VMs allocated to the same tenant across the M computing nodes to be upgraded is less than or equal to the second threshold.

6. The method according to any one of claims 1-5, further comprising:

and upgrading, among the N computing nodes to be upgraded, the computing nodes to be upgraded other than the M computing nodes to be upgraded.

7. An apparatus for computing node upgrade, the apparatus comprising:

an acquisition unit, configured to acquire an upgrade policy and tenant information;

a processing unit, configured to determine M computing nodes to be upgraded from N computing nodes to be upgraded according to the upgrade policy and the tenant information, wherein N is a positive integer greater than or equal to 2, and M is a positive integer greater than or equal to 1 and less than N;

the processing unit is further configured to perform an upgrade operation on the M computing nodes to be upgraded.

8. The apparatus of claim 7, wherein the upgrade policy comprises a first threshold and a second threshold, the first threshold and the second threshold being positive integers, the first threshold being less than the second threshold, and the second threshold being less than the number of VMs of the same tenant allocated across the N computing nodes to be upgraded, wherein the tenant information comprises VM allocation information indicating the number of VMs of the same tenant allocated to each of the N computing nodes to be upgraded,

the processing unit is further configured to:

determining K candidate computing nodes from the N computing nodes to be upgraded according to the first threshold and the VM allocation information, wherein K is a positive integer which is less than or equal to N and greater than or equal to M, and the number of VMs allocated to the same tenant by each of the K candidate computing nodes is less than or equal to the first threshold;

and determining the M computing nodes to be upgraded from the K candidate computing nodes according to the second threshold, wherein the number of VMs allocated to the same tenant across the M computing nodes to be upgraded is less than or equal to the second threshold.

9. The apparatus of claim 8, wherein the upgrade policy further comprises a third threshold, the third threshold being a positive integer, the third threshold being less than or equal to a maximum level allocated to a tenant among the N computing nodes to be upgraded, wherein the tenant information further comprises tenant level information indicating a level of the tenant included in each of the N computing nodes to be upgraded,

the processing unit is further configured to:

determining Q candidate computing nodes from the N computing nodes to be upgraded according to the first threshold and the VM allocation information, wherein Q is a positive integer which is less than or equal to N and greater than or equal to M, and the number of VMs allocated to the same tenant by each of the Q candidate computing nodes is equal to the first threshold;

and determining the K candidate computing nodes from the Q candidate computing nodes according to the third threshold and the tenant level information, wherein K is a positive integer which is less than or equal to Q and greater than or equal to M, and the level of the tenant included in each computing node to be upgraded in the K candidate computing nodes is less than or equal to the third threshold.

10. The apparatus of claim 8 or 9, wherein the upgrade policy further comprises a number of compute nodes to be upgraded, the number of nodes to be upgraded being equal to M.

11. The apparatus according to any of claims 8-10, wherein prior to determining the M computing nodes to be upgraded from the K candidate computing nodes according to the second threshold, the processing unit is further configured to:

sorting the K candidate computing nodes by the number of tenant VMs included in each of the K candidate computing nodes to obtain a ranked list;

and, when determining the M computing nodes to be upgraded from the K candidate computing nodes according to the second threshold, the processing unit is further configured to:

sequentially determine, in the order of the ranked list, the M computing nodes to be upgraded from the K candidate computing nodes, wherein the number of VMs allocated to the same tenant across the M computing nodes to be upgraded is less than or equal to the second threshold.

12. The apparatus according to any of claims 7-11, wherein the processing unit is further configured to:

and upgrade, among the N computing nodes to be upgraded, the computing nodes to be upgraded other than the M computing nodes to be upgraded.

13. An apparatus, comprising: a processor and a memory, the memory for storing a program and data, the processor for invoking and running the program from the memory to perform the method of any one of claims 1 to 6.

14. A computer-readable storage medium, comprising a computer program which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 6.

15. A chip comprising a processor and a data interface, the processor reading instructions stored on a memory through the data interface to perform the method of any one of claims 1 to 6.

Background

With the continuous expansion of the public cloud scale, the tenant scale and the number of computing nodes (e.g., physical hosts or virtual machines) carrying tenant traffic increase correspondingly. In the prior art, the service requirements of different tenants can be met by upgrading the computing nodes.

In a traditional upgrade mode, computing nodes are upgraded in batches with the resource type of the computing nodes as the granularity. That is, the computing nodes are grouped by resource type (e.g., compute-intensive or network-enhanced), and upgrade operations are performed on the groups in sequence. For example, when batch upgrading by resource type, a group of computing nodes to be upgraded whose resource type is network-enhanced is obtained, and that group is upgraded. Based on this scheme, if the upgrade of a computing node fails, the use of the computing nodes of a certain resource type may be affected, and further, the services of the tenants assigned to computing nodes of that resource type may be affected.

Therefore, a method for upgrading computing nodes is needed that can reduce the impact on tenant services when a computing node upgrade fails.

Disclosure of Invention

The present application provides a method and a device for upgrading a computing node, which can reduce the impact on tenant services in the case that an upgrade of a computing node fails.

In a first aspect, a method for upgrading a computing node is provided, and the method includes:

acquiring an upgrade policy and tenant information;

determining M computing nodes to be upgraded from N computing nodes to be upgraded according to the upgrade policy and the tenant information, wherein N is a positive integer greater than or equal to 2, and M is a positive integer greater than or equal to 1 and less than N;

and performing an upgrade operation on the M computing nodes to be upgraded.

Based on this scheme, the information of the tenants hosted on the computing nodes to be upgraded is taken into account when the nodes are upgraded, so that the impact on tenant services can be reduced if an upgrade of a computing node fails.

With reference to the first aspect, in certain implementations of the first aspect, the upgrade policy includes a first threshold and a second threshold, the first threshold and the second threshold are positive integers, the first threshold is less than the second threshold, the second threshold is less than the number of VMs of the same tenant allocated across the N computing nodes to be upgraded, the tenant information includes VM allocation information, and the VM allocation information indicates the number of VMs of the same tenant allocated to each of the N computing nodes to be upgraded,

wherein determining the M computing nodes to be upgraded from the N computing nodes to be upgraded according to the upgrade policy and the tenant information includes:

determining K candidate computing nodes from the N computing nodes to be upgraded according to the first threshold and the VM allocation information, wherein K is a positive integer which is less than or equal to N and greater than or equal to M, and the number of VMs allocated to the same tenant by each of the K candidate computing nodes is less than or equal to the first threshold;

and determining the M computing nodes to be upgraded from the K candidate computing nodes according to the second threshold, wherein the number of VMs allocated to the same tenant across the M computing nodes to be upgraded is less than or equal to the second threshold.

Based on this scheme, the situation in which most or even all of the VMs allocated to the same tenant fall into the same upgrade batch can be avoided as far as possible, so that the impact on tenant services is reduced if an upgrade of a computing node fails.
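For illustration only, the following Python sketch shows one way the two-threshold selection described above could be realized. The data model (a per-node counter of VMs per tenant) and all function and variable names are assumptions of this sketch, not part of the claimed method; the second step shown here is one greedy variant of the selection.

```python
from collections import Counter
from typing import Dict, List

def select_upgrade_batch(vm_allocation: Dict[str, Counter],
                         first_threshold: int,
                         second_threshold: int,
                         m: int) -> List[str]:
    # Step 1: keep the K candidates on which no single tenant has more
    # than first_threshold VMs.
    candidates = [node for node, counts in vm_allocation.items()
                  if all(c <= first_threshold for c in counts.values())]
    # Step 2 (a greedy variant): accumulate candidates while the batch's
    # combined per-tenant VM count stays within second_threshold.
    batch: List[str] = []
    combined: Counter = Counter()
    for node in candidates:
        merged = combined + vm_allocation[node]
        if all(c <= second_threshold for c in merged.values()):
            batch.append(node)
            combined = merged
        if len(batch) == m:
            break
    return batch  # may hold fewer than m nodes if the thresholds are tight
```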

With reference to the first aspect, in certain implementations of the first aspect, the upgrade policy further includes a third threshold, where the third threshold is a positive integer, the third threshold is less than or equal to a maximum level allocated to the tenant among the N computing nodes to be upgraded, the tenant information further includes tenant level information, and the tenant level information is used to indicate a level of the tenant included in each of the N computing nodes to be upgraded,

wherein determining the K candidate computing nodes from the N computing nodes to be upgraded according to the first threshold and the VM allocation information includes:

determining Q candidate computing nodes from the N computing nodes to be upgraded according to the first threshold and the VM allocation information, wherein Q is a positive integer which is less than or equal to N and greater than or equal to M, and the number of VMs allocated to the same tenant by each of the Q candidate computing nodes is equal to the first threshold;

and determining the K candidate computing nodes from the Q candidate computing nodes according to the third threshold and the tenant level information, wherein K is a positive integer which is less than or equal to Q and greater than or equal to M, and the level of the tenant included in each computing node to be upgraded in the K candidate computing nodes is less than or equal to the third threshold.

Based on this scheme, the situation in which the VMs of high-level tenants fall into the same upgrade operation can be avoided as far as possible, so that the impact on high-level tenant services is reduced if an upgrade of a computing node fails.
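Continuing the illustrative sketch, the third-threshold filter could be layered on top of the first-threshold step as follows; node_tenants and tenant_levels are invented names, and the sketch simply mirrors the stated condition that every tenant level on a retained node is at most the third threshold.

```python
from typing import Dict, List

def filter_by_tenant_level(q_candidates: List[str],
                           node_tenants: Dict[str, List[str]],
                           tenant_levels: Dict[str, int],
                           third_threshold: int) -> List[str]:
    # Keep only the nodes on which every tenant's level is at or below
    # the third threshold, yielding the K candidates described above.
    return [node for node in q_candidates
            if all(tenant_levels[t] <= third_threshold
                   for t in node_tenants[node])]
```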

With reference to the first aspect, in certain implementations of the first aspect, the upgrade policy further includes a number of computing nodes to be upgraded, where the number of computing nodes to be upgraded is equal to M.

With reference to the first aspect, in certain implementations of the first aspect, before determining the M computing nodes to be upgraded from the K candidate computing nodes according to the second threshold, the method further includes:

sorting the K candidate computing nodes by the number of tenant VMs included in each of the K candidate computing nodes to obtain a ranked list;

wherein determining the M computing nodes to be upgraded from the K candidate computing nodes according to the second threshold includes:

sequentially determining, in the order of the ranked list, the M computing nodes to be upgraded from the K candidate computing nodes, wherein the number of VMs allocated to the same tenant across the M computing nodes to be upgraded is less than or equal to the second threshold.

Based on this scheme, when the M computing nodes to be upgraded are upgraded, the selected M computing nodes can cover as many tenant VMs as possible.
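As a quick illustration, the ranking step might be sketched as below; vm_allocation is the same assumed per-node tenant-VM counter used in the earlier sketches.

```python
from collections import Counter
from typing import Dict, List

def rank_candidates(candidates: List[str],
                    vm_allocation: Dict[str, Counter]) -> List[str]:
    # Descending by total tenant-VM count; the detailed description
    # below also permits ascending order.
    return sorted(candidates,
                  key=lambda n: sum(vm_allocation[n].values()),
                  reverse=True)
```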

With reference to the first aspect, in certain implementations of the first aspect, the method further includes:

and upgrading, among the N computing nodes to be upgraded, the computing nodes to be upgraded other than the M computing nodes to be upgraded.

In a second aspect, there is provided an apparatus for upgrading a computing node, the apparatus comprising:

an acquisition unit, configured to acquire an upgrade policy and tenant information;

a processing unit, configured to determine M computing nodes to be upgraded from N computing nodes to be upgraded according to the upgrade policy and the tenant information, wherein N is a positive integer greater than or equal to 2, and M is a positive integer greater than or equal to 1 and less than N;

the processing unit is further configured to perform an upgrade operation on the M computing nodes to be upgraded.

Based on this scheme, the information of the tenants hosted on the computing nodes to be upgraded is taken into account when the nodes are upgraded, so that the impact on tenant services can be reduced if an upgrade of a computing node fails.

With reference to the second aspect, in some implementations of the second aspect, the upgrade policy includes a first threshold and a second threshold, the first threshold and the second threshold are positive integers, the first threshold is less than the second threshold, the second threshold is less than the number of VMs of the same tenant allocated across the N computing nodes to be upgraded, the tenant information includes VM allocation information, and the VM allocation information indicates the number of VMs of the same tenant allocated to each of the N computing nodes to be upgraded,

the processing unit is further configured to:

determining K candidate computing nodes from the N computing nodes to be upgraded according to the first threshold and the VM allocation information, wherein K is a positive integer which is less than or equal to N and greater than or equal to M, and the number of VMs allocated to the same tenant by each of the K candidate computing nodes is less than or equal to the first threshold;

and determining the M computing nodes to be upgraded from the K candidate computing nodes according to the second threshold, wherein the number of VMs allocated to the same tenant across the M computing nodes to be upgraded is less than or equal to the second threshold.

Based on this scheme, the situation in which most or even all of the VMs allocated to the same tenant fall into the same upgrade batch can be avoided as far as possible, so that the impact on tenant services is reduced if an upgrade of a computing node fails.

With reference to the second aspect, in some implementations of the second aspect, the upgrade policy further includes a third threshold, the third threshold is a positive integer, the third threshold is less than or equal to a maximum level allocated to the tenant among the N computing nodes to be upgraded, the tenant information further includes tenant level information, the tenant level information indicates a level of the tenant included in each of the N computing nodes to be upgraded,

the processing unit is further configured to:

determining Q candidate computing nodes from the N computing nodes to be upgraded according to the first threshold and the VM allocation information, wherein Q is a positive integer which is less than or equal to N and greater than or equal to M, and the number of VMs allocated to the same tenant by each of the Q candidate computing nodes is equal to the first threshold;

and determining the K candidate computing nodes from the Q candidate computing nodes according to the third threshold and the tenant level information, wherein K is a positive integer which is less than or equal to Q and greater than or equal to M, and the level of the tenant included in each computing node to be upgraded in the K candidate computing nodes is less than or equal to the third threshold.

Based on this scheme, the situation in which the VMs of high-level tenants fall into the same upgrade operation can be avoided as far as possible, so that the impact on high-level tenant services is reduced if an upgrade of a computing node fails.

With reference to the second aspect, in some implementations of the second aspect, the upgrade policy further includes a number of computing nodes to be upgraded, where the number of computing nodes to be upgraded is equal to M.

With reference to the second aspect, in certain implementations of the second aspect, before determining the M computing nodes to be upgraded from the K candidate computing nodes according to the second threshold, the processing unit is further configured to:

sorting the K candidate computing nodes by the number of tenant VMs included in each of the K candidate computing nodes to obtain a ranked list;

and, when determining the M computing nodes to be upgraded from the K candidate computing nodes according to the second threshold, the processing unit is further configured to:

sequentially determine, in the order of the ranked list, the M computing nodes to be upgraded from the K candidate computing nodes, wherein the number of VMs allocated to the same tenant across the M computing nodes to be upgraded is less than or equal to the second threshold.

Based on this scheme, when the M computing nodes to be upgraded are upgraded, the selected M computing nodes can cover as many tenant VMs as possible.

With reference to the second aspect, in some implementations of the second aspect, the processing unit is further configured to:

and upgrade, among the N computing nodes to be upgraded, the computing nodes to be upgraded other than the M computing nodes to be upgraded.

In a third aspect, an apparatus for computing node upgrade is provided, where the apparatus is configured to perform the method in the first aspect or any possible implementation manner of the first aspect, and specifically, the apparatus includes a module configured to perform the method in the first aspect or any possible implementation manner of the first aspect.

In a fourth aspect, an embodiment of the present application provides an apparatus for upgrading a computing node, including: a memory and a processor. The memory is configured to store instructions, and the processor is configured to execute the instructions stored in the memory; when the processor executes the instructions, the processor is caused to perform the method in the first aspect or any possible implementation manner of the first aspect.

In a fifth aspect, the present application provides a computer-readable medium for storing a computer program comprising instructions for executing the method of the first aspect or any possible implementation manner of the first aspect.

In a sixth aspect, the present application further provides a computer program product containing instructions, which when run on a computer, causes the computer to execute the method of the first aspect or any possible implementation manner of the first aspect.

In a seventh aspect, a chip is provided, which includes at least one processor and an interface; the at least one processor is configured to invoke and run a computer program, so that the chip is configured to execute the method in the first aspect or any possible implementation manner of the first aspect.

Drawings

Fig. 1 is a schematic diagram of a cloud system 100 to which embodiments of the present application are applicable.

FIG. 2 is a schematic flow chart diagram of a method 200 for computing node upgrade provided by an embodiment of the present application.

FIG. 3 is a schematic block diagram of a method 200 for computing node upgrade provided by embodiments of the present application.

FIG. 4 is a schematic block diagram of a method 200 for computing node upgrade provided by embodiments of the present application.

FIG. 5 is a schematic block diagram of an apparatus 500 for computing node upgrade according to an embodiment of the present application.

FIG. 6 is a schematic block diagram of an apparatus 600 for computing node upgrade of an embodiment of the present application.

Detailed Description

The method and the device for upgrading a computing node provided in the embodiments of the present application can be applied to a computer, and the computer includes a hardware layer, an operating system layer running on the hardware layer, and an application layer running on the operating system layer. The hardware layer includes hardware such as a central processing unit (CPU), a memory management unit (MMU), and a memory (also referred to as a main memory). The operating system may be any one or more computer operating systems that implement business processing through processes, such as a Linux operating system, a Unix operating system, an Android operating system, an iOS operating system, or a Windows operating system. The application layer includes applications such as a browser, an address book, word processing software, and instant messaging software. In the embodiments of the present application, the computer may be a handheld device such as a smartphone, or a terminal device such as a personal computer; the present application imposes no particular limitation, as long as the method for upgrading a computing node according to the embodiments of the present application can be carried out by running a program recording the code of that method. The execution body of the method for upgrading a computing node in the embodiments of the present application may be a computer device, or a functional module in the computer device capable of invoking and executing a program.

Moreover, various aspects or features of the present application may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques. The term "article of manufacture" as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer-readable media may include, but are not limited to: magnetic storage devices (e.g., hard disk, floppy disk, or magnetic tape), optical disks (e.g., Compact Disk (CD), Digital Versatile Disk (DVD), etc.), smart cards, and flash Memory devices (e.g., Erasable Programmable Read-Only Memory (EPROM), card, stick, or key drive, etc.). In addition, various storage media described herein can represent one or more devices and/or other machine-readable media for storing information. The term "machine-readable medium" can include, without being limited to, wireless channels and various other media capable of storing, containing, and/or carrying instruction(s) and/or data.

For ease of understanding, related terms referred to in the embodiments of the present application will be briefly described first.

1. Public cloud

A public cloud generally refers to a cloud that a third-party provider provides for users, and it is generally available over the Internet. The core attribute of a public cloud is shared resource service.

2. Tenant

In the field of cloud computing, a tenant refers to an account that rents public cloud resources. Tenants are also sometimes referred to as cloud tenants.

3. User

A user refers to a sub-account created by a tenant under its account. A user holds part of the tenant's permissions; in addition, the tenant can set a user's permissions on resources and operations as required.

4. User group

Multiple users can be grouped into a user group, and an operation performed by the tenant on the user group applies to each user in the group.

5. Tenant virtual machine (VM)

A tenant virtual machine, i.e., a tenant VM, refers to a VM allocated to a tenant.

The technical solution in the present application will be described below with reference to the accompanying drawings.

In a conventional method for upgrading computing nodes, the computing nodes are first grouped according to their resource types (e.g., compute-intensive or network-enhanced), and upgrade operations are then performed on each group in sequence. Because tenant information is ignored when the computing nodes are grouped, the computing nodes within a group may host most or all of the tenant VMs carrying some tenant's service, and possibly even those of a high-level tenant. If the upgrade of such a group fails, a large number of computing nodes may fail at once, thereby affecting tenant services and degrading user experience.

In order to solve the above problem, the present application provides a method and an apparatus for upgrading a computing node, which can reduce the impact on tenant services when a computing node upgrade fails.

Fig. 1 shows a schematic diagram of a cloud system 100 to which embodiments of the present application are applicable.

As shown in fig. 1, the system 100 includes at least two computing nodes 110 and a management area node 120, wherein the computing nodes 110 are communicatively coupled to the management area node 120.

Specifically, in the embodiment of the present application, a communication interface is provided in the computing node 110, and a communication interface is provided in the management area node 120, so that the computing node 110 and the management area node 120 can communicate through the communication interface.

In this embodiment, the computing node 110 and the management area node 120 may be configured in the same physical device, and in this case, by way of example and not limitation, the system 100 may further include a bus, and the computing node 110 may be connected to the bus via a communication interface, and the management area node 120 may be connected to the bus via a communication interface, so that the computing node 110 and the management area node 120 may be communicatively connected via the bus.

By way of example, and not limitation, the bus may comprise a data bus.

Optionally, the bus may also include a power bus, a control bus, a status signal bus, and the like. In this case, the communication interface of the computing node 110 may be a communication interface between internal devices of the computer apparatus, and similarly, the communication interface of the management area node 120 may be a communication interface between internal devices of the computer apparatus.

In the embodiment of the present application, the computing node 110 and the management area node 120 may also be configured in different devices. In this case, the computing node 110 and the management area node 120 may be connected in a wired or wireless manner; for example, a communication cable (e.g., an optical fiber or a copper wire) may be disposed between the computing node 110 and the management area node 120 (specifically, between the communication interface of the computing node 110 and the communication interface of the management area node 120) to implement a wired communication connection between them. In this case, the communication interface of the computing node 110 may be a communication interface used by a computer device to communicate with an external device, and similarly, the communication interface of the management area node 120 may be a communication interface used by a computer device to communicate with an external device.

In addition, in the embodiment of the present application, the plurality of computing nodes 110 may be configured in the same physical device (for example, a server), or the plurality of computing nodes 110 may also be configured independently, and the present application is not particularly limited.

It should be noted that, in the embodiment of the present application, the computing nodes 110 configured in the same physical device may be connected through a bus (e.g., a PCIe bus), that is, signaling or data transmission between the computing nodes 110 in the same physical device may be realized through the bus.

In addition, for each computing node 110 configured in different physical devices, communication may be performed in such a manner that a transceiver (for transmitting information or signals) connected to the computing node 110 may be configured in each physical device, and the transceivers in each physical device may be connected by a transmission cable to implement signaling or data transmission between each computing node 110 configured in different physical devices. Alternatively, the transceivers in each physical device may communicate with each other wirelessly.

Next, the functions and structures of the above-described components will be explained.

First, the computing node 110

The compute node 110 may include one or more virtual machines 130. The one or more virtual machines 130 can be assigned to tenants to satisfy the tenants' business needs. As shown in FIG. 1, each compute node 110 includes two virtual machines.

In the embodiment of the present application, the computing node 110 may be a physical computing node. That is, in the present embodiment, the computing node 110 may be a computer device having a processor. By way of example, and not limitation, the processor may be a Central Processing Unit (CPU).

The computing node 110 may also include components such as a bus, transceiver, and memory.

The memory may include a memory controller and a storage unit (or storage medium). A storage unit, which may also be referred to as storage space, is a medium used for storing some kind of discontinuous physical quantity. By way of example and not limitation, the storage unit may be a memory chip, and its material may be a semiconductor, a magnetic core, a magnetic drum, a magnetic tape, or a laser disc, as is well known in the art. The type of the storage unit may be a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, a register, or another storage medium type mature in the field. The memory controller is used for allocating physical addresses to the storage units, accessing the storage units according to the physical addresses, and performing data storage operations on the storage units.

Second, the management area node 120

The management area node 120 can update services by performing upgrade operations on the computing nodes 110.

The management area node 120 is similar to the compute node 110. The management area node 120 may be a computer device having a processor. Here, the detailed description thereof is omitted in order to avoid redundancy.

In the embodiment of the present application, one or more virtual machines may be included in the management area node 120.

In one implementation, the management area node 120 includes multiple virtual machines, and different service software may be deployed on each of them. For example, a management area node includes 3 virtual machines, i.e., virtual machine 1, virtual machine 2, and virtual machine 3. A compute management service may be deployed in virtual machine 1, an upgrade service in virtual machine 2, and an authentication service in virtual machine 3.

In another implementation, the management area node 120 includes only one virtual machine, and the different service software may all be deployed on that virtual machine. For example, a management area node includes 1 virtual machine, on which a compute management service, an upgrade service, and an authentication service may be deployed.

Also, in the embodiment of the present application, the management area node 120 and the computing node 110 may be configured independently, that is, the management area node 120 and the computing node 110 may be configured in different physical nodes.

Alternatively, in the embodiment of the present application, the management area node 120 may also be configured jointly with one or more computing nodes 110, that is, the management area node 120 and one or more computing nodes 110 may be configured in the same physical node.

It should be understood that the architecture of the system shown in fig. 1 is merely an example and is not intended to limit the present application in any way. For example, the number of computing nodes 110 in the system and the number of virtual machines 130 included in each computing node may be arbitrarily changed according to actual needs.

The method for upgrading a computing node according to the embodiment of the present application is described in detail below with reference to fig. 2 to 4.

FIG. 2 illustrates a schematic flow chart diagram of a method 200 for computing node upgrade provided by an embodiment of the present application. The method 200 includes steps 210 through 230, which are described in detail below.

Step 210: acquire an upgrade policy and tenant information.

The upgrade policy includes a first threshold and a second threshold. The first threshold and the second threshold are positive integers, the first threshold is less than the second threshold, and the second threshold is less than the number of VMs allocated to the same tenant across the N computing nodes to be upgraded. N is a positive integer greater than or equal to 2.

The first threshold may be understood as the maximum number of VMs of the same tenant that any single computing node may include in one upgrade operation. The second threshold may be understood as the maximum number of VMs of the same tenant that all computing nodes in one upgrade operation may include in total.

The N computing nodes to be upgraded may be computing nodes in the cloud system.

Optionally, the upgrade policy further includes a third threshold, where the third threshold is less than or equal to a maximum level allocated to the tenant among the N computing nodes to be upgraded.

Optionally, the upgrade policy further includes the number of computing nodes to be upgraded.

The number of computing nodes to be upgraded can be understood as the number of computing nodes included in one upgrade operation.

The tenant information comprises VM allocation information, and the VM allocation information is used for indicating the number of VMs which are allocated to the same tenant by each computing node to be upgraded in the N computing nodes to be upgraded.

Optionally, the tenant information further includes tenant level information, and the tenant level information is used to indicate a level of a tenant included in each computing node to be upgraded.

In the embodiment of the present application, the definitions of the highest level and the lowest level of the tenant are not particularly limited.

In one implementation, a tenant with a tenant level of 1 has the highest level, a tenant with a tenant level of 2 has the next highest level, …, and so on.

In another implementation, a tenant at tenant level N has the highest level, a tenant at tenant level N-1 has the next highest level, …, and so on.

In the embodiment of the present application, the level of the tenants hosted on the computing nodes to be upgraded may be obtained through Identity and Access Management (IAM) software installed in the management area node.

In the embodiment of the present application, compute management service software can be installed in the management area node, and the correspondence between tenants and the computing nodes to be upgraded can be obtained from that software.
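By way of a non-authoritative sketch, the acquired upgrade policy and tenant information could be carried in structures such as the following; the field names are invented for illustration and are not drawn from the IAM or compute management services mentioned above.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class UpgradePolicy:
    first_threshold: int                   # max same-tenant VMs on one node per batch
    second_threshold: int                  # max same-tenant VMs across one batch
    third_threshold: Optional[int] = None  # max tenant level admitted to the batch
    nodes_per_batch: Optional[int] = None  # M, when the policy fixes it

@dataclass
class TenantInfo:
    # node -> tenant -> number of that tenant's VMs on the node
    vm_allocation: Dict[str, Dict[str, int]] = field(default_factory=dict)
    # tenant -> level (the level convention is left open, as noted above)
    tenant_levels: Dict[str, int] = field(default_factory=dict)
```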

In the embodiment of the present application, the manner of determining the upgrade policy is not particularly limited.

In one implementation, operation and maintenance personnel determine the upgrade policy according to current network conditions. For example, they may determine that the first threshold is 1 and the second threshold is 3; or that the first threshold is 1, the second threshold is 3, and the third threshold is 6; or that the first threshold is 1, the second threshold is 3, the third threshold is 6, and the number of computing nodes to be upgraded is 3.

In another implementation, the upgrade policy may be determined by a method of automatic upgrade over a small-scale network. For example, by this method it may be determined that the first threshold is 1 and the second threshold is 3; or that the first threshold is 1, the second threshold is 3, and the third threshold is 6.

Step 220: determine M computing nodes to be upgraded from the N computing nodes to be upgraded according to the upgrade policy and the tenant information, wherein N is a positive integer greater than or equal to 2, and M is a positive integer greater than or equal to 1 and less than N.

In the embodiment of the present application, the number of tenant VMs included in each of the N computing nodes to be upgraded is not specifically limited. For example, a computing node among the N computing nodes to be upgraded may include 0, 1, or 4 tenant VMs.

In this embodiment of the present application, the N computing nodes to be upgraded may all include the same number of tenant VMs, may all include different numbers of tenant VMs, or some of them may include the same number while the others differ.

In the embodiment of the present application, determining M computing nodes to be upgraded from N computing nodes to be upgraded according to an upgrade policy and tenant information includes:

determining K candidate computing nodes from the N computing nodes to be upgraded according to a first threshold and VM allocation information, wherein K is a positive integer which is smaller than or equal to N and larger than or equal to M, and the number of VMs allocated to the same tenant by each candidate computing node in the K candidate computing nodes is smaller than or equal to the first threshold;

and determining M computing nodes to be upgraded from the K candidate computing nodes according to the second threshold, wherein the number of VMs allocated to the same tenant across the M computing nodes to be upgraded is less than or equal to the second threshold.

It should be noted that determining the M computing nodes to be upgraded from the K candidate computing nodes according to the second threshold can be understood as follows: M candidate computing nodes are selected from the K candidate computing nodes, either arbitrarily or in a certain order, and if the number of VMs allocated to the same tenant across the selected M candidate computing nodes is less than or equal to the second threshold, the M candidate computing nodes are determined to be the M nodes to be upgraded.

In one implementation, M candidate compute nodes are arbitrarily chosen among the K candidate compute nodes.

By way of example and not limitation, suppose the order of the K candidate computing nodes is: candidate computing node 1, candidate computing node 2, candidate computing node 3, candidate computing node 4. If M is 2, arbitrarily selecting 2 candidate computing nodes yields at least the following 6 groups:

group 1: candidate computing node 1, candidate computing node 2;

group 2: candidate computing node 1, candidate computing node 3;

group 3: candidate computing node 1, candidate computing node 4;

group 4: candidate computing node 2, candidate computing node 3;

group 5: candidate computing node 2, candidate computing node 4;

group 6: candidate computing node 3, candidate computing node 4.

In another implementation, selecting M candidate computing nodes from the K candidate computing nodes in a certain order means sequentially selecting M candidate computing nodes according to the order of the K candidate computing nodes.

By way of example and not limitation, suppose again that the order of the K candidate computing nodes is: candidate computing node 1, candidate computing node 2, candidate computing node 3, candidate computing node 4. If M is 2, sequentially selecting 2 candidate computing nodes yields at least the following 6 groups:

group 1: candidate computing node 1, candidate computing node 2;

group 2: candidate computing node 1, candidate computing node 3;

group 3: candidate computing node 1, candidate computing node 4;

group 4: candidate computing node 2, candidate computing node 3;

group 5: candidate computing node 2, candidate computing node 4;

group 6: candidate computing node 3, candidate computing node 4.

Alternatively, in one implementation, M may be equal to the number of nodes to be upgraded.

Optionally, in an implementation manner, determining K candidate compute nodes from the N compute nodes to be upgraded according to the first threshold and the VM allocation information includes:

determining Q candidate computing nodes from the N computing nodes to be upgraded according to a first threshold and VM allocation information, wherein Q is a positive integer which is smaller than or equal to N and larger than or equal to M, and the number of VMs allocated to the same tenant by each candidate computing node in the Q candidate computing nodes is equal to the first threshold;

and according to the third threshold and the tenant level information, K candidate computing nodes are determined from the Q candidate computing nodes, wherein K is a positive integer which is smaller than or equal to Q and larger than or equal to M, and the level of the tenant included in each computing node to be upgraded in the K candidate computing nodes is smaller than or equal to the third threshold.

Optionally, in an implementation manner, the K candidate computing nodes are sorted by the number of tenant VMs included in each of the K candidate computing nodes to obtain a ranked list;

and determining the M computing nodes to be upgraded from the K candidate computing nodes according to the second threshold then includes:

sequentially determining, in the order of the ranked list, the M computing nodes to be upgraded from the K candidate computing nodes, wherein the number of VMs allocated to the same tenant across the M computing nodes to be upgraded is less than or equal to the second threshold.

In the implementation of the present application, the specific manner of ranking the K candidate computing nodes by the number of tenant VMs each includes is not limited.

For example, the K candidate computing nodes may be sorted in descending order of the number of tenant VMs each includes.

For another example, the K candidate computing nodes may be sorted in ascending order of the number of tenant VMs each includes.

Optionally, in an implementation manner, the K candidate computing nodes are sorted in descending order of the number of tenant VMs each includes, and the M computing nodes to be upgraded are then determined from the sorted K candidate computing nodes according to the second threshold.

By way of example and not limitation, after the K candidate computing nodes are sorted in descending order of the number of tenant VMs each includes, the resulting ranked list is, in order: candidate computing node 4, candidate computing node 3, candidate computing node 1, candidate computing node 2. If M is 2, sequentially selecting 2 candidate computing nodes from the sorted list yields at least the following 6 groups:

group 1: candidate computing node 4, candidate computing node 3;

group 2: candidate computing node 4, candidate computing node 1;

group 3: candidate computing node 4, candidate computing node 2;

group 4: candidate computing node 3, candidate computing node 1;

group 5: candidate computing node 3, candidate computing node 2;

group 6: candidate computing node 1, candidate computing node 2.

First, it is determined whether the number of VMs allocated to the same tenant across group 1 is less than or equal to the second threshold. If so, the candidate computing nodes in group 1 are determined to be the computing nodes to be upgraded; otherwise, they are not.

It should be noted that once the candidate computing nodes in group 1 have been determined to be the computing nodes to be upgraded, there is no need to check group 2. Only if group 1 fails the check is it determined whether the number of VMs allocated to the same tenant across group 2 is less than or equal to the second threshold, and so on.
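The group-by-group check just described can be sketched compactly with itertools.combinations, which enumerates groups of a sorted list in exactly the order shown above (group 1 first); all names are illustrative assumptions of this sketch.

```python
from collections import Counter
from itertools import combinations
from typing import Dict, List, Optional

def first_valid_group(ranked: List[str],
                      vm_allocation: Dict[str, Counter],
                      second_threshold: int,
                      m: int) -> Optional[List[str]]:
    for group in combinations(ranked, m):  # group 1, group 2, ... in order
        combined = sum((vm_allocation[n] for n in group), Counter())
        if all(c <= second_threshold for c in combined.values()):
            return list(group)  # stop at the first group that passes
    return None  # no group of m nodes satisfies the second threshold
```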

Step 230: perform an upgrade operation on the M computing nodes to be upgraded.

In the embodiment of the present application, after the upgrade operation on the M computing nodes to be upgraded succeeds, the upgrade can be paused and the functions of the first batch of upgraded computing nodes verified. For example, functional verification includes verifying that the virtual machines already deployed on a computing node communicate with the network properly, and verifying whether a new virtual machine can be deployed on the computing node and whether the network communication of the newly deployed virtual machine is normal.

Optionally, in some implementation manners, the method further includes upgrading the computing nodes to be upgraded other than the M computing nodes to be upgraded among the N computing nodes to be upgraded. That is, the M computing nodes to be upgraded are upgraded first, and the remaining computing nodes to be upgraded are upgraded afterwards.

In the embodiment of the present application, the manner of upgrading the remaining computing nodes (the N computing nodes to be upgraded other than the M computing nodes to be upgraded) is not particularly limited.

In one implementation, the remaining computing nodes may be upgraded in equal batches, that is, they are upgraded batch by batch and each batch contains the same number of computing nodes.

For example, if 6 computing nodes remain, they may be upgraded in 2 batches: any 3 of them are upgraded first, and then the remaining 3. Alternatively, the 6 computing nodes may be upgraded in 3 batches: any 2 are upgraded first, then any 2 of the remaining 4, and finally the last 2.
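A minimal sketch of the equal-split scheme follows, assuming as in the examples above that the remaining node count divides evenly into the requested number of batches; the names are illustrative.

```python
from typing import List

def equal_batches(remaining: List[str], num_batches: int) -> List[List[str]]:
    size = len(remaining) // num_batches  # e.g. 6 nodes, 2 batches -> 3 each
    return [remaining[i * size:(i + 1) * size] for i in range(num_batches)]
```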

In another implementation, the remaining computing nodes are upgraded according to an increment policy. Specifically, the remaining computing nodes may be upgraded in batches with an increment of k, meaning that the number of computing nodes included in batch j+1 exceeds the number included in batch j by k, where j is a positive integer greater than or equal to 1 and k is a positive integer.

For example, if 10 computing nodes remain and the increment is 2, the 10 computing nodes may be divided into 2 batches: any 4 of them are upgraded first, and then the remaining 6.

For another example, if 30 computing nodes remain and the increment is 5, the 30 computing nodes may be divided into 3 batches: any 5 are upgraded first, then any 10 of the remaining 25, and finally the remaining 15.
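The increment policy can be sketched similarly; the size of the first batch is an extra input that this sketch assumes (4 and 5 in the two examples above).

```python
from typing import List

def incremental_batches(remaining: List[str],
                        first_batch: int, k: int) -> List[List[str]]:
    batches, start, size = [], 0, first_batch
    while start < len(remaining):
        batches.append(remaining[start:start + size])
        start, size = start + size, size + k  # each batch grows by k nodes
    return batches

# With 30 remaining nodes, first_batch=5, k=5 -> batches of 5, 10, 15,
# matching the second example above.
```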

According to the method for upgrading a computing node provided in the present application, the impact on tenant services can be reduced when a computing node fails during an upgrade. Specifically, when the first batch of computing nodes to be upgraded is upgraded (that is, when the M computing nodes to be upgraded are upgraded), the situation in which most or even all of the VMs allocated to the same tenant are in the same upgrade operation can be avoided, and likewise the situation in which the VMs of a high-level tenant are in the same upgrade operation. In these cases, when a computing node upgrade fails, the impact on tenant services is reduced.

For ease of understanding, the method 200 for upgrading a computing node provided in the present application is described below with reference to fig. 3, taking the case in which the upgrade policy includes a first threshold and a second threshold, and the tenant information includes VM allocation information, as an example.

FIG. 3 illustrates a schematic block diagram of a method 200 for computing node upgrade provided by embodiments of the present application.

As shown in fig. 3, the cloud system includes 6 computing nodes (i.e., an example of the N computing nodes to be upgraded): computing node 1, computing node 2, computing node 3, computing node 4, computing node 5, and computing node 6. Each computing node includes 3 tenant VMs (VM1, VM2, and VM3).

The following information may be obtained by software installed on the computing nodes: VM2 in computing node 1 is assigned to tenant B; VM3 in computing node 2 is assigned to tenant C; VM2 in computing node 3 is assigned to tenant A; VM1 and VM3 in computing node 4 are assigned to tenant B; in computing node 5, VM1 is assigned to tenant D, VM2 to tenant A, and VM3 to tenant E; in computing node 6, VM1 and VM3 are assigned to tenant D, and VM2 to tenant A.

The upgrade policy acquired by the upgrade tool includes a first threshold of 1 and a second threshold of 2, and the tenant information includes the VM allocation information.

According to the first threshold and the VM allocation information, computing node 1, computing node 2, computing node 3, and computing node 5 in the cloud system may be determined to be candidate computing nodes (i.e., an example of the K candidate computing nodes). Computing node 4 and computing node 6 are excluded because each hosts 2 VMs of the same tenant (tenant B and tenant D, respectively), which exceeds the first threshold.

According to the second threshold, computing node 1 and computing node 2 among the candidate computing nodes may be determined to be the computing nodes upgraded in the first batch (i.e., an example of the M computing nodes to be upgraded).

Therefore, in the cloud system, the computing nodes upgraded in the first batch include computing node 1 and computing node 2. The computing nodes not in the first batch (i.e., an example of the computing nodes to be upgraded other than the M computing nodes to be upgraded among the N computing nodes to be upgraded) include computing node 3, computing node 4, computing node 5, and computing node 6.

In this case, when the computing nodes in the cloud system are upgraded, computing node 1 and computing node 2 may be upgraded first. After that upgrade operation succeeds, the remaining computing nodes 3, 4, 5, and 6 in the cloud system are upgraded in the subsequent (non-first) batches.

It should be noted that the manner of performing the upgrade operation on the remaining compute nodes 3, 4, 5, and 6 in the cloud system is not particularly limited.

For example, computing nodes 3, 4, 5, and 6 may be upgraded together. Alternatively, the 4 computing nodes may be upgraded in batches according to the configured number of nodes per batch, as in the sketch below.
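The candidate filtering and first-batch selection of this fig. 3 example can be sketched as follows (the allocation table transcribes fig. 3 as described above; the greedy selection of a 2-node first batch is an assumption for illustration, since the embodiment does not fix the selection order or the value of M):

```python
from collections import Counter
from typing import Dict, List

# VM allocation from fig. 3: node -> tenants owning its VMs
# (dict insertion order, Python 3.7+, stands in for the node numbering)
alloc: Dict[str, List[str]] = {
    "node1": ["B"], "node2": ["C"], "node3": ["A"],
    "node4": ["B", "B"], "node5": ["D", "A", "E"], "node6": ["D", "D", "A"],
}
FIRST_THRESHOLD = 1   # max VMs of one tenant on a single candidate node
SECOND_THRESHOLD = 2  # max VMs of one tenant across the whole first batch

def candidates(alloc: Dict[str, List[str]]) -> List[str]:
    """Keep nodes whose per-tenant VM count is within the first threshold."""
    return [n for n, tenants in alloc.items()
            if max(Counter(tenants).values()) <= FIRST_THRESHOLD]

def first_batch(nodes: List[str], alloc: Dict[str, List[str]], size: int) -> List[str]:
    """Greedily grow the first batch while the combined per-tenant
    VM count stays within the second threshold."""
    batch, counts = [], Counter()
    for n in nodes:
        merged = counts + Counter(alloc[n])
        if max(merged.values()) <= SECOND_THRESHOLD:
            batch, counts = batch + [n], merged
        if len(batch) == size:
            break
    return batch

cand = candidates(alloc)                  # ['node1', 'node2', 'node3', 'node5']
print(first_batch(cand, alloc, size=2))   # ['node1', 'node2']
```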

It should be understood that fig. 3 is only an example and does not constitute any limitation to the present application.

Optionally, the upgrade policy includes a first threshold, a second threshold, and a third threshold. The tenant information includes VM allocation information and tenant level information.

Optionally, the upgrade policy includes a first threshold, a second threshold, a third threshold, and the number of nodes to be upgraded. The tenant information includes VM allocation information and tenant level information.

In the embodiment of the present application, when the first batch of upgrade operations is performed on the computing nodes included in the cloud system, the VMs of the same tenant (for example, tenant A or tenant B) can be prevented from being concentrated in the same batch of upgrade operations. According to the method provided in the embodiment of the present application, if the upgrade operation on a computing node fails, the impact on the tenant's services is therefore reduced.

For ease of understanding, the method 200 for upgrading a computing node provided in the present application is described below with reference to fig. 3, taking the case in which the upgrade policy includes a first threshold, a second threshold, a third threshold, and the number of computing nodes to be upgraded, and the tenant information includes tenant level information and VM allocation information, as an example.

As in the previous example of fig. 3, the cloud system includes 6 computing nodes (i.e., an example of the N computing nodes to be upgraded), computing node 1 through computing node 6, and each computing node includes 3 tenant VMs (VM1, VM2, and VM3).

VM2 in computing node 1 is assigned to tenant B; VM3 in computing node 2 is assigned to tenant C; VM2 in computing node 3 is assigned to tenant A; VM1 and VM3 in computing node 4 are assigned to tenant B; in computing node 5, VM1 is assigned to tenant D, VM2 to tenant A, and VM3 to tenant E; in computing node 6, VM1 and VM3 are assigned to tenant D, and VM2 to tenant A. The levels of tenant A and tenant C are both 1, and the levels of tenant B, tenant D, and tenant E are all 8. It should be understood that, in the embodiment of the present application, a tenant with a tenant level of 8 is a high-level tenant, and a tenant with a tenant level of 1 is a low-level tenant.

The upgrade policy acquired by the upgrade tool includes a first threshold, a second threshold, a third threshold, and the number of computing nodes to be upgraded, where the first threshold is 3, the second threshold is 2, the third threshold is 10, and the number of computing nodes to be upgraded is 4. The tenant information includes VM allocation information and tenant level information.

According to the first threshold and the VM allocation information, it may be determined that computing node 1 through computing node 6 are all candidate computing nodes (i.e., an example of the Q candidate computing nodes), since no computing node hosts more than 3 VMs of the same tenant.

According to the third threshold and the tenant level information, it may be determined from the above 6 candidate computing nodes that computing node 1 through computing node 6 all remain candidate computing nodes (i.e., an example of the K candidate computing nodes), since the highest tenant level, 8, does not exceed the third threshold.

According to the number of tenant VMs included in each of the 6 candidate computing nodes (i.e., an example of the K candidate computing nodes), the 6 candidate computing nodes are sorted in descending order. The resulting arrangement list may be, for example: computing node 5, computing node 4, computing node 6, computing node 1, computing node 2, computing node 3; or computing node 5, computing node 6, computing node 4, computing node 1, computing node 2, computing node 3; or computing node 5, computing node 6, computing node 4, computing node 2, computing node 1, computing node 3. For brevity, the other possible orders are not listed here.

In the following, the "candidate computing nodes included in the arrangement list are: compute node 5, compute node 4, compute node 6, compute node 1, compute node 2, compute node 3. "is an example. In the arrangement list, 4 candidate calculation nodes are sequentially selected according to the sequence, and at least the following conditions can be met:

Group 1: computing node 5, computing node 4, computing node 6, computing node 1;

Group 2: computing node 5, computing node 4, computing node 6, computing node 2;

Group 3: computing node 5, computing node 4, computing node 6, computing node 3;

Group 4: computing node 5, computing node 6, computing node 1, computing node 2;

Group 5: computing node 5, computing node 6, computing node 1, computing node 3;

Group 6: computing node 5, computing node 1, computing node 2, computing node 3;

Group 7: computing node 4, computing node 6, computing node 1, computing node 2;

Group 8: computing node 4, computing node 6, computing node 1, computing node 3;

Group 9: computing node 4, computing node 1, computing node 2, computing node 3;

Group 10: computing node 6, computing node 1, computing node 2, computing node 3.

First, it is judged that the computing nodes in group 1 are allocated 3 VMs of tenant B (2 on computing node 4 and 1 on computing node 1), which is greater than the second threshold, so group 1 does not meet the upgrade policy. Groups 2 through 5 are then judged in turn: each of these groups contains both computing node 5 and computing node 6, so the computing nodes in each group are allocated 3 VMs of tenant D, which is greater than the second threshold, and these groups likewise do not meet the upgrade policy. Group 6 is judged next: the number of VMs allocated to each of tenant A, tenant B, tenant C, tenant D, and tenant E by the computing nodes in group 6, as well as the tenant levels, all meet the requirements of the upgrade policy. Therefore, the candidate computing nodes included in group 6 may be determined to be the computing nodes upgraded in the first batch (i.e., an example of the M computing nodes to be upgraded).
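This group-by-group judgment can be sketched as follows (the allocation again transcribes fig. 3; the descending sort may break ties differently from the arrangement list above, and itertools.combinations visits a superset of the groups listed, but the first group that satisfies the second threshold is the same group 6):

```python
from collections import Counter
from itertools import combinations

# fig. 3 allocation; the tenant levels (A=C=1, B=D=E=8) already passed
# the third threshold, so only the second threshold is checked here
alloc = {
    "node1": ["B"], "node2": ["C"], "node3": ["A"],
    "node4": ["B", "B"], "node5": ["D", "A", "E"], "node6": ["D", "D", "A"],
}
SECOND_THRESHOLD = 2
M = 4  # number of computing nodes to be upgraded in the first batch

# K candidates sorted by how many tenant VMs each hosts (descending)
order = sorted(alloc, key=lambda n: len(alloc[n]), reverse=True)

def violators(group):
    """Tenants whose VM count across the group exceeds the second threshold."""
    merged = Counter(t for n in group for t in alloc[n])
    return {t: c for t, c in merged.items() if c > SECOND_THRESHOLD}

for group in combinations(order, M):
    bad = violators(group)
    if not bad:
        print("first batch:", group)   # ('node5', 'node1', 'node2', 'node3')
        break
    print("rejected:", group, "->", bad)
```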

Therefore, in the cloud system, the computing nodes upgraded in the first batch include computing node 5, computing node 1, computing node 2, and computing node 3. The computing nodes not in the first batch (i.e., an example of the computing nodes to be upgraded other than the M computing nodes to be upgraded among the N computing nodes to be upgraded) include computing node 4 and computing node 6.

Therefore, the first-batch upgraded computing nodes are upgraded first, and then the non-first-batch upgraded computing nodes are upgraded.

In a conventional method for upgrading computing nodes, any one of groups 1 through 3 might be selected as the computing nodes upgraded first. In that case, if the first batch of computing nodes fails during the upgrade, the services of tenant B (group 1) or tenant D (groups 2 and 3) may be seriously affected, because 3 of that tenant's VMs would be in the same batch.

Likewise, in a conventional method, computing nodes 4, 5, and 6 might be selected as the computing nodes upgraded first. In that case, if the first batch fails during the upgrade, the services of the high-level tenants (tenant B, tenant D, and tenant E) may be seriously affected.

Compared with the conventional method, the method for upgrading computing nodes provided in the present application avoids, as far as possible, placing most or even all of the VMs allocated to the same tenant in the same batch of upgrade operations, and likewise avoids placing the VMs of a high-level tenant in the same batch. Therefore, if a computing node upgrade fails, the impact on tenant services is reduced.

It should be understood that fig. 3 is only an example and does not constitute any limitation to the present application.

The method for upgrading computing nodes provided in the present application can also be applied to scenarios in which the computing nodes in the cloud system host different numbers of virtual machines.

FIG. 4 illustrates a schematic block diagram of a method 200 for computing node upgrade provided by embodiments of the present application.

As shown in fig. 4, the cloud system includes 6 computing nodes (i.e., an example of the N computing nodes to be upgraded): computing node 1 through computing node 6. Computing node 1 and computing node 4 each include 3 VMs, computing node 2 includes 1 VM, computing node 3 includes 5 VMs, computing node 5 includes 6 VMs, and computing node 6 includes 2 VMs. VM1 of computing node 1 is assigned to tenant A, VM2 to tenant B, and VM3 to tenant C; VM1 of computing node 2 is assigned to tenant C; VM1 and VM2 of computing node 3 are assigned to tenant A, VM3 to tenant C, and VM5 to tenant E; VM2 of computing node 4 is assigned to tenant E; VM2 of computing node 5 is assigned to tenant C, VM3 to tenant B, VM5 to tenant D, and VM6 to tenant A; VM2 of computing node 6 is assigned to tenant A. The levels of tenant A, tenant B, and tenant C are all 1, and the levels of tenant D and tenant E are 10. It should be understood that a tenant with a tenant level of 10 is a high-level tenant, and a tenant with a tenant level of 1 is a low-level tenant.
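The same per-tenant filtering applies when nodes host different numbers of VMs, since only the per-tenant counts matter. A hedged sketch on the fig. 4 data follows (the first threshold of 1 is a hypothetical value for illustration; fig. 4 itself does not specify one):

```python
from collections import Counter

# fig. 4 allocation: only VMs assigned to a named tenant are listed
alloc4 = {
    "node1": ["A", "B", "C"],
    "node2": ["C"],
    "node3": ["A", "A", "C", "E"],
    "node4": ["E"],
    "node5": ["C", "B", "D", "A"],
    "node6": ["A"],
}
FIRST_THRESHOLD = 1  # hypothetical value for illustration

candidates = [n for n, tenants in alloc4.items()
              if max(Counter(tenants).values()) <= FIRST_THRESHOLD]
print(candidates)  # node3 is excluded: it hosts 2 VMs of tenant A
```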

The method for upgrading a computing node according to the embodiment of the present application is described in detail above with reference to fig. 2 to 4. The following describes in detail the apparatus and device for upgrading a computing node according to the embodiment of the present application with reference to fig. 5 and 6.

FIG. 5 illustrates a schematic block diagram of an apparatus 500 for computing node upgrade of an embodiment of the present application. The apparatus 500 may correspond to (for example, be configured in, or itself be) the control node described in the system 100 and the method 200, and each module or unit in the apparatus 500 is configured to execute the corresponding function, action, or processing procedure of the control node; a detailed description is omitted here to avoid redundancy.

FIG. 6 illustrates a schematic block diagram of a device 600 for computing node upgrade of an embodiment of the present application. The device 600 includes a processor. Optionally, the device 600 further includes a memory and/or a transceiver, which may be connected to the processor; further optionally, the device 600 includes a bus. The processor, the memory, and the transceiver may be connected by the bus. The memory may be used to store instructions, and the processor is configured to execute the instructions stored in the memory to control the transceiver to receive or send information or signals, so that the device 600 performs the functions, actions, or processes of the control node in the system 100 and the method 200. The device 600 may correspond to (for example, be configured in, or itself be) the control node, and each module or unit in the device 600 is configured to execute the corresponding function, action, or processing procedure of the control node; a detailed description is omitted here to avoid redundancy.

It should be noted that the embodiments of the present application may be applied to, or implemented by, a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The processor may be a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logical blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be directly executed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the foregoing methods in combination with its hardware.

It will be appreciated that the memory in the embodiments of the present application may be volatile memory or nonvolatile memory, or may include both. The nonvolatile memory may be Read-Only Memory (ROM), Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), or flash memory. The volatile memory may be Random Access Memory (RAM), which acts as an external cache. By way of example but not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to include, without being limited to, these and any other suitable types of memory.

It should be understood that the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.

It should be understood that, in the embodiment of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiment of the present application.

Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the present application.

It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.

The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may essentially, or in the part contributing to the prior art, be implemented in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

The above description is only a specific implementation of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the embodiments of the present application, and all the changes or substitutions should be covered by the scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.
