Container service management method and device
1. A method for managing a container service, comprising:
determining a second storage space based on a first storage space corresponding to the container service, wherein the second storage space comprises N physical disks;
creating a mapping relationship between mount point directories of the N physical disks and a file directory of the first storage space; and
mounting the mount point directories of the N physical disks to the file directory corresponding to the first storage space according to the mapping relationship.
2. The method of claim 1, wherein the file directory comprises a plurality of directories storing data blocks,
the method further comprising:
setting the file directory as a logical directory; and
transferring the data blocks in the logical directory to the mount point directories.
3. The method of claim 1, further comprising:
creating the file directory as a logical directory over a plurality of directories in which data blocks are stored; and
transferring the data blocks in the plurality of directories to the mount point directories according to a correspondence between the plurality of directories and the logical directory.
4. The method of claim 2 or 3, further comprising:
acquiring directory-related information of the plurality of directories;
acquiring disk-related information of the N physical disks; and
determining the mapping relationship between the mount point directories and the file directory based on the directory-related information and the disk-related information of the N physical disks.
5. The method of claim 4, wherein the plurality of directories are sequentially numbered same-level directories, and each of the plurality of directories stores one or more data blocks.
6. The method of claim 5, further comprising:
storing the data blocks into the correspondingly numbered mount point directories according to the leading characters of the data blocks' file names.
7. The method of claim 2 or 3, wherein the data block is a layer in a container repository.
8. The method of claim 7, wherein the first storage space comprises a physical disk on a target device that executes a container repository service,
the method further comprising:
executing the container repository service within the second storage space on the target device.
9. The method of claim 8, further comprising:
stopping the container repository service on the target device before determining the second storage space; and
starting the container repository service on the target device after the mount point directories of the N physical disks are mounted to the file directory corresponding to the first storage space.
10. The method of claim 2 or 3, further comprising:
stopping a container repository service corresponding to the first storage space before determining the second storage space; and
starting the container repository service corresponding to the first storage space after the mount point directories of the N physical disks are mounted to the file directory corresponding to the first storage space.
11. A container service management apparatus, comprising:
a storage space determination unit configured to determine a second storage space based on a first storage space corresponding to a container service, wherein the second storage space comprises N physical disks;
a mapping relationship creation unit configured to create a mapping relationship between mount point directories of the N physical disks and a file directory of the first storage space; and
a mounting unit configured to mount the mount point directories of the N physical disks to the file directory corresponding to the first storage space according to the mapping relationship.
12. The apparatus of claim 11, wherein the file directory comprises a plurality of directories storing data blocks,
the apparatus further comprising:
a logical directory creation unit configured to set the file directory as a logical directory; and
a transfer unit configured to transfer the data blocks in the logical directory to the mount point directories.
13. The apparatus of claim 11, further comprising:
a logical directory creation unit configured to create the file directory as a logical directory over a plurality of directories in which the data blocks are stored; and
a transfer unit configured to transfer the data blocks in the plurality of directories to the mount point directories according to a correspondence between the plurality of directories and the logical directory.
14. The apparatus of claim 12 or 13, further comprising:
an information acquisition unit configured to acquire directory-related information of the plurality of directories and disk-related information of the N physical disks; and
a mapping relationship determination unit configured to determine the mapping relationship between the mount point directories and the file directory based on the directory-related information and the disk-related information of the N physical disks.
15. The apparatus of claim 14, wherein the plurality of directories are sequentially numbered same-level directories, and each of the plurality of directories stores one or more data blocks.
16. The apparatus of claim 12 or 13, wherein the data block is a layer in a container repository.
17. The apparatus of claim 16, wherein the first storage space comprises a physical disk on a target device that executes a container repository service,
the apparatus further comprising:
a repository service execution unit configured to execute the container repository service within the second storage space on the target device.
18. The apparatus of claim 17, further comprising:
a service switching unit configured to:
stop the container repository service on the target device before the second storage space is determined; and
start the container repository service on the target device after the transfer unit has transferred the data blocks in the logical directory to the mount point directories.
19. The apparatus of claim 12 or 13, further comprising:
a service switching unit configured to:
stop the container repository service corresponding to the first storage space before the second storage space is determined; and
start the container repository service corresponding to the first storage space after the transfer unit has transferred the data blocks in the logical directory to the mount point directories.
20. A computing device, comprising:
a processor; and
a memory having executable code stored thereon which, when executed by the processor, causes the processor to perform the method of any one of claims 1-10.
21. A non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the method of any one of claims 1-10.
Background
When a single machine is used for storage, especially to store supporting services, the limited capacity of a single physical disk becomes a problem: container services that must be upgraded frequently keep occupying new storage space, and the available space eventually runs out. Although a distributed storage scheme can introduce multiple physical machines as backend storage, when a single machine is already used to provide cloud storage for one or a small number of users, introducing a distributed storage system increases the deployment complexity and maintenance cost of the entire service and brings additional system overhead.
For this reason, a more convenient and feasible container service management scheme is needed.
Disclosure of Invention
In order to solve at least one of the above problems, the present invention provides a container service management scheme that achieves storage capacity expansion transparent to users through mount mapping of physical storage directories, and that is particularly suitable for single-machine capacity expansion of a container image repository service.
According to a first aspect of the present invention, a method for managing a container service is provided, comprising: determining a second storage space based on a first storage space corresponding to the container service, wherein the second storage space comprises N physical disks; creating a mapping relationship between the mount point directories of the N physical disks and the file directory of the first storage space; and mounting the mount point directories of the N physical disks to the file directory corresponding to the first storage space according to the mapping relationship.
According to a second aspect of the present invention, a container service management apparatus is provided, comprising: a storage space determination unit configured to determine a second storage space based on a first storage space corresponding to the container service, wherein the second storage space comprises N physical disks; a mapping relationship creation unit configured to create a mapping relationship between the mount point directories of the N physical disks and the file directory of the first storage space; and a mounting unit configured to mount the mount point directories of the N physical disks to the file directory corresponding to the first storage space according to the mapping relationship.
According to a third aspect of the invention, there is provided a computing device comprising: a processor; and a memory having stored thereon executable code which, when executed by the processor, causes the processor to perform the container service management method of the first aspect above.
According to a fourth aspect of the present invention, there is provided a non-transitory machine-readable storage medium having stored thereon executable code which, when executed by a processor of an electronic device, causes the processor to perform the container service management method of the first aspect above.
With this capacity expansion scheme, storage capacity expansion transparent to the user can be achieved by directly using existing disks for mount mapping. The scheme is particularly suitable for single-machine capacity expansion of a container image repository and can remove the bottleneck of single-machine storage capacity without changing how the service is deployed and used.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in greater detail exemplary embodiments thereof with reference to the attached drawings, in which like reference numerals generally represent like parts throughout.
Fig. 1 shows a basic configuration example of a Docker platform.
Fig. 2 shows a schematic flow diagram of a container service management method according to an embodiment of the invention.
Fig. 3 illustrates an example of bind mounting.
Fig. 4 shows a typical image structure.
Fig. 5 shows an example of the internal layer structure of an image.
Fig. 6 shows a schematic composition diagram of a container service management apparatus according to an embodiment of the present invention.
Fig. 7 is a schematic structural diagram of a computing device that can be used to implement the container service management method according to an embodiment of the present invention.
Fig. 8 illustrates an existing storage scheme for a professional cloud.
Fig. 9 illustrates an optimized storage example for a professional cloud according to the present invention.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
When stand-alone storage is used, particularly to support services, the limited storage space of each physical disk means that services which must be upgraded frequently, occupying new storage space each time, eventually run out of space.
A container image repository service uses a local disk as its backend storage by default, while also supporting a number of distributed storage backends. For example, an existing container image repository service uses a local 2 TB disk as backend storage; if the image difference between similar versions is on the order of several hundred GB, the local disk can support fewer than 10 upgrades (e.g., 2 TB / 300 GB is roughly 6 to 7 upgrades). The number of upgrades such a service can handle is thus severely limited by the single disk of a single machine.
Although a distributed storage scheme can expand capacity by introducing multiple physical machines as backend storage, when a single machine is already used to provide cloud storage for one or a small number of users, introducing a distributed storage system increases the deployment complexity and maintenance cost of the entire service and brings additional system overhead.
The present invention therefore provides a container service management scheme that achieves storage capacity expansion transparent to users by mounting physical storage directories onto a logical directory; it is particularly suitable for container image repository services with a default naming rule for data block storage directories.
Since the container service management scheme of the present invention is particularly suitable for single-machine storage capacity expansion for the Docker Registry service, a brief introduction to Docker and the Docker Registry service is given first for ease of understanding. It should be understood that the principles of the present invention are also applicable to container management capacity expansion scenarios other than the Docker Registry service.
Docker is a container platform for the development, deployment, and operation of applications by development and operations personnel. Lighter-weight than a virtual machine, it uses Linux namespaces (UTS, IPC, PID, network, mount, user) and cgroups (control groups) to isolate and control the resources of application processes at the bottom layer, and is flexible, lightweight, extensible, and scalable. A Docker container instance is loaded from an image that contains all the executables, configuration files, runtime dependency libraries, environment variables, and so on required by the application, and such an image can be loaded on any machine running a Docker Engine. More and more developers and companies pack their products into Docker images for distribution and sale.
The Docker platform consists of three basic parts. Fig. 1 shows a basic configuration example of a Docker platform. As shown, the platform comprises a client (Client), a Docker host (Host), and a Docker image repository (Registry). A user builds a client with the tools provided by Docker (CLI, API, etc.), uploads images, and issues commands to create and start containers. The Docker host downloads images from the Docker Registry and starts containers. The Docker Registry, which may also be referred to as a Docker image repository, stores images and provides image upload and download services.
Within the Docker platform, the service that provides storage, distribution, and management of images is the Docker Registry image repository service. The Docker Registry image repository is a centrally stored, stateless, node-extensible HTTP public service that provides image management, storage, upload and download, AAA authentication, webhook notification, logging, and other functions.
A user builds a Docker image from a Dockerfile and a 'context' via the docker build command, publishes the packaged image to the Docker Registry image repository service via the docker push command, other users acquire the image from the image repository via docker pull, and a Docker Engine starts a container instance from it.
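For illustration (the repository host name and image tag below are hypothetical), this lifecycle corresponds to the following standard commands:
$ docker build -t registry.example.com/myapp:1.0 .   # build an image from a Dockerfile and its context
$ docker push registry.example.com/myapp:1.0         # publish the packaged image to the image repository
$ docker pull registry.example.com/myapp:1.0         # another user acquires the image from the repository
$ docker run -d registry.example.com/myapp:1.0       # a Docker Engine starts a container instance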
The Docker daemon is a background daemon process in the Docker architecture; it receives Docker client requests and then creates and runs the specified instances according to the request type. A container (Container) is the runtime entity of an image (Image); a Docker container instance is loaded from an image. The container service management scheme of the present invention is particularly suitable for managing a Docker Registry service, for example, expanding the storage on which the service runs.
Fig. 2 shows a schematic flow diagram of a method of managing a container service according to one embodiment of the invention. The method is suitable for execution on a physical machine that provides a container service (e.g., the Docker Registry service described above). In various embodiments multiple machines may be involved, but the method is particularly suited to a single physical machine, e.g., a physical computer serving as an individual stand-alone storage engine in a big-data system.
Herein, "stand-alone storage" is in contrast to the concept of "distributed storage" and refers to an interface that provides a storage medium inside a stand-alone machine, e.g., storing an image on (a physical disk of) a physical machine and its subsequent upgrades. For systems with large data volumes, a single-machine storage engine can be a basic component of the system, as data needs to be split and not put on different machines.
In step S210, a second storage space is determined based on the first storage space corresponding to the container service, where the second storage space includes N physical disks, N ≥ 1. In some embodiments, the first storage space and the second storage space may be located on different physical machines, and the N physical disks of the second storage space may even be spread over different physical machines (in which case N ≥ 2). In a preferred embodiment, the first and second storage spaces are located on the same target device, and adding the N physical disks of the second storage space can be regarded as a capacity expansion operation on stand-alone storage. In some embodiments, the first storage space may include an existing disk, and the N disks of the second storage space may be newly added disks, e.g., manually installed ones. When multiple new disks are added, the scheme may also include numbering the new disks in order. It should be understood that "first" and "second" merely distinguish different objects of a kind and imply no sequential or primary-secondary relationship between the two.
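For example (a minimal sketch, assuming the rule from the application example below that disks with a capacity above 1.5 TB count as valid expansion disks; device names will vary), the candidate disks of the second storage space can be enumerated with standard tools:
# list whole disks larger than 1.5 TB (-b: sizes in bytes, -d: whole disks only, -n: no header)
$ lsblk -bdn -o NAME,SIZE,TYPE | awk '$3 == "disk" && $2 > 1.5 * 2^40 { print "/dev/" $1 }'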
In step S220, a mapping relationship between the mount point directories of the N physical disks and the file directory of the first storage space is created. In step S230, the mount point directories of the N physical disks are mounted to the file directory corresponding to the first storage space according to the mapping relationship.
In the actual mount operation, storage capacity expansion transparent to the user can be achieved by means of a logical directory on a logical disk. Here, a logical disk is a storage abstraction that the kernel treats as a linear sequence of fixed-size, randomly accessible blocks; the disk device driver maps these blocks to the underlying physical storage devices.
In one embodiment, before step S210, even before the N physical disks are added, a logical directory may first be created for multiple directories on an existing disk, to facilitate mounting during subsequent expansion. For example, the existing disk may be used as a logical disk, and a logical directory comprising the plurality of directories may be created on it in advance. Alternatively, several physical disks could be combined into one logical disk with LVM or RAID, but that approach requires reformatting the disks and cannot be applied to a system already running an existing service.
In different embodiments, the logical directory may be the same as or different from the physical directory in which the data is stored. Thus, in one embodiment of the invention, the original storage directory may be used directly as the logical directory, and the physical transfer may be performed after mounting. In this case the file directory comprises a plurality of directories serving as the original physical storage directories, each of which may store files, e.g., data blocks (such as the blobs described below). The management method may then include: setting the file directory as a logical directory; and transferring the data blocks in the logical directory to the mount point directories.
In another embodiment of the invention, the original directories in which the data blocks are stored may differ from the logical directory. In this case, the original directories may reside in the first storage space or in some other storage space outside the first and second storage spaces. The management method may then include: creating the file directory as a logical directory over a plurality of directories in which the data blocks are stored; and transferring the data blocks in the plurality of directories to the mount point directories according to the correspondence between the plurality of directories and the logical directory.
Herein, "unloading" refers to transferring and storing at least a portion of the directory to the newly added disk, for example, the at least a portion of the directory may be physically moved or copied directly to the newly added disk. The corresponding content left in the existing disk may then be overwritten in a subsequent operation.
The "mount" or "binding mount" allows a user to mount a directory or file to a specified directory, and then the operation at the mount point only occurs on the mounted directory or file, while the content of the original mount point is hidden and not affected. FIG. 3 illustrates an example of binding mount. As shown, the/home and/test directories originally belonging to the node 1(inode _1) and the node 2(inode _2) respectively can be mounted to the node 1(inode _1) through a mounting command (e.g., mount-bind/home/test shown in the figure), and the hierarchical relationship of the directories can be determined through the mounting directory, such as mounting the/test directory to the lower layer of the/home shown in the figure. After mount, user operations on/home and/test directories will only occur in node 1(inode _1) that includes the mount point.
Storage capacity expansion can thus be achieved conveniently and efficiently through the addition of physical disks and the transfer and mounting of directories.
Specifically, for the transfer, directory-related information of the plurality of directories and disk-related information of the N physical disks may be acquired, and the mapping relationship between the mount point directories and the file directory may then be determined from this information. After the mapping relationship between the mount point directories and the logical directory has been calculated, the mount points may be created on the N disks accordingly. A reasonable distribution of the directories is thereby achieved through calculation of the mapping relationship.
As mentioned above, the first storage space may correspond to an existing disk, either a single local physical disk installed in a stand-alone machine or several local physical disks. The existing disk preferably contains the plurality of directories storing the data blocks, and these are preferably same-level directories, e.g., directories under the same parent directory. In other embodiments the directories may be at different levels, or even under different parent directories or in different hierarchies, as long as they are mutually exclusive. Preferably, the plurality of directories are sequentially numbered same-level directories, each storing one or more data blocks.
When the N physical disks (e.g., newly added disks) comprise multiple disks, the mapping relationship may be calculated based at least on the number and numbering of the new disks. For example, if there is one existing disk and one new disk of the same capacity is added, half of the plurality of directories may be transferred to the new disk. If there is one existing disk and three new disks of the same capacity are added, three quarters of the directories may be transferred to the new disks, so that each disk ends up holding one quarter of the original directories.
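As a minimal sketch (the 256 two-hex-digit directories follow the registry blob naming rule described below; mount point paths of the form /mnt/diskX are illustrative assumptions), a balanced mapping can be computed by assigning directory indices to disks round-robin:
# assign each of the 256 blob directories to one of N disks, round-robin
N=4                                # total number of physical disks after expansion
for idx in $(seq 0 255); do
    dir=$(printf '%02x' "$idx")    # directory name: 00, 01, ..., fe, ff
    disk=$(( idx % N + 1 ))        # disk i receives the directories with index i-1 (mod N)
    echo "${dir} -> /mnt/disk${disk}/${dir}"
done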
When the container service is implemented as a Docker Registry service, the data block may be a layer in a container repository. The method may then also involve stopping and starting the service during storage capacity expansion: for example, the Docker Registry service on a physical machine (e.g., a stand-alone machine) may be stopped before a new disk is added, and started again on that machine after mounting to the logical directory is complete.
Docker is a container management framework responsible for creating and managing container instances; an instance is loaded from a Docker image, a compressed file containing everything an application requires. One image may depend on another image, with single inheritance. Fig. 4 shows a typical image structure. The initial image is called a base image; a new image can be made by inheriting from it, and can itself be inherited by other images, in which case it is called a parent image and an image inheriting from it a child image. As shown, alpine is a base image providing a lightweight, secure Linux operating environment. Basic App1 and basic App2 are both based on, and share, the base image alpine; each can be distributed as a separate image, and they are also the parent images of advanced App1/2/3. Specifically, advanced App1 is developed on basic App1, while advanced App2 and advanced App3 are developed on basic App2. When advanced App1/2/3 is downloaded, all dependent parent and base images are detected and downloaded; a registry storage node typically stores only one instance of a parent image and one base image, which are shared by the other images, thereby saving storage space efficiently.
An image is divided into multiple layers, each containing part of the files of the whole image. When a Docker container instance is loaded from an image, the instance sees the merged set of files from all layers and need not be concerned with their hierarchical relationships. All layers in an image are read-only; when the current container instance performs a write, a copy-on-write operation copies the old file from the old layer, generates a new file, and produces a new writable layer. Copy-on-write saves space and maximizes efficiency, since layers can be fully reused.
Fig. 5 shows an example of the internal layer structure of an image.
The internal files of advanced App1 are divided into four layers of storage; each layer is a compressed file identified by the sha256 value of the file, and as shown in the figure, the four layers are named by their sha256 values. The files of all layers constitute the content of the final image; after a container is started from the image, the container instance sees the file contents of all layers. One of the layers is stored, for example, as follows:
$ file /var/lib/registry/docker/registry/v2/blobs/sha256/40/4001a1209541c37465e524db0b9bb20744ceb319e8303ebec3259fc8317e2dec/data
data: gzip compressed data
$ sha256sum /var/lib/registry/docker/registry/v2/blobs/sha256/40/4001a1209541c37465e524db0b9bb20744ceb319e8303ebec3259fc8317e2dec/data
4001a1209541c37465e524db0b9bb20744ceb319e8303ebec3259fc8317e2dec
In the present invention, each of the plurality of directories stores one or more data blocks. A data block may be a layer in a Docker Registry as described above. In the application scenario of the Docker Registry service, the plurality of directories may be the registry blob directories. Here, a blob (binary large object) is a concept from data management that refers to binary data stored as a single, individual object (i.e., a data block). According to the registry blob storage directory naming rule, all 16 × 16 directory combinations can be enumerated, defined as blobDirs = {00, 01, 02 … 10, 11, 12 … f0, f1, f2 … fd, fe, ff}; blobNum is the number of blobDirs combinations, i.e., 16 × 16 = 256. The method may then further comprise storing each data block into the correspondingly numbered mount point directory according to the leading characters of the data block's file name. The four layers of the container image in the figure each have a file name identified by its sha256 value. As shown in Fig. 5, the four layers (marked with dashed boxes) belong to the data block directories blobDir = [40], [63], [b2], [97]; any layer whose name begins with 40, 63, b2, or 97, whether generated before or afterwards, should be stored in the corresponding directory.
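As a minimal sketch (the shell variable name is illustrative), the blobDirs set can be enumerated and a layer routed by the leading two hex characters of its sha256 file name:
# enumerate blobDirs = {00, 01, ..., fe, ff} (16 x 16 = 256 combinations)
$ printf '%02x\n' $(seq 0 255)
# route a layer by the first two characters of its sha256 name
$ sha=4001a1209541c37465e524db0b9bb20744ceb319e8303ebec3259fc8317e2dec
$ echo "layer ${sha} is stored under blobDir [${sha:0:2}]"   # prints: ... blobDir [40]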
As described above, when the container service management method of the present invention is implemented on a single machine, the first and second storage spaces may be located in the same target device. In this case, the first storage space comprises a physical disk on the target device that executes the container repository service, and the method further comprises executing the container repository service within the second storage space on the target device. Further, the container repository service on the target device may be stopped before the second storage space is determined, and started again after the mount point directories of the N physical disks have been mounted to the file directory corresponding to the first storage space.
In a broader application scenario, the first and second storage spaces need not be located in the same target device. In this case, the method may further include: stopping the container repository service corresponding to the first storage space before determining the second storage space; and starting the container repository service corresponding to the first storage space after the mount point directories of the N physical disks have been mounted to the file directory corresponding to the first storage space.
The present invention may also be implemented as a container service management system comprising a master node and a plurality of stand-alone nodes. Each stand-alone node is connected (directly or indirectly) to the master node, which sends instructions (directly, or indirectly via forwarding nodes) to the stand-alone nodes to cause each of them to perform the container service management method described above.
Fig. 6 shows a schematic composition diagram of a container service management apparatus according to an embodiment of the present invention. As shown, the container service management apparatus 600 may include a storage space determination unit 610, a mapping relationship creation unit 620, and a mounting unit 630. It should be understood that the apparatus 600 may be implemented on a physical stand-alone machine, such as each of the physical stand-alone storage systems described above, or on a device acting as a control node when multiple physical devices are involved.
The storage space determination unit 610 is configured to determine a second storage space based on the first storage space corresponding to the container service, where the second storage space includes N physical disks. The mapping relationship creation unit 620 is configured to create a mapping relationship between the mount point directories of the N physical disks and the file directory of the first storage space. The mounting unit 630 is configured to mount the mount point directories of the N physical disks to the file directory corresponding to the first storage space according to the mapping relationship.
In one embodiment, the file directory is the original data block storage directory and may itself serve as the logical directory. In this case, the file directory may comprise a plurality of directories storing the data blocks, and the apparatus 600 may further comprise: a logical directory creation unit configured to set the file directory as a logical directory; and a transfer unit configured to transfer the data blocks in the logical directory to the mount point directories.
Alternatively, the file directory is a directory other than the original data block storage directories and serves as the logical directory. In this case, the apparatus 600 may further comprise: a logical directory creation unit configured to create the file directory as a logical directory over a plurality of directories in which the data blocks are stored; and a transfer unit configured to transfer the data blocks in the plurality of directories to the mount point directories according to the correspondence between the plurality of directories and the logical directory.
Further, the apparatus 600 may comprise: an information acquisition unit configured to acquire directory-related information of the plurality of directories and disk-related information of the N physical disks; and a mapping relationship determination unit configured to determine the mapping relationship between the mount point directories and the file directory based on the directory-related information and the disk-related information of the N physical disks.
Preferably, the plurality of directories may be sequentially numbered same-level directories, each storing one or more data blocks. A data block may be a layer in a container repository.
In a stand-alone implementation, the first storage space comprises a physical disk on a target device that executes the container repository service, and the apparatus further comprises a repository service execution unit configured to execute the container repository service within the second storage space on the target device. Preferably, the apparatus 600 may further comprise a service switching unit configured to: stop the container repository service on the target device before the second storage space is determined; and start the container repository service on the target device after the transfer unit has finished transferring the data blocks in the logical directory to the mount point directories.
In a non-stand-alone implementation, the apparatus 600 may further comprise a service switching unit configured to: stop the container repository service corresponding to the first storage space before the second storage space is determined; and start the container repository service corresponding to the first storage space after the transfer unit has finished transferring the data blocks in the logical directory to the mount point directories.
Fig. 7 is a schematic structural diagram of a computing device that can be used to implement the container service management method according to an embodiment of the present invention.
Referring to fig. 7, computing device 700 includes memory 710 and processor 720.
Processor 720 may be a multi-core processor or may include multiple processors. In some embodiments, processor 720 may include a general-purpose host processor and one or more special-purpose coprocessors, such as a graphics processing unit (GPU) or a digital signal processor (DSP). In some embodiments, processor 720 may be implemented using custom circuitry, such as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA).
The memory 710 may include various types of storage units, such as system memory, read-only memory (ROM), and permanent storage. The ROM may store static data or instructions required by processor 720 or other modules of the computer. The permanent storage device may be a readable and writable storage device, and may be non-volatile, so that stored instructions and data are not lost even after the computer is powered off. In some embodiments, the permanent storage device is a mass storage device (e.g., a magnetic or optical disk, or flash memory); in other embodiments, it may be a removable storage device (e.g., a floppy disk or an optical drive). The system memory may be a readable and writable volatile memory device, such as dynamic random-access memory, and may store instructions and data that some or all of the processors require at runtime. In addition, memory 710 may comprise any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory) and magnetic and/or optical disks. In some embodiments, memory 710 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM, dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density disc, a flash memory card (e.g., SD card, mini SD card, Micro-SD card, etc.), or a magnetic floppy disk. Computer-readable storage media do not contain carrier waves or transitory electronic signals transmitted wirelessly or by wire.
The memory 710 has stored thereon executable code that, when processed by the processor 720, causes the processor 720 to perform the container service management method described above.
[Application Example]
Fig. 8 illustrates an existing storage scheme for a professional cloud. As shown, all 256 blobDirs = {00, 01, 02 … 10, 11, 12 … f0, f1, f2 … fd, fe, ff} are stored on one existing physical disk of a stand-alone storage device.
Fig. 9 illustrates an optimized storage example for a professional cloud according to the present invention. The specific operations are as follows:
1. First, stop the Docker Registry service;
2. Then, acquire the number N of physical disks according to a specified rule, defining the disks as {disk1, disk2, …, diski, …, diskN}, where diski denotes the i-th disk. For example, valid disks may be selected by the rule that the disk capacity is larger than 1.5 TB. In the example of Fig. 9, N = 8: for instance, 7 new disks may be added to the original storage of Fig. 8, giving 8 local disks, or 8 new disks may be added directly;
3. Next, all 16 × 16 directory combinations can be calculated according to the registry blob storage directory naming rule, defined as blobDirs = {00, 01, 02 … 10, 11, 12 … f0, f1, f2 … fd, fe, ff}; blobNum is the number of blobDirs combinations, i.e., 16 × 16 = 256;
4. Use the Docker Registry disk of the existing storage scheme of Fig. 8 as a logical disk, and pre-create all blobDirs directories on the logical disk. The deployment and use of the logical disk are thus completely transparent to the Docker Registry service, and no additional modification is required. The only difference for the Docker Registry service is that container images, previously stored directly on a real physical disk, are now stored on the logical disk and, through it, on the other physical disks;
5. Then, calculate the mapping relationship between the mount point directories on the i-th disk and the logical disk, using the formula: the i-th disk receives the blobDirs entries with indices {(i-1) + N·0, (i-1) + N·1, (i-1) + N·2, (i-1) + N·3, …, (i-1) + N·(blobNum/N - 1)}. Assuming N = 8 as in Fig. 9, the mapping relationship shown in Fig. 9 is obtained; through this mapping, the disk load is distributed evenly over the multiple disks (see the sketch after this list);
6. Create the corresponding mount point directories on the corresponding physical disks according to the mapping relationship calculated in step 5;
7. Move the contents of the blobDirs directories on the originally mounted disk to the corresponding directories on the new physical disks;
8. Mount the mount point directories of the physical disks to the corresponding directories of the logical disk using the mount --bind system mounting tool; the storage structure after mounting is as shown in Fig. 9;
9. Finally, start the Docker Registry service, completing a storage capacity expansion that is transparent to the user.
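The procedure above can be condensed into the following sketch. It assumes the blob root path shown in the layer example earlier, mount points of the form /mnt/diskX, and a systemd-managed service named docker-registry; all of these names are illustrative assumptions rather than fixed by the scheme, and the script must run as root.
#!/bin/bash
# Sketch of expansion steps 1-9; paths and the service name are assumptions.
set -e
BLOB_ROOT=/var/lib/registry/docker/registry/v2/blobs/sha256   # logical directory (step 4)
N=8                                                           # number of physical disks (step 2)

systemctl stop docker-registry                     # step 1: stop the Docker Registry service

for idx in $(seq 0 255); do                        # steps 3-8, one pass per blobDir
    dir=$(printf '%02x' "$idx")
    disk=$(( idx % N + 1 ))                        # step 5: index i-1 (mod N) maps to disk i
    target="/mnt/disk${disk}/${dir}"
    mkdir -p "$target" "${BLOB_ROOT}/${dir}"       # step 6 (also pre-creates the blobDir, step 4)
    cp -a "${BLOB_ROOT}/${dir}/." "$target/"       # step 7: transfer existing contents
    mount --bind "$target" "${BLOB_ROOT}/${dir}"   # step 8: bind mount onto the logical directory
done

systemctl start docker-registry                    # step 9: restart the service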
The container service management scheme according to the present invention has been described in detail above with reference to the accompanying drawings. With this scheme, storage capacity expansion can be achieved directly through the simple addition of new disks, with no or minimal changes to the deployment and use of existing applications, for example capacity expansion for the Docker Registry service. A service that could originally handle only 2-7 upgrades can thus handle 30-77 upgrades when the number of disks is increased to N = 8.
Furthermore, the method according to the invention may also be implemented as a computer program or computer program product comprising computer program code instructions for carrying out the steps defined in the above-described method of the invention.
Alternatively, the invention may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) which, when executed by a processor of an electronic device (or computing device, server, etc.), causes the processor to perform the steps of the above-described method according to the invention.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.