RAID reconstruction method and equipment


1. A computer-implemented method, comprising:

operating a Redundant Array of Independent Disks (RAID) group in a pool of disks, the RAID group being formed from a plurality of disks in the pool of disks;

detecting a failure of a disk in the RAID group; and

reconstructing data of the failed disk onto a plurality of target disks of the pool of disks, the reconstructing comprising operating a plurality of I/O generators in parallel, the I/O generators being associated with respective sets of the target disks and reconstructing respective data of the failed disk,

wherein reconstructing the data of the failed disk comprises: each of the I/O generators reconstructing a respective unique set of the data of the failed disk onto the respective set of target disks.

2. The computer-implemented method of claim 1, wherein the respective sets of the target disks each comprise a single target disk, such that each of the I/O generators is dedicated to one and only one target disk.

3. The computer-implemented method of claim 1, further comprising: operating each I/O generator of the plurality of I/O generators in a respective thread.

4. The computer-implemented method of claim 1, wherein operating the I/O generators in parallel comprises: operating N I/O generators in parallel, and wherein reconstructing the data of the failed disk onto the plurality of target disks comprises: each I/O generator reconstructing substantially 1/N of the data on the failed disk.

5. The computer-implemented method of claim 1, wherein each I/O generator reconstructs the data of the failed disk by: (i) reading data at corresponding locations of the other disks in the RAID group, (ii) calculating data of the failed disk based on the data read from the corresponding locations of the other disks, and (iii) writing the calculated data to a target disk.

6. A computer-implemented apparatus, comprising:

at least one processing unit; and

at least one memory coupled to the at least one processing unit and storing instructions thereon that, when executed by the at least one processing unit, perform acts comprising:

operating a Redundant Array of Independent Disks (RAID) group in a pool of disks, the RAID group being formed from a plurality of disks in the pool of disks;

detecting a failure of a disk in the RAID group; and

reconstructing data of the failed disk onto a plurality of target disks of the pool of disks, the reconstructing comprising operating a plurality of I/O generators in parallel, the I/O generators being associated with respective sets of the target disks and reconstructing respective data of the failed disk,

wherein reconstructing the data of the failed disk comprises: each of the I/O generators reconstructing a respective unique set of the data of the failed disk onto the respective set of target disks.

7. The apparatus of claim 6, wherein the respective sets of the target disks each comprise a single target disk, such that each of the I/O generators is dedicated to one and only one target disk.

8. The apparatus of claim 6, wherein the acts further comprise: operating each I/O generator of the plurality of I/O generators in a respective thread.

9. The apparatus of claim 6, wherein operating the I/O generators in parallel comprises: operating N I/O generators in parallel, and wherein reconstructing the data of the failed disk onto the plurality of target disks comprises: each I/O generator reconstructing substantially 1/N of the data on the failed disk.

10. A computer program product comprising a non-transitory computer-readable medium encoded with computer-executable code, the code configured to enable performance of a method comprising:

operating a Redundant Array of Independent Disks (RAID) group in a pool of disks, the RAID group being formed from a plurality of disks in the pool of disks;

detecting a failure of a disk in the RAID group; and

reconstructing data of the failed disk onto a plurality of target disks of the pool of disks, the reconstructing comprising operating a plurality of I/O generators associated with respective sets of the target disks and reconstructing respective data of the failed disk,

wherein reconstructing the data of the failed disk comprises: each of the I/O generators reconstructing a respective unique set of the data of the failed disk onto the respective set of target disks, and

wherein the respective sets of the target disks each comprise a single target disk, such that each of the I/O generators is dedicated to one and only one target disk.

11. The computer program product of claim 10, wherein the method further comprises: operating the plurality of I/O generators in parallel.

12. The computer program product of claim 10, wherein the method further comprises: operating each I/O generator of the plurality of I/O generators in a respective thread.

13. The computer program product of claim 11, wherein operating the I/O generators in parallel comprises: operating N I/O generators in parallel, and wherein reconstructing the data of the failed disk onto the plurality of target disks comprises: each I/O generator reconstructing substantially 1/N of the data on the failed disk.

14. The computer program product of claim 11, wherein each I/O generator reconstructs the data of the failed disk by: (i) reading data at corresponding locations of the other disks in the RAID group, (ii) calculating data of the failed disk based on the data read from the corresponding locations of the other disks, and (iii) writing the calculated data to a target disk.

Background

Redundant Array of Independent Disks (RAID) is a data storage virtualization technique that combines multiple physical disk drives into a single logical unit for purposes of data redundancy and/or performance improvement. Taking RAID5 as an example, a RAID5 group may be composed of block-level stripes with distributed parity information. When a single disk fails, subsequent reads can be computed from the distributed parity information so that no data is lost. At the same time, a spare disk is selected to replace the failed disk, and all data on the failed disk is reconstructed and written onto the spare disk. In conventional RAID, a RAID group (RG) consumes all of the disk space within the group, which adversely affects the effectiveness and cost of failure recovery.
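By way of illustration and not limitation, the following minimal Python sketch shows the parity property underlying this recovery: the parity block is the bitwise XOR of the data blocks, so any single lost block equals the XOR of the surviving blocks. The block contents are arbitrary placeholders.

    from functools import reduce

    def xor_blocks(blocks):
        """Bitwise-XOR a list of equal-length byte blocks together."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    data = [b"D00!", b"D01!", b"D02!", b"D03!"]   # 4 data blocks (4D)
    parity = xor_blocks(data)                     # 1 parity block (1P)

    lost = data[2]                                # suppose the disk holding D02 fails
    survivors = data[:2] + data[3:] + [parity]    # the 4 surviving blocks
    recovered = xor_blocks(survivors)             # XOR of survivors restores the lost block
    assert recovered == lost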

Disclosure of Invention

Embodiments of the present disclosure are directed to a scheme for improving RAID rebuild performance.

In one aspect of the disclosure, a computer-implemented method is provided. The method includes: determining a spare RAID group having spare capacity from a plurality of disks included in at least one RAID group in a storage pool; establishing a spare logical unit from the spare RAID group; and in response to one of the at least one RAID group in the storage pool being in a degraded state, rebuilding a failed disk in the degraded RAID group with the spare logical unit.

In some embodiments, determining a spare RAID group having spare capacity comprises: determining an allocation of the spare capacity among the plurality of disks based on a correspondence between the number of disks in the storage pool and a number of spare disks.

In some embodiments, establishing a spare logical unit from the spare RAID group comprises: determining the number of spare logical units established from the spare RAID group according to the size of the spare capacity.

In some embodiments, rebuilding the failed disk in the degraded RAID group using the spare logical unit comprises: detecting whether the spare logical unit is available; in response to the spare logical unit being available, assigning the spare logical unit to the degraded RAID group; and in response to the degraded RAID group initiating a rebuild action, writing the data of the failed disk to the spare logical unit.

In some embodiments, the method further comprises: releasing the spare logical unit after the failed disk is replaced.

In some embodiments, releasing the spare logical unit comprises: in response to replacement of the failed disk, writing the data of the failed disk stored in the spare logical unit back to the replacement disk; removing the spare logical unit from the degraded RAID group; and adding the replacement disk to the degraded RAID group.

In a second aspect of the disclosure, a computer-implemented apparatus is provided. The apparatus comprises at least one processing unit, and at least one memory coupled to the at least one processing unit and storing instructions thereon that, when executed by the at least one processing unit, perform acts comprising: determining a spare Redundant Array of Independent Disks (RAID) group having spare capacity from a plurality of disks included in at least one RAID group in a storage pool; establishing a spare logical unit from the spare RAID group; and in response to one of the at least one RAID group in the storage pool being in a degraded state, rebuilding a failed disk in the degraded RAID group with the spare logical unit.

In a third aspect of the disclosure, there is provided a computer program product tangibly stored on a non-transitory computer-readable medium and comprising computer-readable program instructions that, when executed on a device, cause the device to perform the steps of the method according to the first aspect above.

Compared with the prior art, embodiments of the present disclosure can significantly improve the rebuild performance of conventional RAID. In addition, since the dedicated spare disk in the storage pool is eliminated, all disks in the storage pool can be used for user IO, which further improves disk efficiency. Using a RAID-X0 type RAID group to manage the distributed spare disk space enables write IOs to be distributed to all disks in the storage pool during a rebuild. The rebuild method of embodiments of the present disclosure can be implemented on the basis of conventional RAID techniques.

This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the disclosure, nor is it intended to be used to limit the scope of the disclosure.

Drawings

The above and other objects, features and advantages of the embodiments of the present disclosure will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:

FIG. 1 illustrates a schematic diagram of a reconstruction of a conventional RAID;

FIG. 2 is a schematic diagram illustrating the internal behavior of a conventional RAID for reconstruction;

FIG. 3 illustrates a schematic diagram of a storage pool having multiple RAID groups and a dedicated spare disk, according to an embodiment of the present disclosure;

FIG. 4 shows a flow diagram of a rebuild method 400 for a RAID according to an embodiment of the present disclosure;

FIG. 5 illustrates a schematic diagram of a storage pool having multiple RAID groups and distributed spare disks, according to an embodiment of the present disclosure;

FIG. 6 illustrates a schematic diagram of rebuilding a RAID using distributed spare disks, according to an embodiment of the present disclosure;

FIG. 7 shows a schematic diagram of a simulation of a reconstruction process for a conventional RAID with an IO generator, in accordance with an embodiment of the present disclosure;

FIG. 8 shows a schematic diagram of a simulation of a distributed reconstruction process with IO generators, in accordance with an embodiment of the present disclosure;

FIG. 9 shows a schematic block diagram of a device 900 that may be used to implement embodiments of the present disclosure; and

FIG. 10 illustrates an exemplary block diagram of an apparatus 1000 for RAID rebuild according to embodiments of the present disclosure.

Like or corresponding reference characters designate like or corresponding parts throughout the several views.

Detailed Description

Hereinafter, various exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. It should be noted that the drawings and description relate to exemplary embodiments only. From the following description, alternative embodiments of the structures and methods disclosed herein will be readily contemplated, and such embodiments may be employed without departing from the principles of the present disclosure as claimed.

It should be understood that these exemplary embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the present disclosure, and are not intended to limit the scope of the present disclosure in any way.

The terms "including," comprising, "and the like, as used herein, are to be construed as open-ended terms, i.e.," including/including but not limited to. The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment". Relevant definitions for other terms will be given in the following description.

Hereinafter, a scheme for improving the rebuild performance of a RAID according to embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. For ease of description, embodiments of the present disclosure are described below using RAID5 with 4 data blocks and 1 parity block (4D+1P) as an example. However, it should be understood that the principles and methods of embodiments of the present disclosure may be applied to a RAID of any level or layout and are not limited to the examples listed below; the scope of the present disclosure is not limited in this respect.

As described above, in conventional RAID, a RAID group (RG) consumes all of the disk space within the group. The inventors have found that this conventional scheme causes the following problems. First, if a single disk fails, the write input/output (IO) for the rebuild targets the only spare disk, so the bandwidth of the spare disk becomes the bottleneck of rebuild performance; spindle disks have different read/write IO capabilities, with different bandwidths for read IO and write IO. Second, user IO to the same RG is severely affected, and the response time of that user IO increases significantly, because the IO performance of the RG is limited by the slowest disk in the RG; during a rebuild, the disk being rebuilt limits user IO performance. Third, the RAID group requires special user IO handling during the rebuild, and data loss may result if another disk fails during the rebuild; this special user IO handling may also significantly degrade user IO performance. Moreover, as disk capacity increases year by year, the above problems are magnified, posing a greater risk of data loss to the user.

FIG. 1 shows a schematic diagram of the reconstruction of a conventional RAID. A conventional RAID is made up of block-level stripes with distributed parity information, and the parity information may be distributed across multiple disks. FIG. 1 shows an RG 110, which is a RAID5 with 4 data blocks and 1 parity block (4D+1P). As shown in (1A) of FIG. 1, the RG 110 uses 5 disks: disk 120-0, disk 120-1, disk 120-2, disk 120-3, and disk 120-4. In addition, the RG 110 uses disk 120-5 as its spare disk. Each stripe of the RG 110 may include 5 blocks consisting of 4 data blocks (i.e., the blocks storing D00, D01 ... DN3) and 1 parity block (i.e., the blocks storing P0, P1 ... PN). (1B) of FIG. 1 shows one disk (e.g., disk 120-2) in the RG 110 failing. At this time, as shown in (1C) of FIG. 1, the spare disk (i.e., disk 120-5) replaces the failed disk (i.e., disk 120-2); and as shown in (1D) of FIG. 1, all data on the failed disk (i.e., disk 120-2) is reconstructed and written onto the spare disk (i.e., disk 120-5).

Further, FIG. 2 shows a schematic diagram of the internal behavior of the RG 110 for the reconstruction shown in FIG. 1. The internal behavior for reconstruction includes three steps: pre-read, exclusive-OR (XOR), and write-back. As described with reference to FIG. 1, disk 120-2 in the RG 110 fails, and all data on disk 120-2 is to be reconstructed and written to disk 120-5 (i.e., the spare disk). For example, as shown in (2A) of FIG. 2, the RG 110 is about to reconstruct the block following P4. The first step is the pre-read: as shown in (2B) of FIG. 2, the RG 110 reads the data D50, D51, D52, and D53 of the same stripe from the 4 non-failed disks (i.e., disks 120-0, 120-1, 120-3, and 120-4). The next step is the exclusive-OR: as shown in (2C) of FIG. 2, the RG 110 XORs the read data to obtain the data stored in the corresponding block of the failed disk (e.g., D50 XOR D51 XOR D52 XOR D53 = P5). The last step is the write-back: as shown in (2D) of FIG. 2, the RG 110 writes the result of the XOR operation (e.g., P5) to the corresponding block of the spare disk to complete the reconstruction of that block.
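By way of illustration and not limitation, the three steps may be sketched in Python as follows, modeling disks as in-memory lists of byte blocks; the function names are illustrative.

    def rebuild_stripe(surviving, spare, stripe):
        # Step 1 (pre-read): read this stripe's blocks from the non-failed disks.
        blocks = [disk[stripe] for disk in surviving]
        # Step 2 (XOR): recompute the failed disk's block, e.g.,
        # D50 XOR D51 XOR D52 XOR D53 = P5.
        result = bytes(len(blocks[0]))
        for blk in blocks:
            result = bytes(a ^ b for a, b in zip(result, blk))
        # Step 3 (write-back): write the recomputed block to the spare disk.
        spare[stripe] = result

    def rebuild_disk(surviving, spare, num_stripes):
        for stripe in range(num_stripes):
            rebuild_stripe(surviving, spare, stripe)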

In one storage pool, it is common to concatenate RAID groups having the same RAID type and width (the number of disks in a RAID group) and to maintain a mapping for both the thin logical unit (Thin LUN) and thick (non-thin) logical unit address spaces, after which LUNs can be created and allocated from the pool as necessary. Thus, in general, RAID groups with the same disk technology need to have the same type and width in the same storage pool, and the user needs to configure a spare disk for each storage pool; alternatively, the spare disk can be shared within the storage pool. In either case, a spare disk is needed in the array. FIG. 3 illustrates a schematic diagram of a storage pool having multiple RAID groups and a dedicated spare disk, according to an embodiment of the present disclosure. As shown in FIG. 3, the storage pool includes multiple RAID groups (i.e., RG 310-1, RG 310-2, ... RG 310-N) and a dedicated spare disk. The spare disk is used to rebuild the RAID in a degraded RAID group when one of the RAID groups in the storage pool is in a degraded state. However, if no disk in the storage pool ever fails, such a dedicated spare disk wastes customer resources.

Therefore, it is desirable to implement a scheme that can effectively improve the rebuild performance of conventional RAIDs. FIG. 4 shows a flow diagram of a rebuild method 400 for a RAID according to an embodiment of the present disclosure.

At 410, a spare RAID group having spare capacity is determined from a plurality of disks included in at least one RAID group of a storage pool. In some embodiments, the allocation of the spare capacity among the plurality of disks may be determined based on a correspondence between the number of disks in the storage pool and a number of spare disks. Method 400 is described in detail below with reference to FIG. 5, which illustrates a schematic diagram of a storage pool having multiple RAID groups and distributed spare disks, according to an embodiment of the present disclosure. FIG. 5 illustrates a storage pool including multiple RAID groups, i.e., RG 510-1, RG 510-2, ... RG 510-N. Taking RG 510-1 as an example, it is a 4D+1P conventional RAID5 comprising disks 520-0, 520-1, 520-2, 520-3, and 520-4. It can be seen that, compared to the embodiment shown in FIG. 3, the dedicated spare disk is eliminated in the embodiment shown in FIG. 5. Instead, the spare capacity is distributed across the disks in the RAID groups of the storage pool. The size of the spare capacity carved out of each disk may depend on the correspondence between the number of disks and the number of spare disks, i.e., a predetermined spare disk ratio. For example, 30 disks may correspond to 1 spare disk: if the storage pool includes fewer than 30 disks, spare capacity equal to 1 disk is partitioned from all the disks in the storage pool; if the storage pool contains more than 30 but fewer than 60 disks, spare capacity equal to 2 disks is partitioned. The partitioned spare capacity forms a spare RAID group, i.e., the spare RG in FIG. 5.
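By way of illustration and not limitation, the spare-capacity rule may be sketched as follows, assuming the example ratio of 1 spare disk per 30 disks; the function names and round-up behavior are illustrative assumptions.

    import math

    def spare_disk_count(num_disks, ratio=30):
        """Number of disks' worth of spare capacity to carve out of the pool."""
        return math.ceil(num_disks / ratio)

    def spare_slice_per_disk(num_disks, disk_capacity, ratio=30):
        """Capacity each disk contributes when the spare space is spread evenly."""
        return spare_disk_count(num_disks, ratio) * disk_capacity / num_disks

    assert spare_disk_count(25) == 1   # fewer than 30 disks -> 1 disk of spare capacity
    assert spare_disk_count(45) == 2   # more than 30 but fewer than 60 -> 2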

It should be noted that the RAID groups in the storage pool may be provided in the form of RAID-X, such as RAID5 (width Y) or RAID6 (width Z), which is typically determined by the user during the initialization phase of the storage pool. The spare RAID group established from the spare disk slices partitioned from these RAID-X groups may be a RAID group in the form of RAID-X0 (width Y/Z); that is, the spare RAID group can support all conventional RAID types. Moreover, the spare RG (RAID-X0) can evenly distribute IO across all disks in the storage pool, typically at a granularity of a few kilobytes.
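By way of illustration and not limitation, a RAID-X0 layout stripes at a small chunk granularity across several RAID-X sub-groups, which is what spreads IO over all disks; the chunk size and round-robin mapping in the following sketch are illustrative assumptions rather than the exact layout.

    CHUNK_SIZE = 4 * 1024  # assume a striping granularity of a few kilobytes

    def route_chunk(offset, num_subgroups):
        """Map a byte offset to (sub-group index, offset within that sub-group)."""
        chunk = offset // CHUNK_SIZE
        subgroup = chunk % num_subgroups            # round-robin over sub-groups
        local_chunk = chunk // num_subgroups
        return subgroup, local_chunk * CHUNK_SIZE + offset % CHUNK_SIZE

    # Consecutive chunks land on different sub-groups, so IO fans out:
    print([route_chunk(i * CHUNK_SIZE, 4)[0] for i in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]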

In addition, although in the embodiment shown in FIG. 5 some capacity is partitioned as spare space from every disk of every RAID group in the storage pool, the spare capacity may instead be partitioned from only some of the RAID groups in the storage pool, for example from only one RAID group. The embodiment in FIG. 5 is intended to illustrate the layout of the distributed spare disks only by way of example and not as a limitation on that layout.

At 420, a spare logical unit is established from the spare RAID group. In some embodiments, the number of spare logical units established from the spare RAID group may be determined based on the size of the spare capacity. For example, if the partitioned spare capacity equals the capacity of 1 disk, one spare logical unit (e.g., LUN 0) is established from the spare RG; if the partitioned spare capacity equals the capacity of 2 disks, two spare logical units (e.g., LUN 0 and LUN 1) are established from the spare RG; and so on. As shown in FIG. 5, spare logical units LUN 0, LUN 1 ... LUN n are established from the spare RG. These spare logical units provide a block device access interface that is very similar in nature to a physical disk; all that is required is to add a very thin shim on top of these spare logical units to present them as disks.
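By way of illustration and not limitation, the following sketch shows both ideas: the number of spare LUNs follows from the spare capacity, and a thin shim presents a spare LUN through a disk-like block interface; the class and method names are illustrative assumptions.

    def spare_lun_count(spare_capacity, disk_capacity):
        """One spare LUN per disk's worth of spare capacity (LUN 0, LUN 1, ...)."""
        return spare_capacity // disk_capacity

    class DiskShim:
        """Very thin shim that makes a spare logical unit look like a physical disk."""
        def __init__(self, lun):
            self.lun = lun                        # underlying spare logical unit

        def read(self, offset, length):
            return self.lun.read(offset, length)  # delegate block reads to the LUN

        def write(self, offset, data):
            self.lun.write(offset, data)          # delegate block writes to the LUN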

At 430, in response to one of the at least one RAID group in the storage pool being in a degraded state, a failed disk in the degraded RAID group is rebuilt using the spare logical unit. An example implementation of the operations at 430 is described below with reference to FIG. 6, which illustrates a schematic diagram of rebuilding a RAID with distributed spare disks, according to an embodiment of the present disclosure.

According to an embodiment of the present disclosure, it may be detected whether the spare logical unit is available. Once the spare logical unit is available, it is assigned to the degraded RAID group. If the degraded RAID group initiates a rebuild action, the data of the failed disk is written to the spare logical unit. For example, in FIG. 6, disk 520-6 in RG 510-2 fails, so that RG 510-2 is in a degraded state. In this case, it may be detected whether, for example, the spare logical unit LUN 0 is available. Once LUN 0 is determined to be available, it may be assigned to RG 510-2 for rebuilding disk 520-6. When RG 510-2 initiates the rebuild, the data of disk 520-6 is written to LUN 0. The rebuild operation still comprises read, XOR, and write operations similar to the process in FIG. 2, so its description is omitted here. In FIG. 6, because the spare logical unit is built from the spare RAID (RAID-X0) group, the write IO is distributed to all disks in the storage pool. With this change, the rebuild performance of the RAID can be significantly improved.
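By way of illustration and not limitation, the detect/assign/write sequence at 430 may be sketched as follows, assuming simple objects with the illustrative attributes and methods used below.

    def rebuild_degraded_rg(rg, spare_luns):
        if not rg.is_degraded():
            return
        # Detect whether a spare logical unit is available.
        lun = next((l for l in spare_luns if l.available), None)
        if lun is None:
            return                        # no spare LUN free; the rebuild must wait
        # Assign the spare logical unit to the degraded RAID group.
        lun.available = False
        rg.attach_spare(lun)
        # Rebuild: each stripe is read + XORed from the surviving disks and written
        # to the spare LUN, whose RAID-X0 backing spreads the write IO over all disks.
        for stripe in range(rg.num_stripes):
            lun.write_stripe(stripe, rg.reconstruct_stripe(stripe))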

When a disk in a RAID group fails, the customer receives an alert to replace the old disk with a new one. In practice, however, the number of spare logical units is limited no matter how much spare capacity is arranged; therefore, after all data of the failed disk has been rebuilt to the spare logical unit, the failed disk should be replaced. According to an embodiment of the present disclosure, the method 400 may further comprise releasing the spare logical unit after the failed disk is replaced.

According to an embodiment of the present disclosure, when the failed disk is to be replaced, the data written to the spare logical unit on behalf of the failed disk is written back to the replacement disk. After the write-back, the spare logical unit is removed from the degraded RAID group and the replacement disk is added to the degraded RAID group. For example, in FIG. 6, if a new disk is inserted to replace the failed disk 520-6, a copy process is initiated, i.e., the data on the spare logical unit LUN 0 is copied to the new disk, in order to free LUN 0 for the next rebuild. Even if this process takes a long time, RG 510-2 is not at risk should a second disk failure occur, because LUN 0 already holds all of the original data. Once the copy from LUN 0 to the new disk is complete, the emulated disk based on LUN 0 is removed from RG 510-2 and the new disk is brought into RG 510-2.
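By way of illustration and not limitation, the copy-back and release sequence may be sketched as follows; the method names are illustrative assumptions.

    def release_spare_lun(rg, lun, new_disk):
        # Copy the rebuilt data from the spare LUN back to the replacement disk.
        for stripe in range(rg.num_stripes):
            new_disk.write_stripe(stripe, lun.read_stripe(stripe))
        # Remove the emulated disk backed by the spare LUN from the RAID group,
        # bring the replacement disk into the group, and free the LUN for reuse.
        rg.detach_spare(lun)
        rg.add_disk(new_disk)
        lun.available = True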

FIG. 7 is a schematic diagram of a simulation of the reconstruction process of a conventional RAID using an IO generator, according to an embodiment of the present disclosure, in which an RG 110 and an IO generator 710 are shown. As shown in FIG. 7, disk 120-2 in the RG 110 fails, causing the RG 110 to be in a degraded state. The IO generator 710 initiates read requests to disk 120-2 in the RG 110. Because the RG 110 is in degraded mode, a read request for the failed disk 120-2 triggers the RG 110 to read the corresponding data from the 4 other disks (i.e., disks 120-0, 120-1, 120-3, and 120-4), XOR the data from the 4 disks to obtain the data of the failed disk, and return the obtained data to the IO generator 710. The IO generator 710 then writes the obtained data to disk 120-5.

The simulated reconstruction results are listed in the following table together with the actual conventional RAID reconstruction results:

Table 1: Simulated reconstruction results and actual conventional RAID reconstruction results

FIG. 8 shows a schematic diagram of a simulation of a distributed reconstruction process with IO generators, according to an embodiment of the present disclosure. The reconstruction model here is a distributed reconstruction, in which a degraded disk can be reconstructed onto multiple disks instead of onto a single dedicated spare disk as in a conventional RAID reconstruction. During the rebuild process, the pre-read IOs point to a particular subset of disks or an assigned RAID group; for example, as shown in FIG. 6, all of the pre-read IOs point to RG 510-2.

Here, the simulation process satisfies the following conditions:

there is only one source RAID group to which all the pre-read IOs are directed; more than one reconstruction target disk;

measuring the parallel scaling (scaling) by the added IO generator thread; and

measure the reconstruction rate ratio by increasing target disk.

FIG. 8 shows an RG 810, which is a 4D+1P conventional RAID5 comprising disks 830-0, 830-1, ... 830-4. The RG 810 is used as the source RG to simulate the reconstruction process, similarly to the RG 110 shown in FIG. 7, and all read requests are directed to the RG 810. In addition, FIG. 8 also shows 4 spare disks 830-5, 830-6, 830-7, and 830-8, and 4 IO generators 820-0, 820-1, 820-2, and 820-3.

First, a disk in the RG 810 (e.g., disk 830-2) fails, and thus the RG 810 is in a degraded state. Read requests may then be initiated to the failed disk 830-2 via the IO generators. Each reconstruction thread need not target the entire data area of the failed disk, because the write IO loads on each of the target disks are identical to one another. For example, the 4 IO generators 820-0, 820-1, 820-2, and 820-3 may initiate read requests in parallel, each to 25% of the data area of disk 830-2, so that the read IO load matches that of the simulated mapped RG.

Next, in response to receiving the requested data, the requested data is written to the spare disks via the IO generators. For example, the 4 IO generators 820-0, 820-1, 820-2, and 820-3 may write the requested data in parallel to the 4 spare disks 830-5, 830-6, 830-7, and 830-8, so that the write IO load is substantially the same as that of the simulated mapped RG.
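By way of illustration and not limitation, the simulation may be sketched as follows: N IO generator threads each rebuild a 1/N slice of the failed disk's data area onto their own target disk; the in-memory targets (pre-sized lists) and the partitioning are illustrative assumptions.

    import threading

    def io_generator(source_rg, target, start, end):
        for stripe in range(start, end):
            data = source_rg.degraded_read(stripe)   # triggers pre-read + XOR on the source RG
            target[stripe] = data                    # write IO goes to this target disk only

    def parallel_rebuild(source_rg, targets, num_stripes):
        n = len(targets)                             # e.g., 4 generators -> 25% each
        step = num_stripes // n
        threads = [threading.Thread(
                       target=io_generator,
                       args=(source_rg, t, i * step,
                             num_stripes if i == n - 1 else (i + 1) * step))
                   for i, t in enumerate(targets)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()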

With the above model, the number of target disks is increased, for example, to 8, and/or the number of IO generator threads is increased to 8. During the simulation, CPU utilization and memory usage did not increase noticeably. The results measured with the above model are shown in the following table:

Table 2: Simulated distributed reconstruction results

It can be seen that the reconstruction rate of the simulated distributed reconstruction is five to six times that of the conventional RAID reconstruction method.

FIG. 9 shows a schematic block diagram of a device 900 that may be used to implement embodiments of the present disclosure. As shown, the device 900 includes a central processing unit (CPU) 901 that can perform various appropriate actions and processes in accordance with computer program instructions stored in a read-only memory (ROM) 902 or loaded from a storage unit 908 into a random access memory (RAM) 903. The RAM 903 can also store various programs and data required for the operation of the device 900. The CPU 901, the ROM 902, and the RAM 903 are connected to one another via a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.

A number of components in the device 900 are connected to the I/O interface 905, including: an input unit 906 such as a keyboard, a mouse, and the like; an output unit 907 such as various types of displays, speakers, and the like; a storage unit 908, such as a disk, optical disk, or the like; and a communication unit 909 such as a network card, a modem, a wireless communication transceiver, and the like. The communication unit 909 allows the device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.

The various procedures and processes described above, such as the method 400, may be performed by the processing unit 901. For example, in some embodiments, the method 400 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 900 via the ROM 902 and/or the communication unit 909. When the computer program is loaded into the RAM 903 and executed by the CPU 901, one or more steps of the method 400 described above may be performed.

FIG. 10 illustrates an exemplary block diagram of an apparatus 1000 for RAID rebuild according to embodiments of the present disclosure. The apparatus 1000 is operable to perform the method 400 described with reference to FIG. 4, the processes described in conjunction with FIGS. 5 and 6, and any other processes and methods described herein.

To this end, the apparatus 1000 comprises: a determining unit 1002 configured to determine a spare RAID group having spare capacity from a plurality of disks included in at least one RAID group of the storage pool; an establishing unit 1004 configured to establish a spare logical unit from the spare RAID group; and a rebuilding unit 1006 configured to rebuild a failed disk in a degraded RAID group using the spare logical unit, in response to one of the at least one RAID group in the storage pool being in a degraded state.

In certain embodiments, the determining unit 1002 is further configured to determine the allocation of the spare capacity among the plurality of disks based on a correspondence between the number of disks in the storage pool and a number of spare disks. In some embodiments, the establishing unit 1004 is further configured to determine the number of spare logical units established from the spare RAID group according to the size of the spare capacity. In certain embodiments, the rebuilding unit 1006 is further configured to detect whether a spare logical unit is available; in the event that a spare logical unit is available, the spare logical unit is assigned to the degraded RAID group. Once the degraded RAID group initiates a rebuild action, the data of the failed disk is written to the spare logical unit.

In certain embodiments, the apparatus 1000 further comprises a releasing unit configured to release the spare logical unit after the failed disk is replaced. The releasing unit is further configured to, when the failed disk is replaced, write the data of the failed disk stored in the spare logical unit back to the replacement disk. After the write-back, the spare logical unit is removed from the degraded RAID group and the replacement disk is added to the degraded RAID group.

The units included in the apparatus 1000 may be implemented in a variety of ways, including software, hardware, firmware, or any combination thereof. In one embodiment, one or more of the units may be implemented using software and/or firmware, such as machine-executable instructions stored on a storage medium. In addition to, or in the alternative to, machine-executable instructions, some or all of the units in the apparatus 1000 may be implemented at least in part by one or more hardware logic components. By way of example, and not limitation, exemplary types of hardware logic components that may be used include field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and so forth.

In summary, embodiments of the present disclosure provide a scheme for improving the rebuild performance of a Redundant Array of Independent Disks. Compared with the prior art, embodiments of the present disclosure can significantly improve the rebuild performance of conventional RAID. In addition, since the dedicated spare disk in the storage pool is eliminated, all disks in the storage pool can be used for user IO, which further improves disk efficiency. Using a RAID-X0 type RAID group to manage the distributed spare disk space enables write IOs to be distributed to all disks in the storage pool during a rebuild. The rebuild method of embodiments of the present disclosure can be implemented on the basis of conventional RAID techniques.

The present disclosure may be embodied as a method, an apparatus, and/or a computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for carrying out various aspects of the present disclosure.

The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.

The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.

The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), can execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, in order to implement aspects of the present disclosure.

Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.

These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.

The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of methods, apparatus, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Having described various embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or technical improvements to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
