Controller and memory system


1. A memory system, comprising:

a first memory device including a plurality of first physical blocks;

a second memory device including a plurality of second physical blocks;

a first core that manages a plurality of first super blocks that store data associated with a first logical address, the plurality of first super blocks being mapped to the plurality of first physical blocks;

a second core that manages a plurality of second super blocks that store data associated with a second logical address, the plurality of second super blocks being mapped to the plurality of second physical blocks;

a global wear leveling manager to change a mapping between the first physical block mapped to one of the first super blocks and the second physical block mapped to one of the second super blocks based on a degree of wear of the first super blocks and the second super blocks; and

a host interface classifying a logical address received from a host into the first logical address and the second logical address, providing the first logical address to the first core, and providing the second logical address to the second core.

2. The memory system of claim 1, wherein the degree of wear of a super block is determined by an erase count of the physical block mapped to the super block.

3. The memory system according to claim 2,

wherein the global wear leveling manager changes the mapping of the first physical block and the second physical block between the corresponding first super block and the corresponding second super block if:

the corresponding first super block is a global maximum super block,

the corresponding second super block is a global minimum super block, and

a difference between the degrees of wear of the corresponding first super block and the corresponding second super block is equal to or greater than a threshold,

wherein the global maximum super block has the highest degree of wear among all of the first super blocks and the second super blocks, and

wherein the global minimum super block has the lowest degree of wear among all of the first and second super blocks.

4. The memory system of claim 3, wherein the global wear leveling manager restores the changed mapping to the mapping before the change when a difference between the degrees of wear of the corresponding first super block and the corresponding second super block becomes less than the threshold.

5. The memory system according to claim 1,

further comprising a global virtual flash layer (global VFL) that performs mapping between first and second virtual addresses and physical addresses associated with the first and second physical blocks,

wherein the first core drives a first local Flash Translation Layer (FTL) that performs mapping between the first virtual address and the first logical address as addresses of the first superblock, and

wherein the second core drives a second local FTL that performs mapping between the second virtual address and the second logical address as addresses of the second superblock.

6. The memory system of claim 5, further comprising:

a first hardware accelerator to queue commands provided from the first local FTL and the second local FTL to the global VFL; and

a second hardware accelerator to schedule commands provided from the first and second cores to the first and second memory devices.

7. The memory system according to claim 1,

wherein the first core drives a first local FTL that performs mapping between the first logical address and a first virtual address associated with the first superblock and drives a first local VFL that performs mapping between the first virtual address and a physical address, and

wherein the second core drives a second local FTL that performs mapping between the second logical address and a second virtual address associated with the second superblock and drives a second local VFL that performs mapping between the second virtual address and a physical address.

8. The memory system according to claim 1,

wherein the first core performs a local wear leveling operation that moves data between the first superblocks based on a degree of wear of the first superblocks, and

wherein the second core performs a local wear leveling operation that moves data between the second superblocks based on a degree of wear of the second superblocks.

9. A controller that controls a first memory device and a second memory device, the controller comprising:

a first core that manages a plurality of first super blocks that store data associated with a first logical address, the plurality of first super blocks being mapped to a plurality of first physical blocks included in the first memory device;

a second core that manages a plurality of second super blocks that store data associated with a second logical address, the plurality of second super blocks being mapped to a plurality of second physical blocks included in the second memory device;

a global wear leveling manager to change a mapping between the first physical block mapped to one of the first super blocks and the second physical block mapped to one of the second super blocks based on a degree of wear of the first super blocks and the second super blocks; and

a host interface classifying a logical address received from a host into the first logical address and the second logical address, providing the first logical address to the first core, and providing the second logical address to the second core.

10. The controller of claim 9, wherein the degree of wear of a super block is determined by an erase count of the physical block mapped to the super block.

11. The controller according to claim 10,

wherein the global wear leveling manager changes the mapping of the first physical block and the second physical block between the corresponding first super block and the corresponding second super block if:

the corresponding first super block is a global maximum super block,

the corresponding second super block is a global minimum super block, and

a difference between the degrees of wear of the corresponding first super block and the corresponding second super block is equal to or greater than a threshold,

wherein the global maximum super block has the highest degree of wear among all of the first super blocks and the second super blocks, and

wherein the global minimum super block has the lowest degree of wear among all of the first and second super blocks.

12. The controller of claim 11, wherein the global wear leveling manager restores the changed mapping to the mapping before the change when a difference between the degrees of wear of the corresponding first super block and the corresponding second super block becomes less than the threshold.

13. The controller according to claim 9,

further comprising a global virtual flash layer (global VFL) that performs mapping between first and second virtual addresses and physical addresses associated with the first and second physical blocks,

wherein the first core drives a first local Flash Translation Layer (FTL) that performs mapping between the first virtual address and the first logical address as addresses of the first superblock, and

wherein the second core drives a second local FTL that performs mapping between the second virtual address and the second logical address as addresses of the second superblock.

14. The controller of claim 13, further comprising:

a first hardware accelerator to queue commands provided from the first local FTL and the second local FTL to the global VFL; and

a second hardware accelerator to schedule commands provided from the first and second cores to the first and second memory devices.

15. The controller according to claim 9,

wherein the first core drives a first local FTL that performs mapping between the first logical address and a first virtual address associated with the first superblock and drives a first local VFL that performs mapping between the first virtual address and a physical address, and

wherein the second core drives a second local FTL that performs mapping between the second logical address and a second virtual address associated with the second superblock and drives a second local VFL that performs mapping between the second virtual address and a physical address.

16. The controller according to claim 9,

wherein the first core performs a local wear leveling operation that moves data between the first superblocks based on a degree of wear of the first superblocks, and

wherein the second core performs a local wear leveling operation that moves data between the second superblocks based on a degree of wear of the second superblocks.

17. A memory system, comprising:

a plurality of memory devices including a first group of memory units and a second group of memory units, the first group of memory units and the second group of memory units being exclusively assigned to a first core and a second core, respectively;

the first and second cores to control the plurality of memory devices to perform local wear leveling operations on the first and second groups of memory units, respectively; and

a global wear leveling manager to exchange the exclusive allocation of one or more memory units between the first core and the second core based on a degree of wear of each memory unit.

Background

Computer environment paradigms have transitioned to pervasive computing, which makes computing systems available anytime and anywhere. Therefore, the use of portable electronic devices such as mobile phones, digital cameras, and laptop computers is rapidly increasing. These portable electronic devices typically use a memory system having one or more memory devices to store data. The memory system may be used as a primary memory device or a secondary memory device for the portable electronic device.

Because there are no moving parts, the memory system provides advantages such as excellent stability and durability, high information access speed, and low power consumption. Examples of the memory system having such advantages include a Universal Serial Bus (USB) memory device, a memory card having various interfaces, and a Solid State Drive (SSD).

Disclosure of Invention

Various embodiments of the present disclosure relate to a memory system and an operating method thereof, which can balance wear-out degrees of memory blocks included in different memory devices.

According to an embodiment, a memory system includes: a first memory device including a plurality of first physical blocks; a second memory device including a plurality of second physical blocks; a first core adapted to manage a plurality of first super blocks storing data associated with a first logical address, the plurality of first super blocks being mapped to a plurality of first physical blocks; a second core adapted to manage a plurality of second super blocks storing data associated with a second logical address, the plurality of second super blocks mapped to a plurality of second physical blocks; a global wear leveling manager adapted to change a mapping between a first physical block mapped to one of the first super blocks and a second physical block mapped to one of the second super blocks based on a degree of wear of the first super blocks and the second super blocks; a host interface adapted to classify a logical address received from a host into a first logical address and a second logical address, provide the first logical address to the first core, and provide the second logical address to the second core.

According to another embodiment, a controller to control a first memory device and a second memory device, the controller includes: a first core adapted to manage a plurality of first super blocks storing data associated with a first logical address, the plurality of first super blocks being mapped to a plurality of first physical blocks included in a first memory device; a second core adapted to manage a plurality of second super blocks storing data associated with a second logical address, the plurality of second super blocks being mapped to a plurality of second physical blocks included in a second memory device; a global wear leveling manager adapted to change a mapping between a first physical block mapped to one of the first super blocks and a second physical block mapped to one of the second super blocks based on a degree of wear of the first super blocks and the second super blocks; a host interface adapted to classify logical addresses received from a host into a first logical address and a second logical address, provide the first logical address to the first core, and provide the second logical address to the second core.

According to yet another embodiment, a memory system includes: a plurality of memory devices including a first group of memory units and a second group of memory units, the first and second groups being exclusively assigned to a first core and a second core, respectively; a first core and a second core adapted to control the memory devices to perform local wear leveling operations on the first group and the second group, respectively; and a global wear leveling manager adapted to exchange the exclusive allocation of one or more memory units between the first core and the second core based on a degree of wear of the respective memory units.

According to yet another embodiment, a method of operating a memory system including a plurality of memory devices including a first group of memory units and a second group of memory units includes: assigning the first and second groups exclusively to a first core and a second core, respectively; controlling, by the first core and the second core, the memory devices to perform local wear leveling operations on the first group and the second group, respectively; and exchanging the exclusive allocation of one or more memory units between the first core and the second core based on a degree of wear of the respective memory units.

These and other features and advantages of the claimed invention will become apparent to those of ordinary skill in the art in view of the following drawings and detailed description.

Drawings

FIG. 1 is a block diagram illustrating a data processing system including a memory system according to an embodiment.

Fig. 2 is a block diagram illustrating an example of a memory device.

Fig. 3 and 4 are diagrams illustrating an operation of the memory system according to the embodiment.

Fig. 5 is a block diagram showing the structure of the memory system according to the first embodiment.

Fig. 6A to 6C are examples of address mapping tables of the memory system shown in fig. 5.

Fig. 7 is a block diagram showing the structure of a memory system according to the second embodiment.

FIG. 8 is an example of an address mapping table of the memory system shown in FIG. 7.

Detailed Description

Hereinafter, various embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. In the following description, only portions necessary for understanding the operation according to the present embodiment will be described, and descriptions of other portions may be omitted so as not to obscure the subject matter of the present embodiment.

The figures are schematic diagrams of various embodiments (and intermediate structures). As such, variations from the configurations and shapes of the illustrations as a result, for example, of manufacturing techniques and/or tolerances, are to be expected. Accordingly, the described embodiments should not be construed as limited to the particular configurations and shapes shown herein but are to include deviations in configurations and shapes that do not depart from the spirit and scope of the invention as defined by the appended claims.

The present invention is described herein with reference to cross-sectional and/or plan views of desirable embodiments of the invention. However, the embodiments of the present invention should not be construed as limiting the inventive concept. Although a few embodiments of the present invention will be shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention. Unless defined otherwise, all terms including technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.

It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the present disclosure and relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

It should also be noted that features present in one embodiment may be used with one or more features of another embodiment without departing from the scope of the invention.

It is further noted that throughout the drawings, like reference numerals refer to like elements.

FIG. 1 is a block diagram illustrating a data processing system 100 according to an embodiment of the present invention.

Referring to FIG. 1, data processing system 100 may include a host 102 operably coupled to a memory system 110.

The host 102 may include any of a variety of portable electronic devices such as a mobile phone, an MP3 player, and a laptop computer, or any of a variety of non-portable electronic devices such as a desktop computer, a game console, a Television (TV), and a projector.

Host 102 may include at least one Operating System (OS) that may manage and control the overall functionality and operation of host 102 and provide operations between host 102 and a user using data processing system 100 or memory system 110. The OS may support functions and operations corresponding to the purpose and usage of a user. For example, the OS may be divided into a general-purpose OS and a mobile OS according to the mobility of the host 102. The general-purpose OS may be classified into a personal OS and an enterprise OS according to the user's environment.

The memory system 110 may operate to store data for the host 102 in response to requests by the host 102. Non-limiting examples of the memory system 110 may include a Solid State Drive (SSD), a multimedia card (MMC), a Secure Digital (SD) card, a Universal Serial Bus (USB) device, a Universal Flash Storage (UFS) device, a Compact Flash (CF) card, a Smart Media Card (SMC), a Personal Computer Memory Card International Association (PCMCIA) card, and a memory stick. The MMC may include an embedded MMC (eMMC), a reduced-size MMC (RS-MMC), a micro MMC, and the like. The SD card may include a mini SD card and a micro SD card.

The memory system 110 may be implemented as various types of storage devices. Examples of such storage devices may include, but are not limited to, volatile memory devices such as Dynamic Random Access Memory (DRAM) and static RAM (SRAM), and nonvolatile memory devices such as Read Only Memory (ROM), mask ROM (MROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), ferroelectric RAM (FRAM), phase-change RAM (PRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM or ReRAM), and flash memory. The flash memory may have a 3-dimensional (3D) stack structure.

Memory system 110 may include a controller 130 and a plurality of memory devices 150-1, 150-2, 150-3, and 150-4. The plurality of memory devices 150-1, 150-2, 150-3, and 150-4 may store data for the host 102, and the controller 130 may control the storage of data into the plurality of memory devices 150-1, 150-2, 150-3, and 150-4.

The controller 130 and the plurality of memory devices 150-1, 150-2, 150-3, and 150-4 may be integrated into a single semiconductor device. For example, the controller 130 and the plurality of memory devices 150-1, 150-2, 150-3, and 150-4 may be integrated into one semiconductor device to constitute a Solid State Drive (SSD). When the memory system 110 is used as an SSD, the operation speed of the host 102 connected to the memory system 110 can be increased. In addition, the controller 130 and the plurality of memory devices 150-1, 150-2, 150-3, and 150-4 may be integrated into one semiconductor device to constitute a memory card, such as: a Personal Computer Memory Card International Association (PCMCIA) card, a Compact Flash (CF) card, a Smart Media (SM) card, a memory stick, a multimedia card (MMC) including a reduced-size MMC (RS-MMC) and a micro MMC, a Secure Digital (SD) card including a mini SD card, a micro SD card, and an SDHC card, or a Universal Flash Storage (UFS) device.

The plurality of memory devices 150-1, 150-2, 150-3, and 150-4 may be non-volatile memory devices and may retain stored data even when power is not supplied. The memory devices 150-1, 150-2, 150-3, and 150-4 may store data provided from the host 102 through a programming operation and may provide the stored data to the host 102 through a read operation. The plurality of memory devices 150-1, 150-2, 150-3, and 150-4 may include a plurality of memory blocks, each memory block may include a plurality of pages, and each page may include a plurality of memory cells coupled to a word line. In an embodiment, the plurality of memory devices 150-1, 150-2, 150-3, and 150-4 may be flash memories. The flash memory may have a 3-dimensional (3D) stack structure.

FIG. 2 is a block diagram illustrating an example of memory device 150-1.

Fig. 2 representatively illustrates a first memory device 150-1 among a plurality of memory devices 150-1, 150-2, 150-3, and 150-4 included in the memory system 110 of fig. 1.

The first memory device 150-1 may include a plurality of memory dies DIE. For example, each of the memory dies DIE may be a NAND memory die. The memory dies DIE may be connected to the controller 130 through a channel CH. The number of dies may vary by design.

Each of the memory dies DIE may have a hierarchy of planes, memory blocks, and pages. A memory die may receive one command at a time. A memory die may include multiple planes, and the multiple planes may process received commands in parallel. Each of the planes may include a plurality of physical blocks. Each of the physical blocks may be a minimum unit of an erase operation. One physical block may include a plurality of pages. Each of the pages may be a minimum unit of a write operation. The multiple memory dies may operate in parallel with each other.
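The die/plane/block/page hierarchy described above can be sketched as nested data structures. The following is a minimal illustrative model in C; the geometry constants (pages per block, blocks per plane, planes per die, dies per device) and the 4 KB page size are assumptions for the sketch, not values taken from the embodiment.

```c
/* Assumed geometry for illustration only; real devices differ. */
#define PAGE_SIZE        4096
#define PAGES_PER_BLOCK   256
#define BLOCKS_PER_PLANE 1024
#define PLANES_PER_DIE      2
#define DIES_PER_DEVICE     4

struct page  { unsigned char data[PAGE_SIZE]; };        /* minimum unit of a write operation */
struct block { struct page   pages[PAGES_PER_BLOCK];    /* minimum unit of an erase operation */
               unsigned int  erase_count; };
struct plane { struct block  blocks[BLOCKS_PER_PLANE]; };
struct die   { struct plane  planes[PLANES_PER_DIE]; };      /* receives one command at a time */
struct memory_device { struct die dies[DIES_PER_DEVICE]; };  /* dies can operate in parallel   */
```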

The structures of the second to fourth memory devices 150-2, 150-3 and 150-4 may be substantially the same as or similar to the structure of the first memory device 150-1. Multiple memory devices 150-1, 150-2, 150-3, and 150-4 may operate in parallel with each other.

Referring back to FIG. 1, the controller 130 may include a host interface 132, a plurality of cores 134-1, 134-2, 134-3, and 134-4, a plurality of memory interfaces 142-1 and 142-2, a plurality of memories 144-1, 144-2, 144-3, and 144-4, and a Global Wear Leveling Manager (GWLM) 136, all operatively coupled via an internal bus.

To provide higher data input/output performance, the memory system 110 may include multiple cores 134-1, 134-2, 134-3, and 134-4 operating in parallel with one another. Each of the cores 134-1, 134-2, 134-3, and 134-4 may drive firmware called a Flash Translation Layer (FTL).

The FTL is firmware used between the file system of the host 102 and the flash memory. Flash memory can provide a faster read speed at a relatively lower unit cost than other memory devices. However, since the flash memory does not support an overwrite operation, an erase operation must first be performed to write data to the flash memory. Also, the unit of data to be erased is larger than the unit of data to be written to the flash memory. When the memory system 110 including a flash memory is used as a storage device of the host 102, a file system designed for a hard disk cannot be used as it is because of these erase characteristics. Thus, the FTL may be used between the file system and the flash memory.

The cores 134-1, 134-2, 134-3, and 134-4 may be implemented as microprocessors or Central Processing Units (CPUs).

The host I/F 132 may be configured to process commands and data for the host 102 and may communicate with the host 102 through one or more of a variety of interface protocols, such as: Universal Serial Bus (USB), multimedia card (MMC), peripheral component interconnect express (PCI-e or PCIe), Small Computer System Interface (SCSI), serial attached SCSI (SAS), Serial Advanced Technology Attachment (SATA), Parallel Advanced Technology Attachment (PATA), Enhanced Small Disk Interface (ESDI), and Integrated Drive Electronics (IDE).

The host I/F 132 may be driven by firmware called a Host Interface Layer (HIL) to exchange data with the host.

The host interface 132 may receive a request from the host 102 and a logical address corresponding to the request. For example, the logical address may be a logical block address LBA used in a file system of the host 102.

The host interface 132 may assign requests to the cores 134-1, 134-2, 134-3, and 134-4 based on the logical addresses. For example, the host interface 132 may provide requests to the cores 134-1, 134-2, 134-3, and 134-4 based on the result values obtained by performing a modulo operation on the logical address.

The plurality of memory I/Fs 142-1 and 142-2 may serve as a memory/storage interface for interfacing the controller 130 with the plurality of memory devices 150-1, 150-2, 150-3, and 150-4, such that the controller 130 controls the plurality of memory devices 150-1, 150-2, 150-3, and 150-4 in response to requests from the host 102. When the plurality of memory devices 150-1, 150-2, 150-3, and 150-4 are flash memories or, particularly, NAND flash memories, the plurality of memory I/Fs 142-1 and 142-2 may generate control signals for the memory devices and process data to be provided to the plurality of memory devices 150-1, 150-2, 150-3, and 150-4 under the control of the plurality of cores 134-1, 134-2, 134-3, and 134-4. The plurality of memory I/Fs 142-1 and 142-2 may serve as an interface (e.g., a NAND flash interface) for processing commands and data between the controller 130 and the plurality of memory devices 150-1, 150-2, 150-3, and 150-4. In particular, the plurality of memory I/Fs 142-1 and 142-2 may support data transfers between the controller 130 and the plurality of memory devices 150-1, 150-2, 150-3, and 150-4.

The plurality of cores 134-1, 134-2, 134-3, and 134-4 may control the plurality of memory devices 150-1, 150-2, 150-3, and 150-4 through the plurality of memory interfaces 142-1 and 142-2. Each of the cores 134-1, 134-2, 134-3, and 134-4 may access physical blocks exclusively assigned to itself. For example, when the first core 134-1 accesses physical blocks included in the first memory device 150-1, the second to fourth cores 134-2, 134-3, and 134-4 may not access the physical blocks.

The plurality of memories 144-1, 144-2, 144-3, and 144-4 may serve as operation memories of the memory system 110 and the controller 130 and store data for driving the memory system 110 and the controller 130. For example, the first memory 144-1 may store data needed by the first core 134-1 to perform operations. Similarly, the second to fourth memories 144-2, 144-3 and 144-4 may store data required for the second to fourth cores 134-2, 134-3 and 134-4 to perform operations, respectively.

The plurality of memories 144-1, 144-2, 144-3 and 144-4 may be implemented as volatile memories. For example, each of the memories 144-1, 144-2, 144-3, and 144-4 may be implemented as a Static Random Access Memory (SRAM) or a Dynamic Random Access Memory (DRAM). The memories 144-1, 144-2, 144-3, and 144-4 may be disposed internal or external to the controller 130. FIG. 1 shows, as an example, memories 144-1, 144-2, 144-3, and 144-4 disposed within controller 130. In some embodiments, the memories 144-1, 144-2, 144-3 and 144-4 may be implemented as external volatile memory devices having a memory interface for inputting and outputting data between the memories 144-1, 144-2, 144-3 and 144-4 and the controller 130.

The cores 134-1, 134-2, 134-3, and 134-4 may perform operations corresponding to requests received from the host interface 132, i.e., foreground operations. For example, the first core 134-1 may control the first memory device 150-1 to program data into physical blocks of the first memory device 150-1 in response to a write request of the host interface 132, and control the first memory device 150-1 to read data from physical blocks of the first memory device 150-1 in response to a read request of the host interface 132.

In addition, the cores 134-1, 134-2, 134-3, and 134-4 may perform background operations on the memory devices 150-1, 150-2, 150-3, and 150-4, respectively.

The plurality of physical blocks included in the memory devices 150-1, 150-2, 150-3, and 150-4 may have a limited lifetime. Therefore, when an erase operation is performed on a specific physical block a predetermined number of times, the physical block may not be used any more. Each of the cores 134-1, 134-2, 134-3, and 134-4 may perform a local wear leveling operation on physical blocks exclusively allocated to itself as a background operation.

When each of the cores 134-1, 134-2, 134-3, and 134-4 performs a local wear leveling operation only on physical blocks exclusively allocated to itself, it is difficult to balance the degree of wear of all the physical blocks of the memory system 110. This is because the number of requests distributed to each of the cores 134-1, 134-2, 134-3, and 134-4 may vary depending on the logical address associated with the request provided from the host 102.

Even if the numbers of requests provided to the respective cores 134-1, 134-2, 134-3, and 134-4 are different from each other, the degree of wear of the physical blocks exclusively allocated to the same core may be balanced by the local wear leveling operation. However, it is difficult to balance the degree of wear of physical blocks exclusively assigned to different cores. For example, when different cores exclusively access different memory devices, it is difficult to balance the degree of wear of physical blocks included in the different memory devices. Furthermore, when the physical blocks of a particular memory device reach the end of their useful life earlier than the physical blocks of the remaining memory devices, the entire memory system 110 may not be usable even though the physical blocks of the remaining memory devices are still available.

When the host interface 132 exchanges logical addresses associated with different cores, the frequency with which the respective cores service requests provided after the exchange may be balanced. However, even if the exchange is performed after the balance has been broken, the already unbalanced wear of the physical blocks may not be restored. Thus, the exchange of logical addresses alone may not be sufficient to effectively balance the degree of wear of physical blocks exclusively assigned to different cores.

According to this embodiment, the global wear leveling manager 136 may perform a global wear leveling operation that swaps physical blocks exclusively allocated to the various cores 134-1, 134-2, 134-3, and 134-4. The global wear leveling operation according to the present embodiment will be described in detail with reference to FIGS. 3 to 5.

In the present disclosure, a global wear leveling operation of exchanging physical blocks exclusively allocated to a first core and a second core, respectively, may mean an operation of exchanging exclusive allocation of physical blocks between the first core and the second core. For example, when a first set of physical blocks is exclusively assigned to a first core and a second set of physical blocks is exclusively assigned to a second core, the first and second sets may belong to a first memory device and a second memory device respectively controlled by the first core and the second core. The exclusive allocation to the first group may be changed to the second core and the exclusive allocation to the second group may be changed to the first core by the global wear leveling operation. When the first and second groups are first and second superblocks, respectively, the exclusive allocation of the first and second superblocks may be exchanged between the first and second cores through a global wear leveling operation.

Fig. 3 and 4 are diagrams illustrating an operation of the memory system 110 according to an embodiment of the present invention.

FIG. 3 illustrates the host interface 132, the plurality of cores 134-1, 134-2, 134-3, and 134-4, and the global wear leveling manager 136 described above with reference to FIG. 1.

The host interface 132 may receive a request including a logical address from the host 102. The host interface 132 may distribute requests to the cores 134-1, 134-2, 134-3, and 134-4 based on the logical address of the request. For example, the host interface 132 may provide the request to the first core 134-1 when a result value obtained by performing a modulo 4 operation ("LBA % 4") on the logical address is "1"; to the second core 134-2 when the result value is "2"; to the third core 134-3 when the result value is "3"; and to the fourth core 134-4 when the result value is "0".
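As a small illustration of the distribution rule above, the sketch below maps an LBA to a core index using the "LBA % 4" rule of this example (result 1 to the first core, 2 to the second, 3 to the third, 0 to the fourth). The function name and the zero-based core indices are assumptions made for the sketch.

```c
#include <stdint.h>
#include <stdio.h>

/* Returns the zero-based core index that services the request, following
 * the example: LBA % 4 == 1 -> core 134-1, 2 -> 134-2, 3 -> 134-3, 0 -> 134-4. */
static int core_for_lba(uint64_t lba)
{
    unsigned r = (unsigned)(lba % 4);
    return (r == 0) ? 3 : (int)r - 1;
}

int main(void)
{
    for (uint64_t lba = 0; lba < 8; lba++)
        printf("LBA %llu -> core index %d\n",
               (unsigned long long)lba, core_for_lba(lba));
    return 0;
}
```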

Multiple physical blocks and multiple superblocks may be allocated to each of cores 134-1, 134-2, 134-3, and 134-4. Each of the super blocks may be mapped to one or more physical blocks. For example, the cores 134-1, 134-2, 134-3, and 134-4 may group blocks that can be accessed in parallel among a plurality of physical blocks into super blocks, and access and manage the grouped blocks in units of super blocks.

In an initial state prior to performing a global wear leveling operation in the memory system 110, the superblock of the first core 134-1 may be mapped to physical blocks of the first memory device 150-1. Similarly, the super blocks of the second to fourth cores 134-2, 134-3 and 134-4 may be mapped to physical blocks included in the second to fourth memory devices 150-2, 150-3 and 150-4, respectively. Hereinafter, the superblock assigned to the first core 134-1 is referred to as a first superblock, and the superblock assigned to the second core 134-2 is referred to as a second superblock. In addition, the physical blocks allocated to the first memory device 150-1 are referred to as first physical blocks, and the physical blocks allocated to the second memory device 150-2 are referred to as second physical blocks.

Each of the cores 134-1, 134-2, 134-3, and 134-4 may perform local wear leveling operations on the superblocks assigned to itself. For example, when the difference between the degrees of wear of the first superblocks is equal to or greater than a threshold, the first core 134-1 may move the data stored in the first superblock with the least wear to the first superblock with the most wear.

The degree of wear of a superblock may be determined by its erase count. For example, the first core 134-1 may count the number of erasures of each first superblock to determine its degree of wear. The erase count of a superblock may be determined by the erase counts of the physical blocks mapped to the superblock.

The first core 134-1 may place free blocks among the first superblocks in a free block queue, in order from the lowest erase count to the highest erase count. In FIG. 3, a superblock queued in the free block queue is denoted by reference symbol "SB". A free block does not currently store any data. The erase count and free block queue for each of the superblocks may be stored in the first memory 144-1; the first memory 144-1 is omitted from FIG. 3. Among the superblocks assigned to the same core, the superblock with the highest erase count is referred to as the local maximum superblock, and the superblock with the lowest erase count is referred to as the local minimum superblock.

When the difference between the erase counts of the local maximum superblock and the local minimum superblock exceeds a threshold, the first core 134-1 may transfer and store data of the local minimum superblock into the local maximum superblock, and frequently use the local minimum superblock by allowing user data to be stored in the local minimum superblock. In fig. 3, the local maximum super block is represented by a background super block (BGSB) and the local minimum super block is represented by a foreground super block (FGSB).

Similarly, each of the second through fourth cores 134-2, 134-3, and 134-4 may count the number of erasures assigned to its own superblock, generate a free block queue, and perform a local wear leveling operation. The free block queues of the third core 134-3 and the fourth core 134-4 are omitted from FIG. 3.
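A minimal sketch of the local wear leveling decision described above, assuming a simple array of superblock descriptors and an arbitrary threshold; the data-movement step is only stubbed. The policy follows the text: when the erase-count gap between the local maximum and local minimum superblocks reaches the threshold, the data of the local minimum superblock is moved into the local maximum superblock so that the least-worn superblock can absorb new user data.

```c
#define NUM_SUPERBLOCKS 8
#define WEAR_THRESHOLD 20   /* assumed value, not taken from the embodiment */

struct superblock {
    unsigned erase_count;
    int      has_data;      /* nonzero if the superblock currently stores data */
};

/* Hypothetical data-migration hook: copy valid data, then erase the source. */
static void move_data(struct superblock *src, struct superblock *dst)
{
    dst->has_data = 1;
    src->has_data = 0;
    src->erase_count++;     /* the source is erased after the migration */
}

void local_wear_leveling(struct superblock sb[], int n)
{
    int min = 0, max = 0;
    for (int i = 1; i < n; i++) {
        if (sb[i].erase_count < sb[min].erase_count) min = i;
        if (sb[i].erase_count > sb[max].erase_count) max = i;
    }
    /* Move cold data from the least-worn superblock to the most-worn one,
     * then reuse the least-worn superblock for new (hot) user data. */
    if (sb[max].erase_count - sb[min].erase_count >= WEAR_THRESHOLD &&
        sb[min].has_data)
        move_data(&sb[min], &sb[max]);
}
```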

According to an embodiment, when the degree of wear of superblocks assigned to different cores is unbalanced, global wear leveling manager 136 may perform a global wear leveling operation, swapping physical blocks assigned to different cores. For example, global wear leveling manager 136 may swap the physical block with the highest erase count among the physical blocks assigned to the core with the highest requested execution frequency for the physical block with the lowest erase count among the physical blocks assigned to the core with the lowest requested execution frequency.

When the global wear leveling manager 136 performs a global wear leveling operation, the degree of wear of the physical blocks included in the entire memory system 110 may be leveled or balanced even if the number of requests from the host 102 for each logical address is different. According to the present embodiment, since the memory system 110 can be used until all physical blocks of the memory system 110 reach the end of their lifetime, the lifetime of the memory system 110 can be improved.

FIGS. 3 and 4 show an example of a global wear leveling operation in a case where the degrees of wear of the superblocks of different cores have become unbalanced because the first core 134-1 has the highest request frequency and the second core 134-2 has the lowest request frequency.

FIG. 4 illustrates the operation of the memory system 110 according to an embodiment of the invention.

In operation S401, the first core 134-1 may erase the first superblock. In FIG. 3, the first superblock to be erased is represented by "SBCLOSED" in the first core 134-1.

In operation S403, the first core 134-1 may determine whether the erased first superblock is the superblock with the highest erase count in the overall memory system 110. Hereafter, the superblock with the highest erase count throughout the memory system 110 is referred to as the global maximum superblock. Since the degrees of wear of the superblocks allocated to the same core are kept uniform by the local wear leveling operation, the global maximum superblock may be included among the superblocks of the core that receives the largest number of requests.

In operation S405, when the first superblock is the global maximum superblock, the first core 134-1 may determine whether the difference in erase counts (MAX-MIN) between the global maximum superblock and the superblock with the lowest erase count throughout the memory system 110 is equal to or greater than a threshold. Hereinafter, the superblock with the lowest erase count throughout the memory system 110 is referred to as the global minimum superblock. According to an embodiment, each of the cores 134-1, 134-2, 134-3, and 134-4 may store the erase counts of the superblocks of the entire memory system 110 in the memory linked to that core, and obtain the erase count of the global minimum superblock from those erase counts. The global minimum superblock may be included among the superblocks of the core that receives the smallest number of requests. In the example of FIGS. 3 and 4, the global minimum superblock may be included among the superblocks assigned to the second core 134-2.

When the difference between the erase counts of the global maximum superblock and the global minimum superblock is equal to or greater than the threshold, the global wear leveling manager 136 may perform the global wear leveling operations of operations S407, S409, S411, S413, and S415.

In operation S407, the first core 134-1 may request a global wear leveling operation from the global wear leveling manager 136.

In operation S409, the global wear leveling manager 136 may request a global minimum superblock from the second core 134-2.

In operation S411, when data is stored in the global minimum superblock, the second core 134-2 may move and store the data into a superblock having the highest erase count (i.e., a local maximum superblock) in the second superblock, and erase the global minimum superblock. In FIG. 3, the global minimum superblock is represented by "SBCLOSED" in the second core 134-2. In operation S413, the second core 134-2 may provide the global wear leveling manager 136 with a response to the request of operation S409.

In operation S415, the global wear leveling manager 136 may swap the physical block (MAX PB) mapped to the global maximum superblock with the physical block (MIN PB) mapped to the global minimum superblock. That is, global wear leveling manager 136 may map physical blocks mapped to a global maximum superblock to a global minimum superblock, and map physical blocks mapped to a global minimum superblock to a global maximum superblock. The physical block with the highest erase count throughout the memory system 110 that was accessed via the first core 134-1 may thereafter be accessed by the second core 134-2. In addition, the physical block having the lowest erase count in the overall memory system 110 that was accessed via the second core 134-2 may thereafter be accessed by the first core 134-1.

When the global wear leveling manager 136 exchanges physical blocks, the global maximum superblock included in the first core 134-1 may be changed to a global minimum superblock, and the global minimum superblock included in the second core 134-2 may be changed to a global maximum superblock. This is because the erase count of a superblock may be determined from the erase count of the physical blocks mapped to that superblock.

The second core 134-2 may rarely use the global maximum superblock when the second core 134-2 tends to maintain the lowest requested execution frequency. The global minimum superblock may be frequently used by the first core 134-1 when the first core 134-1 tends to maintain the highest request execution frequency. Thus, the degree of wear of the superblock and physical blocks of the entire memory system 110 may be balanced.
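The swap in operation S415 can be thought of as exchanging the physical blocks recorded for two superblocks in the virtual-to-physical table. The sketch below assumes a flat V2P table indexed by core and superblock; the struct layout, array sizes, and function name are illustrative assumptions rather than the embodiment's actual data structures.

```c
#define NUM_CORES   4
#define SB_PER_CORE 16
#define DIES_PER_SB 4

/* One V2P entry: the physical blocks (one per die) backing a superblock. */
struct v2p_entry {
    int device;              /* memory device holding the blocks */
    int pb[DIES_PER_SB];     /* physical block index in each die */
};

static struct v2p_entry v2p[NUM_CORES][SB_PER_CORE];

/* Operation S415: map the physical blocks of the global maximum superblock
 * to the global minimum superblock and vice versa, by swapping the two
 * V2P entries. */
void swap_superblock_mapping(int core_a, int sb_a, int core_b, int sb_b)
{
    struct v2p_entry tmp = v2p[core_a][sb_a];
    v2p[core_a][sb_a] = v2p[core_b][sb_b];
    v2p[core_b][sb_b] = tmp;
}
```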

Fig. 5 is a block diagram showing the structure of the memory system 110 according to the first embodiment of the present invention.

Fig. 6A to 6C are examples of address mapping tables of the memory system 110 shown in fig. 5.

The memory system 110 shown in FIG. 5 may further include a fifth core 134-5 and a fifth memory 144-5 in addition to the components of the memory system 110 shown in FIG. 1. The memory system 110 shown in FIG. 5 may also include the plurality of memories 144-1, 144-2, 144-3, and 144-4, which are omitted from the figure.

The host interface 132 may drive the HIL. As described with reference to FIGS. 1 and 3, the HIL may distribute requests received from the host 102 to the multiple cores 134-1, 134-2, 134-3, and 134-4 based on the logical address associated with the request.

The first through fourth cores 134-1, 134-2, 134-3, and 134-4 may drive the FTL. In FIG. 5, the FTLs driven by the first through fourth cores 134-1, 134-2, 134-3 and 134-4, respectively, are represented by a first FTL1, a second FTL2, a third FTL3 and a fourth FTL4, respectively.

Each of the FTLs may translate logical addresses used in a file system of the host 102 into virtual addresses. The virtual address may be an address indicating the super block described above with reference to fig. 3. Each FTL can store a mapping table indicating a mapping between logical addresses and virtual addresses in the memories 144-1, 144-2, 144-3, and 144-4.

FIG. 6A is an example of a logical to virtual (L2V) mapping table stored in the first memory 144-1.

The L2V mapping table stored in the first memory 144-1 may include a mapping of the virtual address VA to each logical address LA associated with the first core 134-1. In the present embodiment, because the first core 134-1 only performs requests associated with logical addresses having a result value of "1" of the modulo 4 operation, the L2V mapping table may only include logical addresses having a result value of "1". Additionally, the L2V mapping table may only include the virtual address of the first superblock assigned to the first core 134-1.
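A sketch of how a per-core L2V table could be organized under the assumptions above: since the first core only receives logical addresses with LBA % 4 == 1, a dense table can be indexed by LBA / 4. The table size, the "unmapped" sentinel, and the function name are assumptions made for the sketch.

```c
#include <stdint.h>

#define L2V_ENTRIES 1024
#define VA_UNMAPPED 0xFFFFFFFFu

/* Virtual addresses of the first core's data, indexed by LBA / 4.
 * Entries should be initialized to VA_UNMAPPED before use. */
static uint32_t l2v_core1[L2V_ENTRIES];

uint32_t l2v_lookup_core1(uint64_t lba)
{
    if (lba % 4 != 1)            /* not an address handled by the first core */
        return VA_UNMAPPED;
    uint64_t idx = lba / 4;
    if (idx >= L2V_ENTRIES)      /* outside the sketch's address range */
        return VA_UNMAPPED;
    return l2v_core1[idx];
}
```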

Referring back to fig. 5, each of the FTLs can manage a super block mapped to a physical block. For example, each of the FTLs may perform a local wear leveling operation based on the erase count of the superblock.

The fifth core 134-5 may include a Global Virtual Flash Layer (GVFL), a Global Flash Interface Layer (GFIL), and a Global Wear Leveling Manager (GWLM) 136.

The GVFL may perform mapping between virtual addresses and physical addresses throughout the memory system 110. The GVFL can map normal physical blocks other than the defective physical blocks to the super block. The GVFL may store a table indicating the mapping between virtual addresses and physical addresses in the fifth memory 144-5.

Fig. 6B provides an example of the V2P mapping table stored in the fifth memory 144-5. The V2P mapping table stored in the fifth memory 144-5 may include mappings between the superblocks SB assigned to the cores 134-1, 134-2, 134-3, and 134-4 and the physical blocks PB of the plurality of memory devices 150-1, 150-2, 150-3, and 150-4. Each of the superblocks SB may be mapped to physical blocks PB operable in parallel with each other. Fig. 6B provides an example of a case where one superblock SB is mapped to physical blocks PB included in four memory dies DIE1, DIE2, DIE3, and DIE4. For example, a first superblock of the first core 134-1 may be mapped to physical blocks included in the four memory dies DIE1, DIE2, DIE3, and DIE4 of the first memory device 150-1.
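To illustrate how one V2P row like those in FIG. 6B can be used, the sketch below resolves a page of a superblock to a physical location, assuming the superblock's pages are striped across the four dies so the dies can be programmed in parallel. The striping rule and the struct layout are assumptions for the sketch.

```c
#define DIES_PER_SB 4

/* One row of the V2P table: the physical block backing the superblock in
 * each of the four dies it spans (cf. FIG. 6B). */
struct v2p_row {
    int device;
    int pb[DIES_PER_SB];
};

/* A fully resolved physical location of one page. */
struct phys_addr { int device; int die; int block; int page; };

struct phys_addr resolve_page(const struct v2p_row *row, int sb_page)
{
    struct phys_addr pa;
    pa.device = row->device;
    pa.die    = sb_page % DIES_PER_SB;   /* stripe pages across the dies */
    pa.block  = row->pb[pa.die];
    pa.page   = sb_page / DIES_PER_SB;
    return pa;
}
```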

According to an embodiment, the GVFL may further perform an operation of storing parity data in a portion of the super block in preparation for chip kill (chipkill) that may occur in the memory device.

Referring back to fig. 5, the GFIL may generate a command to be provided to the memory device based on the physical address translated by the GVFL. For example, the GFIL may generate program commands, read commands, and erase commands for physical blocks of the memory devices 150-1, 150-2, 150-3, and 150-4.

The global wear leveling manager 136 may perform a global wear leveling operation by changing the mapping between the virtual addresses and the physical addresses stored in the fifth memory 144-5.

Referring back to FIG. 6B, prior to performing the global wear leveling operation, each of the super blocks allocated to the first core 134-1 may be mapped to physical blocks of the first memory device 150-1, and each of the super blocks allocated to the second core 134-2 may be mapped to physical blocks of the second memory device 150-2.

The dashed lines shown in FIG. 6B represent the mapping between physical blocks and superblocks exchanged by the global wear leveling operation. When the global wear leveling operation is performed, some of the superblocks assigned to the first core 134-1 may be mapped to physical blocks of memory devices other than the first memory device 150-1. FIG. 6B provides an example of the following: through the global wear leveling operation, the second superblock ("2" in the "BLOCK" field) of the first core 134-1 ("1" in the "CORE" field) is mapped to physical blocks of the second memory device 150-2 ("2" in the "DEVICE" field, with the block indices listed in the "DIE1" to "DIE4" fields), and the fifth superblock ("5" in the "BLOCK" field) of the second core 134-2 ("2" in the "CORE" field) is mapped to physical blocks of the first memory device 150-1 ("1" in the "DEVICE" field). That is, according to an embodiment, the exclusive allocation of the second superblock and the fifth superblock may be exchanged between the first core 134-1 and the second core 134-2 through a global wear leveling operation.

According to an embodiment, the fifth memory 144-5 may further store an exchange information table including information of physical blocks exchanged between the cores 134-1, 134-2, 134-3, and 134-4.

Fig. 6C provides an example of a table of exchange information that may be stored in the fifth memory 144-5.

In the example of fig. 6C, the exchange information table may represent the following states: physical blocks mapped to the second superblock of the first core 134-1 are swapped for physical blocks mapped to the fifth superblock of the second core 134-2 by a global wear leveling operation. When physical blocks are swapped, the degree of wear on the superblock assigned to the first core 134-1 and the second core 134-2 may be balanced.

According to an embodiment, when the degrees of wear of the superblocks of the cores 134-1, 134-2, 134-3, and 134-4 are balanced, the global wear leveling manager 136 may refer to the swap information table to restore the mapping between the virtual blocks and the physical blocks after the swap to the mapping before the swap. For example, when the differences between the erase counts of the superblocks of the cores 134-1, 134-2, 134-3, and 134-4 become less than the threshold, the global wear leveling manager 136 may determine that their degrees of wear are balanced and restore the mapping between the virtual blocks and the physical blocks. In the example of FIG. 6C, to restore the mapping, the global wear leveling manager 136 may map the physical blocks mapped to the second superblock of the first core 134-1 to the fifth superblock of the second core 134-2, map the physical blocks mapped to the fifth superblock of the second core 134-2 to the second superblock of the first core 134-1, and remove the entry for the second superblock of the first core 134-1 and the fifth superblock of the second core 134-2 from the swap information table.
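A sketch of the swap information table of FIG. 6C and the restore step described above. The table layout is an assumption; swap_superblock_mapping() stands for the same V2P-exchange routine used when the blocks were first swapped (see the earlier sketch) and is only declared here.

```c
#include <stdbool.h>

#define MAX_SWAPS 16

/* One exchange recorded by the global wear leveling manager: the superblock
 * (core_a, sb_a) and the superblock (core_b, sb_b) currently hold each
 * other's original physical blocks. */
struct swap_record {
    int  core_a, sb_a;
    int  core_b, sb_b;
    bool valid;
};

static struct swap_record swap_table[MAX_SWAPS];

/* V2P-exchange routine (defined elsewhere, e.g. in the earlier sketch). */
void swap_superblock_mapping(int core_a, int sb_a, int core_b, int sb_b);

/* Once wear is balanced again, undo every recorded exchange and remove its
 * entry from the swap information table. */
void restore_swapped_mappings(void)
{
    for (int i = 0; i < MAX_SWAPS; i++) {
        if (!swap_table[i].valid)
            continue;
        swap_superblock_mapping(swap_table[i].core_a, swap_table[i].sb_a,
                                swap_table[i].core_b, swap_table[i].sb_b);
        swap_table[i].valid = false;
    }
}
```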

Referring back to FIG. 5, the memory system 110 may further include one or more hardware accelerators (not shown) to improve the computational processing performance of the fifth core 134-5. As a first example, the hardware accelerator may queue commands provided to the GVFL from multiple FTLs to quickly receive commands provided to the fifth core 134-5 from the first through fourth cores 134-1, 134-2, 134-3, and 134-4. As a second example, the hardware accelerator may schedule commands output from the GFIL to the plurality of memory interfaces 142-1 and 142-2 to process the commands in parallel in the memory devices 150-1, 150-2, 150-3, and 150-4.

The memory interfaces 142-1 and 142-2 may control the memory devices 150-1, 150-2, 150-3, and 150-4 based on commands received from the GFIL. Memory interfaces 142-1 and 142-2 have been described in detail with reference to FIG. 1.

Fig. 7 is a block diagram showing the structure of a memory system 110 according to a second embodiment of the present invention.

Fig. 8 is an example of an address mapping table of the memory system 110 shown in fig. 7.

The memory system 110 shown in FIG. 7 may correspond to the memory system 110 shown in FIG. 1. The memory system 110 shown in FIG. 7 may include the plurality of memories 144-1, 144-2, 144-3, and 144-4, which are omitted from the figure.

The host interface 132 may drive the HIL. The HIL may distribute requests received from the host 102 to the multiple cores 134-1, 134-2, 134-3, and 134-4 based on the logical address associated with the request.

Each of the cores 134-1, 134-2, 134-3, and 134-4 may drive an FTL, a Virtual Flash Layer (VFL), and a Flash Interface Layer (FIL).

In FIG. 7, the FTLs driven by the cores 134-1, 134-2, 134-3 and 134-4 are represented by a first FTL1, a second FTL2, a third FTL3 and a fourth FTL4, respectively.

Each of the FTLs may translate logical addresses to virtual addresses. The FTL according to the second embodiment may perform substantially similar operations as the FTL according to the first embodiment described above with reference to fig. 5. For example, a first FTL included in the first core 134-1 may store a table in the first memory 144-1 that is substantially similar to the L2V mapping table described above with reference to FIG. 6A. In addition, each of the FTLs may perform a local wear leveling operation.

In FIG. 7, the VFLs driven by the cores 134-1, 134-2, 134-3, and 134-4 are represented by a first VFL1, a second VFL2, a third VFL3, and a fourth VFL4, respectively.

Each of the VFLs may translate virtual addresses to physical addresses. For example, the first VFL may translate virtual addresses indicating the superblock assigned to the first core 134-1 to physical addresses. Each of the VFLs may store a V2P mapping table indicating mappings between virtual addresses and physical addresses in a corresponding memory.

Fig. 8 provides an example of the V2P mapping table stored in the first memory 144-1. The V2P mapping table stored in the first memory 144-1 may include the superblocks SB assigned to the first core 134-1 and the physical blocks PB mapped to the superblocks SB. Similarly, the second through fourth memories 144-2, 144-3, and 144-4 may store V2P mapping tables associated with the superblocks SB assigned to the second through fourth cores 134-2, 134-3, and 134-4, respectively. Fig. 8 provides an example of a case where one superblock is mapped to physical blocks included in four memory dies DIE1, DIE2, DIE3, and DIE4.

According to an embodiment, each of the VFLs may further perform operations to store parity data in a portion of the superblock in preparation for chip kill that may occur in the memory device.

Referring back to FIG. 7, each of the FILs may generate a command to be provided to the memory devices based on a physical address translated by the VFL included in the same core. In FIG. 7, the FILs driven by the cores 134-1, 134-2, 134-3, and 134-4 are represented by a first FIL1, a second FIL2, a third FIL3, and a fourth FIL4, respectively. For example, the first FIL may generate program commands, read commands, and erase commands for the physical blocks assigned to the first core 134-1. According to the present embodiment, depending on the global wear leveling operation, the first FIL is not limited to generating commands for physical blocks of the first memory device 150-1 and may also generate commands for physical blocks of the second to fourth memory devices 150-2, 150-3, and 150-4. Similarly, the second to fourth FILs may generate commands for controlling the first to fourth memory devices 150-1, 150-2, 150-3, and 150-4 according to the physical blocks allocated to the cores that include them, respectively.

The global wear leveling manager 136 may control the cores 134-1, 134-2, 134-3, and 134-4 to access the memories 144-1, 144-2, 144-3, and 144-4. Global wear leveling manager 136 may perform global wear leveling operations by changing the V2P mapping stored in memories 144-1, 144-2, 144-3, and 144-4.

Referring back to FIG. 8, before the global wear leveling operation is performed, each of the superblocks assigned to the first core 134-1 may be mapped to physical blocks of the memory dies DIE1, DIE2, DIE3, and DIE4 of the first memory device 150-1. The portion within the dashed box in FIG. 8 represents the case where the second superblock of the first core 134-1 is mapped to physical blocks of the second memory device 150-2 by the global wear leveling operation.

According to an embodiment, the memory system 110 may further include a sixth memory (not shown) for storing data required to drive the global wear leveling manager 136. Global wear leveling manager 136 may further store a swap information table that includes information for physical blocks that are swapped between cores. The exchange information table is substantially similar to the exchange information table described above with reference to fig. 6C.

According to an embodiment, when the degree of wear of the superblock between cores 134-1, 134-2, 134-3, and 134-4 is balanced, global wear leveling manager 136 may refer to the swap information table to restore the mapping between the virtual block and the physical block after the swap to the mapping between the virtual block and the physical block before the swap. The exchanged physical block may be exchanged again and the information of the physical block stored in the exchange information table may be removed.

The plurality of memory interfaces 142-1 and 142-2 may control the memory devices 150-1, 150-2, 150-3, and 150-4 based on commands received from the plurality of FILs. Memory interfaces 142-1 and 142-2 have been described in detail with reference to FIG. 1.

The memory system 110 described with reference to fig. 1 to 8 includes: a first memory device 150-1, a second memory device 150-2, a first core 134-1, a second core 134-2, a global wear leveling manager 136, and a host interface 132.

The first memory device 150-1 may include a plurality of first physical blocks, and the second memory device 150-2 may include a plurality of second physical blocks. Each of the first super blocks allocated to the first core 134-1 may be mapped to a plurality of physical blocks of a plurality of first physical blocks, and each of the second super blocks allocated to the second core 134-2 may be mapped to a plurality of physical blocks of a plurality of second physical blocks.

The global wear leveling manager 136 may swap a first physical block mapped to a first superblock for a second physical block mapped to a second superblock based on the extent of wear of the first and second superblocks. For example, global wear leveling manager 136 may swap the physical blocks mapped between the global maximum superblock included in the core with the highest requested execution frequency and the global minimum superblock included in the core with the lowest requested execution frequency.

The host interface 132 may distribute logical addresses received from the host 102 to the first core 134-1 and the second core 134-2. For example, the host interface 132 may classify the logical addresses into a first logical address and a second logical address according to a predetermined operation, provide the first logical address to the first core 134-1, and provide the second logical address to the second core 134-2.

When the first core 134-1 receives the first logical address from the host interface 132, the first core 134-1 may search for a superblock corresponding to the first logical address and physical blocks mapped to the superblock.

According to an embodiment of the present disclosure, the degree of wear of physical blocks included in the entire memory system 110 is equalized or balanced, thereby improving the lifespan of the memory system 110.

According to an embodiment of the present disclosure, a controller and a memory system that can equalize or balance wear levels of memory blocks included in different memory devices may be provided.

According to the embodiments of the present disclosure, it is possible to provide a controller and a memory system that can improve the lifespan of a plurality of memory devices by uniformly using all memory blocks included in the memory devices.

These effects obtainable in the present disclosure are not limited to the above-described embodiments, and other effects not described herein will be clearly understood by those skilled in the art to which the present disclosure pertains from the above detailed description.

Although the present disclosure has been described with respect to specific embodiments, it will be apparent to those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the disclosure as defined in the following claims.
