Data synchronization method and device, electronic equipment and storage medium

Document No.: 7639 | Publication date: 2021-09-17 | Original language: Chinese

1. A data synchronization method, wherein the method is executed by a server in any one of a plurality of computer rooms implemented based on Redis, and the method comprises the following steps:

acquiring data to be synchronized;

determining a processing thread corresponding to the data to be synchronized according to the service identifier of the data to be synchronized;

and writing the data to be synchronized into the message queue in the corresponding service partition by using the processing thread, and synchronizing the data in the message queue to other computer rooms.

2. The method according to claim 1, wherein the determining, according to the service identifier of the data to be synchronized, the processing thread corresponding to the data to be synchronized includes:

performing hash operation on the service identifier of the data to be synchronized to obtain a hash value of the service identifier;

and determining a processing thread corresponding to the data to be synchronized according to the hash value of the service identifier and the number of threads in the thread pool.

3. The method of claim 1, wherein writing the data to be synchronized to a message queue in a corresponding service partition using the processing thread comprises:

determining a service partition corresponding to the data to be synchronized from at least two service partitions according to the service identification of the data to be synchronized;

and writing the data to be synchronized into a message queue in a corresponding service partition by adopting the processing thread.

4. The method of claim 1, wherein synchronizing the data in the message queue to other computer rooms comprises:

and synchronizing the data in the message queue to the other computer rooms through a synchronizer corresponding to each service partition.

5. The method according to claim 1, wherein the determining, according to the service identifier of the data to be synchronized, the processing thread corresponding to the data to be synchronized further comprises:

identifying whether the data to be synchronized is loopback data;

if not, determining a processing thread corresponding to the data to be synchronized according to the service identifier of the data to be synchronized.

6. The method of claim 5, further comprising:

and if the data to be synchronized is identified as loopback data, refraining from synchronizing the data to be synchronized to the other computer rooms.

7. A data synchronization apparatus configured in a server in any computer room implemented based on Redis, the apparatus comprising:

the data acquisition module is used for acquiring data to be synchronized;

the thread determining module is used for determining a processing thread corresponding to the data to be synchronized according to the service identifier of the data to be synchronized;

and the data synchronization module is used for writing the data to be synchronized into the message queue in the corresponding service partition by using the processing thread, and synchronizing the data in the message queue to other computer rooms.

8. An electronic device, comprising:

one or more processors;

a memory for storing one or more programs;

wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the data synchronization method of any one of claims 1-6.

9. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the data synchronization method of any one of claims 1 to 6.

Background

Redis, as a cache database, is very widely used in various service systems. It caches hot data in memory, reducing the system's accesses to the database and disk, relieving pressure on the database, and improving the access performance and efficiency of services; it therefore plays an important role in software architecture.

With the rapid development of the internet, the scale of data and the number of users have grown rapidly, and requirements such as high concurrency and remote disaster recovery have become ever more demanding. At present, a geo-distributed master-slave deployment is generally adopted: a plurality of Redis-based computer rooms are deployed in different regions, one of them serving as the master computer room and the rest as slave computer rooms. However, this geo-distributed master-slave architecture suffers from high data transmission and synchronization delay and a data write performance bottleneck, and needs to be improved.

Disclosure of Invention

The application provides a data synchronization method, a data synchronization device, an electronic device and a storage medium, so as to reduce time delay and improve data synchronization throughput.

In a first aspect, an embodiment of the present application provides a data synchronization method, which is executed by a master server in any computer room implemented based on Redis, and the method includes:

acquiring data to be synchronized;

determining a processing thread corresponding to the data to be synchronized according to the service identifier of the data to be synchronized;

and writing the data to be synchronized into the message queue in the corresponding service partition by adopting the processing thread, and synchronizing the data in the message queue to other machine rooms.

In a second aspect, an embodiment of the present application further provides a data synchronization apparatus configured in the master server in any computer room implemented based on Redis, where the apparatus includes:

the data acquisition module is used for acquiring data to be synchronized;

the thread determining module is used for determining a processing thread corresponding to the data to be synchronized according to the service identifier of the data to be synchronized;

and the data synchronization module is used for writing the data to be synchronized into the message queue in the corresponding service partition by adopting the processing thread and synchronizing the data in the message queue to other machine rooms.

In a third aspect, an embodiment of the present application further provides an electronic device, where the electronic device includes:

one or more processors;

a memory for storing one or more programs;

wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement a data synchronization method as in any embodiment of the present application.

In a fourth aspect, the embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the data synchronization method according to any embodiment of the present application.

According to the technical solution of the embodiments of the application, the method can be executed by a server in any Redis-based computer room. By introducing the service identifier and combining processing threads with service partitions, the throughput of data synchronization is improved and the synchronization order between related service data is maintained; meanwhile, because there is no master-slave distinction between computer rooms, the resources of every computer room can be fully utilized. In addition, compared with the existing master-slave deployment scheme, every computer room has both write and read permissions, so data transmission and synchronization delay can be reduced, rapid remote disaster recovery is achieved, and service reliability is further improved. This provides a new approach to real-time data synchronization among computer rooms in different regions.

Drawings

Fig. 1A is a flowchart of a data synchronization method provided in an embodiment of the present application;

Fig. 1B is a diagram of a data synchronization system architecture according to an embodiment of the present application;

Fig. 2A is a flowchart of another data synchronization method provided by an embodiment of the present application;

Fig. 2B is a schematic diagram of a data write service partition according to an embodiment of the present application;

Fig. 3 is a flowchart of another data synchronization method provided by an embodiment of the present application;

Fig. 4 is a block diagram of a data synchronization apparatus according to an embodiment of the present application;

Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.

Detailed Description

The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures.

Fig. 1A is a flowchart of a data synchronization method provided in an embodiment of the present application, and fig. 1B is a data synchronization system architecture diagram provided in an embodiment of the present application. This embodiment is applicable to data synchronization among a plurality of computer rooms implemented based on Redis. Optionally, each Redis-based computer room may be regarded as a Redis cluster; the computer rooms may be deployed in different regions and are fully peer-to-peer, that is, each computer room supports both reads and writes, and there is no master-slave distinction between them. Furthermore, each computer room comprises at least two servers; the servers may all be fully peer-to-peer, or one server may be designated (and switched over as needed) as the master server with the others acting as slave servers.

The method of this embodiment may be executed by the data synchronization apparatus provided in this embodiment. The apparatus may be implemented in software and/or hardware and may be integrated in any computer room, specifically in any server of any computer room, and is particularly suitable for integration in the master server of any computer room. With reference to fig. 1A and fig. 1B, the data synchronization method specifically includes:

S110, acquiring data to be synchronized.

In this embodiment, the data to be synchronized is data acquired by the local computer room that needs to be synchronized to the other computer rooms; optionally, the data to be synchronized acquired by the local computer room (more specifically, by the master server in the local computer room) may be data written by the service end. The local computer room is the computer room to which the server executing this data synchronization method belongs, and the service end may be a terminal held by a user. For example, as shown in fig. 1B, when a user needs to write data, the user may send the data to be written to the access layer through the service end; the access layer selects a computer room (i.e., the local computer room, such as computer room A) from at least two computer rooms according to the region (or, further, the IP address) to which the user belongs, and sends the user's data to the master server in the selected computer room. The master server in the local computer room thus receives the data to be written and may write it into its locally configured Redis for subsequent synchronization to the other computer rooms to ensure data consistency; in this way, the master server in the local computer room acquires the data to be synchronized.

The access layer is used to manage access between each computer room and the service end; in this embodiment, a console may serve as the access layer. Specifically, the access layer may select one computer room from the at least two computer rooms according to the region to which the user belongs, either by selecting the computer room located in the same region as the user, or by selecting the computer room closest to the user's region.
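The region-based room selection described above can be sketched as follows; the room names, regions, and distance table are illustrative assumptions rather than details of the embodiment:

```python
# Hypothetical mapping from computer room to the region it is deployed in.
ROOMS = {"room_a": "north", "room_b": "east", "room_c": "south"}

# Hypothetical inter-region distances (e.g. network RTT in ms).
DISTANCE = {
    ("north", "east"): 30, ("north", "south"): 50, ("east", "south"): 40,
}

def region_distance(r1: str, r2: str) -> int:
    """Symmetric lookup; 0 for the same region, a large value if unknown."""
    if r1 == r2:
        return 0
    return DISTANCE.get((r1, r2), DISTANCE.get((r2, r1), 999))

def select_room(user_region: str, healthy_rooms: set) -> str:
    """Prefer the room in the user's own region; otherwise the nearest healthy room."""
    candidates = [r for r in ROOMS if r in healthy_rooms]
    return min(candidates, key=lambda r: region_distance(user_region, ROOMS[r]))

# A user in "north" is routed to room_a; if room_a fails, the nearest
# healthy room (room_b) takes over without the user noticing.
assert select_room("north", {"room_a", "room_b", "room_c"}) == "room_a"
assert select_room("north", {"room_b", "room_c"}) == "room_b"
```

The same policy covers both the normal case and the disaster-recovery case described below: failover is simply the nearest-room rule applied to the remaining healthy rooms.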

It should be noted that, in the existing master-slave deployment scheme, only the master node has write permission; if a user needs to perform a write operation at the deployment site of a slave node, the request must travel to the remote site where the master node is located, which results in higher data transmission and synchronization delay. In this embodiment, by introducing the access layer and assigning computer rooms based on the user's region, the network transmission distance is reduced, and so is the delay. Furthermore, if the computer room of the user's region fails, the user's data is transparently routed to the computer room nearest to the user's region, that is, rapid remote disaster recovery is achieved, which greatly improves service reliability.

And S120, determining a processing thread corresponding to the data to be synchronized according to the service identifier of the data to be synchronized.

In this embodiment, the service identifier uniquely identifies a service and may be, for example, a service code; different services have different service identifiers. Optionally, the access layer may present to the user the service specifications of the various services the system can process, so that the user performs write and/or read operations according to those specifications; a service specification may include the service identifier. Further, this embodiment may obtain the service identifier of the data to be synchronized from a designated field.

For example, with continued reference to fig. 1B, in this embodiment any server of the local computer room (for example, computer room A) may be configured with a replication module, which may consist of a replicator, Kafka, a synchronizer, and the like. The replicator in the server serving as the master node (i.e., the master server) of the local computer room can impersonate a Redis slave node and synchronize data from the locally configured Redis in real time based on the master-slave replication protocol; in other words, the replicator replicates the data to be synchronized from the locally configured Redis. The replicator can then determine the processing thread corresponding to the data to be synchronized according to the service identifier.

Optionally, in an implementation manner, a plurality of ordered threads may be preset according to the number of services that can be served by the system, and an association relationship between the threads and the service identifiers is established, where each thread may process data of at least one service; and then, determining a processing thread corresponding to the data to be synchronized according to the service identifier of the data to be synchronized and the incidence relation between the thread and the service identifier.

It should be noted that, in the prior art, using the Keyspace Notification feature can affect the performance of the service Redis and intrudes on the existing Redis architecture; in this embodiment, the replicator impersonates a Redis slave node to replicate data, so the change can be made with zero intrusion into the existing Redis architecture.

S130, writing the data to be synchronized into the message queue in the corresponding service partition by adopting the processing thread, and synchronizing the data in the message queue to other machine rooms.

Optionally, in this embodiment, different services are partitioned using Kafka's multi-partition capability; for example, one service partition may correspond to one service. A service partition is associated with a message queue, so by partitioning the services, data of different services can be written into different message queues; the data stored in a message queue is exactly the data that needs to be synchronized to the other computer rooms.
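The property that data of the same service always lands in the same message queue can be illustrated with an in-memory stand-in for the Kafka partitions; the hash choice and partition count are assumptions for illustration only:

```python
import hashlib
from collections import defaultdict, deque

NUM_PARTITIONS = 4  # illustrative; a real deployment would configure this in Kafka

# One in-memory queue per service partition, standing in for Kafka partitions.
queues = defaultdict(deque)

def partition_for(service_id: str) -> int:
    # A stable hash, so the same service identifier always maps to the
    # same partition across processes and restarts.
    digest = hashlib.md5(service_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

def enqueue(service_id: str, command: str) -> None:
    queues[partition_for(service_id)].append(command)

enqueue("svc_user", "set user:1 alice")
enqueue("svc_user", "del user:1")

# Commands of the same service share one queue and keep their write order.
assert list(queues[partition_for("svc_user")]) == ["set user:1 alice", "del user:1"]
```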

According to an implementation mode, an association relationship between a service identifier and a service partition is established in advance, and then the service partition corresponding to data to be synchronized can be determined according to the service identifier of the data to be synchronized and the association relationship between the service identifier and the service partition, and then the data to be synchronized can be written into a message queue in the corresponding service partition by adopting a processing thread corresponding to the data to be synchronized.

It should be noted that Redis, as a high-performance cache database, receives a huge volume of accesses in a production environment. In this embodiment, by using Kafka multi-partitioning, data of different services is written into message queues in different partitions, and data of the same service is written into the message queue of the same service partition. This not only maintains the synchronization order between related service data, but also provides traffic peak shaving and protects against data loss. Meanwhile, this embodiment introduces a plurality of ordered threads, so that data of different services can be synchronized concurrently, improving throughput.

Further, after the data to be synchronized is written into the message queue in the corresponding service partition, the data can be taken out of the message queue in order and synchronized to the other computer rooms through a Redis pipeline. For example, with continued reference to fig. 1B, a synchronizer configured on the master server in the local computer room (e.g., computer room A) may fetch data from the message queue in order and synchronize it to the other computer rooms, such as computer rooms B, C, and D, so that the Redis instances of all computer rooms reach eventual consistency. Specifically, the data may be synchronized to the master server in each other computer room, for example the master server in computer room B, which in turn synchronizes it to the slave servers in computer room B.
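A synchronizer's drain loop can be sketched as below; `send_batch` stands in for flushing a Redis pipeline to a remote computer room and is an assumption, not the embodiment's actual interface:

```python
from collections import deque

def drain_and_sync(queue: deque, send_batch, batch_size: int = 100) -> int:
    """Take commands out of the message queue in order and forward them in
    batches, mimicking pipeline-based synchronization to another room."""
    synced = 0
    while queue:
        batch = []
        while queue and len(batch) < batch_size:
            batch.append(queue.popleft())
        send_batch(batch)  # e.g. a pipeline flush against the remote Redis
        synced += len(batch)
    return synced

sent = []
q = deque(["set a 1", "set b 2", "del a"])
assert drain_and_sync(q, sent.append, batch_size=2) == 3
# Order is preserved across batches: "del a" still follows "set a 1".
assert sent == [["set a 1", "set b 2"], ["del a"]]
```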

Optionally, any server of the local computer room may be configured with a plurality of synchronizers. In yet another implementation, the data in the message queue may be synchronized to the other computer rooms through the synchronizer corresponding to each service partition.

Here, one synchronizer may obtain data from the message queues of one or more service partitions, and different synchronizers obtain data from the message queues of different service partitions. Optionally, a synchronizer may be allocated to each service partition in advance according to the configured number of synchronizers and the number of service partitions.
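One simple allocation matching the description above, each synchronizer serving one or more partitions with no partition shared, is round-robin; this is an illustrative policy, not necessarily the embodiment's actual one:

```python
def assign_synchronizers(num_partitions: int, num_synchronizers: int) -> dict:
    """Map each service partition to a synchronizer round-robin, so every
    synchronizer reads from one or more partitions and no partition is
    read by two synchronizers."""
    return {p: p % num_synchronizers for p in range(num_partitions)}

# Six partitions served by two synchronizers: three partitions each.
assignment = assign_synchronizers(6, 2)
assert [p for p, s in assignment.items() if s == 0] == [0, 2, 4]
assert [p for p, s in assignment.items() if s == 1] == [1, 3, 5]
```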

Furthermore, the master server in the local computer room can synchronize the data in the message queues to the other computer rooms through the synchronizer corresponding to each service partition. It can be understood that introducing a plurality of synchronizers to synchronize data greatly improves data throughput.

The technical solution provided by this embodiment of the application can be executed by a server in any Redis-based computer room. By introducing the service identifier and combining processing threads with service partitions, it not only improves the throughput of data synchronization but also maintains the synchronization order between related service data; meanwhile, because there is no master-slave distinction between computer rooms, the resources of every computer room can be fully utilized. In addition, compared with the existing master-slave deployment scheme, every computer room in this embodiment has both write and read permissions, so data transmission and synchronization delay can be reduced, rapid remote disaster recovery is achieved, and service reliability is further improved. This provides a new approach to real-time data synchronization among computer rooms in different regions.

In order to control the propagation of non-compliant data, for example, on the basis of the foregoing embodiments, compliance detection may be performed on the data to be synchronized before determining its corresponding processing thread according to the service identifier. Specifically, it may be detected whether the format of the data to be synchronized meets requirements, whether its service identifier is compliant, whether it contains prohibited content, and the like.

Fig. 2A is a flowchart of another data synchronization method provided in the embodiment of the present application, and the embodiment further explains how to determine a processing thread corresponding to data to be synchronized based on the above embodiment. Referring to fig. 2A, the data synchronization method includes:

S210, acquiring data to be synchronized.

S220, carrying out hash operation on the service identification of the data to be synchronized to obtain a hash value of the service identification.

Specifically, the embodiment may obtain the service identifier of the data to be synchronized, and perform hash operation on the service identifier of the data to be synchronized by using a set hash algorithm to obtain a hash value of the service identifier.

S230, determining a processing thread corresponding to the data to be synchronized according to the hash value of the service identifier and the number of threads in the thread pool.

In this embodiment, a thread pool is formed by a plurality of threads, and each thread in the thread pool is an ordered thread.

Optionally, at the replicator stage, the hash value of the service identifier may be taken modulo the number of threads in the thread pool, and the processing thread corresponding to the data to be synchronized is determined from the modulo result. For example, each thread in the thread pool may be assigned a unique sequence number in advance; as shown in fig. 2B, the thread pool includes thread 1, thread 2, thread 3, and so on. The thread whose sequence number matches the modulo result is then used as the processing thread corresponding to the data to be synchronized.
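Steps S220-S230 can be sketched as follows; MD5 is only one possible choice for the "set hash algorithm", and the pool size is an assumption:

```python
import hashlib

NUM_THREADS = 4  # illustrative size of the ordered thread pool

def thread_index(service_id: str) -> int:
    """Hash the service identifier (S220) and take it modulo the number of
    threads in the pool (S230) to pick the processing thread."""
    h = int.from_bytes(hashlib.md5(service_id.encode()).digest()[:8], "big")
    return h % NUM_THREADS

# The same service always maps to the same thread, so writes of one
# service are serialized while different services run concurrently.
assert thread_index("svc_order") == thread_index("svc_order")
assert 0 <= thread_index("svc_user") < NUM_THREADS
```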

Further, after determining the processing thread corresponding to the data to be synchronized, the data to be synchronized may be placed under the corresponding processing thread for execution.

It can be understood that, with this embodiment, data of the same service is executed by the same thread, which guarantees the order in which related service data is written into the message queue and avoids, for example, a later "delete a" being executed before the earlier "set a".

S240, writing the data to be synchronized into the message queue in the corresponding service partition using the processing thread, and synchronizing the data in the message queue to other computer rooms.

According to the technical solution provided by this embodiment of the application, the processing thread corresponding to the data to be synchronized is determined from the hash value of the service identifier and the number of threads in the thread pool. This guarantees the order in which related service data is written into the message queue and avoids, for example, a later "delete a" being executed before the earlier "set a".

Optionally, on the basis of any of the foregoing embodiments, the processing thread is adopted, and the writing of the data to be synchronized into the message queue in the corresponding service partition may be: determining a service partition corresponding to the data to be synchronized from at least two service partitions according to the service identification of the data to be synchronized; and writing the data to be synchronized into the message queue in the corresponding service partition by adopting a processing thread.

Further, in this embodiment, the number of service partitions is greater than the number of threads in the thread pool. For example, the number of traffic partitions may be 2 times the number of threads in the thread pool.

Specifically, a set hash algorithm may be used to hash the service identifier of the data to be synchronized, obtaining a hash value of the service identifier; the hash value is then taken modulo the number of service partitions, and the service partition corresponding to the data to be synchronized is determined from the modulo result; finally, the processing thread writes the data to be synchronized into the message queue in the corresponding service partition.

For example, with reference to fig. 2B, assume that the processing thread corresponding to the data to be synchronized is thread 1 in the thread pool, and that one part of the data corresponds to service partition 1 and another part to service partition 2; thread 1 then writes the data into the message queue of service partition 1 and the message queue of service partition 2 accordingly.

It can be understood that, with the above scheme, data of the same service is written into the message queue of the same service partition, thereby maintaining the synchronization order between related service data. Furthermore, combining the thread pool, Kafka multi-partitioning, and a plurality of synchronizers greatly improves data synchronization throughput.
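Routing a service to both a thread and a partition with a shared hash, with twice as many partitions as threads as suggested above, can be sketched like this (the constants are illustrative):

```python
import hashlib

NUM_THREADS = 2
NUM_PARTITIONS = 2 * NUM_THREADS  # partitions outnumber threads

def route(service_id: str):
    """Return (thread index, partition index) for a service identifier,
    reusing one hash value for both modulo operations."""
    h = int.from_bytes(hashlib.md5(service_id.encode()).digest()[:8], "big")
    return h % NUM_THREADS, h % NUM_PARTITIONS

thread, partition = route("svc_a")
assert route("svc_a") == (thread, partition)  # stable routing per service
# Every partition a thread feeds agrees with the thread's own index,
# so one thread may serve several partitions but never shares a partition.
assert partition % NUM_THREADS == thread
```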

Fig. 3 is a flowchart of another data synchronization method provided in an embodiment of the present application, and the present embodiment further explains how to determine a processing thread corresponding to data to be synchronized based on the foregoing embodiment. Referring to fig. 3, the data synchronization method includes:

S310, acquiring data to be synchronized.

S320, identifying whether the data to be synchronized is loopback data; if not, executing S330; if yes, executing S350.

It should be noted that, with reference to fig. 1B, the data to be synchronized acquired by the master server in the local computer room may be data written by the service end, or it may be data written to it by the master server of another computer room; for example, the master server in computer room B extracts data from its message queue through the synchronizer and synchronizes it to computer room A, specifically by writing it into the Redis on the master server of computer room A. Further, in this embodiment, data transmitted in the system can be classified into two types according to its source: source data (i.e., data written by the service end) and loopback data (i.e., data synchronized from other computer rooms).

It can be understood that, for data obtained from other computer rooms (i.e., loopback data), if the local computer room synchronized it back to the other computer rooms, the same data would be executed again and again without end, that is, synchronized repeatedly, wasting memory and resources.

Therefore, to avoid repeated synchronization of data, in this embodiment, after the data to be synchronized is acquired, it may be examined to determine whether it is loopback data. Specifically, loopback verification may be performed on the data to be synchronized, and whether it is loopback data is determined from the execution result; alternatively, it may be checked whether the data to be synchronized carries loopback verification data, and if so, the data to be synchronized can be determined to be loopback data.

For example, referring to fig. 1B, assume the local computer room is computer room A and the service end writes data such as "set prefix:aKey 1" to the master server of computer room A, where prefix denotes the service identifier; the master server of computer room A has thereby acquired data to be synchronized. The replicator in the master server of computer room A impersonates a Redis slave node and synchronizes data from the locally configured Redis in real time based on the master-slave protocol, i.e., it replicates the data to be synchronized from the locally configured Redis. The replicator then performs loopback verification, specifically "del prefix:circleKey:md5"; if this command succeeds, the data to be synchronized is loopback data. If it is not loopback data, the replicator determines the processing thread corresponding to the data to be synchronized according to the service identifier, and the processing thread writes the data into the message queue in the corresponding service partition; the synchronizer takes the data out of the service partition's message queue, computes its loopback verification value, i.e., MD5("set prefix:aKey 1"), obtaining the key prefix:circleKey:md5, and then writes the loopback verification data "setex prefix:circleKey:md5 10 1" together with the real data "set prefix:aKey 1" to the other computer rooms. Here 10 denotes a synchronization delay window of 10 s, which can be adjusted according to actual conditions.

Then, in another computer room, for example computer room B, the replicator in the master server likewise impersonates a Redis slave node, synchronizes data from the locally configured Redis in real time based on the master-slave protocol, and executes "del prefix:circleKey:md5". If that command succeeds, the data to be synchronized is loopback data; in that case it is not handed to any thread and is not written into a message queue, so computer room B will not synchronize the data back to computer room A, and the loop is broken.
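The loopback check in the walkthrough above can be modeled as follows; a plain dict stands in for the Redis marker keys written with setex, and the key layout (prefix:circleKey:&lt;md5 of the command&gt;) follows the example but remains an assumption about the real implementation:

```python
import hashlib

marker_store = {}  # stands in for Redis marker keys written via SETEX

def loopback_marker(command: str) -> str:
    """Key under which the synchronizer records that `command` was
    already forwarded to another room."""
    return "prefix:circleKey:" + hashlib.md5(command.encode()).hexdigest()

def mark_as_synced(command: str) -> None:
    # The synchronizer writes the marker before the real data
    # (cf. "setex prefix:circleKey:md5 10 1"; the TTL is omitted here).
    marker_store[loopback_marker(command)] = "1"

def is_loopback(command: str) -> bool:
    # The replicator attempts DEL on the marker; success means the write
    # came from another room and must not be re-synchronized.
    return marker_store.pop(loopback_marker(command), None) is not None

mark_as_synced("set prefix:aKey 1")
assert is_loopback("set prefix:aKey 1")        # loop broken: not re-queued
assert not is_loopback("set prefix:aKey 1")    # the marker is consumed once
assert not is_loopback("set prefix:bKey 2")    # source data passes through
```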

S330, determining a processing thread corresponding to the data to be synchronized according to the service identification of the data to be synchronized.

S340, writing the data to be synchronized into the message queue in the corresponding service partition by adopting the processing thread, and synchronizing the data in the message queue to other machine rooms.

S350, refraining from synchronizing the data to be synchronized to the other computer rooms.

According to the above technical solution, after the data to be synchronized is acquired, it is first determined whether it is loopback data, and the subsequent data synchronization operations are executed only if it is not. By adding this loopback data identification step, the embodiment avoids repeated synchronization of data.

Fig. 4 is a block diagram of a data synchronization apparatus provided in an embodiment of the present application. The apparatus may be integrated in any machine room, specifically in any server of any machine room, and is particularly suitable for integration in the main server of any machine room. The apparatus can execute the data synchronization method provided by any embodiment of the present application, and has the functional modules and beneficial effects corresponding to the executed method. As shown in fig. 4, the data synchronization apparatus 400 includes:

a data obtaining module 410, configured to obtain data to be synchronized;

the thread determining module 420 is configured to determine, according to the service identifier of the data to be synchronized, a processing thread corresponding to the data to be synchronized;

and the data synchronization module 430 is configured to write the data to be synchronized into the message queue in the corresponding service partition by using the processing thread, and to synchronize the data in the message queue to other machine rooms.

In the technical scheme provided by the embodiment of the present application, the method can be executed by a server in any machine room implemented based on Redis. Introducing the service identifier and combining the processing threads with the service partitions not only improves the throughput of data synchronization but also maintains the synchronization order between related service data. Meanwhile, since there is no master-slave distinction between machine rooms, the resources of each machine room can be fully utilized. In addition, compared with the existing master-slave deployment scheme, there is no master-slave allocation between machine rooms in this embodiment, i.e., any machine room has both write and read permissions, which reduces data transmission and synchronization latency, realizes rapid remote disaster recovery, and further improves service reliability. This provides a new approach to real-time data synchronization between machine rooms in different locations.

Illustratively, the thread determining module 420 is specifically configured to:

performing hash operation on the service identifier of the data to be synchronized to obtain a hash value of the service identifier;

and determining a processing thread corresponding to the data to be synchronized according to the hash value of the service identifier and the number of threads in the thread pool.
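The thread selection described by the two steps above (hash the service identifier, then take it modulo the thread-pool size) can be sketched as follows. The use of MD5 as the stable hash and the pool size of 4 are assumptions for illustration; the text only requires that the same service identifier always map to the same thread.

```python
import hashlib

NUM_THREADS = 4  # assumed thread-pool size for illustration

def thread_index(service_id: str, num_threads: int = NUM_THREADS) -> int:
    """Map a service identifier to a fixed processing thread.

    A stable hash is used (not Python's per-process salted hash()) so the
    same service identifier always lands on the same thread, which is what
    preserves the synchronization order between related service data.
    """
    h = int(hashlib.md5(service_id.encode("utf-8")).hexdigest(), 16)
    return h % num_threads

# All writes carrying one service identifier go through one thread:
print(thread_index("prefix") == thread_index("prefix"))
print(0 <= thread_index("otherService") < NUM_THREADS)
```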

Illustratively, when the data synchronization module 430 uses a processing thread to write data to be synchronized into a message queue in a corresponding service partition, the data synchronization module is specifically configured to:

determining a service partition corresponding to the data to be synchronized from at least two service partitions according to the service identification of the data to be synchronized;

and writing the data to be synchronized into the message queue in the corresponding service partition by adopting a processing thread.
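The partition routing and queue write described above can be sketched as follows. This is a minimal model under stated assumptions: the hash-modulo routing and the two-partition default are illustrative, since the text only requires at least two service partitions keyed by service identifier.

```python
import hashlib
from queue import Queue

class PartitionedWriter:
    """Route each write to the message queue of its service partition."""

    def __init__(self, num_partitions: int = 2):
        self.queues = [Queue() for _ in range(num_partitions)]

    def partition_for(self, service_id: str) -> int:
        # Stable hash so one service always maps to one partition
        h = int(hashlib.md5(service_id.encode("utf-8")).hexdigest(), 16)
        return h % len(self.queues)

    def write(self, service_id: str, command: str) -> int:
        """Enqueue the command on its service partition; return the index."""
        p = self.partition_for(service_id)
        self.queues[p].put(command)
        return p

w = PartitionedWriter()
p1 = w.write("prefix", "set prefix:aKey 1")
p2 = w.write("prefix", "set prefix:bKey 2")
# Same service identifier -> same partition, commands kept in order
print(p1 == p2, w.queues[p1].qsize())
```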

For example, when synchronizing data in the message queue to other machine rooms, the data synchronization module 430 is specifically configured to:

and synchronizing the data in the message queue to other machine rooms through the synchronizer corresponding to the service partition.
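A per-partition synchronizer as described above can be sketched as a simple queue drain. The transport callback is a hypothetical stand-in for the real cross-room channel, which the text does not specify.

```python
from queue import Queue, Empty

def drain_and_ship(queue: Queue, send_to_remote) -> int:
    """One synchronizer per service partition: pull commands off that
    partition's queue in order and forward each to the other machine
    rooms via send_to_remote (a stand-in for the real transport)."""
    shipped = 0
    while True:
        try:
            cmd = queue.get_nowait()
        except Empty:
            break
        send_to_remote(cmd)
        shipped += 1
    return shipped

q = Queue()
q.put("set prefix:aKey 1")
q.put("set prefix:bKey 2")
sent = []
print(drain_and_ship(q, sent.append), sent)
```

Because each partition has its own synchronizer, ordering within a service is preserved while partitions ship in parallel.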

Illustratively, the thread determining module 420 is further specifically configured to:

identifying whether the data to be synchronized is loopback data;

if not, determining a processing thread corresponding to the data to be synchronized according to the service identifier of the data to be synchronized.

Exemplarily, the apparatus further includes:

and the prohibition module is configured to prohibit synchronization of the data to be synchronized to other machine rooms if the data to be synchronized is identified as loopback data.

Fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present application, and fig. 5 shows a block diagram of an exemplary device suitable for implementing an embodiment of the present application. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.

As shown in fig. 5, electronic device 50 is embodied in the form of a general purpose computing device. The components of the electronic device 50 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.

Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.

Electronic device 50 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 50 and includes both volatile and nonvolatile media, removable and non-removable media.

The system memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. The electronic device 50 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, and commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. System memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.

A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally perform the functions and/or methodologies of the embodiments described herein.

Electronic device 50 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 50, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 50 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 50 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 50 over the bus 18. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 50, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.

The processing unit 16 executes various functional applications and data processing by executing programs stored in the system memory 28, for example, to implement the data synchronization method provided in the embodiments of the present application.

Embodiments of the present application further provide a computer-readable storage medium on which a computer program (also referred to as computer-executable instructions) is stored; when executed by a processor, the program performs the data synchronization method provided in the embodiments of the present application.

The computer storage media of the embodiments of the present application may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.

A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.

Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.

Computer program code for carrying out operations for embodiments of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).

It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the embodiments of the present application have been described in more detail through the above embodiments, the embodiments of the present application are not limited to the above embodiments, and many other equivalent embodiments may be included without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.
