Distributed data source heterogeneous synchronization method


1. A method for heterogeneous synchronization of distributed data sources, the method comprising:

(1) front-end data processing: creating the configuration information to be filled in for a task and judging and selecting the task type, wherein the task types comprise full-volume and batch synchronization; calling the corresponding service interface according to the selected transfer parameters;

(2) setting aliases for different distributed database clusters through a distributed data source configuration center, wherein each alias represents a complete cluster or a single-node data configuration environment; finding the matched configuration information for each cluster and randomly generating a key corresponding to each task; if matched configuration information exists, creating a data source synchronization processor, creating and caching the task information, creating a client and a connection according to the specific information of the data source, and placing the task into a thread pool to await scheduling; if no matched configuration information exists, storing the task information into a database;

(3) starting the task and processing data through a data batch processor, wherein the task has two modes: full-volume and batch data synchronization; when an error occurs or the task completes, storing the task information into the database and ending the task;

the full-volume data synchronization means that all data between two heterogeneous data sources is synchronized; the batch data synchronization means that partial fields are synchronized among a plurality of heterogeneous data sources, which requires data cutting and judging the type and name of each column;

wherein the data sources supporting the full-volume synchronization mode comprise: the relational data source MySQL, the distributed data source HBase, and Elasticsearch;

wherein the data sources supporting the batch synchronization mode comprise: the relational data source MySQL, the distributed data source HBase, Elasticsearch, Oracle, SQLServer, MongoDB, TXTFile, and FTP.

2. The method for heterogeneous synchronization of distributed data sources according to claim 1, wherein the batch-synchronized data is exported or imported by the data batch processor and is cut for read-write management; the batch synchronization mode of the relational data source MySQL is realized by exporting data into sql files and applying data segmentation.

3. The method for heterogeneous synchronization of distributed data sources according to claim 1, further comprising:

when the server where a database in the task is located, or the database itself, has a problem, the task is suspended or stopped: the key value is passed through the calling interface, the task information corresponding to the key value is looked up in the server cache, the state value of the task is modified to suspended or stopped, the task information in the database is then updated, and a message is sent to the message processor, which forwards it to the client through WebSocket so as to suspend or stop the task.

4. The method for heterogeneous synchronization of distributed data sources according to any one of claims 1-3, further comprising:

after the fault is repaired, the service is restarted: after the task information is obtained, the task state value is modified to running, and the converted file, or the batch data imported by the data batch processor, is located through the start Key value and end Key value recorded in the cache.

Background

With the rapid development of modern e-commerce, big data analysis, and artificial intelligence technology, integrating large amounts of heterogeneous data to realize information sharing and internal information integration has become increasingly important. At present, the informatization efforts of most enterprises face the problem that heterogeneous data sources are difficult to synchronize. Typically, the sub-modules that implement specific functions are closed systems built around a single enterprise information system, with poor extensibility and interactivity; the various information systems are mutually independent, so the advantages of different data sources cannot be fully exploited to integrate data. The invention addresses the data source heterogeneity found in current enterprise projects and studies a method for realizing offline distributed heterogeneous data source synchronization.

At present, most data exchange and sharing services are immature. Alibaba's open-source framework DataX is among the better options: it supports most of the common data sources on the market, supports high-speed data exchange between heterogeneous databases or file systems, and provides a uniform interface for interacting with different plug-ins, which only need to connect to its data processing system; however, the plug-ins are difficult to deploy independently and are complex to operate.

Moreover, the distributed heterogeneous data source synchronization schemes available at present support only a few data sources; data sources such as Elasticsearch and Redis are essentially unsupported, and a running task cannot be suspended at an arbitrary point.

Disclosure of Invention

The invention aims to provide a distributed data source heterogeneous synchronization platform and synchronization method that solve the difficulty existing service architectures have in synchronizing multiple heterogeneous data sources, and that provide a safe and universal platform in which each micro-service is loosely coupled, can be deployed independently, and can be managed as a plug-in.

In order to achieve the above object, the present invention provides a distributed data source heterogeneous synchronization platform, which includes: a front end and a server end; the front end is used for submitting a data source and displaying the data source synchronization result; the server end is used for receiving the data source submitted by the front end, completing the data source synchronization request, and transmitting the synchronization result to the front end; the front end and the server end communicate using the WebSocket technology; the front end comprises: a react-admin-master front-end independent module, which employs a react + redux + ruby + webpack front-end framework and comprises: a synchronous task creation sub-module, a running state sub-module, and a historical task information sub-module; the synchronous task creation sub-module is used for creating a synchronization task; the running state sub-module is used for checking the running state of the current synchronization task; the historical task information sub-module is used for checking the information of historical tasks; the server end comprises: a data-transform-api interface module, a data-transform-server service module, and a datax-all service plug-in module.

The data-transform-api interface module is used for abstracting the RESTful API interfaces that the micro-service needs to provide and for defining the entity classes of the information objects the server needs to receive; the data-transform-api interface module provides the entrance to the micro-service, through which the micro-service transmits related information and responds to service requests.
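
For illustration only, the following minimal sketch shows the shape such an abstracted interface module might take; all class, method, and field names here are hypothetical assumptions, not the patent's actual code:

```java
// Hypothetical sketch of an abstracted RESTful entry point plus the entity
// class of the information object the server receives. Names are assumptions.
public interface DataTransformApi {

    /** Entity class for the task information object submitted by the front end. */
    class TaskRequest {
        public String sourceAlias;   // alias of the source cluster/data source
        public String targetAlias;   // alias of the target cluster/data source
        public String taskType;      // "FULL" or "BATCH"
        public String fieldMapping;  // column/field mapping used by batch tasks
    }

    String createTask(TaskRequest request); // returns the task's unique key (JobId)

    void pauseTask(String key);

    void stopTask(String key);

    String queryTaskState(String key);
}
```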

The data-transform-server service module is used for realizing the full-volume import of the relational data source MySQL and the distributed data source HBase into the Elasticsearch full-text search engine, converting data types among heterogeneous data sources, and performing the heterogeneous data source synchronization service in combination with the external interface; the data-transform-server service module comprises: a distributed data source configuration center, a task scheduling thread pool, a file processing module, a data batch processor, a cache pool, and a task message processing thread pool. The distributed data source configuration center is used for setting aliases for different distributed database clusters, creating a data source synchronization processor for each cluster, creating clients and connections according to the specific type of data source, creating a new synchronization task, and placing the task into the thread pool to await scheduling; the task scheduling thread pool is used for storing the data synchronization tasks to be executed, selecting suitable tasks for full-volume or batch synchronization according to a parallel task scheduling algorithm, and controlling the execution of tasks, including starting, suspending, stopping, and resuming; the file processing module is used for converting the data in a data table into an SQL file and paging the data file to realize load balancing; the data batch processor is responsible for the batch import and export of data and comprises two controllers, HBaseETLController and MySqlETLController, which control the full-volume import and export of the relational data source MySQL and the distributed data source HBase and implement the externally callable control and data query interfaces provided by the data-transform-api interface module; the cache pool is used for storing the data information of tasks, so that task progress is controllable during full-volume synchronization, the task-information state value in the cache pool being changed when a task needs to be suspended; the task message processing thread pool uses the WebSocket technology to transmit the data information in the message queue between the client and the server at any time.
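
As a rough illustration of the cache pool and the controllable task state value it holds, the sketch below keeps task information in an in-memory map keyed by the task's unique key; every name, and the exact state set, are assumptions rather than the patent's classes:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the cache pool idea: task information lives in memory under the
// task's unique key, and suspending a task is done by flipping its cached
// state value, which the worker threads poll. All names are assumptions.
public class TaskCachePool {

    public enum TaskState { RUN, PAUSE, STOP, FINISHED, ERROR }

    public static class TaskInfo {
        public volatile TaskState state = TaskState.RUN;
        public volatile String startKey; // progress markers consulted on resume
        public volatile String endKey;
    }

    private final Map<String, TaskInfo> cache = new ConcurrentHashMap<>();

    public void put(String key, TaskInfo info) { cache.put(key, info); }

    public TaskInfo get(String key) { return cache.get(key); }

    /** Change the cached state value; running workers observe it and react. */
    public void changeState(String key, TaskState newState) {
        TaskInfo info = cache.get(key);
        if (info != null) info.state = newState;
    }
}
```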

The datax-all service plug-in module is used for developing the Elasticsearch read-write plug-in and the referenced external interfaces; it mainly integrates the DataX open-source framework into the platform, manages the plug-ins in a plug-in fashion, performs tracking control, flow control, and load-balancing control on task information, and solves the problem of data type mismatch between heterogeneous data sources.

Preferably, the react-admin-master front-end independent module comprises: a synchronous task creation sub-module, a running state sub-module, and an information query sub-module.

The synchronous task creation sub-module is used for receiving the task configuration information submitted through the front end's user interface and calling the task management interface in the data-transform-api interface module according to that configuration information to create a task.

The running state sub-module receives user requests from the front end and calls the task control interface in the data-transform-api interface module to display the task's running state and a progress bar; if the task is a full-volume operation, the running state sub-module can control the state of the task and perform pause and restart operations on it.

The information query sub-module is connected to the front end's user interface; by calling the task query interface in the data-transform-api interface module, it is used for checking historical tasks and judging whether each currently used data source is synchronized.

Preferably, the data-transform-api interface module comprises internal interfaces, which are defined for the different plug-ins designed for different data sources and are used for calling the corresponding plug-in; the internal interfaces include: a core HE interface, a core ME interface, a core PAUSE interface, a core STOP interface, a unified plug-in interface, and a historical task interface.

Wherein the core HE interface is used for full-volume synchronization from HBase to Elasticsearch; the core ME interface is used for full-volume synchronization from MySQL to Elasticsearch; the core PAUSE interface is used for taking a full-volume synchronization from started to paused; the core STOP interface is used for taking a full-volume synchronization from started to stopped and from paused to restarted; the unified plug-in interface is used for partial synchronization among multiple heterogeneous data sources; and the historical task interface is used for viewing task history.
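
One possible reading of these six internal interfaces as plain method signatures is sketched below; the names, parameters, and return types are assumptions, not the patent's definitions:

```java
// Hypothetical sketch of the six internal interfaces as method signatures.
public interface CoreSyncEndpoints {

    String coreHE(String hbaseAlias, String esAlias);  // HBase -> Elasticsearch, full volume
    String coreME(String mysqlAlias, String esAlias);  // MySQL -> Elasticsearch, full volume

    void corePause(String jobKey); // full-volume sync: started -> paused
    void coreStop(String jobKey);  // full-volume sync: started -> stopped, or paused -> restarted

    String unifiedPlugin(String jobConfigJson); // partial sync across many data sources
    String historyTasks();                      // view the task history
}
```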

Preferably, the datax-all service plug-in module comprises: a plurality of plug-in sub-modules, three core sub-modules, and an external interface.

Wherein the plug-in sub-modules include a Reader sub-module and a Writer sub-module; the Reader sub-module is used for collecting data from the relational data source MySQL and sending the data to the Framework interface of the external interface; the Writer sub-module is used for continuously taking data out of the Framework interface and writing it into the target data source.

Wherein the core sub-modules include: a datax core module, a datax common module, and a datax transform conversion module; the datax core module acts as the manager of all jobs (Job) and is responsible for initialization, splitting, scheduling, running, recovery, monitoring, and reporting, but performs no actual data synchronization itself; the datax common module is used for dividing a Job into multiple Tasks and distributing them to different task groups (TaskGroup) through load balancing, with scheduling and execution carried out in the form of these groups; the datax transform conversion module connects the Reader and Writer sub-modules, serves as their data transmission channel, and provides functions such as data volume statistics, flow control, concurrency, and data conversion.
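
To make the Job → Task → TaskGroup split concrete, here is a minimal sketch of row-range splitting with round-robin grouping; it is illustrative only, and real DataX splitting, which is driven by each plug-in, is far more involved:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of splitting a Job into Tasks and load-balanced groups.
public class JobSplitter {

    /** A Task covering a half-open row range [rowStart, rowEnd) of the job. */
    public record Task(String jobId, int index, long rowStart, long rowEnd) {}

    /** Split a job over [0, totalRows) into at most `channels` equal ranges. */
    public static List<Task> split(String jobId, long totalRows, int channels) {
        List<Task> tasks = new ArrayList<>();
        long step = (totalRows + channels - 1) / channels; // ceiling division
        for (int i = 0; i < channels; i++) {
            long start = i * step;
            long end = Math.min(start + step, totalRows);
            if (start >= end) break;
            tasks.add(new Task(jobId, i, start, end));
        }
        return tasks;
    }

    /** Assign tasks to task groups round-robin, balancing the load evenly. */
    public static List<List<Task>> group(List<Task> tasks, int groups) {
        List<List<Task>> taskGroups = new ArrayList<>();
        for (int g = 0; g < groups; g++) taskGroups.add(new ArrayList<>());
        for (int i = 0; i < tasks.size(); i++) taskGroups.get(i % groups).add(tasks.get(i));
        return taskGroups;
    }
}
```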

Preferably, the external interface includes: a Reader interface, a Writer interface, and a Framework interface.

The Reader interface is connected to the Reader sub-module; it is used for acquiring data from the data source in the Reader sub-module and sending the data to the Framework interface.

The Writer interface is connected to the Writer sub-module; it is used for continuously taking data out of the Framework interface and writing the obtained data into the target data source through the Writer sub-module.

The Framework interface connects the Reader and Writer sub-modules, serves as their data transmission channel, and handles buffering, flow control, concurrency, and conversion of the data.
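
The Framework's role as a buffering channel between Reader and Writer can be sketched with a bounded blocking queue, whose capacity limit also yields a crude form of flow control through back-pressure; this is a runnable toy, not the actual DataX channel, and all names are assumptions:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal sketch of the Reader -> Framework -> Writer pipeline.
public class FrameworkChannel {

    private static final String EOF = "__END_OF_DATA__"; // poison pill marker

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> channel = new ArrayBlockingQueue<>(1024);

        Thread reader = new Thread(() -> { // Reader side: pull from source
            try {
                for (int i = 0; i < 10; i++) channel.put("record-" + i);
                channel.put(EOF);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread writer = new Thread(() -> { // Writer side: push to target
            try {
                String record;
                while (!(record = channel.take()).equals(EOF)) {
                    System.out.println("writing " + record); // write to target source
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        reader.start();
        writer.start();
        reader.join();
        writer.join();
    }
}
```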

Preferably, the micro-service comprises a Spring Cloud service; the heterogeneous data sources include: the relational data source MySQL, the distributed data source HBase, and the data sources Elasticsearch, Oracle, SQLServer, MongoDB, TXTFile, and FTP; the data-transform-server service module combines the plug-ins in DataX matched to the different data sources, provides corresponding interfaces for the plug-ins, and provides a configuration-file parsing tool.

The invention also provides a distributed data source heterogeneous synchronization method, which comprises the following steps:

(1) front-end data processing: creating the configuration information to be filled in for a task and judging and selecting the task type, wherein the task types comprise full-volume and batch synchronization; calling the corresponding service interface according to the selected transfer parameters;

(2) setting aliases for different distributed database clusters through a distributed data source configuration center, wherein each alias represents a complete cluster or a single-node data configuration environment; finding the matched configuration information for each cluster and randomly generating a key corresponding to each task; if matched configuration information exists, creating a data source synchronization processor, creating and caching the task information, creating a client and a connection according to the specific information of the data source, and placing the task into a thread pool to await scheduling; if no matched configuration information exists, storing the task information into a database;

(3) starting the task and processing data through a data batch processor, wherein the task has two modes: full-volume and batch data synchronization; when an error occurs or the task completes, the task information is stored into the database and the task ends.

The full-volume data synchronization means that all data between two heterogeneous data sources is synchronized; the batch data synchronization means that partial fields are synchronized among a plurality of heterogeneous data sources, which requires data cutting and judging the type and name of each column.

Wherein the data sources supporting the full-volume synchronization mode comprise: the relational data source MySQL, the distributed data source HBase, and Elasticsearch.

Wherein the data sources supporting the batch synchronization mode comprise: the relational data source MySQL, the distributed data source HBase, Elasticsearch, Oracle, SQLServer, MongoDB, TXTFile, and FTP.

Preferably, the batch-synchronized data is exported or imported using the data batch processor and undergoes data cutting for read-write management; the batch synchronization mode of the relational data source MySQL is realized by exporting data into sql files and applying data segmentation.

Preferably, the method further comprises: when the server where a database in the task is located, or the database itself, has a problem, the task is suspended or stopped: the key value is passed through the calling interface, the task information corresponding to the key value is looked up in the server cache, the state value of the task is modified to suspended or stopped, the task information in the database is then updated, and a message is sent to the message processor, which forwards it to the client through WebSocket so as to suspend or stop the task.

Preferably, the method further comprises: after the fault is repaired, the service is restarted: after the task information is obtained, the task state value is modified to running, and the converted file, or the batch data imported by the data batch processor, is located through the start Key value and end Key value recorded in the cache.

The distributed data source heterogeneous synchronization platform and synchronization method solve the difficulty existing service architectures have in synchronizing multiple heterogeneous data sources, and offer the following advantages:

(1) the differences between distributed heterogeneous data sources are eliminated, enabling large volumes of information to be imported and exported between the various data sources;

(2) heterogeneous data sources are managed as plug-ins; to support undifferentiated data synchronization with the other data sources, a new data source only needs a new plug-in to be implemented;

(3) multiple tasks are executed concurrently, and each task can be scheduled and controlled (started, paused, restarted, and stopped), running independently;

(4) a micro-service architecture is adopted, which can be deployed independently on multiple machines and run simultaneously;

(5) task progress tracking and monitoring of resource consumption such as CPU usage are realized. A relatively complete distributed heterogeneous data source synchronization service is thereby achieved.

Drawings

Fig. 1 is a schematic structural diagram of a distributed data source heterogeneous synchronization platform according to the present invention.

Fig. 2 is a schematic diagram of the connection relationship of the external interface according to the present invention.

Fig. 3 is a flow chart of front-end data processing of the present invention.

Fig. 4 is a flow chart of the startup task of the present invention.

Fig. 5 is a flow chart of the pause and stop tasks of the present invention.

Fig. 6 is a flow chart of the restart task of the present invention.

Detailed Description

The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.

Fig. 1 shows a schematic structural diagram of a distributed data source heterogeneous synchronization platform according to the present invention, where the platform includes: a front end and a server end; the front end is used for submitting a data source and displaying a data source synchronization result; the server is used for receiving the data source submitted by the front end, completing the data source synchronization request and transmitting the synchronization result to the front end; the front end and the server side communicate by adopting a WebSocket technology.

The front end includes: a react-admin-master front-end independent module, which employs a react + redux + ruby + webpack front-end framework and comprises: a synchronous task creation sub-module, a running state sub-module, and a historical task information sub-module; the synchronous task creation sub-module is used for creating a synchronization task, the running state sub-module is used for checking the running state of the current synchronization task, and the historical task information sub-module is used for checking the information of historical tasks. When a task is created, its various configuration items must be selected; when the task state needs to be checked, the real-time communication service (WebSocket) between the front end and the server is invoked; previous task information can also be viewed, and whether each currently used data source is synchronized can be judged.

The invention adopts a react + redux + ruby + webpack front-end framework design to separate the front end from the server end, so that the two can be deployed independently with a clear division of labor.

The server end includes: a data-transform-api interface module (API interface module), a data-transform-server service module (synchronization service module), and a datax-all service plug-in module.

The data-transform-api interface module is used for abstracting the RESTful API interfaces that the micro-service needs to provide and for defining the entity classes of the information objects the server needs to receive; the data-transform-api interface module provides the entrance to the micro-service, through which the micro-service transmits related information and responds to service requests. The external interfaces are mainly defined in the data-transform-api module; a project that needs these services only has to add a dependency on this module to its own pom dependencies, after which the servers with the deployed micro-service can provide services to it.

The data-transform-server service module is used for realizing the full-volume import of the relational data source MySQL and the distributed data source HBase into the Elasticsearch full-text search engine, which provides distributed, multi-user full-text search capability on top of the Lucene search library; the module converts data types among heterogeneous data sources and performs the heterogeneous data source synchronization service in combination with the external interface. The data-transform-server service module comprises: a distributed data source configuration center, a task scheduling thread pool, a file processing module, a data batch processor, a cache pool, and a task message processing thread pool. When task control is scheduled, the real-time communication technology WebSocket is combined with the message queue and the thread pool to schedule multiple tasks. The data-transform-server service module also combines the plug-ins in DataX matched to the various data sources, and provides an interface and a configuration-file parsing tool so that the server end can work better.

The distributed data source configuration center is used for setting aliases for different distributed database clusters, creating a data source synchronization processor for each cluster, creating clients and connections according to the specific type of data source, creating a new synchronization task, and placing the task into the thread pool to await scheduling; the task scheduling thread pool is used for storing the data synchronization tasks to be executed, selecting suitable tasks for full-volume or batch synchronization according to a parallel task scheduling algorithm, and controlling the execution of tasks, including starting, suspending, stopping, and resuming; the file processing module is used for converting the data in a data table into an SQL file and then paging the data file to realize load balancing; the data batch processor is responsible for the batch import and export of data and comprises two controllers, HBaseETLController and MySqlETLController, which control the full-volume import and export of the relational data source MySQL and the distributed data source HBase and implement the externally callable control and data query interfaces provided by the data-transform-api interface module; the cache pool is used for storing the data information of tasks, so that task progress is controllable during full-volume synchronization, the task-information state value in the cache pool being changed when a task needs to be suspended; the task message processing thread pool uses the WebSocket technology to transmit the data information in the message queue between the client and the server at any time.

The datax-all service plug-in module is used for developing the Elasticsearch read-write plug-in and the referenced external interfaces; it mainly integrates the DataX open-source framework into the platform, manages the plug-ins in a plug-in fashion, performs tracking control, flow control, and load-balancing control on task information, and solves the problem of data type mismatch between heterogeneous data sources. The datax-all service plug-in module faces large volumes of data; the external interface it provides can divide a Job evenly and reasonably into multiple Tasks to be executed together, and the single-machine multi-threaded execution mode allows throughput to grow with concurrency.

According to an embodiment of the present invention, the react-admin-master front-end independent module comprises: a synchronous task creation sub-module, a running state sub-module, and an information query sub-module.

The synchronous task creation sub-module is used for receiving the task configuration information submitted through the front-end user interface and calling the task management interface in the data-transform-api interface module according to that configuration information to create a task.

The running state sub-module receives user requests from the front end and calls the task control interface in the data-transform-api interface module to display the task's running state and a progress bar; if the task is a full-volume operation, the sub-module can control the state of the task and perform pause and restart operations on it.

The information query sub-module is connected to the front-end user interface; by calling the task query interface in the data-transform-api interface module, it is used for checking historical tasks and judging whether each currently used data source is synchronized.

According to an embodiment of the present invention, the data-transform-api interface module comprises internal interfaces, which are defined for the different plug-ins designed for different data sources and are used for calling the corresponding plug-in; the internal interfaces include: a core HE interface, a core ME interface, a core PAUSE interface, a core STOP interface, a unified plug-in interface, and a historical task interface.

Wherein the core HE interface is used for full-volume synchronization from HBase to Elasticsearch; the core ME interface is used for full-volume synchronization from MySQL to Elasticsearch; the core PAUSE interface is used for taking a full-volume synchronization from started to paused; the core STOP interface is used for taking a full-volume synchronization from started to stopped and from paused to restarted; the unified plug-in interface is used for partial synchronization among multiple heterogeneous data sources; and the historical task interface is used for viewing task history.

According to an embodiment of the present invention, the datax-all service plug-in module comprises: a plurality of plug-in sub-modules, three core sub-modules, and an external interface.

Wherein the plug-in sub-modules include a Reader sub-module and a Writer sub-module; the Reader sub-module is used for collecting data from the relational data source MySQL and sending the data to the Framework interface of the external interface; the Writer sub-module is used for continuously taking data out of the Framework interface and writing it into the target data source.

Wherein the core sub-modules include: a datax core module, a datax common module, and a datax transform conversion module; the datax core module acts as the manager of all jobs (Job) and is responsible for initialization, splitting, scheduling, running, recovery, monitoring, and reporting, but performs no actual data synchronization itself; the datax common module is used for dividing a Job into multiple Tasks and distributing them to different task groups (TaskGroup) through load balancing, with scheduling and execution carried out in the form of these groups; the datax transform conversion module connects the Reader and Writer sub-modules, serves as their data transmission channel, and provides functions such as data volume statistics, flow control, concurrency, and data conversion.

According to an embodiment of the present invention, Fig. 2 shows a schematic diagram of the connection relationships of the external interface, which includes: a Reader interface, a Writer interface, and a Framework interface.

The Reader interface is connected to the Reader sub-module; it is used for acquiring data from the data source (data source A in Fig. 2) in the Reader sub-module and sending the data to the Framework interface.

The Writer interface is connected to the Writer sub-module; it is used for continuously taking data out of the Framework interface and writing the obtained data into the target data source (data source B in Fig. 2) through the Writer sub-module.

The Framework interface connects the Reader and Writer sub-modules, serves as their data transmission channel and as a bridge for communicating information between them, and handles buffering, flow control, concurrency, and conversion of the data.

According to an embodiment of the present invention, the micro-service comprises a Spring Cloud service; the heterogeneous data sources include: the relational data source MySQL, the distributed data source HBase, and the data sources Elasticsearch, Oracle, SQLServer, MongoDB, TXTFile, and FTP; the data-transform-server service module combines the plug-ins in DataX matched to the different data sources, provides corresponding interfaces for the plug-ins, and provides a configuration-file parsing tool.

A method for heterogeneous synchronization of distributed data sources, the method comprising:

(1) front-end data processing:

as shown in Fig. 3, the flow chart of the front-end data processing of the present invention, the configuration information to be filled in for a task is created and the task type is judged and selected, the task types being full-volume and batch;

when batch import is selected, the original data source and the target data source, along with the various configurations and field mappings, are selected to define the target object; when full-volume import is selected, HBase/MySQL is chosen as the source data source and Elasticsearch as the target data source to define the target object;

the service interface is called according to the selected transfer parameters, the JobId and the task state are returned, the states of all currently running tasks can be checked, and the front-end data processing is finished;

(2) as shown in Fig. 4, the task-starting flow chart of the present invention, aliases are set for the different distributed clusters through the platform's distributed data source configuration center module, each alias representing a complete cluster or a single-node data configuration environment. Before the task starts, the matched configuration information is found, and then a key is randomly generated to uniquely identify the task. Most of the task's data information is stored in the in-memory cache. A client and connection are then created according to the specific information of the data source and placed into the task scheduling thread pool to await scheduling. If the source is MySQL, a table in MySQL is first exported into a sql file and the file is cut; different data sources have different read-write plug-in management;
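
A minimal sketch of this task-opening flow follows, assuming an in-memory alias table, a UUID task key, and a fixed-size thread pool; all names are hypothetical:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of step (2): look up the alias's matched configuration, randomly
// generate a unique task key, cache the task, and submit the work to the
// scheduling thread pool. All names are illustrative assumptions.
public class DataSourceConfigCenter {

    private final Map<String, String> aliasToConfig = new ConcurrentHashMap<>();
    private final Map<String, Runnable> taskCache = new ConcurrentHashMap<>();
    private final ExecutorService schedulingPool = Executors.newFixedThreadPool(8);

    /** Register an alias for a cluster or single-node configuration. */
    public void registerAlias(String alias, String configInfo) {
        aliasToConfig.put(alias, configInfo);
    }

    /** Returns the task's unique key, or null when no configuration matches
     *  (in which case the caller persists the task information instead). */
    public String openTask(String sourceAlias, Runnable syncWork) {
        String config = aliasToConfig.get(sourceAlias);
        if (config == null) return null;
        String key = UUID.randomUUID().toString(); // randomly generated task key
        taskCache.put(key, syncWork);              // cache the task in memory
        schedulingPool.submit(syncWork);           // await scheduling in the pool
        return key;
    }
}
```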

(3) starting a task:

when a task is started, one of two modes is selected. The first is full-volume data synchronization, in which all data between two data sources is synchronized; where this meets the requirement it is the more efficient mode, since the type and name of each column need not be judged. The second is batch data synchronization, which mainly uses the plug-ins of the DataX framework to unify more than ten data sources, together with self-designed plug-ins; batches mainly rely on the batch processor and on methods such as data cutting;

wherein the data sources supporting the full-volume synchronization mode comprise: the relational data source MySQL, the distributed data source HBase, and Elasticsearch.

Wherein the data sources supporting the batch synchronization mode comprise: the relational data source MySQL, the distributed data source HBase, Elasticsearch, Oracle, SQLServer, MongoDB, TXTFile, and FTP.

According to one embodiment of the invention, batch-synchronized data is exported or imported by the data batch processor, with the data cut for read-write management; the batch synchronization mode of the relational data source MySQL is realized by exporting data into sql files and applying data segmentation.
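
A minimal sketch of the "export to sql file, then cut" idea appears below, splitting one exported dump into fixed-size segment files that can be imported in parallel; it is a simplification (it cuts on line boundaries and would need care with multi-line statements), and the file names are assumptions:

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: cut one exported .sql dump into fixed-size segment files.
public class SqlFileCutter {

    /** Returns the number of segment files produced under outDir. */
    public static int cut(Path dump, Path outDir, int linesPerSegment) throws IOException {
        Files.createDirectories(outDir);
        int segment = 0;
        try (BufferedReader in = Files.newBufferedReader(dump)) {
            String line = in.readLine();
            while (line != null) {
                Path part = outDir.resolve("segment-" + segment + ".sql");
                try (BufferedWriter out = Files.newBufferedWriter(part)) {
                    for (int n = 0; n < linesPerSegment && line != null; n++) {
                        out.write(line);
                        out.newLine();
                        line = in.readLine();
                    }
                }
                segment++;
            }
        }
        return segment;
    }
}
```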

According to an embodiment of the present invention, as shown in Fig. 5, the flow chart of the pause and stop tasks of the present invention, the method further includes: after a system task is started, the database returns to the client a unique key value representing the task. When the server where a database in the task is located, or the database itself, has a problem, the task must be suspended or stopped so that the machine can be maintained. Therefore, when a suspend or stop is needed, the key value is passed by calling the interface, the JobInfo corresponding to the key value is looked up in the server cache, the state value of the Job is modified to PAUSE or STOP, the task information in the database is then updated, and a message is sent to the message processor, which forwards it to the client through WebSocket. Once a task is stopped, it cannot be recovered, so multi-layer state judgment is needed; every time the task state changes it must be recorded in the database, so that the task's progress is tracked and mismatches with the processing configuration information are avoided in time. The task information is mainly stored in the cache in the operating system's memory, and each task is cached for at most one day; beyond that, the timeliness of the task state cannot be guaranteed.
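
The suspend/stop flow above can be sketched as follows, reusing the hypothetical TaskCachePool from the earlier cache-pool sketch; the database and WebSocket collaborators are stand-in interfaces, not the patent's actual classes:

```java
// Sketch of the suspend/stop flow: look up the task by key, flip its state,
// persist the change, and notify the client over WebSocket.
public class TaskController {

    public interface TaskStore     { void updateState(String key, String state); }
    public interface MessageSender { void pushToClient(String key, String message); }

    private final TaskCachePool cache;
    private final TaskStore database;
    private final MessageSender webSocket;

    public TaskController(TaskCachePool cache, TaskStore database, MessageSender webSocket) {
        this.cache = cache;
        this.database = database;
        this.webSocket = webSocket;
    }

    /** Suspend or stop the task identified by `key`; false if not cached. */
    public boolean pauseOrStop(String key, TaskCachePool.TaskState target) {
        TaskCachePool.TaskInfo info = cache.get(key);
        if (info == null) return false;              // unknown or expired task
        cache.changeState(key, target);              // 1. flip the cached state value
        database.updateState(key, target.name());    // 2. update task info in the DB
        webSocket.pushToClient(key, target.name());  // 3. notify client via WebSocket
        return true;
    }
}
```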

According to an embodiment of the present invention, as shown in Fig. 6, the flow chart of the restart task of the present invention, the method further includes: after an offline synchronization task is started, the key identifying the current task and its state information are returned to the client. After a fault is repaired, the service can still resume from where it last stopped. When facing large amounts of data, the previously exchanged data would be wasted if the task were interrupted halfway and restarted from scratch. After the JobInfo is obtained, the JobState is modified to RUN, and the converted sql file is located through the startKey and endKey values recorded in the cache so as to accurately find the pause point, or the batch processor re-imports the batch data.
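
Likewise, a minimal sketch of the restart flow under the same assumptions, resuming from the cached startKey/endKey markers rather than starting over; the ResumeWork callback is illustrative:

```java
// Sketch of the restart flow: set the cached JobState back to RUN and resume
// from the recorded startKey/endKey progress markers.
public class TaskResumer {

    public interface ResumeWork { void run(String startKey, String endKey); }

    private final TaskCachePool cache; // the hypothetical cache-pool sketch above

    public TaskResumer(TaskCachePool cache) { this.cache = cache; }

    /** Restart a repaired task; false when its cache entry expired (max one day). */
    public boolean restart(String key, ResumeWork work) {
        TaskCachePool.TaskInfo info = cache.get(key);
        if (info == null) return false;
        cache.changeState(key, TaskCachePool.TaskState.RUN); // JobState -> RUN
        work.run(info.startKey, info.endKey);                // resume from the pause point
        return true;
    }
}
```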

In summary, the distributed data source heterogeneous synchronization platform and synchronization method of the present invention provide a secure and general platform in which each micro-service is loosely coupled, can be deployed independently, and can be managed as a plug-in.

While the present invention has been described in detail with reference to the preferred embodiments, it should be understood that the above description should not be taken as limiting the invention. Various modifications and alterations to this invention will become apparent to those skilled in the art upon reading the foregoing description. Accordingly, the scope of the invention should be determined from the following claims.
