Modeling task and data analysis method, device, electronic equipment and system

Document No. 7329 · Published 2021-09-17

1. A modeling task processing method, comprising:

acquiring a task configuration file of a modeling task;

scheduling the modeling subtasks based on the configuration files of the modeling subtasks to generate configuration information of the modeling subtasks;

generating a starting script of the modeling subtask based on the configuration information and the deployment environment information of the modeling task;

starting a federated learning framework system based on the start script of the modeling subtask, calling a corresponding federated learning algorithm framework, and executing the modeling subtask to obtain a target model, wherein federated learning algorithm frameworks with different start configurations are configured in the federated learning framework system.

2. The modeling task processing method according to claim 1, wherein the task configuration file includes input/output data, a preset model, and task parameters of the modeling task;

the step of scheduling the modeling subtask based on the configuration file of the modeling subtask to generate the configuration information of the modeling subtask comprises the following steps:

analyzing the task configuration file to obtain the execution sequence and the dependency information of the modeling task;

and scheduling the modeling task based on the execution sequence and the dependency information to generate configuration information of the modeling task.

3. The modeling task processing method according to claim 1, wherein in the federated learning framework system, for federated learning algorithm frameworks of different start-up configurations, data transmission protocols of the federated learning algorithm frameworks are the same.

4. The modeling task processing method according to claim 3, wherein in the federated learning framework system, the federated learning algorithm frameworks with different start-up configurations use a unified data transfer protocol, which is achieved by adjusting a server and/or a client of the data transfer protocol in advance.

5. The modeling task processing method according to claim 1, wherein invoking a corresponding federated learning algorithm framework to execute the modeling subtasks comprises:

acquiring a plurality of different types of task parameters corresponding to the modeling subtasks;

and calling a corresponding federated learning algorithm framework to calculate the task parameters of the different types to obtain a task result of the modeling task.

6. A modeling task processing apparatus, comprising:

the first acquisition module is used for acquiring a task configuration file of the modeling subtask;

the first scheduling module is used for scheduling the modeling subtasks based on the configuration files of the modeling subtasks and generating configuration information of the modeling subtasks;

the first starting module is used for generating a starting script of the modeling subtask based on the configuration information and the deployment environment information where the modeling subtask is located;

the first execution module is used for starting a federated learning framework system based on the starting script of the modeling subtask, calling a corresponding federated learning algorithm framework, executing the modeling subtask and obtaining a target model, wherein the federated learning algorithm frameworks with different starting configurations are configured in the federated learning framework system.

7. A method of data analysis, comprising:

acquiring node data of different data nodes;

performing intersection processing on the node data of each data node to obtain a common data set;

performing feature screening based on the common data set to obtain feature codes;

inputting the feature codes into a preset data analysis model to obtain an analysis result of the node data, wherein the preset data analysis model is obtained by modeling according to the modeling task processing method of any one of claims 1 to 5.

8. A data analysis apparatus, comprising:

the second acquisition module is used for acquiring node data of different data nodes;

the first processing module is used for performing intersection processing on the node data of each data node to obtain a common data set;

the second processing module is used for carrying out feature screening based on the common data set to obtain feature codes;

a second execution module, configured to input the feature code into a preset data analysis model to obtain an analysis result of the node data, where the preset data analysis model is obtained by modeling according to the modeling task processing method of any one of claims 1 to 5.

9. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the steps of the modeling task processing method of any of claims 1-5.

10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the modeling task processing method according to any one of claims 1-5.

11. A modeling task processing system, comprising: the system comprises a data analysis module and a federal learning framework system, wherein the federal learning framework system is provided with federal learning algorithm frameworks with different starting configurations;

the data analysis module comprises:

the third acquisition module is used for acquiring a task configuration file of the modeling subtask;

the second scheduling module is used for scheduling the modeling subtasks based on the configuration files of the modeling subtasks and generating configuration information of the modeling subtasks;

the second starting module is used for generating a starting script of the modeling subtask based on the configuration information and the deployment environment information where the modeling subtask is located;

and the third execution module is used for starting the federated learning framework system based on the starting script of the modeling subtask, calling a corresponding federated learning algorithm framework and executing the modeling subtask.

Background

With the popularization of machine learning, awareness of data privacy protection has grown stronger, and federated learning has emerged. When product development is carried out with federated learning, several open-source frameworks usually need to be used at the same time to implement the federated learning computation. However, at present there are incompatibility problems among different frameworks, and multiple sets of frameworks have to be deployed independently, which makes the product development process relatively complex. A data prediction method based on multiple compatible federated learning frameworks is therefore urgently needed to reduce the complexity of product development.

Disclosure of Invention

Therefore, the technical problem to be solved by the invention is to overcome the defect that existing federated learning frameworks are mutually incompatible, and thereby provide a modeling task processing method, a modeling task processing device, and an electronic device.

According to a first aspect, an embodiment of the present invention discloses a modeling task processing method, including: acquiring a task configuration file of a modeling task; scheduling the modeling subtasks based on the configuration files of the modeling subtasks to generate configuration information of the modeling subtasks; generating a starting script of the modeling subtask based on the configuration information and the deployment environment information of the modeling task; and starting a federated learning framework system based on the start script of the modeling task, calling a corresponding federated learning algorithm framework, and executing the modeling subtask to obtain a target model, wherein federated learning algorithm frameworks with different start configurations are configured in the federated learning framework system.

Optionally, the task configuration file includes input and output data, a preset model and task parameters of the modeling task; the step of scheduling the modeling subtask based on the configuration file of the modeling subtask to generate the configuration information of the modeling subtask comprises the following steps: analyzing the task configuration file to obtain the execution sequence and the dependency information of the modeling task;

and scheduling the modeling task based on the execution sequence and the dependency information to generate configuration information of the modeling task.

Optionally, in the federated learning framework system, for federated learning algorithm frameworks with different start-up configurations, the data transmission protocols of the federated learning algorithm frameworks are the same.

Optionally, in the federated learning framework system, the federated learning algorithm frameworks with different start-up configurations use a unified data transmission protocol, which is achieved by adjusting a server and/or a client of the data transmission protocol in advance.

Optionally, invoking a corresponding federated learning algorithm framework to perform the modeling subtasks includes: acquiring a plurality of different types of task parameters corresponding to the modeling subtasks; and calling a corresponding federated learning algorithm framework to calculate the task parameters of the different types to obtain a task result of the modeling task.

According to a second aspect, an embodiment of the present invention further discloses a modeling task processing apparatus, including: the first acquisition module is used for acquiring a task configuration file of the modeling subtask; the first scheduling module is used for scheduling the modeling subtasks based on the configuration files of the modeling subtasks and generating configuration information of the modeling subtasks; the first starting module is used for generating a starting script of the modeling subtask based on the configuration information and the deployment environment information where the modeling subtask is located; the first execution module is used for starting a federated learning framework system based on the starting script of the modeling subtask, calling a corresponding federated learning algorithm framework, executing the modeling subtask and obtaining a target model, wherein the federated learning algorithm frameworks with different starting configurations are configured in the federated learning framework system.

According to a third aspect, an embodiment of the present invention further discloses a data analysis method, including: acquiring node data of different data nodes; performing intersection processing on the node data of each data node to obtain a common data set; performing feature screening based on the common data set to obtain feature codes; inputting the feature code into a preset data analysis model to obtain an analysis result of the node data, wherein the preset data analysis model is obtained by modeling through the modeling task processing method according to the first aspect or any optional embodiment of the first aspect.

According to a fourth aspect, an embodiment of the present invention further discloses a data analysis apparatus, including: the second acquisition module is used for acquiring node data of different data nodes; the first processing module is used for performing intersection processing on the node data of each data node to obtain a common data set; the second processing module is used for carrying out feature screening based on the common data set to obtain feature codes; a second execution module, configured to input the feature code into a preset data analysis model to obtain an analysis result of the node data, where the preset data analysis model is obtained by modeling according to the modeling task processing method according to the first aspect or any optional implementation manner of the first aspect.

According to a fifth aspect, an embodiment of the present invention further discloses an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the steps of the method of modelling task processing as described in the first aspect or any one of the optional embodiments of the first aspect.

According to a sixth aspect, the present invention further discloses a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the modeling task processing method according to the first aspect or any one of the optional embodiments of the first aspect.

According to a seventh aspect, an embodiment of the present invention further discloses a modeling task processing system, including: the system comprises a data analysis module and a federal learning framework system, wherein the federal learning framework system is provided with federal learning algorithm frameworks with different starting configurations; the data analysis module comprises: the third acquisition module is used for acquiring a task configuration file of the modeling subtask; the second scheduling module is used for scheduling the modeling subtasks based on the configuration files of the modeling subtasks and generating configuration information of the modeling subtasks; the second starting module is used for generating a starting script of the modeling subtask based on the configuration information and the deployment environment information where the modeling subtask is located; and the third execution module is used for starting the federated learning framework system based on the starting script of the modeling subtask, calling a corresponding federated learning algorithm framework and executing the modeling subtask.

The technical scheme of the invention has the following advantages:

The invention provides a method, a device, electronic equipment and a system for modeling task processing and data analysis, wherein the method comprises the following steps: acquiring a task configuration file of a modeling task; scheduling the modeling subtasks based on the configuration files of the modeling subtasks to generate configuration information of the modeling subtasks; generating a starting script of the modeling subtask based on the configuration information and the deployment environment information of the modeling task; and starting a federated learning framework system based on the start script of the modeling task, calling a corresponding federated learning algorithm framework, and executing the modeling subtask to obtain a target model, wherein federated learning algorithm frameworks with different start configurations are configured in the federated learning framework system. By implementing this method, multiple algorithm frameworks are fused within the established federated learning framework system, and the algorithm framework corresponding to the modeling task is called to execute it, so that the problem of incompatibility among different algorithm frameworks is solved and the complex steps in the product development process are simplified.

Drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.

FIG. 1 is a specific application scenario of an embodiment of the present invention;

FIG. 2 is a flowchart of a specific example of a modeling task processing method according to an embodiment of the present invention;

FIG. 3 is a schematic block diagram of a specific example of a modeling task processing apparatus according to an embodiment of the present invention;

FIG. 4 is a diagram illustrating an exemplary data analysis method according to an embodiment of the present invention;

FIG. 5 is a schematic block diagram of a specific example of a data analysis apparatus in an embodiment of the present invention;

FIG. 6 is a diagram of an exemplary electronic device according to an embodiment of the invention;

FIG. 7 is a block diagram showing a specific example of a modeling task processing system according to an embodiment of the present invention.

Detailed Description

The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.

In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as meaning a fixed connection, a removable connection, or an integral connection; a mechanical or electrical connection; a direct connection between two elements or an indirect connection through an intermediate medium; an internal communication between two elements; or a wireless or wired connection. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.

In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.

The federated learning framework mentioned in this embodiment may in particular be used in a system architecture as shown in FIG. 1, where the system architecture may include a plurality of partner nodes and target nodes; the partner nodes may correspond to computing nodes built by various banks, payment platforms, credit investigation platforms, and the like, and the target nodes may correspond to computing nodes built by companies providing loans and credits.

The embodiment of the invention discloses a modeling task processing method, as shown in FIG. 2, the method comprises the following steps:

Step 201, acquiring a task configuration file of the modeling task.

Illustratively, a modeling subtask is a subtask required for building a model; for example, the subtasks may be data loading, secure data sample intersection, dataset segmentation, feature preprocessing, feature screening, logistic regression/XGB/DNN model training, model evaluation, and the like. The embodiment of the present invention does not limit the type and number of subtasks, which those skilled in the art can determine according to actual needs. The task configuration file contains the task information and corresponding configuration parameters of each subtask; specifically, it may include the type of each subtask and the parameters of that subtask, and the parameters of a subtask may be, for example, the encryption mode for secure data intersection, the dataset segmentation ratio, or the metric and threshold used for feature screening.
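Purely as an illustrative sketch of what such a task configuration file might contain (the embodiment does not prescribe a concrete format, and every subtask name and parameter key below is a hypothetical example), the file could be expressed in Python as follows:

# Hypothetical task configuration for a modeling task; names and values are
# illustrative only and are not defined by this embodiment.
task_config = {
    "task_id": "demo_modeling_task",
    "subtasks": [
        {"name": "load_data", "type": "data_loading", "params": {"input": "party_a.csv"}},
        {"name": "psi", "type": "secure_intersection", "params": {"encryption": "rsa"},
         "depends_on": ["load_data"]},
        {"name": "split", "type": "dataset_segmentation", "params": {"train_ratio": 0.8},
         "depends_on": ["psi"]},
        {"name": "screen", "type": "feature_screening", "params": {"metric": "iv", "threshold": 0.02},
         "depends_on": ["split"]},
        {"name": "train", "type": "model_training", "params": {"model": "logistic_regression"},
         "depends_on": ["screen"]},
        {"name": "evaluate", "type": "model_evaluation", "params": {"metrics": ["auc", "ks"]},
         "depends_on": ["train"]},
    ],
}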

Step 202, scheduling the modeling subtasks based on the configuration file of the modeling subtasks, and generating configuration information of the modeling subtasks. Illustratively, when a model is constructed, the execution sequence and execution mode of each modeling subtask are set and adjusted; for example, in the modeling process, the internal algorithm model must be built first, and building the algorithm model requires acquiring training data and dividing it into a training set and a test set. The configuration information is obtained by parsing the subtask types and parameters contained in the configuration file of step 201 to obtain their specific contents.
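A minimal sketch of this parsing step, under the assumption that the configuration file has the dictionary form shown above (the field names remain hypothetical):

# Turn the parsed task configuration into per-subtask configuration information.
def generate_configuration_info(task_config):
    info = []
    for order, sub in enumerate(task_config["subtasks"]):
        info.append({
            "order": order,                          # position assigned by the scheduler
            "name": sub["name"],
            "type": sub["type"],
            "params": sub.get("params", {}),
            "depends_on": sub.get("depends_on", []),
        })
    return info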

Step 203, generating a start script of the modeling subtask based on the configuration information and the deployment environment information where the modeling subtask is located.

Illustratively, the deployment environment information concerns the preparation work performed before a modeling subtask is executed; for example, the environment required by the algorithm component is initialized according to the configuration information, which may include parameter instantiation, checking whether the parameters are correct, data instantiation, checking whether the model produced by the previous step on which the current subtask depends has been loaded, model instantiation, and the like.
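As a hedged sketch only (the checks, file layout, and script contents below are assumptions for illustration, not details fixed by the embodiment), this environment preparation and the subsequent generation of a start script could look like the following:

import json
import os

def prepare_environment(subtask_info, prev_model_path=None):
    # Parameter instantiation and checking (the concrete checks are illustrative).
    params = dict(subtask_info.get("params", {}))
    ratio = params.get("train_ratio")
    if ratio is not None and not 0 < ratio < 1:
        raise ValueError("train_ratio must lie in (0, 1)")
    # Load the model produced by the previous step, if this subtask depends on one.
    prev_model = None
    if prev_model_path and os.path.exists(prev_model_path):
        with open(prev_model_path, "rb") as f:
            prev_model = f.read()
    return {"params": params, "prev_model": prev_model}

def generate_start_script(subtask_info, env):
    # env would carry the node id, framework home, ports, etc. in a real deployment.
    return (
        "#!/bin/bash\n"
        f"export FL_FRAMEWORK_HOME={env['framework_home']}\n"
        f"python run_subtask.py --node {env['node_id']} "
        f"--conf '{json.dumps(subtask_info)}'\n"
    )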

Step 204, starting a federated learning framework system based on the start script of the modeling subtask, calling a corresponding federated learning algorithm framework, and executing the modeling subtask to obtain a target model, wherein the federated learning framework system is configured with federated learning algorithm frameworks of different start configurations.

Illustratively, the federated learning framework system may be composed of different federated learning algorithm frameworks, where a federated learning algorithm framework may be, for example, the FATE algorithm framework based on homomorphic encryption or the Rosetta algorithm framework based on an MPC (secure multi-party computation) scheme; the embodiment of the present invention does not limit the type of federated learning algorithm framework, which those skilled in the art can determine according to actual needs. The federated learning algorithm is selected, and the corresponding federated learning algorithm framework is chosen through the start scripts of the different subtasks.
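The sketch below illustrates how a start configuration might be mapped to one of the configured frameworks; the registry, keys, and default value are assumptions for illustration and are not the actual selection logic of FATE or Rosetta:

# Hypothetical registry mapping a start configuration to a federated learning
# algorithm framework within the framework system.
FRAMEWORK_REGISTRY = {
    "homomorphic_encryption": "FATE",   # e.g. Paillier-style protocols
    "mpc": "Rosetta",                   # secure multi-party computation
}

def select_framework(start_config):
    scheme = start_config.get("privacy_scheme", "homomorphic_encryption")
    try:
        return FRAMEWORK_REGISTRY[scheme]
    except KeyError:
        raise ValueError(f"no federated learning algorithm framework registered for {scheme!r}")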

The embodiment of the invention discloses a modeling task processing method, which comprises the steps of obtaining a task configuration file of a modeling task; scheduling the modeling subtasks based on the configuration files of the modeling subtasks to generate configuration information of the modeling subtasks; generating a starting script of the modeling subtask based on the configuration information and the deployment environment information of the modeling task; and starting a federated learning framework system based on the start script of the modeling task, calling a corresponding federated learning algorithm framework, and executing the modeling subtask to obtain a target model, wherein federated learning algorithm frameworks with different start configurations are configured in the federated learning framework system. By implementing this method, multiple algorithm frameworks are fused within the established federated learning framework system, and the algorithm framework corresponding to the modeling task is called to execute it, so that the problem of incompatibility among different algorithm frameworks is solved and the complex steps in the product development process are simplified.

As an optional implementation manner of the present invention, the task configuration file includes the input/output data, a preset model, task parameters, and the like of the modeling task; in step 202, the process of scheduling the modeling subtask based on the configuration file of the modeling subtask and generating the configuration information of the modeling subtask specifically includes: analyzing the task configuration file to obtain the execution sequence and the dependency information of the modeling task; and scheduling the modeling task based on the execution sequence and the dependency information to generate the configuration information of the modeling task.

Illustratively, the task configuration file contains the categories of the subtasks and the input/output data, preset models, and corresponding task parameters on which the subtasks depend. According to the subtask types and parameters in the configuration file, the execution sequence of the subtasks is scheduled and arranged, and a corresponding Directed Acyclic Graph (DAG graph) is generated; the DAG graph contains the execution sequence and dependency information of the subtasks. The workflow service then schedules the subtasks according to the DAG graph information. The specific scheduling steps may be as follows: first, check whether the tasks on which each subtask depends have finished; second, parse the configuration required to start the subtask and generate the configuration information of the subtasks of the different partner nodes; then send the configuration information of the subtasks of the different partner nodes to the corresponding target node through a network channel.
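A minimal sketch of this DAG-based scheduling, assuming the subtask dictionaries sketched earlier; graphlib is used here only to illustrate topological ordering and is not named by the embodiment:

from graphlib import TopologicalSorter   # Python 3.9+

def build_dag(subtasks):
    # Map each subtask name to the set of subtasks it depends on.
    return {s["name"]: set(s.get("depends_on", [])) for s in subtasks}

def schedule(subtasks):
    order = TopologicalSorter(build_dag(subtasks)).static_order()  # execution sequence
    by_name = {s["name"]: s for s in subtasks}
    for name in order:
        sub = by_name[name]
        # At this point the workflow service would confirm that the dependencies
        # have finished, generate per-partner-node configuration information,
        # and push it to the corresponding target node over a network channel.
        yield name, sub.get("params", {})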

As an optional embodiment of the present invention, in the federated learning framework system, for federated learning algorithm frameworks with different start configurations, the data transmission protocols of the federated learning algorithm frameworks are the same. Illustratively, different modeling tasks correspond to different start scripts, and an appropriate federated learning algorithm framework is selected according to the different start scripts; selecting the most suitable federated learning algorithm framework increases the calculation speed and improves the algorithm accuracy.

As an optional embodiment of the present invention, in the federated learning framework system, the federated learning algorithm frameworks with different start configurations use a unified data transmission protocol, which is achieved by adjusting the server and/or the client of the data transmission protocol in advance.

Illustratively, a specific operation process involves data exchange among a plurality of nodes, and the communication transmission modes of different algorithm frameworks differ. For example, if one federated learning algorithm framework is the FATE algorithm framework and the other is the Rosetta algorithm framework, the FATE framework realizes data interaction through a remote/get method built on the grpc protocol, while the Rosetta framework realizes data interaction through a send/get method of an io module built on the socket protocol. In the embodiment of the present invention, federated learning frameworks of different kinds are implemented in different development languages, the definition of the transmission objects of the federated learning frameworks is unified, and the grpc protocol may be adopted to realize the federated communication interaction. Since the federated communication module of the FATE framework already uses the grpc protocol, the FATE federated communication module protocol is retained when the federated communication server side is designed, and the federated communication is divided into a server side and a client side. For the Rosetta federated communication module, the socket-based client implementation of federated communication is changed to a grpc implementation and the socket server implementation is removed, so that the Rosetta federated communication service and the FATE federated communication service are unified.
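The sketch below illustrates the idea of a single grpc-based send/get (remote/get) transfer client shared by both communication modules; the transfer_pb2 / transfer_pb2_grpc modules and the TransferService with Push/Pull RPCs are assumed to be generated from a shared .proto file and are hypothetical, not the real FATE or Rosetta APIs:

import grpc
# Assumed to be generated from a shared transfer.proto defining a TransferService
# with Push/Pull RPCs; these modules are hypothetical and not shipped by FATE or Rosetta.
import transfer_pb2
import transfer_pb2_grpc

class UnifiedTransferClient:
    """grpc client used by both the FATE-style remote/get module and the
    Rosetta-style send/get io module once its socket client is replaced."""

    def __init__(self, target: str):
        self._stub = transfer_pb2_grpc.TransferServiceStub(grpc.insecure_channel(target))

    def send(self, tag: str, payload: bytes) -> None:
        self._stub.Push(transfer_pb2.TransferRequest(tag=tag, payload=payload))

    def get(self, tag: str) -> bytes:
        return self._stub.Pull(transfer_pb2.TransferRequest(tag=tag)).payload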

As an optional embodiment of the present invention, in step 204 the process of calling the corresponding federated learning algorithm framework and executing the modeling subtask mainly includes: acquiring a plurality of task parameters of different types corresponding to the modeling subtask; and calling the corresponding federated learning algorithm framework to calculate the task parameters of the different types to obtain a task result of the modeling task. Illustratively, according to the parameter types and subtasks of each subtask, the most suitable federated learning algorithm framework in the federated learning framework system is selected, and the corresponding federated learning algorithm framework obtains the required training result according to the parameter types and the subtasks.

The embodiment of the invention discloses a modeling task processing device, as shown in FIG. 3, comprising:

the first obtaining module 301 is configured to obtain a task configuration file of the modeling subtask. For details, please refer to the related description of step 201 of any of the above embodiments, which is not repeated herein.

The first scheduling module 302 is configured to schedule the modeling subtask based on the configuration file of the modeling subtask, and generate configuration information of the modeling subtask. For details, please refer to the related description of step 202 in any of the above embodiments, which is not repeated herein.

A first starting module 303, configured to generate a starting script of the modeling subtask based on the configuration information and the deployment environment information where the modeling task is located. For details, please refer to the related description of step 203 of any of the above method embodiments, which is not repeated herein.

The first execution module 304 is configured to start the federal learning framework system based on the start script of the modeling subtask, call a corresponding federal learning algorithm framework, execute the modeling subtask, and obtain a target model, where the federal learning framework system is configured with federal learning algorithm frameworks with different start configurations. For details, please refer to the related description of step 204 of any of the above method embodiments, which is not repeated herein.

The modeling task processing device provided by the embodiment of the invention obtains the task configuration file of the modeling task; schedules the modeling subtasks based on the configuration files of the modeling subtasks to generate configuration information of the modeling subtasks; generates a starting script of the modeling subtask based on the configuration information and the deployment environment information of the modeling task; and starts a federated learning framework system based on the start script of the modeling task, calls a corresponding federated learning algorithm framework, and executes the modeling subtask to obtain a target model, wherein federated learning algorithm frameworks with different start configurations are configured in the federated learning framework system. By implementing this device, multiple algorithm frameworks are fused within the established federated learning framework system, and the algorithm framework corresponding to the modeling task is called to execute it, so that the problem of incompatibility among different algorithm frameworks is solved and the complex steps in the product development process are simplified.

The invention discloses a data analysis method, as shown in FIG. 4; the method comprises the following steps:

Step 401, obtaining node data of different data nodes.

Illustratively, the different data nodes are the different partner nodes in a federated learning algorithm. Each partner node trains and learns on local data through the federated learning algorithm to obtain the required data; in the federated learning process, the different partner nodes are required to provide data, and the data of each partner node is used for training and learning. For example, the partner nodes may be various banks, payment platforms, credit investigation platforms, and the like, and the target node may be a company providing loans. The types of data provided by the partner nodes may include the user's consumption quota, educational background, credit card limit/loan records/overdue records/defaults, overdue limits/records/defaults on other platforms, real estate, and the like. The target node builds a model according to the repayment behavior of its internal users and the task, and completes the prediction of characteristics of the users of each partner node such as access, credit limit, risk level, credit score, and fraud.

Step 402, performing intersection processing on the node data of each data node to obtain a common data set. Illustratively, the data provided by the data nodes of the partners differ, and the data between the data nodes of the partners is confidential; therefore, secure and confidential sample intersection needs to be performed on the data provided by the data nodes of the partners. A corresponding sample set is obtained after the confidential sample intersection, and before training the sample set is divided into a training set and a test set according to a certain proportion.
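A deliberately simplified stand-in for this step is sketched below: each party salts and hashes its sample identifiers and only the common hashed identifiers are kept, followed by a proportional train/test split. A production system would use a cryptographic private set intersection protocol rather than this plain hash comparison; all names here are illustrative.

import hashlib
import random

def hashed_ids(ids, salt):
    return {hashlib.sha256((salt + i).encode()).hexdigest(): i for i in ids}

def sample_intersection(ids_a, ids_b, salt="shared-salt"):
    ha, hb = hashed_ids(ids_a, salt), hashed_ids(ids_b, salt)
    return sorted(ha[h] for h in ha.keys() & hb.keys())   # the common data set

def train_test_split(samples, train_ratio=0.8, seed=42):
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]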

Step 403, performing feature screening based on the common data set to obtain feature codes. Illustratively, the feature screening of the data set may include, for example, performing binning on the training set and the test set and screening features based on metrics such as the feature information value (IV), the correlation coefficient, and the sample stability index.
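A sketch of one such screen, the information value (IV) computed after equal-frequency binning; the bin count, smoothing constant, and threshold below are illustrative assumptions rather than values fixed by the embodiment:

import numpy as np
import pandas as pd

def information_value(feature: pd.Series, label: pd.Series, bins: int = 10) -> float:
    df = pd.DataFrame({"x": feature, "y": label})
    df["bin"] = pd.qcut(df["x"], q=bins, duplicates="drop")   # equal-frequency binning
    grouped = df.groupby("bin", observed=True)["y"]
    bad = grouped.sum() + 0.5                                  # label 1 = bad, smoothed
    good = grouped.count() - grouped.sum() + 0.5
    dist_bad, dist_good = bad / bad.sum(), good / good.sum()
    woe = np.log(dist_good / dist_bad)
    return float(((dist_good - dist_bad) * woe).sum())

def screen_features(X: pd.DataFrame, y: pd.Series, iv_threshold: float = 0.02):
    ivs = {col: information_value(X[col], y) for col in X.columns}
    return [col for col, iv in ivs.items() if iv >= iv_threshold]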

Step 404, inputting the feature codes into a preset data analysis model to obtain an analysis result of the node data. Illustratively, the data analysis model is built with the federated learning framework of the above embodiment; the data of each node corresponds to the federated learning framework, and the corresponding analysis result is obtained through the federated learning algorithm of the above embodiment.

The embodiment of the invention discloses a data analysis method, which comprises: obtaining node data of different data nodes; performing intersection processing on the node data of each data node to obtain a common data set; performing feature screening based on the common data set to obtain feature codes; and inputting the feature codes into a preset data analysis model to obtain an analysis result of the node data, wherein the preset data analysis model is obtained by modeling through the modeling task processing method in the above embodiment. By implementing this method, multiple algorithm frameworks are fused within the established federated learning framework system, and the algorithm framework corresponding to the modeling task is called to execute it, so that the problem of incompatibility among different algorithm frameworks is solved and the complex steps in the product development process are simplified.

The present invention discloses a data analysis apparatus, as shown in FIG. 5; the apparatus includes:

a second obtaining module 501, configured to obtain node data of different data nodes. For details, please refer to the related description of step 401 of any of the above method embodiments, which is not repeated herein.

The first processing module 502 is configured to perform intersection processing on node data of each data node to obtain a common data set. For details, please refer to the related description of step 402 of any of the above method embodiments, which is not repeated herein.

And a second processing module 503, configured to perform feature screening based on the common data set to obtain a feature code. For details, please refer to the related description of step 403 in any of the above embodiments, which is not repeated herein.

The second executing module 504 is configured to input the feature code into a preset data analysis model, so as to obtain an analysis result of the node data, where the preset data analysis model is obtained by modeling through the modeling task processing method according to the foregoing embodiment. For details, please refer to the related description of step 404 of any of the above embodiments, which is not repeated herein.

The embodiment of the invention discloses a data analysis device, which is used for acquiring node data of different data nodes; performing intersection processing on the node data of each data node to obtain a common data set; performing feature screening based on the common data set to obtain feature codes; and inputting the feature codes into a preset data analysis model to obtain an analysis result of the node data, wherein the preset data analysis model is obtained by modeling according to the modeling task processing method of the above embodiment. By implementing this device, multiple algorithm frameworks are fused within the established federated learning framework system, and the algorithm framework corresponding to the modeling task is called to execute it, so that the problem of incompatibility among different algorithm frameworks is solved and the complex steps in the product development process are simplified.

An embodiment of the present invention further provides an electronic device; as shown in FIG. 6, the electronic device may include a processor 601 and a memory 602, where the processor 601 and the memory 602 may be connected by a bus or in another manner; FIG. 6 illustrates the example of connection by a bus.

The processor 601 may be a Central Processing Unit (CPU). The processor 601 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof.

The memory 602, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the modeling task processing method or the data analysis method in the embodiments of the present invention. The processor 601 executes various functional applications and data processing of the processor, namely, a modeling task processing method or a data analysis method in the above method embodiments, by executing the non-transitory software program, instructions and modules stored in the memory 602.

The memory 602 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor 601, and the like. Further, the memory 602 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 602 may optionally include memory located remotely from the processor 601, which may be connected to the processor 601 through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The one or more modules are stored in the memory 602 and, when executed by the processor 601, perform a modeling task processing method as in the embodiment shown in fig. 2 or a data analysis method as shown in fig. 4.

The details of the electronic device may be understood by referring to the relevant description and effects in the embodiments shown in fig. 2 or fig. 4, which are not described herein again.

It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory (Flash Memory), a Hard Disk (Hard Disk Drive, abbreviated as HDD), a Solid State Drive (SSD), or the like; the storage medium may also comprise a combination of memories of the kind described above.

The embodiment of the invention discloses a modeling task processing system; as shown in FIG. 7, the system comprises a data analysis module 701 and a federated learning framework system 702, wherein the federated learning framework system 702 is configured with federated learning algorithm frameworks with different starting configurations;

illustratively, the data analysis module 701 includes:

a third obtaining module 7011, configured to obtain a task configuration file of the modeling subtask. For details, please refer to the related description of step 401 of any of the above method embodiments, which is not repeated herein.

And a second scheduling module 7012, configured to schedule the modeling subtask based on the configuration file of the modeling subtask, and generate configuration information of the modeling subtask. For details, please refer to the related description of step 202 of any of the above method embodiments, which is not repeated herein.

A second starting module 7013, configured to generate a starting script of the modeling subtask based on the configuration information and the deployment environment information where the modeling subtask is located. For details, please refer to the related description of step 203 of any of the above embodiments, which is not repeated herein.

And a third executing module 7014, configured to start the federated learning framework system based on the start script of the modeling subtask, call a corresponding federated learning algorithm framework, and execute the modeling subtask. For details, please refer to the related description of step 204 of any of the above embodiments, which is not repeated herein.

The invention provides a modeling task processing system which acquires a task configuration file of a modeling task; schedules the modeling subtasks based on the configuration files of the modeling subtasks to generate configuration information of the modeling subtasks; generates a starting script of the modeling subtask based on the configuration information and the deployment environment information of the modeling task; and starts the federated learning framework based on the start script of the modeling task, calls the corresponding federated learning algorithm, and executes the modeling subtask to obtain a target model, wherein federated learning algorithms with different start configurations are configured in the federated learning framework. By implementing this system, multiple algorithm frameworks are fused within the established federated learning framework, and the algorithm framework corresponding to the modeling task is called to execute it, so that the problem of incompatibility among different algorithm frameworks is solved and the complex steps in the product development process are simplified.

Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.
