Sample enhancement method, device and system for model training
1. A data enhancement method for model training, the method being applicable to AI model training, characterized by comprising the following steps:
receiving first data, wherein the first data is main sample data for training with a target algorithm;
receiving a second data set containing one or more pieces of second data as auxiliary data for the first data, the second data being intended to enhance the first data;
processing and determining a third data set, wherein the third data set comprises one or more pieces of third data, each piece of third data being a return value obtained after the first data is quasi-enhanced by its corresponding data to be enhanced;
processing and determining fourth data, the fourth data comprising the first data and a fifth data set, the fifth data set being comprised in the second data set.
2. The method of claim 1, wherein said determining of said fourth data further comprises:
training a reinforcement learning model by using the third data set and the quasi-enhancement data corresponding to each element of the third data set.
3. The method of claim 1, wherein the second data is intended to enhance the first data, and further comprising:
determining an enhancement strategy of the second data for the first data, wherein the enhancement strategy depends on how the second data extends the metadata of the first data.
4. The method of claim 3, wherein the second data is an extension of the metadata of the first data, and further comprising:
determining, according to how the metadata is extended, a selection between two types of enhancement: horizontally extending the feature space of the first data, or vertically extending the value space of the first data;
and determining the enhancement strategy according to the manner in which the metadata is extended.
5. The method of claim 1, wherein the processing of the third data further comprises:
training, by using an AI algorithm, the quasi-enhancement data of the first data obtained in a quasi-enhancement manner, to obtain sixth data and a piece of third data in the third data set, wherein the sixth data is an algorithm model obtained by training with the AI algorithm.
6. The method of claim 5, wherein the processing and determining of the fourth data further comprises:
using a sixth data set together with the third data set, the sixth data set containing one or more pieces of the sixth data.
7. The method of claim 1, further comprising:
training the first data and the fifth data set by using an AI algorithm, and outputting a corresponding AI algorithm model.
8. A computer program, characterized in that it comprises instructions for performing the method according to any one of claims 1 to 7.
9. A computer-readable storage medium, characterized in that the computer storage medium stores program instructions that, when executed by a processor, cause the processor to perform the method of any of claims 1-7.
10. A device for executing a computer program, characterized by comprising a processing component, a storage component and a communication component that are interconnected, wherein the storage component is configured to store data processing code, the communication component is configured to exchange information with external devices, and the processing component is configured to invoke the program code to perform the method according to any one of claims 1-7.
Background
It is widely recognized that Artificial Intelligence (AI) will be one of the most influential technologies of the twenty-first century and beyond. The core capability of AI is embodied in an AI model, which is obtained by training samples with an AI algorithm. The quality of the sample data therefore tends to have a significant impact on the utility and quality of the resulting model.
Reinforcement learning, which evolved from machine learning, uses data enhancement to improve sample quality. One typical way of enhancing data is to input more prior knowledge; another is to apply self-looping breadth-wise combination or depth-wise superposition to the data. Both approaches aim to mine the sample data as fully as possible and increase its value.
However, for any particular sample data, the meanings and patterns contained in it are limited, so such data enhancement methods have limited effect.
Disclosure of Invention
The present application therefore proposes methods, systems and devices that solve the above problems by using auxiliary data (sets) to improve sample quality and thereby improve the quality of the trained model. The methods can be applied to unspecified tools, devices and systems, or even a data center or cloud service center, to form a model-training sample enhancement system. Accordingly, the invention comprises the following:
In one aspect, a sample enhancement method for model training is provided, comprising:
receiving first data, wherein the first data is main sample data for training with a target algorithm; receiving a second data set containing one or more pieces of second data as auxiliary data for the first data, the second data being intended to enhance the first data; processing and determining a third data set, wherein the third data set comprises one or more pieces of third data, each piece of third data being a return value obtained after the first data is quasi-enhanced by its corresponding data to be enhanced; and processing and determining fourth data, the fourth data comprising the first data and a fifth data set, the fifth data set being comprised in the second data set. Further, the processing and determining of the fourth data further includes training a reinforcement learning model by using the third data set and the quasi-enhancement data corresponding to each element of the third data set. Further, the second data is intended to enhance the first data, and the method further comprises determining an enhancement strategy of the second data for the first data, wherein the enhancement strategy depends on how the second data extends the metadata of the first data. Further, the second data extends the metadata of the first data, and the method further includes determining, according to how the metadata is extended, a selection between two types of enhancement: horizontally extending the feature space of the first data, or vertically extending the value space of the first data; and determining the enhancement strategy according to the manner in which the metadata is extended. Further, the processing of the third data further includes training, by using an AI algorithm, the quasi-enhancement data of the first data obtained in a quasi-enhancement manner, to obtain sixth data and a piece of third data in the third data set, where the sixth data is an algorithm model obtained by training with the AI algorithm.
Further, the processing determines the fourth data, the processing and determining further using a sixth data set together with the third data set, the sixth data set comprising one or more pieces of the sixth data. Further, the method also comprises training the first data and the fifth data set by using an AI algorithm, and outputting a corresponding AI model.
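As an illustrative sketch (not part of the claimed method), the two expansion modes can be shown on toy records: horizontal expansion adds metadata items (features) to existing records via a shared key, while vertical expansion adds records that fit the existing metadata items. All field names and values below are hypothetical.

```python
# Sketch of the two expansion modes: horizontal expansion of the feature
# space versus vertical expansion of the value space. Hypothetical data.

main_rows = [                                   # "first data" (main sample)
    {"id": 1, "age": 30, "title": "engineer"},
    {"id": 2, "age": 41, "title": "teacher"},
]

# Horizontal (feature-space) expansion: auxiliary data supplies new
# metadata items (columns) for existing records, joined on a shared key.
aux_features = {1: {"city": "Shenzhen"}, 2: {"city": "Beijing"}}

def expand_features(rows, extra):
    return [{**row, **extra.get(row["id"], {})} for row in rows]

# Vertical (value-space) expansion: auxiliary data supplies new records
# (rows) that share the main data's existing metadata items.
aux_rows = [{"id": 3, "age": 25, "title": "designer"}]

def expand_values(rows, more_rows):
    schema = set(rows[0])
    return rows + [r for r in more_rows if set(r) == schema]

wide = expand_features(main_rows, aux_features)  # same rows, more columns
tall = expand_values(main_rows, aux_rows)        # same columns, more rows
```

Which of the two modes applies to a given piece of auxiliary data is exactly what the enhancement strategy derives from the metadata comparison.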
In an environment with a server cluster or a cloud data center network, an AI platform and engine provide users with convenient, easy-to-use AI capabilities from the data center, and an AI algorithm working on enhanced samples can provide users with more attractive, highly customizable, high-quality AI models. The data processing flow for sample enhancement is as follows: receive data serving as the main sample, also called main data, and divide the main data into training data and verification data; receive a data set serving as auxiliary samples, also called the auxiliary data set, which comprises a plurality of pieces of auxiliary data, each of which can quasi-enhance the main data for the training sample through disassembly, where quasi-enhancement is a trial enhancement process performed before the stage of training and generating the final algorithm model; in the disassembly process, analyze the metadata of the main data and the auxiliary data and determine, by a specific enhancement strategy, the quasi-enhancement mode of the auxiliary data with respect to the main data: either feature expansion or value expansion; the auxiliary data set determines and generates each piece of auxiliary data according to the calculated state space and action space; each piece of auxiliary data so obtained, together with the enhancement strategy, yields a quasi-enhanced sample, and all quasi-enhanced samples form a quasi-enhanced sample set; for each quasi-enhanced sample so determined, train it with the target AI algorithm to obtain a corresponding quasi-AI algorithm model, and verify the quasi-AI algorithm model with the verification data to obtain the return value corresponding to that quasi-enhanced sample; all return values form a return value set; process the return value set and the quasi-enhanced sample set with a reinforcement learning model to obtain final enhanced sample data for the main sample data; and then train the final enhanced sample with the target AI algorithm to obtain an enhanced AI model corresponding to the target algorithm.
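The data processing flow above can be sketched end to end with a toy stand-in for the target AI algorithm (a mean predictor) and a return value defined as negative validation error. All names, data and the reward definition are illustrative assumptions, not an actual platform API.

```python
# End-to-end sketch: split main data, run quasi-enhancement trials with
# each auxiliary candidate, collect return values, keep the best fusion.

def train(sample):                      # toy stand-in for the AI algorithm
    mean = sum(sample) / len(sample)
    return lambda: mean                 # "model": always predicts the mean

def reward(model, validation):          # return value: negative MAE
    return -sum(abs(model() - v) for v in validation) / len(validation)

main_data = [10.0, 12.0, 11.0, 50.0]    # main sample data (hypothetical)
train_part, validate_part = main_data[:2], main_data[2:]

# Auxiliary data set: each entry is one candidate enhancement.
aux_set = {"noisy": [90.0, 95.0], "useful": [11.0, 30.0]}

# Quasi-enhancement trials: fuse, train, validate, record the return value.
returns = {}
for name, aux in aux_set.items():
    quasi_sample = train_part + aux                 # trial fusion
    returns[name] = reward(train(quasi_sample), validate_part)

# Final enhancement: keep the auxiliary data with the best return value,
# then train the final enhanced sample with the target algorithm.
best = max(returns, key=returns.get)
final_sample = train_part + aux_set[best]
final_model = train(final_sample)
```

In the patent's flow, the final selection is made by a reinforcement learning model over the whole return value set rather than by a single argmax, but the trial-and-reward structure is the same.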
Thus, a product and service system comprising some or all of these methods and steps can provide a higher-quality AI model through the enhanced samples, and, with a reinforcement learning algorithm selecting among candidates, can even provide more flexible, highly customized model outputs for the same AI algorithm, giving AI the capability to boost cloud applications and big data applications and accelerating their adoption.
In another aspect, a sample enhancement apparatus for model training is provided, the apparatus comprising:
main data: the main sample data for AI algorithm training;
the auxiliary data set: the sample auxiliary data for AI algorithm training. The main data and the auxiliary data set may be stored in a data storage device, a memory module, or a memory system providing an external access interface;
a model training unit: this unit performs algorithm training and outputs a corresponding model. Specifically, the training process includes two stages: the first stage trains the samples to be enhanced and uses the resulting return values as input to the enhancement training; the second stage trains the final-version sample to finally obtain an AI algorithm model meeting the quality requirement;
a reinforcement training unit: this unit trains a reinforcement learning algorithm to output the enhanced sample data generated by reinforcement learning. Specifically, the reinforcement learning algorithm takes as input the main sample data, the auxiliary data set to be enhanced, and the return value set corresponding to the auxiliary data to be enhanced, and the final version of the enhancement auxiliary data is determined by the reinforcement learning algorithm. Each piece of auxiliary data to be enhanced has a corresponding relationship with a return value;
a data middle platform: the middle platform performs various conversion and processing operations on data to support the collaboration and enhancement of sample data. Specifically, the middle platform comprises a corresponding data access interface, a collecting unit, a disassembling unit, a fusing unit and an associating unit, which respectively provide auxiliary data collection, disassembly, fusion, association and the like for sample enhancement.
The interfaces and modules provided by the invention, together with the other units, modules, platforms and engines required for an actual product implementation, realize an enhanced model training process based on limited main data and unlimited auxiliary data, thereby forming a model enhancement device. This can be expressed as follows: the model enhancement device receives data serving as the main sample, also called main data, and divides the main data into training data and verification data; the model enhancement device receives a data set serving as auxiliary samples, also called the auxiliary data set, which comprises a plurality of pieces of auxiliary data, each of which can quasi-enhance the main data for the training sample through disassembly, where quasi-enhancement is a trial enhancement process performed before the stage of training and generating the final algorithm model; in the disassembly process, the model enhancement device analyzes the metadata of the main data and the auxiliary data and determines, according to a specific enhancement strategy, the quasi-enhancement mode of the auxiliary data with respect to the main data: either feature expansion or value expansion; the auxiliary data set determines and generates each piece of auxiliary data according to the calculated state space and action space; each time the model enhancement device obtains a piece of auxiliary data and an enhancement strategy, it obtains a quasi-enhanced sample, and all quasi-enhanced samples form a quasi-enhanced sample set; for each quasi-enhanced sample so determined, the model enhancement device trains it with the target AI algorithm to obtain a corresponding quasi-AI algorithm model, verifies the quasi-AI algorithm model with the verification data, and obtains the return value corresponding to that quasi-enhanced sample; all return values form a return value set; and the model enhancement device processes the return value set and the quasi-enhanced sample set with a reinforcement learning model to obtain final enhanced sample data for the main sample data, and then trains the final enhanced sample with the target AI algorithm, so that the model enhancement device obtains an enhanced AI model corresponding to the target algorithm.
Therefore, a product and service system containing such a functional device can provide a higher-quality AI model, and, with a reinforcement learning algorithm selecting among candidates, can even provide highly customized, more flexible model outputs for the same AI algorithm, giving AI the capability to boost cloud applications and big data applications and accelerating their adoption.
In another aspect, a computer-readable storage medium is provided, which stores program instructions that, when executed by a processor, cause the processor to perform the above-described method.
In another aspect, an apparatus for management is provided, comprising a storage component, a processing component and a communication component that are interconnected. The storage component is used to store data processing code, and the communication component is used to exchange information with external devices; the processing component is configured to invoke the program code to perform the functions described above with respect to the apparatus.
Drawings
In order to illustrate the technical solution of the present invention more clearly, and to illustrate the elements, modes and processes for achieving its objects, the following drawings are provided for the embodiments of the present invention:
FIG. 1 is one of the system composition diagrams of the sample enhancement for model training proposed by the present invention;
FIG. 2 is one of the system composition diagrams of the sample enhancement for model training proposed by the present invention;
FIG. 3 is one of the system composition diagrams of the sample enhancement for model training proposed by the present invention;
FIG. 4 is one of the data diagrams of the sample enhancement for model training proposed by the present invention;
FIG. 5 is one of the data diagrams of the sample enhancement for model training proposed by the present invention;
FIG. 6 is one of the operation execution flows of the sample enhancement for model training proposed by the present invention;
FIG. 7 is one of the operation execution flows of the sample enhancement for model training proposed by the present invention;
FIG. 8 is one of the operation execution flows of the sample enhancement for model training proposed by the present invention.
Detailed Description
The embodiments of the present invention will be described below with reference to the drawings.
The terms "first," "second," and "third," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, "include" and "have" and any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
As used in this application, the terms "server," "device," "apparatus," "unit," "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a server may be, but is not limited to, a processor, a data processing platform, a computing device, a computer, two or more computers, or the like; a unit may be, but is not limited to being, a process running on a processor, a runnable object, an executable, a thread of execution, or any other executable computer program. One or more units may reside within a process and/or thread of execution and a unit may be localized on one computer and/or distributed between 2 or more computers. In addition, these units may execute from various computer readable media having various data structures stored thereon. The elements may communicate by way of local and/or remote processes based on a signal having one or more data packets (e.g., data from two elements interacting with another element in a local system, distributed system, and/or across a network, such as the internet with other systems by way of the signal).
First, some terms in the present application are explained so as to be easily understood by those skilled in the art. The terms listed include the following:
(1) cloud computing: Cloud Computing refers to a new computing paradigm with the advantages of integration and connectivity in a network environment, able to provide computing, storage and even software to users as a service. The difference from older computing paradigms is that, for the user, it has no visible fixed form, not even a fixed resource state, so the new paradigm is called cloud computing;
(2) artificial intelligence: Artificial Intelligence, AI for short, is the general term for the methods, technologies, software, hardware and systems that simulate human intelligence through computing systems;
(3) machine learning: machine learning is an important branch of the AI field. Machine learning extracts data patterns from sample data in order to make the best possible predictions on application data. In terms of current development, machine learning is divided into supervised learning, unsupervised learning and reinforcement learning;
(4) algorithm-sample-model: these are three important concepts in machine learning. The algorithm provides prior guidance, and different types of machine learning carry different amounts of prior knowledge in the algorithm; this prior knowledge requires a certain amount of data to be converted into, and to verify, predictive capability, and this data is called the sample; the algorithm finds, in the value space provided by the sample data, an ability to predict and process future data, and the machine representation of this ability is the model. In general, a sample is divided into a training sample and a verification sample;
(5) reinforcement learning model: machine learning algorithms that mark the learning process with rewards or penalties using closely correlated return values at each step of a time sequence, and thereby gain predictive power through a continuing process of improvement, are reinforcement learning algorithms. The model obtained by training such an algorithm is the reinforcement learning model.
Next, the problem addressed by the present invention and the technical method for solving it are summarized. With the development of AI applications, demands on AI have risen in terms of quality, ease of use and convenience. The traditional approach of training an AI model on a fixed, specific sample limits the flexibility of AI. Given the realistic conditions that sample data is inevitably limited and AI algorithms are scarce, the invention resolves this contradiction by providing a more flexible sample enhancement method that improves the flexibility of the AI model, thereby improving the usability and convenience of AI applications and facilitating the adoption of AI in a wider range of settings.
The invention will be further explained with reference to the drawings. Wherein:
fig. 1 is one of the system components of the present invention. The figure illustrates a compositional relationship regarding the implementation of data collaboration and sample enhancement functions. Wherein:
101-main data: the main sample data for training the target AI algorithm;
102-auxiliary data set: the sample auxiliary data for training the AI algorithm;
103-model training unit: this unit performs algorithm training and outputs a corresponding model;
104-enhancement training unit: this unit trains a reinforcement learning algorithm to output the enhanced sample data generated by reinforcement learning;
105-data middle platform: the middle platform performs various conversion and processing operations on data to support the collaboration and enhancement of sample data.
Fig. 2 is one of the system components of the present invention. The figure illustrates a compositional relationship regarding the implementation of data collaboration and sample enhancement functions. Wherein:
201-main data: the main sample data for training the target AI algorithm;
202-auxiliary data set: the sample auxiliary data for training the AI algorithm;
203-model training unit: this unit performs algorithm training and outputs a corresponding model;
204-enhancement training unit: this unit trains a reinforcement learning algorithm to output the enhanced sample data generated by reinforcement learning;
205-data storage interface: this interface performs access operations on the required data;
206-data acquisition unit: this unit performs operations such as the original acquisition of the data maintained and managed by the data middle platform;
207-data disassembly unit: this unit disassembles the data required for sample enhancement;
208-data fusion unit: this unit performs the data fusion operations required for sample enhancement;
209-data association unit: this unit performs the data association operations required for sample enhancement.
Fig. 3 is one of the system components of the present invention. This figure illustrates the division of the aforementioned functional components. Wherein:
301-data interaction unit: this unit provides data interaction and data control between the application layer and the middle platform;
302-application acquisition unit: this unit provides acquisition enablement and data acquisition for applications;
303-enhancement policy unit: this unit provides and manages the enhancement strategies required for sample enhancement;
304-model training unit: this unit executes the training process of the algorithm model;
305-data acquisition unit: this unit manages the collected data;
306-data association unit: this unit performs association analysis on the disassembled data;
307-reinforcement learning unit: this unit performs the reinforcement learning process required by the training process;
308-model evaluation unit: this unit evaluates the samples to be enhanced, using the verification sample subset;
309-data disassembly unit: this unit disassembles the auxiliary data;
310-data fusion unit: this unit fuses the data to be enhanced;
311-AI modeling engine: this engine provides the operational support required for algorithm modeling;
312-big data engine: this engine provides the capabilities and service support needed for other processing of data.
Fig. 4 is a data diagram of the present invention. The figure illustrates how the main data and auxiliary data used in the invention are expressed in terms of mapping relationships, disassembly and transformation. Wherein:
401-main data schematic: illustrates the sample data to be algorithmically trained;
402-auxiliary data set schematic: illustrates the auxiliary data set to be used for sample enhancement;
403-main data metadata item schematic: illustrates the metadata items corresponding to the main data;
404-auxiliary data metadata item schematic: illustrates a metadata item of the auxiliary data set;
405-metadata schematic of the main data: illustrates the complete metadata corresponding to the main data;
406-metadata set schematic of the auxiliary data: illustrates the metadata subsets corresponding to the auxiliary data subsets; identical metadata items may exist among the metadata of different auxiliary data subsets;
407-verification data in the main data: illustrates the verification data divided from the main data;
408-training data in the main data: illustrates the training data divided from the main data;
409-auxiliary data subsets in the auxiliary data set: illustrates the corresponding partial subsets of the auxiliary data in the auxiliary data set;
It should be noted that: on the one hand, the main data and auxiliary data set shown in 401 and 402 are schematic representations, and the metadata layer and the data layer exhibit a mapping relationship; on the other hand, neither 403-406 nor 407-409 limit the width or depth of the data; in yet another aspect, the data relationships in the figure are merely illustrative and are not a specific limitation on the implementation of the invention.
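The metadata-layer mapping in the figure can be sketched minimally by reducing each metadata set to a set of item names and intersecting them; the field names below are hypothetical.

```python
# Sketch of the metadata mapping: shared metadata items between the main
# data and each auxiliary subset give the candidate correspondence.

main_metadata = {"name", "photo", "occupation", "skills"}

aux_metadata = {                        # hypothetical auxiliary subsets
    "news":  {"name", "photo", "topic"},
    "forum": {"name", "skills", "post_count"},
}

def correspondence(main_items, aux_items):
    """Metadata items the main and auxiliary data have in common."""
    return main_items & aux_items

mapping = {src: correspondence(main_metadata, items)
           for src, items in aux_metadata.items()}

# As noted for 406, different auxiliary subsets may themselves share
# identical metadata items with each other.
shared_between_aux = aux_metadata["news"] & aux_metadata["forum"]
```

The resulting mapping is the raw material from which an enhancement strategy decides whether, and how, each auxiliary subset can extend the main data.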
FIG. 5 is a data diagram of the present invention. This figure illustrates a hypothetical sample product implementing the core method of the invention: judging occupation and skill specialties directly from photos. The product mainly uses data from a resume website as the sample for training to obtain an algorithm model, and then uses the model to identify the occupation and skill specialties of the subject of a target picture. The implementation of the core method of the invention is as follows: the resume website data is used as the main sample data; data obtainable from other websites is combed to obtain classified information and used as auxiliary data; where the types can be made to correspond, the auxiliary data and the resume website data are fused for enhancement, and the fused data is used as the final sample for algorithm training, yielding a new "picture -> occupation" recognition model. Wherein:
501-main data of the resume website, i.e., resume data;
502-process-type data from sources such as news/forums/academia, used as the enhancement auxiliary data;
503-metadata items of the resume data;
504-description items of other kinds of data;
505-metadata of the resume data;
506-metadata of other kinds of data;
507-verification data in the resume data set;
508-the resume data set;
509-various kinds of auxiliary data used for the enhanced, classified extraction of the resume data.
Fig. 6 is one of the operation execution flow charts for implementing the present invention. This figure illustrates a single augmentation training process. Wherein:
10A-obtain the sets of metadata items: this operation acquires the sets of metadata items of the main data and of a given piece of auxiliary data;
10B-determine the correspondence of metadata items: this operation analyzes and determines the correspondence of the metadata items;
10C-analyze the enhancement strategy against the data set: this operation analyzes the enhancement strategy in relation to the data set;
10D-determine the enhancement mode: this operation makes the enhancement decision between feature expansion and value expansion;
10E-form the data set to be enhanced: this operation generates the data to be enhanced based on the enhancement decision;
10F-generate the model to be evaluated: this operation trains the quasi-enhanced data set with the training algorithm to obtain a model to be evaluated;
10G-verify and obtain the return value: this operation applies the validation set to the model to be evaluated, yielding the corresponding return value.
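Steps 10A-10D of this flow can be sketched as a simple decision over metadata item sets (10E-10G then proceed as in the training pipeline). The rule below, where an identical schema triggers value expansion and a partial overlap triggers feature expansion, is one plausible reading of the decision, not the patent's prescribed logic.

```python
# Sketch of 10A-10D: compare metadata item sets and choose the
# enhancement mode. Item names are hypothetical.

def choose_mode(main_items, aux_items):
    """Decide the enhancement mode from the metadata correspondence."""
    if aux_items == main_items:
        return "value"      # identical schema: add rows (value expansion)
    if main_items & aux_items:
        return "feature"    # partial overlap: add columns via shared keys
    return "skip"           # no correspondence, no enhancement possible

main_items = {"id", "age", "label"}
modes = {
    "same_schema": choose_mode(main_items, {"id", "age", "label"}),
    "overlap":     choose_mode(main_items, {"id", "city"}),
    "disjoint":    choose_mode(main_items, {"lat", "lon"}),
}
```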
Fig. 7 is one of the operation execution flow charts for implementing the present invention. The figure illustrates a selection process to be enhanced for the main data by the secondary data set. Wherein:
20A-extract the metadata items of the main data: this operation extracts the metadata of the main data;
20B-extract the metadata items of the auxiliary data: this operation extracts the metadata items of the auxiliary data;
20C-unpack the metadata into metadata items: this operation decomposes the metadata of the main data and the auxiliary data into individual metadata items;
20D-determine the correspondence: this operation calls a policy-related interface to judge the correspondence between the main data and the auxiliary data on each metadata item; the called policy interface analyzes the correspondence among the main data, the auxiliary data, and each metadata item;
20E-determine the enhancement strategy: this operation calls a policy-related interface to determine the enhancement strategy of the auxiliary data with respect to the main data; the called strategy-determination interface uses the correspondence information obtained by the foregoing analysis;
20F-iteratively perform data fusion and training: this operation obtains the corresponding return value through the model to be evaluated; by iterating this operation, the return-value set of all combinations is obtained;
20G-select the combination: this operation determines the final combination based on the return values.
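Operations 20A-20E can be sketched as follows. This is an assumed simplification: metadata items are modeled as a mapping from column name to value type, and the interface names (`extract_metadata`, `correspondence`, `enhancement_strategy`) are illustrative, not the patent's interfaces.

```python
def extract_metadata(table):
    """20A/20B: metadata items as {column_name: value_type},
    taken from the first row of a list-of-dicts table."""
    first = table[0]
    return {col: type(val).__name__ for col, val in first.items()}

def correspondence(main_meta, aux_meta):
    """20C/20D: per-item relation between main and auxiliary metadata."""
    rel = {}
    for col, typ in aux_meta.items():
        if col in main_meta:
            rel[col] = "shared" if main_meta[col] == typ else "conflict"
        else:
            rel[col] = "new"
    return rel

def enhancement_strategy(rel):
    """20E: if the auxiliary data brings new metadata items, expand the
    feature space (transverse); if it only repeats shared items, expand
    the value space (longitudinal)."""
    if any(r == "new" for r in rel.values()):
        return "feature-expansion"
    return "value-expansion"
```

For example, an auxiliary table sharing only the `id` column but adding an `income` column would yield a feature-expansion strategy, while one repeating the main table's columns would yield value-expansion.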
Fig. 8 is one of the operation execution flow charts for implementing the present invention. The figure illustrates a process for determining and selecting enhancement samples using a reinforcement learning algorithm. Wherein:
30A-receive and extract the main data: this operation receives and extracts the sample main data used for training;
30B-divide the main data into training samples and verification samples: this operation partitions the main data to obtain training samples and verification samples;
30C-receive and extract the auxiliary data set: this operation receives and extracts the auxiliary data set used for sample enhancement;
30D-partition the auxiliary data set into subsets: this operation partitions the auxiliary data into subsets according to the state space and the action space;
30E-calculate the quasi-enhancement return value of each subset of the auxiliary data set with respect to the main data: this operation derives the return value to be enhanced; for each subset obtained by partitioning the auxiliary data set, and under the support of the enhancement strategy, the quasi-enhancement return value of that subset with respect to the sample data is obtained through AI-algorithm training and model verification;
30F-construct the quasi-enhancement sample set and the return-value set: this operation constructs the set of quasi-enhancement samples obtained from the subsets, together with the set of return values obtained from those quasi-enhancement samples;
30G-train on the quasi-enhancement sample set and the return-value set: this operation trains a reinforcement learning model on the set of quasi-enhancement samples and the set of return values; the quasi-enhancement sample set can be represented either as the set of data obtained by fusing the training samples with the auxiliary data corresponding to each return value, or as the set of auxiliary data corresponding to each return value with the training samples appended;
30H-obtain the final enhancement sample: this operation obtains, from the output of the reinforcement learning algorithm, the final enhancement auxiliary data, or the training samples that finally contain the auxiliary data;
30K-obtain the final model of the AI algorithm: this operation performs AI-algorithm training with the final enhancement auxiliary data and the sample main data to obtain the final AI algorithm model.
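The Fig. 8 flow (operations 30D-30H) can be sketched as below. Note the hedge: a greedy pick over observed return values stands in for the trained reinforcement learning model, and `evaluate` is a hypothetical callback assumed to perform the training/verification of operations 10F-10G; none of these names come from the patent itself.

```python
import random

def partition(aux_data, n_subsets, seed=0):
    """30D: split the auxiliary data into subsets (the granularity at
    which the state/action spaces are defined)."""
    rows = list(aux_data)
    random.Random(seed).shuffle(rows)
    return [rows[i::n_subsets] for i in range(n_subsets)]

def score_subsets(subsets, evaluate):
    """30E/30F: quasi-enhancement return value for each subset, via an
    assumed evaluate() callback that trains on main+subset and returns
    the validation reward."""
    return [(subset, evaluate(subset)) for subset in subsets]

def select_enhancement(scored):
    """30G/30H: stand-in for the reinforcement learning policy -- pick
    the subset whose quasi-enhancement return value is highest."""
    best_subset, _ = max(scored, key=lambda pair: pair[1])
    return best_subset
```

In the full method, operation 30K would then retrain the AI algorithm on the main data fused with the selected subset to produce the final model.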
In this application, the units described as separate parts may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in a single network node or distributed over multiple network nodes. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, depending on specific constraints and implementation requirements, the functional components in the embodiments of the present application may be integrated into one component, each component may exist alone physically, or two or more components may be integrated into one component. The integrated components may be implemented in the form of hardware or in the form of software functional units.
The integrated components, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the part of the technical solution of the present invention that essentially contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including instructions for causing one or more computer devices (which may be personal computers, servers, or network devices) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto; those skilled in the art can readily conceive equivalent modifications or substitutions within the technical scope disclosed herein, and such modifications or substitutions shall fall within the scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
It should be understood that, in the various embodiments of the present application, the serial numbers of the above processes do not imply a strict order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention. While the present application has been described herein in conjunction with various embodiments, other variations of the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application.