Training method for a ranking learning model, ranking method, apparatus, device and medium
1. A training method for a ranking learning model, wherein the method comprises:
collecting a plurality of training samples, each training sample comprising known training target protein information, information on two corresponding training drugs, and the difference between the true affinities of the two corresponding training drugs for the known training target;
training a ranking learning model based on the plurality of training samples, so that the ranking learning model learns the ability to predict the magnitude relationship between the affinities of the two training drugs in each training sample for the known training target protein.
2. The method of claim 1, wherein training a ranking learning model based on the plurality of training samples such that the ranking learning model learns the ability to predict the magnitude relationship between the affinities of the two training drugs in each of the training samples for the known training target protein comprises:
for each training sample, inputting the known training target protein information and the corresponding two pieces of training drug information in the training sample into the ranking learning model;
obtaining the difference between the predicted affinities of the two training drugs for the known training target protein, as output by the ranking learning model;
adjusting parameters of the ranking learning model based on the difference between the predicted affinities and the corresponding difference between the true affinities, so that the ranking learning model learns the ability to predict the magnitude relationship between the affinities of the two training drugs in each training sample for the known training target protein.
3. The method of claim 2, wherein adjusting parameters of the ranking learning model based on the difference between the predicted affinities and the corresponding difference between the true affinities such that the ranking learning model learns the ability to predict the magnitude relationship between the affinities of the two training drugs in each of the training samples for the known training target protein comprises:
constructing a loss function based on the difference between the predicted affinities and the corresponding difference between the true affinities;
detecting whether the loss function converges;
and if the loss function has not converged, adjusting the parameters of the ranking learning model so that the loss function moves toward convergence.
4. The method of any one of claims 1-3, wherein collecting a plurality of training samples comprises:
the plurality of training samples are collected from a plurality of data sets.
5. The method of claim 4, wherein the affinities of the training drugs for the known training targets in different data sets are characterized using different indicators.
6. A drug ranking method, wherein the method comprises:
acquiring target information and information on a plurality of candidate drugs;
ranking the candidate drugs by their affinity for the target using a ranking model, based on the target information and the information of each candidate drug; wherein the ranking model shares the parameters of a pre-trained ranking learning model, and the ranking learning model is used for learning the magnitude relationship between the affinities of any two drugs for the same target protein.
7. A training apparatus for a ranking learning model, wherein the apparatus comprises:
a collection module configured to collect a plurality of training samples, each training sample comprising known training target protein information, information on two corresponding training drugs, and the difference between the true affinities of the two corresponding training drugs for the known training target;
and a training module configured to train a ranking learning model based on the plurality of training samples, so that the ranking learning model learns the ability to predict the magnitude relationship between the affinities of the two training drugs in each training sample for the known training target protein.
8. The apparatus of claim 7, wherein the training module comprises:
an input unit configured to input, for each training sample, the known training target protein information and the corresponding two pieces of training drug information in the training sample into the ranking learning model;
an obtaining unit configured to obtain the difference between the predicted affinities of the two training drugs for the known training target protein, as output by the ranking learning model;
and an adjusting unit configured to adjust the parameters of the ranking learning model based on the difference between the predicted affinities and the corresponding difference between the true affinities, so that the ranking learning model learns the ability to predict the magnitude relationship between the affinities of the two training drugs in each training sample for the known training target protein.
9. The apparatus of claim 8, wherein the adjusting unit is configured to:
constructing a loss function based on the difference between the predicted affinities and the corresponding difference between the true affinities;
detecting whether the loss function converges;
and if the loss function has not converged, adjusting the parameters of the ranking learning model so that the loss function moves toward convergence.
10. The apparatus of any one of claims 7-9, wherein the collection module is configured to collect the plurality of training samples from a plurality of data sets.
11. The apparatus of claim 10, wherein the affinities of the training drugs for the known training targets in different data sets are characterized using different indicators.
12. A drug ranking apparatus, wherein the apparatus comprises:
an acquisition module configured to acquire target information and information on a plurality of candidate drugs;
and a ranking module configured to rank the candidate drugs by their affinity for the target using a ranking model, based on the target information and the information of each candidate drug; wherein the ranking model shares the parameters of a pre-trained ranking learning model, and the ranking learning model is used for learning the magnitude relationship between the affinities of any two drugs for the same target protein.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5 or claim 6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-5 or claim 6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the method of any one of claims 1-5 or claim 6.
Background
Drug-target interaction (DTI), which characterizes the affinity between a target protein and a drug compound, is a very important topic in the field of drug development. DTI prediction can help drug developers understand the mechanisms of disease and accelerate the drug design process.
In traditional biology, detecting DTI in the laboratory through wet experiments is very expensive and time consuming. Nowadays, as Artificial Intelligence (AI)-based deep learning algorithms mature, more and more DTI tasks are implemented using Graph Neural Networks (GNN), Convolutional Neural Networks (CNN), and other network models.
Disclosure of Invention
The present disclosure provides a training method for a ranking learning model, a ranking method, an apparatus, a device, and a medium.
According to an aspect of the present disclosure, there is provided a training method for a ranking learning model, wherein the method includes:
collecting a plurality of training samples, each training sample comprising known training target protein information, information on two corresponding training drugs, and the difference between the true affinities of the two corresponding training drugs for the known training target;
training a ranking learning model based on the plurality of training samples, so that the ranking learning model learns the ability to predict the magnitude relationship between the affinities of the two training drugs in each training sample for the known training target protein.
According to another aspect of the present disclosure, there is provided a drug ranking method, wherein the method comprises:
acquiring target information and information on a plurality of candidate drugs;
ranking the candidate drugs by their affinity for the target using a ranking model, based on the target information and the information of each candidate drug; wherein the ranking model shares the parameters of a pre-trained ranking learning model, and the ranking learning model is used for learning the magnitude relationship between the affinities of any two drugs for the same target protein.
According to still another aspect of the present disclosure, there is provided a training apparatus for a ranking learning model, wherein the apparatus includes:
a collection module configured to collect a plurality of training samples, each training sample comprising known training target protein information, information on two corresponding training drugs, and the difference between the true affinities of the two corresponding training drugs for the known training target;
and a training module configured to train a ranking learning model based on the plurality of training samples, so that the ranking learning model learns the ability to predict the magnitude relationship between the affinities of the two training drugs in each training sample for the known training target protein.
According to yet another aspect of the present disclosure, there is provided a drug ranking apparatus, wherein the apparatus comprises:
an acquisition module configured to acquire target information and information on a plurality of candidate drugs;
and a ranking module configured to rank the candidate drugs by their affinity for the target using a ranking model, based on the target information and the information of each candidate drug; wherein the ranking model shares the parameters of a pre-trained ranking learning model, and the ranking learning model is used for learning the magnitude relationship between the affinities of any two drugs for the same target protein.
According to still another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of the aspects and any possible implementation described above.
According to yet another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of the above aspect and any possible implementation.
According to yet another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of the aspect and any possible implementation as described above.
According to the technology of the present disclosure, a more efficient ranking learning model is provided, and a plurality of drugs corresponding to the same target protein can be ranked more efficiently and more accurately.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 3 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 4 is a schematic diagram according to a fourth embodiment of the present disclosure;
FIG. 5 is a schematic diagram according to a fifth embodiment of the present disclosure;
FIG. 6 is a schematic diagram according to a sixth embodiment of the present disclosure;
FIG. 7 is a schematic diagram according to a seventh embodiment of the present disclosure;
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It is to be understood that the described embodiments are only a few, and not all, of the disclosed embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It should be noted that the terminal device involved in the embodiments of the present disclosure may include, but is not limited to, a mobile phone, a Personal Digital Assistant (PDA), a wireless handheld device, a Tablet Computer (Tablet Computer), and other intelligent devices; the display device may include, but is not limited to, a personal computer, a television, and the like having a display function.
In addition, the term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure. As shown in FIG. 1, this embodiment provides a training method for a ranking learning model, which may specifically include the following steps:
S101, collecting a plurality of training samples; each training sample comprises known training target protein information, information on two corresponding training drugs, and the difference between the true affinities of the two corresponding training drugs for the known training target;
S102, training a ranking learning model based on the plurality of training samples, so that the ranking learning model learns the ability to predict the magnitude relationship between the affinities of the two training drugs in each training sample for the known training target protein.
The training method of the ranking learning model in this embodiment is executed by a training apparatus for the ranking learning model, which may be an electronic entity or an application implemented through software integration. The training apparatus of this embodiment is used to train the ranking learning model.
The ranking learning model of this embodiment learns to predict the magnitude relationship between the affinities of two training drugs for the known training target protein, so that, based on the pairwise relationship between every two training drugs, a plurality of training drugs can further be ranked by their affinity for the known training target protein.
The training samples collected in this embodiment exist as individual records, and each training sample includes information on two training drugs. For example, a training drug may be identified by its Simplified Molecular Input Line Entry Specification (SMILES) sequence, or by other unique identification information of the training drug. The known training target protein may be identified by its FASTA sequence, or by other unique identification information of the known training target protein.
It should be noted that, since each training sample of this embodiment is used to train the ranking learning model to predict the magnitude relationship between the affinities of the two training drugs in the sample for the known training target protein, for supervised training each training sample also includes the difference between the true affinities of the two training drugs for the known training target protein; this difference identifies which of the two drugs binds the target more strongly. Optionally, in practice the true affinity difference need not be a specific numeric value; only the direction of the difference needs to be identified. For example, for two training drugs A and B, if the affinity a of training drug A for known training target protein 1 is greater than the affinity b of training drug B for known training target protein 1, that is, a - b > 0, the corresponding true affinity difference may be labeled 1; if a < b, that is, a - b < 0, it may be labeled 0.
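As a minimal sketch of this labeling rule (the function name and example values are illustrative, not from the source), the direction label for a pair of measured affinities can be computed as follows:

```python
def direction_label(affinity_a: float, affinity_b: float) -> int:
    """Return 1 when drug A binds the target more strongly than drug B, else 0."""
    return 1 if affinity_a - affinity_b > 0 else 0

# Example: drug A (affinity 7.2) vs. drug B (affinity 6.5) for the same target.
assert direction_label(7.2, 6.5) == 1
assert direction_label(5.0, 6.5) == 0
```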
Then, based on the information of the two training drugs in each training sample and the difference between their true affinities for the known training target, the ranking learning model can be trained in a supervised manner, so that the model learns the true affinity difference identified in each training sample. By continuously training the ranking learning model with such training samples, it acquires the ability to predict the magnitude relationship between the affinities of the two training drugs in a training sample for the known training target protein.
In this embodiment, the number of collected training samples may be very large, for example hundreds of thousands or even millions. The greater the number of training samples, the higher the accuracy of the trained ranking learning model.
According to the training method of this embodiment, the ranking learning model is trained with training samples comprising known training target protein information, the corresponding two pieces of training drug information, and the difference between the true affinities of the two corresponding training drugs for the known training target, so that the ranking learning model learns the ability to predict the magnitude relationship between the affinities of the two training drugs in a training sample for the known training target protein.
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure. The training method of the ranking learning model of this embodiment describes the technical solution of the present disclosure in more detail on the basis of the embodiment shown in FIG. 1. As shown in FIG. 2, the training method of this embodiment may specifically include the following steps:
S201, collecting a plurality of training samples from a plurality of data sets, wherein each training sample comprises known training target protein information, information on two corresponding training drugs, and the difference between the true affinities of the two corresponding training drugs for the known training target;
Optionally, in this embodiment, the affinity of a training drug for a known training target may be characterized by different indicators in different data sets. For example, affinity is reported as IC50 in some data sets, as Kd in others, and as Ki in still others. Regardless of which affinity indicator a data set uses, the training sample of this embodiment only needs to identify the direction of the difference between the true affinities of the two training drugs for the known training target.
For example, as shown in the schematic diagram of training sample construction shown in FIG. 3, the training set formed by the collected training samples may include m training target proteins, which may be denoted t(1), ..., t(m). For each training target protein, n training drugs and the affinity between each training drug and that target protein can first be collected; for example, for the training target protein t(1), the collected training drugs may be denoted d1(1), ..., dn(1), and for the training target protein t(m), the collected training drugs may be denoted d1(m), ..., dn(m). For a single target protein, all of its corresponding drugs d can be combined pairwise; for each drug pair (dj(i), dk(i)), the difference between the corresponding affinity scores can be written as sjk(i). As shown in FIG. 3, for the training target protein t(1), any one training sample can be written as (t(1), dj(1), dk(1), sjk(1)); similarly, for the training target protein t(2), any one training sample can be written as (t(2), dj(2), dk(2), sjk(2)), and for the training target protein t(m), any one training sample can be written as (t(m), dj(m), dk(m), sjk(m)).
The training drugs and training target proteins may come from a plurality of different data sets, and the affinities corresponding to different training target proteins may be reported using different affinity indicators. Only the difference between the affinities of the two training drugs for the training target protein in any one training sample needs to be identified. As before, the numeric difference itself need not be recorded; it suffices to identify the direction of the difference, that is, which of the two affinities is larger.
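The pairwise sample construction described above can be sketched as follows. The data shape (a per-target list of (SMILES, affinity score) records taken from a single data set, with scores oriented so that larger means stronger binding) and all names are assumptions made for illustration; because only the ordering within one target and one data set is used, IC50, Kd, and Ki records never need to be converted to a common scale.

```python
from itertools import combinations
from typing import Dict, List, Tuple

# records[target_fasta] -> list of (drug_smiles, affinity_score) from one data set,
# where a larger score is assumed to mean stronger binding.
def build_pairs(records: Dict[str, List[Tuple[str, float]]]):
    samples = []
    for target, drugs in records.items():
        for (d_j, s_j), (d_k, s_k) in combinations(drugs, 2):
            if s_j == s_k:
                continue  # ties carry no ordering signal
            label = 1.0 if s_j > s_k else 0.0  # direction of the affinity difference
            samples.append((target, d_j, d_k, label))
    return samples

# Example with one hypothetical target and three drugs -> 3 pairwise samples.
pairs = build_pairs({"MKTAYIAK...": [("CCO", 6.1), ("c1ccccc1", 7.4), ("CCN", 5.2)]})
```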
The ranking learning model of this embodiment may adopt a Multi-Layer Perceptron (MLP), a Convolutional Neural Network (CNN), a Transformer, or any other neural network structure capable of extracting and learning representations of the target protein or drug molecule. The ranking learning model of this embodiment has a two-tower structure.
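One possible realization of this structure is sketched below in PyTorch, assuming the target protein and the drugs have already been featurized into fixed-length vectors. The two towers are read here as two weight-shared branches that each score one (target, drug) pair, with the model outputting the predicted affinity difference of the pair; this reading, the MLP encoders, the dimensions, and the class names are all illustrative assumptions rather than the patent's prescribed design.

```python
import torch
import torch.nn as nn

class RankingLearningModel(nn.Module):
    """Pairwise (two-tower) model: scores drug A and drug B against the same target
    with shared weights and returns the predicted affinity difference."""
    def __init__(self, protein_dim: int = 1024, drug_dim: int = 512, hidden: int = 256):
        super().__init__()
        self.protein_encoder = nn.Sequential(nn.Linear(protein_dim, hidden), nn.ReLU())
        self.drug_encoder = nn.Sequential(nn.Linear(drug_dim, hidden), nn.ReLU())
        self.score_head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                        nn.Linear(hidden, 1))

    def score(self, protein: torch.Tensor, drug: torch.Tensor) -> torch.Tensor:
        """Affinity score for one (target, drug) pair; reused by both towers."""
        z = torch.cat([self.protein_encoder(protein), self.drug_encoder(drug)], dim=-1)
        return self.score_head(z).squeeze(-1)

    def forward(self, protein, drug_a, drug_b):
        """Predicted affinity difference for the two drugs of one training sample."""
        return self.score(protein, drug_a) - self.score(protein, drug_b)
```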
S202, for each training sample, inputting the known training target protein information and the corresponding two pieces of training drug information in the training sample into the ranking learning model;
S203, obtaining the difference between the predicted affinities of the two training drugs for the known training target protein, as output by the ranking learning model;
S204, adjusting the parameters of the ranking learning model based on the difference between the predicted affinities and the corresponding difference between the true affinities, so that the ranking learning model learns the ability to predict the magnitude relationship between the affinities of the two training drugs in each training sample for the known training target protein.
For example, this step may be implemented through the following sub-steps:
(a) constructing a loss function based on the difference between the predicted affinities and the corresponding difference between the true affinities;
(b) detecting whether the loss function converges; if yes, executing step (d);
(c) if the loss function has not converged, adjusting the parameters of the ranking learning model so that the loss function moves toward convergence; then returning to step S202, selecting the next training sample, and continuing training;
(d) detecting whether a training cut-off condition is met; if so, stopping training, fixing the parameters of the ranking learning model at that point, and ending; if not, returning to step S202 and selecting the next training sample to continue training.
Optionally, the training cut-off condition of this embodiment may be that the loss function remains converged over a preset number of consecutive training iterations; if so, the training cut-off condition is determined to be satisfied. The consecutive-iteration threshold may be set according to the actual scenario, for example 80, 100, 150, or any other number of consecutive iterations, which is not limited here. Alternatively, a maximum training duration may be set, and training ends when that duration is reached. Training in this way can effectively improve the training effect of the ranking learning model.
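A hedged sketch of sub-steps (a)-(d) is given below, assuming the training data yield (protein, drug_a, drug_b, label) tensors as in the earlier illustrative model and using a RankNet-style binary cross-entropy on the predicted difference; the loss choice, optimizer, tolerance, and patience values are assumptions, not the patent's prescribed settings.

```python
import torch
import torch.nn as nn

def train(model, samples, epochs: int = 10, patience: int = 100, tol: float = 1e-4):
    loss_fn = nn.BCEWithLogitsLoss()   # loss built from the predicted difference and the direction label
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    prev_loss, converged_steps = float("inf"), 0
    for _ in range(epochs):                                  # epochs act as a maximum-duration cut-off
        for protein, drug_a, drug_b, label in samples:
            pred_diff = model(protein, drug_a, drug_b)       # steps S202/S203
            target = torch.as_tensor(label, dtype=pred_diff.dtype)
            loss = loss_fn(pred_diff, target)                # sub-step (a)
            if abs(prev_loss - loss.item()) < tol:           # sub-step (b): convergence check
                converged_steps += 1
                if converged_steps >= patience:              # sub-step (d): training cut-off
                    return model
            else:
                converged_steps = 0
                optimizer.zero_grad()
                loss.backward()                              # sub-step (c): push the loss toward convergence
                optimizer.step()
            prev_loss = loss.item()
    return model
```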
The ranking learning model of this embodiment has a two-tower structure and implements learning-to-rank. The parameters of the trained ranking learning model can be shared with a ranking model having a single-tower structure, so that the ranking model can rank a plurality of drugs corresponding to the same target protein by affinity.
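This parameter sharing might look like the following sketch, in which the deployed single-tower ranking model reuses the encoders and scoring head of the trained pairwise model and scores one (target, drug) pair at a time; the class names follow the illustrative model above and are assumptions.

```python
import torch
import torch.nn as nn

class RankingModel(nn.Module):
    """Single-tower scorer that shares parameters with a trained pairwise model."""
    def __init__(self, trained_pairwise_model):
        super().__init__()
        # Reuse (share) the learned modules rather than copying their weights.
        self.protein_encoder = trained_pairwise_model.protein_encoder
        self.drug_encoder = trained_pairwise_model.drug_encoder
        self.score_head = trained_pairwise_model.score_head

    def forward(self, protein: torch.Tensor, drug: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.protein_encoder(protein), self.drug_encoder(drug)], dim=-1)
        return self.score_head(z).squeeze(-1)   # predicted affinity score for ranking
```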
The training method of this embodiment can make full use of DTI data from different data sets with different indicators: by designing a learning-to-rank algorithm, it learns the magnitude relationship between the affinities of different drugs for the same target protein, thereby achieving the goal of ranking a plurality of drugs by their affinity for that target. Because the training of the ranking learning model focuses on the affinity difference between two paired drugs and a target protein rather than on absolute affinity values, data spanning multiple data sets and multiple affinity indicators can be integrated into one training set. This effectively overcomes the limitation that individual DTI data sets are relatively small and effectively improves the training effect of the ranking learning model.
By designing a pairwise learning-to-rank algorithm, the training method of this embodiment obtains the ordering relationship between the affinities of different drugs for the same target protein, and can effectively improve ranking accuracy compared with other existing methods; for example, the weighted concordance index (Weighted CI) and the average concordance index (Average CI) of the drugs corresponding to a given target protein can be improved by about 0.03 and 0.05, respectively.
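For reference, the per-target concordance index underlying the Weighted CI and Average CI figures counts the fraction of drug pairs with different true affinities whose predicted order matches the true order; the weighted and averaged variants aggregate this value across targets. The sketch below is a generic implementation of the standard (unweighted) metric, not code from the source.

```python
from itertools import combinations

def concordance_index(true_affinities, predicted_scores) -> float:
    """Fraction of comparable drug pairs ranked in the same order by truth and prediction."""
    concordant, total = 0.0, 0
    for (t_i, p_i), (t_j, p_j) in combinations(zip(true_affinities, predicted_scores), 2):
        if t_i == t_j:
            continue               # pairs with equal true affinity are not comparable
        total += 1
        if (t_i - t_j) * (p_i - p_j) > 0:
            concordant += 1
        elif p_i == p_j:
            concordant += 0.5      # tied predictions conventionally count as half
    return concordant / total if total else 0.0

print(concordance_index([7.1, 6.2, 5.0], [0.9, 0.4, 0.1]))  # -> 1.0 (perfect ordering)
```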
FIG. 4 is a schematic diagram according to a fourth embodiment of the present disclosure. As shown in FIG. 4, this embodiment provides a drug ranking method, which may specifically include the following steps:
S401, acquiring target information and information on a plurality of candidate drugs;
S402, ranking the candidate drugs by their affinity for the target using a ranking model, based on the target information and the information of each candidate drug; wherein the ranking model shares the parameters of a pre-trained ranking learning model, and the ranking learning model is used for learning the magnitude relationship between the affinities of any two drugs for the same target protein.
The drug ranking method of this embodiment is executed by a drug ranking apparatus, which may be an electronic entity or an application implemented through software integration. The drug ranking apparatus of this embodiment ranks a plurality of candidate drugs by their affinity for the same target protein, so that drug recommendation can be realized.
The ranking model of this embodiment has a single-tower structure, which can be implemented by sharing the parameters of the ranking learning model trained in the embodiments shown in FIG. 1 or FIG. 2. Because the ranking learning model has learned the magnitude relationship between the affinities of different drugs for the same target, a plurality of drugs can be ranked by their affinity for that target. For example, if it is predicted that the affinity of drug A for target 1 is greater than that of drug B for target 1, and that the affinity of drug B for target 1 is greater than that of drug C for target 1, then drugs A, B, and C can be ranked by their affinity for target 1, and drug recommendation can then be realized.
Similarly, the target information of this embodiment can be identified using a FASTA sequence, and the candidate drug information can be identified using a SMILES sequence.
In use, the target information and the candidate drug information are embedded and then input into the ranking model, which, based on the input information, predicts and outputs the ranking of the candidate drugs by their affinity for the target protein. Based on this ranking, the drug with the highest affinity for the target protein can subsequently be obtained, thereby realizing drug recommendation.
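A usage sketch of this inference step is given below, reusing the illustrative single-tower RankingModel above; the feature vectors are assumed to be the embedded FASTA/SMILES inputs mentioned in the text, and all names are hypothetical.

```python
import torch

def rank_candidates(ranking_model, target_vec: torch.Tensor, candidate_vecs: dict):
    """Score every candidate drug against the target and sort by predicted affinity."""
    with torch.no_grad():
        scores = {name: ranking_model(target_vec, vec).item()
                  for name, vec in candidate_vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)   # highest predicted affinity first

# ranked = rank_candidates(ranking_model, target_vec, {"drug_A": vec_a, "drug_B": vec_b})
# ranked[0] is the candidate with the highest predicted affinity for the target.
```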
According to the drug ranking method of this embodiment, the ranking model shares the parameters of a pre-trained ranking learning model that has learned the magnitude relationship between the affinities of any two drugs for the same target protein. Adopting this ranking model can effectively improve the accuracy of drug ranking, so that drug recommendation can be carried out more effectively.
FIG. 5 is a schematic diagram according to a fifth embodiment of the present disclosure. As shown in FIG. 5, this embodiment provides a training apparatus 500 for a ranking learning model, including:
a collection module 501 configured to collect a plurality of training samples, each training sample comprising known training target protein information, information on two corresponding training drugs, and the difference between the true affinities of the two corresponding training drugs for the known training target;
and a training module 502 configured to train the ranking learning model based on the plurality of training samples, so that the ranking learning model learns the ability to predict the magnitude relationship between the affinities of the two training drugs in each training sample for the known training target protein.
The training apparatus 500 of this embodiment uses the above modules to implement training of the ranking learning model. Its implementation principle and technical effect are the same as those of the related method embodiments described above; for details, reference may be made to the description of those embodiments, which is not repeated here.
FIG. 6 is a schematic diagram according to a sixth embodiment of the present disclosure. As shown in FIG. 6, the training apparatus 500 for the ranking learning model provided in this embodiment describes the technical solution of the present disclosure in more detail on the basis of the embodiment shown in FIG. 5.
As shown in FIG. 6, in the training apparatus 500 for the ranking learning model provided in this embodiment, the training module 502 includes:
an input unit 5021 configured to input, for each training sample, the known training target protein information and the corresponding two pieces of training drug information in the training sample into the ranking learning model;
an obtaining unit 5022 configured to obtain the difference between the predicted affinities of the two training drugs for the known training target protein, as output by the ranking learning model;
and an adjusting unit 5023 configured to adjust the parameters of the ranking learning model based on the difference between the predicted affinities and the corresponding difference between the true affinities, so that the ranking learning model learns the ability to predict the magnitude relationship between the affinities of the two training drugs in each training sample for the known training target protein.
Further optionally, the adjusting unit 5023 is configured to:
constructing a loss function based on the difference between the predicted affinities and the corresponding difference between the true affinities;
detecting whether the loss function converges;
and if the loss function has not converged, adjusting the parameters of the ranking learning model so that the loss function moves toward convergence.
Further optionally, in the training apparatus 500 for the ranking learning model provided in this embodiment, the collection module 501 is configured to collect the plurality of training samples from a plurality of data sets.
The affinities of the training drugs for the known training targets in different data sets are characterized using different indicators.
The training apparatus 500 of this embodiment uses the above modules to implement training of the ranking learning model. Its implementation principle and technical effect are the same as those of the related method embodiments described above; for details, reference may be made to the description of those embodiments, which is not repeated here.
FIG. 7 is a schematic diagram according to a seventh embodiment of the present disclosure. As shown in FIG. 7, this embodiment provides a drug ranking apparatus 700, including:
an acquisition module 701 configured to acquire target information and information on a plurality of candidate drugs;
and a ranking module 702 configured to rank, using a ranking model, the plurality of candidate drugs by their affinity for the target based on the target information and the information of each candidate drug; wherein the ranking model shares the parameters of a pre-trained ranking learning model, and the ranking learning model is used for learning the magnitude relationship between the affinities of any two drugs for the same target protein.
The drug ranking apparatus 700 of this embodiment uses the above modules to realize drug ranking. Its implementation principle and technical effect are the same as those of the related method embodiments described above; for details, reference may be made to the description of those embodiments, which is not repeated here.
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in FIG. 8, the electronic device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the electronic device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the electronic device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the electronic device 800 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be any of a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 801 executes the respective methods and processes described above, such as the training method of the ranking learning model or the drug ranking method. For example, in some embodiments, the training method of the ranking learning model or the drug ranking method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the training method of the ranking learning model or the drug ranking method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the training method of the ranking learning model or the drug ranking method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The Server can be a cloud Server, also called a cloud computing Server or a cloud host, and is a host product in a cloud computing service system, so as to solve the defects of high management difficulty and weak service expansibility in the traditional physical host and VPS service ("Virtual Private Server", or simply "VPS"). The server may also be a server of a distributed system, or a server that incorporates a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.