Model training method and device, and image recognition method and device
1. A model training method, comprising:
training a first recognition model based on first class distribution weight data corresponding to each of M images to obtain a second recognition model, wherein the M images belong to an image sequence sample to be recognized, the M images correspond to N classes, and M and N are positive integers greater than 1;
determining second class distribution weight data corresponding to each of the M images based on the first class distribution weight data corresponding to each of the M images and the second recognition model;
and training the second recognition model based on the second class distribution weight data corresponding to each of the M images to obtain an image recognition model, wherein the image recognition model is used for determining a class recognition result corresponding to each image in an image sequence to be recognized.
2. The model training method according to claim 1, wherein the determining second class distribution weight data corresponding to each of the M images based on the first class distribution weight data corresponding to each of the M images and the second recognition model comprises:
determining a first class recognition result corresponding to each of the M images by using the second recognition model;
and updating the first class distribution weight data corresponding to each of the M images based on class label data corresponding to each of the M images and the first class recognition results corresponding to the M images, to obtain the second class distribution weight data corresponding to each of the M images.
3. The model training method according to claim 2, wherein the updating the first class distribution weight data corresponding to each of the M images based on the class label data corresponding to each of the M images and the first class recognition result corresponding to each of the M images to obtain the second class distribution weight data corresponding to each of the M images comprises:
determining P images to be weighted among the M images based on the class label data corresponding to each of the M images and the first class recognition results corresponding to the M images, wherein the images to be weighted comprise images whose classes are recognized incorrectly, and P is a positive integer;
incrementally updating the first class distribution weight data corresponding to each of the P images to be weighted to obtain second class distribution weight data corresponding to each of the P images to be weighted;
and obtaining the second class distribution weight data corresponding to each of the M images based on the second class distribution weight data corresponding to each of the P images to be weighted.
4. The model training method according to claim 3, wherein the incrementally updating the first class distribution weight data corresponding to each of the P images to be weighted to obtain the second class distribution weight data corresponding to each of the P images to be weighted comprises:
determining a membership relationship between the P images to be weighted and the N classes based on the class label data corresponding to each of the P images to be weighted;
dividing the P images to be weighted into Q consecutive image sub-sequences based on the membership relationship, wherein Q is a positive integer;
and for each of the Q consecutive image sub-sequences, incrementally updating the first class distribution weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence to obtain second class distribution weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence.
5. The model training method according to claim 4, wherein the incrementally updating the first class distribution weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence to obtain the second class distribution weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence comprises:
determining incremental weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence based on image quantity information of the consecutive image sub-sequence;
and obtaining the second class distribution weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence based on the incremental weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence and the first class distribution weight data.
6. The model training method according to claim 5, wherein the determining incremental weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence based on the image quantity information of the consecutive image sub-sequence comprises:
if it is determined based on the image quantity information that the number of images in the consecutive image sub-sequence is greater than or equal to a preset number of images,
determining a starting-end image to be weighted of the consecutive image sub-sequence based on sequence position information of the consecutive image sub-sequence, wherein the starting-end image to be weighted is the image to be weighted closest to the sequence edge of the class to which the consecutive image sub-sequence belongs;
and determining, with the starting-end image to be weighted as an increment starting point, incremental weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence in a linearly decreasing manner based on a preset maximum increment value.
7. The model training method according to claim 5, wherein the determining incremental weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence based on the image quantity information of the consecutive image sub-sequence comprises:
if it is determined based on the image quantity information that the number of images in the consecutive image sub-sequence is smaller than a preset number of images,
determining incremental weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence based on a preset minimum increment value.
8. The model training method according to any one of claims 1 to 7, wherein the training the second recognition model based on the second class distribution weight data corresponding to each of the M images to obtain an image recognition model comprises:
training the second recognition model based on the second class distribution weight data corresponding to each of the M images to obtain a third recognition model;
determining a second class recognition result corresponding to each of the M images by using the third recognition model;
obtaining current class distribution weight data corresponding to each of the M images based on the class label data corresponding to each of the M images and the second class recognition results corresponding to the M images;
determining third class distribution weight data corresponding to each of the M images based on the first class distribution weight data, the second class distribution weight data and the current class distribution weight data corresponding to each of the M images;
and training the third recognition model based on the third class distribution weight data corresponding to each of the M images to obtain the image recognition model.
9. The model training method according to any one of claims 1 to 7, wherein the training the second recognition model based on the second class distribution weight data corresponding to each of the M images to obtain an image recognition model comprises:
training the second recognition model based on the second class distribution weight data corresponding to each of the M images to obtain a fourth recognition model;
determining fourth class distribution weight data corresponding to each of the M images based on the second class distribution weight data corresponding to each of the M images and the fourth recognition model;
training the fourth recognition model based on the fourth class distribution weight data corresponding to each of the M images to obtain a fifth recognition model;
obtaining a sixth recognition model based on the fourth class distribution weight data corresponding to each of the M images and the fifth recognition model;
and repeatedly performing the steps of training the model of the current training round based on the class distribution weight data obtained before the current training round starts to obtain a new model, and determining, based on the new model, the class distribution weight data to be used before the next training round starts, until a converged new model is obtained, and determining the converged new model as the image recognition model.
10. An image recognition method, comprising:
determining an image recognition model, wherein the image recognition model is obtained by training based on the model training method according to any one of claims 1 to 9;
and determining a class recognition result corresponding to each image in an image sequence to be recognized by using the image recognition model.
11. A model training apparatus, comprising:
a first training module, configured to train a first recognition model based on first class distribution weight data corresponding to each of M images to obtain a second recognition model, wherein the M images belong to an image sequence sample to be recognized, the M images correspond to N classes, and M and N are positive integers greater than 1;
a second determining module, configured to determine second class distribution weight data corresponding to each of the M images based on the first class distribution weight data corresponding to each of the M images and the second recognition model;
and a second training module, configured to train the second recognition model based on the second class distribution weight data corresponding to each of the M images to obtain an image recognition model, wherein the image recognition model is used for determining a class recognition result corresponding to each image in an image sequence to be recognized.
12. An image recognition apparatus, comprising:
a model determination module, configured to determine an image recognition model, where the image recognition model is obtained by training based on the model training method according to any one of claims 1 to 9;
and a recognition result determining module, configured to determine a class recognition result corresponding to each image in an image sequence to be recognized by using the image recognition model.
13. A computer-readable storage medium, characterized in that the storage medium stores a computer program for performing the method of any of the preceding claims 1 to 10.
14. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method according to any one of the preceding claims 1 to 10.
Background
In recent years, with the rapid development of image acquisition technology, three-dimensional image sequences have attracted much attention, especially three-dimensional medical image sequences capable of assisting a doctor in diagnosis, such as Computed Tomography (CT) image sequences.
Generally, a three-dimensional medical image sequence covers a plurality of body parts, and for convenience of subsequent processing, it is necessary to determine the part to which each image in the three-dimensional medical image sequence belongs (i.e., to determine the class corresponding to each image in the sequence), so that the images corresponding to each part can be extracted on a per-part basis. However, existing methods for determining the part to which each image in a three-dimensional medical image sequence belongs recognize the parts with low accuracy.
Disclosure of Invention
The present application is proposed to solve the above-mentioned technical problems. The embodiment of the application provides a model training method and device and an image recognition method and device.
In a first aspect, an embodiment of the present application provides a model training method, including: training a first recognition model based on first class distribution weight data corresponding to each of M images to obtain a second recognition model, wherein the M images belong to an image sequence sample to be recognized, the M images correspond to N classes, and M and N are positive integers greater than 1; determining second class distribution weight data corresponding to each of the M images based on the first class distribution weight data corresponding to each of the M images and the second recognition model; and training the second recognition model based on the second class distribution weight data corresponding to each of the M images to obtain an image recognition model, wherein the image recognition model is used for determining a class recognition result corresponding to each image in an image sequence to be recognized.
With reference to the first aspect, in certain implementations of the first aspect, the determining second class distribution weight data corresponding to each of the M images based on the first class distribution weight data corresponding to each of the M images and the second recognition model includes: determining a first class recognition result corresponding to each of the M images by using the second recognition model; and updating the first class distribution weight data corresponding to each of the M images based on class label data corresponding to each of the M images and the first class recognition results corresponding to the M images, to obtain the second class distribution weight data corresponding to each of the M images.
With reference to the first aspect, in some implementations of the first aspect, the updating the first class distribution weight data corresponding to each of the M images based on the class label data corresponding to each of the M images and the first class recognition result corresponding to each of the M images to obtain the second class distribution weight data corresponding to each of the M images includes: determining P images to be weighted among the M images based on the class label data corresponding to each of the M images and the first class recognition results corresponding to the M images, wherein the images to be weighted comprise images whose classes are recognized incorrectly, and P is a positive integer; incrementally updating the first class distribution weight data corresponding to each of the P images to be weighted to obtain second class distribution weight data corresponding to each of the P images to be weighted; and obtaining the second class distribution weight data corresponding to each of the M images based on the second class distribution weight data corresponding to each of the P images to be weighted.
With reference to the first aspect, in some implementations of the first aspect, the incrementally updating the first class distribution weight data corresponding to each of the P images to be weighted to obtain the second class distribution weight data corresponding to each of the P images to be weighted includes: determining a membership relationship between the P images to be weighted and the N classes based on the class label data corresponding to each of the P images to be weighted; dividing the P images to be weighted into Q consecutive image sub-sequences based on the membership relationship, wherein Q is a positive integer; and for each of the Q consecutive image sub-sequences, incrementally updating the first class distribution weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence to obtain second class distribution weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence.
With reference to the first aspect, in some implementations of the first aspect, the incrementally updating the first class distribution weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence to obtain the second class distribution weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence includes: determining incremental weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence based on image quantity information of the consecutive image sub-sequence; and obtaining the second class distribution weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence based on the incremental weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence and the first class distribution weight data.
With reference to the first aspect, in certain implementations of the first aspect, the determining incremental weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence based on the image quantity information of the consecutive image sub-sequence includes: if it is determined based on the image quantity information that the number of images in the consecutive image sub-sequence is greater than or equal to a preset number of images, determining a starting-end image to be weighted of the consecutive image sub-sequence based on sequence position information of the consecutive image sub-sequence, wherein the starting-end image to be weighted is the image to be weighted closest to the sequence edge of the class to which the consecutive image sub-sequence belongs; and determining, with the starting-end image to be weighted as the increment starting point, the incremental weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence in a linearly decreasing manner based on a preset maximum increment value.
With reference to the first aspect, in certain implementations of the first aspect, the determining incremental weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence based on the image quantity information of the consecutive image sub-sequence includes: if it is determined based on the image quantity information that the number of images in the consecutive image sub-sequence is smaller than the preset number of images, determining the incremental weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence based on a preset minimum increment value.
With reference to the first aspect, in some implementations of the first aspect, the training the second recognition model based on the second class distribution weight data corresponding to each of the M images to obtain an image recognition model includes: training the second recognition model based on the second class distribution weight data corresponding to each of the M images to obtain a third recognition model; determining a second class recognition result corresponding to each of the M images by using the third recognition model; obtaining current class distribution weight data corresponding to each of the M images based on the class label data corresponding to each of the M images and the second class recognition results corresponding to the M images; determining third class distribution weight data corresponding to each of the M images based on the first class distribution weight data, the second class distribution weight data and the current class distribution weight data corresponding to each of the M images; and training the third recognition model based on the third class distribution weight data corresponding to each of the M images to obtain the image recognition model.
With reference to the first aspect, in some implementations of the first aspect, the training the second recognition model based on the second class distribution weight data corresponding to each of the M images to obtain an image recognition model includes: training the second recognition model based on the second class distribution weight data corresponding to each of the M images to obtain a fourth recognition model; determining fourth class distribution weight data corresponding to each of the M images based on the second class distribution weight data corresponding to each of the M images and the fourth recognition model; training the fourth recognition model based on the fourth class distribution weight data corresponding to each of the M images to obtain a fifth recognition model; obtaining a sixth recognition model based on the fourth class distribution weight data corresponding to each of the M images and the fifth recognition model; and repeatedly performing the steps of training the model of the current training round based on the class distribution weight data obtained before the current training round starts to obtain a new model, and determining, based on the new model, the class distribution weight data to be used before the next training round starts, until a converged new model is obtained, and determining the converged new model as the image recognition model.
In a second aspect, an embodiment of the present application provides an image recognition method, including: determining an image recognition model, wherein the image recognition model is obtained by training based on the model training method mentioned in the first aspect; and determining a class recognition result corresponding to each image in an image sequence to be recognized by using the image recognition model.
In a third aspect, an embodiment of the present application provides a model training apparatus, including: a first training module, configured to train a first recognition model based on first class distribution weight data corresponding to each of M images to obtain a second recognition model, wherein the M images belong to an image sequence sample to be recognized, the M images correspond to N classes, and M and N are positive integers greater than 1; a second determining module, configured to determine second class distribution weight data corresponding to each of the M images based on the first class distribution weight data corresponding to each of the M images and the second recognition model; and a second training module, configured to train the second recognition model based on the second class distribution weight data corresponding to each of the M images to obtain an image recognition model, wherein the image recognition model is used for determining a class recognition result corresponding to each image in an image sequence to be recognized.
In a fourth aspect, an embodiment of the present application provides an image recognition apparatus, including: a model determining module, configured to determine an image recognition model, wherein the image recognition model is obtained by training based on the model training method mentioned in the first aspect; and a recognition result determining module, configured to determine a class recognition result corresponding to each image in an image sequence to be recognized by using the image recognition model.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium, where the storage medium stores a computer program for executing the model training method mentioned in the first aspect and/or the image recognition method mentioned in the second aspect.
In a sixth aspect, an embodiment of the present application provides an electronic device, including: a processor; a memory for storing processor-executable instructions; the processor is configured to perform the model training method of the first aspect and/or the image recognition method of the second aspect.
In the model training method provided by the embodiments of the present application, the learning difficulty of the image samples relative to the model is taken into account. First class distribution weight data (which may also be referred to as initial class distribution weight data) is assigned to each image in the image sequence sample to be recognized. The first class distribution weight data corresponding to each image is then corrected by using the intermediate model obtained partway through training, yielding second class distribution weight data corresponding to each image, and the intermediate model is further trained using the second class distribution weight data so as to obtain the converged image recognition model. With this arrangement, the learning difficulty weights of incorrectly predicted image samples are redefined, so that the model pays more attention to those samples, which greatly improves the recognition accuracy of the trained image recognition model.
Drawings
Fig. 1 is a schematic flow chart of a model training method according to an embodiment of the present application.
Fig. 2 is a schematic flow chart illustrating a process of determining second class distribution weight data corresponding to each of M images according to an embodiment of the present application.
Fig. 3 is a schematic flow chart illustrating a process of updating first class distribution weight data corresponding to M images to obtain second class distribution weight data corresponding to the M images according to an embodiment of the present application.
Fig. 4 is a schematic flow chart illustrating a process of incrementally updating first class distribution weight data corresponding to P to-be-weighted images to obtain second class distribution weight data corresponding to the P to-be-weighted images according to an embodiment of the present application.
Fig. 5 is a schematic flow chart illustrating a process of incrementally updating first class distribution weight data corresponding to each of the images to be weighted in a consecutive image sub-sequence to obtain second class distribution weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence according to an embodiment of the present application.
Fig. 6 is a schematic flowchart illustrating a process of determining incremental weight data corresponding to images to be weighted in a consecutive image sub-sequence according to an embodiment of the present application.
Fig. 7 is a schematic flow chart illustrating that a second recognition model is trained based on second class distribution weight data corresponding to each of M images to obtain an image recognition model according to an embodiment of the present application.
Fig. 8 is a schematic flowchart illustrating an image recognition method according to an embodiment of the present application.
Fig. 9 is a schematic structural diagram of a model training apparatus according to an embodiment of the present application.
Fig. 10 is a schematic structural diagram of an image recognition apparatus according to an embodiment of the present application.
Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
As is well known, a three-dimensional medical image sequence typically covers a plurality of body parts. For example, a three-dimensional medical image sequence of the whole human body includes 13 parts in total: the brain, brain-nasopharynx, nasopharynx, nasopharynx-neck, neck, cervicothoracic region, chest, thoracoabdominal region, abdomen, abdominopelvic region, pelvic cavity, pelvis-lower-limb region and lower limbs. For convenience of subsequent processing, it is necessary to determine the part to which each image in the three-dimensional medical image sequence belongs (i.e., to determine the class corresponding to each image in the sequence), so that the images corresponding to each part can be extracted on a per-part basis. For example, the divided part images facilitate subsequent operations by the reading physician, such as quality control (QC).
Existing part recognition models can extract the images corresponding to a specific part from a three-dimensional medical image sequence, but their accuracy is low. One reason is the class imbalance problem; another is the hard and easy sample problem.
Regarding the above-mentioned class imbalance problem: in general, the body parts where lesions are likely to appear are relatively concentrated, so images of those parts, such as the head, chest and abdomen, are collected far more frequently, while far fewer images are collected of parts where lesions rarely appear (such as the feet and lower legs). Because the amounts of image data for different parts differ so greatly, the part recognition model is biased toward learning the features of the parts with more data, while the parts with less data contribute too little to model updates for the model to learn their features, which greatly degrades the final performance of the model.
The above-mentioned hard and easy sample problem is exemplified by the above medical image sequence of the whole human body including 13 parts in total. Specifically, the 13 enumerated parts include 7 specific parts and 6 transition parts. The 7 specific parts are the brain, nasopharynx, neck, chest, abdomen, pelvic cavity and lower limbs, and the 6 transition parts are the brain-nasopharynx, nasopharynx-neck, cervicothoracic, thoracoabdominal, abdominopelvic and pelvis-lower-limb regions. That is, there is a transition part between every two adjacent specific parts. A transition part, as the middle area between two adjacent specific parts, is characterized in that its image structure exhibits features of both the preceding and the following specific part, which makes it more difficult to learn. In addition, a transition part is only a short transition between two specific parts, so its data volume is relatively small, and it therefore also exhibits class imbalance. Thus, in general, a specific part may be referred to as an easy sample, and a transition part may be referred to as a hard sample.
In order to solve the technical problem of poor recognition accuracy of part recognition models, embodiments of the present application provide a model training method and apparatus and an image recognition method and apparatus, so as to improve the accuracy of image recognition. It should be noted that the methods provided by the embodiments of the present application are not limited to three-dimensional medical image sequences in medical scenarios and can also be applied to video data in natural scenes. The model training method and the image recognition method according to the embodiments of the present application are described in detail below with reference to figs. 1 to 8.
Fig. 1 is a schematic flow chart of a model training method according to an embodiment of the present application. As shown in fig. 1, the model training method provided in the embodiment of the present application includes the following steps.
Step S200: training a first recognition model based on first class distribution weight data corresponding to each of M images to obtain a second recognition model. The M images belong to an image sequence sample to be recognized, the M images correspond to N classes, and M and N are positive integers greater than 1.
For example, that the M images correspond to N classes means that the image sequence sample to be recognized involves N classes (for example, N part classes), and each of the M images belongs to one of the N classes.
Illustratively, the class distribution weight data corresponding to an image is weight data assigned to the image according to the actual learning difficulty of the image for the model, and it represents how difficult the image is for the model to learn. Correspondingly, the first class distribution weight data mentioned in step S200 is the initial class distribution weight data corresponding to each image. In some embodiments, before the model begins training, the same initial class distribution weight data is assigned to each of the M images, because the actual learning difficulty of each of the M images for the model is not yet known.
Although step S200 refers to training the first recognition model based on the first class distribution weight data corresponding to each of the M images, it is understood that some additional data of the image sequence sample to be recognized is also required for training the first recognition model. For example, in some embodiments, the image sequence sample to be recognized includes at least one CT image sequence, and each image in each CT image sequence is represented by x(series, slice, data, gt, a, w), where series denotes the sequence id of the CT image sequence, slice denotes the id of the image within the CT image sequence, data denotes the pixel data of the image, gt denotes the ground-truth label of the image (i.e., the class label data mentioned below), a denotes the class weight of the class corresponding to the image, and w denotes the difficulty weight reassigned to the image during training (i.e., the class distribution weight data), with an initial value of 1 (i.e., the first class distribution weight data is 1).
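A minimal sketch of this per-image record, following the x(series, slice, data, gt, a, w) representation described above (the container type is an assumption, and slice is renamed slice_id to avoid shadowing the Python builtin):

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class ImageSample:
    """One image x(series, slice, data, gt, a, w) of a CT image sequence."""
    series: str        # sequence id of the CT image sequence
    slice_id: int      # id of the image within the CT image sequence
    data: np.ndarray   # image pixel data
    gt: int            # ground-truth class label (class label data)
    a: float           # class weight of the class the image belongs to
    w: float = 1.0     # class distribution weight data, initialized to 1


# Example: one image of a sequence, labeled as class 3 (illustrative values).
sample = ImageSample(series="series_001", slice_id=12,
                     data=np.zeros((512, 512), dtype=np.float32), gt=3, a=1.0)
```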
Illustratively, the first recognition model is the initial recognition model as constructed. In the actual training process, the order of the M images in the image sequence sample to be recognized is shuffled, and the first recognition model is then trained for n iterations based on the first class distribution weight data corresponding to each of the M images; the second recognition model is obtained after the n iterations. That is, the second recognition model differs from the first recognition model only in certain model parameters. Note that the second recognition model is not the final converged image recognition model but merely an intermediate model obtained during training. The purpose of shuffling the order of the M images is to prevent model jitter and to speed up model convergence.
In some embodiments, the model is judged to have converged when its recognition accuracy reaches a preset accuracy and does not fluctuate beyond a fluctuation threshold over a plurality of iteration cycles.
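A minimal sketch of such a convergence check (the preset accuracy, fluctuation threshold and window size are illustrative assumptions, not values given in the source):

```python
def has_converged(accuracy_history, preset_accuracy=0.95,
                  fluctuation_threshold=0.005, window=5):
    """Return True if recognition accuracy reached the preset accuracy and
    did not fluctuate beyond the threshold over the last `window` cycles."""
    if len(accuracy_history) < window:
        return False
    recent = accuracy_history[-window:]
    reached = all(acc >= preset_accuracy for acc in recent)
    stable = max(recent) - min(recent) <= fluctuation_threshold
    return reached and stable
```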
In some embodiments, the first recognition model is a multi-class convolutional neural network model. A multi-class convolutional neural network learns through layer-by-layer transfer, retaining intermediate representations and continuing to learn in subsequent layers, and requires no separate feature extraction before use, which makes it well suited to this image processing scenario.
Step S300: determining second class distribution weight data corresponding to each of the M images based on the first class distribution weight data corresponding to each of the M images and the second recognition model.
It can be understood that the first class distribution weight data corresponding to each of the M images is corrected based on the trained second recognition model, thereby obtaining the second class distribution weight data corresponding to each of the M images.
Step S400: training the second recognition model based on the second class distribution weight data corresponding to each of the M images to obtain the image recognition model. The image recognition model is used for determining the class recognition result corresponding to each image in an image sequence to be recognized.
Illustratively, the image recognition model mentioned in step S400 is a trained, converged image recognition model.
In practical application, first class distribution weight data corresponding to each of the M images is determined based on an image sequence sample to be recognized that includes the M images. The first recognition model is then trained based on the first class distribution weight data to obtain the second recognition model. Next, second class distribution weight data corresponding to each of the M images is determined based on the first class distribution weight data and the second recognition model, and the second recognition model is trained based on the second class distribution weight data to obtain the image recognition model.
In the model training method provided by the embodiments of the present application, the learning difficulty of the image samples relative to the model is taken into account. An initial recognition model (the first recognition model) is trained based on the first class distribution weight data (which may be referred to as the initial class distribution weight data) corresponding to each of the M images. The first class distribution weight data corresponding to each image in the image sequence sample to be recognized is then corrected by using the intermediate model obtained partway through training (the second recognition model), yielding the second class distribution weight data corresponding to each image, and the intermediate model is further trained using the second class distribution weight data so as to obtain the converged image recognition model. With this arrangement, the learning difficulty weights of incorrectly predicted image samples are redefined, so that the model pays more attention to those samples, which greatly improves the recognition accuracy of the trained image recognition model.
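The overall procedure can be summarized by the following sketch, where train_n_iterations, update_weights and has_converged stand in for the per-round training, the weight correction of step S300 and the convergence check described above (their signatures are assumptions for illustration, not an interface defined by the source):

```python
def train_image_recognition_model(model, samples, train_n_iterations,
                                  update_weights, has_converged):
    """Alternate between training on the current class distribution weight
    data and correcting that weight data with the intermediate model."""
    accuracy_history = []
    while True:
        # Train the model of the current round on the current weights
        # (first round: the first class distribution weight data).
        model, accuracy = train_n_iterations(model, samples)
        accuracy_history.append(accuracy)
        if has_converged(accuracy_history):
            return model  # the converged image recognition model
        # Correct the class distribution weight data on the samples using
        # the intermediate model, ready for the next training round.
        update_weights(model, samples)
```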
Further, another embodiment of the present application extends the embodiment shown in fig. 1. Specifically, in this embodiment, the following step is further performed before step S200: determining the first class distribution weight data corresponding to each of the M images based on the image sequence sample to be recognized that includes the M images. That is, before step S200, a step of assigning first class distribution weight data to the M images, i.e., determining the first class distribution weight data corresponding to each of the M images, is further performed.
Illustratively, in some embodiments, the loss function used for the first recognition model takes the form of a weighted focal loss, as shown in the following equation (1):

L = -a · w · (1 - p)^γ · log(p)    (1)

In equation (1), a denotes the class weight corresponding to the current image, p denotes the predicted probability of the image's true class, (1 - p)^γ denotes the conventional difficulty weight (i.e., the difficulty weight determined from the prediction against the image's ground truth), and w denotes the assigned hard/easy weight (i.e., the class distribution weight data mentioned in the above embodiments).
Fig. 2 is a schematic flow chart illustrating a process of determining second class assignment weight data corresponding to M images according to an embodiment of the present application. The embodiment shown in fig. 2 is extended based on the embodiment shown in fig. 1, and the differences between the embodiment shown in fig. 2 and the embodiment shown in fig. 1 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 2, in the embodiment of the present application, the step of determining the second class distribution weight data corresponding to each of the M images based on the first class distribution weight data corresponding to each of the M images and the second recognition model includes the following steps.
Step S310: determining a first class recognition result corresponding to each of the M images by using the second recognition model.
Step S320: updating the first class distribution weight data corresponding to each of the M images based on the class label data corresponding to each of the M images and the first class recognition result corresponding to each of the M images, to obtain the second class distribution weight data corresponding to each of the M images.
It is understood that the second recognition model, as an intermediate model, can output class recognition results (i.e., the first class recognition results) even though it has not fully converged. Then, for each of the M images, the class label data corresponding to the image is compared with its first class recognition result, so that whether the first class recognition result is correct can be verified.
On this basis, the first class distribution weight data corresponding to each of the M images can be updated according to whether the first class recognition result corresponding to each of the M images is correct, so as to obtain the second class distribution weight data corresponding to each of the M images. Illustratively, the first class distribution weight data corresponding to images whose first class recognition results are wrong is increased, while the first class distribution weight data corresponding to images whose first class recognition results are correct is kept unchanged, thereby obtaining the second class distribution weight data corresponding to each of the M images.
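A minimal sketch of this correction rule, reusing the ImageSample record sketched above (the uniform increment value is an illustrative assumption; the finer-grained, sub-sequence-based increments are described further below):

```python
def update_class_distribution_weights(samples, predicted_classes,
                                      increment=0.1):
    """Increase the class distribution weight data of images whose first
    class recognition result is wrong; keep correct ones unchanged."""
    for sample, predicted in zip(samples, predicted_classes):
        if predicted != sample.gt:   # class recognized incorrectly
            sample.w += increment    # illustrative uniform increment
        # correctly recognized images: sample.w stays unchanged
    return samples
```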
According to the model training method provided by the embodiments of the present application, the class distribution weight data corresponding to each image in the image sequence sample to be recognized can be corrected and updated by using the recognition results of the intermediate model obtained during training, and the intermediate model is then further trained based on the corrected and updated class distribution weight data, so as to obtain an image recognition model with higher recognition accuracy.
Fig. 3 is a schematic flow chart illustrating a process of updating first class distribution weight data corresponding to M images to obtain second class distribution weight data corresponding to the M images according to an embodiment of the present application. The embodiment shown in fig. 3 is extended based on the embodiment shown in fig. 2, and the differences between the embodiment shown in fig. 3 and the embodiment shown in fig. 2 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 3, in the embodiment of the present application, the step of updating the first class distribution weight data corresponding to each of the M images based on the class label data corresponding to each of the M images and the first class recognition result corresponding to each of the M images to obtain the second class distribution weight data corresponding to each of the M images includes the following steps.
Step S321: determining P images to be weighted among the M images based on the class label data corresponding to each of the M images and the first class recognition result corresponding to each of the M images. The images to be weighted comprise images whose classes are recognized incorrectly, and P is a positive integer.
In some embodiments, the images to be weighted may be regarded as hard samples whose weights need to be increased so that the model learns them with emphasis. That is, an image whose class is recognized incorrectly is regarded as a hard sample.
Step S322: incrementally updating the first class distribution weight data corresponding to each of the P images to be weighted to obtain second class distribution weight data corresponding to each of the P images to be weighted.
For example, the incremental updating mentioned in step S322 means increasing the first class distribution weight data corresponding to each of the P images to be weighted, so as to obtain, after the update, the second class distribution weight data corresponding to each of the P images to be weighted.
Step S323: obtaining the second class distribution weight data corresponding to each of the M images based on the second class distribution weight data corresponding to each of the P images to be weighted.
According to the model training method provided by the embodiments of the present application, by incrementally updating the first class distribution weight data corresponding to each of the P images to be weighted, the contribution of hard samples in the model training process is increased and the model's attention to hard samples is improved, so that the problem that hard samples are difficult to learn is well solved. In addition, because classes with fewer samples tend to behave as hard samples during learning, the embodiments of the present application also, in effect, alleviate the class imbalance problem.
Fig. 4 is a schematic flow chart illustrating a process of incrementally updating first class distribution weight data corresponding to P to-be-weighted images to obtain second class distribution weight data corresponding to the P to-be-weighted images according to an embodiment of the present application. The embodiment shown in fig. 4 is extended based on the embodiment shown in fig. 3, and the differences between the embodiment shown in fig. 4 and the embodiment shown in fig. 3 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 4, in the embodiment of the present application, the step of incrementally updating the first class distribution weight data corresponding to each of the P images to be weighted to obtain the second class distribution weight data corresponding to each of the P images to be weighted includes the following steps.
Step S3221: determining the membership relationship between the P images to be weighted and the N classes based on the class label data corresponding to each of the P images to be weighted.
Step S3222: dividing the P images to be weighted into Q consecutive image sub-sequences based on the membership relationship.
Illustratively, the specific way of dividing the P images to be weighted into Q consecutive image sub-sequences based on the membership relationship is to group consecutive images to be weighted that belong to the same class into one consecutive image sub-sequence.
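A sketch of this division using standard-library grouping, assuming each image to be weighted is given as a (sequence_position, class_label) pair sorted by position (this input form, like the class labels in the demo, is an illustrative assumption):

```python
from itertools import groupby


def split_into_consecutive_subsequences(images_to_weight):
    """Group images to be weighted into consecutive image sub-sequences:
    images fall into the same sub-sequence if they belong to the same
    class and their sequence positions are consecutive."""
    # Subtracting the index from the position yields a key that is constant
    # within one run of consecutive positions; pairing it with the class
    # label also starts a new group whenever the class changes.
    keyed = [(pos - i, label) for i, (pos, label) in enumerate(images_to_weight)]
    groups, start = [], 0
    for _, run in groupby(keyed):
        size = len(list(run))
        groups.append(images_to_weight[start:start + size])
        start += size
    return groups


# Images to be weighted from Table 1 (class labels here are illustrative).
imgs = ([(3, 1), (4, 1)] + [(p, 2) for p in range(8, 13)]
        + [(p, 3) for p in range(21, 27)])
for group in split_into_consecutive_subsequences(imgs):
    print(group)   # three runs: positions 3-4, 8-12 and 21-26
```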
Step S3223: for each of the Q consecutive image sub-sequences, incrementally updating the first class distribution weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence to obtain second class distribution weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence.
That is to say, the embodiments of the present application determine the second class distribution weight data corresponding to each image to be weighted while fully considering the class to which the image belongs. Specifically, for each consecutive image sub-sequence obtained by division based on the membership relationship, the first class distribution weight data corresponding to each image in that sub-sequence is incrementally updated independently, so that second class distribution weight data with higher accuracy and finer granularity is obtained, providing a precondition for training an image recognition model with higher recognition accuracy.
A specific implementation manner of step S3223 is further illustrated in conjunction with fig. 5.
Fig. 5 is a schematic flow chart illustrating a process of incrementally updating first class distribution weight data corresponding to each of the images to be weighted in a consecutive image sub-sequence to obtain second class distribution weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence according to an embodiment of the present application. The embodiment shown in fig. 5 is extended based on the embodiment shown in fig. 4; the differences between the two embodiments are emphasized below, and descriptions of the same parts are not repeated.
As shown in fig. 5, in the embodiment of the present application, the step of incrementally updating the first class distribution weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence to obtain the second class distribution weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence includes the following steps.
Step S32231: determining incremental weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence based on the image quantity information of the consecutive image sub-sequence.
Illustratively, if the number of images in the consecutive image sub-sequence is determined, based on the image quantity information, to be smaller than a preset number of images, the number of images in the sub-sequence may be considered too small. In this case, the time-series characteristics between the images need not be considered; for example, the incremental weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence may be set to the same value.
Correspondingly, if the number of images in the consecutive image sub-sequence is determined, based on the image quantity information, to be greater than or equal to the preset number of images, the number of images in the sub-sequence may be considered relatively large. In this case, the incremental weight data corresponding to each of the images to be weighted may be set in consideration of the time-series characteristics between the images; for example, the incremental weight data may be set to increase or decrease linearly along the sequence order.
Step S32232: obtaining second class distribution weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence based on the incremental weight data and the first class distribution weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence.
Illustratively, for each image to be weighted in the consecutive image sub-sequence, the first class distribution weight data and the incremental weight data corresponding to the image to be weighted are summed to obtain the second class distribution weight data corresponding to the image to be weighted.
In this way, the embodiments of the present application fully consider the actual situation of each consecutive image sub-sequence and thereby obtain second class distribution weight data that better matches each image to be weighted, providing a data basis for subsequently training a high-precision image recognition model.
Fig. 6 is a schematic flowchart illustrating a process of determining incremental weight data corresponding to images to be weighted in a consecutive image sub-sequence according to an embodiment of the present application. The embodiment shown in fig. 6 is extended based on the embodiment shown in fig. 5, and the differences between the embodiment shown in fig. 6 and the embodiment shown in fig. 5 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 6, in the embodiment of the present application, the step of determining, based on the image quantity information of the consecutive image sub-sequence, the incremental weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence includes the following steps.
Step S331: determining, based on the image quantity information, whether the number of images in the consecutive image sub-sequence is smaller than the preset number of images.
Illustratively, in practical application, if the judgment result of step S331 is no, that is, the number of images in the consecutive image sub-sequence is greater than or equal to the preset number of images, step S332 and step S333 are executed. If the judgment result of step S331 is yes, that is, the number of images in the consecutive image sub-sequence is smaller than the preset number of images, step S334 is executed.
Step S332: determining the starting-end image to be weighted of the consecutive image sub-sequence based on the sequence position information of the consecutive image sub-sequence. The starting-end image to be weighted is the image to be weighted closest to the sequence edge of the class to which the consecutive image sub-sequence belongs.
Step S333: determining, with the starting-end image to be weighted as the increment starting point, the incremental weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence in a linearly decreasing manner based on the preset maximum increment value.
Step S334: determining the incremental weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence based on the preset minimum increment value.
The embodiment shown in fig. 6 is further explained below in conjunction with table 1. As shown in table 1, the image sequence sample to be recognized includes 29 images in total, and the sequence information (also referred to as sequence position information) corresponding to the 29 images is 1 to 29. For convenience of description, each image is referred to below in the form of image + sequence information. The class label data refers to the real class information of the images and includes four classes, classes 1 to 4. The first class recognition result refers to the class recognition result of an image obtained using an intermediate model (such as the second recognition model). The first class distribution weight data refers to the class distribution weight data before the update, and the second class distribution weight data refers to the class distribution weight data after the update.
Table 1: Class distribution weight data comparison table
By comparing the class label data with the first class recognition results, it can be seen that the classes of images 3, 4, 8 to 12, and 21 to 26 are recognized incorrectly (i.e., these are the images to be weighted). Before the update, the class distribution weight data corresponding to each of images 1 to 29 is 1. From the above description of consecutive image sub-sequences, images 3 and 4 belong to the same consecutive image sub-sequence (which may be referred to as the first consecutive image sub-sequence), images 8 to 12 belong to the same consecutive image sub-sequence (the second consecutive image sub-sequence), and images 21 to 26 belong to the same consecutive image sub-sequence (the third consecutive image sub-sequence).
In the embodiment of the present application, the preset number of images is 3, the preset minimum increment value is 0.1, and the preset maximum increment value is 1. Since the number of images in the first consecutive image sub-sequence is 2 (less than 3), the class distribution weights of image 3 and image 4 are each increased by 0.1; that is, after the update, the second class distribution weight data corresponding to each of image 3 and image 4 is 1.1. Since the number of images in the second consecutive image sub-sequence is 5 (greater than 3) and image 12 is the starting-end image to be weighted, the incremental weight data corresponding to images 12 down to 8 is determined in a linearly decreasing manner with image 12 as the increment starting point (i.e., incremented by 1) and image 8 as the increment end point (i.e., incremented by 0.1); that is, after the update, the second class distribution weight data corresponding to images 8 to 12 is 1.1, 1.325, 1.55, 1.775 and 2, respectively. The calculation for the third consecutive image sub-sequence is similar to that for the second and is not repeated.
Relatively speaking, an image to be weighted that is closer to the edge of its category is more difficult for the model to learn well. Therefore, the embodiment of the present application assigns the preset maximum increment value to the image to be weighted that is closest to the sequence edge of the category to which the consecutive image sub-sequence belongs, so that the model can better learn the features of that image. In addition, the embodiment of the present application assigns increment values to the other images to be weighted in the consecutive image sub-sequence in a linearly decreasing manner, so that the incremental weight data corresponding to the images to be weighted fully reflect the temporal characteristics of the consecutive image sub-sequence, which improves the reasonableness of the incremental weight data.
Fig. 7 is a schematic flow chart illustrating that a second recognition model is trained based on second class distribution weight data corresponding to each of M images to obtain an image recognition model according to an embodiment of the present application. The embodiment shown in fig. 7 is extended based on the embodiment shown in fig. 1, and the differences between the embodiment shown in fig. 7 and the embodiment shown in fig. 1 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 7, in the embodiment of the present application, the step of training the second recognition model based on the second class assignment weight data corresponding to each of the M images to obtain the image recognition model includes the following steps.
Step S410, training a second recognition model based on second class distribution weight data corresponding to the M images respectively to obtain a third recognition model.
Step S420, determining a second category identification result corresponding to each of the M images by using the third identification model.
Step S430, obtaining current category distribution weight data corresponding to each of the M images based on the category label data corresponding to each of the M images and the second category identification result corresponding to each of the M images.
Step S440, determining third category distribution weight data corresponding to the M images based on the first category distribution weight data, the second category distribution weight data and the current category distribution weight data corresponding to the M images.
For each of the M images, the third class assignment weight data corresponding to the image is obtained by performing an exponential moving average calculation on the first class assignment weight data, the second class assignment weight data, and the current class assignment weight data corresponding to the image. In other words, the third category assigned weight data corresponding to the image is obtained by performing an exponential moving average calculation on the historical category assigned weight data corresponding to the image.
Illustratively, the exponential moving average calculation is performed based on the following formula (2):

w_t = β · w_{t-1} + (1 − β) · ŵ_t    (2)

where w_t represents the final category assignment weight data calculated for the current training round, w_{t-1} represents the category assignment weight data retained from the previous round, ŵ_t represents the current category assignment weight data determined in the current round, and the smoothing coefficient β (a value between 0 and 1) can be determined according to the actual situation.
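Illustratively, the exponential moving average over an image's historical category assignment weight data may be sketched in Python as follows; the smoothing coefficient value of 0.9 is an assumption for illustration, not a value fixed by the embodiment.

```python
def exponential_moving_average(weight_history, beta=0.9):
    """EMA over one image's category assignment weights across rounds (sketch).

    `weight_history` lists the weights oldest-first (e.g. the first, second
    and current category assignment weight data); `beta` is an assumed
    smoothing coefficient between 0 and 1.
    """
    ema = weight_history[0]
    for w in weight_history[1:]:
        ema = beta * ema + (1.0 - beta) * w
    return ema

# Third category assignment weight for one image, from its weight history:
w3 = exponential_moving_average([1.0, 1.1, 1.55])
```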
Step S450, training the third recognition model based on the third category distribution weight data corresponding to the M images to obtain the image recognition model.
According to the embodiment of the present application, the randomness of individual training rounds can be prevented from distorting the weight distribution during weight reassignment, which further improves the robustness of the obtained image recognition model.
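Illustratively, steps S410 and S450 each weight an image's contribution to the training loss by its category assignment weight data; a minimal PyTorch-style sketch of one such training step is given below, in which the model, optimizer, and tensor shapes are assumptions for illustration.

```python
import torch.nn.functional as F

def weighted_training_step(model, optimizer, images, labels, sample_weights):
    """One training step using per-image category assignment weights (sketch).

    `images` is a batch tensor, `labels` holds the category indices, and
    `sample_weights` holds the current category assignment weight of each
    image in the batch.
    """
    optimizer.zero_grad()
    logits = model(images)
    # Per-sample cross entropy, scaled by each image's assignment weight,
    # so up-weighted (previously misrecognized) images influence training more.
    per_sample_loss = F.cross_entropy(logits, labels, reduction="none")
    loss = (per_sample_loss * sample_weights).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```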
It should be noted that, in practical application, step S450 may be implemented by obtaining a fourth recognition model, a fifth recognition model, and so on, and continuously revising the category distribution weight data corresponding to the M images by using each obtained intermediate model until a converged image recognition model is obtained.
Illustratively, in some embodiments, training the second recognition model based on the second category distribution weight data corresponding to the M images to obtain the image recognition model includes: training the second recognition model based on the second category distribution weight data corresponding to the M images to obtain a fourth recognition model; determining fourth category distribution weight data corresponding to the M images based on the second category distribution weight data corresponding to the M images and the fourth recognition model; training the fourth recognition model based on the fourth category distribution weight data corresponding to the M images to obtain a fifth recognition model; obtaining a sixth recognition model based on the fourth category distribution weight data corresponding to the M images and the fifth recognition model; and repeating the steps of training the model of the current training round based on the category distribution weight data obtained before the current round starts to obtain a new model, and determining, based on the new model, the category distribution weight data to be used before the next round starts, until a converged new model is obtained, the converged new model being determined as the image recognition model.
As will be clearly understood by those skilled in the art, in the embodiment of the present application, the aforementioned training of the second recognition model based on the second category distribution weight data corresponding to the M images to obtain the fourth recognition model, and the determining of the fourth category distribution weight data corresponding to the M images based on the second category distribution weight data corresponding to the M images and the fourth recognition model, are both instances of training the model of the current training round based on the category distribution weight data obtained before the current round starts to obtain a new model, and determining, based on the new model, the category distribution weight data to be used before the next round starts.
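Illustratively, this repeated round-by-round procedure may be organized as in the following Python sketch; the callables `train_round`, `recognize`, and `update_weights`, as well as the loss-based convergence test, are hypothetical stand-ins for the training, recognition, and weight-update steps of the embodiments.

```python
def train_until_converged(model, weights, labels, train_round, recognize,
                          update_weights, max_rounds=50, tol=1e-3):
    """Reweight-and-retrain loop (sketch); helpers are user-supplied callables."""
    prev_loss = float("inf")
    for _ in range(max_rounds):
        # Train the model of the current round with the category assignment
        # weight data obtained before the round starts.
        loss = train_round(model, weights)
        # Determine the weight data for the next round from the new model.
        predictions = recognize(model)
        weights = update_weights(weights, labels, predictions)
        if abs(prev_loss - loss) < tol:  # treated here as convergence
            break
        prev_loss = loss
    return model, weights
```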
Fig. 8 is a schematic flowchart illustrating an image recognition method according to an embodiment of the present application. As shown in fig. 8, an image recognition method provided in an embodiment of the present application includes the following steps.
Step S500, determining an image recognition model.
Illustratively, the image recognition model is trained based on the model training method mentioned in the above embodiments.
Step S600, determining the category recognition result corresponding to each image in the image sequence to be recognized by using the image recognition model.
Illustratively, the image sequence to be recognized is a three-dimensional medical image sequence, and correspondingly, the category recognition result is a part recognition result.
In this way, the image recognition method provided by the embodiment of the present application uses the image recognition model to accurately determine the category recognition result corresponding to each image in the image sequence to be recognized.
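Illustratively, steps S500 and S600 may be applied as in the following sketch, assuming a PyTorch-style model and an iterable of preprocessed image tensors; the function name and preprocessing are assumptions for illustration.

```python
import torch

def recognize_sequence(image_recognition_model, image_sequence):
    """Determine the category recognition result for each image (sketch)."""
    image_recognition_model.eval()
    results = []
    with torch.no_grad():
        for image in image_sequence:
            logits = image_recognition_model(image.unsqueeze(0))  # add batch dim
            results.append(int(logits.argmax(dim=1)))  # predicted category index
    return results
```

For a three-dimensional medical image sequence, each returned category index would correspond to a part recognition result.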
The method embodiment of the present application is described in detail above with reference to fig. 1 to 8, and the apparatus embodiment of the present application is described in detail below with reference to fig. 9 to 11. It is to be understood that the description of the method embodiments corresponds to the description of the apparatus embodiments, and therefore reference may be made to the preceding method embodiments for parts not described in detail.
Fig. 9 is a schematic structural diagram of a model training apparatus according to an embodiment of the present application. As shown in fig. 9, the model training apparatus provided in the embodiment of the present application includes a first training module 200, a second determining module 300, and a second training module 400. Specifically, the first training module 200 is configured to train the first recognition model based on the first class distribution weight data corresponding to each of the M images, so as to obtain the second recognition model. The second determining module 300 is configured to determine second category assignment weight data corresponding to each of the M images based on the first category assignment weight data and the second recognition model corresponding to each of the M images. The second training module 400 is configured to train a second recognition model based on second class distribution weight data corresponding to each of the M images, so as to obtain an image recognition model.
In some embodiments, the model training apparatus further includes a first determining module, configured to determine, based on a sample of the image sequence to be recognized that includes the M images, first class assignment weight data corresponding to each of the M images.
In some embodiments, the second determining module 300 is further configured to determine, by using the second identification model, a first class identification result corresponding to each of the M images, and update the first class distribution weight data corresponding to each of the M images based on the class label data corresponding to each of the M images and the first class identification result corresponding to each of the M images, so as to obtain second class distribution weight data corresponding to each of the M images.
In some embodiments, the second determining module 300 is further configured to determine, based on the category label data corresponding to the M images and the first category recognition results corresponding to the M images, P images to be weighted in the M images; incrementally update the first category distribution weight data corresponding to the P images to be weighted to obtain second category distribution weight data corresponding to the P images to be weighted; and obtain the second category distribution weight data corresponding to the M images based on the second category distribution weight data corresponding to the P images to be weighted.
In some embodiments, the second determining module 300 is further configured to determine, based on the category label data corresponding to each of the P images to be weighted, a subordinate relationship between the P images to be weighted and the N categories, divide the P images to be weighted into Q consecutive image sub-sequences based on the subordinate relationship, and, for each of the Q consecutive image sub-sequences, incrementally update the first category assignment weight data corresponding to each of the images to be weighted in the consecutive image sub-sequences, so as to obtain the second category assignment weight data corresponding to each of the images to be weighted in the consecutive image sub-sequences.
In some embodiments, the second determining module 300 is further configured to determine, based on the image quantity information of the consecutive image sub-sequence, incremental weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence, and obtain, based on the incremental weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence and the first class distribution weight data, second class distribution weight data corresponding to each of the images to be weighted in the consecutive image sub-sequence.
In some embodiments, the second determining module 300 is further configured to determine, based on the image quantity information, whether the number of images in the consecutive image sub-sequence is less than a preset number of images. If the number of images in the consecutive image sub-sequence is greater than or equal to the preset number of images, the module determines the image to be weighted at the start end of the consecutive image sub-sequence based on the sequence position information of the consecutive image sub-sequence, and determines, in a linearly decreasing manner, the incremental weight data corresponding to each image to be weighted in the consecutive image sub-sequence, taking the start-end image to be weighted as the increment starting point and using the preset maximum increment value. If the number of images in the consecutive image sub-sequence is less than the preset number of images, the module determines the incremental weight data corresponding to each image to be weighted in the consecutive image sub-sequence based on the preset minimum increment value.
In some embodiments, the second training module 400 is further configured to train the second recognition model based on the second category distribution weight data corresponding to the M images to obtain a third recognition model; determine a second category recognition result corresponding to each of the M images by using the third recognition model; obtain current category distribution weight data corresponding to the M images based on the category label data corresponding to the M images and the second category recognition results corresponding to the M images; determine third category distribution weight data corresponding to the M images based on the first category distribution weight data, the second category distribution weight data, and the current category distribution weight data corresponding to the M images; and train the third recognition model based on the third category distribution weight data corresponding to the M images to obtain the image recognition model.
Fig. 10 is a schematic structural diagram of an image recognition apparatus according to an embodiment of the present application. As shown in fig. 10, the image recognition apparatus provided in the embodiment of the present application includes a model determination module 500 and a recognition result determination module 600. In particular, the model determination module 500 is used to determine an image recognition model. The recognition result determining module 600 is configured to determine, by using the image recognition model, a category recognition result corresponding to each of the images in the image sequence to be recognized.
Next, an electronic apparatus according to an embodiment of the present application is described with reference to fig. 11. Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 11, the electronic device 70 includes one or more processors 710 and memory 720.
Processor 710 may be a Central Processing Unit (CPU) or another form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 70 to perform desired functions.
Memory 720 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. Non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by processor 710 to implement the model training methods, the image recognition methods, and/or other desired functions of the various embodiments of the present application mentioned above. Various content, such as a sample of the image sequence to be recognized, may also be stored in the computer-readable storage medium.
In one example, the electronic device 70 may further include: an input device 730 and an output device 740, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input device 730 may include, for example, a keyboard, a mouse, and the like.
The output device 740 may output various information, including the category recognition result, to the outside. The output device 740 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto.
Of course, for the sake of simplicity, only some of the components of the electronic device 70 relevant to the present application are shown in fig. 11, and components such as buses, input/output interfaces, and the like are omitted. In addition, the electronic device 70 may include any other suitable components, depending on the particular application.
In addition to the above methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps of the model training method, the image recognition method according to various embodiments of the present application described above in this specification.
The computer program product may include program code for carrying out operations for embodiments of the present application in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions, which, when executed by a processor, cause the processor to perform the steps in the model training method, the image recognition method according to various embodiments of the present application described above in the present specification.
A computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present application are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present application. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.
The block diagrams of devices, apparatuses, and systems referred to in this application are only given as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to".
It should also be noted that in the devices, apparatuses, and methods of the present application, the components or steps may be decomposed and/or recombined. These decompositions and/or recombinations are to be considered as equivalents of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.