Data processing method, model training method, device and equipment

Document No.: 8556  Publication date: 2021-09-17

1. A method of data processing, the method comprising:

acquiring an image to be processed;

inputting the image to be processed into a trained neural network model for processing to obtain an image processing result; the trained neural network model is obtained by training an initial neural network model based on a meta-learning technology, and the initial neural network model comprises a feature extraction network obtained based on a self-supervised pre-training mode.

2. The method of claim 1, wherein the trained neural network model is trained in the following manner:

optimizing network parameters in the initial feature extraction network by adopting a self-supervised learning technology based on the first training sample set to obtain a trained feature extraction network;

and optimizing model parameters of an initial neural network model by adopting a meta-learning technology based on a second training sample set to obtain the trained neural network model, wherein the initial neural network model comprises the trained feature extraction network.

3. The method of claim 2, wherein optimizing network parameters in the initial feature extraction network using a self-supervised learning technique based on the first training sample set to obtain a trained feature extraction network comprises:

respectively performing first preprocessing and second preprocessing on a plurality of sample images in a first training sample set to obtain a first sub-sample image and a second sub-sample image of each sample image;

inputting a first sub-sample image and a second sub-sample image of the sample image to an initial feature extraction network, respectively, to extract sample features of the first sub-sample image and the second sub-sample image, respectively;

determining a loss value of a first loss function based on sample features of the first and second sub-sample images;

and iteratively adjusting the network parameters of the feature extraction network until the loss value of the first loss function meets a preset requirement.

4. The method of claim 2, wherein optimizing model parameters of the initial neural network model using a meta-learning technique based on the second set of training samples to obtain the trained neural network model comprises:

generating a plurality of subtasks for meta-learning according to the second training sample set, wherein each subtask comprises a support set and a query set; the sample images included in the support set and the query set are disjoint, but the sample categories of the sample images are the same;

respectively inputting a plurality of sample images in a support set of the subtask into an initial neural network model aiming at one subtask in the plurality of subtasks, and determining the description information of each sample category in a plurality of sample categories corresponding to the support set; respectively inputting a plurality of sample images to be classified in the query set of the subtask into the initial neural network model to obtain the distances between the sample characteristics of the sample images to be classified and the description information of the sample categories; determining a loss value of a second loss function based on the distance; iteratively adjusting the network parameters of the initial neural network model until the loss value of the second loss function meets a preset requirement;

wherein the trained neural network model is obtained by training based on the plurality of subtasks, respectively.

5. The method of any of claims 2-4, wherein the first set of training samples is the same as the second set of training samples.

6. The method of any of claims 2-4, wherein the sample images in the first set of training samples are different from the sample images in the second set of training samples, and wherein the number of samples in the first set of training samples is substantially greater than the number of samples in the second set of training samples.

7. The method of claim 1, wherein the feature extraction network comprises an Augmented Multiscale Deep InfoMax (AMDIM) network.

8. A method of model training, the method comprising:

optimizing network parameters in the initial feature extraction network by adopting a self-supervised learning technology based on the first training sample set to obtain a trained feature extraction network;

and optimizing model parameters of an initial neural network model by adopting a meta-learning technology based on a second training sample set to obtain the trained neural network model, wherein the initial neural network model comprises the trained feature extraction network.

9. A method of data processing, the method comprising:

acquiring an object to be processed;

inputting the object to be processed into a trained neural network model for processing to obtain an object processing result; the trained neural network model is obtained by training an initial neural network model based on a meta-learning technology, and the initial neural network model comprises a feature extraction network obtained based on a self-supervised pre-training mode.

10. A data processing apparatus, characterized in that the apparatus comprises:

the acquisition module is used for acquiring an image to be processed;

the processing module is used for inputting the image to be processed into the trained neural network model for processing to obtain an image processing result; the trained neural network model is obtained by training an initial neural network model based on a meta-learning technology, and the initial neural network model comprises a feature extraction network obtained based on a self-supervised pre-training mode.

11. A model training apparatus, the apparatus comprising:

the first training module is used for optimizing network parameters in the initial feature extraction network by adopting a self-supervised learning technology based on a first training sample set to obtain a trained feature extraction network;

and the second training module is used for optimizing model parameters of an initial neural network model by adopting a meta-learning technology based on a second training sample set to obtain the trained neural network model, and the initial neural network model comprises the trained feature extraction network.

12. A data processing apparatus, characterized in that the apparatus comprises:

the acquisition module is used for acquiring an object to be processed;

the processing module is used for inputting the object to be processed into the trained neural network model for processing to obtain an object processing result; the trained neural network model is obtained by training an initial neural network model based on a meta-learning technology, and the initial neural network model comprises a feature extraction network obtained based on a self-supervised pre-training mode.

13. A computer device, comprising: a memory, a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the method of any of claims 1-7.

14. A computer device, comprising: a memory, a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the method of claim 8.

15. A computer device, comprising: a memory, a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the method of claim 9.

Background

With the continuous development of neural network technology, neural network models have been widely applied in various fields to solve various problems, such as image classification.

In scenarios where the number of samples is scarce, there exist risk images that cover many categories but are few in number; these are called long-tailed risk images. In order to improve the generalization of a neural network model, and thereby its accuracy, in such low-sample scenarios (i.e., few-shot scenarios), a Meta Learning technique may be used to train the neural network model. In model training with the meta-learning technique, the entire neural network model is usually trained using labeled sample data. However, in a Few-shot scenario the number of samples corresponding to each label is very limited, so the learning ability of the neural network model based on the labeled sample data is restricted to recognizing the features corresponding to those labels, resulting in poor generalization.

Therefore, in few-shot scenarios, how to improve the generalization of the neural network model, and thereby its accuracy, is a problem that currently needs to be solved.

Disclosure of Invention

The embodiments of the present application provide a data processing method, a model training method, a device, and equipment, which address the problem in the prior art of how to improve the generalization of a neural network model so as to improve its accuracy.

In a first aspect, an embodiment of the present application provides a data processing method, where the method includes:

acquiring an image to be processed;

inputting the image to be processed into a trained neural network model for processing to obtain an image processing result; the trained neural network model is obtained by training an initial neural network model based on a meta-learning technology, and the initial neural network model comprises a feature extraction network obtained based on a self-supervised pre-training mode.

In a second aspect, an embodiment of the present application provides a model training method, where the method includes:

optimizing network parameters in the initial feature extraction network by adopting a self-supervised learning technology based on the first training sample set to obtain a trained feature extraction network;

and optimizing model parameters of an initial neural network model by adopting a meta-learning technology based on a second training sample set to obtain the trained neural network model, wherein the initial neural network model comprises the trained feature extraction network.

In a third aspect, an embodiment of the present application provides a data processing method, where the method includes:

acquiring an object to be processed;

inputting the object to be processed into a trained neural network model for processing to obtain an object processing result; the trained neural network model is obtained by training an initial neural network model based on a meta-learning technology, and the initial neural network model comprises a feature extraction network obtained based on a self-supervised pre-training mode.

In a fourth aspect, an embodiment of the present application provides a data processing apparatus, where the apparatus includes:

the acquisition module is used for acquiring an image to be processed;

the processing module is used for inputting the image to be processed into the trained neural network model for processing to obtain an image processing result; the trained neural network model is obtained by training an initial neural network model based on a meta-learning technology, and the initial neural network model comprises a feature extraction network obtained based on a self-supervised pre-training mode.

In a fifth aspect, an embodiment of the present application provides a model training apparatus, where the apparatus includes:

the first training module is used for optimizing network parameters in the initial feature extraction network by adopting a self-supervised learning technology based on a first training sample set to obtain a trained feature extraction network;

and the second training module is used for optimizing model parameters of an initial neural network model by adopting a meta-learning technology based on a second training sample set to obtain the trained neural network model, and the initial neural network model comprises the trained feature extraction network.

In a sixth aspect, an embodiment of the present application provides a data processing apparatus, where the apparatus includes:

the acquisition module is used for acquiring an object to be processed;

the processing module is used for inputting the object to be processed into the trained neural network model for processing to obtain an object processing result; the trained neural network model is obtained by training an initial neural network model based on a meta-learning technology, and the initial neural network model comprises a feature extraction network obtained based on a self-supervised pre-training mode.

In a seventh aspect, an embodiment of the present application provides a computer device, including: a memory, a processor; the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the method of any of the first aspects.

In an eighth aspect, an embodiment of the present application provides a computer device, including: a memory, a processor; the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the method of the second aspect.

In a ninth aspect, an embodiment of the present application provides a computer device, including: a memory, a processor; the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the method of the third aspect.

Embodiments of the present application also provide a computer-readable storage medium storing a computer program, the computer program comprising at least one code, which is executable by a computer to control the computer to perform the method according to any one of the first aspect.

Embodiments of the present application also provide a computer-readable storage medium storing a computer program, the computer program comprising at least one code, which is executable by a computer to control the computer to perform the method according to any one of the second aspect.

Embodiments of the present application also provide a computer-readable storage medium storing a computer program, the computer program comprising at least one code, which is executable by a computer to control the computer to perform the method according to any one of the third aspects.

Embodiments of the present application also provide a computer program, which is used to implement the method according to any one of the first aspect when the computer program is executed by a computer.

Embodiments of the present application also provide a computer program, which is used to implement the method according to any one of the second aspect when the computer program is executed by a computer.

Embodiments of the present application also provide a computer program, which is used to implement the method according to any one of the third aspect when the computer program is executed by a computer.

According to the data processing method, model training method, device, and equipment provided herein, an image to be processed is input into a trained neural network model for processing, and an image processing result is obtained. The trained neural network model is obtained by training an initial neural network model based on a meta-learning technology, and the initial neural network model comprises a feature extraction network obtained through self-supervised pre-training. Because the feature extraction network is pre-trained with a self-supervised learning technology, the pre-trained feature extraction network learns the features of the sample images themselves, rather than only the features corresponding to the labels. This improves the generalization of the feature extraction network within the trained neural network model, thereby improving the generalization of the neural network model and, in turn, its accuracy.

Drawings

In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.

Fig. 1-2 are schematic diagrams of application scenarios according to embodiments of the present application;

fig. 3 is a schematic flowchart of a data processing method according to an embodiment of the present application;

FIG. 4 is a schematic flowchart of a neural network model training process provided in an embodiment of the present application;

FIG. 5 is a schematic diagram illustrating pre-training a feature extraction network according to an embodiment of the present disclosure;

FIG. 6 is a schematic diagram of neural network model training based on a support set and a query set according to an embodiment of the present application;

fig. 7 is a schematic flowchart of a data processing method according to another embodiment of the present application;

FIG. 8 is a schematic flow chart illustrating a model training method according to an embodiment of the present disclosure;

fig. 9 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;

FIG. 10 is a schematic structural diagram of a computer device according to an embodiment of the present application;

fig. 11 is a schematic structural diagram of a data processing apparatus according to another embodiment of the present application;

FIG. 12 is a schematic diagram of a computer device according to another embodiment of the present application;

FIG. 13 is a schematic structural diagram of a model training apparatus according to an embodiment of the present application;

fig. 14 is a schematic structural diagram of a computer device according to yet another embodiment of the present application.

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; "a plurality of" typically means at least two, but does not exclude the case of at least one.

It should be understood that the term "and/or" as used herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the objects before and after it are in an "or" relationship.

The words "if", as used herein, may be interpreted as "at … …" or "at … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.

It is also noted that the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that an article or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such article or system. Without further limitation, an element preceded by "comprises a ..." does not exclude the presence of other like elements in an article or system that comprises that element.

In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.

For the convenience of those skilled in the art to understand the technical solutions provided in the embodiments of the present application, a technical environment for implementing the technical solutions is described below.

In the related art, the neural network model used in a data processing method for few-shot scenarios is typically trained as a whole using labeled sample data. Because the number of samples corresponding to each label is very limited in a few-shot scenario, the learning ability of the neural network model based on the labeled sample data is restricted to recognizing the features corresponding to those labels, resulting in poor generalization. A data processing method that can improve the generalization of the neural network model, and thereby its accuracy, is therefore urgently needed in the related art.

Based on practical technical requirements similar to those described above, the data processing method provided by the present application can improve the generalization of a neural network model in few-shot scenarios.

The following specifically describes the data processing method provided in each embodiment of the present application through two exemplary service scenarios.

Scene one

In one scenario, as shown in fig. 1, the data acquisition device 11 may acquire data such as image data, text data, or voice data. It is understood that the specific type of the data acquisition device corresponds to the data type; for image data, for example, the data acquisition device 11 may specifically be an image acquisition device. The type of the image acquisition device, and the specific type of image data acquired, may differ across application fields. For example, in the medical field the image acquisition device may specifically be a medical imaging device and the acquired image data may specifically be medical image data; in other fields, of course, the image acquisition device may be another type of image-acquiring device, which is not limited in this application.

The data acquisition device 11 may be coupled to the data processing device 12; the coupling shown in fig. 1 is only exemplary, and the data acquisition device 11 and the data processing device 12 may also be integrated in the same device. The data acquired by the data acquisition device 11 may be treated as the object to be processed, and the data processing device 12 may process the object to be processed using the method provided in any of the following embodiments of the present application, so as to obtain an object processing result for the object to be processed.

It should be noted that, in practical applications, the object to be processed may be any type of object that can be processed based on a neural network model, for example an image to be processed, a voice to be processed, or a text to be processed. Correspondingly, when the object to be processed is an image to be processed, the object processing result may be, for example, an image classification result; when it is a voice to be processed, a voice classification result; and when it is a text to be processed, a text classification result. Of course, in other embodiments the object to be processed may be another type of object, and the object processing result may be another type of result.

After the data processing device 12 obtains the object processing result, as shown in fig. 1, the object processing result may be output through the output device 13 so that the user may know the object processing result. The output device 13 may be, for example, a display, a speaker, or the like. Of course, in other embodiments, the output device 13 may also be other types of devices, which is not limited in this application.

Scene two

In another scenario, the data acquired by the data acquisition device 11 may be uploaded as an object to be processed to a server for processing. This scenario suits portable use: the object to be processed can be acquired with the portable data acquisition device 11 at home, in an office, or anywhere else, the portable device shown in fig. 2 being merely exemplary. As shown in fig. 2, the data acquisition device 11 may be coupled with the data transceiving device 14, and the two may also be integrated in the same device. After the data acquisition device 11 acquires the object to be processed, the data transceiving device 14 may transmit it to the server 15. The server 15 may be any form of data processing server, such as a cloud server or a distributed server. After receiving the object to be processed, the server 15 may process it using the data processing method provided in the embodiments of the present application, so as to obtain an object processing result.

After obtaining the object processing result, the server 15 may transmit it to the data transceiving device 14, as shown in fig. 2. After receiving the object processing result, the data transceiving device 14 may output it through an output device such as a display, which may be the display of the user's client device, for example a mobile phone or a computer.

It should be noted that the application scenarios shown in fig. 1-2 are merely examples of the data processing method provided in the present application, and are not limited thereto. The data processing method provided by the application can be applied to any scene for object processing based on the neural network model.

Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.

For convenience of description, fig. 3 mainly illustrates an image to be processed as an object to be processed.

Fig. 3 is a flowchart illustrating a data processing method according to an embodiment of the present application, where an execution subject of the embodiment may be the data processing apparatus 12 in fig. 1 or the server in fig. 2. As shown in fig. 3, the method of this embodiment may include:

step 301, acquiring an image to be processed;

step 302, inputting the image to be processed into a trained neural network model for processing to obtain an image processing result; the trained neural network model is obtained by training an initial neural network model based on a meta-learning technology, and the initial neural network model comprises a feature extraction network obtained based on a self-supervised pre-training mode.

For convenience of description, the following mainly takes the case where the neural network model is used to classify the image to be processed, that is, the image processing result is a classification result.

In the embodiment of the present application, an image acquisition device may be utilized to acquire an image to be processed, and the image acquisition device may be, for example, a camera, a medical imaging device, or the like. Of course, in other embodiments, the image capturing device may also be other types of devices, which is not limited in this application.

In the embodiment of the application, after the image to be processed is obtained, it may be input into the trained neural network model for processing, so as to obtain an image processing result. The trained neural network model is obtained by training an initial neural network model based on a meta-learning technology, and the initial neural network model comprises a feature extraction network obtained through self-supervised pre-training.

In the embodiment of the present application, as shown in fig. 4, a feature extraction network is first obtained by training with a self-supervised learning technique, and a neural network model containing this trained feature extraction network is then trained with a meta-learning technique, so as to obtain the trained neural network model for processing the image to be processed. Besides the feature extraction network, the neural network model in fig. 4 may further include other networks for further processing the features extracted by the feature extraction network; for an image classification scenario, these other networks may specifically be used to determine the classification result of the image to be classified according to its image features. In this way, a model training scheme is realized in which the feature extraction network within the neural network model is first pre-trained by self-supervised learning, and the whole neural network model is then trained as a whole by the meta-learning technique.

The feature extraction network is used for extracting image features. Self-supervised learning is a learning paradigm that learns from unlabeled data without requiring annotation, so learning can be carried out on unlabeled sample images. By pre-training the feature extraction network with a self-supervised learning technology, the pre-trained feature extraction network learns the features of the sample images themselves, rather than only the features corresponding to labels. This improves the generalization of the feature extraction network in the trained neural network model, thereby improving the generalization of the neural network model and, in turn, its accuracy.

In the embodiment of the present application, the trained neural network model may be obtained by training according to the following steps A and B. Step A: optimize network parameters in the initial feature extraction network by adopting a self-supervised learning technology based on the first training sample set, to obtain the trained feature extraction network. Step B: optimize model parameters of an initial neural network model by adopting a meta-learning technology based on a second training sample set, to obtain the trained neural network model, wherein the initial neural network model comprises the trained feature extraction network.

In the embodiment of the application, when performing self-supervised learning, the sample image may be preprocessed in order to better learn the characteristics of each individual sample image, and the sample image together with its preprocessed version may be used as training samples for self-supervised training of the feature extraction network. Alternatively, different preprocessing operations may be applied to the same sample image, and the resulting preprocessed images may be used as the training samples. The latter approach is mainly used as the example in the following description.

Illustratively, step A may specifically include the following steps A1-A4. Step A1: perform first preprocessing and second preprocessing respectively on a plurality of sample images in the first training sample set to obtain a first sub-sample image and a second sub-sample image of each sample image. Step A2: input the first sub-sample image and the second sub-sample image of a sample image into the initial feature extraction network respectively, so as to extract sample features of the first sub-sample image and the second sub-sample image respectively. Step A3: determine a loss value of a first loss function based on the sample features of the first and second sub-sample images. Step A4: iteratively adjust the network parameters of the feature extraction network until the loss value of the first loss function meets a preset requirement.

In an embodiment of the present application, the first training sample set may include a plurality of sample images. For each of the sample images, first preprocessing and second preprocessing may be performed respectively to obtain a first sub-sample image and a second sub-sample image of that sample image. The sub-sample image obtained by performing the first preprocessing may be referred to as the first sub-sample image, and the sub-sample image obtained by performing the second preprocessing may be referred to as the second sub-sample image. The first preprocessing and the second preprocessing are different preprocessing operations, though they may be of the same type; preprocessing types include, for example, resizing, color disturbance, picture rotation, and region sampling. Of course, in other embodiments the preprocessing may be of other types, which is not limited in this application. Taking region sampling as an example, as shown in fig. 4, a first sub-sample image Xa may be obtained after performing the first preprocessing on a sample image X, and a second sub-sample image Xb may be obtained after performing the second preprocessing on X. It should be noted that the sample image X may represent any sample image in the first training sample set.
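Illustratively, the two-view preprocessing described above may be implemented as in the following minimal sketch (assuming a PyTorch/torchvision environment; the specific augmentations, the 128-pixel crop size, and the function name make_views are illustrative assumptions rather than part of the original disclosure):

```python
from torchvision import transforms

# Two different preprocessing pipelines applied to the same sample image X,
# producing the first sub-sample image Xa and the second sub-sample image Xb.
first_preprocess = transforms.Compose([
    transforms.RandomResizedCrop(128),           # region sampling + resizing
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),  # color disturbance
    transforms.ToTensor(),
])
second_preprocess = transforms.Compose([
    transforms.RandomRotation(30),               # picture rotation
    transforms.RandomResizedCrop(128),
    transforms.ToTensor(),
])

def make_views(sample_image):
    """Return (Xa, Xb) for one PIL sample image."""
    return first_preprocess(sample_image), second_preprocess(sample_image)
```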

After the first sub-sample image and the second sub-sample image are obtained, an initial feature extraction network may be constructed. The feature extraction network may be any network supporting self-supervised learning; in the present application, an Augmented Multiscale Deep InfoMax (AMDIM) network is selected as the feature extraction network, which achieves the best effect. In the case where the feature extraction network is an AMDIM network, its structure may be, for example, as shown in table 1 below.

TABLE 1

In table 1, ndf is the output channel parameter of the network; ndepth is used to control the model's depth; nrkhs is the embedding dimension; and each convolution block of conv2_x through conv6_x contains 2 convolutional layers in depth.

It should be noted that the structure of the feature extraction network given in table 1 is only an example, and in other embodiments, the feature extraction network may also adopt other structures, which is not limited in this application.
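For illustration only, the following is a minimal sketch of a feature extraction network that exposes both local features and an image global feature (assuming PyTorch). It borrows only the hyperparameter names described under table 1 (ndf, ndepth, nrkhs); the layer layout is a simplification and not the AMDIM structure itself:

```python
import torch
import torch.nn as nn

class SimpleEncoder(nn.Module):
    """Simplified stand-in for the AMDIM-style feature extraction network."""
    def __init__(self, ndf: int = 64, ndepth: int = 2, nrkhs: int = 256):
        super().__init__()
        layers = [nn.Conv2d(3, ndf, 3, stride=2, padding=1), nn.ReLU()]
        ch = ndf
        for _ in range(ndepth):  # ndepth controls the depth of the trunk
            layers += [nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU()]
            ch *= 2
        self.trunk = nn.Sequential(*layers)         # intermediate layers
        self.local_head = nn.Conv2d(ch, nrkhs, 1)   # local features (spatial map)
        self.global_head = nn.Sequential(           # image global feature
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, nrkhs))

    def forward(self, x: torch.Tensor):
        h = self.trunk(x)
        # local features come from an intermediate map, the global feature
        # from the output layer, mirroring the description around fig. 5
        return self.local_head(h), self.global_head(h)
```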

After obtaining the first and second sub-sample images and constructing the initial feature extraction network, as shown in fig. 5, the first and second sub-sample images Xa and Xb of the sample image X may be input into the initial feature extraction network, respectively, to extract sample features of the first and second sub-sample images Xa and Xb, respectively. It should be noted that the feature extraction networks shown by the upper and lower dashed boxes in fig. 5 represent the same feature extraction network, and the first sub-sample image Xa and the second sub-sample image Xb share the same feature extraction network for processing.

In the embodiment of the present application, as shown in fig. 5, the sample features extracted by the initial Feature extraction network may include Local features (Local features) and Image Global features (Image Global features), where the Local features may refer to features obtained by extraction in an intermediate layer of the Feature extraction network, and the Image Global features may refer to features obtained by extraction in an output layer of the Feature extraction network.

Based on this, for example, the first loss function may include a loss function F1 used to represent the degree of difference between the global feature of the first sub-sample image Xa and the local feature of the second sub-sample image Xb; this relationship is shown by the dashed double-headed arrow labeled 1 in fig. 5. A larger degree of difference corresponds to a larger loss value of F1, so that training drives the global feature of Xa close to the local feature of Xb.

For example, the first loss function may further include a loss function F2 used to represent the degree of difference between the global feature of the second sub-sample image Xb and the local feature of the first sub-sample image Xa, shown by the dashed double-headed arrow labeled 2 in fig. 5. A larger degree of difference corresponds to a larger loss value of F2, so that training drives the global feature of Xb close to the local feature of Xa.

For example, the first loss function may further include a loss function F3 used to represent the degree of difference between the local feature of the first sub-sample image Xa and the local feature of the second sub-sample image Xb, shown by the dashed double-headed arrow labeled 3 in fig. 5. A larger degree of difference corresponds to a larger loss value of F3, so that training drives the local feature of Xa close to the local feature of Xb.

Of course, in other embodiments, the first loss function may also include loss functions with other meanings, which is not limited in this application.
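As a hedged sketch of how the terms F1-F3 might be combined into the first loss function, the following uses cosine dissimilarity between global vectors and local feature maps; AMDIM itself uses a noise-contrastive objective, so the metric here is an illustrative simplification, and the function name view_pair_loss is likewise an assumption:

```python
import torch
import torch.nn.functional as F

def view_pair_loss(local_a, global_a, local_b, global_b):
    """First loss function as a sum of three terms F1-F3.

    local_*: (B, C, H, W) local feature maps; global_*: (B, C) vectors.
    Each term grows with the degree of difference between the compared
    features, matching the description of F1-F3 above.
    """
    def global_local(g, l):
        # compare the global vector with every spatial position of the
        # local feature map, then average over positions and batch
        l_flat = l.flatten(2)                                  # (B, C, H*W)
        sim = F.cosine_similarity(g.unsqueeze(2), l_flat, dim=1)
        return (1.0 - sim).mean()

    f1 = global_local(global_a, local_b)   # global(Xa) vs local(Xb)
    f2 = global_local(global_b, local_a)   # global(Xb) vs local(Xa)
    f3 = (1.0 - F.cosine_similarity(       # local(Xa) vs local(Xb)
        local_a.flatten(2), local_b.flatten(2), dim=1)).mean()
    return f1 + f2 + f3
```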

In the embodiment of the present application, after the trained feature extraction network is obtained through step A, step B may be performed. It should be noted that the first training sample set may be the same as the second training sample set; that is, the same training set may be used both to train the feature extraction network and to train the neural network model containing the trained feature extraction network, which simplifies implementation. Alternatively, the sample images in the first and second training sample sets may be different, with the number of samples in the first training sample set much larger than that in the second, which further improves the generalization of the feature extraction network.

Meta-learning can also be called "learning to learn"; its idea is to use existing prior knowledge to adapt quickly to a new learning task. This idea has opened a new direction for few-shot learning, and few-shot learning methods based on meta-learning have attracted wide attention. It should be noted that conventional meta-learning image classification methods can be roughly divided into two types: metric-based image classification methods and gradient-based image classification methods. Metric-based image classification methods aim at minimizing the intra-class distance of images while maximizing the inter-class distance, with classical algorithms such as Matching Networks, Relation Networks, and Prototypical Networks.

In the embodiments of the present application, the prototype network (Prototypical Network) is mainly taken as the example. The prototype network needs only a few sample images per category: it maps the sample images of each category into an embedding space and derives a prototype of the category, that is, the description of the category, from the sample features corresponding to that category. Using Euclidean distance or cosine distance as the distance metric, training makes each image closest to the prototype of its own category and farther from the prototypes of the other categories. At test time, the distances from the sample image to be classified to the prototypes of the categories are processed by softmax, so as to determine the category label of the sample image to be classified.
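A minimal sketch of the prototype computation and the softmax over distances described above may look as follows (assuming PyTorch; Euclidean distance is used here, and cosine distance would be an equally valid choice per the description):

```python
import torch

def prototypes(support_feats: torch.Tensor, support_labels: torch.Tensor,
               n_classes: int) -> torch.Tensor:
    """Mean embedding per class: the prototype describing each category."""
    return torch.stack([support_feats[support_labels == c].mean(dim=0)
                        for c in range(n_classes)])

def classify(query_feats: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
    """Softmax over negative Euclidean distances to each prototype."""
    dists = torch.cdist(query_feats, protos)   # (n_query, n_classes)
    return torch.softmax(-dists, dim=1)        # class probabilities
```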

Based on this, before training, a Meta Train Set and a Meta Test Set may be designated according to the image categories of the sample images, such that the image categories in the meta training set and the meta test set do not overlap. The meta training set is used in the training stage to train the neural network model, and the meta test set is used in the testing stage to test the neural network model obtained by training on the meta training set; here the meta training set is the second training sample set. Before the meta training set and meta test set are designated, the sample images to be assigned may be labeled, where the label content is the category corresponding to the image; for example, (picture1000, dog) indicates that the category of picture1000 is dog.

After the meta training set is obtained, a plurality of subtasks may further be derived from it. Each subtask includes a Support Set and a Query Set, whose sample images are drawn from the meta training set, and the subtasks are used to train the neural network model. For example, the support set includes N sample categories, each containing M sample images, i.e., an N-way-M-shot setting; correspondingly, the query set includes the same N sample categories, each containing M sample images that are disjoint from those in the support set. A plurality of subtasks may be obtained by repeated random sampling when training the neural network model, for example by sampling 600 times, where N is greater than 1 and M is greater than or equal to 1.
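Illustratively, one N-way-M-shot subtask could be sampled from the labeled meta training set as in the following sketch (pure Python; the function name sample_episode and the separate query-set size m_query are illustrative assumptions, with m_query defaulting to M per the description above):

```python
import random
from collections import defaultdict

def sample_episode(labelled_images, n_way: int, m_shot: int, m_query: int = None):
    """Draw one N-way-M-shot subtask (support set + query set).

    labelled_images is a list of (image, class_label) pairs; support and
    query images for each class are disjoint, as required above.
    """
    m_query = m_shot if m_query is None else m_query
    by_class = defaultdict(list)
    for img, label in labelled_images:
        by_class[label].append(img)
    classes = random.sample(list(by_class), n_way)
    support, query = [], []
    for new_label, c in enumerate(classes):
        imgs = random.sample(by_class[c], m_shot + m_query)
        support += [(img, new_label) for img in imgs[:m_shot]]
        query += [(img, new_label) for img in imgs[m_shot:]]
    return support, query

# e.g. 600 subtasks, as in the sampling example above:
# episodes = [sample_episode(data, n_way=5, m_shot=5) for _ in range(600)]
```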

In the embodiment of the present application, after the plurality of subtasks are obtained, training of the initial neural network model may be carried out based on them. In the following, for convenience of description, the sample images in the query set are referred to as sample images to be classified, and model training based on a single subtask is taken as the example.

For one subtask among the plurality of subtasks, the following steps B1-B3 may specifically be adopted for model training. Step B1: input the sample images in the support set of the subtask into the initial neural network model, and determine the description information of each of the plurality of sample categories corresponding to the support set. Specifically, the sample features of the sample images in the support set may be extracted by the feature extraction network in the initial neural network model, and the description information of each sample category may then be determined from those sample features by the network following the feature extraction network, such as the other network in fig. 4.

Then, in step B2, a plurality of sample images to be classified in the query set of the subtask may be respectively input into the initial neural network model, so as to obtain distances between the sample features of the sample images to be classified and the description information of each of the plurality of sample categories. Specifically, the respective sample features of the plurality of sample images to be classified in the query set may be extracted through a feature extraction network in the initial neural network model, and further, the distances between the sample features of the sample images to be classified and the respective description information of the plurality of sample categories may be determined according to the respective sample features of the plurality of sample images to be classified in the query set through a network after the feature extraction network, such as another network in fig. 4.

Then, in step B3, a loss value of a second loss function is determined according to the distances, and the network parameters of the initial neural network model are iteratively adjusted until the loss value of the second loss function meets a preset requirement. For example, a cross-entropy loss may be used as the second loss function to calculate the loss value. In addition, a regularization term may be added to constrain the network parameters during training, thereby fine-tuning the network parameters of the neural network model; regularization helps prevent overfitting.
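Putting steps B1-B3 together, one meta-training step on a single subtask might look like the following sketch (assuming PyTorch; cross-entropy over negative distances stands in for the second loss function, the optimizer's weight decay plays the role of the regularization term, and all function names are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def episode_step(model: nn.Module, optimizer: torch.optim.Optimizer,
                 support_x, support_y, query_x, query_y, n_classes: int):
    """One meta-training step (B1-B3) on a single subtask.

    model maps an image batch to embedding vectors of shape (batch, dim).
    """
    # B1: prototypes (category description information) from the support set
    feats_s = model(support_x)
    protos = torch.stack([feats_s[support_y == c].mean(0)
                          for c in range(n_classes)])
    # B2: distances from query-sample features to every prototype
    feats_q = model(query_x)
    dists = torch.cdist(feats_q, protos)
    # B3: second loss function and parameter update
    loss = F.cross_entropy(-dists, query_y)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# Usage sketch: weight decay acts as the regularizer mentioned above, e.g.
# opt = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)
```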

An implementation of neural network model training based on a support set and a query set may be as shown in fig. 6. In fig. 6, sample image 1 is a sample image in the query set, i.e., a sample image to be classified, whose true category is bird; sample images 2-4 are sample images in the corresponding support set, where sample image 2 is categorized as duck, sample image 3 as dog, and sample image 4 as bird. As shown in fig. 6, the four sample images are first processed by the pre-trained feature extraction network to obtain their respective sample features (which may specifically correspond to the image global features in fig. 5): sample feature 1 for sample image 1, sample feature 2 for sample image 2, sample feature 3 for sample image 3, and sample feature 4 for sample image 4. These sample features are then further processed by another network (e.g., a metric module) to obtain the prediction result for sample image 1, namely scores of 0.1, 0.1, and 0.8 between the sample feature of sample image 1 and the description information corresponding to sample images 2, 3, and 4, where a larger score indicates a higher probability that sample image 1 belongs to the corresponding sample category, and a smaller score a lower probability. Since the scores 0.1, 0.1, and 0.8 are consistent with the true category of sample image 1, the loss value of the second loss function may be considered to have met the preset requirement for the combination of sample images 1-4. Thereafter, the neural network model may be further trained based on combinations of other sample images.

It should be noted that the subtask in steps B1 to B3 may specifically be the first subtask used for model training among the plurality of subtasks. It can be understood that, assuming the model is trained sequentially on subtask 1, subtask 2, subtask 3, and subtask 4, training on subtask 2 continues from the model obtained with subtask 1, training on subtask 3 continues from the model obtained with subtask 2, and so on, until training on all subtasks is completed, thereby completing the training of the neural network model.

According to the data processing method provided by the embodiment of the application, the image to be processed is input into the trained neural network model for processing, and an image processing result is obtained. The trained neural network model is obtained by training the initial neural network model based on the meta-learning technology, and the initial neural network model comprises the feature extraction network obtained through self-supervised pre-training. Because the feature extraction network is pre-trained with the self-supervised learning technology, it learns the features of the sample images themselves rather than only the features corresponding to the labels, which improves the generalization of the feature extraction network in the trained neural network model, thereby improving the generalization of the neural network model and, in turn, its accuracy.

Fig. 7 is a flowchart illustrating a data processing method according to another embodiment of the present application, where an execution subject of the embodiment may be the data processing apparatus 12 in fig. 1 or the server in fig. 2. As shown in fig. 7, the method of this embodiment may include:

step 701, acquiring an object to be processed;

step 702, inputting the object to be processed into a trained neural network model for processing to obtain an object processing result; the trained neural network model is obtained by training an initial neural network model based on a meta-learning technology, and the initial neural network model comprises a feature extraction network obtained based on a self-supervised pre-training mode.

In this embodiment of the present application, the object to be processed may be any type of object that can be processed based on a neural network model. Taking image processing by the neural network model as an example, the object to be processed may specifically be an image to be processed; taking speech processing as an example, a voice to be processed; and taking text processing as an example, a text to be processed.

Optionally, the trained neural network model is obtained by training in the following manner:

optimizing network parameters in the initial feature extraction network by adopting a self-supervised learning technology based on the first training sample set to obtain a trained feature extraction network;

and optimizing model parameters of an initial neural network model by adopting a meta-learning technology based on a second training sample set to obtain the trained neural network model, wherein the initial neural network model comprises the trained feature extraction network.

Optionally, the optimizing a network parameter in the initial feature extraction network by using a self-supervised learning technique based on the first training sample set to obtain a trained feature extraction network includes:

respectively performing first preprocessing and second preprocessing on a plurality of sample objects in a first training sample set to obtain a first sub-sample object and a second sub-sample object of each sample object;

inputting a first sub-sample object and a second sub-sample object of the sample object into an initial feature extraction network respectively to extract sample features of the first sub-sample object and the second sub-sample object respectively;

determining a loss value of a first loss function based on sample features of the first and second subsample objects;

and iteratively adjusting the network parameters of the feature extraction network until the loss value of the first loss function meets a preset requirement.

Optionally, the optimizing the model parameters of the initial neural network model by using a meta-learning technique based on the second training sample set to obtain the trained neural network model includes:

generating a plurality of subtasks for meta-learning according to the second training sample set, wherein each subtask comprises a support set and a query set; sample objects included in the support set and the query set are disjoint, but sample classes of sample objects are the same;

for one subtask of the plurality of subtasks, respectively inputting a plurality of sample objects in a support set of the subtask into an initial neural network model, and determining description information of each sample class in a plurality of sample classes corresponding to the support set; respectively inputting a plurality of sample objects to be classified in the query set of the subtask into the initial neural network model to obtain the distances between the sample characteristics of the sample objects to be classified and the description information of the sample categories; determining a loss value of a second loss function based on the distance; iteratively adjusting the network parameters of the initial neural network model until the loss value of the second loss function meets a preset requirement;

wherein the trained neural network model is obtained by training based on the plurality of subtasks, respectively.

Optionally, the first set of training samples is the same as the second set of training samples.

Optionally, the sample objects in the first training sample set and the second training sample set are different, and the number of samples in the first training sample set is much larger than that in the second training sample set.

Optionally, the feature extraction network includes an Augmented Multiscale Deep InfoMax (AMDIM) network.

It should be noted that, in this embodiment, the specific implementation of data processing when the object to be processed is of a type other than an image is similar to the implementation for the image to be processed in the embodiment shown in fig. 3; for details, refer to the relevant content of that embodiment, which is not repeated here.

According to the data processing method provided by the embodiment of the application, the object to be processed is input into the trained neural network model for processing, and an object processing result is obtained. The trained neural network model is obtained by training the initial neural network model based on the meta-learning technology, and the initial neural network model comprises the feature extraction network obtained through self-supervised pre-training. Because the feature extraction network is pre-trained with the self-supervised learning technology, it learns the features of the sample objects themselves rather than only the features corresponding to the labels, which improves the generalization of the feature extraction network in the trained neural network model, thereby improving the generalization of the neural network model and, in turn, its accuracy.

FIG. 8 is a schematic flow chart illustrating a model training method according to an embodiment of the present disclosure; the method provided by this embodiment may be executed by another device than those in fig. 1 and fig. 2, and after the other device trains and obtains the trained neural network model, the trained neural network model may be deployed to the data processing apparatus 12 in fig. 1 or the server in fig. 2. As shown in fig. 8, the method of this embodiment may include:

step 801, optimizing network parameters in an initial feature extraction network by adopting a self-supervised learning technology based on a first training sample set, to obtain a trained feature extraction network;

step 802, optimizing model parameters of an initial neural network model by adopting a meta-learning technology based on a second training sample set, to obtain the trained neural network model, wherein the initial neural network model comprises the trained feature extraction network.
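
As a rough skeleton only, the two steps could be chained as follows; the loader, subtask, and loss-function names are assumptions, and the concrete losses are sketched elsewhere in this description.

    import torch

    def train_model(encoder, model, first_loader, subtasks,
                    first_loss_fn, second_loss_fn, epochs=10):
        # Step 801: optimize only the feature extraction network with the
        # self-supervised (first) loss over the first training sample set.
        opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
        for _ in range(epochs):
            for batch in first_loader:
                loss = first_loss_fn(encoder, batch)
                opt.zero_grad()
                loss.backward()
                opt.step()

        # Step 802: optimize the full model (which contains the trained
        # encoder) with the meta-learning (second) loss, subtask by subtask.
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for task in subtasks:
            loss = second_loss_fn(model, task)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return model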

It should be noted that, for specific descriptions of step 801 and step 802, reference may be made to the related descriptions of the foregoing embodiments, and details are not described here again.

With the model training method provided by the embodiment of the application, the feature extraction network is first trained with a self-supervised learning technology, and the neural network model containing that trained feature extraction network is then trained with a meta-learning technology to obtain the trained neural network model for processing the image to be processed. Since the trained neural network model is obtained by training the initial neural network model based on the meta-learning technology, and the initial neural network model includes the feature extraction network obtained in the self-supervised pre-training manner, the generalization of the neural network model is improved, and the accuracy of the neural network model is improved.

Fig. 9 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application; referring to fig. 9, the present embodiment provides a data processing apparatus, which may execute the data processing method shown in fig. 3, and specifically, the data processing apparatus 90 may include:

an obtaining module 91, configured to obtain an image to be processed;

the processing module 92 is configured to input the image to be processed into the trained neural network model for processing, so as to obtain an image processing result; the trained neural network model is obtained by training an initial neural network model based on a meta-learning technology, and the initial neural network model comprises a feature extraction network obtained based on a self-supervision pre-training mode.
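
As a hypothetical illustration of how the two modules cooperate at inference time (the names trained_model, preprocess, and process_image are not from this application):

    import torch

    @torch.no_grad()
    def process_image(trained_model, preprocess, image):
        # Obtaining module: acquire and prepare the image to be processed.
        x = preprocess(image).unsqueeze(0)
        # Processing module: run the trained neural network model.
        trained_model.eval()
        return trained_model(x)   # image processing result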

Optionally, the trained neural network model is obtained by training in the following manner:

optimizing network parameters in the initial feature extraction network by adopting a self-supervised learning technology based on the first training sample set to obtain a trained feature extraction network;

and optimizing model parameters of an initial neural network model by adopting a meta-learning technology based on a second training sample set to obtain the trained neural network model, wherein the initial neural network model comprises the trained feature extraction network.

Optionally, the optimizing network parameters in the initial feature extraction network by using a self-supervised learning technology based on the first training sample set to obtain a trained feature extraction network includes:

respectively performing first preprocessing and second preprocessing on a plurality of sample images in a first training sample set to obtain a first sub-sample image and a second sub-sample image of each sample image;

inputting the first sub-sample image and the second sub-sample image of each sample image into an initial feature extraction network, respectively, to extract sample features of the first sub-sample image and the second sub-sample image, respectively;

determining a loss value of a first loss function based on sample features of the first and second sub-sample images;

and iteratively adjusting the network parameters of the feature extraction network until the loss value of the first loss function meets a preset requirement.
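
The application does not name the first loss function. The following sketch instantiates the steps above with an InfoNCE-style contrastive loss as one plausible choice: two preprocessings of each sample image, feature extraction for both sub-sample images, and a loss that ties matching pairs together. The specific transforms, image size, and names are assumptions.

    import torch
    import torch.nn.functional as F
    from torchvision import transforms

    # First and second preprocessing: two random augmentations of the
    # same sample image yield the two sub-sample images.
    first_preprocess = transforms.Compose([
        transforms.RandomResizedCrop(64),
        transforms.ToTensor()])
    second_preprocess = transforms.Compose([
        transforms.RandomResizedCrop(64),
        transforms.ColorJitter(0.4, 0.4, 0.4),
        transforms.ToTensor()])

    def first_loss(encoder, images, temperature=0.1):
        # images: a batch of PIL sample images from the first training set.
        v1 = torch.stack([first_preprocess(im) for im in images])
        v2 = torch.stack([second_preprocess(im) for im in images])
        z1 = F.normalize(encoder(v1), dim=1)   # features of first sub-samples
        z2 = F.normalize(encoder(v2), dim=1)   # features of second sub-samples
        logits = z1 @ z2.t() / temperature     # pairwise similarities
        labels = torch.arange(z1.size(0))      # matching pairs on the diagonal
        return F.cross_entropy(logits, labels)

The network parameters are then adjusted iteratively against this loss until its value meets the preset requirement.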

Optionally, the optimizing model parameters of the initial neural network model by using a meta-learning technique based on the second training sample set to obtain the trained neural network model includes:

generating a plurality of subtasks for meta-learning according to the second training sample set, wherein each subtask comprises a support set and a query set; the sample images included in the support set and the query set are disjoint, but the sample categories of the sample images are the same;

for one subtask of the plurality of subtasks, respectively inputting a plurality of sample images in the support set of the subtask into an initial neural network model, and determining the description information of each sample category in a plurality of sample categories corresponding to the support set; respectively inputting a plurality of sample images to be classified in the query set of the subtask into the initial neural network model, to obtain distances between the sample features of the sample images to be classified and the description information of the sample categories; determining a loss value of a second loss function based on the distances; and iteratively adjusting the network parameters of the initial neural network model until the loss value of the second loss function meets a preset requirement;

wherein the trained neural network model is obtained by training based on the plurality of subtasks, respectively.

Optionally, the first training sample set is the same as the second training sample set.

Optionally, the sample images in the first training sample set and the second training sample set are different, and the number of samples in the first training sample set is much larger than that in the second training sample set.

Optionally, the feature extraction network includes an Augmented Multiscale Deep InfoMax (AMDIM) network.

The apparatus shown in fig. 9 can perform the method of the embodiment shown in fig. 3; for parts of this embodiment not described in detail, refer to the related description of the embodiment shown in fig. 3. For the implementation process and technical effects of this technical solution, refer to the description in the embodiment shown in fig. 3; details are not repeated here.

In one possible implementation, the structure of the data processing apparatus shown in fig. 9 may be implemented as a computer device. As shown in fig. 10, the computer device may include a processor 101 and a memory 102. The memory 102 is used for storing a program that supports the computer device in executing the data processing method provided in the embodiment shown in fig. 3, and the processor 101 is configured to execute the program stored in the memory 102.

The program comprises one or more computer instructions, wherein the one or more computer instructions, when executed by the processor 101, are capable of performing the steps of:

acquiring an image to be processed;

inputting the image to be processed into a trained neural network model for processing to obtain an image processing result; the trained neural network model is obtained by training an initial neural network model based on a meta-learning technology, and the initial neural network model comprises a feature extraction network obtained based on a self-supervision pre-training mode.

Optionally, the processor 101 is further configured to perform all or part of the steps in the foregoing embodiment shown in fig. 3.

The computer device may further include a communication interface 103 for communicating with other devices or a communication network.

Fig. 11 is a schematic structural diagram of a data processing apparatus according to another embodiment of the present application; referring to fig. 11, the present embodiment provides a data processing apparatus, which may execute the data processing method shown in fig. 7, and specifically, the data processing apparatus 110 may include:

an obtaining module 111, configured to obtain an object to be processed;

the processing module 112 is configured to input the object to be processed into the trained neural network model for processing, so as to obtain an object processing result; the trained neural network model is obtained by training an initial neural network model based on a meta-learning technology, and the initial neural network model comprises a feature extraction network obtained based on a self-supervision pre-training mode.

The apparatus shown in fig. 11 can execute the method of the embodiment shown in fig. 7; for parts of this embodiment not described in detail, refer to the related description of the embodiment shown in fig. 7. For the implementation process and technical effects of this technical solution, refer to the description in the embodiment shown in fig. 7; details are not repeated here.

In one possible implementation, the structure of the data processing apparatus shown in fig. 11 may be implemented as a computer device. As shown in fig. 12, the computer device may include a processor 121 and a memory 122. The memory 122 is used for storing a program that supports the computer device in executing the data processing method provided in the embodiment shown in fig. 7, and the processor 121 is configured to execute the program stored in the memory 122.

The program comprises one or more computer instructions which, when executed by the processor 121, are capable of performing the steps of:

acquiring an object to be processed;

inputting the object to be processed into a trained neural network model for processing to obtain an object processing result; the trained neural network model is obtained by training an initial neural network model based on a meta-learning technology, and the initial neural network model comprises a feature extraction network obtained based on a self-supervision pre-training mode.

Optionally, the processor 121 is further configured to perform all or part of the steps in the foregoing embodiment shown in fig. 7.

The computer device may further include a communication interface 123 for communicating with other devices or a communication network.

FIG. 13 is a schematic structural diagram of a model training apparatus according to an embodiment of the present application; referring to fig. 13, the present embodiment provides a model training apparatus, which may perform the above model training method, and specifically, the model training apparatus 130 may include:

the first training module 131 is configured to optimize network parameters in the initial feature extraction network by using a self-supervised learning technology based on the first training sample set, to obtain a trained feature extraction network;

and a second training module 132, configured to optimize model parameters of an initial neural network model by using a meta-learning technique based on a second training sample set, to obtain the trained neural network model, where the initial neural network model includes the trained feature extraction network.

The apparatus shown in fig. 13 can execute the method of the embodiment shown in fig. 8; for parts of this embodiment not described in detail, refer to the related description of the embodiment shown in fig. 8. For the implementation process and technical effects of this technical solution, refer to the description in the embodiment shown in fig. 8; details are not repeated here.

In one possible implementation, the structure of the model training apparatus shown in fig. 13 may be implemented as a computer device. As shown in fig. 14, the computer device may include a processor 141 and a memory 142. The memory 142 is used for storing a program that supports the computer device in executing the model training method provided in the embodiment shown in fig. 8, and the processor 141 is configured to execute the program stored in the memory 142.

The program comprises one or more computer instructions which, when executed by the processor 141, enable the following steps to be performed:

optimizing network parameters in the initial feature extraction network by adopting a self-supervised learning technology based on the first training sample set to obtain a trained feature extraction network;

and optimizing model parameters of an initial neural network model by adopting a meta-learning technology based on a second training sample set to obtain the trained neural network model, wherein the initial neural network model comprises the trained feature extraction network.

Optionally, processor 141 is further configured to perform all or part of the steps in the foregoing embodiment shown in fig. 8.

The computer device may further include a communication interface 143 for communicating with other devices or a communication network.

In addition, the present application provides a computer storage medium for storing computer software instructions for a computer device, which includes a program for executing the method embodiment shown in fig. 3.

Embodiments of the present application provide a computer storage medium for storing computer software instructions for a computer device, which includes a program for executing the method embodiment shown in fig. 7.

The present application provides a computer storage medium for storing computer software instructions for a computer device, which includes a program for executing the method embodiment shown in fig. 8.

The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.

Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by adding a necessary general hardware platform, and of course can also be implemented by a combination of hardware and software. Based on this understanding, the above technical solutions, or the portions thereof that contribute to the prior art, may be embodied in the form of a computer program product carried on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.

The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.

Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.

Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
