Cell image small sample classification system based on distance measurement

Document No.: 8465  Published: 2021-09-17

1. A cell image small-sample classification system based on distance measurement, characterized by comprising an image conversion module, a pre-training module and a small-sample classification module; the image conversion module converts cell image data set A into cell image data set B using a trained image conversion network, then merges data sets A and B to obtain a cell small-sample data set;

the pre-training module pre-trains the constructed Resnet18 classification model; when the classification accuracy of the model no longer increases as training continues, the model has reached convergence, yielding a pre-trained Resnet18 classification model;

the small-sample classification module replaces the number of final output channels of the pre-trained Resnet18 classification model with the number of cell categories to be identified, thereby turning the Resnet18 classification model into a cell small-sample classification model; cells in cell small-sample images are then classified by training this model;

the training process of the cell small sample classification model is as follows:

step one, the cell small-sample data set obtained from the image conversion module is divided into a training set and a test set at a category ratio of 6:4, where the cell categories in the training set and the test set do not overlap; the training set is further divided into a support set and a query set;
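For concreteness, the n-way, m-shot episode construction used in step two below can be sketched as follows (a minimal sketch: the dataset layout, function name, and a per-class query count `z_query` are illustrative assumptions, not taken from the source):

```python
import random

def sample_episode(dataset, n_way, m_shot, z_query, seed=None):
    """Draw one n-way, m-shot training episode from a {class: [images]}
    mapping (hypothetical layout): m_shot support images per class, and
    z_query query images per class from the remaining images."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)
    support, query = [], []
    for c in classes:
        picks = rng.sample(dataset[c], m_shot + z_query)
        support += [(img, c) for img in picks[:m_shot]]
        query += [(img, c) for img in picks[m_shot:]]
    return support, query

# Toy data set: 5 cell classes with 10 image ids each.
data = {f"class_{i}": [f"img_{i}_{j}" for j in range(10)] for i in range(5)}
support, query = sample_episode(data, n_way=3, m_shot=2, z_query=4, seed=0)
```

Sampling support and query images without replacement from the same class pool guarantees the two sets never share a picture within an episode.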

step two, n cell categories are randomly drawn from the training set of step one, and m cell pictures are randomly drawn from each category to form the support set; z cell pictures are then selected from the remaining pictures of those n categories to form the query set; finally, the support set and query set are fed into the cell small-sample classification model for training, as follows:

the cell small-sample classification model produces a feature vector f(X_i) for each cell picture; for each picture in the support set, the Euclidean distances between its feature vector f(X_i) and the feature vectors of the other same-class pictures in the support set are computed and their sum is recorded as d; the Euclidean distances between f(X_i) and the feature vectors of the 5 nearest different-class cell pictures in the query set are then computed and their sum is recorded as D; letting a = d + D, the a values of all cells in the support set are normalized to give a confidence a_i for each cell in the support set;
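A minimal sketch of this confidence computation, assuming the feature vectors are stored as NumPy arrays (the function name and array layout are illustrative):

```python
import numpy as np

def support_confidences(sup_feats, sup_labels, qry_feats, qry_labels, k=5):
    """Per-support-sample confidence a_i as described above: d is the summed
    Euclidean distance to same-class support features, D the summed distance
    to the k nearest different-class query features, a = d + D, and a is
    then normalized over the support set. feats are (N, dim) arrays."""
    sup_feats = np.asarray(sup_feats, dtype=float)
    qry_feats = np.asarray(qry_feats, dtype=float)
    sup_labels, qry_labels = np.asarray(sup_labels), np.asarray(qry_labels)
    a = np.empty(len(sup_feats))
    for i, (f, y) in enumerate(zip(sup_feats, sup_labels)):
        same = sup_feats[sup_labels == y]
        d = np.linalg.norm(same - f, axis=1).sum()   # self-distance is zero
        diff = qry_feats[qry_labels != y]
        D = np.sort(np.linalg.norm(diff - f, axis=1))[:k].sum()
        a[i] = d + D
    return a / a.sum()
```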

Respectively calculating the probability that each cell picture X in the support set belongs to each of the n cell classes randomly drawn in the current training round: p(y = k | X) = exp(−‖f(X) − c_k‖²) / Σ_{k′} exp(−‖f(X) − c_{k′}‖²); meanwhile, during training, the loss function is minimized by stochastic gradient descent, and the cell class of the current cell picture is defined as the class with the largest probability value;

where k is the true label of the cell training sample, c_k is the prototype representation of cell class k, and S_k = {(X_1, y_1), …, (X_N, y_N)} denotes the cell data set of class k, in which X is a cell picture (the raw data) and y is the class corresponding to that picture;
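The probability here takes the standard prototypical-network form, a softmax over negative squared Euclidean distances to the class prototypes. A sketch is below; computing each prototype c_k as a confidence-weighted mean of support features is an assumption, since the source does not spell out how the confidences a_i enter the prototype:

```python
import numpy as np

def prototypes(sup_feats, sup_labels, conf):
    """Class prototypes c_k as confidence-weighted means of support features
    (the weighting scheme is an assumption based on the confidences a_i)."""
    sup_feats = np.asarray(sup_feats, dtype=float)
    sup_labels, conf = np.asarray(sup_labels), np.asarray(conf, dtype=float)
    protos = {}
    for k in np.unique(sup_labels):
        m = sup_labels == k
        w = conf[m] / conf[m].sum()
        protos[k] = (w[:, None] * sup_feats[m]).sum(axis=0)
    return protos

def class_probabilities(x, protos):
    """Softmax over negative squared Euclidean distances to each prototype."""
    keys = sorted(protos)
    logits = np.array([-np.sum((np.asarray(x, dtype=float) - protos[k]) ** 2)
                       for k in keys])
    logits -= logits.max()            # subtract max for numerical stability
    p = np.exp(logits)
    return dict(zip(keys, p / p.sum()))
```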

and step three, the test set is input into the cell small-sample classification model trained in step two; when the percentage of test-set images the model classifies correctly no longer increases with further training, the trained cell small-sample classification model is obtained.

2. The cell image small-sample classification system based on distance measurement according to claim 1, wherein the Resnet18 classification model constructed in the pre-training module has the following structure:

the first layer comprises, in order: a convolutional layer with 3 input channels, 64 output channels, a 7 × 7 kernel, stride 2 and padding 3; a BN layer; a ReLU activation layer; and a max-pooling layer with a 3 × 3 window, stride 2 and padding 3;

First residual block:

the second layer comprises, in order: a convolutional layer with 64 input channels, 64 output channels, a 3 × 3 kernel, stride 1 and padding 1; a BN layer;

the third layer is identical to the second; the second-layer input and the third-layer output are added, passed through two consecutive ReLU activation layers, and fed to the fourth layer;

the fourth layer comprises, in order: a convolutional layer with 64 input channels, 64 output channels, a 3 × 3 kernel, stride 1 and padding 1; a BN layer;

the fifth layer is identical to the fourth; the fourth-layer input and the fifth-layer output are added, passed through two consecutive ReLU activation layers, and fed into the second residual block;

Second residual block:

the sixth layer comprises, in order: a convolutional layer with 64 input channels, 128 output channels, a 3 × 3 kernel, stride 2 and padding 1; a BN layer;

the seventh layer comprises, in order: a convolutional layer with 128 input channels, 128 output channels, a 3 × 3 kernel, stride 1 and padding 1; a BN layer;

the input of the sixth convolutional layer and the output of the seventh convolutional layer are added, passed through two consecutive ReLU activation layers, and fed into the eighth convolutional layer;

the eighth layer comprises, in order: a convolutional layer with 128 input channels, 128 output channels, a 3 × 3 kernel, stride 1 and padding 1; a BN layer;

the ninth layer is a convolutional layer with 128 input channels, 128 output channels, a 3 × 3 kernel, stride 1 and padding 1; the eighth-layer input and the ninth-layer output are added, passed through two consecutive ReLU activation layers, and fed into the next residual block;

Third residual block:

the tenth layer comprises, in order: a convolutional layer with 128 input channels, 256 output channels, a 3 × 3 kernel, stride 2 and padding 1; a BN layer;

the eleventh layer comprises, in order: a convolutional layer with 256 input channels, 256 output channels, a 3 × 3 kernel, stride 1 and padding 1; a BN layer;

the input of the tenth convolutional layer and the output of the eleventh convolutional layer are added, passed through two consecutive ReLU activation layers, and fed into the twelfth convolutional layer;

the twelfth layer comprises, in order: a convolutional layer with 256 input channels, 256 output channels, a 3 × 3 kernel, stride 1 and padding 1; a BN layer;

the thirteenth layer is a convolutional layer with 256 input channels, 256 output channels, a 3 × 3 kernel, stride 1 and padding 1; the twelfth-layer input and the thirteenth-layer output are added, passed through two consecutive ReLU activation layers, and fed into the next residual block;

Fourth residual block:

the fourteenth layer comprises, in order: a convolutional layer with 256 input channels, 512 output channels, a 3 × 3 kernel, stride 2 and padding 1; a BN layer;

the fifteenth layer comprises, in order: a convolutional layer with 512 input channels, 512 output channels, a 3 × 3 kernel, stride 1 and padding 1; a BN layer;

the input of the fourteenth convolutional layer and the output of the fifteenth convolutional layer are added, passed through two consecutive ReLU activation layers, and fed into the sixteenth convolutional layer;

the sixteenth layer comprises, in order: a convolutional layer with 512 input channels, 512 output channels, a 3 × 3 kernel, stride 1 and padding 1; a BN layer;

the seventeenth layer is a convolutional layer with 512 input channels, 512 output channels, a 3 × 3 kernel, stride 1 and padding 1; the sixteenth-layer input and the seventeenth-layer output are added, passed through two consecutive ReLU activation layers, and fed to the next layer;

the last two layers are, in order: an average-pooling layer with a 7 × 7 window, stride 7 and no padding; and a fully connected layer whose number of output channels is 64 during pre-training.

3. The system according to claim 2, wherein the cell small-sample classification model constructed in the small-sample classification module is obtained by replacing the output channel count of the last fully connected layer of the Resnet18 classification model with the number of cell classes to be identified.

4. The cell image small-sample classification system based on distance measurement according to claim 1, wherein the image conversion module is structured as follows:

the image conversion module comprises two generators and two discriminators;

the generator comprises, in order, three convolutional blocks, 6 residual blocks, two transposed-convolution blocks and 1 output block, wherein:

the first convolutional block comprises, in order: a reflection-padding layer with padding width 3; a convolutional layer with 3 input channels, 64 output channels and a 7 × 7 kernel; an InstanceNorm normalization layer; a ReLU layer;

the second convolutional block comprises, in order: a convolutional layer with 64 input channels, 128 output channels, a 3 × 3 kernel, stride 2 and padding 1; an InstanceNorm normalization layer; a ReLU layer;

the third convolutional block comprises, in order: a convolutional layer with 128 input channels, 256 output channels, a 3 × 3 kernel, stride 2 and padding 1; an InstanceNorm normalization layer; a ReLU layer;

the fourth block consists of the 6 residual blocks, each comprising: a first convolutional layer with 256 input channels, 64 output channels and a 1 × 1 kernel; a second layer that is a ReLU layer; a third convolutional layer with 64 input channels, 64 output channels and a 3 × 3 kernel; a fourth layer that is a ReLU layer; a fifth convolutional layer with 64 input channels, 256 output channels and a 1 × 1 kernel; in each residual block, the output of the fifth convolutional layer is added to the block input and then passed through a ReLU layer;

the fifth block is a transposed-convolution block comprising, in order: a transposed convolutional layer with 256 input channels, 128 output channels, a 3 × 3 kernel, stride 2, input padding 1 and output padding 1; an InstanceNorm normalization layer; a ReLU layer;

the sixth block is a transposed-convolution block comprising, in order: a transposed convolutional layer with 128 input channels, 64 output channels, a 3 × 3 kernel, stride 2, input padding 1 and output padding 1; an InstanceNorm normalization layer; a ReLU layer;

the seventh block is the output block, comprising, in order: a reflection-padding layer with padding width 3; a convolutional layer with 64 input channels, 3 output channels and a 7 × 7 kernel; a Tanh layer;
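One of the generator's six residual blocks might be sketched in PyTorch as follows; the padding of the 3 × 3 convolution is an assumption (chosen so the block preserves spatial size, which the skip connection requires):

```python
import torch
import torch.nn as nn

class BottleneckResBlock(nn.Module):
    """One of the generator's six residual blocks as described above:
    1x1 conv 256->64, ReLU, 3x3 conv 64->64, ReLU, 1x1 conv 64->256,
    with the block input added to the output and a final ReLU."""
    def __init__(self, channels=256, mid=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.ReLU(inplace=True),
            # padding=1 is assumed; the source does not state it explicitly
            nn.Conv2d(mid, mid, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # residual connection: block output added to the block input
        return self.relu(x + self.body(x))

block = BottleneckResBlock()
y = block(torch.randn(1, 256, 56, 56))
```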

the discriminator comprises four convolutional layers and a final output layer, wherein:

the first convolutional layer comprises, in order: a convolutional layer with 3 input channels, 64 output channels, a 4 × 4 kernel, stride 2 and padding 1; a LeakyReLU layer;

the second convolutional layer comprises, in order: a convolutional layer with 64 input channels, 128 output channels, a 4 × 4 kernel, stride 2 and padding 1; an InstanceNorm normalization layer; a LeakyReLU layer;

the third convolutional layer comprises, in order: a convolutional layer with 128 input channels, 256 output channels, a 4 × 4 kernel, stride 2 and padding 1; an InstanceNorm normalization layer; a LeakyReLU layer;

the fourth convolutional layer comprises, in order: a convolutional layer with 256 input channels, 512 output channels, a 4 × 4 kernel, stride 1 and padding 1; an InstanceNorm normalization layer; a LeakyReLU layer;

the fifth layer comprises: a convolutional layer with 512 input channels, 1 output channel, a 4 × 4 kernel and padding 1; followed by an average-pooling layer.

5. The system according to claim 1, wherein the image conversion network is trained as follows: the existing public image data set maps is input into the image conversion network for training, yielding the trained image conversion network.

6. The system of claim 1, wherein the image transformation module preprocesses the data set by:

all pictures in the obtained cell image data set are subjected to horizontal flipping, rotation, translation, conversion to grayscale, color jittering, scaling and cropping, and all pictures are resized to a fixed size of 224 × 224.

Background

There are three types of blood cells in the human body: red blood cells, granulocytes, and platelets. Among these, the recognition and classification of granulocytes is an active research area, as granulocytes are responsible for human immunity. Granulocyte classification provides valuable information to physicians and aids many important diagnoses, such as leukemia and AIDS. Traditionally, granulocytes are classified manually under a microscope, which is not only time-consuming but also error-prone.

At present, the clinical examination method for granulocytes is manual microscopy. Its accuracy can exceed 95%, but it is inefficient, classification is slow, and accuracy depends on the experience and condition of the examiner. In the field of medical image processing, deep-learning-aided diagnosis has become a major trend as deep learning technology develops; accurate computer-aided tools help accelerate disease diagnosis, reduce physicians' workload, improve efficiency, and deliver more accurate diagnostic results.

However, deep learning algorithms usually need thousands of supervised samples to guarantee learning performance, and on small-sample problems with little labeled data their metrics fall short of human-level performance. In the medical field, large amounts of labeled data for cell classification are not available, and the cost of obtaining such labels is very high.

Humans can learn quickly from a small number of samples, easily building knowledge of new things from just one or a few examples. This has given rise to deep-learning-based small-sample classification methods, which aim to solve classification tasks with only a small number of labeled samples. Given the scarcity of labeled cell samples, learning good features, generalizing to rare cell categories, and accurately classifying cells in the small-sample setting are hot topics in the field of cell classification and recognition, with strong practical significance.

Disclosure of Invention

To overcome these problems, the invention provides a cell image small-sample classification system based on distance measurement. By building a classification model comprising an image conversion module, a pre-training module and a small-sample classification module, the system learns and classifies a blood-cell small-sample data set, helping clinicians classify blood cells rapidly, reducing their workload, and improving cell classification capability and model generalization. This is of great significance for the blood-cell small-sample classification problem in the medical field.


The invention has the beneficial effects that:

the invention adopts a data-enhancement strategy, combining a generative adversarial network with network pre-training to build a new feature extractor, and improves the prototype computation of the distance-based small-sample classification model through the ReliefF algorithm; it classifies and labels cells under the microscope, effectively reducing the labeling workload of clinicians, with good accuracy and generalization.

Detailed Description

The invention is further described below. The following examples are only for illustrating the technical solutions of the present invention more clearly, and should not be taken as limiting the scope of the present invention.

A cell image small sample classification system based on distance measurement comprises an image conversion module, a pre-training module and a small sample classification module; the image conversion module converts cell image dataset A into cell image dataset B using a trained image conversion network, then merges dataset A and dataset B to obtain the cell small-sample dataset;

the pre-training module pre-trains the constructed ResNet18 classification model on all cell base-class data (the training set); when the classification accuracy of the pre-trained model no longer increases with further training (about 78% at this point), the model has reached convergence, yielding the pre-trained ResNet18 classification model;
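The convergence criterion above (stop pre-training once accuracy no longer improves) can be sketched as a simple early-stopping check; the patience and tolerance values here are illustrative assumptions, not taken from the text:

```python
def has_converged(acc_history, patience=3, min_delta=1e-3):
    """Return True when the last `patience` accuracies show no
    improvement over the best accuracy seen before them."""
    if len(acc_history) <= patience:
        return False
    best_before = max(acc_history[:-patience])
    recent_best = max(acc_history[-patience:])
    return recent_best <= best_before + min_delta

# The pre-training loop would stop once accuracy plateaus around 78%:
history = [0.55, 0.66, 0.72, 0.77, 0.78, 0.779, 0.781, 0.780]
print(has_converged(history))
```

A real training loop would append the validation accuracy after each epoch and break out of the loop when this check fires.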

during training of the small-sample classification model, the small sample classification module replaces the number of final output channels of the pre-trained ResNet18 classification model with the number of cell categories to be identified, so that the conventional ResNet18 classification model becomes a cell small-sample classification model; cells in cell small-sample images are classified by training this model;

the training process of the cell small sample classification model is as follows:

step one, dividing the cell small-sample dataset obtained by the image conversion module into a training set and a test set by category in a 6 : 4 ratio, wherein the training set is further divided into a support set and a query set, and the cell categories in the training set and the test set do not overlap;

step two, randomly extracting n = 5 cell classes from the training set of step one, randomly extracting m = 5 cell pictures from each class as the support set, and selecting z = 16 cell pictures per class from the remaining pictures of the n = 5 classes as the query set; the support set, containing 5 × 5 = 25 cell pictures, and the query set, containing 16 × 5 = 80 cell pictures, are fed into the cell small-sample classification model for training, and the specific process is as follows:

the feature vector f(Xi) of each cell picture is obtained through the cell small-sample classification model, and the ReliefF-based confidence is computed as follows: after the 25 support-set cells obtained in step two are represented as vectors, for each cell picture in the support set, the Euclidean distances between its feature vector f(Xi) and the feature vectors of the other same-class cell pictures in the support set are computed and their sum is recorded as d; then the Euclidean distances between f(Xi) and the feature vectors of the 5 nearest different-class cell pictures are computed and their sum is recorded as D; the smaller d and the larger D, the closer the cell is to its own class and the farther it is from the other classes, so its confidence is higher, its picture quality is relatively better, and it is more informative; letting a = D - d, the a values of the support-set cells are normalized, i.e. from the 25 values, softmax normalization is applied to the 5 values within each class, yielding the confidence ai (i.e. the weight) of each cell in the support set;
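A minimal sketch of this ReliefF-style weighting, assuming the score a = D - d and plain Python lists as feature vectors (the function and variable names are illustrative, not from the source):

```python
import math

def euclid(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def confidences(support, n_miss=5):
    """support: list of (feature_vector, class_label) pairs.
    Returns one weight per sample: softmax over a = D - d within each class,
    where d sums distances to same-class samples and D sums distances to
    the n_miss nearest different-class samples."""
    scores = []
    for i, (x, y) in enumerate(support):
        same = [euclid(x, v) for j, (v, c) in enumerate(support) if j != i and c == y]
        diff = sorted(euclid(x, v) for v, c in support if c != y)[:n_miss]
        scores.append(sum(diff) - sum(same))  # a = D - d
    weights = [0.0] * len(support)
    for cls in {c for _, c in support}:
        idx = [i for i, (_, c) in enumerate(support) if c == cls]
        exps = [math.exp(scores[i]) for i in idx]
        for i, e in zip(idx, exps):
            weights[i] = e / sum(exps)  # softmax normalization within the class
    return weights
```

Because the softmax is taken per class, the weights of the 5 samples of each class sum to 1, which is what the weighted prototype in the next step requires.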

the cell small-sample classification model obtains the feature-vector representation of each cell image through ResNet18 (the feature extraction network); a weight parameter (the confidence computed above) is attached to each extracted cell-image feature map, with higher confidence giving higher weight, and the weighted sum of the feature vectors of each cell class in the support set is computed as that class's prototype (center point). In each training iteration, the Euclidean distances between the query set and the prototypes (center points) are computed, the cell small-sample image classification model is trained by distance measurement, and the network parameters are continuously optimized through the loss function below, so that during training same-class cell samples are drawn as close together as possible and different-class cell samples are pushed as far apart as possible.

The probability that each cell picture in the query set belongs to each of the cell classes randomly extracted in the current training round (the number of randomly extracted classes equals the number of cell classes to be identified) is computed as follows:

i.e. the softmax function is applied to the negative distances from the query-set feature vectors to the prototypes ck: p(y = k | X) = softmax(-d(f(X), ck)), where d(·, ·) is the Euclidean distance metric; the training process minimizes the loss function by stochastic gradient descent (SGD); the loss function is J = -log p(y = k | X); the cell category of the current cell picture is defined as the category with the maximum probability value;

where k is the true label of the cell training sample and ck is the prototype representation of cell class k, i.e. the weighted sum of all feature vectors of that class in the support set; in the cell small-sample classification task, Sk = {(X1, y1), …, (XN, yN)} denotes the cell dataset of class k, i.e. a support set of N labeled cells, where X is a cell (the raw data) and y is the category corresponding to cell X;
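A sketch of this distance-based classification and loss, assuming the standard prototypical-network form p(y = k | X) = softmax(-d(f(X), ck)) stated above:

```python
import math

def euclid(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def predict(query_vec, prototypes):
    """prototypes: {class_label: prototype_vector}.
    Returns (probabilities, predicted_label, loss), where
    p(y=k|X) = softmax(-d(f(X), c_k)) and loss(k) = -log p(y=k|X)."""
    labels = list(prototypes)
    neg_d = [-euclid(query_vec, prototypes[k]) for k in labels]
    m = max(neg_d)
    exps = [math.exp(s - m) for s in neg_d]  # numerically stable softmax
    z = sum(exps)
    probs = {k: e / z for k, e in zip(labels, exps)}
    pred = max(probs, key=probs.get)
    loss = lambda true_k: -math.log(probs[true_k])  # J = -log p(y=k|X)
    return probs, pred, loss

protos = {'a': [0.0, 0.0], 'b': [4.0, 4.0]}
probs, pred, loss = predict([0.5, 0.5], protos)
print(pred)  # 'a', the nearest prototype
```

During training, loss(k) for the true label would be averaged over the query set and back-propagated by SGD; at test time only the arg-max prediction is used.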

wherein 5 classes are randomly extracted from the training set each time; this is repeated 100 times, each time with 5 support samples and 16 query images per class, and the network parameters are adjusted through the loss function;
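The episodic sampling above (5-way, 5-shot support, 16 query images per class, repeated 100 times) can be sketched with the standard library; the data layout, a dict mapping class label to picture list, is an assumption for illustration:

```python
import random

def sample_episode(data, n_way=5, k_shot=5, n_query=16, rng=random):
    """data: {class_label: [pictures]}. Returns (support, query) lists of
    (picture, label) pairs for one training episode; support and query
    never share a picture."""
    classes = rng.sample(sorted(data), n_way)
    support, query = [], []
    for c in classes:
        pics = rng.sample(data[c], k_shot + n_query)
        support += [(p, c) for p in pics[:k_shot]]
        query += [(p, c) for p in pics[k_shot:]]
    return support, query

# 100 such episodes would drive the training loop:
data = {c: [f"{c}_{i}" for i in range(30)] for c in "abcdefg"}
s, q = sample_episode(data)
print(len(s), len(q))  # 25 80
```

Each episode yields the 25-picture support set and 80-picture query set described in step two.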

conventional image classification learns how to classify, whereas small-sample learning teaches the model how to learn to classify; at test time the classes have never appeared in training, and each class contains only 5 samples, which is not enough to train a conventional classification model to good effect.

And step three, inputting the test set into the cell small-sample classification model trained in step two; when the percentage of correctly classified test-set images no longer increases with further training, the trained cell small-sample classification model is obtained (its classification accuracy reaches about 70%).

The structure of the Resnet18 classification model constructed in the pre-training module is as follows:

the first layer is as follows: a convolutional layer with 3 input channels, 64 output channels, a 7 × 7 convolution kernel, stride 2, and padding 3; a BN layer; a ReLU activation layer; a max-pooling layer with a 3 × 3 window, stride 2, and padding 3;

first residual block:

wherein the second layer is, in order: a convolutional layer with 64 input channels, 64 output channels, a 3 × 3 convolution kernel, stride 1, and padding 1; a BN layer;

the third layer has the same structure as the second; the second-layer input and the third-layer output are added, passed through two consecutive ReLU activation layers, and fed to the fourth layer;

the fourth layer is, in order: a convolutional layer with 64 input channels, 64 output channels, a 3 × 3 convolution kernel, stride 1, and padding 1; a BN layer;

the fifth layer has the same structure as the fourth; the fourth-layer input and the fifth-layer output are added, passed through two consecutive ReLU activation layers, and fed into the second residual block;

second residual block:

the sixth layer is, in order: a convolutional layer whose input is the output of the first residual block, with 64 input channels, 128 output channels, a 3 × 3 convolution kernel, stride 2, and padding 1; a BN layer;

the seventh layer is, in order: a convolutional layer with 128 input channels, 128 output channels, a 3 × 3 convolution kernel, stride 1, and padding 1; a BN layer;

the input of the sixth convolutional layer and the output of the seventh convolutional layer are added, passed through two consecutive ReLU activation layers, and fed into the eighth convolutional layer;

the eighth layer is, in order: a convolutional layer with 128 input channels, 128 output channels, a 3 × 3 convolution kernel, stride 1, and padding 1; a BN layer;

the ninth convolutional layer: 128 input channels, 128 output channels, a 3 × 3 convolution kernel, stride 1, and padding 1; the eighth-layer input and the ninth-layer output are added, passed through two consecutive ReLU activation layers, and fed into the next residual block;

third residual block:

the tenth layer is, in order: a convolutional layer whose input is the output of the second residual block, with 128 input channels, 256 output channels, a 3 × 3 convolution kernel, stride 2, and padding 1; a BN layer;

the eleventh layer is, in order: a convolutional layer with 256 input channels, 256 output channels, a 3 × 3 convolution kernel, stride 1, and padding 1; a BN layer;

the input of the tenth convolutional layer and the output of the eleventh convolutional layer are added, passed through two consecutive ReLU activation layers, and fed into the twelfth convolutional layer;

the twelfth layer is, in order: a convolutional layer with 256 input channels, 256 output channels, a 3 × 3 convolution kernel, stride 1, and padding 1; a BN layer;

the thirteenth convolutional layer: 256 input channels, 256 output channels, a 3 × 3 convolution kernel, stride 1, and padding 1; the twelfth-layer input and the thirteenth-layer output are added, passed through two consecutive ReLU activation layers, and fed into the next residual block;

fourth residual block:

the fourteenth layer is, in order: a convolutional layer whose input is the output of the third residual block, with 256 input channels, 512 output channels, a 3 × 3 convolution kernel, stride 2, and padding 1; a BN layer;

the fifteenth layer is, in order: a convolutional layer with 512 input channels, 512 output channels, a 3 × 3 convolution kernel, stride 1, and padding 1; a BN layer;

the input of the fourteenth convolutional layer and the output of the fifteenth convolutional layer are added, passed through two consecutive ReLU activation layers, and fed into the sixteenth convolutional layer;

the sixteenth layer is, in order: a convolutional layer with 512 input channels, 512 output channels, a 3 × 3 convolution kernel, stride 1, and padding 1; a BN layer;

the seventeenth convolutional layer: 512 input channels, 512 output channels, a 3 × 3 convolution kernel, stride 1, and padding 1; the sixteenth-layer input and the seventeenth-layer output are added, passed through two consecutive ReLU activation layers, and fed to the next layer;

the last two layers are, in order: an average-pooling layer with a 7 × 7 kernel, stride 7, and no padding; and a fully connected layer, whose number of output channels is 64 during pre-training.
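As a sanity check on the layer sizes listed above, the spatial dimensions can be traced with the usual convolution size formula out = ⌊(in + 2p - k) / s⌋ + 1; padding 1 is assumed here for the max-pool (the standard ResNet-18 value) and a 224 × 224 input as produced by the preprocessing:

```python
def conv_out(size, k, s, p):
    """Output spatial size of a convolution or pooling layer."""
    return (size + 2 * p - k) // s + 1

size = 224
size = conv_out(size, k=7, s=2, p=3)   # first 7x7 conv, stride 2 -> 112
size = conv_out(size, k=3, s=2, p=1)   # max-pool (padding 1 assumed) -> 56
size = conv_out(size, k=3, s=2, p=1)   # 2nd residual block downsample -> 28
size = conv_out(size, k=3, s=2, p=1)   # 3rd residual block downsample -> 14
size = conv_out(size, k=3, s=2, p=1)   # 4th residual block downsample -> 7
size = conv_out(size, k=7, s=7, p=0)   # 7x7 average pool -> 1
print(size)  # 1
```

The 7 × 7 map entering the final average pool is exactly what the 7 × 7 pooling kernel with stride 7 collapses to a single 512-channel vector for the fully connected layer.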

The cell small-sample classification model constructed in the small sample classification module is obtained by replacing the output channels of the final fully connected layer of the ResNet18 classification model with the number of cell classes to be identified, which is 5 in this embodiment.

The image conversion module has the following structure:

the image conversion module comprises two generators (one converting data A into data B and the other converting data B into data A) and two discriminators (judging whether the results of converting data A into data B and data B into data A are real);

the generator network as a whole performs down-sampling (convolution) followed by up-sampling (deconvolution), and comprises 6 residual blocks and 1 output block, where:

the first convolution block is, in order: a mirror-padding layer with padding width 3; a convolutional layer with 3 input channels, 64 output channels, and a 7 × 7 convolution kernel; an InstanceNorm normalization layer normalizing each output channel; a ReLU layer;

the second convolution block is, in order: a convolutional layer with 64 input channels, 128 output channels, a 3 × 3 convolution kernel, stride 2, and padding 1; an InstanceNorm normalization layer normalizing each output channel; a ReLU layer;

the third convolution block is, in order: a convolutional layer with 128 input channels, 256 output channels, a 3 × 3 convolution kernel, stride 2, and padding 1; an InstanceNorm normalization layer normalizing each output channel; a ReLU layer;

the fourth convolution block consists of 6 residual blocks, where each residual block is: a first convolutional layer with 256 input channels, 64 output channels, and a 1 × 1 convolution kernel; a second layer, a ReLU layer; a third convolutional layer with 64 input channels, 64 output channels, and a 3 × 3 convolution kernel; a fourth layer, a ReLU layer; a fifth convolutional layer with 64 input channels, 256 output channels, and a 1 × 1 convolution kernel; the output of the fifth convolutional layer in each residual block is added to that residual block's input and then passed through a ReLU layer;

the fifth block is a deconvolution block, in order: a deconvolutional layer with 256 input channels, 128 output channels, a 3 × 3 convolution kernel, stride 2, input padding 1, and output padding 1; an InstanceNorm normalization layer normalizing each of the 128 channels; a ReLU layer;

the sixth block is a deconvolution block, in order: a deconvolutional layer with 128 input channels, 64 output channels, a 3 × 3 convolution kernel, stride 2, input padding 1, and output padding 1; an InstanceNorm normalization layer normalizing each of the 64 channels; a ReLU layer;

the seventh block is the output block, in order: a mirror-padding layer with padding width 3; a convolutional layer with 64 input channels, 3 output channels, and a 7 × 7 convolution kernel; a Tanh layer;

the discriminator reduces the number of channels to 1 through 5 convolutional layers, then average-pools to reduce the spatial size to 1 × 1, and finally performs a reshape operation; it comprises four convolutional layers and a fully connected layer, wherein:

the first convolutional layer is, in order: a convolutional layer with 3 input channels, 64 output channels, a 4 × 4 convolution kernel, stride 2, and padding 1; a LeakyReLU layer;

the second convolutional layer is, in order: a convolutional layer with 64 input channels, 128 output channels, a 4 × 4 convolution kernel, stride 2, and padding 1; an InstanceNorm normalization layer normalizing each of the 128 channels; a LeakyReLU layer;

the third convolutional layer is, in order: a convolutional layer with 128 input channels, 256 output channels, a 4 × 4 convolution kernel, stride 2, and padding 1; an InstanceNorm normalization layer normalizing each of the 256 channels; a LeakyReLU layer;

the fourth convolutional layer is, in order: a convolutional layer with 256 input channels, 512 output channels, a 4 × 4 convolution kernel, stride 1, and padding 1; an InstanceNorm normalization layer normalizing each of the 512 channels; a LeakyReLU layer;

the fifth layer is a fully connected layer: 512 input channels, 1 output channel, a 4 × 4 convolution kernel, and padding 1; followed by an average-pooling layer.

The training process of the image conversion network is as follows: the existing public image dataset maps is fed into the image conversion network for training, yielding the trained image conversion network.

The training data A is fed into the image conversion network, which consists of a conventional convolutional neural network followed by a deconvolutional network and outputs data B; data B is fed into the reverse image conversion network of the same structure, which outputs data A; a convolutional neural network is then used to discriminate between data A and data B.
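The A → B → A round trip described above is what the cycle-consistency loss of this kind of translation network enforces; a minimal numeric sketch, with toy "generators" standing in for the real convolutional networks:

```python
def cycle_consistency_loss(a_images, g_ab, g_ba):
    """Mean absolute error between each image and its A -> B -> A
    reconstruction; the full loss would also include the symmetric
    B -> A -> B direction and the adversarial terms."""
    total, count = 0.0, 0
    for a in a_images:
        recon = g_ba(g_ab(a))
        total += sum(abs(x - y) for x, y in zip(a, recon))
        count += len(a)
    return total / count

# Toy generators: a perfect inverse pair gives zero cycle loss.
g_ab = lambda img: [x + 1.0 for x in img]
g_ba = lambda img: [x - 1.0 for x in img]
print(cycle_consistency_loss([[0.0, 0.5], [1.0, 2.0]], g_ab, g_ba))  # 0.0
```

Minimizing this term pushes the two generators toward being inverses of each other, so that converting cell dataset A to B and back preserves the cell content.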

The image conversion module preprocesses the data set in the following specific process:

all pictures in the obtained cell image dataset (which consists of many single-cell pictures) are horizontally flipped, rotated, shifted, converted to gray-scale, color-jittered (adjusting image brightness, saturation, and contrast), scaled, and cropped (randomly cropping fixed-size patches), and all pictures are unified to a fixed size of 224 × 224;

specifically: the rotation is a random 90-degree rotation to the left or right; the scaling zooms out or in by 10%; the cropping randomly samples a portion of the original image; the shifting moves the image in the X or Y direction.
