Method and device for detecting appearance defects of liquid crystal display and storage medium

Document No. 9292, published 2021-09-17

1. A method for detecting appearance defects of a liquid crystal display is characterized by comprising the following steps:

importing an image set to be processed, performing image preprocessing on the images to be processed in the image set to be processed one by one, and collecting all the preprocessed images to obtain an image training set;

constructing a training model, and performing model optimization on the training model according to the image training set to obtain a detection model;

and importing an image to be detected, and detecting the image to be detected according to the detection model to obtain a detection result of whether the appearance of the liquid crystal display has defects.

2. The method for detecting the appearance defects of the liquid crystal display according to claim 1, wherein the process of performing the image preprocessing on the images to be processed in the image set to be processed one by one and collecting all the preprocessed images to obtain the image training set comprises the following steps:

performing image rotation processing on the images to be processed in the image set to be processed one by one to obtain a plurality of rotated images;

carrying out image mirror image processing on the rotated images one by one to obtain mirrored images corresponding to the rotated images;

adjusting the image brightness of each mirrored image one by one to obtain an adjusted image corresponding to each mirrored image;

performing data enhancement on each adjusted image one by one through Gaussian noise to obtain an enhanced image corresponding to each adjusted image;

and compressing the enhanced images one by one, and collecting all the compressed images to obtain an image training set.

3. The method for detecting the appearance defects of the liquid crystal display according to claim 1, wherein the process of performing model optimization on the training model according to the image training set to obtain the detection model comprises the following steps:

s1: initializing the cycle number to 0;

s2: inputting the image training set into the training model for training to obtain a feature vector;

s3: calculating a loss function of the feature vector by using a cross entropy function algorithm to obtain a loss value, and storing the loss value;

s4: updating the cycle number through a first equation to obtain the updated cycle number, wherein the first equation is as follows:

N′ = N + 1,

wherein N is the cycle number, and N' is the updated cycle number;

s5: judging whether the updated cycle number is greater than or equal to a preset cycle number, if so, executing S6; if not, go to S7;

s6: calculating pairwise difference values in all stored loss values to obtain differences of a plurality of loss values;

s7: updating parameters of the training model by using a stochastic gradient descent method to obtain an updated training model, taking the updated training model as the next training model, taking the updated cycle number as the next cycle number, and returning to the step S2;

s8: judging whether all the loss value differences satisfy a stopping condition, the stopping condition being that a preset number of consecutive loss value differences are each smaller than a preset loss value difference and that the last stored loss value is smaller than a preset loss threshold; if so, taking the training model as the detection model; if not, returning to S7.

4. The method for detecting the appearance defects of the liquid crystal display according to claim 3, wherein the process of inputting the image training set into the training model for training to obtain the feature vectors comprises:

the constructed training model comprises a plurality of sequentially arranged 3 × 3 convolutional layers, a plurality of sequentially arranged 2 × 2 maximum pooling layers and a plurality of sequentially arranged full-connection layers, every two layers from the second 3 × 3 convolutional layer to the last 3 × 3 convolutional layer are combined into a group to obtain a plurality of 3 × 3 convolutional layer groups, and all the full-connection layers are combined into a group to obtain a full-connection layer group;

inputting the image training set into the first 3 × 3 one-dimensional convolutional layer for first feature analysis to obtain a first image feature set;

inputting the first image feature set into the first 2 x 2 maximum pooling layer to perform first downsampling processing to obtain a second image feature set;

inputting the second image feature set into the first 3 × 3 convolution layer group for second feature analysis to obtain a third image feature set;

inputting the third image feature set into the last 2 × 2 maximum pooling layer for second downsampling processing to obtain a fourth image feature set;

inputting the fourth image feature set into the next 3 × 3 convolution layer group for third-time feature analysis to obtain a fifth image feature set;

and inputting the fifth image feature set into the fully-connected layer group for dimensionality reduction processing to obtain a feature vector.

5. The method for detecting the appearance defects of the liquid crystal display according to claim 4, wherein the process of inputting the image training set into the first 3 x 3 one-dimensional convolutional layer for the first feature analysis to obtain the first image feature set comprises:

inputting the image training set into the first 3 × 3 one-dimensional convolutional layer for first feature extraction to obtain a first image training set;

filling the images in the first image training set according to preset pixel values, and collecting the filled images to obtain a filled first image training set;

and respectively carrying out normalization processing on the images in the filled first image training set, and collecting the processed images to obtain a first image feature set.

6. The method for detecting the appearance defects of the liquid crystal display according to claim 5, wherein the step of inputting the second image feature set into the first 3 × 3 convolution layer group for the second feature analysis to obtain a third image feature set comprises:

inputting the second image feature set into the first 3 × 3 convolution layer group for second feature extraction to obtain a second image training set;

filling the images in the second image training set according to the preset pixel values, and collecting the filled images to obtain a filled second image training set;

and respectively carrying out normalization processing on the images in the filled second image training set, and collecting the processed images to obtain a third image feature set.

7. The method for detecting the appearance defects of the liquid crystal display according to claim 5, wherein the step of inputting the fourth image feature set into the next 3 × 3 convolution layer group for the third feature analysis to obtain a fifth image feature set comprises:

inputting the fourth image feature set into the next 3 × 3 convolutional layer group for third feature extraction to obtain a third image training set;

filling the images in the third image training set according to the preset pixel values, and collecting the filled images to obtain a filled third image training set;

and respectively carrying out normalization processing on the images in the filled third image training set, and collecting the processed images to obtain a fifth image feature set.

8. A device for detecting appearance defects of a liquid crystal display, characterized by comprising:

the image preprocessing module is used for importing an image set to be processed, performing image preprocessing on the images to be processed in the image set to be processed one by one, and collecting all preprocessed images to obtain an image training set;

the model optimization module is used for constructing a training model and carrying out model optimization on the training model according to the image training set to obtain a detection model;

and the detection result obtaining module is used for importing the image to be detected and detecting the image to be detected according to the detection model to obtain the detection result of whether the appearance of the liquid crystal display has defects.

9. A device for detecting appearance defects of a liquid crystal display, comprising a memory, a processor, and a computer program stored in the memory and operable on the processor, wherein the computer program, when executed by the processor, implements the method for detecting appearance defects of a liquid crystal display according to any one of claims 1 to 5.

10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method for detecting appearance defects of a liquid crystal display according to any one of claims 1 to 5.

Background

At present, machine-vision-based appearance defect detection has widely replaced manual visual inspection and is applied in many industrial fields. Appearance defect detection based on traditional image processing typically uses methods such as histogram equalization, filtering and denoising, and grayscale binarization to obtain simplified image information with the foreground separated from the background; algorithms such as mathematical morphology, Fourier transform, and Gabor transform, together with a machine learning model, then complete defect marking and detection. Although these traditional algorithms work well in certain specific applications, they still have many shortcomings. For example: the image processing steps are numerous and highly application-specific, algorithm iteration is slow, and generality is poor; moreover, highly specialized algorithm developers must manually extract features for each specific defect, so development costs are high.

Disclosure of Invention

The invention aims to overcome the above shortcomings of the prior art by providing a method and a device for detecting appearance defects of a liquid crystal display, and a storage medium.

The technical scheme for solving the technical problems is as follows: a method for detecting appearance defects of a liquid crystal display comprises the following steps:

importing an image set to be processed, performing image preprocessing on the images to be processed in the image set to be processed one by one, and collecting all the preprocessed images to obtain an image training set;

constructing a training model, and performing model optimization on the training model according to the image training set to obtain a detection model;

and importing an image to be detected, and detecting the image to be detected according to the detection model to obtain a detection result of whether the appearance of the liquid crystal display has defects.

Another technical solution of the present invention for solving the above technical problems is as follows: a liquid crystal display appearance defect detection device comprises:

the image preprocessing module is used for importing an image set to be processed, performing image preprocessing on the images to be processed in the image set to be processed one by one, and collecting all preprocessed images to obtain an image training set;

the model optimization module is used for constructing a training model and carrying out model optimization on the training model according to the image training set to obtain a detection model;

and the detection result obtaining module is used for leading in the image to be detected, detecting the image to be detected according to the detection model and obtaining the detection result of whether the appearance of the liquid crystal display has defects.

Another technical solution of the present invention for solving the above technical problems is as follows: a liquid crystal display appearance defect detection device comprises a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the liquid crystal display appearance defect detection method described above is implemented.

Another technical solution of the present invention for solving the above technical problems is as follows: a computer-readable storage medium storing a computer program which, when executed by a processor, implements the liquid crystal display appearance defect detection method described above.

The invention has the beneficial effects that: the image training set is obtained by preprocessing the images to be processed in the image set to be processed one by one; the detection model is obtained by optimizing the training model with the image training set; and the detection model then detects the image to be detected to determine whether the appearance of the liquid crystal display has defects. This avoids complex, manually designed algorithm pipelines, greatly reduces development difficulty, and achieves high robustness and precision.

Drawings

Fig. 1 is a schematic flow chart of a method for detecting an appearance defect of a liquid crystal display according to an embodiment of the present invention;

fig. 2 is a block diagram of an apparatus for detecting an apparent defect of a liquid crystal display according to an embodiment of the present invention.

Detailed Description

The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.

Fig. 1 is a schematic flow chart of a method for detecting an appearance defect of a liquid crystal display according to an embodiment of the present invention.

As shown in fig. 1, a method for detecting an appearance defect of a liquid crystal display includes the following steps:

importing an image set to be processed, performing image preprocessing on the images to be processed in the image set to be processed one by one, and collecting all the preprocessed images to obtain an image training set;

constructing a training model, and performing model optimization on the training model according to the image training set to obtain a detection model;

and importing an image to be detected, and detecting the image to be detected according to the detection model to obtain a detection result of whether the appearance of the liquid crystal display has defects.

It should be understood that the images to be processed and the images to be detected are both captured by a camera, with the defect regions cropped out of the full images.

It should be understood that the image set to be processed is the image set used to optimize the model, while the image to be detected is used to test the detection model; both consist of processed images.

It will be appreciated that the image set to be processed comprises 2 million images with real defects and 8 million images without defects but with interference patterns (such as negligible smudging).

It will be appreciated that, in order to reduce the computational cost of the model, each image in the training set is first bilinearly interpolated to a size of 128 × 128 pixels and then fed into the training model as input.
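For illustration, the bilinear interpolation to 128 × 128 can be sketched in NumPy; the helper below is a hypothetical implementation, not code from the patent:

```python
import numpy as np

def bilinear_resize(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Resize a 2-D grayscale image with bilinear interpolation."""
    in_h, in_w = img.shape
    # Map each output pixel back to a (possibly fractional) input coordinate.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]          # vertical interpolation weights
    wx = (xs - x0)[None, :]          # horizontal interpolation weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# Shrink an arbitrary camera crop to the 128 x 128 model input size.
crop = np.random.default_rng(0).integers(0, 256, (480, 640)).astype(float)
model_input = bilinear_resize(crop, 128, 128)
```

In practice a library routine (e.g. an image toolkit's resize with a bilinear flag) would be used; the sketch only shows the interpolation itself.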

In this embodiment, the image training set is obtained by preprocessing the images to be processed in the image set to be processed one by one, the detection model is obtained by optimizing the training model with the image training set, and the detection model then detects the image to be detected to determine whether the appearance of the liquid crystal display has defects. This avoids complex, manually designed algorithm pipelines, greatly reduces development difficulty, and provides extremely high robustness and precision.

Optionally, as an embodiment of the present invention, the image preprocessing is performed on the images to be processed in the image set to be processed one by one, and a process of collecting all the preprocessed images to obtain an image training set includes:

performing image rotation processing on the images to be processed in the image set to be processed one by one to obtain a plurality of rotated images;

carrying out image mirror image processing on the rotated images one by one to obtain mirrored images corresponding to the rotated images;

adjusting the image brightness of each mirrored image one by one to obtain an adjusted image corresponding to each mirrored image;

performing data enhancement on each adjusted image one by one through Gaussian noise to obtain an enhanced image corresponding to each adjusted image;

and compressing the enhanced images one by one, and collecting all the compressed images to obtain an image training set.

It should be understood that performing data enhancement on each adjusted image through Gaussian noise means adding to the image random numbers drawn from a Gaussian distribution, with pixel values kept in the range 0 to 255. Adding this noise interference during data enhancement helps the model classify correctly even under interference.

Specifically, to prevent overfitting of the model, the invention applies a data enhancement method (i.e., the data enhancement algorithm): image rotation (three angles: 90, 180, and 270 degrees), image mirroring (horizontal and vertical), image brightness increase, image brightness reduction, addition of Gaussian noise, and JPEG compression that changes the image quality. This increases the number of training samples (i.e., the images to be processed in the image set to be processed) by a factor of 8. To alleviate the imbalance between the numbers of non-defective and defective samples, training sampling probabilities are set for the non-defective samples and for each type of defective sample, with the probabilities over all sample sets summing to 1. Larger sample sets are assigned smaller probabilities and smaller sample sets larger probabilities, which increases the weight of the under-represented sample sets and improves recognition of rare defect types.
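One plausible reading of the eight-fold enhancement above can be sketched in NumPy; the function and parameter values (brightness factors, noise standard deviation) are illustrative assumptions, and JPEG re-compression is omitted because it requires an image codec:

```python
import numpy as np

def augment(img: np.ndarray, rng: np.random.Generator) -> list:
    """Return one plausible 8-variant augmentation set for a grayscale image.

    Rotations (90/180/270 degrees), horizontal/vertical mirrors, brightness
    up/down, and Gaussian noise, as described in the text; the brightness
    factors and noise sigma below are hypothetical choices.
    """
    return [
        np.rot90(img, 1), np.rot90(img, 2), np.rot90(img, 3),   # rotations
        np.fliplr(img), np.flipud(img),                          # mirrors
        np.clip(img * 1.2, 0, 255),                              # brighter
        np.clip(img * 0.8, 0, 255),                              # darker
        np.clip(img + rng.normal(0, 10, img.shape), 0, 255),     # Gaussian noise
    ]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (128, 128)).astype(float)
aug = augment(img, rng)
```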

In this embodiment, the image training set is obtained by preprocessing the images to be processed one by one, which improves recognition of rare defect types, provides data for subsequent processing, prevents overfitting of the model, and relaxes the image acquisition requirements. Complex, manually designed algorithm pipelines are avoided, development difficulty is greatly reduced, and the method achieves extremely high robustness and precision.

Optionally, as an embodiment of the present invention, the performing model optimization on the training model according to the image training set to obtain a detection model includes:

s1: initializing the cycle number to 0;

s2: inputting the image training set into the training model for training to obtain a feature vector;

s3: calculating a loss function of the feature vector by using a cross entropy function algorithm to obtain a loss value, and storing the loss value;

s4: updating the cycle number through a first equation to obtain the updated cycle number, wherein the first equation is as follows:

N′ = N + 1,

wherein N is the cycle number, and N' is the updated cycle number;

s5: judging whether the updated cycle number is greater than or equal to a preset cycle number, if so, executing S6; if not, go to S7;

s6: calculating pairwise difference values in all stored loss values to obtain differences of a plurality of loss values;

s7: updating parameters of the training model by using a stochastic gradient descent method to obtain an updated training model, taking the updated training model as the next training model, taking the updated cycle number as the next cycle number, and returning to the step S2;

s8: judging whether all the loss value differences satisfy a stopping condition, the stopping condition being that a preset number of consecutive loss value differences are each smaller than a preset loss value difference and that the last stored loss value is smaller than a preset loss threshold; if so, taking the training model as the detection model; if not, returning to S7.
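The control flow of steps S1 to S8 can be sketched as follows. The `train_step` callback, the threshold values, and the reading of S6 as differences between successive stored losses are assumptions for illustration; the real model update inside S7 is stood in by the callback:

```python
def optimise(train_step, preset_cycles=15, window=3, diff_eps=1e-3, loss_eps=0.05):
    """Control flow of steps S1-S8 with a caller-supplied train_step().

    train_step(n) runs one training pass (S2/S3/S7) and returns its loss.
    Stops once `window` consecutive loss differences are below diff_eps
    and the last stored loss is below loss_eps.  Never returns if the
    losses do not converge -- this is only a sketch of the loop.
    """
    losses = []                       # S3: stored loss values
    n = 0                             # S1: initialise the cycle number
    while True:
        losses.append(train_step(n))  # S2 + S3
        n += 1                        # S4: N' = N + 1
        if n >= preset_cycles:        # S5
            # S6: differences between successive stored loss values
            diffs = [abs(a - b) for a, b in zip(losses, losses[1:])]
            # S8: stopping condition
            if all(d < diff_eps for d in diffs[-window:]) and losses[-1] < loss_eps:
                return n, losses      # training model becomes the detection model
        # S7: parameters updated inside train_step; loop back to S2

# A dummy, exponentially decaying loss stands in for real training.
cycles, history = optimise(lambda n: 0.5 * (0.5 ** n))
```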

Preferably, the preset number of cycles is generally set to 15.

It should be understood that the feature vector contains 6 values, one per class.

It should be appreciated that the invention selects the cross-entropy function as the loss function of the training model. The cross-entropy function is used at the end of training to evaluate how close the output of the model (i.e., the training model) is to the class label. The training model finally outputs a one-dimensional vector of 6 values, each value representing a classification confidence. The label of an input image is known and one-hot encoded: the unit corresponding to the true class is 1 and the others are 0; for example, the label [1,0,0,0,0,0] indicates that the image belongs to class 1. The 6 units correspond to the 6 image sample classes; a softmax regression function converts the 6 values into estimates of the class membership probabilities, these estimates are the classification confidences of the classes, and the class with the highest confidence is the classification result.
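A minimal NumPy sketch of the softmax confidence and cross-entropy computation described above; the logit values are hypothetical:

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert the 6 raw output values into class membership probabilities."""
    z = logits - logits.max()          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(probs: np.ndarray, one_hot: np.ndarray) -> float:
    """Closeness of the model output to the one-hot class label."""
    return float(-(one_hot * np.log(probs + 1e-12)).sum())

logits = np.array([4.0, 1.0, 0.5, 0.2, -1.0, -2.0])   # hypothetical model output
label = np.array([1, 0, 0, 0, 0, 0])                   # image belongs to class 1
probs = softmax(logits)
predicted_class = int(np.argmax(probs))                # highest confidence wins
loss = cross_entropy(probs, label)
```

A confident, correct prediction gives a small loss; a confident, wrong one gives a large loss, which is what drives training.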

It should be appreciated that, at the same time, L2 regularization is introduced: a weight decay term is added to the loss function to penalize large weights during training and avoid overfitting.

Specifically, during training, the invention adopts the stochastic gradient descent method. Stochastic gradient descent is an optimization algorithm: there is a gradient between the unoptimized loss and the optimal loss, and gradient descent optimizes the loss function along the direction of steepest descent so as to find the optimal parameters and minimize the loss value. The weight parameters are updated with mini-batches of 50 samples. Momentum and learning rate decay are also incorporated into the stochastic gradient descent optimizer. In plain stochastic gradient descent, every step has the same size and follows the direction of the current gradient, which makes convergence unstable. Momentum simulates the inertia of a moving object: each parameter update retains part of the previous update direction, which gives the optimizer some ability to escape local optima and drives the loss toward the global optimum. Learning rate decay is added because the step size should shrink as optimization progresses rather than remaining constant; with decay, the loss can be optimized closer to the global optimum.
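Momentum and learning rate decay can be sketched on a one-dimensional quadratic loss; the hyperparameter values are illustrative assumptions, and mini-batch sampling is omitted:

```python
def sgd_momentum_decay(grad, w0=5.0, lr0=0.1, momentum=0.9, decay=0.01, steps=200):
    """Minimise a 1-D loss given its gradient function grad(w).

    Each update keeps part of the previous update direction (momentum) and
    shrinks the step size over time (learning-rate decay).
    """
    w, v = w0, 0.0
    for t in range(steps):
        lr = lr0 / (1.0 + decay * t)       # decaying step size
        v = momentum * v - lr * grad(w)    # retain part of the last direction
        w += v
    return w

# Loss L(w) = w**2 has gradient 2w and its minimum at w = 0.
w_star = sgd_momentum_decay(lambda w: 2.0 * w)
```

The momentum term smooths out the step-to-step oscillation the text describes, and the `1/(1 + decay*t)` schedule is one common choice of decay.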

In this embodiment, the detection model is obtained by optimizing the training model with the image training set, which improves detection accuracy, avoids complex manually designed algorithm pipelines, greatly reduces development difficulty, and achieves extremely high robustness and precision.

Optionally, as an embodiment of the present invention, the process of inputting the image training set to the training model for training to obtain the feature vector includes:

the constructed training model comprises a plurality of sequentially arranged 3 × 3 convolutional layers, a plurality of sequentially arranged 2 × 2 maximum pooling layers and a plurality of sequentially arranged full-connection layers, every two layers from the second 3 × 3 convolutional layer to the last 3 × 3 convolutional layer are combined into a group to obtain a plurality of 3 × 3 convolutional layer groups, and all the full-connection layers are combined into a group to obtain a full-connection layer group;

inputting the image training set into the first 3 × 3 one-dimensional convolutional layer for first feature analysis to obtain a first image feature set;

inputting the first image feature set into the first 2 x 2 maximum pooling layer to perform first downsampling processing to obtain a second image feature set;

inputting the second image feature set into the first 3 × 3 convolution layer group for second feature analysis to obtain a third image feature set;

inputting the third image feature set into the last 2 × 2 maximum pooling layer for second downsampling processing to obtain a fourth image feature set;

inputting the fourth image feature set into the next 3 × 3 convolution layer group for third-time feature analysis to obtain a fifth image feature set;

and inputting the fifth image feature set into the fully-connected layer group for dimensionality reduction processing to obtain a feature vector.

Preferably, the number of the 3 × 3 convolutional layers may be 5, the number of the maximum pooling layers may be 2, and the number of the fully-connected layers may be 3.

It should be understood that the network structure of the invention has 11 layers: 1 input layer, 5 convolutional layers, 2 max pooling layers, 2 fully-connected layers, and 1 output layer. An image of size 128 × 128 pixels (i.e., an image in the image training set) is taken as the model input; after the 5 convolutional layers, 2 max pooling layers, and 2 fully-connected layers, the output layer outputs 6 neurons, each representing the membership probability of the corresponding class.

It should be understood that the first, second, and third feature analyses differ only in the data they process; the processing procedure is the same.

It should be understood that the first and second downsampling processes likewise differ only in the data they process; the processing procedure is the same.

Specifically, a simplified representation of the network is: input layer - convolutional layer - activation layer - pooling layer - fully-connected layer.

Input layer: three-channel or single-channel images.

Convolutional layers: a convolutional layer applies a set of filters, each connected only to a small region of the previous layer's output, called the receptive field. A filter is typically a matrix that is learned during training, with a size such as 3 × 3 or 5 × 5. A parameter-sharing scheme is applied in the convolution operation: a filter is convolved over the spatial dimensions of the entire input image to extract one feature.

Activation layer: ReLU (Rectified Linear Unit) is the most widely used activation function; it adds a non-linear transformation to the output response of a convolutional or fully-connected layer. ReLU effectively prevents gradient saturation and accelerates convergence of training while preserving the original values to the greatest extent, and the ReLU activation function has been shown to work better than the traditional sigmoid activation function.

Pooling layer: the pooling layer performs non-linear downsampling along the two spatial dimensions of the image, reducing the spatial size of the output. Its purpose is to reduce the number of network parameters and the computational cost. A pooling layer is typically placed between two consecutive convolutional layers; the most common pooling strategy is max pooling, which outputs the maximum value from each neighborhood of the input feature map.
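A minimal NumPy sketch of the 2 × 2 max pooling with stride 2 used here:

```python
import numpy as np

def max_pool_2x2(fmap: np.ndarray) -> np.ndarray:
    """2 x 2 max pooling with stride 2: each output value is the maximum of a
    2 x 2 neighbourhood, halving both spatial dimensions."""
    h, w = fmap.shape
    # Group pixels into 2x2 blocks, then take the maximum within each block.
    return fmap[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([[1, 2, 5, 6],
                 [3, 4, 7, 8],
                 [9, 1, 2, 3],
                 [4, 5, 6, 7]], dtype=float)
pooled = max_pool_2x2(fmap)   # [[4, 8], [9, 7]]
```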

Fully-connected layer: the fully-connected layers are the last part of the neural network. All neurons of a fully-connected layer are connected to all units of the previous layer. The last fully-connected layer generates the output of the whole network with K neurons, where K equals the number of class labels. With the help of the softmax function, each of the K output values represents the probability of the corresponding label.

It should be appreciated that the pooling strategy employed in all pooling layers is maximum pooling, which is robust to small distortions.

It should be understood that a dropout strategy with probability 0.5 is applied to the two fully-connected layers before the output layer. Dropout randomly disables some neurons of a fully-connected layer with a given probability so that they do not participate in that forward pass. A dropout probability of 0.5 means that each neuron is kept with probability 50%; the dropped neurons do not contribute to the result, which also helps avoid overfitting.
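Dropout with probability 0.5 can be sketched as follows; the patent does not specify a rescaling scheme, so the common "inverted dropout" scaling is assumed here:

```python
import numpy as np

def dropout(x: np.ndarray, p: float, rng: np.random.Generator) -> np.ndarray:
    """Randomly zero each unit with probability p during training.

    Surviving activations are rescaled by 1/(1-p) ("inverted dropout") so the
    expected output matches inference, where dropout is switched off.
    """
    keep = rng.random(x.shape) >= p     # each unit kept with probability 1-p
    return x * keep / (1.0 - p)

rng = np.random.default_rng(0)
acts = np.ones(1024)                    # activations of an FC(1024) layer
dropped = dropout(acts, 0.5, rng)
```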

Specifically, the global framework of the present invention can be simply expressed as follows:

C(32,3,3)-S(2,2,2)-C(64,3,3)-C(64,3,3)-S(2,2,2)-C(128,3,3)-C(128,3,3)-S(2,2,2)-FC(1024)-FC(1024)-FC(6),

wherein C(n,3,3) denotes a convolutional layer with n filter kernels of size 3 × 3; after a feature map passes through C(n,3,3), its number of channels becomes n and its width and height are unchanged. S(2,2,2) denotes a 2 × 2 downsampling pooling layer (i.e., the max pooling layer) with a stride of 2 in both width and height; after a feature map passes through S(2,2,2), its number of channels is unchanged and its width and height are halved. FC(n) denotes a fully-connected layer with n neurons that performs dimensionality reduction; after FC(n), the three-dimensional feature map (width × height × channels) is transformed into a one-dimensional feature vector of length n;

Therefore, the above expression can be interpreted as follows. For a single-channel input image of size 128 × 128 (i.e., an image in the image training set), C(32,3,3) performs a 3 × 3 convolution operation that extracts local feature information of the image and yields a feature map of size 128 × 128 with 32 channels (i.e., an image in the first image feature set). S(2,2,2) then reduces the width and height of the image (i.e., the image in the first image feature set) to 0.5 times their original values; this downsampling step reduces the number of parameters and prevents the model overfitting that an excessive parameter count would cause. C(64,3,3) performs a 3 × 3 convolution operation that extracts local feature information of the image (i.e., the image in the second image feature set), and the number of channels is increased to 64; increasing the number of channels means that richer local feature information, such as color, texture and contour, is extracted by the convolution operation. A second C(64,3,3) layer leaves the width, height and number of channels unchanged and improves the nonlinear mapping capability of the model. After the pooling downsampling of the next S(2,2,2), the width and height of the image (i.e., the image in the third image feature set) are again reduced by a factor of 0.5 while the number of channels is unchanged, further reducing the parameter count; at this point the feature map (i.e., the image in the fourth image feature set) has width and height 32 and 64 channels. C(128,3,3) leaves the width and height unchanged and increases the number of channels to 128, so the extracted local feature information is richer still, and a second C(128,3,3) layer again leaves the width, height and number of channels unchanged while improving the nonlinear mapping capability of the model. After the pooling downsampling of the final S(2,2,2), the width and height of the image (i.e., the image in the fourth image feature set) are again reduced by a factor of 0.5 with the number of channels unchanged, further reducing the parameter count; at this point the feature map (i.e., the image in the fifth image feature set) has width and height 16 and 128 channels. FC(1024) maps the 128 × 16 × 16 three-dimensional feature map (i.e., the image in the fifth image feature set) into a one-dimensional vector of length 1024; since every neuron in the fully connected layer is connected to every unit in the next layer, the high-dimensional feature map is rapidly reduced to a compact vector. Finally, FC(6) maps the 1024-dimensional vector into a 6-dimensional vector corresponding to the 6 classifications.
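The shape bookkeeping in the walkthrough above can be checked with a short sketch. This is an illustrative trace only: the helper functions `conv` and `pool` are our own shorthand for the C and S layers, and size-preserving ("same") zero padding through each convolution is assumed, consistent with the filling strategy this document describes.

```python
# Trace feature-map shapes through the network described above.
# Shapes are (channels, height, width); zero padding keeps H and W
# unchanged through each 3x3 convolution.

def conv(shape, out_channels):
    """C(out_channels, 3, 3): 3x3 convolution with size-preserving padding."""
    _, h, w = shape
    return (out_channels, h, w)

def pool(shape):
    """S(2, 2, 2): 2x2 pooling with stride 2 halves height and width."""
    c, h, w = shape
    return (c, h // 2, w // 2)

shape = (1, 128, 128)       # single-channel input image
shape = conv(shape, 32)     # C(32,3,3)  -> (32, 128, 128)
shape = pool(shape)         # S(2,2,2)   -> (32, 64, 64)
shape = conv(shape, 64)     # C(64,3,3)  -> (64, 64, 64)
shape = conv(shape, 64)     # C(64,3,3)  -> unchanged
shape = pool(shape)         # S(2,2,2)   -> (64, 32, 32)
shape = conv(shape, 128)    # C(128,3,3) -> (128, 32, 32)
shape = conv(shape, 128)    # C(128,3,3) -> unchanged
shape = pool(shape)         # S(2,2,2)   -> (128, 16, 16)

flattened = shape[0] * shape[1] * shape[2]   # 128 * 16 * 16 = 32768
print(shape, flattened)                      # (128, 16, 16) 32768
# FC(1024) maps the 32768-dim vector to 1024; FC(6) maps 1024 to 6 classes.
```

The trace ends at a 128 × 16 × 16 feature map, matching the dimensions stated before the fully connected layers.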

In this embodiment, the image training set is input into the training model to obtain the feature vectors. During training on a large amount of data, the filter parameters are updated automatically, so that well-generalizing features can be learned from the input images. The large parameter count gives the model a strong fitting capacity on the training samples and better universality; at the same time, useful robust features are extracted automatically, the model runs extremely fast, and a satisfactory defect detection accuracy can be achieved.

Optionally, as an embodiment of the present invention, the process of inputting the image training set into the first 3 × 3 one-dimensional convolutional layer for performing the first feature analysis to obtain a first image feature set includes:

inputting the image training set into the first 3 × 3 one-dimensional convolutional layer for first feature extraction to obtain a first image training set;

filling the images in the first image training set according to preset pixel values, and collecting the filled images to obtain a filled first image training set;

and respectively carrying out normalization processing on the images in the filled first image training set, and collecting the processed images to obtain a first image feature set.

Preferably, the preset pixel value may be 0.

It should be understood that, since the convolution operation cannot be performed at the boundary of the image, the size of the image after convolution is smaller than that before convolution. The present patent therefore adopts a filling strategy: the boundary of the feature map after each convolution operation (i.e., the image in the first image training set) is filled with zeros, so that the sizes of the feature maps before and after convolution remain unchanged.
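The effect of the zero-filling strategy can be illustrated with a minimal sketch (the functions `zero_pad` and `conv3x3` are hypothetical helpers written for this illustration, not part of the patent):

```python
# Zero-pad a feature map by one pixel on each side so that a 3x3
# convolution produces an output of the same size as its input.

def zero_pad(image, p=1):
    """Surround a 2D list with a border of p zeros."""
    h, w = len(image), len(image[0])
    padded = [[0.0] * (w + 2 * p) for _ in range(h + 2 * p)]
    for i in range(h):
        for j in range(w):
            padded[i + p][j + p] = image[i][j]
    return padded

def conv3x3(image, kernel):
    """Unpadded ("valid") 3x3 convolution: output is (h-2) x (w-2)."""
    h, w = len(image), len(image[0])
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(3) for b in range(3))
             for j in range(w - 2)]
            for i in range(h - 2)]

image = [[float(i * 5 + j) for j in range(5)] for i in range(5)]
kernel = [[0.0, 0.0, 0.0],
          [0.0, 1.0, 0.0],   # identity kernel, for easy checking
          [0.0, 0.0, 0.0]]

without_pad = conv3x3(image, kernel)          # shrinks from 5x5 to 3x3
with_pad = conv3x3(zero_pad(image), kernel)   # stays 5x5
print(len(without_pad), len(with_pad))        # 3 5
```

With the identity kernel, the padded convolution reproduces the input exactly, confirming that padding preserves both size and interior values.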

It should be understood that, in order to speed up the training process, the present invention also employs batch normalization after each convolutional layer, normalizing the output of the convolutional layer to a distribution with a mean of 0 and a variance of 1, which alleviates the problem of internal covariate shift (the shift of the distribution of internal layer activations during training).
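The normalization step can be sketched as follows. This is a minimal illustration of normalizing a batch of activations to zero mean and unit variance; it omits the learned scale and shift parameters of a full batch normalization layer.

```python
# Normalize a batch of activations to mean 0 and variance 1, as in the
# batch normalization step described above (simplified: no learned
# scale/shift parameters).

def batch_normalize(values, eps=1e-8):
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return [(v - mean) / (var + eps) ** 0.5 for v in values]

activations = [2.0, 4.0, 6.0, 8.0]      # example convolutional outputs
normalized = batch_normalize(activations)

mean = sum(normalized) / len(normalized)
var = sum(v ** 2 for v in normalized) / len(normalized)
print(round(mean, 6), round(var, 6))    # close to 0.0 and 1.0
```

Whatever the scale of the incoming activations, the normalized batch always has (approximately) zero mean and unit variance, which stabilizes the distribution each subsequent layer sees.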

In the above embodiment, the image training set is input into the first 3 × 3 one-dimensional convolutional layer for first feature extraction to obtain the first image training set; the images in the first image training set are filled according to the preset pixel value and the filled images are collected to obtain the filled first image training set; the images in the filled first image training set are normalized and the processed images are collected to obtain the first image feature set. This solves the internal covariate shift problem and, at the same time, overcomes the drawbacks of conventional algorithms, such as complicated design, a high workload for research and development personnel and stringent requirements on image acquisition conditions, thereby greatly reducing the difficulty of algorithm research and development.

Optionally, as an embodiment of the present invention, the step of inputting the second image feature set into the first 3 × 3 convolutional layer group for the second feature analysis to obtain a third image feature set includes:

inputting the second image feature set into the first 3 × 3 convolution layer group for second feature extraction to obtain a second image training set;

filling the images in the second image training set according to the preset pixel values, and collecting the filled images to obtain a filled second image training set;

and respectively carrying out normalization processing on the images in the filled second image training set, and collecting the processed images to obtain a third image feature set.

Preferably, the preset pixel value may be 0.

In the above embodiment, the second image feature set is input into the first 3 × 3 convolutional layer group for the second feature analysis to obtain the third image feature set. This solves the internal covariate shift problem and, at the same time, overcomes the drawbacks of conventional algorithms, such as complicated design, a high workload for research and development personnel and stringent requirements on image acquisition conditions, thereby greatly reducing the difficulty of algorithm research and development.

Optionally, as an embodiment of the present invention, the step of inputting the fourth image feature set into the next 3 × 3 convolutional layer group for third feature analysis to obtain a fifth image feature set includes:

inputting the fourth image feature set into the next 3 × 3 convolutional layer group for third feature extraction to obtain a third image training set;

filling the images in the third image training set according to the preset pixel values, and collecting the filled images to obtain a filled third image training set;

and respectively carrying out normalization processing on the images in the filled third image training set, and collecting the processed images to obtain a fifth image feature set.

Preferably, the preset pixel value may be 0.

In the above embodiment, the fourth image feature set is input into the next 3 × 3 convolutional layer group for the third feature analysis to obtain the fifth image feature set. This solves the internal covariate shift problem and, at the same time, overcomes the drawbacks of conventional algorithms, such as complicated design, a high workload for research and development personnel and stringent requirements on image acquisition conditions, thereby greatly reducing the difficulty of algorithm research and development.

Optionally, as an embodiment of the present invention, verification is performed on a verification set of 1000 defect images. Table one shows the result of comparing the method of the present invention with VGG_16, a method that extracts features by conventional image processing, and a method that combines SIFT features with an artificial neural network classifier. The true positive rate (TPR) denotes the percentage of defective images correctly classified as defective, and the true negative rate (TNR) denotes the percentage of non-defective images correctly classified as non-defective.

Table one:

As can be seen from table one, the method of the present invention outperforms the other methods: its overall detection accuracy reaches 99.4%, higher than that of VGG_16, the method that extracts features by conventional image processing, and the method that combines SIFT features with artificial neural network classification.
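The verification metrics can be computed as follows. The counts below are made-up examples chosen only to illustrate the arithmetic; they are not the data behind table one.

```python
# Compute TPR, TNR and overall accuracy from classification counts
# (hypothetical numbers, not the patent's verification results).

def rates(tp, fn, tn, fp):
    tpr = tp / (tp + fn)              # defective images correctly detected
    tnr = tn / (tn + fp)              # non-defective images correctly passed
    acc = (tp + tn) / (tp + fn + tn + fp)
    return tpr, tnr, acc

# Hypothetical run on 1000 images: 500 defective, 500 non-defective.
tpr, tnr, acc = rates(tp=497, fn=3, tn=497, fp=3)
print(tpr, tnr, acc)    # 0.994 0.994 0.994
```

With these illustrative counts, 3 defective images are missed and 3 non-defective images are falsely flagged, giving 99.4% on all three metrics.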

Fig. 2 is a block diagram of an apparatus for detecting an apparent defect of a liquid crystal display according to an embodiment of the present invention.

Optionally, as another embodiment of the present invention, as shown in fig. 2, an apparatus for detecting an appearance defect of a liquid crystal display includes:

the image preprocessing module is used for importing an image set to be processed, performing image preprocessing on the images to be processed in the image set to be processed one by one, and collecting all preprocessed images to obtain an image training set;

the model optimization module is used for constructing a training model and carrying out model optimization on the training model according to the image training set to obtain a detection model;

and the detection result obtaining module is used for importing the image to be detected and detecting the image to be detected according to the detection model to obtain the detection result of whether the appearance of the liquid crystal display has defects.

Optionally, another embodiment of the present invention provides an apparatus for detecting an appearance defect of a liquid crystal display, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the method for detecting an appearance defect of a liquid crystal display is implemented. The device may be a computer or the like.

Alternatively, another embodiment of the present invention provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the above method for detecting appearance defects of a liquid crystal display is implemented.

It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.

It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.

Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.

In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
