Solder joint detection method based on convolutional neural network


1. A solder joint detection method based on a convolutional neural network, characterized by comprising the following steps:

step 1, collecting a PCB solder joint data set, preprocessing the data, and then labeling and storing the data;

step 2, establishing a convolutional neural network model based on computer vision;

and step 3, training the established convolutional neural network model with the solder joint training set, and testing the trained model with the solder joint test set.

2. The solder joint detection method based on a convolutional neural network as claimed in claim 1, wherein step 1 is implemented as follows:

1-1, data acquisition, namely, firstly acquiring original images of a printed circuit board (PCB) with automatic optical inspection (AOI) equipment; for each PCB, a camera captures a plurality of local fields of view, which are stitched into a complete image by an image stitching method;

1-2, data preprocessing, wherein, because the original image is too large and the solder joints are too small, the image is uniformly divided into 416 × 416 tiles to ensure that solder joint positions can be accurately detected and localized;

1-3, data labeling, namely, manually annotating the preprocessed images with the labeling tool LabelImg and marking each solder joint target with a rectangular box, because training a convolutional neural network model requires a large amount of image data;

1-4, data storage; after labeling with the LabelImg tool, an xml file is generated, whose key information comprises the target name and the bounding box coordinates xmin, xmax, ymin and ymax; the labeled data are stored in the VOC format, with one label file per image, images stored as jpg and label files stored as xml.

3. The solder joint detection method based on a convolutional neural network as claimed in claim 2, wherein step 2 is implemented as follows:

detecting and localizing solder joints by using the convolutional neural network model as the solder joint detection model; because solder joint detection is a small-scale target detection task, the model extends the existing YOLOv3 with multi-scale detection and uses a Darknet53 network, improved on the basis of a residual neural network, as the feature extractor, so that the convolutional neural network model can better detect small targets.

4. The solder joint detection method based on a convolutional neural network as claimed in claim 3, wherein the specific steps of step 3 are as follows:

3-1, dataset division; the collected data set is divided into a training set and a test set at a ratio of 7:3;

3-2, network input and data augmentation; because there are 5 downsampling operations in the convolutional neural network model, i.e. a total stride of 2^5 = 32, the network input image should be a multiple of 32; each training image is divided into a 13 × 13 grid of cells; since 13 × 32 = 416, the input image size is finally required to be 416 × 416;

data augmentation is applied to the limited training set, comprising horizontal flipping, vertical flipping, random rotation and color jittering of the training images;

3-3, network architecture; the backbone network (backbone) of the convolutional neural network model adopts the Darknet53 network structure to extract features from the input image; the feature layers of the whole convolutional neural network model are divided into 5 scales, producing feature maps of sizes 13 × 13, 26 × 26, 52 × 52, 104 × 104 and 208 × 208 respectively; with these 5 feature maps of different scales, the improved YOLOv3 can better perform local feature interaction with small-scale targets;

3-4, network output; 15 anchor boxes are obtained on the training set by a K-means clustering method, and at output the anchor box with the largest IOU value is selected as the prediction box; for an input image, the convolutional neural network model maps it to 5 output tensors of different scales, which represent the probability that a target exists at each position of the image; for a 416 × 416 input image, each prediction is a 4 + 1 + 1 = 6-dimensional vector containing the predicted bounding box coordinates (c_x, c_y, p_w, p_h), the confidence of the bounding box and the probability of the target class, where c_x, c_y are the center coordinates of the predicted bounding box on the feature map and p_w, p_h are its width and height on the feature map; when the confidence of a box is higher than 0.3, it is judged to be a suspected target; when the intersection over union of two prediction boxes is larger than a set threshold, the two boxes are considered to mark the same target, in which case a non-maximum suppression method selects the prediction box with the highest confidence as the final result, and its coordinate and class information are output;

3-5, loss function; the loss function consists of four parts: loss_all = loss_xy + loss_box × loss_wh + loss_conf + loss_cls; wherein loss_xy represents the position offset loss of coordinates x and y, loss_wh represents the error loss of the height and width of the target box, loss_conf represents the confidence loss of the target box, and loss_cls represents the classification loss; the factor loss_box is introduced to reduce the height-and-width regression loss of large-scale target boxes in the calculation, so that the height and width loss of small targets accounts for a larger share of the total loss function, which is more beneficial to the detection of small targets;

the loss function is specifically calculated as follows:

wherein the parameters are defined as follows:

S²: the total number of grid cells is S × S;

B: each grid cell has B prediction boxes;

1_ij^obj: a value of 0 or 1; it takes the value 1 when a target exists in grid cell i, indicating that the prediction of the jth prediction box is valid, and 0 when no target exists in grid cell i;

1_ij^noobj: a value of 0 or 1, with the opposite meaning of 1_ij^obj; it takes the value 1 when no target exists in grid cell i, and 0 when a target exists;

x, y: the center position of the predicted bounding box;

x̂, ŷ: the center position of the actual bounding box;

w, h: the width and height of the predicted bounding box;

ŵ, ĥ: the width and height of the actual bounding box;

C: the confidence score, representing the probability that a target is present;

Ĉ: the intersection over union of the predicted bounding box and the actual bounding box;

λ_coord: the coordinate prediction weight, an adjustable parameter used to adjust the proportion of each loss term in the total loss function;

λ_noobj: the confidence prediction weight, used to balance the confidence loss when a target is present against the confidence loss when no target is present;

λ: an adjustable weight coefficient for adjusting the weight of loss_box in the total loss function;

p_i(c): the predicted target class probability;

p̂_i(c): the actual label class;

3-6, model testing; the trained model is tested on the test set using several evaluation metrics, including precision, recall, F1 value and intersection over union (IOU); in addition, the frame rate fps is used to measure the detection speed of the model;

the formulas for the above five metrics are as follows:

precision: Precision = TP/(TP + FP);

recall: Recall = TP/(TP + FN);

F1 value: F1 = 2 × Precision × Recall/(Precision + Recall);

intersection over union: IOU = area(DR ∩ GT)/area(DR ∪ GT);

frame rate: fps = n/T;

wherein TP (true positive) is the number of samples predicted as 1 whose true value is 1, FP (false positive) is the number of samples predicted as 1 whose true value is 0, and FN (false negative) is the number of samples predicted as 0 whose true value is 1; the intersection over union IOU is the overlap rate between the prediction box DR output by the model and the original ground truth box GT, the optimal case being complete overlap, where IOU = 1; finally, the frame rate fps is used to evaluate the speed at which the algorithm processes images, where n is the total number of images processed and T is the time required, in frames per second (f/s).

Background

In recent years, people's demand for networks has grown rapidly and internet traffic has surged, greatly driving demand in the electronics manufacturing industry and making it one of the most important strategic industries in the world. In the internet era, electronic products are widely used not only in calculators, mobile phones and computers, but also in artificial intelligence devices, large-scale industrial equipment, automobiles and aviation equipment. The electronics manufacturing industry is an important indicator of a country's productivity level and an important factor distinguishing developing from developed countries. The scale of China's electronics manufacturing industry has grown steadily in recent years, and it is an important supporting industry of the national economy.

Surface-Mount Technology (SMT) is one of the most popular techniques and processes in the electronics manufacturing and assembly industry. SMT enables automated, high-tech processing of Printed Circuit Boards (PCBs). Over the last decade, SMT has spread throughout various industries and fields in China, developing rapidly and finding wide application. Although SMT has its own features and advantages, many non-standard components remain in the electronics manufacturing and assembly industry, and their special structural configurations make fully automatic soldering difficult.

Although modern enterprises now generally use automatic insertion and automatic soldering production processes, some non-standard components must still be soldered by hand, so no soldering method can yet completely replace manual soldering. The traditional manual soldering approach suffers from low production efficiency, high labor intensity, high demands on worker experience, and susceptibility to the workers' emotional state, so manual soldering cannot guarantee product delivery time or product quality.

After a PCB is soldered, fault detection must be performed on it, and an important prerequisite for fault detection is accurately identifying the solder joints to be inspected. The traditional method is mainly manual visual inspection, which is easily influenced by subjective factors, unstable, slow and inefficient, and can easily harm the market competitiveness of products. To help realize automatic soldering, solder joint positions can be identified automatically using computer vision and related technologies.

Disclosure of Invention

The invention provides a solder joint detection method based on a convolutional neural network, which aims to accurately identify and localize solder joints on a PCB.

In order to achieve this purpose, the technical scheme of the invention comprises the following steps:

Step 1, collecting a PCB solder joint data set, preprocessing the data, and then labeling and storing the data.

Step 2, establishing a convolutional neural network model based on computer vision.

Step 3, training the established convolutional neural network model with the solder joint training set, and testing the trained model with the solder joint test set.

In step 1, a solder joint data set is collected, and the data are preprocessed, labeled and stored. The specific implementation steps are as follows:

1-1, data acquisition. First, original images of the PCB are acquired with AOI automatic optical inspection equipment. For each PCB, the camera captures multiple local fields of view, which are stitched into a complete image by an image stitching method.

1-2, data preprocessing. Because the original image is too large and the solder joints are too small, the image is uniformly divided into 416 × 416 tiles to ensure that solder joint positions can be accurately detected and localized.

And 1-3, data annotation. Because training a convolutional neural network model requires a large amount of image data, the preprocessed images are manually annotated with the labeling tool LabelImg, and each solder joint target is marked with a rectangular box.

And 1-4, data storage. After labeling with the LabelImg tool, an xml file is generated whose key information comprises the target name and the bounding box coordinates xmin, xmax, ymin and ymax. The labeled data are stored in the VOC format, with one label file per image; images are stored as jpg and label files as xml.

In step 2, a convolutional neural network model is established based on computer vision, specifically as follows:

The convolutional neural network model is used as the solder joint detection model to detect and localize solder joints. Because solder joint detection is a small-scale target detection task, the model extends the existing YOLOv3 with multi-scale detection and uses a Darknet53 network, improved on the basis of a residual neural network, as the feature extractor, so that the improved convolutional neural network model can better detect small targets.

In step 3, the process of training the convolutional neural network model with the solder joint data set comprises the following steps:

3-1, dataset division. The collected data set is divided into a training set and a test set at a ratio of 7:3.

3-2, network input and data augmentation. Because there are 5 downsampling operations in the convolutional neural network model, i.e. a total stride of 2^5 = 32, the network input image should be a multiple of 32. Each training image is divided into a 13 × 13 grid of cells; since 13 × 32 = 416, the input image size is finally required to be 416 × 416.

Data augmentation is applied to the limited training set, comprising horizontal flipping, vertical flipping, random rotation and color jittering of the training images.

3-3, network structure. The backbone network (backbone) of the convolutional neural network model adopts the Darknet53 network structure; the feature layers of the whole convolutional neural network model are divided into 5 scales, producing feature maps of sizes 13 × 13, 26 × 26, 52 × 52, 104 × 104 and 208 × 208. With these 5 feature maps of different scales, the convolutional neural network model can better perform local feature interaction with small-scale targets.

And 3-4, network output. 15 anchor boxes are obtained on the training set by a K-means clustering method, and at output the anchor box with the largest IOU with the ground truth is selected as the prediction box. For an input image, the convolutional neural network model maps it to 5 output tensors of different scales, representing the probability that a target exists at each position of the image. For a 416 × 416 input image, each prediction is a 4 + 1 + 1 = 6-dimensional vector containing the predicted bounding box coordinates (c_x, c_y, p_w, p_h), the confidence of the bounding box and the probability of the target class, where c_x, c_y are the center coordinates of the predicted bounding box on the feature map and p_w, p_h are its width and height on the feature map. When the confidence of a box is higher than 0.3, it is judged to be a suspected target; when the intersection over union of two prediction boxes is larger than a set threshold, the two boxes are considered to mark the same target, in which case Non-Maximum Suppression (NMS) selects the prediction box with the highest confidence as the final result, and its coordinate and class information are output.
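For illustration, here is a minimal sketch of IOU-based K-means anchor clustering in the spirit of the step above. k = 15 follows the text; the 1 − IOU distance, the random initialization, the mean update and the stopping rule are common choices assumed here, not taken from the source.

```python
import numpy as np

def iou_wh(whs: np.ndarray, centers: np.ndarray) -> np.ndarray:
    """IOU between box sizes (N, 2) and centers (k, 2), both anchored at the origin."""
    inter = (np.minimum(whs[:, None, 0], centers[None, :, 0])
             * np.minimum(whs[:, None, 1], centers[None, :, 1]))
    union = ((whs[:, 0] * whs[:, 1])[:, None]
             + (centers[:, 0] * centers[:, 1])[None, :] - inter)
    return inter / union

def kmeans_anchors(whs: np.ndarray, k: int = 15, iters: int = 100, seed: int = 0):
    """Cluster labeled (width, height) pairs into k anchor boxes using 1 - IOU."""
    rng = np.random.default_rng(seed)
    centers = whs[rng.choice(len(whs), size=k, replace=False)].astype(float)
    for _ in range(iters):
        assign = np.argmax(iou_wh(whs, centers), axis=1)  # max IOU = min distance
        new = np.array([whs[assign == j].mean(axis=0) if np.any(assign == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers
```

The 15 anchors would presumably be split across the 5 detection scales, three per scale, following YOLOv3's three-anchors-per-scale convention; the source does not state this explicitly.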

3-5, loss function. The loss function consists of four parts: loss_all = loss_xy + loss_box × loss_wh + loss_conf + loss_cls, where loss_xy represents the position offset loss of coordinates x and y, loss_wh represents the error loss of the height and width of the target box, loss_conf represents the confidence loss of the target box, and loss_cls represents the classification loss. The factor loss_box is introduced to reduce the height-and-width regression loss of large-scale target boxes in the calculation, so that the height and width loss of small targets accounts for a larger share of the total loss function, which is more beneficial to small target detection.

The loss function is specifically calculated as follows:

wherein the parameters are defined as follows:

S²: the total number of grid cells is S × S

B: each grid cell has B prediction boxes

1_ij^obj: a value of 0 or 1. It takes the value 1 when a target exists in grid cell i, indicating that the prediction of the jth prediction box is valid; when no target exists in grid cell i, the value is 0.

1_ij^noobj: a value of 0 or 1, with the opposite meaning of 1_ij^obj. It takes the value 1 when no target exists in grid cell i, and 0 when a target exists.

x, y: the center position of the predicted bounding box

x̂, ŷ: the center position of the actual bounding box

w, h: the width and height of the predicted bounding box

ŵ, ĥ: the width and height of the actual bounding box

C: the confidence score, representing the probability that a target is present

Ĉ: the intersection over union of the predicted bounding box and the actual bounding box

λ_coord: the coordinate prediction weight, an adjustable parameter used to adjust the proportion of each loss term in the total loss function

λ_noobj: the confidence prediction weight, used to balance the confidence loss when a target is present against the confidence loss when no target is present

λ: an adjustable weight coefficient for adjusting the weight of loss_box in the total loss function

p_i(c): the predicted target class probability

p̂_i(c): the actual label class

And 3-6, model testing. The trained model is tested on the test set. Several evaluation metrics are used, including precision (Precision), recall (Recall), F1 value and intersection over union (IOU). In addition, the frame rate (fps) is used to measure the detection speed of the model.

The formulas for the above five metrics are as follows:

Precision: Precision = TP/(TP + FP)

Recall: Recall = TP/(TP + FN)

F1 value: F1 = 2 × Precision × Recall/(Precision + Recall)

Intersection over union: IOU = area(DR ∩ GT)/area(DR ∪ GT)

Frame rate: fps = n/T

where TP (true positive) is the number of samples predicted as 1 whose true value is 1, FP (false positive) is the number of samples predicted as 1 whose true value is 0, and FN (false negative) is the number of samples predicted as 0 whose true value is 1. The intersection over union IOU is the overlap rate between the prediction box DR (Detection Result) output by the model and the original ground truth box GT (Ground Truth); the optimal case is complete overlap, where IOU = 1. Finally, the frame rate fps is used to evaluate the speed at which the algorithm processes images, where n is the total number of images processed and T is the time required, in frames per second (f/s).

Compared with the prior art, the invention has the following advantages and effects:

1. The YOLOv3 network structure is improved: solder joint targets are detected through 5 feature detection layers of different scales, improving the detection performance of the target detection network on small-scale targets.

2. The loss function is improved. The loss function of the convolutional neural network consists of four parts; constraining the result with multiple loss terms optimizes it in different respects, ensuring high model precision.

3. The convolutional neural network achieves real-time detection while maintaining accuracy, meeting the actual production requirements of factories.

Drawings

FIG. 1 is a flow chart of the computer-vision-based solder joint detection method.

FIG. 2 is an original image acquired by the AOI automatic optical inspection equipment.

FIG. 3 is a data set image segmented to 416 × 416.

FIG. 4 shows a solder joint target to be detected.

FIG. 5 is a structural diagram of the convolutional neural network model.

FIG. 6 shows the model recognition results on a 416 × 416 image.

FIG. 7 shows an original image and targets from the VEDAI data set.

FIG. 8 shows the recognition results of the convolutional neural network model on a 1024 × 1024 image.

Detailed description of the invention

The method of the invention is further described below with reference to the accompanying drawings and examples.

FIG. 5 is a diagram of the convolutional neural network model structure. In the original YOLOv3 model, with an input image size of 416 × 416, the three feature layers generate feature maps of sizes 13 × 13, 26 × 26 and 52 × 52, i.e. reduced by factors of 32, 16 and 8 relative to the input image. Among these, the 8× downsampling has the smallest receptive field and is best suited to detecting small targets; the smallest detectable target size is 8 × 8. To better detect small targets, the invention improves the YOLOv3 model by adding two feature layers that generate feature maps of sizes 104 × 104 and 208 × 208. These two new feature maps are reduced by factors of 4 and 2 relative to the input image, so the smallest detectable target size becomes 2 × 2, allowing the features of small targets to be better captured for recognition and localization.
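As a quick check of the scale arithmetic in this paragraph, the sketch below prints each scale's feature map size and the roughly smallest detectable target size for a 416 × 416 input; the stride list simply encodes the 5 scales named above.

```python
INPUT = 416
strides = [32, 16, 8, 4, 2]  # the original 3 scales plus the 2 added ones

for s in strides:
    # feature map size = input size / stride; min detectable target ~ stride x stride
    print(f"stride {s:2d}: feature map {INPUT // s} x {INPUT // s}, "
          f"smallest target about {s} x {s} px")
```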

The invention provides a solder joint detection method based on a convolutional neural network model, which aims to accurately identify and localize solder joints on a PCB.

As shown in fig. 1, in order to achieve this purpose, the technical solution of the present invention comprises the following steps:

Step 1, collecting a PCB solder joint data set, preprocessing the data, and then labeling and storing the data.

Step 2, establishing a convolutional neural network model based on computer vision.

Step 3, training the established convolutional neural network model with the solder joint training set, and testing the trained model with the solder joint test set.

In step 1, a solder joint data set is collected, and the data are preprocessed, labeled and stored. The specific implementation steps are as follows:

1-1, data acquisition. First, original images of the PCB are acquired with AOI automatic optical inspection equipment. For each PCB, the camera captures multiple local fields of view, which are stitched into a complete image by an image stitching method. The size of the acquired image is 5182 × 2697, as shown in fig. 2.

1-2, data preprocessing. Because the original image is too large and the solder joints are too small, the image is uniformly divided into 416 × 416 tiles to ensure that solder joint positions can be accurately detected and localized; a divided image is shown in fig. 3.
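The tiling step can be sketched as follows. This is an illustrative sketch, not the authors' code: OpenCV is an assumed dependency, the file naming is made up, and the source does not say how the edge remainders of a 5182 × 2697 image (which is not a multiple of 416) are handled, so this sketch simply drops them.

```python
import cv2  # assumed dependency; any image I/O library would do

TILE = 416  # tile size from the text

def tile_image(path: str, out_prefix: str) -> int:
    """Split a large AOI image (e.g. 5182 x 2697) into 416 x 416 tiles."""
    img = cv2.imread(path)
    h, w = img.shape[:2]
    count = 0
    for y in range(0, h - TILE + 1, TILE):
        for x in range(0, w - TILE + 1, TILE):
            tile = img[y:y + TILE, x:x + TILE]
            cv2.imwrite(f"{out_prefix}_{y}_{x}.jpg", tile)  # jpg, per the text
            count += 1
    return count
```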

And 1-3, data annotation. Because training a neural network model requires a large amount of image data, the preprocessed images are manually annotated with the labeling tool LabelImg, and each solder joint target is marked with a rectangular box, as shown in fig. 4.

And 1-4, data storage. After labeling with the LabelImg tool, an xml file is generated whose key information comprises the target name and the bounding box coordinates xmin, xmax, ymin and ymax. The labeled data are stored in the VOC format, with one label file per image; images are stored as jpg and label files as xml.
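For reference, a short sketch that reads one such VOC-style xml file back with the Python standard library; the tag layout (object/name/bndbox with xmin, ymin, xmax, ymax) is the standard LabelImg output matching the fields named above.

```python
import xml.etree.ElementTree as ET

def read_voc_labels(xml_path: str):
    """Parse a LabelImg/VOC xml file into (name, xmin, ymin, xmax, ymax) tuples."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")  # target name, e.g. a solder joint class
        bb = obj.find("bndbox")
        boxes.append((
            name,
            int(bb.findtext("xmin")), int(bb.findtext("ymin")),
            int(bb.findtext("xmax")), int(bb.findtext("ymax")),
        ))
    return boxes
```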

In step 2, a convolutional neural network model is established based on computer vision, specifically as follows:

The convolutional neural network model is used as the solder joint detection model to detect and localize solder joints. Because solder joint detection is a small-scale target detection task, the model extends the existing YOLOv3 with multi-scale detection and uses a Darknet53 network, improved on the basis of a residual neural network, as the feature extractor, so that the improved convolutional neural network can better detect small targets. The structure of the convolutional neural network model is shown in fig. 5.

In step 3, the process of training the convolutional neural network model with the solder joint data set comprises the following steps:

3-1, dataset division. The collected data set is divided into a training set and a test set at a ratio of 7:3.
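A minimal sketch of the 7:3 split; the shuffle and the fixed seed are illustrative assumptions, not taken from the source.

```python
import random

def split_dataset(image_ids, train_ratio: float = 0.7, seed: int = 42):
    """Shuffle and split image ids into a 7:3 train/test partition."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    cut = int(len(ids) * train_ratio)
    return ids[:cut], ids[cut:]
```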

3-2, network input and data augmentation. Because there are 5 downsampling operations in the convolutional neural network model, i.e. a total stride of 2^5 = 32, the network input image should be a multiple of 32. Here each training image is divided into a 13 × 13 grid of cells; since 13 × 32 = 416, the input image size is finally required to be 416 × 416.

To ensure that the trained model does not overfit, i.e. that it generalizes sufficiently, an adequate training set is required. The method applies data augmentation to the limited training set, including horizontal flipping, vertical flipping, random rotation and color jittering of the training images. Augmenting the limited training set can generate a large number of realistic images that were never actually acquired, improving the generalization of the model.
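A sketch of the four augmentations named above using Pillow; every numeric range here (flip probability, rotation angle, brightness and color factors) is an assumption, since the source gives none.

```python
import random
from PIL import Image, ImageEnhance  # assumed dependency for this sketch

def augment(img: Image.Image) -> Image.Image:
    """Randomly apply the four augmentations named in the text."""
    if random.random() < 0.5:
        img = img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)  # horizontal flip
    if random.random() < 0.5:
        img = img.transpose(Image.Transpose.FLIP_TOP_BOTTOM)  # vertical flip
    img = img.rotate(random.uniform(-15, 15))  # random rotation (range assumed)
    # color jitter: random brightness / saturation tweaks (factors assumed)
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.8, 1.2))
    img = ImageEnhance.Color(img).enhance(random.uniform(0.8, 1.2))
    return img
```

Note that flips and rotations must be applied to the bounding box labels as well as the pixels; the source does not describe this bookkeeping.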

3-3, network structure. The backbone network (backbone) of the convolutional neural network model adopts the Darknet53 structure. Darknet-53 is fully convolutional on the one hand and introduces residual structures on the other, so that even with a deeper network the problems of vanishing and exploding gradients can be avoided and the features of the input image can be better extracted. The feature layers of the whole convolutional neural network model are divided into 5 scales, producing feature maps of sizes 13 × 13, 26 × 26, 52 × 52, 104 × 104 and 208 × 208. With these 5 feature maps of different scales, the convolutional neural network model can better perform local feature interaction with small-scale targets.
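A minimal PyTorch-style sketch of the Darknet53 residual unit described above (1 × 1 reduce, 3 × 3 expand, identity skip); the channel split and leaky-ReLU activation follow the published Darknet53 design, but this is an illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DarknetResidual(nn.Module):
    """Darknet53-style residual block: 1x1 reduce, 3x3 expand, skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        self.block = nn.Sequential(
            nn.Conv2d(channels, half, kernel_size=1, bias=False),
            nn.BatchNorm2d(half),
            nn.LeakyReLU(0.1),
            nn.Conv2d(half, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(0.1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.block(x)  # identity skip mitigates vanishing gradients
```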

And 3-4, network output. 15 anchor boxes are obtained on the training set by a K-means clustering method, and at output the anchor box with the largest IOU with the ground truth is selected as the prediction box. For an input image, the convolutional neural network model maps it to 5 output tensors of different scales, representing the probability that a target exists at each position of the image. For a 416 × 416 input image, each prediction is a 4 + 1 + 1 = 6-dimensional vector containing the predicted bounding box coordinates (c_x, c_y, p_w, p_h), the confidence of the bounding box and the probability of the target class, where c_x, c_y are the center coordinates of the predicted bounding box on the feature map and p_w, p_h are its width and height on the feature map. When the confidence of a box is higher than 0.3, it is judged to be a suspected target; when the intersection over union of two prediction boxes is larger than a set threshold, the two boxes are considered to mark the same target, in which case Non-Maximum Suppression (NMS) selects the prediction box with the highest confidence as the final result, and its coordinate and class information are output.
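A minimal sketch of the output filtering just described: the 0.3 confidence threshold comes from the text, while the overlap threshold is left as a parameter because the source only calls it "a set threshold".

```python
def iou(a, b):
    """IOU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(detections, conf_thresh=0.3, iou_thresh=0.5):
    """Keep the highest-confidence box among overlapping candidates.

    detections: list of (box, confidence); iou_thresh is an assumed value.
    """
    cands = [d for d in detections if d[1] > conf_thresh]  # suspected targets
    cands.sort(key=lambda d: d[1], reverse=True)
    kept = []
    for box, conf in cands:
        if all(iou(box, k[0]) <= iou_thresh for k in kept):
            kept.append((box, conf))
    return kept
```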

3-5, loss function. Compared with large-scale targets, the width and height of the detection box are more important when detecting small-scale targets, and their accuracy is harder to improve. Meanwhile, the deviation produced when predicting a small-scale target box has a great influence on small-target accuracy, yet it accounts for a low proportion of the total loss function of the unimproved YOLOv3 network, so the convergence of the loss during network training is insensitive to small targets. To address this problem, the method designs a new loss function. The loss function consists of four parts: loss_all = loss_xy + loss_box × loss_wh + loss_conf + loss_cls, where loss_xy represents the position offset loss of coordinates x and y, loss_wh represents the error loss of the height and width of the target box, loss_conf represents the confidence loss of the target box, and loss_cls represents the classification loss. The factor loss_box is introduced to reduce the height-and-width regression loss of large-scale target boxes in the calculation, so that the height and width loss of small targets accounts for a larger share of the total loss function, which is more beneficial to small target detection. Defining the loss function in this way allows the result to be optimized from different aspects, making it more accurate.

The loss function is specifically calculated as follows:
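What follows is a reconstruction assembled from the four-part decomposition in 3-5 and the parameter definitions below; the exact form of loss_box is not specified in the source, so it appears only as a multiplicative factor on the width-height term.

$$
\begin{aligned}
loss_{xy} &= \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\big[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\big]\\
loss_{wh} &= \lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\big[(w_i-\hat{w}_i)^2+(h_i-\hat{h}_i)^2\big]\\
loss_{conf} &= \sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\big(C_i-\hat{C}_i\big)^2+\lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{noobj}\big(C_i-\hat{C}_i\big)^2\\
loss_{cls} &= \sum_{i=0}^{S^2}\sum_{j=0}^{B}\mathbb{1}_{ij}^{obj}\sum_{c\in \mathrm{classes}}\big(p_i(c)-\hat{p}_i(c)\big)^2\\
loss_{all} &= loss_{xy} + loss_{box}\times loss_{wh} + loss_{conf} + loss_{cls}
\end{aligned}
$$

A common choice in YOLO variants for such a factor is loss_box = λ(2 − ŵ_i ĥ_i), which shrinks the width-height loss for large boxes; treat that form as an assumption rather than the source's definition.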

wherein the parameters are defined as follows:

S²: the total number of grid cells is S × S

B: each grid cell has B prediction boxes

1_ij^obj: a value of 0 or 1. It takes the value 1 when a target exists in grid cell i, indicating that the prediction of the jth prediction box is valid; when no target exists in grid cell i, the value is 0.

1_ij^noobj: a value of 0 or 1, with the opposite meaning of 1_ij^obj. It takes the value 1 when no target exists in grid cell i, and 0 when a target exists.

x, y: the center position of the predicted bounding box

x̂, ŷ: the center position of the actual bounding box

w, h: the width and height of the predicted bounding box

ŵ, ĥ: the width and height of the actual bounding box

C: the confidence score, representing the probability that a target is present

Ĉ: the intersection over union of the predicted bounding box and the actual bounding box

λ_coord: the coordinate prediction weight, an adjustable parameter used to adjust the proportion of each loss term in the total loss function

λ_noobj: the confidence prediction weight, used to balance the confidence loss when a target is present against the confidence loss when no target is present

λ: an adjustable weight coefficient for adjusting the weight of loss_box in the total loss function

p_i(c): the predicted target class probability

p̂_i(c): the actual label class

The recognition results of the network model after training are shown in fig. 6.

And 3-6, model testing. The trained model is tested on the test set. To measure the quality of the model, the method uses several evaluation metrics, including precision (Precision), recall (Recall), F1 value and intersection over union (IOU). In addition, the frame rate (fps) is used to measure the detection speed of the model.

The formulas for the above five metrics are as follows:

Precision: Precision = TP/(TP + FP)

Recall: Recall = TP/(TP + FN)

F1 value: F1 = 2 × Precision × Recall/(Precision + Recall)

Intersection over union: IOU = area(DR ∩ GT)/area(DR ∪ GT)

Frame rate: fps = n/T

where TP (true positive) is the number of samples predicted as 1 whose true value is 1, FP (false positive) is the number of samples predicted as 1 whose true value is 0, and FN (false negative) is the number of samples predicted as 0 whose true value is 1. The intersection over union IOU is the overlap rate between the prediction box DR (Detection Result) output by the model and the original ground truth box GT (Ground Truth); the optimal case is complete overlap, where IOU = 1. Finally, the frame rate fps is used to evaluate the speed at which the algorithm processes images, where n is the total number of images processed and T is the time required, in frames per second (f/s).
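A minimal sketch computing four of the five metrics from raw counts (IOU is computed per box, as intersection area over union area of DR and GT); the names mirror the definitions above.

```python
def detection_metrics(tp: int, fp: int, fn: int, n_images: int, seconds: float):
    """Precision, recall, F1 and fps from raw counts, per the formulas above."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    fps = n_images / seconds  # n images processed in T seconds
    return precision, recall, f1, fps
```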

The computer-vision-based solder joint detection method is a technology that operates a camera and a computer to detect whether solder joints are present on a printed circuit board (PCB) and to accurately localize them. The flow chart of the computer-vision-based solder joint detection method is shown in fig. 1.

The specific effects of the model of the present invention are further illustrated by the following experiments.

The experimental environment and conditions of the present invention are as follows:

CPU: Intel Core i7-9700K, 3.60 GHz

GPU: NVIDIA GeForce RTX 2080 Ti × 2

Memory: 32 GB

Operating system: Ubuntu 18.04.4 LTS

The data set used in the experiment comes from the same AOI automatic optical inspection equipment, with each image uniformly sized 416 × 416. After training, the detection performance is good, as shown in Table 1. Detection results using the model are shown in fig. 6.

TABLE 1

Size        Precision (%)   Recall (%)   F1 value (%)   IOU (%)    Frame rate (f/s)
416 × 416   89              97.7         93.2           85.654     20.515

To further demonstrate the method's improvement in small-scale target detection, the model's performance was tested on a public small-target data set. The VEDAI data set is an aerial data set with an image size of 1024 × 1024 and very small targets; an original image and its targets are shown in fig. 7. In the experiment, about 1200 aerial photographs containing vehicles and airplanes were selected from the VEDAI data set, and this data set was used to verify that the improved convolutional neural network model improves detection of small-scale targets. The experimental results are shown in table 2, and detection results with the improved convolutional neural network model are shown in fig. 8. From the change of the F1 value in the experimental results of table 2, the improved convolutional neural network clearly improves detection performance on small-scale targets.

TABLE 2
