Defect detection method and device based on a structured light field video stream


1. A defect detection method based on a structured light field video stream, characterized by comprising the following steps:

A1: constructing a detection device based on an actively encoded structured light source and a light field vision sensor;

A2: calibrating the light field vision sensor with a white image, and decoding the structured light field video stream acquired by the light field vision sensor according to the calibration result to obtain a real-time microlens image video stream;

A3: performing gray correction, region-of-interest correction and motion correction on the real-time microlens image video stream, then calculating the inter-frame similarity, and establishing the local defect detection results of the microlenses;

A4: counting the distribution of defect features over adjacent microlenses by utilizing the spatial-domain and time-domain correlation of the light field video, to obtain the final structured light field video defect detection result.

2. The method according to claim 1, wherein the detection device comprises an actively encoded structured light source system capable of emitting light and displaying a specific coded pattern, and a light field vision sensor system with light field video stream acquisition capability, wherein light emitted by the actively encoded structured light source system is reflected by the object to be detected and then captured by the light field vision sensor system; the actively encoded structured light source system is controlled by a computer program and can display a specified coded pattern at a specified time as required by the program; preferably, the coded pattern comprises one or more of two-dimensional square-wave stripes, two-dimensional sine-wave stripes and two-dimensional checkerboards; the light field vision sensor system comprises one or more vision sensor devices with a single-exposure stereo imaging function, preferably a light field camera based on multiplexing technology or a camera-array light field camera.

3. The method according to claim 1 or 2, characterized in that, in step A2:

for a white image captured by the light field vision sensor, an edge detection algorithm is first used to make an initial estimate of the corresponding microlens centers in the white image; the initial center estimates are then input into an optimization model corresponding to the camera's microlens distribution, and after nonlinear optimization a grid vector corresponding to the geometric distribution of the microlenses of the light field vision sensor is obtained, from which the accurate pixel coordinates of the microlens centers are derived; preferably, the specific operation is as follows:

in the above formula:

r - geometric distribution grid vector of the microlenses;

θ - microlens angle distribution parameter matched to the microlens array;

H - transformation matrix from the pixel coordinate system to the microlens coordinate system;

x_k, y_k - initial estimate of the pixel coordinates of the k-th microlens center;

r_refined - optimized microlens geometric distribution grid vector;

H_refined - new transformation matrix generated from r_refined;

x_k,refined, y_k,refined - exact pixel coordinates of the k-th microlens center;

preferably, the light field video stream refers to a sequence of light field images continuously acquired by the light field vision sensor; for each frame of the raw light field image in the sequence, all microlens centers in the calibration result are traversed, and a neighborhood close to the microlens size, centered on the accurate pixel coordinates x_k,refined, y_k,refined, is cropped out, yielding the real-time microlens image video stream.

4. The method according to any one of claims 1 to 3, wherein said step A3 comprises:

A31: performing gray scale correction on the real-time microlens image video stream;

A32: performing region of interest (ROI) correction on the real-time microlens image video stream;

A33: performing motion correction on the real-time microlens image video stream;

A34: calculating the inter-frame similarity in real time from the corrected frame-difference video stream, and establishing the local defect detection results of the microlenses.

5. The method of claim 4, wherein in step A31:

for the real-time microlens image video stream obtained in step A2, two adjacent frames of microlens images are taken, the mean gray level of each frame is calculated, and the frame with the smaller mean gray level is linearly transformed, yielding a video stream with consistent gray-level variation; preferably, the specific operation is as follows:

in the above formula:

I(t) - real-time microlens image at time t;

Δt - frame sampling interval of the light field vision sensor;

I(t+Δt) - the first microlens image frame after time t;

I′(t) - real-time microlens image after gray-level correction at time t.

6. The method according to claim 4 or 5, wherein in step A32:

for the real-time microlens image video stream with consistent gray level obtained in step A31, two adjacent frames of microlens images are taken, the intersection of the ROIs of the two frames is computed as the new ROI, and invalid information outside the ROI is removed; preferably, the specific operation is as follows:

I″(t)=∩{Bin[I′(t)],Bin[I′(t+Δt)]}·I′(t) (6)

in the above formula:

Bin() - binarization operation;

I″(t) - real-time microlens image after ROI correction at time t.

7. The method according to any one of claims 4 to 6, wherein in step A33:

for the real-time microlens image video stream with consistent gray level and ROI obtained in step A32, two adjacent frames of microlens images are taken, the previous frame is shifted and the inter-frame MSE corresponding to each candidate displacement is computed; the displacement with the minimum MSE value is the optimal motion estimate between the two frames, and the motion error of the previous frame is corrected according to this estimate; preferably, the specific operation is as follows:

I‴_t(i, j) = I″_t(i + u_t, j + v_t)   (9)

in the above formula:

I″_t - real-time microlens image with consistent gray level and ROI at time t;

M, N - microlens image size;

i, j - microlens image pixel coordinates used for traversal;

u, v - candidate inter-frame displacements used for traversal;

MSE_t - MSE values of all candidate inter-frame displacements at time t;

u_t, v_t - optimal inter-frame motion estimate at time t;

I‴_t - real-time microlens image at time t with consistent gray level and ROI and without motion error.

8. The method according to any one of claims 4 to 7, wherein in step A34:

for the real-time microlens image video stream with consistent gray level and ROI and without motion error obtained in step A33, two adjacent frames of microlens images are taken, a similarity measure function suited to the specific task is selected, a threshold judgment is applied to the inter-frame similarity, and the local defect detection result of the microlens is output; preferably, the specific operation is as follows:

Result(t) = Bin[Similarity(I‴_t, I‴_{t+Δt})]   (10)

in the above formula:

Similarity() - similarity measure function;

Result(t) - local defect detection result of the microlens at time t.

9. The method according to any one of claims 1 to 8, wherein said step A4 comprises:

A41: utilizing the spatial correlation of the light field image, counting the distribution of defect features over adjacent microlenses, and establishing the defect position detection result within a single frame of the difference light field image;

A42: utilizing the time-domain correlation of the light field video, counting the variation of defect positions between adjacent frames, and establishing the final structured light field video defect detection result;

preferably, in step A41:

using the real-time local defect detection results of the microlenses obtained in step A34, the spatial distribution of the centers of the defective microlenses is generated, and the density of defective microlenses at each position of the current light field image frame is calculated; the higher the density, the higher the probability that a physical defect exists; more preferably, the specific operation is as follows:

FrameResult(k)=Bin[D(k)] (12)

in the above formula:

Dist() - distance measure function, including but not limited to Euclidean distance, Gaussian distance, etc.;

S - total number of defective microlenses in the current frame;

D(k) - defect density at the center of the k-th microlens;

FrameResult(k) - defect position detection result at the k-th microlens center of the current frame;

preferably, in step A42:

using the single-frame difference light field image defect position detection results obtained in step A41, if a defect is detected in a region with a possible physical defect across several adjacent frames, and the movement speed of the region position between frames is close to the object movement speed set by the detection device, the position is judged to have a high probability of containing a defect; the defect positions with time-domain correlation are marked and output on the light field video stream in real time.

10. A defect detection apparatus based on a structured light field video stream, comprising a processor and a detection device based on an actively encoded structured light source and a light field vision sensor, wherein the processor, when executing a computer program on a computer-readable storage medium, implements steps A2-A4 of the defect detection method based on a structured light field video stream according to any one of claims 1 to 9.

Background

Machine vision defect detection is a technique in which images captured by an optical sensor are taken as input and physical defects are identified by extracting surface image features with computer vision methods. Compared with traditional manual inspection, this technique offers high detection accuracy and speed and has broad application prospects in industrial quality inspection. However, traditional image recognition and stereoscopic vision techniques mostly rely on image feature points and texture information, and often fail when facing low-texture or texture-free objects and working scenes; structured-light-based detection techniques can reconstruct the three-dimensional information of an object's surface, but the systems are costly and the detection accuracy is strongly affected by the material and machining process of the object to be detected. In addition, traditional vision-sensor-based methods have poor robustness when detecting tiny defects, with frequent false detections and missed detections, and are inefficient because multiple multi-angle exposures are needed when handling objects with complex surface structures.

Light field imaging captures four-dimensional light information containing both spatial and angular dimensions through a specially designed optical path, achieving single-exposure multi-view three-dimensional imaging. However, light field data are high-dimensional and complex to process, making real-time detection difficult, and research on applying light field imaging to industrial inspection is still at an early stage.

It is to be noted that the information disclosed in the above background section is only for understanding the background of the present application and thus may include information that does not constitute prior art known to a person of ordinary skill in the art.

Disclosure of Invention

Aiming at the poor accuracy, stability, robustness and real-time performance of existing machine vision defect detection techniques, the invention provides a defect detection method and device based on a structured light field video stream.

To achieve the above purpose, the invention adopts the following technical solution:

a defect detection method based on a structural light field video stream comprises the following steps:

A1: constructing a detection device based on an actively encoded structured light source and a light field vision sensor;

A2: calibrating the light field vision sensor with a white image, and decoding the structured light field video stream acquired by the light field vision sensor according to the calibration result to obtain a real-time microlens image video stream;

A3: performing gray correction, region-of-interest correction and motion correction on the real-time microlens image video stream, then calculating the inter-frame similarity, and establishing the local defect detection results of the microlenses;

A4: counting the distribution of defect features over adjacent microlenses by utilizing the spatial-domain and time-domain correlation of the light field video, to obtain the final structured light field video defect detection result.

Further:

the detection device comprises an active coding structure light source system capable of emitting light and displaying a specific coding pattern and a light field vision sensor system with light field video stream acquisition capacity, wherein light emitted by the active coding structure light source system is captured by the light field vision sensor system after being reflected by an object to be detected; the active coding structure light source system is controlled by a computer program, and can display a specified coding pattern at a specified time according to the requirement of the program, preferably, the coding pattern comprises one or more of two-dimensional square wave stripes, two-dimensional sine wave stripes and two-dimensional checkerboards; the light field vision sensor system comprises one or more vision sensor devices with single-exposure stereo imaging function, and preferably adopts a light field camera or a camera array type light field camera based on multiplexing technology.

In the step A2:

for a white image captured by the light field vision sensor, an edge detection algorithm is first used to make an initial estimate of the corresponding microlens centers in the white image; the initial center estimates are then input into an optimization model corresponding to the camera's microlens distribution, and after nonlinear optimization a grid vector corresponding to the geometric distribution of the microlenses of the light field vision sensor is obtained, from which the accurate pixel coordinates of the microlens centers are derived; preferably, the specific operation is as follows:

in the above formula:

r - geometric distribution grid vector of the microlenses;

θ - microlens angle distribution parameter matched to the microlens array;

H - transformation matrix from the pixel coordinate system to the microlens coordinate system;

x_k, y_k - initial estimate of the pixel coordinates of the k-th microlens center;

r_refined - optimized microlens geometric distribution grid vector;

H_refined - new transformation matrix generated from r_refined;

x_k,refined, y_k,refined - exact pixel coordinates of the k-th microlens center.

Preferably, the light field video stream refers to a sequence of light field images continuously acquired by the light field vision sensor; for each frame of the raw light field image in the sequence, all microlens centers in the calibration result are traversed, and a neighborhood close to the microlens size, centered on the accurate pixel coordinates x_k,refined, y_k,refined, is cropped out, yielding the real-time microlens image video stream.

The step A3 includes:

A31: performing gray scale correction on the real-time microlens image video stream;

A32: performing region of interest (ROI) correction on the real-time microlens image video stream;

A33: performing motion correction on the real-time microlens image video stream;

A34: calculating the inter-frame similarity in real time from the corrected frame-difference video stream, and establishing the local defect detection results of the microlenses.

In the step A31:

for the real-time microlens image video stream obtained in step A2, two adjacent frames of microlens images are taken, the mean gray level of each frame is calculated, and the frame with the smaller mean gray level is linearly transformed, yielding a video stream with consistent gray-level variation; preferably, the specific operation is as follows:

in the above formula:

I(t) - real-time microlens image at time t;

Δt - frame sampling interval of the light field vision sensor;

I(t+Δt) - the first microlens image frame after time t;

I′(t) - real-time microlens image after gray-level correction at time t.

In the step A32:

for the real-time microlens image video stream with consistent gray level obtained in step A31, two adjacent frames of microlens images are taken, the intersection of the ROIs of the two frames is computed as the new ROI, and invalid information outside the ROI is removed; preferably, the specific operation is as follows:

I″(t)=∩{Bin[I′(t)],Bin[I′(t+Δt)]}·I′(t) (6)

in the above formula:

Bin() - binarization operation;

I″(t) - real-time microlens image after ROI correction at time t.

In the step A33:

for the real-time microlens image video stream with consistent gray level and ROI obtained in step A32, two adjacent frames of microlens images are taken, the previous frame is shifted and the inter-frame MSE corresponding to each candidate displacement is computed; the displacement with the minimum MSE value is the optimal motion estimate between the two frames, and the motion error of the previous frame is corrected according to this estimate; preferably, the specific operation is as follows:

I‴_t(i, j) = I″_t(i + u_t, j + v_t)   (9)

in the above formula:

I″_t - real-time microlens image with consistent gray level and ROI at time t;

M, N - microlens image size;

i, j - microlens image pixel coordinates used for traversal;

u, v - candidate inter-frame displacements used for traversal;

MSE_t - MSE values of all candidate inter-frame displacements at time t;

u_t, v_t - optimal inter-frame motion estimate at time t;

I‴_t - real-time microlens image at time t with consistent gray level and ROI and without motion error.

In the step A34:

for the real-time microlens image video stream with consistent gray level and ROI and without motion error obtained in step A33, two adjacent frames of microlens images are taken, a similarity measure function suited to the specific task is selected, a threshold judgment is applied to the inter-frame similarity, and the local defect detection result of the microlens is output; preferably, the specific operation is as follows:

Result(t) = Bin[Similarity(I‴_t, I‴_{t+Δt})]   (10)

in the above formula:

Similarity() - similarity measure function;

Result(t) - local defect detection result of the microlens at time t.

The step A4 includes:

A41: utilizing the spatial correlation of the light field image, counting the distribution of defect features over adjacent microlenses, and establishing the defect position detection result within a single frame of the difference light field image;

A42: utilizing the time-domain correlation of the light field video, counting the variation of defect positions between adjacent frames, and establishing the final structured light field video defect detection result.

Preferably, in step A41:

using the real-time local defect detection results of the microlenses obtained in step A34, the spatial distribution of the centers of the defective microlenses is generated, and the density of defective microlenses at each position of the current light field image frame is calculated; the higher the density, the higher the probability that a physical defect exists; more preferably, the specific operation is as follows:

FrameResult(k)=Bin[D(k)] (12)

in the above formula:

Dist() - distance measure function, including but not limited to Euclidean distance, Gaussian distance, etc.;

S - total number of defective microlenses in the current frame;

D(k) - defect density at the center of the k-th microlens;

FrameResult(k) - defect position detection result at the k-th microlens center of the current frame.

Preferably, in step A42:

using the single-frame difference light field image defect position detection results obtained in step A41, if a defect is detected in a region with a possible physical defect across several adjacent frames, and the movement speed of the region position between frames is close to the object movement speed set by the detection device, the position is judged to have a high probability of containing a defect; the defect positions with time-domain correlation are marked and output on the light field video stream in real time.

A defect detection device based on a structured light field video stream comprises a processor and a detection device based on an actively encoded structured light source and a light field vision sensor, wherein the processor implements steps A2-A4 of the above defect detection method based on a structured light field video stream when executing a computer program on a computer-readable storage medium.

The invention has the following beneficial effects:

the invention provides a defect detection method and a device based on a structured light field video stream, which combine a coded structured light technology and a light field imaging technology, convert physical defects on the surface of an object to be detected into geometric distortion of a reflection coded pattern on the surface of the object by utilizing an active coded structured light source, capture the geometric distortion by utilizing the light field imaging through fewer exposure times in a higher dimension, and convert large-range unobvious distortion in the visual field of a traditional camera into small-range obvious distortion with spatial correlation in the level of a light field microlens image. Compared with the prior art, the method can effectively improve the accuracy and stability of structured light defect detection, solves the problem of 'from nothing to nothing' of light field visual defect detection of the complex workpiece, obviously improves the real-time performance of the light field video detection method, and has important significance for the industries of precision machining and the like and the popularization and application of the light field imaging technology.

Drawings

Fig. 1 is a flowchart of a defect detection method based on a structured light field video stream according to an embodiment of the present invention.

Detailed Description

The embodiments of the present invention will be described in detail below. It should be emphasized that the following description is merely exemplary in nature and is not intended to limit the scope of the invention or its application.

The invention provides a defect detection method and device based on a structured light field video stream. The main idea is as follows: coded structured light technology is combined with light field imaging technology; the actively encoded structured light source converts physical defects of the object to be detected into geometric distortions of the coded pattern; light field imaging captures these distortions in a higher dimension with fewer exposures, converting large-range, inconspicuous distortion in the field of view of a conventional camera into small-range, significant distortion with spatial correlation at the level of the light field microlens images, thereby achieving accurate, efficient and stable detection of physical defects on objects made of various materials. In some embodiments, as shown in fig. 1, the defect detection method includes the following steps:

A1: constructing a detection device based on an actively encoded structured light source and a light field vision sensor;

A2: calibrating the light field vision sensor with a white image, and decoding the structured light field video stream acquired by the sensor according to the calibration result to obtain a real-time microlens image video stream;

A3: performing gray correction, region-of-interest correction and motion correction on the real-time microlens image video stream, then calculating the inter-frame similarity, and establishing the local defect detection results of the microlenses;

A4: counting the distribution of defect features over adjacent microlenses by utilizing the spatial-domain and time-domain correlation of the light field video, to obtain the final structured light field video defect detection result.

When the above steps are performed in a specific embodiment, the following operations may be performed. It should be noted that the specific methods employed in the practice are merely illustrative, and the scope of the present invention includes, but is not limited to, the following specific methods.

A1: A detection device based on an actively encoded structured light source and a light field vision sensor is constructed.

Specifically, the device comprises an actively encoded structured light source system capable of emitting light and displaying a specific coded pattern, and a light field vision sensor system with light field video stream acquisition capability. The actively encoded structured light source system is controlled by a computer program and can display a specified coded pattern at a specified time as required by the program. The coded pattern includes but is not limited to two-dimensional square-wave stripes, two-dimensional sine-wave stripes, two-dimensional checkerboards and other patterns carrying spatial modulation information, and the hardware carrier of the light source system includes but is not limited to photographic lamps, light boxes covered by the coded pattern, and display devices such as electronic display screens that can be controlled by a computer program. The light field vision sensor system is composed of one or more vision sensor devices with a single-exposure stereo imaging function, whose hardware carriers include but are not limited to light field cameras based on multiplexing technology, camera-array light field cameras, and other stereo imaging devices from which light field images can be obtained by decoding. Light emitted by the light source system is reflected by the object to be detected and then captured by the light field vision sensor system.

A2: The light field vision sensor is calibrated with a white image, and the structured light field video stream acquired by the sensor is decoded according to the calibration result to obtain a real-time microlens image video stream.

Specifically, for a white image captured by the light field vision sensor, a conventional edge detection algorithm such as the Hough circle transform is first used to make an initial estimate of the corresponding microlens centers in the white image; the initial center estimates are then input into an optimization model corresponding to the camera's microlens distribution, and after nonlinear optimization a grid vector corresponding to the geometric distribution of the microlenses of the light field vision sensor is obtained, from which the accurate pixel coordinates of the microlens centers are derived. The specific operation is as follows:

in the above formula:

r - geometric distribution grid vector of the microlenses;

θ - microlens angle distribution parameter matched to the microlens array;

H - transformation matrix from the pixel coordinate system to the microlens coordinate system;

x_i, y_i - initial estimate of the pixel coordinates of the i-th microlens center;

r_refined - optimized microlens geometric distribution grid vector;

H_refined - new transformation matrix generated from r_refined;

x_i,refined, y_i,refined - exact pixel coordinates of the i-th microlens center.
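
The optimization formulas for r, r_refined and H_refined are not reproduced in this text. Purely as a hedged illustration of the idea, the following Python sketch fits a regular square grid (pitch, rotation angle and offset as the free parameters) to the initial center estimates by nonlinear least squares; the function name, the square-grid assumption and all parameters are ours, and many plenoptic sensors use a hexagonal microlens layout that this simplified model does not cover.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_grid(initial_centers, pitch_guess):
    """Fit pitch, rotation and offset of a square microlens grid to the
    initial center estimates (x_i, y_i) by nonlinear least squares."""
    centers = np.asarray(initial_centers, dtype=float)        # (K, 2) initial (x_i, y_i)
    # Integer grid indices of each lenslet relative to the first one
    # (valid only when the initial rotation of the grid is small).
    idx = np.round((centers - centers[0]) / pitch_guess)

    def residual(p):
        pitch, theta, x0, y0 = p
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        predicted = (idx * pitch) @ rot.T + np.array([x0, y0])
        return (predicted - centers).ravel()

    p0 = np.array([pitch_guess, 0.0, centers[0, 0], centers[0, 1]])
    p_opt = least_squares(residual, p0).x
    pitch, theta, x0, y0 = p_opt
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    refined_centers = (idx * pitch) @ rot.T + np.array([x0, y0])  # (x_i,refined, y_i,refined)
    return refined_centers, p_opt
```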

A light field video stream refers to a sequence of light field images continuously acquired by the light field vision sensor. For each frame of the raw light field image in the sequence, all microlens centers in the calibration result are traversed, and a neighborhood close to the microlens size, centered on the accurate pixel coordinates x_i,refined, y_i,refined, is cropped out, yielding the real-time microlens image video stream; a sketch of this decoding step is shown below.
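
A minimal sketch of the decoding step just described, assuming the refined centers are already available; the neighborhood size and all names are illustrative rather than the actual implementation, and the microlens centers are assumed to lie far enough from the image border.

```python
import numpy as np

def decode_microlens_stream(raw_frames, refined_centers, lens_size=15):
    """Crop a fixed neighborhood around each calibrated microlens center
    from every raw light field frame; stream[t][k] is microlens k at frame t."""
    half = lens_size // 2
    stream = []
    for raw in raw_frames:
        lenslets = []
        for x_c, y_c in refined_centers:                 # (x_i,refined, y_i,refined)
            r, c = int(round(y_c)), int(round(x_c))
            patch = raw[r - half:r + half + 1, c - half:c + half + 1]
            lenslets.append(np.asarray(patch, dtype=np.float32))
        stream.append(lenslets)
    return stream
```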

A3: Gray correction, region-of-interest correction and motion correction are performed on the real-time microlens image video stream, the inter-frame similarity is calculated, and the local defect detection results of the microlenses are established.

A31: performing gray scale correction on the real-time microlens image video stream;

specifically, for the real-time microlens image video stream obtained in step a2, two adjacent frames of microlens images are captured, the mean value of the gray levels of the two frames of images is calculated, and one frame with smaller gray level is linearly transformed to obtain a video stream with consistent gray level change. The specific operation is as follows (assuming that the frame with smaller gray scale is the previous frame):

in the above formula:

I(t) - real-time microlens image at time t;

Δt - frame sampling interval of the light field vision sensor;

I(t+Δt) - the first microlens image frame after time t;

I′(t) - real-time microlens image after gray-level correction at time t.
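
The linear-transform formula itself is not reproduced above; the sketch below shows one plausible reading of step A31, in which the darker of the two adjacent microlens images is rescaled so that both share the same mean gray level. Function and variable names are assumptions of ours.

```python
import numpy as np

def gray_correct(prev_frame, next_frame):
    """Equalize the mean gray level of two adjacent microlens images
    by linearly scaling the darker one."""
    prev_frame = np.asarray(prev_frame, dtype=np.float64)
    next_frame = np.asarray(next_frame, dtype=np.float64)
    m_prev, m_next = prev_frame.mean(), next_frame.mean()
    if m_prev < m_next:
        prev_frame = prev_frame * (m_next / max(m_prev, 1e-6))   # corrected I'(t)
    else:
        next_frame = next_frame * (m_prev / max(m_next, 1e-6))   # corrected I'(t+Δt)
    return prev_frame, next_frame
```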

A32: performing region of interest (ROI) correction on the real-time microlens image video stream;

specifically, for the real-time microlens image video stream with consistent gray scale obtained in step a31, two adjacent microlens images are intercepted, the intersection part of ROIs of the two images is calculated as a new ROI, and invalid information except the ROI is removed. The specific operation is as follows:

I″(t)=∩{Bin[I′(t)],Bin[I′(t+Δt)]}·I′(t) (6)

in the above formula:

Bin() - binarization operation;

I″(t) - real-time microlens image after ROI correction at time t.
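
A small sketch of equation (6), with a fixed-threshold binarization standing in for Bin[·]; the threshold value is an assumed example rather than a value given in the text.

```python
import numpy as np

def roi_correct(i_prime_t, i_prime_t_dt, threshold=10.0):
    """Keep only the intersection of the two frames' binarized ROIs, as in
    I''(t) = ∩{Bin[I'(t)], Bin[I'(t+Δt)]} · I'(t)."""
    i_prime_t = np.asarray(i_prime_t, dtype=np.float64)
    i_prime_t_dt = np.asarray(i_prime_t_dt, dtype=np.float64)
    roi = (i_prime_t > threshold) & (i_prime_t_dt > threshold)   # common ROI mask
    return i_prime_t * roi, i_prime_t_dt * roi                   # I''(t), I''(t+Δt)
```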

A33: performing motion correction on the real-time microlens image video stream;

specifically, for the real-time microlens image video stream with consistent gray scale and ROI obtained in step a32, two adjacent frames of microlens images are captured, the position of the previous frame is moved and inter-frame MSE corresponding to different displacements is calculated, the displacement with the minimum MSE value is the optimal motion estimation between two frames, and the motion error of the previous frame can be corrected according to the estimation value. The specific operation is as follows:

I‴_t(i, j) = I″_t(i + u_t, j + v_t)   (9)

in the above formula:

I″_t - real-time microlens image with consistent gray level and ROI at time t;

M, N - microlens image size;

u, v - candidate inter-frame displacements used for traversal;

MSE_t - MSE values of all candidate inter-frame displacements at time t;

u_t, v_t - optimal inter-frame motion estimate at time t;

I‴_t - real-time microlens image at time t with consistent gray level and ROI and without motion error.
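
The MSE formulas themselves are not reproduced above. As an illustration of the exhaustive-search motion estimation described in A33, the sketch below evaluates the inter-frame MSE over a small displacement window and applies equation (9) with the best (u_t, v_t); the search radius is an assumed parameter, and the wrap-around shift of np.roll is a simplification acceptable only for small displacements.

```python
import numpy as np

def motion_correct(i_pp_prev, i_pp_next, search_radius=3):
    """Brute-force MSE block matching: find the displacement (u_t, v_t)
    minimizing the inter-frame MSE, then shift the previous frame by it."""
    prev = np.asarray(i_pp_prev, dtype=np.float64)
    nxt = np.asarray(i_pp_next, dtype=np.float64)
    best_mse, best_uv = np.inf, (0, 0)
    for u in range(-search_radius, search_radius + 1):
        for v in range(-search_radius, search_radius + 1):
            shifted = np.roll(np.roll(prev, u, axis=0), v, axis=1)
            mse = np.mean((shifted - nxt) ** 2)
            if mse < best_mse:
                best_mse, best_uv = mse, (u, v)
    u_t, v_t = best_uv
    corrected = np.roll(np.roll(prev, u_t, axis=0), v_t, axis=1)  # I'''_t, equation (9)
    return corrected, best_uv
```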

A34: The inter-frame similarity is calculated in real time from the corrected frame-difference video stream, and the local defect detection results of the microlenses are established.

Specifically, for the real-time microlens image video stream with consistent gray level and ROI and without motion error obtained in step A33, two adjacent frames of microlens images are taken; a similarity measure function suited to the specific task is selected (including but not limited to absolute error, mean squared error, two-dimensional correlation coefficient, spectral correlation coefficient and other similarity measures), a threshold judgment is applied to the inter-frame similarity, and the local defect detection result of the microlens is output. The specific operation is as follows:

Result(t) = Bin[Similarity(I‴_t, I‴_{t+Δt})]   (10)

in the above formula:

Similarity() - similarity measure function;

Result(t) - local defect detection result of the microlens at time t.

The method can output real-time local defect detection results for each microlens.
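
A hedged example of equation (10), using mean squared error as the (dis)similarity measure; the method allows any task-specific measure, and the threshold below is an assumed value to be tuned per application.

```python
import numpy as np

def lenslet_defect_flag(i_ppp_t, i_ppp_t_dt, threshold=0.05):
    """Result(t) = Bin[Similarity(I'''_t, I'''_{t+Δt})]: flag a local defect
    on this microlens when the two corrected frames differ too much."""
    diff = np.asarray(i_ppp_t, dtype=np.float64) - np.asarray(i_ppp_t_dt, dtype=np.float64)
    mse = float(np.mean(diff ** 2))
    return mse > threshold        # True -> local defect candidate
```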

A4: The distribution of defect features over adjacent microlenses is counted by utilizing the spatial-domain and time-domain correlation of the light field video, to obtain the final structured light field video defect detection result.

A41: Utilizing the spatial correlation of the light field image, the distribution of defect features over adjacent microlenses is counted, and the defect position detection result within a single frame of the difference light field image is established;

specifically, a series of microlens center space position distributions with defects are generated by using the microlens real-time local defect detection result obtained in step a34, and the density of the corresponding defective microlenses at all positions of the current light field image frame is calculated, wherein the higher the density is, the higher the possibility of physical defects exists. The specific operation is as follows:

FrameResult(k)=Bin[D(k)] (12)

in the above formula:

Dist() - distance measure function, including but not limited to Euclidean distance, Gaussian distance, etc.;

S - total number of defective microlenses in the current frame;

D(k) - defect density at the center of the k-th microlens;

FrameResult(k) - defect position detection result at the k-th microlens center of the current frame.
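
The density formula D(k) is not reproduced above; the sketch below uses Gaussian distance weighting, one of the options named for Dist(), and then applies equation (12). The bandwidth and threshold are assumed values, and coordinates are taken in units of the microlens pitch.

```python
import numpy as np

def frame_defect_result(centers, defect_flags, sigma=2.0, density_threshold=1.5):
    """D(k): Gaussian-weighted density of defective lenslets around each
    microlens center; FrameResult(k) = Bin[D(k)]."""
    centers = np.asarray(centers, dtype=float)             # (K, 2) microlens centers
    defective = centers[np.asarray(defect_flags, dtype=bool)]
    results = np.zeros(len(centers), dtype=bool)
    if len(defective) == 0:
        return results
    for k, c in enumerate(centers):
        d2 = np.sum((defective - c) ** 2, axis=1)            # squared distances to defective lenslets
        density = np.sum(np.exp(-d2 / (2.0 * sigma ** 2)))   # D(k)
        results[k] = density > density_threshold             # Bin[D(k)]
    return results
```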

A42: The variation of defect positions between adjacent frames is counted by utilizing the time-domain correlation of the light field video, and the final structured light field video defect detection result is established.

Specifically, using the single-frame difference light field image defect position detection results obtained in step A41: for a region where a physical defect may exist, if a defect is detected in that region across adjacent frames and the movement speed of the region position between frames is close to the object movement speed set by the detection device, the position is judged to have a high probability of containing a defect. The defect positions with time-domain correlation are marked and output on the light field video stream in real time to obtain the final real-time structured light field video defect detection result.
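
An illustrative sketch of the temporal consistency check in A42: a current-frame defect position is retained only if some previous-frame defect position maps onto it under the expected object motion. The expected shift and tolerance are assumed quantities taken from the inspection rig configuration.

```python
import numpy as np

def confirm_defects(prev_positions, curr_positions, expected_shift, tolerance=1.0):
    """Keep current-frame defect positions whose displacement from a
    previous-frame defect matches the rig's object motion (pixels/frame)."""
    prev_positions = np.asarray(prev_positions, dtype=float)
    confirmed = []
    for pos in np.asarray(curr_positions, dtype=float):
        if len(prev_positions) == 0:
            break
        deviation = np.linalg.norm((pos - prev_positions) - expected_shift, axis=1)
        if deviation.min() < tolerance:                    # matches the expected motion
            confirmed.append(pos)
    return np.asarray(confirmed)
```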

The invention discloses a defect detection method and device based on a structured light field video stream, which combine coded structured light technology with light field imaging technology, capture in a higher dimension the geometric distortion of the coded pattern caused by surface defects in the light field video stream, and improve the accuracy and stability of machine vision defect detection.

The background of the present invention may contain background information related to the problem or environment of the present invention and does not necessarily describe the prior art. Accordingly, the inclusion in the background section is not an admission of prior art by the applicant.

The foregoing is a more detailed description of the invention in connection with specific/preferred embodiments and is not intended to limit the practice of the invention to those descriptions. It will be apparent to those skilled in the art that various substitutions and modifications can be made to the described embodiments without departing from the spirit of the invention, and these substitutions and modifications should be considered to fall within the scope of the invention. In the description herein, references to the description of the term "one embodiment," "some embodiments," "preferred embodiments," "an example," "a specific example," or "some examples" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Various embodiments or examples and features of various embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction. Although embodiments of the present invention and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the scope of the claims.
