Method for detecting end face angle of Y waveguide modulator of fiber-optic gyroscope
1. A method for detecting the end face angle of a Y waveguide modulator of a fiber-optic gyroscope, characterized by comprising the following steps:
S10, constructing a feature learning network based on all line segments of the edge of the end face of the Y waveguide modulator, and generating a final line segment mask based on the feature learning network and a feature output network;
S20, constructing a line segment segmentation network, and performing, with the line segment segmentation network, a feature search over all line segments of the edge of the end face of the Y waveguide modulator according to the final line segment mask, to obtain a plurality of fragmented line segments matching the final line segment mask;
S30, connecting the plurality of fragmented line segments into two straight lines using a line segment connection method;
and S40, fitting the two straight lines respectively and obtaining the included angle between the fitted straight lines, thereby completing the detection of the end face angle of the Y waveguide modulator.
2. The method of claim 1, wherein in S10, constructing the feature learning network based on all line segments of the edge of the end face of the Y waveguide modulator, and generating the final line segment mask based on the feature learning network and the feature output network comprises:
S11, taking all line segments of the edge of the end face of the Y waveguide modulator as sample data, and randomly selecting part of the sample data as a training set;
S12, constructing the feature learning network based on the training set and a pre-training network;
S13, training the feature learning network based on a binary cross entropy loss function to obtain the trained feature learning network;
S14, learning the sample data with the trained feature learning network to obtain the score and location of each line segment of the sample data, and taking the line segments whose scores and locations meet preset requirements as positive sample data;
S15, constructing the feature output network, performing threshold binarization on the positive sample data based on the feature output network to generate a preliminary line segment mask, and taking the preliminary line segment mask larger than a preset threshold as the final line segment mask;
The architecture of the feature learning network is shown in Table 1:
TABLE 1 Architecture of the feature learning network
The architecture of the feature output network is shown in Table 2:
TABLE 2 Architecture of the feature output network
3. The method of claim 1, wherein the architecture of the line segment segmentation network in S20 is shown in Table 3:
TABLE 3 Architecture of the line segment segmentation network
4. The method of claim 1, wherein in S30, connecting the plurality of fragmented line segments into two straight lines using the line segment connection method comprises:
S31, performing a morphological closing operation and rectangle fitting on the plurality of fragmented line segments to obtain a rectangle corresponding one-to-one to each fragmented line segment, and obtaining the long-side length and the offset angle of each rectangle;
S32, taking the rectangle with the largest long-side length as a first candidate rectangle, and taking each rectangle whose long-side length is greater than a preset length and less than the long-side length of the first candidate rectangle as a first reference rectangle;
S33, acquiring the Euclidean distance from the center of each first reference rectangle to the center of the first candidate rectangle;
S34, acquiring all first reference rectangles whose Euclidean distance is smaller than a preset distance, and taking the absolute value of the difference between the offset angle of each such first reference rectangle and the offset angle of the first candidate rectangle as a first angle difference;
S35, acquiring all first reference rectangles whose first angle difference is smaller than a preset angle, and connecting the fragmented line segments corresponding to these first reference rectangles to the fragmented line segment corresponding to the first candidate rectangle in order of increasing Euclidean distance, to obtain a first straight line;
S36, among the rectangles corresponding to all fragmented line segments not included in the first straight line, taking the rectangle with the largest long-side length as a second candidate rectangle, and taking each rectangle not included in the first straight line whose long-side length is greater than the preset length and less than the long-side length of the second candidate rectangle as a second reference rectangle;
S37, acquiring the Euclidean distance from the center of each second reference rectangle to the center of the second candidate rectangle;
S38, acquiring all second reference rectangles whose Euclidean distance is smaller than the preset distance, and taking the absolute value of the difference between the offset angle of each such second reference rectangle and the offset angle of the second candidate rectangle as a second angle difference;
and S39, acquiring all second reference rectangles whose second angle difference is smaller than the preset angle, and connecting the fragmented line segments corresponding to these second reference rectangles to the fragmented line segment corresponding to the second candidate rectangle in order of increasing Euclidean distance, to obtain a second straight line.
Background
The fiber-optic gyroscope is a sensor that senses the angular velocity of a carrier based on the Sagnac effect. It has the advantages of no moving parts, short start-up time and wide precision coverage, and has attracted wide attention and application in aviation, aerospace, marine and land precision navigation, precision weapon guidance, automatic control and other fields.
The Y waveguide modulator, as a core device of the fiber-optic gyroscope, is the basis for realizing phase modulation, and its performance directly influences the demodulation precision of the gyroscope signal. Because of the effective refractive index difference between the Y waveguide modulator and the optical fiber, about 4% Fresnel back reflection occurs at their coupling interface, which is equivalent to superimposing a parasitic Michelson interference on the Sagnac interference. This parasitic Michelson interference is sensitive to environmental perturbations and produces an unstable Sagnac interference phase difference, thereby introducing rotation rate detection errors. For an inertial-grade fiber-optic gyroscope, the intensity of the back-reflected wave must be suppressed to at least 70 dB below the main wave. To attenuate the back reflection at the coupling interface between the Y waveguide modulator and the optical fiber, their end faces are polished at certain angles. According to the law of reflection, with an end face tilt angle of 10 degrees for the Y waveguide modulator and 15 degrees for the optical fiber, this combination greatly weakens the Fresnel reflection, keeping the back reflection better than -70 dB without weakening the coupled optical power. It is therefore necessary to accurately detect the end face tilt angles of the Y waveguide modulator and the optical fiber; to keep the detection non-destructive, contactless image-based detection is generally adopted.
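As a rough, illustrative check of the ~4% figure (not part of the original disclosure), the normal-incidence Fresnel reflectance can be computed from the stated waveguide index of about 2.2 and an assumed silica fiber index of about 1.45:

```python
# Illustrative check of the ~4% Fresnel back reflection quoted above.
# Assumed indices: n_wg ~ 2.2 (Y waveguide, as stated in the background),
# n_fiber ~ 1.45 (typical silica fiber, an assumption not given in the text).
import math

n_wg = 2.2      # effective index of the Y waveguide modulator
n_fiber = 1.45  # assumed effective index of the optical fiber

R = ((n_wg - n_fiber) / (n_wg + n_fiber)) ** 2   # normal-incidence Fresnel reflectance
print(f"Reflectance R = {R:.3f} ({R * 100:.1f} %), i.e. {10 * math.log10(R):.1f} dB")
# -> roughly 0.042 (about 4 %), or about -13.7 dB, which is why angled polishing
#    is needed to push the back reflection below -70 dB.
```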
Most traditional image feature detection algorithms rely on handcrafted features: a fixed feature extraction algorithm is designed manually to identify and extract a single fixed feature. Such algorithms include the Hough transform, Canny edge detection, morphological filtering and so on. In most cases a traditional feature detection algorithm can only recognize a single fixed feature, because its internal parameters are fixed. In a real environment, target features vary with illumination, contrast and other environmental factors, so the performance of a fixed-parameter algorithm degrades and its recognition and extraction ability declines. Traditional feature detection algorithms therefore perform poorly on real targets.
Because the refractive index of the Y waveguide modulator is about 2.2, it appears transparent in a real environment and its edge information is submerged in the background image, which increases the difficulty of edge detection. The recognition rate of traditional feature detection algorithms is extremely low, which seriously affects the accurate interpretation of the angle information.
Disclosure of Invention
The invention provides a method for detecting the end face angle of a Y waveguide modulator of a fiber-optic gyroscope, which can solve the technical problem that the end face angle of the Y waveguide modulator of the fiber-optic gyroscope cannot be accurately detected by the conventional detection method.
The invention provides a method for detecting the end face angle of a Y waveguide modulator of a fiber-optic gyroscope, which comprises the following steps:
S10, constructing a feature learning network based on all line segments of the edge of the end face of the Y waveguide modulator, and generating a final line segment mask based on the feature learning network and a feature output network;
S20, constructing a line segment segmentation network, and performing, with the line segment segmentation network, a feature search over all line segments of the edge of the end face of the Y waveguide modulator according to the final line segment mask, to obtain a plurality of fragmented line segments matching the final line segment mask;
S30, connecting the plurality of fragmented line segments into two straight lines using a line segment connection method;
and S40, fitting the two straight lines respectively and obtaining the included angle between the fitted straight lines, thereby completing the detection of the end face angle of the Y waveguide modulator.
Preferably, in S10, constructing the feature learning network based on all line segments of the edge of the end face of the Y waveguide modulator, and generating the final line segment mask based on the feature learning network and the feature output network includes:
S11, taking all line segments of the edge of the end face of the Y waveguide modulator as sample data, and randomly selecting part of the sample data as a training set;
S12, constructing the feature learning network based on the training set and a pre-training network;
S13, training the feature learning network based on a binary cross entropy loss function to obtain the trained feature learning network;
S14, learning the sample data with the trained feature learning network to obtain the score and location of each line segment of the sample data, and taking the line segments whose scores and locations meet preset requirements as positive sample data;
S15, constructing the feature output network, performing threshold binarization on the positive sample data based on the feature output network to generate a preliminary line segment mask, and taking the preliminary line segment mask larger than a preset threshold as the final line segment mask;
the architecture of the feature learning network is shown in table 1:
TABLE 1 architecture of a feature learning network
The architecture of the feature output network is shown in table 2:
TABLE 2 architecture of the feature output network
Preferably, the architecture of the line segment segmentation network in S20 is shown in Table 3:
TABLE 3 Architecture of the line segment segmentation network
Preferably, in S30, connecting the plurality of fragmented line segments into two straight lines using the line segment connection method includes:
S31, performing a morphological closing operation and rectangle fitting on the plurality of fragmented line segments to obtain a rectangle corresponding one-to-one to each fragmented line segment, and obtaining the long-side length and the offset angle of each rectangle;
S32, taking the rectangle with the largest long-side length as a first candidate rectangle, and taking each rectangle whose long-side length is greater than a preset length and less than the long-side length of the first candidate rectangle as a first reference rectangle;
S33, acquiring the Euclidean distance from the center of each first reference rectangle to the center of the first candidate rectangle;
S34, acquiring all first reference rectangles whose Euclidean distance is smaller than a preset distance, and taking the absolute value of the difference between the offset angle of each such first reference rectangle and the offset angle of the first candidate rectangle as a first angle difference;
S35, acquiring all first reference rectangles whose first angle difference is smaller than a preset angle, and connecting the fragmented line segments corresponding to these first reference rectangles to the fragmented line segment corresponding to the first candidate rectangle in order of increasing Euclidean distance, to obtain a first straight line;
S36, among the rectangles corresponding to all fragmented line segments not included in the first straight line, taking the rectangle with the largest long-side length as a second candidate rectangle, and taking each rectangle not included in the first straight line whose long-side length is greater than the preset length and less than the long-side length of the second candidate rectangle as a second reference rectangle;
S37, acquiring the Euclidean distance from the center of each second reference rectangle to the center of the second candidate rectangle;
S38, acquiring all second reference rectangles whose Euclidean distance is smaller than the preset distance, and taking the absolute value of the difference between the offset angle of each such second reference rectangle and the offset angle of the second candidate rectangle as a second angle difference;
and S39, acquiring all second reference rectangles whose second angle difference is smaller than the preset angle, and connecting the fragmented line segments corresponding to these second reference rectangles to the fragmented line segment corresponding to the second candidate rectangle in order of increasing Euclidean distance, to obtain a second straight line.
By applying the technical scheme of the invention, a feature learning network, a feature output network and a line segment segmentation network are designed using deep learning, giving the method autonomous learning capability and good recognition and localization performance on the edge of the Y waveguide modulator. The detection of the end face angle of the Y waveguide modulator of the fiber-optic gyroscope is thereby realized, and edge recognition and end face angle detection can be achieved even in complex environments. Compared with traditional image detection algorithms, the method greatly improves the edge recognition rate and the angle interpretation accuracy, and greatly reduces the requirements on the background light source.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a flow chart of a method for detecting an end face angle of a Y waveguide modulator of a fiber-optic gyroscope according to an embodiment of the invention;
FIG. 2 is a schematic diagram illustrating a method for detecting an end face angle of a fiber-optic gyroscope Y-waveguide modulator according to an embodiment of the invention;
FIG. 3 illustrates a schematic diagram of the length of the long side and the offset angle of a rectangle provided in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating the Euclidean distance between the centers of rectangles according to an embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise. Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description. Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate. In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
As shown in FIG. 1 and FIG. 2, the present invention provides a method for detecting the end face angle of a Y waveguide modulator of a fiber-optic gyroscope, the method comprising:
S10, constructing a feature learning network based on all line segments of the edge of the end face of the Y waveguide modulator, and generating a final line segment mask based on the feature learning network and a feature output network;
S20, constructing a line segment segmentation network, and performing, with the line segment segmentation network, a feature search over all line segments of the edge of the end face of the Y waveguide modulator according to the final line segment mask, to obtain a plurality of fragmented line segments matching the final line segment mask;
S30, connecting the plurality of fragmented line segments into two straight lines using a line segment connection method;
and S40, fitting the two straight lines respectively and obtaining the included angle between the fitted straight lines, thereby completing the detection of the end face angle of the Y waveguide modulator.
The invention designs a feature learning network, a feature output network and a line segment segmentation network using deep learning. The method has autonomous learning capability and good recognition and localization performance on the edge of the Y waveguide modulator, realizes the detection of the end face angle of the Y waveguide modulator of the fiber-optic gyroscope, and can recognize the edge of the Y waveguide modulator and detect the end face angle in a complex environment. Compared with traditional image detection algorithms, it greatly improves the edge recognition rate and the angle interpretation accuracy, and greatly reduces the requirements on the background light source.
According to an embodiment of the present invention, in S10, constructing the feature learning network based on all line segments of the edge of the end face of the Y waveguide modulator, and generating the final line segment mask based on the feature learning network and the feature output network includes:
S11, taking all line segments of the edge of the end face of the Y waveguide modulator as sample data, and randomly selecting part of the sample data as a training set;
S12, constructing the feature learning network based on the training set and a pre-training network;
S13, training the feature learning network based on a binary cross entropy loss function to obtain the trained feature learning network;
S14, learning the sample data with the trained feature learning network to obtain the score and location of each line segment of the sample data, and taking the line segments whose scores and locations meet preset requirements as positive sample data;
S15, constructing the feature output network, performing threshold binarization on the positive sample data based on the feature output network to generate a preliminary line segment mask, and taking the preliminary line segment mask larger than a preset threshold as the final line segment mask;
The architecture of the feature learning network is shown in Table 1:
TABLE 1 Architecture of the feature learning network
In Table 1, Conv1-1 to Conv1-7 denote convolutional layers 1-1 to 1-7, Pool1-1 to Pool1-3 denote pooling layers 1-1 to 1-3, and Fc1 to Fc3 denote fully connected layers 1 to 3. A filter size written as "k × k, n" denotes n convolution kernels of size k × k (for example, "3 × 3, 64" means 64 kernels of size 3 × 3; the table uses 64, 128 and 256 kernels), and "3 × 3, s = 2" denotes a 3 × 3 kernel with a stride of 2.
In Table 1, the feature map of each convolutional layer is averaged and up-sampled to the size of the feature map of the preceding convolutional layer. In this way, the averaged feature maps of the preceding and current layers can be combined by an element-level operation, expressed as an element-wise product. The positive sample data then enter the feature output network, which outputs the full-size mask features.
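A minimal PyTorch sketch of the cross-layer operation described above (channel-averaging a feature map, upsampling the current one to the spatial size of the preceding layer's map, and combining the two by an element-wise product); the tensor names and sizes are illustrative and not taken from the original tables:

```python
import torch
import torch.nn.functional as F

def fuse_feature_maps(prev_fmap: torch.Tensor, curr_fmap: torch.Tensor) -> torch.Tensor:
    """Average each feature map over its channels, upsample the current one to the
    preceding layer's spatial size, and combine them with an element-wise product."""
    prev_avg = prev_fmap.mean(dim=1, keepdim=True)          # (N, 1, H_prev, W_prev)
    curr_avg = curr_fmap.mean(dim=1, keepdim=True)          # (N, 1, H_curr, W_curr)
    curr_up = F.interpolate(curr_avg, size=prev_avg.shape[-2:],
                            mode="bilinear", align_corners=False)
    return prev_avg * curr_up                                # element-level product

# Example with dummy tensors (batch of 1, 64- and 128-channel maps).
prev = torch.randn(1, 64, 128, 128)
curr = torch.randn(1, 128, 64, 64)
print(fuse_feature_maps(prev, curr).shape)  # torch.Size([1, 1, 128, 128])
```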
The architecture of the feature output network is shown in Table 2:
TABLE 2 Architecture of the feature output network
In Table 2, DConv2-1 to DConv2-6 denote deconvolution layers 2-1 to 2-6, and Conv2-1 to Conv2-9 denote convolutional layers 2-1 to 2-9. As in Table 1, a filter size written as "k × k, n" denotes n kernels of size k × k; the filter sizes used are 2 × 2 kernels with 256, 128, 64, 16 and 8 kernels, and 3 × 3 kernels with 128, 64, 32, 16 and 8 kernels.
In the feature learning network, each line segment is assigned a binary class label
g_i ∈ {0, 1},
where g_i denotes the label of the i-th line segment (g_i = 1 for a positive line segment and g_i = 0 otherwise).
In S13 of the present invention, the binary cross entropy loss function is:
L = -(1/N) Σ_{i=1}^{N} [ g_i·log(p_i) + (1 - g_i)·log(1 - p_i) ],
where N is the total number of line segments and p_i is the score of the i-th line segment.
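A short sketch of this loss on per-segment scores (standard binary cross entropy as written above; the score and label tensors below are dummy data for illustration):

```python
import torch

def segment_bce_loss(p: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
    """Binary cross entropy over N segment scores p_i in (0, 1) and labels g_i in {0, 1}."""
    eps = 1e-7                      # guard against log(0)
    p = p.clamp(eps, 1.0 - eps)
    return -(g * torch.log(p) + (1.0 - g) * torch.log(1.0 - p)).mean()

scores = torch.tensor([0.9, 0.2, 0.7, 0.1])   # p_i: predicted segment scores
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])   # g_i: segment labels
print(segment_bce_loss(scores, labels))        # matches torch.nn.BCELoss()(scores, labels)
```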
In S15 of the present invention, in order to improve the positioning accuracy, threshold binarization is performed on the positive sample data according to the following equation:
M_i(x, y) = B_i(x, y) · g_i,
where M_i(x, y) is the pixel value of the i-th positive sample and B_i(x, y) is its binary mask.
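A minimal NumPy sketch of this step (the threshold value, array shape and interpretation of B_i as a thresholded probability map are illustrative assumptions):

```python
import numpy as np

def segment_mask(prob_map: np.ndarray, g_i: int, threshold: float = 0.5) -> np.ndarray:
    """Binarize a per-pixel map B_i(x, y) and scale it by the segment label g_i,
    giving M_i(x, y) = B_i(x, y) * g_i as in the equation above."""
    b = (prob_map > threshold).astype(np.uint8)   # binary mask B_i(x, y)
    return b * g_i                                 # M_i(x, y)

prob = np.random.rand(8, 8)      # dummy network output for one positive sample
print(segment_mask(prob, g_i=1))
```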
According to an embodiment of the present invention, the architecture of the line segment segmentation network in S20 is shown in Table 3:
TABLE 3 Architecture of the line segment segmentation network
In Table 3, the line segment segmentation network has four groups of convolutional layers and four groups of deconvolution layers. Conv3-1 to Conv3-19 denote convolutional layers 3-1 to 3-19, Pool3-1 to Pool3-4 denote pooling layers 3-1 to 3-4, and DConv3-1 to DConv3-4 denote deconvolution layers 3-1 to 3-4. As before, a filter size written as "k × k, n" denotes n kernels of size k × k and "s = 2" denotes a stride of 2; the network uses 3 × 3 kernels with 64, 128, 256, 512 and 1024 kernels, 2 × 2 kernels with 1024, 512, 256 and 128 kernels, and a final "1 × 1, 2" layer with two 1 × 1 kernels.
In Table 3, the four groups of deconvolution layers are placed after the four groups of convolutional layers in order to achieve pixel-level recognition and upsampling of the feature map. A Dropout layer is placed after each deconvolution layer to prevent overfitting. Binary cross entropy is used as the network loss function, the Softmax output is binarized with a threshold of 0.5, and the result is used as the input of the line segment connection (LSC) algorithm.
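Since Table 3 itself is not reproduced here, the following PyTorch sketch only illustrates one plausible encoder-decoder layout consistent with the layer counts and filter sizes listed above (four convolution groups with pooling, four deconvolution groups with Dropout, and a final 1 × 1, 2-channel Softmax output); the exact layer arrangement and the single-channel (grayscale) input are assumptions, not the patented architecture:

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # two 3x3 convolutions followed by 2x2 max pooling (one "group" of the encoder)
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

def deconv_block(c_in, c_out, p_drop=0.5):
    # 2x2 transposed convolution for upsampling, a 3x3 convolution, and Dropout
    return nn.Sequential(
        nn.ConvTranspose2d(c_in, c_out, 2, stride=2),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Dropout2d(p_drop),
    )

class SegmentSegNet(nn.Module):
    """Hypothetical line segment segmentation network: 4 encoder groups, 4 decoder groups,
    a 1x1 convolution to 2 classes, and a Softmax output binarized at 0.5 downstream."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(1, 64), conv_block(64, 128),
            conv_block(128, 256), conv_block(256, 512),
        )
        self.bottleneck = nn.Conv2d(512, 1024, 3, padding=1)
        self.decoder = nn.Sequential(
            deconv_block(1024, 512), deconv_block(512, 256),
            deconv_block(256, 128), deconv_block(128, 64),
        )
        self.head = nn.Conv2d(64, 2, 1)  # "1 x 1, 2" output layer

    def forward(self, x):
        x = self.encoder(x)
        x = torch.relu(self.bottleneck(x))
        x = self.decoder(x)
        return torch.softmax(self.head(x), dim=1)

net = SegmentSegNet()
out = net(torch.randn(1, 1, 256, 256))
mask = (out[:, 1] > 0.5).float()     # binarized Softmax output fed to the LSC step
print(out.shape, mask.shape)         # torch.Size([1, 2, 256, 256]) torch.Size([1, 256, 256])
```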
In the present invention, step S20 improves the interference resistance of the method.
According to an embodiment of the present invention, in S30, connecting the plurality of fragmented line segments into two straight lines using the line segment connection method includes:
S31, performing a morphological closing operation and rectangle fitting on the plurality of fragmented line segments to obtain a rectangle corresponding one-to-one to each fragmented line segment, and obtaining the long-side length and the offset angle of each rectangle;
S32, since a longer long side implies a higher confidence score, taking the rectangle with the largest long-side length as the first candidate rectangle, i.e. using the longest line segment as the starting line segment of the connection; and taking each rectangle whose long-side length is greater than a preset length and less than the long-side length of the first candidate rectangle as a first reference rectangle, so as to remove possible false alarms and avoid angle misjudgment caused by line segments that are too short;
S33, acquiring the Euclidean distance from the center of each first reference rectangle to the center of the first candidate rectangle;
S34, acquiring all first reference rectangles whose Euclidean distance is smaller than a preset distance, and taking the absolute value of the difference between the offset angle of each such first reference rectangle and the offset angle of the first candidate rectangle as a first angle difference;
S35, acquiring all first reference rectangles whose first angle difference is smaller than a preset angle, and connecting the fragmented line segments corresponding to these first reference rectangles to the fragmented line segment corresponding to the first candidate rectangle in order of increasing Euclidean distance, to obtain a first straight line;
S36, among the rectangles corresponding to all fragmented line segments not included in the first straight line, taking the rectangle with the largest long-side length as a second candidate rectangle, and taking each rectangle not included in the first straight line whose long-side length is greater than the preset length and less than the long-side length of the second candidate rectangle as a second reference rectangle;
S37, acquiring the Euclidean distance from the center of each second reference rectangle to the center of the second candidate rectangle;
S38, acquiring all second reference rectangles whose Euclidean distance is smaller than the preset distance, and taking the absolute value of the difference between the offset angle of each such second reference rectangle and the offset angle of the second candidate rectangle as a second angle difference;
and S39, acquiring all second reference rectangles whose second angle difference is smaller than the preset angle, and connecting the fragmented line segments corresponding to these second reference rectangles to the fragmented line segment corresponding to the second candidate rectangle in order of increasing Euclidean distance, to obtain a second straight line.
In S31 of the present invention, performing the morphological closing operation on the plurality of fragmented line segments comprises applying dilation followed by erosion to the fragmented line segments to obtain closed fragmented line segments, which can be expressed by the following equation:
I_c = (I_p ⊕ K) ⊖ K,
where I_c denotes the closed fragmented line segments, I_p denotes the fragmented line segments, K is a 3 × 3 structuring element, ⊕ denotes dilation and ⊖ denotes erosion.
The closed fragmented line segments are then fitted with rectangles, one rectangle r_i corresponding to each fragmented line segment, giving a rectangle set R = {r_1, r_2, ..., r_i, ..., r_m}, where m is the total number of rectangles. l_i is the long-side length of rectangle r_i and θ_i is its offset angle, as shown in FIG. 3, where the offset angle is the angle between the long side of the rectangle and the horizontal direction.
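An OpenCV sketch of this closing and rectangle-fitting step (a minimal illustration, assuming the fragmented line segments are given as a binary mask image and OpenCV >= 4; note that minAreaRect's angle convention varies between versions, so the long-side angle mapping below is a simplification):

```python
import cv2
import numpy as np

def fit_rectangles(mask: np.ndarray):
    """Close the fragmented segments with a 3x3 kernel, then fit one rotated rectangle
    per connected fragment, returning (center, long-side length, offset angle in degrees)."""
    kernel = np.ones((3, 3), np.uint8)
    closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)     # dilation then erosion
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    rects = []
    for cnt in contours:
        (cx, cy), (w, h), angle = cv2.minAreaRect(cnt)
        long_side = max(w, h)
        # approximate angle of the long side relative to the horizontal direction
        theta = angle if w >= h else angle + 90.0
        rects.append(((cx, cy), long_side, theta % 180.0))
    return rects

# Dummy mask with two short strokes standing in for fragmented edge segments.
demo = np.zeros((100, 100), np.uint8)
cv2.line(demo, (10, 80), (40, 60), 255, 2)
cv2.line(demo, (60, 40), (90, 20), 255, 2)
print(fit_rectangles(demo))
```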
In S33 or S37 of the present invention, the Euclidean distance d_ij from the center of each first/second reference rectangle to the center of the first/second candidate rectangle is obtained by the following formula, as shown in FIG. 4:
d_ij = || (c_x, c_y) − (c′_x, c′_y) ||_2,
where (c_x, c_y) is the center coordinate of the first/second candidate rectangle, (c′_x, c′_y) is the center coordinate of the j-th first/second reference rectangle, and ||·||_2 denotes the L2 norm.
In S34 or S38 of the present invention, the first/second angle difference Δθ_ij is obtained by the following equation:
Δθ_ij = |θ_i − θ_j|,
where θ_i is the offset angle of the first/second candidate rectangle and θ_j is the offset angle of the j-th first/second reference rectangle.
In addition, in the present embodiment, the following threshold parameters may be adopted: preset length L_th = 10, preset distance D_th = 150, and preset angle θ_th = 3.
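Putting steps S32-S39 and the fitting of S40 together, the following hedged sketch shows one way the connection and angle readout could be implemented on top of the rectangles from the earlier fit_rectangles sketch (function names, the demo data and the greedy selection details are illustrative; the thresholds follow the values given above):

```python
import numpy as np
import cv2

L_TH, D_TH, THETA_TH = 10, 150, 3     # preset length, distance and angle thresholds

def connect_line(rects, used):
    """Pick the unused rectangle with the longest long side as the candidate (S32/S36),
    keep unused reference rectangles passing the length, distance and angle checks
    (S33/S34, S37/S38), and return their indices ordered by increasing distance."""
    free = [i for i in range(len(rects)) if i not in used]
    cand = max(free, key=lambda i: rects[i][1])
    (ccx, ccy), clen, cth = rects[cand]
    refs = []
    for i in free:
        if i == cand:
            continue
        (cx, cy), ln, th = rects[i]
        if not (L_TH < ln < clen):                 # long-side length filter
            continue
        d = np.hypot(cx - ccx, cy - ccy)           # Euclidean distance d_ij
        if d >= D_TH or abs(th - cth) >= THETA_TH: # distance and angle-difference filters
            continue
        refs.append((d, i))
    return [cand] + [i for _, i in sorted(refs)]   # connect in order of increasing distance

def line_angle(points):
    """Fit a straight line through the connected fragment centers and return its angle."""
    vx, vy, _, _ = cv2.fitLine(np.float32(points), cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    return np.degrees(np.arctan2(vy, vx))

# rects: ((cx, cy), long_side_length, offset_angle) tuples, e.g. from fit_rectangles(...).
rects = [((10, 80), 40, 145), ((45, 55), 30, 146), ((60, 40), 35, 100), ((88, 25), 25, 101)]
first = connect_line(rects, used=set())            # first straight line (S35)
second = connect_line(rects, used=set(first))      # second straight line (S39)
a1 = line_angle([rects[i][0] for i in first])
a2 = line_angle([rects[i][0] for i in second])
included = abs(a1 - a2) % 180
print(first, second, min(included, 180 - included))  # S40: included angle of the two lines
```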
Spatially relative terms, such as "above", "over", "on top of", "upper" and the like, may be used herein for ease of description to describe the spatial relationship of one device or feature to another device or feature as illustrated in the figures. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if a device in the figures is turned over, devices described as "above" or "on top of" other devices or features would then be oriented "below" or "beneath" the other devices or features. Thus, the exemplary term "above" can encompass both an orientation of "above" and "below". The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
It should be noted that the terms "first", "second", and the like are used to define the components, and are only used for convenience of distinguishing the corresponding components, and the terms have no special meanings unless otherwise stated, and therefore, the scope of the present invention should not be construed as being limited.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.