Intelligent point cloud monitoring method for a rail transit pantograph-catenary system
1. An intelligent point cloud monitoring method for a rail transit pantograph-catenary system, characterized by comprising the following steps: step one: performing computer image preprocessing on the left and right images acquired by a binocular camera, and segmenting the monitored pantograph-catenary target object from the images; step two: using a cloud computer to automatically extract the rectangular region where the target is located, and performing stereo matching on this region to obtain the disparity of the measured object; step three: using the cloud computer to compute, from the disparity and the reprojection matrix obtained by binocular calibration, the three-dimensional point cloud of the object at a given viewing angle; step four: performing, on the cloud computer, ICP registration of the point clouds obtained from steps one to three at different viewing angles, to obtain a complete three-dimensional point cloud image of the monitored target object.
2. The intelligent point cloud monitoring method for a rail transit pantograph-catenary system according to claim 1, characterized in that step one comprises: 1a: performing threshold-segmentation binarization on the grayscale versions of the left and right images using the maximum between-class variance (Otsu) method, and then overlaying each binary image on its original grayscale image to obtain the target images; 1b: the preprocessed target image R retains the original gray values in the target region while the background becomes pure white, so that the target object is segmented.
3. The intelligent point cloud monitoring method for a rail transit pantograph-catenary system according to claim 1, characterized in that step two comprises: 2a: automatically enclosing the target object in the preprocessed left and right images R with a minimum bounding rectangle, the left-image rectangle being denoted Rect_L(X_L, Y_L, W_L, H_L) and the right-image rectangle Rect_R(X_R, Y_R, W_R, H_R), where X and Y denote the abscissa and ordinate of the upper-left corner of the rectangle in the image, and W and H denote its width and height; 2b: during stereo matching, the valid matching region of the image to be matched (the right image) starts at column numDisparities, so at least numDisparities columns must remain to the left of the target object in the image to be matched; the region automatically extracted by the algorithm is denoted Rect_ROI(X_ROI, Y_ROI, W_ROI, H_ROI), and the values of these variables are computed from the two bounding rectangles and numDisparities; 2c: performing stereo matching on the preprocessed left and right images with the semi-global block matching (SGBM) algorithm to obtain the disparity; 2d: the disparity being referenced to the left image, modifying the disparity matrix according to the left image to eliminate part of the mismatches at the edges, with D(x, y) set to 0 where L(x, y) = 255 and left unchanged otherwise, where L(x, y) denotes the gray value at an arbitrary point (x, y) of the left image and D(x, y) denotes the disparity value at the same point (x, y) of the disparity matrix.
4. The intelligent point cloud monitoring method for a rail transit pantograph-catenary system according to claim 1, characterized in that step three comprises: 3a: computing the coordinates of each point in the three-dimensional point cloud from the disparity obtained by stereo matching and the reprojection matrix Q obtained by binocular calibration, the reprojection matrix being

    Q = | 1  0   0      -c_x |
        | 0  1   0      -c_y |
        | 0  0   0        f  |
        | 0  0  -1/T_x    0  |

in which c_x denotes the x coordinate of the principal point in the left image, c_y denotes the y coordinate of the principal point in the left image, f denotes the focal length computed after stereo calibration, and T_x denotes the baseline distance between the two cameras; given a two-dimensional homogeneous point (x, y) and its associated disparity d, the point is projected into three dimensions as

    [X  Y  Z  W]^T = Q · [x  y  d  1]^T

and the three-dimensional coordinates of the point are (X/W, Y/W, Z/W).
5. The intelligent point cloud monitoring method for a rail transit pantograph-catenary system according to claim 1, characterized in that step four comprises: 4a: reading the model point cloud X and the target point cloud P, the numbers of points in the two sets being N_x and N_p respectively; 4b: searching, for each point p_i in the target point cloud P, the nearest corresponding point y_i in the model point cloud X, the corresponding points forming a new point set Y = {y_i}, where y_i ∈ X; 4c: separately computing the centroid μ_p of the target point cloud P and the centroid μ_y of the model point subset Y, and the cross-covariance matrix Σ_py of the two point clouds; 4d: constructing from the cross-covariance matrix Σ_py a 4 × 4 symmetric matrix Q(Σ_py); 4e: computing the eigenvalues of the matrix Q(Σ_py), the eigenvector corresponding to the largest eigenvalue being the optimal rotation expressed as a unit quaternion q_R, from which the optimal translation vector q_T = μ_y − R(q_R)·μ_p is then computed; 4f: applying the rigid-body transformation (R(q_R), q_T) obtained in step 4e to the target point cloud P to obtain the new transformed point cloud P_{k+1}, and computing the mean squared Euclidean distance d_{k+1} between the target point cloud at its new position and the model point cloud X; 4g: if the decrease of this distance between two consecutive iterations is less than a given threshold τ, i.e. d_k − d_{k+1} < τ, terminating the iteration, otherwise returning to step 4b until the condition d_k − d_{k+1} < τ is satisfied.
Background
The operational safety of rail transit vehicles hinges on the safety and reliability of the equipment in the pantograph-catenary system, so a wireless video monitoring system is usually deployed to monitor its operating condition in real time. Conventional monitoring uses a planar camera with two-dimensional image recognition, which suffers from a number of recognition problems. Image recognition systems have therefore begun to move toward the more advanced field of three-dimensional image recognition. A three-dimensional point cloud acquired by stereoscopic vision is a set of data points describing the three-dimensional information of an object; rapidly acquiring a complete three-dimensional point cloud of a target with a complex shape and reconstructing its three-dimensional model is a key technology under development. Methods for acquiring three-dimensional point cloud data fall into two main categories: contact measurement, acquired through mechanical contact, such as coordinate measuring machine (CMM) measurement; and non-contact measurement, which requires no mechanical contact, such as measurement based on machine vision techniques. Three-dimensional scanning is an effective means of acquiring three-dimensional point clouds and is widely used in fields such as target recognition, three-dimensional reconstruction, geographic information system (GIS) construction, medical assistance, and restoration of ancient cultural relics. However, the high-precision scanners used in China, such as laser scanners, mostly depend on imports; they are accurate but costly, and are ill-suited to the equipment environment of rail transit vehicles.
Therefore, how to replace a complex laser three-dimensional monitoring system with a binocular vision system that is simple in structure, easy to install on rail transit vehicles, and able to rapidly acquire complete three-dimensional point cloud monitoring data of a complex target object has become an important problem for the safety monitoring of the pantograph-catenary system.
Disclosure of Invention
Objective: to overcome the shortcomings of the prior art, the present invention provides an intelligent point cloud monitoring method for a rail transit pantograph-catenary system.
Technical solution: to solve the above technical problem, the invention adopts the following technical scheme:
An intelligent point cloud monitoring method for a rail transit pantograph-catenary system comprises the following steps:
Step one: performing computer image preprocessing on the left and right images acquired by a binocular camera, and segmenting the monitored pantograph-catenary target object from the images;
The first step comprises the following steps:
1a: performing threshold-segmentation binarization on the grayscale versions of the left and right images using the maximum between-class variance (Otsu) method, and then overlaying each binary image on its original grayscale image to obtain the target images;
1b: the preprocessed target image R retains the original gray values in the target region while the background becomes pure white, so that the target object is segmented;
Step two: using a cloud computer to automatically extract the rectangular region where the target is located, and performing stereo matching on this region to obtain the disparity of the measured object;
The second step comprises the following steps:
2a: automatically enclosing the target object in the preprocessed left and right images R with a minimum bounding rectangle, the left-image rectangle being denoted Rect_L(X_L, Y_L, W_L, H_L) and the right-image rectangle Rect_R(X_R, Y_R, W_R, H_R), where X and Y denote the abscissa and ordinate of the upper-left corner of the rectangle in the image, and W and H denote the width and height of the rectangle.
2b: during stereo matching, the valid matching region of the image to be matched (the right image) starts at column numDisparities, so at least numDisparities columns must remain to the left of the target object in the image to be matched. The region automatically extracted by the algorithm is denoted Rect_ROI(X_ROI, Y_ROI, W_ROI, H_ROI), and the values of these variables are computed from the two bounding rectangles and numDisparities.
2c: performing stereo matching on the preprocessed left and right images with the semi-global block matching (SGBM) algorithm to obtain the disparity;
2d: the disparity being referenced to the left image, modifying the disparity matrix according to the left image to eliminate part of the mismatches at the edges:

    D(x, y) = 0 if L(x, y) = 255;  D(x, y) unchanged otherwise

where L(x, y) denotes the gray value at an arbitrary point (x, y) of the left image and D(x, y) denotes the disparity value at the same point (x, y) of the disparity matrix;
Step three: using the cloud computer to compute, from the disparity and the reprojection matrix obtained by binocular calibration, the three-dimensional point cloud of the object at a given viewing angle;
The third step comprises:
3a: computing the coordinates of each point in the three-dimensional point cloud from the disparity obtained by stereo matching and the reprojection matrix Q obtained by binocular calibration, the reprojection matrix being:

    Q = | 1  0   0      -c_x |
        | 0  1   0      -c_y |
        | 0  0   0        f  |
        | 0  0  -1/T_x    0  |

where c_x denotes the x coordinate of the principal point in the left image; c_y denotes the y coordinate of the principal point in the left image; f denotes the focal length computed after stereo calibration; and T_x denotes the baseline distance between the two cameras;
Given a two-dimensional homogeneous point (x, y) and its associated disparity d, the point is projected into three dimensions as follows:

    [X  Y  Z  W]^T = Q · [x  y  d  1]^T

the three-dimensional coordinates of the point are then (X/W, Y/W, Z/W);
Step four: the cloud computer performs ICP registration on the point clouds obtained from steps one to three at different viewing angles, yielding a complete three-dimensional point cloud image of the monitored target object.
The fourth step comprises:
4a: reading the model point cloud X and the target point cloud P, the numbers of points in the two sets being N_x and N_p respectively;
4b: searching, for each point p_i in the target point cloud P, the nearest corresponding point y_i in the model point cloud X, the corresponding points forming a new point set Y = {y_i}, where y_i ∈ X;
4c: separately computing the centroid μ_p of the target point cloud P and the centroid μ_y of the model point subset Y, and the cross-covariance matrix Σ_py of the two point clouds;
4d: constructing from the cross-covariance matrix Σ_py a 4 × 4 symmetric matrix Q(Σ_py);
4e: computing the eigenvalues of the matrix Q(Σ_py); the eigenvector corresponding to the largest eigenvalue is the optimal rotation expressed as a unit quaternion q_R, from which the optimal translation vector q_T = μ_y − R(q_R)·μ_p is then computed;
4f: applying the rigid-body transformation (R(q_R), q_T) obtained in step 4e to the target point cloud P to obtain the new transformed point cloud P_{k+1}, and computing the mean squared Euclidean distance d_{k+1} between the target point cloud at its new position and the model point cloud X;
4g: if the decrease of this distance between two consecutive iterations is less than a given threshold τ, i.e. d_k − d_{k+1} < τ, terminating the iteration; otherwise returning to step 4b until the condition d_k − d_{k+1} < τ is satisfied.
Beneficial effects: the intelligent point cloud monitoring method for a rail transit pantograph-catenary system of the present invention uses a binocular vision system together with cloud computing equipment to rapidly acquire a complete three-dimensional point cloud image of the complex-shaped pantograph-catenary target object. The system is simple in structure, suits the installation and operating environment of rail transit vehicles, facilitates the monitoring of pantograph-catenary operating safety, effectively replaces precise and complex imported laser three-dimensional scanning systems, reduces cost, and improves efficiency.
Drawings
FIG. 1 is a flow chart of the intelligent point cloud acquisition according to the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
An intelligent point cloud monitoring method for a rail transit pantograph-catenary system comprises the following steps:
Step one: performing computer image preprocessing on the left and right images acquired by a binocular camera, and segmenting the monitored pantograph-catenary target object from the images;
The first step comprises the following steps:
1a: performing threshold-segmentation binarization on the grayscale versions of the left and right images using the maximum between-class variance (Otsu) method, and then overlaying each binary image on its original grayscale image to obtain the target images;
1b: the preprocessed target image R retains the original gray values in the target region while the background becomes pure white, so that the target object is segmented;
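By way of illustration only, steps 1a and 1b can be sketched in Python with OpenCV as follows; the function name, file names, and the assumption that the target is darker than its background are illustrative choices, not part of the claimed method:

```python
import cv2
import numpy as np

def preprocess(gray):
    """Otsu binarization (1a), then overlay on the original grayscale image
    so the target keeps its gray values and the background turns white (1b).
    Assumes the target is darker than the background; invert if not."""
    # 1a: Otsu automatically selects the threshold between target and background.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # 1b: keep original gray values where the mask marks the (dark) target,
    # force everything else to pure white (255).
    return np.where(mask == 0, gray, 255).astype(np.uint8)

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # illustrative file names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
R_left, R_right = preprocess(left), preprocess(right)
```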
Step two: processing on a cloud computer, automatically extracting the rectangular region where the target is located, and performing stereo matching on this region to obtain the disparity of the measured object;
The second step comprises the following steps:
2a: automatically enclosing the target object in the preprocessed left and right images R with a minimum bounding rectangle, the left-image rectangle being denoted Rect_L(X_L, Y_L, W_L, H_L) and the right-image rectangle Rect_R(X_R, Y_R, W_R, H_R), where X and Y denote the abscissa and ordinate of the upper-left corner of the rectangle in the image, and W and H denote the width and height of the rectangle.
2b: during stereo matching, the valid matching region of the image to be matched (the right image) starts at column numDisparities, so at least numDisparities columns must remain to the left of the target object in the image to be matched. The region automatically extracted by the algorithm is denoted Rect_ROI(X_ROI, Y_ROI, W_ROI, H_ROI), and the values of these variables are computed from the two bounding rectangles and numDisparities.
2c: performing stereo matching on the preprocessed left and right images with the semi-global block matching (SGBM) algorithm to obtain the disparity;
2d: the disparity being referenced to the left image, modifying the disparity matrix according to the left image to eliminate part of the mismatches at the edges:

    D(x, y) = 0 if L(x, y) = 255;  D(x, y) unchanged otherwise

where L(x, y) denotes the gray value at an arbitrary point (x, y) of the left image and D(x, y) denotes the disparity value at the same point (x, y) of the disparity matrix;
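The following Python sketch illustrates steps 2a-2d under stated assumptions: the source does not give the exact Rect_ROI formula, so the region below merely spans both bounding rectangles and reserves numDisparities columns on the left, and the SGBM parameters are likewise illustrative:

```python
import cv2
import numpy as np

num_disparities = 64  # illustrative; SGBM requires a multiple of 16

def bounding_rect(img):
    """2a: minimum bounding rectangle of the non-white (target) pixels."""
    ys, xs = np.where(img < 255)
    x, y = xs.min(), ys.min()
    return x, y, xs.max() - x + 1, ys.max() - y + 1

def stereo_disparity(R_left, R_right):
    xl, yl, wl, hl = bounding_rect(R_left)
    xr, yr, wr, hr = bounding_rect(R_right)
    # 2b (assumed formula): a common ROI spanning both rectangles, expanded
    # by num_disparities columns on the left so the matcher has valid range.
    x0 = max(min(xl, xr) - num_disparities, 0)
    y0 = min(yl, yr)
    x1 = max(xl + wl, xr + wr)
    y1 = max(yl + hl, yr + hr)
    roi_l, roi_r = R_left[y0:y1, x0:x1], R_right[y0:y1, x0:x1]
    # 2c: semi-global block matching on the extracted region.
    sgbm = cv2.StereoSGBM_create(minDisparity=0,
                                 numDisparities=num_disparities,
                                 blockSize=7)
    disp = sgbm.compute(roi_l, roi_r).astype(np.float32) / 16.0  # fixed-point output
    # 2d: disparity is referenced to the left image; zero it on the
    # pure-white background to suppress mismatches at the edges.
    disp[roi_l == 255] = 0
    return disp, (x0, y0)
```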
Step three: using the cloud computer to compute, from the disparity and the reprojection matrix obtained by binocular calibration, the three-dimensional point cloud of the object at a given viewing angle;
The third step comprises:
3a: computing the coordinates of each point in the three-dimensional point cloud from the disparity obtained by stereo matching and the reprojection matrix Q obtained by binocular calibration, the reprojection matrix being:

    Q = | 1  0   0      -c_x |
        | 0  1   0      -c_y |
        | 0  0   0        f  |
        | 0  0  -1/T_x    0  |

where c_x denotes the x coordinate of the principal point in the left image; c_y denotes the y coordinate of the principal point in the left image; f denotes the focal length computed after stereo calibration; and T_x denotes the baseline distance between the two cameras;
Given a two-dimensional homogeneous point (x, y) and its associated disparity d, the point is projected into three dimensions as follows:

    [X  Y  Z  W]^T = Q · [x  y  d  1]^T

the three-dimensional coordinates of the point are then (X/W, Y/W, Z/W);
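Step three maps directly onto OpenCV's reprojectImageTo3D, which performs exactly the Q-matrix multiplication and homogeneous division described above. The calibration values below are placeholders; in practice Q is produced by stereo calibration (e.g. cv2.stereoRectify):

```python
import cv2
import numpy as np

# Illustrative calibration values; in practice Q comes from cv2.stereoRectify.
c_x, c_y, f, T_x = 320.0, 240.0, 700.0, 0.12
Q = np.array([[1, 0, 0,        -c_x],
              [0, 1, 0,        -c_y],
              [0, 0, 0,           f],
              [0, 0, -1.0 / T_x,  0]], dtype=np.float64)

def to_point_cloud(disp):
    """Project each pixel (x, y, d) through Q and divide by the homogeneous
    coordinate W, yielding (X/W, Y/W, Z/W) for every pixel."""
    points = cv2.reprojectImageTo3D(disp.astype(np.float32), Q)
    valid = disp > 0            # keep only pixels with a real disparity
    return points[valid]        # N x 3 array of three-dimensional coordinates
```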
Step four: the cloud computer performs ICP registration on the point clouds obtained from steps one to three at different viewing angles, yielding a complete three-dimensional point cloud image of the monitored target object.
The fourth step comprises:
4a: reading the model point cloud X and the target point cloud P, the numbers of points in the two sets being N_x and N_p respectively;
4b: searching, for each point p_i in the target point cloud P, the nearest corresponding point y_i in the model point cloud X, the corresponding points forming a new point set Y = {y_i}, where y_i ∈ X;
4c: separately computing the centroid μ_p of the target point cloud P and the centroid μ_y of the model point subset Y, and the cross-covariance matrix Σ_py of the two point clouds;
4d: constructing from the cross-covariance matrix Σ_py a 4 × 4 symmetric matrix Q(Σ_py);
4e: computing the eigenvalues of the matrix Q(Σ_py); the eigenvector corresponding to the largest eigenvalue is the optimal rotation expressed as a unit quaternion q_R, from which the optimal translation vector q_T = μ_y − R(q_R)·μ_p is then computed;
4f: applying the rigid-body transformation (R(q_R), q_T) obtained in step 4e to the target point cloud P to obtain the new transformed point cloud P_{k+1}, and computing the mean squared Euclidean distance d_{k+1} between the target point cloud at its new position and the model point cloud X;
4g: if the decrease of this distance between two consecutive iterations is less than a given threshold τ, i.e. d_k − d_{k+1} < τ, terminating the iteration; otherwise returning to step 4b until the condition d_k − d_{k+1} < τ is satisfied.
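A compact NumPy sketch of the quaternion-based ICP iteration of steps 4a-4g follows; the nearest-neighbour search is brute force for clarity, and the returned rotation and translation are those of the final increment rather than the accumulated transform:

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix R(q_R) from a unit quaternion (q0, q1, q2, q3)."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0*q0+q1*q1-q2*q2-q3*q3, 2*(q1*q2-q0*q3),         2*(q1*q3+q0*q2)],
        [2*(q1*q2+q0*q3),         q0*q0-q1*q1+q2*q2-q3*q3, 2*(q2*q3-q0*q1)],
        [2*(q1*q3-q0*q2),         2*(q2*q3+q0*q1),         q0*q0-q1*q1-q2*q2+q3*q3]])

def icp(X, P, tau=1e-6, max_iter=50):
    """Register the target cloud P (N_p x 3) to the model cloud X (N_x x 3)."""
    d_prev = np.inf
    for _ in range(max_iter):
        # 4b: brute-force nearest corresponding point in X for each p_i.
        dists = np.linalg.norm(P[:, None, :] - X[None, :, :], axis=2)
        Y = X[np.argmin(dists, axis=1)]
        # 4c: centroids and cross-covariance of P and the subset Y.
        mu_p, mu_y = P.mean(axis=0), Y.mean(axis=0)
        sigma = (P - mu_p).T @ (Y - mu_y) / len(P)
        # 4d: 4 x 4 symmetric matrix Q(sigma).
        A = sigma - sigma.T
        delta = np.array([A[1, 2], A[2, 0], A[0, 1]])
        Qm = np.zeros((4, 4))
        Qm[0, 0] = np.trace(sigma)
        Qm[0, 1:] = Qm[1:, 0] = delta
        Qm[1:, 1:] = sigma + sigma.T - np.trace(sigma) * np.eye(3)
        # 4e: eigenvector of the largest eigenvalue is the unit quaternion q_R.
        w, v = np.linalg.eigh(Qm)
        R = quat_to_rot(v[:, np.argmax(w)])
        q_T = mu_y - R @ mu_p
        # 4f: apply (R, q_T) and measure the mean squared Euclidean distance.
        P = P @ R.T + q_T
        d = np.mean(np.sum((P - Y) ** 2, axis=1))
        # 4g: stop once the error decrease between iterations falls below tau.
        if d_prev - d < tau:
            break
        d_prev = d
    return R, q_T, P  # last increment and the registered target cloud
```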
The above description covers only preferred embodiments of the present invention. It should be noted that those skilled in the art may make various modifications and adaptations without departing from the principles of the invention, and such modifications and adaptations shall also fall within the scope of protection of the invention.