Device and method for calibrating positions of laser radar and camera


1. A method for calibrating the positions of a lidar and a camera, characterized in that an image of a calibration reference device is collected by the camera and the image coordinates of calibration points are acquired by an image algorithm, wherein the calibration reference device comprises at least two square flat plates which are connected in sequence, differ in color, and increase in side length in sequence; the four vertices of at least two of the square flat plates are respectively identified by the image algorithm, the vertices are taken as calibration points, and the image coordinates of the vertices are calibrated,

a point cloud of the calibration reference device is collected by the lidar and the three-dimensional coordinates of the calibration points are acquired by a point cloud algorithm, wherein the four vertices of at least two of the square flat plates are respectively identified by the point cloud algorithm and the three-dimensional coordinates of the vertices are calibrated,

and the position relationship between the lidar and the camera is determined by a PnP algorithm according to the image coordinates and the three-dimensional coordinates corresponding to the calibration points.

2. The method for calibrating the positions of the lidar and the camera as claimed in claim 1, wherein the distances from the lidar and the camera to the calibration reference device are estimated and adjusted to be of the same order of magnitude as the size of the calibration reference device, the calibration reference device is kept stationary, the image of the calibration reference device is captured by the camera, and the point cloud of the calibration reference device is captured by the lidar.

3. The method for calibrating the positions of the lidar and the camera as claimed in claim 1 or 2, wherein, for the image of the calibration reference device, the 4 edge lines of each flat plate are extracted by the Hough transform, and the image coordinates of the 4 vertices are obtained from the intersection points of these lines.

4. The method for calibrating the positions of the lidar and the camera as claimed in claim 1 or 3, wherein, for the point cloud of the calibration reference device, the plane of each square flat plate is segmented from the point cloud collected by the lidar by the RANSAC method, and the points on the plane are fitted to the square outline of the plate to obtain the three-dimensional coordinates of its four vertices.

5. The method for calibrating the positions of the lidar and the camera as claimed in claim 1 or 4, wherein the relative position parameters of the lidar and the camera are obtained by a PnP algorithm according to the intrinsic parameter matrix of the camera, the spatial points of the point cloud of the calibration reference device are re-projected onto the image of the calibration reference device according to the relative position parameters, and the re-projections are compared with the image coordinates to evaluate the position relationship between the lidar and the camera.

6. The method for calibrating the positions of the lidar and the camera as claimed in claim 5, wherein the camera is calibrated by Zhang Zhengyou's calibration method (Zhang's method) to obtain the intrinsic parameter matrix.

7. A system for calibrating the positions of a lidar and a camera, characterized by comprising an image calibration module, a point cloud calibration module and a calculation module,

wherein the image calibration module acquires the image coordinates of the calibration points by an image algorithm from the image of the calibration reference device collected by the camera, the calibration reference device comprising at least two square flat plates which are connected in sequence, differ in color, and increase in side length in sequence; the image calibration module respectively identifies the four vertices of at least two of the square flat plates by the image algorithm, takes the vertices as calibration points, and calibrates the image coordinates of the vertices,

the point cloud calibration module acquires the three-dimensional coordinates of the calibration points by a point cloud algorithm from the point cloud of the calibration reference device collected by the lidar, the point cloud calibration module respectively identifying the four vertices of at least two of the square flat plates by the point cloud algorithm and calibrating the three-dimensional coordinates of the vertices,

and the calculation module determines the position relationship between the lidar and the camera by a PnP algorithm from the image coordinates and the three-dimensional coordinates corresponding to the calibration points.

8. A lidar and camera position calibration device, characterized by comprising at least one memory and at least one processor;

the at least one memory being configured to store a machine-readable program;

the at least one processor being configured to invoke the machine-readable program to perform the method for lidar and camera position calibration as defined in any one of claims 1 to 6.

Background

Lidars and cameras are two commonly used environment-sensing sensors. A lidar actively emits laser light from its source and receives the light reflected from object surfaces with its sensing device; from this it calculates the distance to a target and obtains the three-dimensional coordinates of points in space, i.e., a point cloud. A camera senses passively: light reflected from object surfaces is converted into electrical signals by the photosensitive chip, yielding the color of each pixel, i.e., a two-dimensional image. Target categories can be identified in an image by image-recognition algorithms, but depth information is lost; depth information exists in a point cloud, but targets are hard to identify there. To improve environment perception, intelligent equipment such as autonomous vehicles or robots can mount a lidar and a camera simultaneously and fuse their information: a target such as a pedestrian or a vehicle is first identified in the image, and the distance of the corresponding target is then looked up in the point cloud. To find the correspondence between spatial points in the point cloud and pixels in the image, the relative position relationship between the lidar coordinate system and the camera coordinate system must be calibrated. Likewise, three-dimensional reconstruction of real scenes requires fusing lidar and camera data: for each point collected by the lidar, the corresponding pixel is found in the image to obtain its color, coloring the point cloud into a colored point cloud; this also requires the position relationship between the lidar and the camera to be calibrated in advance.

In the existing calibration method, the lidar and the camera observe the same square flat plate as the calibration object and collect a point cloud and an image; corresponding points in the point cloud and the image are found by eye as calibration point pairs, their three-dimensional and image coordinates are obtained by clicking manually in the point cloud file and the image file, and the position relationship between the lidar and the camera can be solved once enough calibration points are available. However, finding corresponding points by eye and picking coordinates manually introduces large subjective errors: the point clouds collected by some lidars are sparse, so the three-dimensional coordinates obtained by a manual click may actually belong to the nearest point in the spatial neighborhood. The distance between the sensors and the calibration plate is not necessarily of the same order of magnitude as the plate size, which easily causes calibration error; and because all calibration point pairs on one plate lie in a single plane, calibration points at multiple depths cannot be acquired at the same time, so the computed result may only be the optimal solution for one particular distance. All of these problems easily lead to large calibration errors and hamper the subsequent fusion of the lidar and the camera.

Disclosure of Invention

Aiming at the problems in the prior art, the invention provides a device and a method for calibrating the positions of a lidar and a camera. Corresponding points between the point cloud and the image are found automatically by algorithms, with high accuracy; the calibration device is three-dimensional rather than planar, so calibration point pairs at different depths can be acquired and the calibration result is more accurate; and the size of the calibration device can be adjusted, so lidars and cameras at different distances can be calibrated. The method therefore has strong practical value.

The specific scheme provided by the invention is as follows:

a method for calibrating the positions of a lidar and a camera comprises collecting an image of a calibration reference device by the camera and acquiring the image coordinates of calibration points by an image algorithm, wherein the calibration reference device comprises at least two square flat plates which are connected in sequence, differ in color, and increase in side length in sequence; the four vertices of at least two of the square flat plates are respectively identified by the image algorithm, the vertices are taken as calibration points, and the image coordinates of the vertices are calibrated,

a point cloud of the calibration reference device is collected by the lidar and the three-dimensional coordinates of the calibration points are acquired by a point cloud algorithm, wherein the four vertices of at least two of the square flat plates are respectively identified by the point cloud algorithm and the three-dimensional coordinates of the vertices are calibrated,

and the position relationship between the lidar and the camera is determined by a PnP algorithm according to the image coordinates and the three-dimensional coordinates corresponding to the calibration points.

Further, in the method for calibrating the positions of the lidar and the camera, the distances from the lidar and the camera to the calibration reference device are estimated and adjusted to be of the same order of magnitude as the size of the calibration reference device; the calibration reference device is kept stationary while images of it are collected by the camera and point clouds of it are collected by the lidar.

Furthermore, in the method for calibrating the positions of the lidar and the camera, the 4 edge lines of each flat plate are extracted from the image of the calibration reference device by the Hough transform, and the image coordinates of the 4 vertices are obtained from the intersection points of these lines.

Further, in the method for calibrating the positions of the lidar and the camera, the plane of each square flat plate is segmented from the point cloud collected by the lidar by the RANSAC method, and the points on the plane are fitted to the square outline of the plate to obtain the three-dimensional coordinates of its four vertices.

Further, in the method for calibrating the positions of the lidar and the camera, the relative position parameters of the lidar and the camera are obtained by the PnP algorithm according to the intrinsic parameter matrix of the camera; the spatial points of the point cloud of the calibration reference device are re-projected onto the image of the calibration reference device according to these parameters and compared with the image coordinates to evaluate the position relationship between the lidar and the camera.

Furthermore, in the method for calibrating the positions of the lidar and the camera, the camera is calibrated by Zhang Zhengyou's calibration method to obtain the intrinsic parameter matrix.
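For illustration only, a minimal sketch of this calibration step using OpenCV's implementation of Zhang's method is given below; the 9x6 checkerboard size and the file-name pattern are assumptions of this description, not part of the invention.

```python
import glob

import cv2
import numpy as np

# Inner-corner count of the checkerboard (an assumption; use the actual board).
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []  # 3-D board points and detected 2-D corners per view
for name in glob.glob("checkerboard_*.png"):
    gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# K is the intrinsic parameter matrix consumed later by the PnP step
# (assumes at least one checkerboard image was found above).
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
```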

A system for calibrating the positions of a lidar and a camera comprises an image calibration module, a point cloud calibration module and a calculation module,

wherein the image calibration module acquires the image coordinates of the calibration points by an image algorithm from the image of the calibration reference device collected by the camera, the calibration reference device comprising at least two square flat plates which are connected in sequence, differ in color, and increase in side length in sequence; the image calibration module respectively identifies the four vertices of at least two of the square flat plates by the image algorithm, takes the vertices as calibration points, and calibrates the image coordinates of the vertices,

the point cloud calibration module acquires the three-dimensional coordinates of the calibration points by a point cloud algorithm from the point cloud of the calibration reference device collected by the lidar, the point cloud calibration module respectively identifying the four vertices of at least two of the square flat plates by the point cloud algorithm and calibrating the three-dimensional coordinates of the vertices,

and the calculation module determines the position relationship between the lidar and the camera by a PnP algorithm from the image coordinates and the three-dimensional coordinates corresponding to the calibration points.

A lidar and camera position calibration device comprises at least one memory and at least one processor;

the at least one memory being configured to store a machine-readable program;

the at least one processor being configured to invoke the machine-readable program to perform the above method for lidar and camera position calibration.

The invention has the following advantages:

the invention provides a method for calibrating the positions of a lidar and a camera that computes their relative position. With the calibration reference device, the four vertices of each flat plate are easily identified both in the image collected by the camera and in the point cloud collected by the lidar, and the position relationship between the lidar and the camera is solved from at least 8 calibration point pairs by the PnP (Perspective-n-Point) algorithm. The implementation process is simple, the calibration device used is easy to manufacture and inexpensive, and the corresponding points between the point cloud and the image are found automatically, with high accuracy; moreover, the size of the calibration reference device is not restricted and calibration point pairs can be obtained over different depth ranges, so the calibration result is more accurate.

Drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.

Fig. 1 is a schematic view of an application scenario of the method of the present invention.

Fig. 2 is a schematic view of the calibration apparatus.

FIG. 3 is a schematic view of the calibration of the vertices of two square plates.

Reference numerals:

flat plate 1; flat plate 2; flat plate 3; connecting rod 4.

Detailed Description

The present invention is further described below in conjunction with the following figures and specific examples so that those skilled in the art may better understand the present invention and practice it, but the examples are not intended to limit the present invention.

The invention provides a method for calibrating the positions of a lidar and a camera, which comprises collecting an image of a calibration reference device by the camera and acquiring the image coordinates of calibration points by an image algorithm, wherein the calibration reference device comprises at least two square flat plates which are connected in sequence, differ in color, and increase in side length in sequence; the four vertices of at least two of the square flat plates are respectively identified by the image algorithm, the vertices are taken as calibration points, and the image coordinates of the vertices are calibrated,

a point cloud of the calibration reference device is collected by the lidar and the three-dimensional coordinates of the calibration points are acquired by a point cloud algorithm, wherein the four vertices of at least two of the square flat plates are respectively identified by the point cloud algorithm and the three-dimensional coordinates of the vertices are calibrated,

and the position relationship between the lidar and the camera is determined by a PnP algorithm according to the image coordinates and the three-dimensional coordinates corresponding to the calibration points.

With the method of the invention, the lidar and the camera observe the same calibration reference device; the image coordinates of the calibration points are obtained by the image algorithm, their three-dimensional coordinates are obtained by the point cloud algorithm, and the two are matched into calibration point pairs. When there are enough pairs, namely at least 8 (the four vertices of each of two plates), the relative position of the lidar and the camera can be computed quickly and accurately. This makes it easy to improve environment perception by fusing lidar and camera information in intelligent equipment such as autonomous vehicles or robots. Likewise, in three-dimensional reconstruction of real scenes, the method can be used to fuse the two sensors quickly, so that for each point in the point cloud collected by the lidar the corresponding pixel can be found in the image to obtain its color, coloring the point cloud into a colored point cloud.

In a specific application, in some embodiments of the present invention, the process is:

step 1: and roughly estimating the distances between the laser radar and the camera, and respectively adjusting the distances between the laser radar and the camera and the calibration reference device to the same order of magnitude. The image of the calibration device is collected through a camera, and the point cloud of the calibration device is collected through a laser radar. Referring to fig. 1 and 2, the calibration reference device comprises three square flat plates and a connecting rod 4, wherein the connecting rod 4 is connected with the flat plates 1-3, the flat plates are respectively a flat plate 1, a flat plate 2 and a flat plate 3, the side lengths of the three square plates are sequentially increased, and the three square plates are different in color.

Step 2: in the image (see fig. 3), flat plate 1 and flat plate 2 differ in color. Extract the 4 edge lines of the square of flat plate 1 with the Hough transform, then compute the intersection points of these lines to obtain the image coordinates of its 4 vertices. The image coordinates of the four vertices of flat plate 2 are obtained in the same way.
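For illustration only, a minimal sketch of this step with OpenCV is given below; the file name and the color-threshold bounds used to isolate one plate are assumptions of this description.

```python
import cv2
import numpy as np

img = cv2.imread("rig.png")                       # image of the calibration device
# Isolate one plate by its color (BGR bounds are assumptions), then edge-detect.
mask = cv2.inRange(img, (0, 0, 120), (80, 80, 255))
edges = cv2.Canny(mask, 50, 150)
# 1 px / 1 degree resolution; keep the 4 strongest lines as the plate's edges.
lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=80)
assert lines is not None and len(lines) >= 4, "expected the 4 plate edges"
lines = lines[:4]

def intersect(l1, l2):
    """Intersection of two Hough lines given in (rho, theta) form."""
    (r1, t1), (r2, t2) = l1[0], l2[0]
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(A)) < 1e-6:              # nearly parallel: no vertex here
        return None
    return np.linalg.solve(A, np.array([r1, r2]))  # (u, v) image coordinates

vertices = [p for i in range(4) for j in range(i + 1, 4)
            if (p := intersect(lines[i], lines[j])) is not None]
```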

Step 3: in the point cloud, segment the planes of the square flat plates from the point cloud collected by the lidar with the random sample consensus (RANSAC) method, as follows (a code sketch is given after step 35):

step 31: in the point cloud, three points are randomly selected, a plane is determined, and a distance threshold value is set to be 0.01 m, namely, only the distance from the point to the plane is smaller than the threshold value and is regarded as the point on the plane. Calculate how many points within this distance threshold satisfy this plane.

Step 32: randomly select three points again and determine a new plane; using the same distance threshold, check whether this plane contains more points of the point cloud than the best plane so far, and if so, record it as the current result.

Step 33: repeat step 32 in a loop for a sufficient number of iterations.

Step 34: end the iteration and record the current result; the points contained in the current plane form the segmented plane.

Step 35: remove the points contained in the plane obtained in step 34 from the lidar point cloud, and repeat steps 31 to 34 on the remaining point cloud to obtain the next plane, until the plane of each flat plate has been obtained.
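The following sketch implements steps 31 to 35 directly in NumPy. The 0.01 m threshold mirrors the text; the iteration count and function names are assumptions of this description.

```python
import numpy as np

def ransac_plane(points, thresh=0.01, iters=1000, seed=0):
    """Steps 31-34: keep the plane (through 3 random points) with the most inliers."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:        # degenerate (collinear) sample, retry
            continue
        n = n / np.linalg.norm(n)
        inliers = np.abs((points - p0) @ n) < thresh  # point-to-plane distance test
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return best

def segment_plates(cloud, n_plates=3):
    """Step 35: peel off one plane per plate, removing its inliers each round."""
    planes, remaining = [], cloud
    for _ in range(n_plates):
        mask = ransac_plane(remaining)
        planes.append(remaining[mask])
        remaining = remaining[~mask]
    return planes
```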

After the plate planes have been segmented, they can be matched to the flat plates 1-3 by their front-to-back positions or by their areas. Fit the point clouds of flat plate 1 and flat plate 2 to squares, and obtain the three-dimensional coordinates of the four vertices of flat plate 1 and of the four vertices of flat plate 2.
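One way to carry out this square fitting, sketched below under the assumption that each plate's inlier points are given as an (N, 3) array, is to project the points onto the plane's two principal directions, take the minimum-area bounding rectangle in 2-D, and map its corners back to 3-D; this is an illustration, not a prescribed procedure.

```python
import cv2
import numpy as np

def square_vertices(plate_points):
    """Return the four 3-D corners of a plate given its (N, 3) inlier points."""
    centroid = plate_points.mean(axis=0)
    # The two dominant right-singular vectors span the plate's plane.
    _, _, vt = np.linalg.svd(plate_points - centroid)
    basis = vt[:2]                                   # (2, 3) in-plane basis
    uv = (plate_points - centroid) @ basis.T         # 2-D in-plane coordinates
    rect = cv2.minAreaRect(uv.astype(np.float32))    # minimum-area bounding rectangle
    corners_2d = cv2.boxPoints(rect)                 # its 4 corners, shape (4, 2)
    return corners_2d @ basis + centroid             # back to 3-D lidar coordinates
```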

Step 4: solve the relative position parameters of the lidar and the camera, namely the rotation matrix and the translation vector, with the PnP algorithm from the camera intrinsic matrix and the spatial and image coordinates of the 8 calibration point pairs.

Step 5: re-project the spatial points of the point cloud onto the image according to this result, compare them with the image coordinates, and evaluate the calibration result.

In the above implementation, in other embodiments of the invention, the camera is calibrated by Zhang Zhengyou's calibration method to obtain the intrinsic parameter matrix used in step 4,

$$K = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix},$$

where $f_x$ and $f_y$ denote the focal length of the camera in pixels in the x and y directions, respectively, and $(u_0, v_0)$ denotes the pixel coordinates of the optical center on the image. The conversion from the lidar coordinate system to image coordinates is

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \, [\, R \mid t \,] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix},$$

where $s$ is a scale factor. The three-dimensional coordinates $(X, Y, Z)$ of each calibration point are obtained from the lidar, and its image coordinates $(u, v)$ from the camera. When the number of calibration point pairs reaches 8, the rotation matrix $R$ and the translation vector $t$ between the lidar and the camera can be solved. The spatial points are then re-projected onto the image according to this result and compared with the image coordinates to evaluate the calibration result.
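A minimal sketch of steps 4 and 5 with OpenCV follows; the function and variable names are illustrative, with obj_pts and img_pts standing for the 8 lidar vertices and the 8 image vertices, and K for the intrinsic matrix obtained above.

```python
import cv2
import numpy as np

def calibrate_extrinsics(obj_pts, img_pts, K):
    """Solve R, t from the calibration point pairs (8 in the text; >= 4 needed)
    and report the mean reprojection error in pixels (step 5)."""
    obj = np.asarray(obj_pts, np.float32)
    img = np.asarray(img_pts, np.float32)
    ok, rvec, t = cv2.solvePnP(obj, img, K, distCoeffs=None)
    R, _ = cv2.Rodrigues(rvec)                 # rotation vector -> 3x3 matrix R
    # Re-project the lidar points and compare with the measured image coordinates.
    proj, _ = cv2.projectPoints(obj, rvec, t, K, None)
    err = np.linalg.norm(proj.reshape(-1, 2) - img, axis=1)
    return R, t, err.mean()
```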

In this implementation, the calibration reference device is three-dimensional rather than planar, so calibration point pairs at different depths can be obtained and the calibration result is more accurate; meanwhile, the size of the calibration reference device can be adjusted, so lidars and cameras at different distances can be calibrated, giving the method wide application value.

Meanwhile, the invention provides a system for calibrating the positions of a lidar and a camera, which comprises an image calibration module, a point cloud calibration module and a calculation module,

wherein the image calibration module acquires the image coordinates of the calibration points by an image algorithm from the image of the calibration reference device collected by the camera, the calibration reference device comprising at least two square flat plates which are connected in sequence, differ in color, and increase in side length in sequence; the image calibration module respectively identifies the four vertices of at least two of the square flat plates by the image algorithm, takes the vertices as calibration points, and calibrates the image coordinates of the vertices,

the point cloud calibration module acquires the three-dimensional coordinates of the calibration points by a point cloud algorithm from the point cloud of the calibration reference device collected by the lidar, the point cloud calibration module respectively identifying the four vertices of at least two of the square flat plates by the point cloud algorithm and calibrating the three-dimensional coordinates of the vertices,

and the calculation module determines the position relationship between the lidar and the camera by the PnP algorithm from the image coordinates and the three-dimensional coordinates corresponding to the calibration points. The information interaction and execution processes between the modules of the system are based on the same concept as the method embodiment of the invention; for details, refer to the description of the method embodiment, which is not repeated here. Likewise, with this system the lidar and the camera observe the same calibration reference device, the image coordinates of the calibration points are obtained by the image algorithm, their three-dimensional coordinates are obtained by the point cloud algorithm, and the two are matched into calibration point pairs; when there are enough pairs, namely at least 8, the relative position of the lidar and the camera can be computed quickly and accurately. This makes it easy to improve environment perception by fusing lidar and camera information in intelligent equipment such as autonomous vehicles or robots, and, in three-dimensional reconstruction of real scenes, to fuse the two sensors quickly so that the corresponding pixels can be found in the image for the lidar point cloud to obtain colors, coloring the point cloud into a colored point cloud.

The invention also provides a device for calibrating the positions of a lidar and a camera, which comprises at least one memory and at least one processor;

the at least one memory being configured to store a machine-readable program;

the at least one processor being configured to invoke the machine-readable program to perform the above method for lidar and camera position calibration. The information interaction and program-execution processes of the processor in the device are based on the same concept as the method embodiment of the invention; for details, refer to the description of the method embodiment, which is not repeated here. Likewise, with the device of the invention the lidar and the camera observe the same calibration reference device, the image coordinates of the calibration points are obtained by the image algorithm, their three-dimensional coordinates are obtained by the point cloud algorithm, and the two are matched into calibration point pairs; when there are enough pairs, namely at least 8, the relative position of the lidar and the camera can be computed quickly and accurately. This makes it easy to improve environment perception by fusing lidar and camera information in intelligent equipment such as autonomous vehicles or robots, and, in three-dimensional reconstruction of real scenes, to fuse the two sensors quickly so that the corresponding pixels can be found in the image for the lidar point cloud to obtain colors, coloring the point cloud into a colored point cloud.

It should be noted that not all steps and modules in the processes and system structures of the above preferred embodiments are necessary; some steps or modules may be omitted according to actual needs. The execution order of the steps is not fixed and can be adjusted as required. The system structure described in the above embodiments may be a physical structure or a logical structure: some modules may be implemented by the same physical entity, a module may be implemented by several physical entities, or modules may be implemented jointly by components of several independent devices.

The above-mentioned embodiments are merely preferred embodiments given to fully illustrate the present invention, and the scope of the present invention is not limited to them. Equivalent substitutions or changes made by those skilled in the art on the basis of the invention all fall within its protection scope, which is defined by the claims.
