Multi-feature fusion IGV positioning and mapping method based on 3D laser radar
1. A multi-feature fusion IGV positioning and mapping method based on a 3D laser radar is characterized by comprising the following steps:
S1: dividing the whole map scene into a plurality of cubes of the same size, assigning each cube a probability value, assigning the three-dimensional landmark regions a probability value of 1, and constructing a three-dimensional occupancy grid map; adding three-dimensional landmarks carrying multiple features (tearing points, corner points, straight lines and circular arcs) to the map, calibrating the three-dimensional landmarks with the 3D laser radar sensor, and defining the IGV initial pose from the actual distance and angle information.
S2: preprocessing the data acquired by the 3D laser radar sensor. A laser point acquired by the 3D laser radar is recorded as C and carries timestamp, reflection intensity, distance and angle information. The IGV defined in step S1 has an initial pose (x, y, θ) in the world coordinate system, and the coordinates (x_c, y_c) of a scanned laser point in the world coordinate system are:
where d is distance information included in C, and θ is angle information included in C.
The point cloud data structure of laser point C consists of x_c, y_c, a timestamp and a reflection intensity; a frame of point cloud is acquired in the radar coordinate system L at time k, where t_k is the timestamp at which this frame of point cloud was acquired and each point i of the frame has its coordinate expressed in the radar coordinate system L at time k. Ordering of the three-dimensional point cloud data can therefore be realized according to the radar scanning time, the scanning angle and the angle resolution.
Finally, unnecessary points are removed from the ordered point cloud by range filtering, voxel filtering and point cloud denoising, and the preprocessed point cloud data is output. Range filtering removes point clouds outside the 3D laser radar scene; voxel filtering replaces the whole point cloud with a subset of points while preserving the original point cloud characteristics, thereby reducing the number of points; point cloud denoising removes the noise remaining after range filtering and voxel filtering.
S3: scanning and matching the preprocessed 3D laser radar sensor data in a frame-to-subgraph insertion mode, wherein one frame is composed of a plurality of point clouds and a subgraph is composed of a plurality of frames. Before the current frame is inserted into the subgraph, the pose of the scan frame relative to the current subgraph is optimized so that the sum of the probabilities of all laser points in the frame falling into the current subgraph is maximized; the problem can therefore be converted into the following least-squares problem:
where ξ is the pose of the IGV, the function M() is the probability that a coordinate in the occupancy grid map is occupied by an obstacle, and the function S_i(ξ) is the coordinate, in the occupancy grid map, of the obstacle hit by laser point i of the frame.
This least-squares problem is a local optimization problem that requires an initial pose estimate: if the IGV is stationary the estimate is calculated as in step S1, and if the IGV is in motion it is provided by the encoder and IMU prediction. Solving yields the pose transformation relation matrix T_ξ between the current frame and the subgraph as follows:
where R_ξ is the rotation matrix and t_ξ is the translation vector; the subgraph is described using a probability grid.
S4: local map construction, which realizes continuous scan matching of the 3D laser radar sensor using a subgraph-set strategy. A subgraph set is first constructed, each subgraph set being composed of a plurality of subgraphs; the last two subgraphs of the set are denoted P1 and P2. The current frame is matched against P1 and inserted into both P1 and P2. Since a subgraph is composed of a plurality of frames, once the number of frames in P2 reaches a threshold, P1 is marked as finished, a new subgraph is created, the original P2 is re-labeled as the new P1, and the new subgraph becomes P2, after which insertion of current frames continues. Through the subgraph-set strategy, three constraints are added, namely a point cloud occupancy grid cost function, a translation cost function and a rotation cost function, and a locally optimal subgraph is constructed.
S5: back-end optimization. Since in step S3 the laser radar scan frame is matched only with the current subgraph, and the environment map is composed of a series of subgraphs, an accumulated error exists. Therefore, the pose of each scan frame acquired by the 3D laser radar sensor when it is inserted into a subgraph is cached in memory, and once a subgraph no longer changes, all such scan frames and subgraphs can be used for closed-loop detection. Graph optimization is adopted: the multi-feature three-dimensional landmarks and the IGV poses are taken as vertices, and the pose transformation relation matrices between the current frame and the subgraph obtained in step S3, together with the error function of the pose transformation relation observed by the IGV at that moment, are taken as edges. The overall optimization problem then takes the form of a sum over edges, and the target optimization function is constructed as follows:
In the above equation, the function E() is an error function, the poses of the subgraphs and scan frames are expressed under certain constraint conditions, m is the number of subgraphs, and n is the number of scan frames. The relative pose ξ_ij denotes the matching position of scan frame j in subgraph i and, together with its associated covariance matrix Ω_ij, forms the optimization constraints. Solving this optimization problem quickly with the Gauss-Newton method yields the optimized transformation relation between the frames and the subgraphs.
S6: track pose acquisition: based on the local map constructed in step S4, combined with the motion control input and taking the multi-feature three-dimensional landmarks as observations, the IMU and the encoder are used to predict the IGV pose and obtain the IGV running track.
S7: closed-loop detection: the track poses obtained in step S6 are stored, and when the IGV passes a point on the track again, a branch-and-bound method is used to rapidly traverse all tracks and retrieve the historical frame. If the current frame and the historical frame match successfully, an edge is added to the graph optimization problem of step S5, forming a constraint and completing closed-loop detection.
2. The 3D lidar-based multi-feature fusion IGV positioning and mapping method according to claim 1, wherein step S3 specifically comprises the steps of:
S31: in the static case, the initial pose is calculated from the physical information of the IGV relative to the three-dimensional landmarks; in the dynamic case, it is provided by the encoder and IMU prediction. Denote the initial pose T = {x, y, θ}, set a window of a given size near T, and calculate the iteration step length and the number of iterations of the window according to the following formula.
where h_k represents a laser point in the scan frame, d_max denotes the farthest laser point, r denotes the position iteration step length, typically one grid; δ_θ denotes the angle iteration step length; W_x, W_y and W_θ denote the window sizes in x, y and θ respectively; and ω_x, ω_y and ω_θ denote the numbers of iterations in the x, y and θ directions respectively.
S32: obtaining all candidate pose transformation combinations w of the search window according to S31 as follows:
w={-ωx,...,ωx}×{-ωy,...,ωy}×{-ωθ,...,ωθ}
S33: calculating the scores of all candidate pose transformations. Denote one candidate value T_i = {x_i, y_i, θ_i}; transform the current frame point cloud C into the established grid map, sum the grid probability values corresponding to the transformed coordinates of each point, and divide by the number of points to obtain the probability mean S; the score is then calculated according to the following formula.
Wherein α and β represent the weights occupied by translation and rotation;
S34: selecting the pose transformation with the highest score among the candidate values as the optimal match.
3. The 3D lidar-based multi-feature fusion IGV positioning and mapping method according to claim 2, wherein the step S5 specifically comprises the steps of:
S51: denote the pose transformation matrices of the IGV with the highest scan-matching scores at points A and B as T_a and T_b respectively, and the pose transformation relation between the two points as T:
where, in graph optimization theory, T_a and T_b are vertices of the graph;
S52: since in step S3 the laser radar scan frame is matched only with the current subgraph and the environment map is composed of a series of subgraphs, an accumulated error exists. Denote the pose transformation matrix observed by the 3D laser radar as Z_ab; then the error function e_ab(A, B, Z_ab) is expressed as:
where t2v() denotes converting the transformation matrix into a vector, i.e., a translation and an orientation;
S53: the edge of the graph optimization is defined as:
e_ab = e(A, B, Z_ab)^T Ω_ab e(A, B, Z_ab)
where the information matrix Ω_ab represents the weight of vertices A and B;
S54: according to the relation between vertices and edges in the graph, the optimization objective function is the minimum of the sum over all edges:
where x is the set of vertices, C is the set of vertex pairs, e_ij represents the edge between vertices i and j, and Ω_ij is the weight of vertices i and j;
S55: solving the objective function by the Gauss-Newton method, using the pose transformations of the three-dimensional landmarks relative to the IGV as vertices, and constructing an edge relation only when a three-dimensional landmark is observed during IGV operation, thereby reducing the computation required to solve the objective function and optimizing the pose transformation relation.
Background
With the development of warehouse logistics, China has shifted from low-end, labor-intensive manufacturing toward the artificial intelligence technology industry, and factors such as insufficient labor supply have driven the continuous development of the mobile robot industry; China is set to become a major country in mobile robot manufacturing and application. Compared with a camera, the 3D laser radar is another sensor capable of perceiving a three-dimensional environment: it directly acquires three-dimensional point cloud data and features high ranging precision, low sensitivity to light conditions, and resistance to electromagnetic interference. Research on high-precision positioning and mapping suited to the warehouse logistics environment is therefore significant for the operation of warehouse robots.
Disclosure of Invention
The invention aims to realize high-precision positioning and mapping for warehouse IGVs, overcome the planar limitation of a 2D laser radar, and meet the requirement of round-the-clock IGV operation, and therefore provides a multi-feature fusion IGV positioning and mapping method based on a 3D laser radar. Because the constructed subgraphs contain errors, the method builds an optimization objective function via graph optimization, solves it with the Gauss-Newton method, relies on the observations of the multi-feature three-dimensional landmarks to reduce the computation, and quickly finds the optimal subgraph. After the optimal subgraph is constructed, all tracks are stored; when a track point is passed again, all tracks are traversed, the IGV is relocalized and closed-loop detection is completed, thereby realizing high-precision IGV positioning and mapping.
In order to achieve the purpose, the technical scheme of the invention is as follows: a multi-feature fusion IGV positioning and mapping method based on a 3D laser radar comprises the following steps:
S1: dividing the whole map scene into a plurality of cubes of the same size, assigning each cube a probability value, assigning the three-dimensional landmark regions a probability value of 1, and constructing a three-dimensional occupancy grid map; adding three-dimensional landmarks carrying multiple features (tearing points, corner points, straight lines and circular arcs) to the map, calibrating the three-dimensional landmarks with the 3D laser radar sensor, and defining the IGV initial pose from the actual distance and angle information.
S2: preprocessing the data acquired by the 3D laser radar sensor. A laser point acquired by the 3D laser radar is recorded as C and carries timestamp, reflection intensity, distance and angle information. The IGV defined in step S1 has an initial pose (x, y, θ) in the world coordinate system, and the coordinates (x_c, y_c) of a scanned laser point in the world coordinate system are:
where d is distance information included in C, and θ is angle information included in C.
The point cloud data structure of laser point C consists of x_c, y_c, a timestamp and a reflection intensity; a frame of point cloud is acquired in the radar coordinate system L at time k, where t_k is the timestamp at which this frame of point cloud was acquired and each point i of the frame has its coordinate expressed in the radar coordinate system L at time k. Ordering of the three-dimensional point cloud data can therefore be realized according to the radar scanning time, the scanning angle and the angle resolution.
Finally, unnecessary points are removed from the ordered point cloud by range filtering, voxel filtering and point cloud denoising, and the preprocessed point cloud data is output. Range filtering removes point clouds outside the 3D laser radar scene; voxel filtering replaces the whole point cloud with a subset of points while preserving the original point cloud characteristics, thereby reducing the number of points; point cloud denoising removes the noise remaining after range filtering and voxel filtering.
S3: scanning and matching the preprocessed 3D laser radar sensor data in a frame-to-subgraph insertion mode, wherein one frame is composed of a plurality of point clouds and a subgraph is composed of a plurality of frames. Before the current frame is inserted into the subgraph, the pose of the scan frame relative to the current subgraph is optimized so that the sum of the probabilities of all laser points in the frame falling into the current subgraph is maximized; the problem can therefore be converted into the following least-squares problem:
where ξ is the pose of the IGV, the function M() is the probability that a coordinate in the occupancy grid map is occupied by an obstacle, and the function S_i(ξ) is the coordinate, in the occupancy grid map, of the obstacle hit by laser point i of the frame.
This least-squares problem is a local optimization problem that requires an initial pose estimate: if the IGV is stationary the estimate is calculated as in step S1, and if the IGV is in motion it is provided by the encoder and IMU prediction. Solving yields the pose transformation relation matrix T_ξ between the current frame and the subgraph as follows:
where R_ξ is the rotation matrix and t_ξ is the translation vector; the subgraph is described using a probability grid.
S4: local map construction, which realizes continuous scan matching of the 3D laser radar sensor using a subgraph-set strategy. A subgraph set is first constructed, each subgraph set being composed of a plurality of subgraphs; the last two subgraphs of the set are denoted P1 and P2. The current frame is matched against P1 and inserted into both P1 and P2. Since a subgraph is composed of a plurality of frames, once the number of frames in P2 reaches a threshold, P1 is marked as finished, a new subgraph is created, the original P2 is re-labeled as the new P1, and the new subgraph becomes P2, after which insertion of current frames continues. Through the subgraph-set strategy, three constraints are added, namely a point cloud occupancy grid cost function, a translation cost function and a rotation cost function, and a locally optimal subgraph is constructed.
S5: back-end optimization. Since in step S3 the laser radar scan frame is matched only with the current subgraph, and the environment map is composed of a series of subgraphs, an accumulated error exists. Therefore, the pose of each scan frame acquired by the 3D laser radar sensor when it is inserted into a subgraph is cached in memory, and once a subgraph no longer changes, all such scan frames and subgraphs can be used for closed-loop detection. Graph optimization is adopted: the multi-feature three-dimensional landmarks and the IGV poses are taken as vertices, and the pose transformation relation matrices between the current frame and the subgraph obtained in step S3, together with the error function of the pose transformation relation observed by the IGV at that moment, are taken as edges. The overall optimization problem then takes the form of a sum over edges, and the target optimization function is constructed as follows:
In the above equation, the function E() is an error function, the poses of the subgraphs and scan frames are expressed under certain constraint conditions, m is the number of subgraphs, and n is the number of scan frames. The relative pose ξ_ij denotes the matching position of scan frame j in subgraph i and, together with its associated covariance matrix Ω_ij, forms the optimization constraints. Solving this optimization problem quickly with the Gauss-Newton method yields the optimized transformation relation between the frames and the subgraphs.
S6: track pose acquisition: based on the local map constructed in step S4, combined with the motion control input and taking the multi-feature three-dimensional landmarks as observations, the IMU and the encoder are used to predict the IGV pose and obtain the IGV running track.
S7: closed-loop detection: the track poses obtained in step S6 are stored, and when the IGV passes a point on the track again, a branch-and-bound method is used to rapidly traverse all tracks and retrieve the historical frame. If the current frame and the historical frame match successfully, an edge is added to the graph optimization problem of step S5, forming a constraint and completing closed-loop detection.
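The branch-and-bound traversal used in S7 can be illustrated with a minimal sketch. This is not the patent's actual multi-resolution implementation: it assumes only a generic integer candidate range, a user-supplied `score` function, and an upper-bound function `bound(a, b)` that dominates the score over [a, b], so whole sub-ranges can be pruned without scoring every candidate.

```python
def branch_and_bound(lo, hi, score, bound):
    # Depth-first branch and bound over integer candidates in [lo, hi].
    # `bound(a, b)` must upper-bound score(c) for every c in [a, b]; any
    # sub-range whose bound cannot beat the current best is pruned.
    best_c, best_s = None, float("-inf")
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        if bound(a, b) <= best_s:
            continue                      # prune: cannot beat current best
        if a == b:
            s = score(a)                  # leaf: evaluate one candidate
            if s > best_s:
                best_c, best_s = a, s
        else:
            mid = (a + b) // 2            # branch: split the range in half
            stack.append((a, mid))
            stack.append((mid + 1, b))
    return best_c, best_s
```

In the patent's setting the candidates would be poses along the stored tracks and the score a frame-matching score; the pruning principle is the same.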
Further, step S3 specifically includes the following steps:
S31: in the static case, the initial pose is calculated from the physical information of the IGV relative to the three-dimensional landmarks; in the dynamic case, it is provided by the encoder and IMU prediction. Denote the initial pose T = {x, y, θ}, set a window of a given size near T, and calculate the iteration step length and the number of iterations of the window according to the following formula.
where h_k represents a laser point in the scan frame, d_max denotes the farthest laser point, r denotes the position iteration step length, typically one grid; δ_θ denotes the angle iteration step length; W_x, W_y and W_θ denote the window sizes in x, y and θ respectively; and ω_x, ω_y and ω_θ denote the numbers of iterations in the x, y and θ directions respectively.
S32: obtaining all candidate pose transformation combinations w of the search window according to S31 as follows:
w={-ωx,...,ωx}×{-ωy,...,ωy}×{-ωθ,...,ωθ}
S33: calculating the scores of all candidate pose transformations. Denote one candidate value T_i = {x_i, y_i, θ_i}; transform the current frame point cloud C into the established grid map, sum the grid probability values corresponding to the transformed coordinates of each point, and divide by the number of points to obtain the probability mean S; the score is then calculated according to the following formula.
Wherein α and β represent the weights occupied by translation and rotation;
S34: selecting the pose transformation with the highest score among the candidate values as the optimal match;
further, the step S5 specifically includes the following steps:
S51: denote the pose transformation matrices of the IGV with the highest scan-matching scores at points A and B as T_a and T_b respectively, and the pose transformation relation between the two points as T:
where, in graph optimization theory, T_a and T_b are vertices of the graph;
S52: since in step S3 the laser radar scan frame is matched only with the current subgraph and the environment map is composed of a series of subgraphs, an accumulated error exists. Denote the pose transformation matrix observed by the 3D laser radar as Z_ab; then the error function e_ab(A, B, Z_ab) is expressed as:
where t2v() denotes converting the transformation matrix into a vector, i.e., a translation and an orientation;
S53: the edge of the graph optimization is defined as:
e_ab = e(A, B, Z_ab)^T Ω_ab e(A, B, Z_ab)
where the information matrix Ω_ab represents the weight of vertices A and B;
S54: according to the relation between vertices and edges in the graph, the optimization objective function is the minimum of the sum over all edges:
where x is the set of vertices, C is the set of vertex pairs, e_ij represents the edge between vertices i and j, and Ω_ij is the weight of vertices i and j;
S55: solving the objective function by the Gauss-Newton method, using the pose transformations of the three-dimensional landmarks relative to the IGV as vertices, and constructing an edge relation only when a three-dimensional landmark is observed during IGV operation, thereby reducing the computation required to solve the objective function and optimizing the pose transformation relation.
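The edge error of S52 and the objective of S54 can be sketched for the planar (SE(2)) case. The helper names `v2t` and `t2v` follow the notation above; the explicit matrix forms are assumptions, since the patent's own formulas are not reproduced in the text.

```python
import math
import numpy as np

def v2t(x, y, theta):
    # Pose vector -> homogeneous SE(2) transformation matrix.
    c, s = math.cos(theta), math.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def t2v(T):
    # Transformation matrix -> vector (translation and heading).
    return np.array([T[0, 2], T[1, 2], math.atan2(T[1, 0], T[0, 0])])

def edge_error(Ta, Tb, Zab):
    # e_ab = t2v(Zab^-1 * (Ta^-1 * Tb)): discrepancy between the observed
    # relative pose Zab and the relative pose implied by vertices Ta, Tb.
    return t2v(np.linalg.inv(Zab) @ (np.linalg.inv(Ta) @ Tb))

def objective(edges):
    # F(x) = sum over edges of e^T * Omega * e, the quantity minimised
    # by Gauss-Newton in the back-end optimisation.
    return sum(e @ Omega @ e for e, Omega in
               ((edge_error(Ta, Tb, Z), Om) for Ta, Tb, Z, Om in edges))
```

When the observation Z_ab agrees with the vertex poses, the error is zero and the edge contributes nothing to the objective; any disagreement is weighted by the information matrix Ω_ab.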
The beneficial effects of the invention are as follows: the invention uses multi-feature three-dimensional landmarks as road signs, so the map-symmetry problem of planar environments need not be considered; the 3D laser radar captures multi-feature information such as three-dimensional landmark distance, reflection intensity and angle in space, providing a good initial pose for front-end scan matching, while in back-end optimization the objective function constructed via graph optimization can be solved rapidly. The IGV stores all track data and rapidly realizes closed-loop detection using a branch-and-bound method. The method gives the IGV the ability to position and map its environment with high precision, thereby realizing stable and rapid warehouse logistics operation.
Drawings
FIG. 1 is a flow chart of the method implementation of the present invention.
Fig. 2 is a schematic diagram of a three-dimensional landmark of the present invention.
Detailed Description
The steps performed by the method are described in further detail in the following description with reference to the figures, but the scope of the invention is not limited to the following.
As shown in fig. 1, the method for positioning and mapping a multi-feature fusion IGV based on a 3D lidar provided by the present invention enables the IGV to realize high-precision positioning and mapping in a complex environment. The method specifically comprises the following steps:
S1: divide the whole map scene into a plurality of cubes of the same size, assign each cube a probability value, assign the three-dimensional landmark regions a probability value of 1, and construct a three-dimensional occupancy grid map. In the grid map, a cell struck by the laser is a hit and a cell passed through is a miss; the probability that a cell is occupied (hit) is denoted p(s = 1), the probability that it is free (miss) is denoted p(s = 0), and the two sum to 1.
The ratio of the two is taken as the state of the point: odds(s) = p(s = 1) / p(s = 0).
The occupancy probability p(s = 1) is denoted P.
For the 3D laser sensor, each new measurement must update the state of a point: if the odds before the measurement are odds(p), the occupancy probability after the update is:
M_new(x) = clamp(odds^(-1)(odds(M_old(x)) · odds(P_hit)))
where M_old(x) is the occupancy probability before the update, and P_hit is P.
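A minimal sketch of the clamped odds update above. The clamp bounds and the hit probability `p_hit` are illustrative values, not figures specified by the patent.

```python
def odds(p: float) -> float:
    # odds(s) = p(s=1) / p(s=0): ratio of occupied to free probability.
    return p / (1.0 - p)

def odds_inv(o: float) -> float:
    # Inverse of odds(): recover a probability from an odds ratio.
    return o / (1.0 + o)

def clamp(p: float, lo: float = 0.12, hi: float = 0.97) -> float:
    # Keep the probability inside fixed bounds so no cell saturates
    # irreversibly (bounds are illustrative).
    return max(lo, min(hi, p))

def update_cell(m_old: float, p_hit: float = 0.55) -> float:
    # M_new(x) = clamp(odds^-1(odds(M_old(x)) * odds(P_hit)))
    return clamp(odds_inv(odds(m_old) * odds(p_hit)))
```

Each hit multiplies the cell's odds by odds(P_hit) > 1, so repeated observations push the probability toward the upper clamp bound without ever reaching 1.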
A plurality of multi-feature three-dimensional landmarks are placed in the map. These landmarks combine various features such as tearing points, corner points, straight lines and circular arcs: geometrically they carry both straight-line and circular-arc feature information, and in terms of material they are strongly retroreflective. The three-dimensional landmarks are calibrated with the 3D laser radar sensor and used as markers in the grid map, and the initial pose of the IGV is defined from the actual distance and angle information without additional markers.
S2: confirm the horizontal scanning range, the pitch scanning range, the maximum scanning distance and the frequency of the 3D laser radar.
The sensor data acquired by the 3D laser radar sensor is then preprocessed through the following steps:
S21: record a laser point collected by the 3D laser radar as C. Knowing from S1 that the initial pose of the IGV in the world coordinate system is (x, y, θ), convert all scanned laser points into coordinates in the world coordinate system, i.e. the coordinates (x_c, y_c) are:
where d is distance information of the laser spot, and θ is angle information of the laser spot.
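The projection formula itself is omitted from the text above. Assuming it takes the common planar form, with the beam angle measured relative to the IGV heading θ, the conversion can be sketched as:

```python
import math

def laser_to_world(x, y, theta, d, angle):
    # Project a laser return (range d, beam angle) into the world frame,
    # assuming the beam angle is measured relative to the IGV heading theta
    # (an assumption: the patent's exact formula is not reproduced here).
    xc = x + d * math.cos(theta + angle)
    yc = y + d * math.sin(theta + angle)
    return xc, yc
```

For example, a 2 m return straight ahead of an IGV at the origin with zero heading lands at world coordinates (2, 0).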
S22: the point cloud data structure of laser point C is formed from x_c, y_c, the timestamp, the reflection intensity, etc.; a frame of point cloud is acquired in the radar coordinate system L at time k, where t_k is the timestamp at which this frame of point cloud was acquired and each point i of the frame has its coordinate expressed in the radar coordinate system L at time k. The three-dimensional point cloud data is ordered according to the radar scanning time, the scanning angle, the angle resolution, etc.
S23: the sorted point cloud is reduced by combining range filtering, voxel filtering and point cloud denoising; the specific process is as follows:
1. First remove the point cloud data that is not within the 3D laser radar scene;
2. Then count the points contained in each grid cell and the number of laser lines passing through the cell;
3. Traverse all points to obtain the number of points n in the cell containing each point and the number of laser lines m passing through that cell; set a threshold L, and if n > Lm, treat the point as an outlier and delete it, otherwise keep it;
4. Finally, down-sample by replacing all points in a cell with the cell's center of gravity.
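Steps 1 and 4 above (range filtering and centroid down-sampling) can be sketched as follows. The grid-count outlier test of step 3 is omitted for brevity, and `max_range` and `voxel` are illustrative parameters, not values from the patent.

```python
import math
from collections import defaultdict

def preprocess(points, max_range=30.0, voxel=0.1):
    # points: list of (x, y, z) tuples in the radar frame.
    # 1) Range filtering: drop returns outside the lidar's usable range.
    kept = [p for p in points if math.dist((0.0, 0.0, 0.0), p) <= max_range]
    # 2) Voxel filtering: group points by voxel index and replace all
    #    points in a voxel by their centroid (center of gravity).
    cells = defaultdict(list)
    for p in kept:
        key = tuple(math.floor(c / voxel) for c in p)
        cells[key].append(p)
    out = []
    for pts in cells.values():
        n = len(pts)
        out.append(tuple(sum(c) / n for c in zip(*pts)))
    return out
```

The centroid step keeps one representative point per occupied voxel, which preserves local structure while sharply reducing the point count.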
s3: and scanning and matching the preprocessed point cloud data, and adopting a frame-subgraph insertion mode for the processed 3D laser radar sensor data, wherein one frame consists of a plurality of point clouds, and the subgraph consists of a plurality of frames. Before the current frame is inserted into the sub-graph, the pose of the scanning frame and the current sub-graph need to be optimized, and the probability sum of all laser points in the frame falling into the current sub-graph is maximum, so that the problem can be converted into the following least square problem:
where ξ is the pose of the IGV, the function M() is the probability that a coordinate in the occupancy grid map is occupied by an obstacle, and the function S_i(ξ) is the coordinate, in the occupancy grid map, of the obstacle hit by laser point i of the frame.
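The quantity this least-squares problem maximizes, the mean probability of the frame's points under a candidate pose, can be sketched as follows. The dictionary-based grid and the resolution `res` are illustrative assumptions.

```python
import math

def match_score(grid, points, pose, res=0.05):
    # grid: dict mapping integer cell (ix, iy) -> occupancy probability M().
    # points: laser returns (x, y) in the IGV frame; pose: candidate
    # (x, y, theta). Returns the mean occupancy probability of the frame
    # under this pose, the quantity the frame-to-subgraph match maximises.
    x, y, th = pose
    total = 0.0
    for px, py in points:
        # S_i(xi): world coordinate of laser point i under pose xi.
        wx = x + px * math.cos(th) - py * math.sin(th)
        wy = y + px * math.sin(th) + py * math.cos(th)
        cell = (math.floor(wx / res), math.floor(wy / res))
        total += grid.get(cell, 0.0)   # unknown cells contribute nothing
    return total / len(points)
```

A correct candidate pose places every return on an occupied cell and scores near 1; a misaligned pose scatters the returns onto unknown cells and scores near 0.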
This least-squares problem is a local optimization problem that requires an initial pose estimate: if the IGV is stationary the estimate is calculated as in step S1, and if the IGV is in motion it is provided by the encoder and IMU prediction. Solving yields the pose transformation relation matrix T_ξ between the current frame and the subgraph as follows:
where R_ξ is the rotation matrix and t_ξ is the translation vector; the subgraph is described using a probability grid. Step S3 specifically includes the following substeps:
S31: in the static case, the initial pose is solved physically from the IGV's relation to the three-dimensional landmarks; in the dynamic case, it is provided by the encoder and IMU prediction. When the initial pose is solved physically from the three-dimensional landmarks, the 3D laser radar scans the landmarks and the initial pose of the IGV is defined by extracting multi-feature information such as angles, distances, reflection intensity and corner points. The process is as follows:
based on the plane curvature, the angular point and the plane point are distinguished only by addition and subtraction operation among the laser point coordinates, and the calculation formula of the curvature c at the point i is as follows:
where S is a set of n consecutive points around point i, containing n/2 points distributed on either side of point i.
In an environment with a large and uniform point cloud density, the feature point extraction algorithm considers the local curvature of each point's neighborhood, then sorts and distinguishes point types according to the local relative magnitudes of the curvature. The formula in step S31 then simplifies to:
and finishing the extraction of the angular points, and selecting Mc points with the largest curvature as the angular points, wherein Nc points with the largest point cloud sequencing are used as main angular points, and Mc-Nc points are used as secondary angular points. And selecting the minimum Np points as main plane points, and classifying all other remaining points as secondary plane points. And selecting the feature points line by line until all the points of one frame of point cloud are classified, and completing feature extraction of the multi-feature three-dimensional landmarks.
And S32, recording step S31 to provide an initial pose T ═ x, y, theta, generate a series of candidate values near the current frame point cloud, correspondingly generate a series of candidate values near the value to be evaluated, form a search window, set the size of the window, and calculate the iteration step length and the iteration times of the window according to the following formula.
Wherein h iskRepresenting laser spots in the scan frame, dmaxThe farthest laser point is represented, r represents the position iteration step, typically a grid; deltaθRepresenting an angle iteration step; wx、Wy、WzDenotes the window sizes, ω, of x, y, and θ, respectivelyx、ωy、ωθRepresenting the number of iterations in the x, y, z directions, respectively.
All candidate pose transformation combinations of the search window are then obtained from step S32:
w={-ωx,...,ωx}×{-ωy,...,ωy}×{-ωθ,...,ωθ}
S33: calculating the scores of all candidate pose transformations. One candidate value is recorded as Ti = (xi, yi, θi); the current frame point cloud C is transformed by Ti into the established grid map, the grid probability values corresponding to the transformed point coordinates are summed and divided by the number of points to obtain the probability mean value S, and the score is calculated according to the following formula.
wherein α and β represent the weights assigned to translation and rotation, respectively;
S34: the pose transformation with the highest score is selected from the candidate values and taken as the optimal pose transformation relation;
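Steps S32–S34 together amount to an exhaustive search over the window. A minimal Python sketch follows; the penalty form S − α‖Δt‖ − β|Δθ| is an assumed instance of the weighted score, since the exact formula is not reproduced here, and `grid` is an assumed callable returning the occupancy probability at a point:

```python
import itertools
import math

def best_pose(scan, grid, pose, r, dtheta, wx, wy, wt, alpha=0.1, beta=0.1):
    """Enumerate all candidate offsets, transform the scan into the map,
    score each candidate, and return the highest-scoring pose."""
    x0, y0, t0 = pose
    best = None
    for ix, iy, it in itertools.product(range(-wx, wx + 1),
                                        range(-wy, wy + 1),
                                        range(-wt, wt + 1)):
        x, y, t = x0 + ix * r, y0 + iy * r, t0 + it * dtheta
        c, s = math.cos(t), math.sin(t)
        # probability mean S of the transformed scan points
        S = sum(grid(x + c * px - s * py, y + s * px + c * py)
                for px, py in scan) / len(scan)
        # assumed translation/rotation penalty weighted by alpha and beta
        score = S - alpha * math.hypot(ix * r, iy * r) - beta * abs(it * dtheta)
        if best is None or score > best[0]:
            best = (score, (x, y, t))
    return best[1]
```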
S4: constructing a local map. The step S4 specifically includes the following steps:
S41: a subgraph-set strategy is adopted to construct the local map from the continuous scan matching of the 3D laser radar.
S42: firstly, three constraints are added: the occupancy-grid cost function of the point cloud, the translation cost function and the rotation cost function.
S43: subgraph sets are then constructed, each formed by a certain number of subgraphs; the last two subgraphs of a subgraph set are recorded as P1 and P2 respectively.
S44: the current frame is matched with P1 and inserted into both P1 and P2. Since a subgraph is composed of several frames, when the number of frames in P2 reaches a threshold, P1 is marked as completed and formally becomes a member of the subgraph set.
S45: a new subgraph is established: the original P2 is recorded as the new P1 and the new subgraph as P2; the current frame continues to be inserted, and the iteration continues to obtain the locally optimal map.
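The two-active-subgraph bookkeeping of steps S43–S45 can be sketched as follows (an illustrative Python sketch; the class name and the frame threshold are assumptions, and the scan matching against P1 is elided):

```python
class SubgraphSet:
    """Maintain the last two active subgraphs P1 and P2; every scan frame
    is inserted into both, and P1 is finalised once P2 is full."""
    def __init__(self, frames_per_subgraph=5):
        self.threshold = frames_per_subgraph
        self.finished = []           # completed subgraphs of the set
        self.p1, self.p2 = [], []    # last two active subgraphs

    def insert(self, frame):
        # (scan matching of `frame` against P1 would happen here)
        self.p1.append(frame)
        self.p2.append(frame)
        if len(self.p2) >= self.threshold:
            self.finished.append(self.p1)   # P1 formally joins the set
            self.p1, self.p2 = self.p2, []  # old P2 becomes the new P1
```

Each subgraph thus overlaps its successor by one threshold's worth of frames, which keeps consecutive subgraphs mutually consistent for the later loop closure.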
S5: since the laser radar scan frames are matched only against the current subgraph in step S3 and the environment map is composed of a series of subgraphs, an accumulated error exists. Therefore, the poses of the scan frames acquired by the 3D laser radar sensor when they are inserted into a subgraph are cached in memory, and once a subgraph no longer changes, all scan frames and subgraphs are used for closed-loop detection. For the accumulated error produced by scan matching and local mapping, graph optimization is adopted for back-end optimization: the multi-feature three-dimensional landmarks and the IGV poses are taken as vertices, and the pose transformation relation matrices between the current frame and the subgraph obtained in step S3, together with the error functions of the pose transformation relations observed by the IGV at those moments, are taken as edges, so that the overall optimization problem becomes a sum over many edges. The target optimization function is constructed as:
argmin over {ξi^m}, {ξj^s} of (1/2)·Σij ρ(E^2(ξi^m, ξj^s; Ωij, ξij))
In the above formula, E is the error function; ξi^m and ξj^s respectively represent the pose of subgraph i and the pose of scan frame j under the constraints, where m is the number of subgraphs and n is the number of scan frames. The relative pose ξij represents the matching position of scan frame j in subgraph i and, together with its associated covariance matrix Ωij, forms the optimization constraint. The optimization problem is solved quickly with the Gauss-Newton method to obtain the optimized transformation relation between the frames and the subgraphs, correcting the accumulated error produced by front-end scanning.
Through the established three-dimensional landmarks and in combination with the Gauss-Newton method, the optimization problem is solved quickly to obtain the optimized pose transformation relation. The step S5 specifically includes the following steps:
S51: according to step S4, the pose transformation matrices with the highest scores obtained from the scan matching of the IGV at points A and B are recorded as Ta and Tb respectively, and the pose transformation relation T between the two points is:
T = Ta^-1·Tb
wherein, in graph optimization theory, Ta and Tb are vertices of the graph;
S52: the pose transformation matrix observed by the 3D laser radar sensor at this moment is recorded as Zab, and the error function is expressed as:
e(A, B, Zab) = t2v(Zab^-1·T)
wherein t2v(·) represents converting a transformation matrix into a vector, i.e. its translation and rotation components;
S53: the edge of the graph optimization is defined as:
eab = e(A, B, Zab)^T·Ωab·e(A, B, Zab)
wherein the information matrix Ωab represents the weight between vertices A and B;
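For the planar SE(2) case, the error function of steps S52–S53 and the weighted edge can be sketched as follows (an illustrative Python sketch; the homogeneous-matrix representation and the function names v2t/t2v/edge_error/edge_cost are assumptions):

```python
import numpy as np

def v2t(v):
    """(x, y, theta) -> 3x3 homogeneous transformation matrix."""
    x, y, t = v
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def t2v(T):
    """3x3 homogeneous transformation matrix -> (x, y, theta)."""
    return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

def edge_error(Ta, Tb, Zab):
    """e(A, B, Z_ab) = t2v(Z_ab^-1 * (T_a^-1 * T_b))."""
    return t2v(np.linalg.inv(Zab) @ (np.linalg.inv(Ta) @ Tb))

def edge_cost(Ta, Tb, Zab, Omega):
    """e_ab = e^T * Omega * e, the weighted edge of the graph."""
    e = edge_error(Ta, Tb, Zab)
    return float(e @ Omega @ e)
```

When the observed transformation Zab agrees exactly with Ta^-1·Tb, the error vector and the edge cost are zero.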
S54: according to the relation between vertices and edges in the graph, the optimization objective function is the minimum of the sum over all edges:
F(x) = Σ(i,j)∈C eij^T·Ωij·eij,  x* = argmin F(x)
wherein x is the set of vertices, C is the set of vertex pairs connected by edges, eij represents the edge between vertices i and j, and Ωij is the weight of vertices i and j;
S55: the objective function is solved with the Gauss-Newton method, the pose transformations of the three-dimensional landmarks relative to the IGV being used as vertices. Performing a first-order Taylor expansion at the initial value x0 gives:
F(x0 + Δx) ≈ c + 2b^T·Δx + Δx^T·H·Δx
wherein c is a constant term independent of Δx, b^T is the first-order coefficient, and H is the Hessian matrix, i.e. the second-order coefficient.
S56: the Jacobian matrix is calculated by expanding the error term of the formula in step S55, wherein Rz and tz respectively represent the rotation matrix and the translation vector in the observed transformation matrix, and the expansion yields the error translation vector and the error rotation matrix.
S57: the partial derivatives with respect to vertices i and j are obtained: first the derivative with respect to the translation vector, then the derivative with respect to the attitude; the translation derivative result is recorded as Mij and the attitude derivative result as Nij.
S58: the three-dimensional landmarks are landmarks on the IGV path, and during operation only part of the points construct edge relations with the three-dimensional landmark vertices, so the Jacobian matrix takes the sparse form:
Jij = (0, ..., 0, Mij, Nij, 0, ..., 0)
S59: the first-order coefficient is calculated; from step S55:
b^T = Σ(i,j)∈C eij^T·Ωij·Jij
S510: the Hessian matrix is calculated; from step S55:
H = Σ(i,j)∈C Jij^T·Ωij·Jij
S511: the objective function is solved with the Gauss-Newton method and the pose transformation relation is optimized. During the operation of the IGV, the edge relation is constructed only when a three-dimensional landmark is observed, which reduces the amount of computation of the objective function while the pose transformation relation is optimized.
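One Gauss-Newton iteration of steps S54–S511 on a toy SE(2) pose graph can be sketched as follows (an illustrative Python sketch: numerical finite-difference Jacobians stand in for the analytic Mij, Nij blocks of step S57, and anchoring the first pose is an assumption made to keep H invertible):

```python
import numpy as np

def _v2t(v):
    c, s = np.cos(v[2]), np.sin(v[2])
    return np.array([[c, -s, v[0]], [s, c, v[1]], [0.0, 0.0, 1.0]])

def _t2v(T):
    return np.array([T[0, 2], T[1, 2], np.arctan2(T[1, 0], T[0, 0])])

def _err(pi, pj, z):
    # e_ij = t2v(Z^-1 * (T_i^-1 * T_j))
    return _t2v(np.linalg.inv(_v2t(z)) @ np.linalg.inv(_v2t(pi)) @ _v2t(pj))

def gn_step(poses, edges, eps=1e-6):
    """poses: (n, 3) array of (x, y, theta); edges: (i, j, z, Omega) tuples.
    Returns the poses after one Gauss-Newton update; pose 0 is held fixed."""
    n = poses.shape[0]
    H = np.zeros((3 * n, 3 * n))
    b = np.zeros(3 * n)
    for i, j, z, Omega in edges:
        e = _err(poses[i], poses[j], z)
        J = np.zeros((3, 3 * n))              # sparse: only blocks i and j
        for k in range(3):
            for idx, block in ((i, 0), (j, 1)):
                d = np.zeros(3)
                d[k] = eps
                pi = poses[i] + (d if block == 0 else 0)
                pj = poses[j] + (d if block == 1 else 0)
                J[:, 3 * idx + k] = (_err(pi, pj, z) - e) / eps
        H += J.T @ Omega @ J                  # H = sum J^T Omega J
        b += J.T @ Omega @ e                  # b = sum J^T Omega e
    H[:3, :3] += np.eye(3) * 1e6              # anchor the first pose
    dx = np.linalg.solve(H, -b)               # H dx = -b
    return poses + dx.reshape(n, 3)
```

With a single odometry-style edge, one iteration pulls the free pose onto the observed relative transformation while the anchored pose stays put.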
S6: track pose acquisition. While the local map is established in step S4, the IGV track is calculated using the given control quantities and the pose state information of the multi-feature three-dimensional landmarks in the map, the multi-feature three-dimensional landmarks serving as observations combined with the IMU, the encoder and the 3D laser radar sensor.
S7: closed-loop detection. A track is recorded at every mapping pass, and the IGV stores all tracks; when the IGV passes a point on a track again, a branch-and-bound method is adopted to quickly traverse all tracks and obtain historical frames. When matching between the current frame and a historical frame succeeds, an edge is added to the graph optimization problem in step S5 to form a constraint, completing the closed-loop detection. If the IGV is on a previously traversed trajectory, it is quickly relocalized and resumes operation.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.