Non-cooperative target flexible attachment multi-node fusion estimation method
1. A non-cooperative target flexible attachment multi-node fusion estimation method, characterized by comprising the following steps:
firstly, constructing multi-viewpoint geometric optical information from the pixel and image-line coordinates of corner features in the overlapping observation regions between nodes, and constructing a node configuration estimation problem using the multi-viewpoint geometric optical information under the flexible-connection constraint, so as to realize accurate estimation of the cooperatively attached multi-node configuration;
secondly, taking the pixel coordinates of the surface appearance features of the small celestial body in each node camera as observed quantities, and when the observed information of a certain node is insufficient, converting the observation of other nodes into the observation coordinate system of the node by using the node configuration, and fusing the observation of each node;
and thirdly, each node estimates its own state, then combines it with the node configuration estimated in the first step and fuses the state estimation results of the nodes under the minimum-estimation-covariance criterion, so that the autonomous navigation performance of each node during flexible attachment is improved, the flexible-attachment collaborative navigation performance for the non-cooperative target is further improved, and safe attachment of the non-cooperative target on the surface of the small celestial body is facilitated.
2. The non-cooperative target flexible attachment multi-node fusion estimation method according to claim 1, characterized in that step one is implemented as follows:
the node configuration is represented as
$\{{}^{L}x_{ij} \in {}^{L}X_F \mid (i,j) \in \varepsilon\}$   (1)
where ${}^{L}X_F$ is the node configuration, ${}^{L}x_{ij} = ({}^{L}p_{ij}, {}^{L}v_{ij}, {}^{L}q_{ij})$ are the elements that make up the configuration, the node configuration is abstracted into a graph whose vertices represent the nodes and whose edge set $\varepsilon$ represents the connections between nodes, and each node $i$ has an associated set of adjacent nodes;
The observation equation of the node-mounted camera is
where the observed quantity is the normalized feature pixel and image-line coordinate, $(x_i, y_i, z_i)$ is the position of node $i$ in the attachment-point coordinate system, $c_{ij}$ are the elements of the attitude transformation matrix of node $i$ in the attachment-point coordinate system, and the feature has known three-dimensional coordinates in the attachment-point coordinate system; considering that the cameras of the nodes have overlapping observation regions, the epipolar constraint relation between pairs of images is established from the pixel coordinates of homonymous feature points
where the two point correspondences are the pixel and image-line coordinates of the same feature in the two viewpoint images, and $E$ is the essential matrix, which represents the pose relationship between the two cameras
where the essential matrix is built from the relative position and the relative attitude between the nodes; the essential matrix is solved by simultaneously stacking the equations formed by multiple groups of homonymous features, and the relative pose relationship between the two cameras is obtained by decomposing it
The constraint between the relative attitude and the relative position vector between the nodes is obtained from equations (7) and (8); it is worth noting that the specific value of the relative attitude between the nodes is uniquely determined by the feature visibility;
where $R_i$ denotes the body coordinate system of node $i$, $|\cdot|$ is the absolute-value symbol, and $\|\cdot\|$ is the vector-modulus (norm) symbol;
meanwhile, an inequality constraint on the relative distance between nodes is obtained from the flexible connection relation between the nodes
where the constraint involves the relative distance between the nodes in the nominal state and the relative-distance variation range between the nodes allowed by the structure;
equations (9) and (10) provide effective information for the node configuration state estimation, i.e., the posterior estimate of the node configuration needs to satisfy
s.t.
where $J$ is the loss function of the unconstrained optimal estimation of the node configuration, evaluated against the prior configuration estimate obtained by dynamics recursion and the configuration prior covariance; equation (12) is the equality constraint defined by equation (9), equation (13) is the inequality constraint defined by equation (10), and the inequality constraint of equation (13) is converted into an equality constraint by the active-set method
The concrete form of the above formula is
Equations (11) to (13) constitute a constrained optimal estimation problem, and the optimal estimation problem of the node configuration under the constraints is constructed by the Lagrange multiplier method
where $^{*}J$ is the loss function of the constrained optimal estimation problem of the node configuration, $\lambda_1$ and $\lambda_2$ are the Lagrange multipliers, the superscript $(\kappa)$ denotes the $\kappa$-th row of the matrix, and $n_1$ and $n_2$ are the numbers of constraints; the optimal estimation problem is solved, and the estimated node configuration is passed to the second step or the third step to realize the state fusion estimation of each node.
3. The non-cooperative target flexible attachment multi-node fusion estimation method according to claim 2, characterized in that step two is implemented as follows:
the flexible detector has m measurement nodes, and the node dynamic equation is
where the state of the $i$-th node, the control quantity and a noise term appear in the equation; the controlled acceleration experienced by a node mainly comprises the thrust acceleration of the actuator and the acceleration of the flexible structure, both of which are measured by the inertial navigation sensors
In the above formula, the accelerometer and gyroscope measurements appear together with the measurement noises of the inertial navigation sensor; besides the inertial sensor, each node also carries a navigation camera, whose observed quantity is
where $y_i$ is the measurement of the navigation camera, $h$ is the height of the node above the attachment plane, and $w_c$ is the measurement noise, modeled as Gaussian white noise; the camera measurement equation of each node is
where $y_i$ is the observed quantity, $f$ is the focal length of the camera, $p_i$ is the position vector of node $i$ in the attachment-point coordinate system, $R_{i,c}$ is the attitude transformation matrix between the camera coordinate system and the attachment-point coordinate system, and the equation further involves a noise term, the line-of-sight vector between the feature point and the camera of node $i$, and the position vector of the feature point in the attachment-point coordinate system;
the node configuration estimated in step one is used to convert the observations of other nodes into the observation coordinate system of the present node and to fuse the observations of the nodes; the observation information conversion process is
where $y_{j \to i}$ is the observation of node $j$ transformed to node $i$, and the transformation uses the homogeneous observation of node $j$, the attitude transformation matrix from the node-$j$ frame to the node-$i$ frame, and the position vector from node $i$ to node $j$ expressed in the node-$i$ frame;
the fused observed quantity $y_i^{*}$ of node $i$ is
where $y_i$ is the observation of node $i$ itself and $y_{j \to i}$ are the transformed observations of the neighboring nodes.
4. The non-cooperative target flexible attachment multi-node fusion estimation method according to claim 3, characterized in that step three is implemented as follows:
considering that both the node dynamics and the observation equation have strong uncertainty, the node state is estimated with an unscented Kalman filter; the sigma sampling points at time $k-1$ computed by the unscented transform are
where the sigma sampling points are generated around the state of node $i$ at time $k-1$, $n$ is the state dimension, $\lambda$ is a scale parameter, and $\iota$ is a process variable;
the state prediction process of the lossless Kalman filter is
The state update process of the unscented Kalman filter is
The state estimate of a node is converted into the state estimate of an adjacent node through coordinate transformation; considering asynchronous state estimation, state recursion via the unscented transform is used to synchronize estimates obtained at different times, and the conversion process of the state estimate is
where the first quantity is the time-$k$ configuration estimate obtained by dynamics recursion from the state estimation result at time $\tau$, and the second is the time-$k$ estimate obtained by dynamics recursion from the state estimation result at time $\zeta$; similarly, the transformation process of the covariance estimate is
where the covariance is the time-$k$ state estimation covariance obtained by dynamics recursion from the state estimation covariance at time $\zeta$;
the states of the nodes are fused by covariance intersection fusion
where $\pi_{ij}$ are the weighting coefficients, obtained by trace weighting of the covariances
The relative state between the nodes is estimated by utilizing the multi-node optical observation information, and then observation data or states of all the nodes are fused, so that the autonomous navigation performance of all the nodes in the flexible attachment process is improved, the flexible attachment collaborative navigation performance of a non-cooperative target is improved, and the non-cooperative target can be safely attached to the surface of a small celestial body.
5. The non-cooperative target flexible attachment multi-node fusion estimation method according to claim 4, wherein: in order to reduce the influence of the time-varying, unknown node configuration on multi-node information fusion, a node configuration estimation problem is constructed from multi-viewpoint geometric optical information under the flexible-connection constraint, and the estimation problem is formulated as a least-squares estimation problem.
Background
The small-celestial-body dynamic environment is complex and prior information is scarce, so safe and accurate attachment on the surface is very difficult to achieve. In future small-celestial-body exploration missions, the flexible multi-node collaborative attachment mode can reduce the risk of the detector overturning or rebounding on the surface of the small celestial body, and collaborative navigation, guidance and control effectively improve the autonomy and safety of the mission. To realize collaborative autonomous navigation, two strategies, data fusion and state fusion, have been proposed in the literature, together with estimation methods such as federated Kalman filtering and covariance intersection fusion. On this basis, to further improve collaborative navigation performance, collaborative estimation methods under observation time delay and incomplete measurement have also been proposed, correcting the influence of asynchronous, uncertain observation step lengths on collaborative estimation. However, for the multi-node collaborative navigation problem of flexible attachment to a non-cooperative small celestial body, the time-varying nature of the relative states between nodes (the node configuration) must be considered together with the characteristics of flexible attachment in order to achieve accurate collaborative estimation and provide support for safe attachment.
Disclosure of Invention
The invention discloses a non-cooperative target flexible attachment multi-node fusion estimation method, which aims to solve the following technical problem: for the flexible multi-node collaborative attachment process on a small celestial body, taking into account the influence of an uncertain node configuration on node information fusion, the relative states between nodes are estimated from multi-node optical observation information under the flexible-connection constraint, and the observation data or states of the nodes are fused, thereby improving the autonomous navigation performance of each node during flexible attachment, further improving the flexible-attachment collaborative navigation performance for the non-cooperative target, and facilitating safe attachment of the non-cooperative target on the surface of the small celestial body.
The purpose of the invention is realized by the following technical scheme.
The invention discloses a non-cooperative target flexible attachment multi-node fusion estimation method, which aims at the problem of time-varying node configuration in the flexible attachment process of a small celestial body, utilizes pixels and image lines of corner features in a coincident observation region between nodes to construct multi-view geometric optical information, utilizes the multi-view geometric optical information to estimate the node configuration on the premise of flexible connection constraint to obtain more accurate node configuration, further utilizes the configuration to convert observation data or state estimation of each node to adjacent nodes, and synchronously processes the observation data or state estimation at different moments through dynamics recursion to realize multi-node fusion estimation. The invention can improve the autonomous navigation performance of each node in the flexible attachment process, and is beneficial to realizing the safe attachment of the surface of the small celestial body.
The invention discloses a non-cooperative target flexible attachment multi-node fusion estimation method, which comprises the following steps:
Step one, constructing multi-viewpoint geometric optical information from the pixel and image-line coordinates of corner features in the overlapping observation regions between nodes, and constructing a node configuration estimation problem using the multi-viewpoint geometric optical information under the flexible-connection constraint, so as to realize accurate estimation of the cooperatively attached multi-node configuration.
In order to reduce the influence of the time-varying, unknown node configuration on multi-node information fusion, a node configuration estimation problem is constructed from multi-viewpoint geometric optical information under the flexible-connection constraint; preferably, the estimation problem is a least-squares estimation problem.
The node configuration is represented as
$\{{}^{L}x_{ij} \in {}^{L}X_F \mid (i,j) \in \varepsilon\}$   (1)
where ${}^{L}X_F$ is the node configuration, ${}^{L}x_{ij} = ({}^{L}p_{ij}, {}^{L}v_{ij}, {}^{L}q_{ij})$ are the elements that make up the configuration, the node configuration is abstracted into a graph in which $v$ represents the nodes and $\varepsilon$ represents the connection relations between the nodes, and each node $i$ has an associated set of adjacent nodes.
The observation equation of the node-mounted camera is
where the observed quantity is the normalized feature pixel and image-line coordinate, $(x_i, y_i, z_i)$ is the position of node $i$ in the attachment-point coordinate system, $c_{ij}$ are the elements of the attitude transformation matrix of node $i$ in the attachment-point coordinate system, and the feature has known three-dimensional coordinates in the attachment-point coordinate system. Considering that the cameras of the nodes have overlapping observation regions, the epipolar constraint relation between pairs of images is established from the pixel coordinates of homonymous feature points
where the two point correspondences are the pixel and image-line coordinates of the same feature in the two viewpoint images, and $E$ is the essential matrix, which represents the pose relationship between the two cameras
where the essential matrix is built from the relative position and the relative attitude between the nodes. The essential matrix is solved by simultaneously stacking the equations formed by multiple groups of homonymous features, and the relative pose relationship between the two cameras is obtained by decomposing it
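As an illustrative sketch of this step (not the patented implementation itself), the Python fragment below uses OpenCV's five-point RANSAC solver to estimate the essential matrix from matched corner features in the overlapping view of two node cameras and decomposes it into a relative attitude and a unit-norm relative-position direction; the intrinsic matrix `K` and the matched pixel arrays are assumed to come from the feature-matching front end.

```python
import cv2
import numpy as np

def relative_pose_from_matches(pts_i, pts_j, K):
    """Estimate the relative pose between two node cameras from matched
    corner features (pixel coordinates) in their overlapping view.

    pts_i, pts_j : (N, 2) float arrays of homonymous feature pixels.
    K            : (3, 3) camera intrinsic matrix (cameras assumed identical).
    Returns R (3x3 relative attitude) and t (3x1 unit translation direction).
    """
    # Five-point method inside a RANSAC loop rejects mismatched features.
    E, inliers = cv2.findEssentialMat(pts_i, pts_j, K,
                                      method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # Decompose E and resolve the four-fold ambiguity with a cheirality check.
    _, R, t, _ = cv2.recoverPose(E, pts_i, pts_j, K, mask=inliers)
    return R, t
```

Only the direction of the relative position is observable from the epipolar constraint; its scale must be fixed separately, which is one reason the flexible-connection distance constraint of equation (10) is useful.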
The constraint between the relative attitude and the relative position vector between the nodes is obtained from equations (7) and (8). It is worth noting that the specific value of the relative attitude between the nodes is uniquely determined by the feature visibility.
where $R_i$ denotes the body coordinate system of node $i$, $|\cdot|$ is the absolute-value symbol, and $\|\cdot\|$ is the vector-modulus (norm) symbol.
Meanwhile, an inequality constraint on the relative distance between nodes is obtained from the flexible connection relation between the nodes
where the constraint involves the relative distance between the nodes in the nominal state and the relative-distance variation range between the nodes allowed by the structure.
Equations (9) and (10) provide effective information for the node configuration state estimation, i.e., the posterior estimate of the node configuration needs to satisfy
s.t.
where $J$ is the loss function of the unconstrained optimal estimation of the node configuration, evaluated against the prior configuration estimate obtained by dynamics recursion and the configuration prior covariance; equation (12) is the equality constraint defined by equation (9), equation (13) is the inequality constraint defined by equation (10), and the inequality constraint of equation (13) is converted into an equality constraint by the active-set method
The concrete form of the above formula is
Equations (11) to (13) constitute a constrained optimal estimation problem, and the optimal estimation problem of the node configuration under the constraints is constructed by the Lagrange multiplier method
where $^{*}J$ is the loss function of the constrained optimal estimation problem of the node configuration, $\lambda_1$ and $\lambda_2$ are the Lagrange multipliers, the superscript $(\kappa)$ denotes the $\kappa$-th row of the matrix, and $n_1$ and $n_2$ are the numbers of constraints. The optimal estimation problem is solved, and the estimated node configuration is passed to step two or step three to realize the state fusion estimation of each node.
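To make the constrained estimation of equations (11) to (17) concrete, the following minimal sketch poses the posterior configuration estimate as a constrained least-squares problem and hands it to SciPy's SLSQP solver, which handles the equality and inequality constraints internally. Only the prior (Mahalanobis) term of the loss is shown, and `eq_con` / `ineq_con` are placeholder callables standing in for equations (9) and (10); this is an assumed stand-in for, not a reproduction of, the Lagrange-multiplier formulation of equations (16) and (17).

```python
import numpy as np
from scipy.optimize import minimize

def estimate_configuration(x_prior, P_prior, eq_con, ineq_con):
    """Posterior node-configuration estimate: minimize the Mahalanobis distance
    to the dynamics-propagated prior subject to the epipolar equality constraint
    (eq. 9/12) and the flexible-connection distance inequality (eq. 10/13).

    x_prior  : (n,) prior configuration vector from dynamics recursion.
    P_prior  : (n, n) prior configuration covariance.
    eq_con   : callable x -> residuals that must equal zero.
    ineq_con : callable x -> values that must be >= 0 (eq. 10 rearranged).
    """
    P_inv = np.linalg.inv(P_prior)

    def loss(x):
        # Unconstrained loss J of eq. (11): weighted distance to the prior.
        dx = x - x_prior
        return 0.5 * dx @ P_inv @ dx

    res = minimize(loss, x_prior, method="SLSQP",
                   constraints=[{"type": "eq",   "fun": eq_con},
                                {"type": "ineq", "fun": ineq_con}])
    return res.x
```

In the embodiment below the same problem is solved with an optimization toolbox, so any solver that supports mixed equality and inequality constraints can play this role.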
And secondly, taking the pixel coordinates of the surface appearance features of the small celestial body in each node camera as observed quantities, and when the observed information of a certain node is insufficient, converting the observation of other nodes into the observation coordinate system of the node by using the node configuration, and fusing the observation of each node.
The flexible detector has m measurement nodes, and the node dynamic equation is
where the state of the $i$-th node, the control quantity and a noise term appear in the equation. The controlled acceleration experienced by a node mainly comprises the thrust acceleration of the actuator and the acceleration of the flexible structure, both of which are measured by the inertial navigation sensors
In the above formula, the accelerometer and gyroscope measurements appear together with the measurement noises of the inertial navigation sensor. Besides the inertial sensor, each node also carries a navigation camera, whose observed quantity is
where $y_i$ is the measurement of the navigation camera, $h$ is the height of the node above the attachment plane, and $w_c$ is the measurement noise, modeled as Gaussian white noise. The camera measurement equation of each node is
where $y_i$ is the observed quantity, $f$ is the focal length of the camera, $p_i$ is the position vector of node $i$ in the attachment-point coordinate system, $R_{i,c}$ is the attitude transformation matrix between the camera coordinate system and the attachment-point coordinate system, and the equation further involves a noise term, the line-of-sight vector between the feature point and the camera of node $i$, and the position vector of the feature point in the attachment-point coordinate system.
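The exact form of the camera measurement equation (23) is not reproduced above, so the sketch below uses a standard pinhole projection that is consistent with the quantities listed (node position $p_i$, feature position in the attachment-point frame, attitude matrix $R_{i,c}$, focal length $f$); the function and argument names are illustrative assumptions.

```python
import numpy as np

def camera_measurement(p_i, R_ic, l_feature, f, noise_std=0.0):
    """Pinhole-style measurement model: project a surface feature with known
    attachment-frame position onto the image plane of node i's camera.

    p_i       : (3,) node position in the attachment-point frame.
    R_ic      : (3, 3) attitude matrix from the attachment-point frame to the camera frame.
    l_feature : (3,) feature position in the attachment-point frame.
    f         : camera focal length (sets the units of the returned coordinates).
    """
    los = R_ic @ (l_feature - p_i)          # line-of-sight vector in the camera frame
    u = f * los[0] / los[2]                 # pixel (column) coordinate
    v = f * los[1] / los[2]                 # image-line (row) coordinate
    return np.array([u, v]) + noise_std * np.random.randn(2)
```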
The node configuration estimated in step one is used to convert the observations of other nodes into the observation coordinate system of the present node and to fuse the observations of the nodes; the observation information conversion process is
where $y_{j \to i}$ is the observation of node $j$ transformed to node $i$, and the transformation uses the homogeneous observation of node $j$, the attitude transformation matrix from the node-$j$ frame to the node-$i$ frame, and the position vector from node $i$ to node $j$ expressed in the node-$i$ frame.
The fused observed quantity $y_i^{*}$ of node $i$ is
where $y_i$ is the observation of node $i$ itself and $y_{j \to i}$ are the transformed observations of the neighboring nodes.
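A minimal sketch of the observation transfer of equations (24) and (25) is given below: the observation of node $j$ is lifted to a three-dimensional line of sight with an assumed feature depth (for example from the laser range finder or the altitude $h$), re-expressed in node $i$'s frame using the estimated configuration, and re-projected; the fused observation then stacks node $i$'s own observation with the transferred ones. The depth input and the simple stacking are assumptions, since the original formulas are not reproduced in the text.

```python
import numpy as np

def transfer_observation(y_j, depth_j, R_ji, t_ij_i, f):
    """Transform node j's camera observation into node i's camera frame using
    the estimated node configuration (cf. eq. 24).

    y_j     : (2,) pixel/image-line observation of node j.
    depth_j : assumed feature depth along node j's boresight.
    R_ji    : (3, 3) attitude transformation from the node-j frame to the node-i frame.
    t_ij_i  : (3,) position vector from node i to node j, expressed in the node-i frame.
    f       : focal length shared by the node cameras.
    """
    X_j = depth_j * np.array([y_j[0] / f, y_j[1] / f, 1.0])  # feature in the j frame
    X_i = R_ji @ X_j + t_ij_i                                # re-express in the i frame
    return f * X_i[:2] / X_i[2]                              # transferred observation y_{j->i}

def fuse_observations(y_i, transferred):
    """Stack node i's own observation with the transferred neighbour observations (cf. eq. 25)."""
    return np.concatenate([y_i] + list(transferred))
```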
And thirdly, each node estimates its own state, then combines it with the node configuration estimated in step one and fuses the state estimation results of the nodes under the minimum-estimation-covariance criterion, so that the autonomous navigation performance of each node during flexible attachment is improved, the flexible-attachment collaborative navigation performance for the non-cooperative target is further improved, and safe attachment of the non-cooperative target on the surface of the small celestial body is facilitated.
Considering that both the node dynamics and the observation equation have strong uncertainty, the node state is estimated with an unscented Kalman filter. The sigma sampling points at time $k-1$ computed by the unscented transform are
where the sigma sampling points are generated around the state of node $i$ at time $k-1$, $n$ is the state dimension, $\lambda$ is a scale parameter, and $\iota$ is a process variable.
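Equations (26) to (28) are not reproduced in the text; as an illustrative stand-in, the sketch below generates classical unscented-transform sigma points with scale parameter $\lambda$ and carries out the standard prediction step described next. The weights follow a textbook formulation and are an assumption, not necessarily the exact ones used in the patent.

```python
import numpy as np

def sigma_points(x, P, lam):
    """Generate 2n+1 sigma sampling points around the state estimate (cf. eq. 26)."""
    n = x.size
    S = np.linalg.cholesky((n + lam) * P)        # matrix square root of (n + lambda) P
    chi = np.empty((2 * n + 1, n))
    chi[0] = x
    for i in range(n):
        chi[i + 1] = x + S[:, i]
        chi[n + 1 + i] = x - S[:, i]
    return chi

def ut_weights(n, lam):
    """Classical unscented-transform weights for mean and covariance."""
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    w[0] = lam / (n + lam)
    return w

def ukf_predict(x, P, lam, f_dyn, Q):
    """Prediction step: propagate the sigma points through the node dynamics
    f_dyn and recombine them into a predicted mean and covariance (cf. eq. 27)."""
    n = x.size
    chi = np.array([f_dyn(c) for c in sigma_points(x, P, lam)])
    w = ut_weights(n, lam)
    x_pred = w @ chi
    d = chi - x_pred
    P_pred = d.T @ (w[:, None] * d) + Q
    return x_pred, P_pred
```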
The state prediction process of the unscented Kalman filter is
The state update process of the unscented Kalman filter is
The state estimate of a node is converted into the state estimate of an adjacent node through coordinate transformation. Considering asynchronous state estimation, state recursion via the unscented transform is used to synchronize estimates obtained at different times, and the conversion process of the state estimate is
where the first quantity is the time-$k$ configuration estimate obtained by dynamics recursion from the state estimation result at time $\tau$, and the second is the time-$k$ estimate obtained by dynamics recursion from the state estimation result at time $\zeta$. Similarly, the transformation process of the covariance estimate is
where the covariance is the time-$k$ state estimation covariance obtained by dynamics recursion from the state estimation covariance at time $\zeta$.
The states of the nodes are fused by covariance intersection fusion
where $\pi_{ij}$ are the weighting coefficients, obtained by trace weighting of the covariances
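The fusion of equations (33) to (35) can be sketched with the standard covariance-intersection update below, using trace-based weights as described above; the exact weighting formula of the patent is not reproduced, so the inverse-trace normalization here is an assumption.

```python
import numpy as np

def covariance_intersection(estimates, covariances):
    """Covariance-intersection fusion of per-node state estimates with
    trace-weighted coefficients; it avoids assuming that the cross-correlations
    between the node estimates are known.

    estimates   : list of (n,) estimates of the same node state from different sources.
    covariances : list of (n, n) matching estimation covariances.
    """
    # Trace weighting: sources with a smaller covariance trace get a larger weight.
    inv_traces = np.array([1.0 / np.trace(P) for P in covariances])
    pi = inv_traces / inv_traces.sum()

    P_fused_inv = sum(w * np.linalg.inv(P) for w, P in zip(pi, covariances))
    P_fused = np.linalg.inv(P_fused_inv)
    x_fused = P_fused @ sum(w * np.linalg.inv(P) @ x
                            for w, x, P in zip(pi, estimates, covariances))
    return x_fused, P_fused
```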
The relative state between the nodes is estimated by utilizing the multi-node optical observation information, and then observation data or states of all the nodes are fused, so that the autonomous navigation performance of all the nodes in the flexible attachment process is improved, the flexible attachment collaborative navigation performance of a non-cooperative target is improved, and the non-cooperative target can be safely attached to the surface of a small celestial body.
Beneficial effects:
1. The non-cooperative target flexible attachment multi-node fusion estimation method disclosed by the invention, aiming at the problem that the node configuration is time-varying and unknown due to the flexible structure, constructs multi-viewpoint geometric optical information from the pixel and image-line coordinates of corner features in the overlapping observation region between nodes, and estimates the relative states between the nodes from this multi-viewpoint geometric optical information under the flexible-connection constraint to obtain the node configuration, which is beneficial to maintaining the configuration of the multi-node system.
2. The non-cooperative target flexible attachment multi-node fusion estimation method disclosed by the invention has the advantages that the observation data or state estimation of each node is converted into the adjacent nodes by utilizing the configuration, the observation data or state estimation at different moments is synchronously processed by dynamics recursion, the asynchronous fusion of the observation data and state estimation of each node is realized, the navigation precision and the risk resistance of each node are favorably improved, the flexible attachment collaborative navigation performance of the non-cooperative target is further improved, and the safe attachment of the non-cooperative target on the surface of a small celestial body is favorably realized.
Drawings
FIG. 1 is a flow chart of a non-cooperative target flexible attachment multi-node fusion estimation method disclosed by the present invention;
FIG. 2 is a diagram of an actual attachment trajectory of flexible multi-node cooperative attachment of a small celestial body;
fig. 3 is a variation curve of the relative position estimation error between nodes in the small celestial body attachment process, where fig. 3(a) is the relative position estimation error between node 1 and node 2, fig. 3(b) is the relative position estimation error between node 1 and node 3, and fig. 3(c) is the relative position estimation error between node 2 and node 3;
fig. 4 is a variation curve of the estimation error of the relative attitude between the nodes in the process of attaching the small celestial body, in which fig. 4(a) is the estimation error of the relative attitude between the node 1 and the node 2, fig. 4(b) is the estimation error of the relative attitude between the node 1 and the node 3, and fig. 4(c) is the estimation error of the relative attitude between the node 2 and the node 3;
FIG. 5 is a variation curve of the mean estimated error of the node position in the process of attaching the small celestial body, wherein FIG. 5(a) is the mean estimated error of the x-axis position of each node, FIG. 5(b) is the mean estimated error of the y-axis position of each node, and FIG. 5(c) is the mean estimated error of the z-axis position of each node;
fig. 6 is a variation curve of the mean attitude estimation error of the nodes during the attachment of the small celestial body, where fig. 6(a) is the mean yaw attitude estimation error of each node, fig. 6(b) is the mean pitch attitude estimation error of each node, and fig. 6(c) is the mean roll attitude estimation error of each node.
Detailed Description
For a better understanding of the objects and advantages of the present invention, reference should be made to the following detailed description taken in conjunction with the accompanying drawings and examples.
Example 1:
In order to verify the feasibility of the method, an autonomous-navigation simulation of surface attachment is carried out, taking an attachment task at a certain small celestial body as an example. The number of flexible-attachment measurement nodes is 3, and the positions of the nodes in the detector body coordinate system are (-0.375, 0), (0, 0.75, 0) and (0.375, 0) respectively; the initial position of the detector centroid in the attachment-point coordinate system is (10, 800) m, the initial velocity is (-0.02, 0.01, -0.5) m/s, the initial attitude quaternion is (1, 0, 0, 0), the initial position error is $\varepsilon_p \sim N(0, [25, 25, 25]^T\ \mathrm{m}^2)$, the initial velocity error is $\varepsilon_v \sim N(0, [0.01, 0.01, 0.01]^T\ (\mathrm{m/s})^2)$, the initial attitude error is $\varepsilon_q \sim N(0, [25, 25, 25]^T)$, and the initial velocity of each node is consistent with the detector body. All measurement nodes carry an inertial sensor and a navigation camera, and the sensor coordinate systems coincide with the node body coordinate system, i.e., $R_{i,c} = R_{i,\mathrm{imu}} = R_i$. During attachment, the attitude control law keeps the z axis of the detector pointing perpendicularly at the attachment plane; the detector reaches a position 30 m above the attachment point 1800 s after leaving the initial position, with a nominal terminal velocity of (0, 0, 0) m/s; the nominal attachment trajectory is generated by the Apollo guidance law, and in the nominal state each node keeps its initial configuration and its velocity is consistent with the detector centroid. During attachment the feature center points in the field of view are extracted as feature points; the features are generated by the simulation platform, with a 2D-2D matching error of 0.1 pixel and a 2D-3D matching error of 1 pixel.
As shown in fig. 1, the small-celestial-body collaborative attachment multi-node fusion estimation method disclosed in this embodiment comprises the following specific steps:
step one, constructing multi-viewpoint geometric optical information by pixels and image lines of angular point features in an observation area overlapped among nodes, and constructing a node configuration estimation problem by utilizing the multi-viewpoint geometric optical information under the constraint of flexible connection, so as to realize accurate estimation of a cooperative attached multi-node configuration.
Assuming that the flexible detector has 3 measurement nodes, the node configuration is represented as
${}^{L}X_F = ({}^{L}x_{12}, {}^{L}x_{13}, {}^{L}x_{23})$   (36)
where ${}^{L}X_F$ is the node configuration and ${}^{L}x_{ij} = ({}^{L}p_{ij}, {}^{L}v_{ij}, {}^{L}q_{ij})$ are the elements that make up the configuration.
Overlapping observation regions with sufficient texture exist between the node cameras; feature matching is carried out by extracting Shi-Tomasi (S-T) corner points in the overlapping regions, mismatches are removed with the RANSAC algorithm, the essential matrix is solved with the five-point method, and the relative pose relationship between the two cameras is obtained by decomposing the essential matrix. Meanwhile, assuming that the structure allows a relative-distance variation range between nodes of 5%, the optimal estimation model of the node configuration is given by equations (11) to (13); the optimization problem is converted into equations (16) and (17), the optimal estimation problem is solved with an optimization toolbox, and the estimated node configuration is passed to step two and step three to achieve fusion estimation of each node.
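For the image front end of this embodiment, a minimal sketch is shown below: Shi-Tomasi corners are extracted in the overlapping region of one node's image and tracked into the neighbouring node's image. Pyramidal Lucas-Kanade tracking is used here as a stand-in for the unspecified matcher; mismatches are subsequently rejected by the RANSAC loop inside the five-point essential-matrix solver sketched earlier.

```python
import cv2
import numpy as np

def match_overlap_corners(img_i, img_j, max_corners=200):
    """Extract Shi-Tomasi corners in node i's image and find the homonymous
    points in node j's image of the overlapping region.

    img_i, img_j : 8-bit grayscale images of the overlapping observation region.
    Returns matched (M, 2) pixel arrays for the two views.
    """
    pts_i = cv2.goodFeaturesToTrack(img_i, maxCorners=max_corners,
                                    qualityLevel=0.01, minDistance=10)
    # Track the corners from image i into image j with pyramidal Lucas-Kanade.
    pts_j, status, _ = cv2.calcOpticalFlowPyrLK(img_i, img_j, pts_i, None)
    ok = status.ravel() == 1
    return pts_i[ok].reshape(-1, 2), pts_j[ok].reshape(-1, 2)
```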
And secondly, taking the pixel coordinates of the surface appearance features of the small celestial body in each node camera as observed quantities, and when the observed information of a certain node is insufficient, converting the observation of other nodes into the observation coordinate system of the node by using the node configuration to perform the observation and fusion of the nodes.
The kinetic equation of each node under the coordinate system of the attachment point is
where ${}^{L}p_i$ is the position of the $i$-th node in the attachment-point coordinate system, ${}^{L}v_i$ is the velocity of the $i$-th node, ${}^{L}q_i$ is the attitude quaternion of the $i$-th node, $R_i$ is the attitude transformation matrix corresponding to the current attitude, ${}^{L}\omega$ is the spin angular velocity of the small celestial body, ${}^{L}g_i$ is the gravitational acceleration at the $i$-th node, $\Omega(\cdot)$ is the fourth-order antisymmetric-matrix operator, $w_i$ is the noise term, and the equation further involves the resultant acceleration experienced by the node, the angular acceleration of the node, the drift parameters of the inertial navigation sensor with their Markov-process parameters, and the three-dimensional antisymmetric-matrix operator. The resultant acceleration experienced by a node mainly comprises the thrust acceleration of the actuator and the acceleration of the flexible structure, both measured by the inertial navigation sensors
In the above formula, the accelerometer and gyroscope measurements appear together with the measurement noises of the inertial navigation sensor. Besides the inertial sensor, each node also carries optical navigation sensors such as a navigation camera and a laser range finder. The camera parameters of all nodes are identical: the field of view is 30 degrees, the focal length is 717 mm, and the imaging resolution is 1024 × 1024. For the six-degree-of-freedom state estimation problem, at least three landmarks with known three-dimensional positions in the attachment frame are required. The observations of other nodes are converted to the present node for observation fusion using the node configuration estimated in step one, and the observation information conversion process is as shown in equation (24).
And thirdly, each node estimates its own state, then combines it with the node configuration estimated in step one and fuses the state estimation results of the nodes under the minimum-estimation-covariance criterion, so that the autonomous navigation performance of each node during flexible attachment is improved, the flexible-attachment collaborative navigation performance for the non-cooperative target is further improved, and safe attachment of the non-cooperative target on the surface of the small celestial body is facilitated.
Considering that both the node dynamics and the observation equation have strong uncertainties, the node states are estimated with the unscented Kalman filter described by equations (26) to (28). Based on the estimated node configuration, the state and state-covariance estimates of a node are converted into the state and state-covariance estimates of adjacent nodes through equations (29) to (32), and then the states of the nodes are fused by covariance intersection according to equations (33) to (35), so that the autonomous navigation performance of each node during flexible attachment is improved, the flexible-attachment collaborative navigation performance for the non-cooperative target is improved, and safe attachment of the non-cooperative target on the surface of the small celestial body is facilitated.
Considering the attachment trajectory shown in FIG. 2, the node configuration has a random error of 5% in the z direction from the nominal configuration, the dynamics integration step size is 5 s, and the dynamics noise error covariance is
Fig. 3 and 4 show the variation curves of the estimated configuration errors of the relative positions and the relative attitudes between nodes in the attachment process, respectively, wherein the blue dotted line is the error between the nominal configuration and the actual configuration, and the red solid line is the error between the estimated configuration and the actual configuration in the first step. Fig. 5 and fig. 6 respectively show the estimation error variation curves of the position, the speed, and the attitude of each node during the attachment process, where the blue dotted line is the state estimation error before the state estimation fusion of each node, the red solid line is the state estimation error after the state estimation fusion of each node, and each node in the considered scene can observe more than three known terrain features at each observation time.
The above detailed description is intended to illustrate the objects, aspects and advantages of the present invention, and it should be understood that the above detailed description is only exemplary of the present invention and is not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.