Positioning method, positioning device, electronic device, and storage medium
1. A positioning method, applied to a processing device, the method comprising:
in response to a positioning request for an object to be positioned initiated on an operable page, obtaining initial positioning information of the object to be positioned, wherein the initial positioning information comprises at least pose information of the object to be positioned;
performing positioning correction processing on the initial positioning information based on acquired satellite observation data, velocity measurement data, and associated feature point coordinate data matched with the object to be positioned, to obtain target positioning information of the object to be positioned, wherein the feature point coordinate data is obtained by performing feature point matching on associated image frames, and the positioning correction processing comprises: updating the initial positioning information based on the velocity measurement data to obtain intermediate positioning information, and iteratively correcting the intermediate positioning information using the satellite observation data and the feature point coordinate data respectively to obtain the target positioning information; and
presenting the target positioning information corresponding to the object to be positioned on the operable page.
2. The method of claim 1, wherein the updating the initial positioning information based on the velocity measurement data to obtain intermediate positioning information, and iteratively correcting the intermediate positioning information using the satellite observation data and the feature point coordinate data respectively to obtain the target positioning information, comprises:
acquiring satellite observation data matched with the object to be positioned, acquiring velocity measurement data collected by an inertial sensor, and determining feature point coordinate data based on image data collected by an image acquisition device;
establishing a parameter matrix associated with the object to be positioned based on the initial positioning information of the object to be positioned, the velocity measurement data, zero-bias information of the inertial sensor, and carrier phase double-difference ambiguity parameters;
updating the initial positioning information of the object to be positioned based on the velocity measurement data to obtain the intermediate positioning information, and obtaining an intermediate parameter matrix and an intermediate parameter covariance matrix that result from updating the parameter matrix; and
iteratively correcting the intermediate positioning information using the satellite observation data and the feature point coordinate data respectively to obtain the target positioning information of the object to be positioned, and obtaining a second parameter matrix that results from correcting the intermediate parameter matrix and a second parameter covariance matrix that results from correcting the intermediate parameter covariance matrix.
3. The method of claim 2, wherein the iteratively correcting the intermediate positioning information using the satellite observation data and the feature point coordinate data respectively comprises:
establishing a real-time kinematic (RTK) differential constraint relation based on the satellite observation data, establishing a Kalman correction equation based on the RTK differential constraint relation, the intermediate parameter covariance matrix, and a first increment matrix for correcting the intermediate parameter matrix, obtaining a first parameter matrix that results from correcting the intermediate parameter matrix together with a first correction result of the intermediate positioning information, and obtaining a first parameter covariance matrix that results from correcting the intermediate parameter covariance matrix; and
determining pose increment information of the object to be positioned based on the feature point coordinate data, establishing a Kalman correction equation based on the pose increment information, the first parameter covariance matrix, and a second increment matrix for correcting the first parameter matrix, obtaining the second parameter matrix that results from correcting the first parameter matrix together with a second correction result of the intermediate positioning information, and obtaining the second parameter covariance matrix that results from correcting the first parameter covariance matrix.
4. The method of claim 2, wherein the iteratively correcting the intermediate positioning information using the satellite observation data and the feature point coordinate data respectively comprises:
determining pose increment information of the object to be positioned based on the feature point coordinate data, establishing a Kalman correction equation based on the pose increment information, the intermediate parameter covariance matrix, and a first increment matrix for correcting the intermediate parameter matrix, obtaining a first parameter matrix that results from correcting the intermediate parameter matrix together with a first correction result of the intermediate positioning information, and obtaining a first parameter covariance matrix that results from correcting the intermediate parameter covariance matrix; and
establishing a real-time kinematic (RTK) differential constraint relation based on the satellite observation data, establishing a Kalman correction equation based on the RTK differential constraint relation, the first parameter covariance matrix, and a second increment matrix for correcting the first parameter matrix, obtaining the second parameter matrix that results from correcting the first parameter matrix together with a second correction result of the intermediate positioning information, and obtaining the second parameter covariance matrix that results from correcting the first parameter covariance matrix.
5. The method of claim 3 or 4, wherein the establishing a real-time kinematic (RTK) differential constraint relation based on the satellite observation data comprises:
establishing, according to the satellite observation data and the intermediate positioning information of the object to be positioned, a residual matrix comprising a pseudo-range double-difference residual matrix and a carrier phase double-difference residual matrix, as the established RTK differential constraint relation;
and wherein the determining pose increment information of the object to be positioned based on the feature point coordinate data comprises:
determining attitude increment information and position increment information of the object to be positioned using a random sample consensus (RANSAC) algorithm and a normal distributions transform (NDT) algorithm, based on the feature point coordinate data and calibration parameters of the image acquisition device.
6. The method of claim 5, wherein the establishing a residual matrix comprising a pseudo-range double-difference residual matrix and a carrier phase double-difference residual matrix according to the satellite observation data and the intermediate positioning information of the object to be positioned comprises:
determining position information of each positioning satellite according to the satellite observation data, and acquiring position information of a target reference station; determining, based on the position information of each positioning satellite, the position information of the target reference station, and the intermediate positioning information of the object to be positioned, a first geometric distance between each positioning satellite and the target reference station and a second geometric distance between each positioning satellite and the object to be positioned; and determining a pseudo-range double-difference estimate and a carrier phase double-difference estimate based on the first geometric distances and the second geometric distances;
determining, based on pseudo-range information in the satellite observation data, pseudo-range double-difference observations between a reference satellite and the other positioning satellites as observed by the target reference station and the object to be positioned, and determining, according to carrier phase observations in the satellite observation data, carrier phase double-difference observations between the reference satellite and the other positioning satellites as observed by the target reference station and the object to be positioned;
establishing the pseudo-range double-difference residual matrix based on differences between the pseudo-range double-difference observations and the pseudo-range double-difference estimates, and establishing the carrier phase double-difference residual matrix based on differences between the carrier phase double-difference observations and the carrier phase double-difference estimates; and
establishing the residual matrix comprising the pseudo-range double-difference residual matrix and the carrier phase double-difference residual matrix.
7. The method of any one of claims 1 to 4, wherein the obtaining initial positioning information of the object to be positioned comprises:
in response to determining that the object to be positioned is being positioned for the first time, acquiring first network positioning information of the processing device, determining the first network positioning information as initial position information of the object to be positioned, determining initial attitude information of the object to be positioned according to the offset of an inertial coordinate system corresponding to an inertial sensor relative to a terrestrial coordinate system, and taking the initial position information and the initial attitude information as the initial positioning information of the object to be positioned, wherein the processing device and the inertial sensor are mounted on the object to be positioned; and
in response to determining that the object to be positioned is not being positioned for the first time, obtaining historical target positioning information from the previous positioning of the object to be positioned, and determining the historical target positioning information as the initial positioning information of the object to be positioned at the current moment.
8. The method of any one of claims 1 to 4, wherein the acquiring satellite observation data matched with the object to be positioned comprises:
acquiring second network positioning information of the processing device, and sending a data acquisition request to a satellite data server based on the second network positioning information, so that the satellite data server determines a target reference station corresponding to the object to be positioned based on the second network positioning information;
receiving, from the satellite data server, ephemeris information and first satellite observation data transmitted by the target reference station, wherein the first satellite observation data comprises at least pseudo-range information and carrier phase observations obtained by observing each positioning satellite through the target reference station; and
acquiring second satellite observation data observed by a satellite positioning device on the object to be positioned, and taking the ephemeris information, the first satellite observation data, and the second satellite observation data as the acquired satellite observation data matched with the object to be positioned.
9. The method of any one of claims 1 to 4, wherein the obtaining feature point coordinate data associated with the object to be positioned comprises:
acquiring image data collected by an image acquisition device on the object to be positioned, denoising the image data by Wiener filtering, and performing distortion removal on the image data based on intrinsic parameters of the image acquisition device, to obtain processed image data;
performing frame extraction on the processed image data to obtain a first image frame collected at the current moment and a second image frame collected at a positioning moment subsequent to the current moment; and
extracting feature points from the first image frame and the second image frame respectively using an image feature point extraction algorithm, screening out target feature points successfully matched between the first image frame and the second image frame using a feature point matching algorithm, and taking the coordinate data of the target feature points in the first image frame and the second image frame respectively as the acquired feature point coordinate data matched with the object to be positioned.
10. The method of any one of claims 2 to 4, wherein the updating the initial positioning information based on the velocity measurement data to obtain intermediate positioning information comprises:
determining attitude information of the object to be positioned at the current moment based on angular velocity measurement data in the velocity measurement data obtained at the current moment, the attitude information determined at the positioning moment preceding the current moment, and the time interval between the current moment and the preceding positioning moment;
determining velocity information at the current moment based on acceleration measurement data in the velocity measurement data obtained at the current moment, the velocity information at the preceding positioning moment, gravity information corresponding to the current moment, and the time interval, and determining position information at the current moment based on the velocity information at the current moment, the velocity information at the preceding positioning moment, the time interval, and the position information at the preceding positioning moment; and
updating the corresponding parameters in the parameter matrix based on the determined attitude information, velocity information, and position information corresponding to the current moment to obtain the intermediate parameter matrix, and taking the attitude information and the position information corresponding to the current moment as the intermediate positioning information at the current moment.
11. The method of claim 10, further comprising:
constructing a state transition matrix corresponding to a Kalman filtering algorithm based on the acceleration measurement data in the velocity measurement data at the current moment and the attitude information; and
updating an initial parameter covariance matrix according to the state transition matrix and an error matrix determined by attribute information of the inertial sensor, to obtain the intermediate parameter covariance matrix, wherein the initial parameter covariance matrix is the parameter covariance matrix obtained after the previous positioning was completed.
12. A positioning device, comprising:
an obtaining unit, configured to obtain, in response to a positioning request for an object to be positioned initiated on an operable page, initial positioning information of the object to be positioned, wherein the initial positioning information comprises at least pose information of the object to be positioned;
a processing unit, configured to perform positioning correction processing on the initial positioning information based on acquired satellite observation data, velocity measurement data, and associated feature point coordinate data matched with the object to be positioned, to obtain target positioning information of the object to be positioned, wherein the feature point coordinate data is obtained by performing feature point matching on associated image frames, and the positioning correction processing comprises: updating the initial positioning information based on the velocity measurement data to obtain intermediate positioning information, and iteratively correcting the intermediate positioning information using the satellite observation data and the feature point coordinate data respectively to obtain the target positioning information; and
a presentation unit, configured to present the target positioning information corresponding to the object to be positioned on the operable page.
13. The device of claim 12, wherein, in updating the initial positioning information based on the velocity measurement data to obtain the intermediate positioning information and iteratively correcting the intermediate positioning information using the satellite observation data and the feature point coordinate data respectively to obtain the target positioning information, the processing unit is configured to:
acquire satellite observation data matched with the object to be positioned, acquire velocity measurement data collected by an inertial sensor, and determine feature point coordinate data based on image data collected by an image acquisition device;
establish a parameter matrix associated with the object to be positioned based on the initial positioning information of the object to be positioned, the velocity measurement data, zero-bias information of the inertial sensor, and carrier phase double-difference ambiguity parameters;
update the initial positioning information of the object to be positioned based on the velocity measurement data to obtain the intermediate positioning information, and obtain an intermediate parameter matrix and an intermediate parameter covariance matrix that result from updating the parameter matrix; and
iteratively correct the intermediate positioning information using the satellite observation data and the feature point coordinate data respectively to obtain the target positioning information of the object to be positioned, and obtain a second parameter matrix that results from correcting the intermediate parameter matrix and a second parameter covariance matrix that results from correcting the intermediate parameter covariance matrix.
14. An electronic device, comprising a processor and a memory, wherein the memory stores program code that, when executed by the processor, causes the processor to perform the steps of the method of any one of claims 1 to 11.
15. A computer-readable storage medium, comprising program code that, when run on an electronic device, causes the electronic device to perform the steps of the method of any one of claims 1 to 11.
Background
With the development of navigation and positioning technology, a processing device can guide the movement of a specified object by positioning that object.
In the related art, positioning of a specified object is usually achieved through the combined action of an inertial navigation system (INS) and a global navigation satellite system (GNSS).
However, in scenarios that rely on GNSS for navigation and positioning, positioning accuracy is limited by GNSS signal quality: in complex road environments and areas with weak signals, inaccurate positioning, position drift, or position jumps are likely to occur, and positioning requirements cannot be met.
Disclosure of Invention
Embodiments of the present application provide a positioning method, a positioning device, an electronic device, and a storage medium, which are used to improve positioning accuracy and to avoid inaccurate positioning caused by degraded GNSS signal quality.
In a first aspect, an embodiment of the present application provides a positioning method, applied to a processing device, the method comprising:
in response to a positioning request for an object to be positioned initiated on an operable page, obtaining initial positioning information of the object to be positioned, wherein the initial positioning information comprises at least pose information of the object to be positioned;
performing positioning correction processing on the initial positioning information based on acquired satellite observation data, velocity measurement data, and associated feature point coordinate data matched with the object to be positioned, to obtain target positioning information of the object to be positioned, wherein the feature point coordinate data is obtained by performing feature point matching on associated image frames, and the positioning correction processing comprises: updating the initial positioning information based on the velocity measurement data to obtain intermediate positioning information, and iteratively correcting the intermediate positioning information using the satellite observation data and the feature point coordinate data respectively to obtain the target positioning information; and
presenting the target positioning information corresponding to the object to be positioned on the operable page.
Optionally, the updating the initial positioning information based on the velocity measurement data to obtain intermediate positioning information, and iteratively correcting the intermediate positioning information using the satellite observation data and the feature point coordinate data respectively to obtain the target positioning information, includes:
acquiring satellite observation data matched with the object to be positioned, acquiring velocity measurement data collected by an inertial sensor, and determining feature point coordinate data based on image data collected by an image acquisition device;
establishing a parameter matrix associated with the object to be positioned based on the initial positioning information of the object to be positioned, the velocity measurement data, zero-bias information of the inertial sensor, and carrier phase double-difference ambiguity parameters;
updating the initial positioning information of the object to be positioned based on the velocity measurement data to obtain the intermediate positioning information, and obtaining an intermediate parameter matrix and an intermediate parameter covariance matrix that result from updating the parameter matrix; and
iteratively correcting the intermediate positioning information using the satellite observation data and the feature point coordinate data respectively to obtain the target positioning information of the object to be positioned, and obtaining a second parameter matrix that results from correcting the intermediate parameter matrix and a second parameter covariance matrix that results from correcting the intermediate parameter covariance matrix.
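By way of illustration, the composition of the parameter matrix described above can be sketched as follows; the field ordering, dimensions, and names are illustrative assumptions, not the embodiment's actual layout:

```python
# Hypothetical sketch of the fused state ("parameter matrix") described above.
# Field names, ordering, and sizes are illustrative assumptions.

def make_state(position, attitude, velocity, gyro_bias, accel_bias, ambiguities):
    """Stack pose, velocity, inertial-sensor zero-bias terms, and carrier phase
    double-difference ambiguity parameters into one flat state vector."""
    return list(position) + list(attitude) + list(velocity) \
         + list(gyro_bias) + list(accel_bias) + list(ambiguities)

state = make_state(
    position=(0.0, 0.0, 0.0),    # initial position (e.g. from network positioning)
    attitude=(0.0, 0.0, 0.0),    # roll, pitch, yaw
    velocity=(0.0, 0.0, 0.0),
    gyro_bias=(0.0, 0.0, 0.0),   # inertial-sensor zero-bias terms
    accel_bias=(0.0, 0.0, 0.0),
    ambiguities=(0.0,) * 4,      # one ambiguity per double-difference pair
)
```

The prediction step then updates the pose and velocity fields from the velocity measurement data, while the two correction steps adjust all fields through the Kalman equations.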
Optionally, the iteratively correcting the intermediate positioning information using the satellite observation data and the feature point coordinate data respectively includes:
establishing a real-time kinematic (RTK) differential constraint relation based on the satellite observation data, establishing a Kalman correction equation based on the RTK differential constraint relation, the intermediate parameter covariance matrix, and a first increment matrix for correcting the intermediate parameter matrix, obtaining a first parameter matrix that results from correcting the intermediate parameter matrix together with a first correction result of the intermediate positioning information, and obtaining a first parameter covariance matrix that results from correcting the intermediate parameter covariance matrix; and
determining pose increment information of the object to be positioned based on the feature point coordinate data, establishing a Kalman correction equation based on the pose increment information, the first parameter covariance matrix, and a second increment matrix for correcting the first parameter matrix, obtaining the second parameter matrix that results from correcting the first parameter matrix together with a second correction result of the intermediate positioning information, and obtaining the second parameter covariance matrix that results from correcting the first parameter covariance matrix.
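By way of illustration, the two sequential Kalman corrections described above can be sketched in scalar form; the measurement models and numeric values are illustrative assumptions, not the embodiment's actual equations:

```python
def kalman_correct(x, P, z, h, H, R):
    """One scalar Kalman correction: state x with variance P, measurement z
    with model z ~ h(x) and Jacobian H, and measurement variance R.
    Returns the corrected state, corrected variance, and the increment applied."""
    residual = z - h(x)
    S = H * P * H + R          # innovation variance
    K = P * H / S              # Kalman gain
    dx = K * residual          # the "increment" that corrects the state
    return x + dx, (1.0 - K * H) * P, dx

# Two corrections applied in sequence, as in the step above: the covariance
# output of the first (RTK-style) correction feeds the second (vision-style).
x, P = 10.0, 4.0                                                       # intermediate state/covariance
x, P, d1 = kalman_correct(x, P, z=12.0, h=lambda s: s, H=1.0, R=1.0)   # RTK-style correction
x, P, d2 = kalman_correct(x, P, z=11.0, h=lambda s: s, H=1.0, R=1.0)   # vision-style correction
```

The point of the sequence is that the first correction tightens the covariance before the second correction is applied, so the second residual is weighed against an already reduced uncertainty.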
Optionally, the iteratively correcting the intermediate positioning information using the satellite observation data and the feature point coordinate data respectively includes:
determining pose increment information of the object to be positioned based on the feature point coordinate data, establishing a Kalman correction equation based on the pose increment information, the intermediate parameter covariance matrix, and a first increment matrix for correcting the intermediate parameter matrix, obtaining a first parameter matrix that results from correcting the intermediate parameter matrix together with a first correction result of the intermediate positioning information, and obtaining a first parameter covariance matrix that results from correcting the intermediate parameter covariance matrix; and
establishing a real-time kinematic (RTK) differential constraint relation based on the satellite observation data, establishing a Kalman correction equation based on the RTK differential constraint relation, the first parameter covariance matrix, and a second increment matrix for correcting the first parameter matrix, obtaining the second parameter matrix that results from correcting the first parameter matrix together with a second correction result of the intermediate positioning information, and obtaining the second parameter covariance matrix that results from correcting the first parameter covariance matrix.
Optionally, the establishing a real-time kinematic (RTK) differential constraint relation based on the satellite observation data includes:
establishing, according to the satellite observation data and the intermediate positioning information of the object to be positioned, a residual matrix comprising a pseudo-range double-difference residual matrix and a carrier phase double-difference residual matrix, as the established RTK differential constraint relation.
The determining pose increment information of the object to be positioned based on the feature point coordinate data includes:
determining attitude increment information and position increment information of the object to be positioned using a random sample consensus (RANSAC) algorithm and a normal distributions transform (NDT) algorithm, based on the feature point coordinate data and calibration parameters of the image acquisition device.
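By way of illustration, the outlier-robust increment estimation can be sketched with a minimal RANSAC over 2-D point matches; a real embodiment would estimate a full pose increment (for example with RANSAC plus NDT over calibrated image coordinates), and all names and values here are illustrative:

```python
import random

def ransac_translation(pairs, iters=100, tol=0.1, seed=0):
    """Minimal RANSAC sketch: estimate a 2-D translation increment from matched
    feature-point pairs ((x, y) in frame 1, (x', y') in frame 2), rejecting
    outlier matches. An illustrative stand-in for the claimed RANSAC + NDT
    pose-increment estimation, not the actual algorithm."""
    rng = random.Random(seed)
    best, best_inliers = (0.0, 0.0), -1
    for _ in range(iters):
        (ax, ay), (bx, by) = rng.choice(pairs)
        cand = (bx - ax, by - ay)                  # hypothesis from one match
        inliers = sum(
            1 for (px, py), (qx, qy) in pairs
            if abs(qx - px - cand[0]) < tol and abs(qy - py - cand[1]) < tol
        )
        if inliers > best_inliers:                 # keep the best-supported hypothesis
            best, best_inliers = cand, inliers
    return best

pairs = [((x, x), (x + 1.0, x + 2.0)) for x in range(8)]   # true increment (1, 2)
pairs.append(((0.0, 0.0), (9.0, 9.0)))                     # one outlier match
dx, dy = ransac_translation(pairs)
```

The outlier pair only ever gathers one supporting match, so the hypothesis drawn from any correct match wins the consensus vote.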
Optionally, the establishing a residual matrix comprising a pseudo-range double-difference residual matrix and a carrier phase double-difference residual matrix according to the satellite observation data and the intermediate positioning information of the object to be positioned includes:
determining position information of each positioning satellite according to the satellite observation data, and acquiring position information of a target reference station; determining, based on the position information of each positioning satellite, the position information of the target reference station, and the intermediate positioning information of the object to be positioned, a first geometric distance between each positioning satellite and the target reference station and a second geometric distance between each positioning satellite and the object to be positioned; and determining a pseudo-range double-difference estimate and a carrier phase double-difference estimate based on the first geometric distances and the second geometric distances;
determining, based on pseudo-range information in the satellite observation data, pseudo-range double-difference observations between a reference satellite and the other positioning satellites as observed by the target reference station and the object to be positioned, and determining, according to carrier phase observations in the satellite observation data, carrier phase double-difference observations between the reference satellite and the other positioning satellites as observed by the target reference station and the object to be positioned;
establishing the pseudo-range double-difference residual matrix based on differences between the pseudo-range double-difference observations and the pseudo-range double-difference estimates, and establishing the carrier phase double-difference residual matrix based on differences between the carrier phase double-difference observations and the carrier phase double-difference estimates; and
establishing the residual matrix comprising the pseudo-range double-difference residual matrix and the carrier phase double-difference residual matrix.
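By way of illustration, one entry of the residual matrix described above can be computed with the standard double-difference formula; the satellite identifiers and range values are illustrative:

```python
def double_difference(obs_rover, obs_base, ref_sat, sat):
    """Double difference between the rover (object to be positioned) and the
    base (target reference station) for one satellite against the reference
    satellite. obs_* map a satellite id to an observed range in meters; the
    same formula applies to pseudo-ranges and carrier-phase ranges."""
    single_ref = obs_rover[ref_sat] - obs_base[ref_sat]   # between-receiver SD, reference sat
    single_sat = obs_rover[sat] - obs_base[sat]           # between-receiver SD, other sat
    return single_sat - single_ref                        # between-satellite difference

# Residual = observed DD minus the DD predicted from the geometric distances.
observed  = {"G01": 20_000_105.0, "G07": 21_500_230.0}    # rover pseudo-ranges (m)
base_obs  = {"G01": 20_000_100.0, "G07": 21_500_220.0}    # base-station pseudo-ranges
predicted = {"G01": 20_000_104.0, "G07": 21_500_227.0}    # from estimated positions
base_pred = {"G01": 20_000_100.0, "G07": 21_500_220.0}

dd_obs = double_difference(observed, base_obs, "G01", "G07")
dd_est = double_difference(predicted, base_pred, "G01", "G07")
residual = dd_obs - dd_est   # one entry of the pseudo-range DD residual matrix
```

Double differencing cancels receiver and satellite clock errors, which is why the residual constrains only the geometry (and, for carrier phase, the ambiguity parameter).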
Optionally, the obtaining initial positioning information of the object to be positioned includes:
in response to determining that the object to be positioned is being positioned for the first time, acquiring first network positioning information of the processing device, determining the first network positioning information as initial position information of the object to be positioned, determining initial attitude information of the object to be positioned according to the offset of an inertial coordinate system corresponding to an inertial sensor relative to a terrestrial coordinate system, and taking the initial position information and the initial attitude information as the initial positioning information of the object to be positioned, wherein the processing device and the inertial sensor are mounted on the object to be positioned; and
in response to determining that the object to be positioned is not being positioned for the first time, obtaining historical target positioning information from the previous positioning of the object to be positioned, and determining the historical target positioning information as the initial positioning information of the object to be positioned at the current moment.
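By way of illustration, the first-fix versus non-first-fix branch can be sketched as follows; the argument names and values are illustrative:

```python
def initial_positioning_info(is_first_fix, network_position, imu_attitude, last_fix):
    """Cold start: seed position from the device's network positioning and
    attitude from the inertial frame's offset relative to the Earth frame.
    Warm start: reuse the target positioning info from the previous run."""
    if is_first_fix:
        return {"position": network_position, "attitude": imu_attitude}
    return dict(last_fix)

cold = initial_positioning_info(True, (30.0, 120.0, 5.0), (0.0, 0.0, 1.57), None)
warm = initial_positioning_info(
    False, None, None,
    {"position": (30.1, 120.1, 6.0), "attitude": (0.0, 0.0, 1.6)},
)
```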
Optionally, the acquiring satellite observation data matched with the object to be positioned includes:
acquiring second network positioning information of the processing device, and sending a data acquisition request to a satellite data server based on the second network positioning information, so that the satellite data server determines a target reference station corresponding to the object to be positioned based on the second network positioning information;
receiving, from the satellite data server, ephemeris information and first satellite observation data transmitted by the target reference station, wherein the first satellite observation data comprises at least pseudo-range information and carrier phase observations obtained by observing each positioning satellite through the target reference station; and
acquiring second satellite observation data observed by a satellite positioning device on the object to be positioned, and taking the ephemeris information, the first satellite observation data, and the second satellite observation data as the acquired satellite observation data matched with the object to be positioned.
Optionally, the obtaining feature point coordinate data associated with the object to be positioned includes:
acquiring image data collected by an image acquisition device on the object to be positioned, denoising the image data by Wiener filtering, and performing distortion removal on the image data based on intrinsic parameters of the image acquisition device, to obtain processed image data;
performing frame extraction on the processed image data to obtain a first image frame collected at the current moment and a second image frame collected at a positioning moment subsequent to the current moment; and
extracting feature points from the first image frame and the second image frame respectively using an image feature point extraction algorithm, screening out target feature points successfully matched between the first image frame and the second image frame using a feature point matching algorithm, and taking the coordinate data of the target feature points in the first image frame and the second image frame respectively as the acquired feature point coordinate data matched with the object to be positioned.
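By way of illustration, the feature point matching step can be sketched with a ratio-test matcher over toy descriptors; a real embodiment would extract descriptors with an image feature point extraction algorithm such as ORB or SIFT, and the plain number pairs below are illustrative stand-ins:

```python
def match_features(desc1, desc2, ratio=0.8):
    """Lowe-style ratio-test matching over descriptor lists: a match is kept
    only if its nearest neighbor is clearly closer than the second-nearest,
    which screens out ambiguous (likely wrong) matches."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    matches = []
    for i, d1 in enumerate(desc1):
        ranked = sorted(range(len(desc2)), key=lambda j: dist(d1, desc2[j]))
        best, second = ranked[0], ranked[1]
        if dist(d1, desc2[best]) < ratio * dist(d1, desc2[second]):
            matches.append((i, best))   # indices of matched target feature points
    return matches

frame1 = [(0.0, 0.0), (5.0, 5.0), (9.0, 1.0)]   # toy descriptors, frame 1
frame2 = [(0.1, 0.1), (5.1, 4.9), (9.2, 1.1)]   # toy descriptors, frame 2
matches = match_features(frame1, frame2)
```

The coordinate pairs of the surviving matches in both frames are what the method feeds forward as feature point coordinate data.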
Optionally, the updating the initial positioning information based on the speed measurement data to obtain intermediate positioning information includes:
determining the attitude information of the object to be positioned at the current moment based on the angular velocity measurement data in the velocity measurement data obtained at the current moment, the attitude information determined at the positioning moment before the current moment, and the time interval between the current moment and the previous positioning moment;
determining the speed information of the current moment based on the acceleration measurement data in the speed measurement data obtained at the current moment, the speed information of the previous positioning moment, the gravity value information corresponding to the current moment and the time interval, and determining the position information of the current moment based on the speed information of the current moment, the speed information of the previous positioning moment, the time interval and the position information of the previous positioning moment;
updating corresponding parameters in the parameter matrix based on the determined attitude information, speed information and position information corresponding to the current time to obtain an intermediate parameter matrix, and taking the attitude information and the position information corresponding to the current time as intermediate positioning information of the current time.
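A minimal sketch of the update step described above, reduced to two dimensions for brevity; the state layout, the trapezoidal integration of velocity and all numerical values are illustrative assumptions, not a definitive implementation of the embodiment:

```python
import numpy as np

def propagate(state, gyro_z, accel_body, g, dt):
    """state = (heading, velocity[2], position[2]); all quantities hypothetical."""
    heading, v_prev, p_prev = state
    # attitude: previous attitude + angular rate * time interval
    heading_new = heading + gyro_z * dt
    # rotate body-frame acceleration into the navigation frame
    c, s = np.cos(heading_new), np.sin(heading_new)
    R = np.array([[c, -s], [s, c]])
    a_nav = R @ accel_body + g          # gravity compensation term
    # velocity: previous velocity + navigation-frame acceleration * time interval
    v_new = v_prev + a_nav * dt
    # position: previous position + mean velocity * time interval
    p_new = p_prev + 0.5 * (v_prev + v_new) * dt
    return heading_new, v_new, p_new

state = (0.0, np.zeros(2), np.zeros(2))
g = np.zeros(2)                          # gravity already removed, for brevity
state = propagate(state, gyro_z=0.0, accel_body=np.array([1.0, 0.0]), g=g, dt=0.1)
print(state[2])                          # position after one 0.1 s step
```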
Optionally, the method further includes:
constructing a state transition matrix corresponding to a Kalman filtering algorithm based on acceleration measurement data and attitude information in the speed measurement data at the current moment;
and updating an initial parameter covariance matrix according to the state transition matrix and an error matrix determined by the attribute information of the inertial sensor to obtain an intermediate parameter covariance matrix, wherein the initial parameter covariance matrix is the parameter covariance matrix obtained after the last positioning is completed.
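The covariance update described above follows the standard Kalman prediction P' = F P Fᵀ + Q; the sketch below assumes illustrative values for the state transition matrix F and the sensor-derived error matrix Q:

```python
import numpy as np

def predict_covariance(P, F, Q):
    """Intermediate parameter covariance from state transition F and error matrix Q."""
    return F @ P @ F.T + Q

P0 = np.eye(2) * 0.1                  # parameter covariance after the last positioning
F = np.array([[1.0, 0.1],             # e.g. position-velocity coupling over dt = 0.1 s
              [0.0, 1.0]])
Q = np.eye(2) * 0.01                  # process error derived from sensor attributes
P1 = predict_covariance(P0, F, Q)
print(P1)
```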
In a second aspect, a positioning device is provided, which includes:
the device comprises an obtaining unit, a processing unit and a presentation unit, wherein the obtaining unit is configured to obtain, in response to a positioning request initiated on an operable page for an object to be positioned, initial positioning information of the object to be positioned, and the initial positioning information at least comprises pose information of the object to be positioned;
the processing unit is used for carrying out positioning correction processing on the initial positioning information based on the acquired satellite observation data, the acquired speed measurement data and the acquired associated feature point coordinate data which are matched with the object to be positioned, so as to obtain target positioning information of the object to be positioned, wherein the feature point coordinate data are obtained after feature point matching is carried out on the associated image frame; wherein the positioning correction process includes: updating the initial positioning information based on the speed measurement data to obtain intermediate positioning information, and performing iterative correction on the intermediate positioning information by respectively adopting the satellite observation data and the feature point coordinate data to obtain the target positioning information;
and the presentation unit is used for presenting the target positioning information corresponding to the object to be positioned on the operable page.
Optionally, when updating the initial positioning information based on the speed measurement data to obtain intermediate positioning information, and iteratively correcting the intermediate positioning information by respectively adopting the satellite observation data and the feature point coordinate data to obtain the target positioning information, the processing unit is configured to:
acquiring satellite observation data matched with the object to be positioned, acquiring speed measurement data acquired by an inertial sensor, and determining feature point coordinate data based on image data acquired by image acquisition equipment;
establishing a parameter matrix associated with the object to be positioned based on the initial positioning information of the object to be positioned, the speed measurement data, the zero offset information of the inertial sensor and the carrier phase double-difference ambiguity parameter;
updating initial positioning information of the object to be positioned based on the speed measurement data to obtain intermediate positioning information, and obtaining an intermediate parameter matrix and an intermediate parameter covariance matrix obtained after updating the parameter matrix;
and respectively adopting the satellite observation data and the feature point coordinate data to carry out iterative correction on the intermediate positioning information to obtain target positioning information of the object to be positioned, and obtaining a second parameter matrix obtained after correcting the intermediate parameter matrix and a second parameter covariance matrix obtained after correcting the intermediate parameter covariance matrix.
Optionally, when the intermediate positioning information is iteratively corrected by respectively using satellite observation data and the feature point coordinate data, the processing unit is configured to:
establishing a real-time dynamic RTK differential constraint relation based on the satellite observation data, establishing a Kalman correction equation based on the RTK differential constraint relation, the intermediate parameter covariance matrix and a first increment matrix for correcting the intermediate parameter matrix, obtaining a first parameter matrix obtained after correcting the intermediate parameter matrix and a first correction result of the intermediate positioning information, and obtaining a first parameter covariance matrix obtained after correcting the intermediate parameter covariance matrix;
and determining pose increment information of the object to be positioned based on the feature point coordinate data, establishing a Kalman correction equation based on the pose increment information, the first parameter covariance matrix and a second increment matrix used for correcting the first parameter matrix, obtaining a second parameter matrix obtained after correcting the first parameter matrix and a second correction result of the intermediate positioning information, and obtaining a second parameter covariance matrix obtained after correcting the first parameter covariance matrix.
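The two-stage correction described above can be sketched with a generic Kalman measurement update applied twice in sequence, first with an RTK-style observation and then with a vision-style observation; the observation models, noise values and measurements are illustrative assumptions:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Generic correction: x <- x + K(z - Hx), P <- (I - KH)P."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # gain (the role of the increment matrix)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

x = np.zeros(2)
P = np.eye(2)
H = np.eye(2)
# first correction with the RTK-style observation, second with the vision-style one
x, P = kalman_update(x, P, z=np.array([1.0, 0.0]), H=H, R=0.1 * np.eye(2))
x, P = kalman_update(x, P, z=np.array([1.0, 0.2]), H=H, R=0.2 * np.eye(2))
print(x)
```

The second update starts from the first corrected state and covariance, mirroring how the second parameter matrix and second parameter covariance matrix are obtained from the first.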
Optionally, when the intermediate positioning information is iteratively corrected by respectively using satellite observation data and the feature point coordinate data, the processing unit is configured to:
determining pose increment information of the object to be positioned based on the feature point coordinate data, establishing a Kalman correction equation based on the pose increment information, the intermediate parameter covariance matrix and a first increment matrix for correcting the intermediate parameter matrix, obtaining a first parameter matrix obtained after correcting the intermediate parameter matrix and a first correction result of the intermediate positioning information, and obtaining a first parameter covariance matrix obtained after correcting the intermediate parameter covariance matrix;
and establishing a real-time dynamic RTK differential constraint relation based on the satellite observation data, establishing a Kalman correction equation based on the RTK differential constraint relation, the first parameter covariance matrix and a second increment matrix for correcting the first parameter matrix, obtaining a second parameter matrix obtained after correcting the first parameter matrix and a second correction result of the intermediate positioning information, and obtaining a second parameter covariance matrix obtained after correcting the first parameter covariance matrix.
Optionally, when the real-time dynamic RTK differential constraint relationship is established based on the satellite observation data, the processing unit is configured to:
according to the satellite observation data and the intermediate positioning information of the object to be positioned, a residual error matrix comprising a pseudo-range double-difference residual error matrix and a carrier phase double-difference residual error matrix is established and used as an established RTK differential constraint relation;
the determining the pose increment information of the object to be positioned based on the feature point coordinate data comprises the following steps:
and determining the attitude increment information and the position increment information of the object to be positioned by adopting a random sample consensus (RANSAC) algorithm and a normal distributions transform (NDT) algorithm based on the feature point coordinate data and the calibration parameters of the image acquisition device.
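The outlier-robust estimation of the increment information can be sketched with a minimal random sample consensus loop; the example below estimates only a 2-D translation increment between matched feature points (a deliberate simplification of the pose estimation in the embodiment), and all point coordinates are assumptions:

```python
import random

def ransac_translation(pts1, pts2, iters=50, tol=0.1, seed=0):
    """Estimate the translation between matched point pairs, rejecting mismatches."""
    rng = random.Random(seed)
    best, best_inliers = (0.0, 0.0), -1
    for _ in range(iters):
        # hypothesis from one randomly sampled pair
        i = rng.randrange(len(pts1))
        dx = pts2[i][0] - pts1[i][0]
        dy = pts2[i][1] - pts1[i][1]
        # count the pairs consistent with this hypothesis
        inliers = sum(
            1 for (x1, y1), (x2, y2) in zip(pts1, pts2)
            if abs((x2 - x1) - dx) < tol and abs((y2 - y1) - dy) < tol
        )
        if inliers > best_inliers:
            best, best_inliers = (dx, dy), inliers
    return best

pts1 = [(0, 0), (1, 0), (0, 1), (5, 5)]
pts2 = [(2, 3), (3, 3), (2, 4), (0, 0)]   # last pair is a mismatch
print(ransac_translation(pts1, pts2))      # (2, 3), the mismatch is ignored
```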
Optionally, when a residual matrix including a pseudorange double-difference residual matrix and a carrier phase double-difference residual matrix is established according to the satellite observation data and the intermediate positioning information of the object to be positioned, the processing unit is configured to:
determining position information of each positioning satellite according to the satellite observation data, acquiring position information of a target reference station, respectively determining a first geometric distance between each positioning satellite and the target reference station based on the position information of each positioning satellite, the position information of the target reference station and the intermediate positioning information of the object to be positioned, respectively determining a second geometric distance between each positioning satellite and the object to be positioned, and determining a pseudo-range double-difference estimation value and a carrier phase double-difference estimation value based on the first geometric distance and the second geometric distance;
respectively determining pseudo-range double-difference observed values between a reference satellite and other positioning satellites observed by the target reference station and the object to be positioned based on pseudo-range information in the satellite observation data, and respectively determining carrier-phase double-difference observed values between the reference satellite and other positioning satellites observed by the target reference station and the object to be positioned according to carrier-phase observed values in the satellite observation data;
establishing a pseudo-range double-difference residual matrix based on the difference value between the pseudo-range double-difference observed value and the pseudo-range double-difference estimated value, and establishing a carrier phase double-difference residual matrix based on the difference value between the carrier phase double-difference observed value and the carrier phase double-difference estimated value;
and establishing a residual error matrix comprising the pseudo-range double-difference residual error matrix and the carrier phase double-difference residual error matrix.
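The residual construction described above can be sketched for a single satellite pair; the pseudo-range values, geometric distances and satellite identifiers below are illustrative assumptions:

```python
def single_diff(rover_obs, base_obs):
    """Difference of the same satellite observed at the rover and the base station."""
    return rover_obs - base_obs

def double_diff(rover, base, sat, ref_sat):
    """Double difference of satellite sat with respect to the reference satellite."""
    return single_diff(rover[sat], base[sat]) - single_diff(rover[ref_sat], base[ref_sat])

# observed pseudo-ranges and geometric distances in metres, illustrative only
rover_pr  = {"G01": 20000010.0, "G02": 21000025.0}
base_pr   = {"G01": 20000000.0, "G02": 21000000.0}
rover_geo = {"G01": 20000008.0, "G02": 21000020.0}
base_geo  = {"G01": 20000000.0, "G02": 21000000.0}

observed  = double_diff(rover_pr, base_pr, "G02", "G01")    # from pseudo-range info
estimated = double_diff(rover_geo, base_geo, "G02", "G01")  # from geometric distances
residual  = observed - estimated    # one entry of the pseudo-range DD residual matrix
print(residual)
```

Stacking such residuals for every non-reference satellite (and analogously for the carrier phase observations) yields the residual matrix that serves as the RTK differential constraint relation.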
Optionally, when the initial positioning information of the object to be positioned is obtained, the obtaining unit is configured to:
if it is determined that the object to be positioned is positioned for the first time, acquiring first network positioning information of the processing device, determining the first network positioning information as initial position information of the object to be positioned, determining initial attitude information of the object to be positioned according to the deviation of the inertial coordinate system corresponding to the inertial sensor relative to the terrestrial coordinate system, and taking the initial position information and the initial attitude information as the initial positioning information of the object to be positioned, wherein the processing device and the inertial sensor are mounted on the object to be positioned;
if it is determined that the object to be positioned is not positioned for the first time, obtaining the historical target positioning information obtained when the object to be positioned was positioned last time, and determining the historical target positioning information as the initial positioning information of the object to be positioned at the current moment.
Optionally, when acquiring satellite observation data matched with the object to be positioned, the processing unit is configured to:
acquiring second network positioning information of the processing equipment, and sending a data acquisition request to a satellite data server based on the second network positioning information so that the satellite data server determines a target reference station corresponding to the object to be positioned based on the second network positioning information;
receiving, from the satellite data server, ephemeris information and a first type of satellite observation data transmitted by the target reference station, where the first type of satellite observation data at least includes: pseudo-range information and a carrier phase observation value obtained by observing each positioning satellite through the target reference station;
and acquiring a second type of satellite observation data observed by the satellite positioning device on the object to be positioned, and taking the ephemeris information, the first type of satellite observation data and the second type of satellite observation data as the acquired satellite observation data matched with the object to be positioned.
Optionally, when obtaining feature point coordinate data associated with the object to be positioned, the processing unit is configured to:
acquiring image data collected by the image acquisition device on the object to be positioned, denoising the image data by means of Wiener filtering, and performing de-distortion processing on the image data based on the internal parameters of the image acquisition device to obtain processed image data;
performing framing processing on the processed image data to obtain a first image frame acquired at the current moment and obtain a second image frame acquired at a positioning moment after the current moment;
extracting feature points included in the first image frame and the second image frame respectively by adopting an image feature point extraction algorithm, screening target feature points successfully matched in the first image frame and the second image frame by adopting a feature point matching algorithm, and taking coordinate data of the target feature points in the first image frame and the second image frame respectively as the acquired feature point coordinate data matched with the object to be positioned.
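The Wiener-filtering denoising step can be sketched with a local-statistics filter of the kind commonly used for images; the window size, the noise estimate and the test image are illustrative assumptions, not the embodiment's implementation:

```python
import numpy as np

def wiener_denoise(img, win=3):
    """Local-statistics Wiener filter: blend each pixel toward its local mean."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    # local mean and variance over a win x win neighbourhood
    windows = np.lib.stride_tricks.sliding_window_view(padded, (win, win))
    mean = windows.mean(axis=(-2, -1))
    var = windows.var(axis=(-2, -1))
    noise = var.mean()                       # noise power estimated from the image itself
    gain = np.maximum(var - noise, 0.0) / np.maximum(var, noise)
    return mean + gain * (img - mean)

rng = np.random.default_rng(0)
clean = np.ones((8, 8)) * 100.0
noisy = clean + rng.normal(0.0, 5.0, clean.shape)
denoised = wiener_denoise(noisy)
# the filtered image lies closer to the clean image than the noisy input does
print(np.abs(denoised - clean).mean() < np.abs(noisy - clean).mean())
```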
Optionally, when updating the initial positioning information based on the speed measurement data to obtain intermediate positioning information, the processing unit is configured to:
determining the attitude information of the object to be positioned at the current moment based on the angular velocity measurement data in the velocity measurement data obtained at the current moment, the attitude information determined at the positioning moment before the current moment, and the time interval between the current moment and the previous positioning moment;
determining the speed information of the current moment based on the acceleration measurement data in the speed measurement data obtained at the current moment, the speed information of the previous positioning moment, the gravity value information corresponding to the current moment and the time interval, and determining the position information of the current moment based on the speed information of the current moment, the speed information of the previous positioning moment, the time interval and the position information of the previous positioning moment;
updating corresponding parameters in the parameter matrix based on the determined attitude information, speed information and position information corresponding to the current time to obtain an intermediate parameter matrix, and taking the attitude information and the position information corresponding to the current time as intermediate positioning information of the current time.
Optionally, the processing unit is further configured to:
constructing a state transition matrix corresponding to a Kalman filtering algorithm based on acceleration measurement data and attitude information in the speed measurement data at the current moment;
and updating an initial parameter covariance matrix according to the state transition matrix and an error matrix determined by the attribute information of the inertial sensor to obtain an intermediate parameter covariance matrix, wherein the initial parameter covariance matrix is the parameter covariance matrix obtained after the last positioning is completed.
In a third aspect, an electronic device is proposed, which comprises a processor and a memory, wherein the memory stores program code, which, when executed by the processor, causes the processor to perform the steps of any of the above-mentioned methods of the first aspect.
In a fourth aspect, a computer-readable storage medium is proposed, which comprises program code for causing an electronic device to perform the steps of the method of any of the above first aspects, when the program code runs on the electronic device.
The beneficial effects of the present application are as follows:
the embodiment of the application provides a positioning method, a positioning device, electronic equipment and a storage medium. In the embodiment of the application, in response to a positioning request initiated on an operable page and aiming at an object to be positioned, initial positioning information of the object to be positioned is obtained, wherein the initial positioning information at least comprises pose information of the object to be positioned, and positioning correction processing is carried out on the initial positioning information based on acquired satellite observation data matched with the object to be positioned, speed measurement data and associated feature point coordinate data, so that target positioning information of the object to be positioned is obtained, and the feature point coordinate data is obtained after feature point matching is carried out on an associated image frame; wherein the positioning correction process includes: updating initial positioning information based on the speed measurement data to obtain intermediate positioning information, performing iterative correction on the intermediate positioning information by respectively adopting satellite observation data and feature point coordinate data to obtain target positioning information, and then presenting the target positioning information corresponding to the object to be positioned on an operable page.
Therefore, the object to be positioned is positioned by fusing three types of data, namely the satellite observation data, the speed measurement data and the feature point coordinate data, so that excessive dependence on satellite data is avoided and the positioning accuracy and robustness are improved. Even when the satellite signal is weak, effective positioning can still be achieved with the aid of the speed measurement data and the feature point coordinate data, so that high-precision positioning can be realized on complex road conditions or in tunnels, optimizing the user experience.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic diagram illustrating inaccurate positioning of an object to be positioned according to an embodiment of the present application;
FIG. 2a is a schematic diagram of an application scenario in an embodiment of the present application;
FIG. 2b is a schematic view of an operable interface in a scene where an object to be positioned is positioned according to an embodiment of the present application;
FIG. 3a is a schematic flow chart of a positioning process in an embodiment of the present application;
FIG. 3b is a schematic diagram of a positioning correction process in an embodiment of the present application;
FIG. 4a is a schematic flow chart illustrating the determination of coordinate data of feature points in the embodiment of the present application;
FIG. 4b is a schematic diagram of image frames before and after processing in an embodiment of the present application;
FIG. 4c is a schematic diagram of an image coordinate system established in an embodiment of the present application;
FIG. 4d is a diagram illustrating matched feature points in an embodiment of the present application;
FIG. 5a is a flowchart of an algorithm for vehicle localization in an embodiment of the present application;
FIG. 5b is a block diagram of a positioning system including portions according to an embodiment of the present application;
FIG. 5c is a schematic flow chart of a positioning process in an embodiment of the present application;
FIG. 6 is a schematic logical structure diagram of a positioning apparatus according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a hardware composition of an electronic device to which an embodiment of the present application is applied;
FIG. 8 is a schematic structural diagram of a computing device in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments, but not all embodiments, of the technical solutions of the present application. All other embodiments obtained by a person skilled in the art without any inventive step based on the embodiments described in the present application are within the scope of the protection of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, such that the embodiments of the application described herein can be implemented in sequences other than those illustrated or described herein.
Some terms in the embodiments of the present application are explained below to facilitate understanding by those skilled in the art.
An inertial sensor: a sensor capable of detecting and measuring acceleration, inclination, impact, vibration, rotation and multi-degree-of-freedom (DoF) motion, and an important component for navigation, orientation and motion control of a carrier. Most inertial sensors currently configured in processing devices are Micro-Electro-Mechanical System (MEMS) inertial sensors.
Real-time dynamic differential positioning: a Real-time kinematic (RTK) differential positioning technology is also called a carrier phase differential positioning technology, and the RTK differential positioning technology is a Real-time kinematic positioning technology that completes positioning based on a carrier phase observation value, and can provide a three-dimensional positioning result of a station under test in a specified coordinate system in Real time and achieve centimeter-level precision; in the RTK positioning mode, the base station transmits the observed value and the coordinate information of the survey station to the rover station through the data chain, the rover station receives data from the base station through the data chain, acquires satellite observation data and performs real-time positioning processing based on the acquired data.
Global satellite navigation system: the Global Navigation Satellite System, also known as GNSS, is a space-based radio Navigation positioning System capable of providing users with all-weather three-dimensional coordinates, velocity, and time information at any location on the surface of the earth or in near-earth space. Common systems are: global Positioning System (GPS), BeiDou Navigation Satellite System (BDS), glonass Satellite Navigation System (Global Navigation SATELLITE SYSTEM, GLONASS) and GALILEO Satellite Navigation System (GALILEO).
A visual sensor: the imaging principle is to map a three-dimensional point in a real three-dimensional space to an imaging plane in a two-dimensional space, and specifically, the process can be described by using a small pinhole imaging model.
Feature point: a point in an image at which the gray value changes sharply, or a point with large curvature on an image edge (i.e., the intersection of two edges). Feature points can reflect the essential characteristics of an image and identify a target object in the image, and image matching can be completed through feature point matching. A feature point mainly consists of two parts: a keypoint and a descriptor.
Keypoint: the position of a feature point in the image; some feature points also carry information such as direction and scale.
Descriptor: usually a vector that describes, in a manner designed according to actual requirements, the relationship between a keypoint and its surrounding pixels; features with similar appearance usually have similar descriptors. Therefore, during matching, if the distances (Mahalanobis distance, Hamming distance, etc.) of two feature point descriptors in the vector space are close, the two can be regarded as the same feature point.
Feature point matching: specifically, the descriptors of two feature points located in different image frames are compared, and if the distances (e.g., Mahalanobis distance) of the two descriptors in the vector space are determined to be close, the two are regarded as the same matched feature point.
Satellite data server: the method can acquire satellite observation data observed by each reference station in a reference station network based on a set data transmission system, can receive a registration request of a processing device, responds to a data acquisition request sent by the registered processing device, broadcasts ephemeris information to the registered processing device, and provides satellite observation data of a target reference station to the registered processing device.
A reference station: the satellite navigation signal is continuously observed for a long time, and the communication facilities transmit the observed data to the ground fixed observation station of the satellite data server in real time or at regular time.
Kalman filtering: the method is an algorithm for carrying out optimal estimation on the system state by using a linear system state equation and inputting and outputting observation data through a system. The method and the device for calculating the covariance matrix of the observation data are used for establishing a conversion relation between the observation data increment and the corrected estimation data increment in a certain time period and establishing a conversion relation between the corrected parameter covariance matrix and the predicted parameter covariance matrix.
Embodiments of the present application relate to Artificial Intelligence (AI) and machine learning techniques, and are designed based on computer vision techniques and Machine Learning (ML) in the AI.
Artificial intelligence is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the research of the design principle and the realization method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
Artificial intelligence is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level technologies. The basic technologies of artificial intelligence generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics; software technologies for artificial intelligence mainly include computer vision, natural language processing, and machine learning/deep learning. With the development and progress of artificial intelligence, it has been researched and applied in many fields, such as smart homes, smart customer service, virtual assistants, smart speakers, smart marketing, unmanned driving, automatic driving, robots, smart medical care and the like.
Automatic driving technology generally comprises high-precision maps, environment perception, behavior decision, path planning, motion control and other technologies, and has broad application prospects. Accurate positioning is the basis for realizing automatic driving, and is essential for behavior decision, path planning, motion control and the construction of high-precision maps.
It should be noted that the positioning method provided in the present application may be applied to scenarios including, but not limited to, maps, navigation, automatic driving, internet of vehicles, and vehicle-road coordination.
The following briefly introduces the design concept of the embodiments of the present application:
in the related art, when an object to be positioned is positioned, the positioning is realized through the combined action of an inertial navigation system and a GNSS system, so the positioning accuracy is limited by the GNSS signal quality. In complex scenes such as complex urban road conditions and tunnels, the GNSS signal quality drops greatly, accurate positioning of the object to be positioned cannot be guaranteed, and in the process of continuous positioning, position drift, discontinuous positioning and position jumps of the object to be positioned easily occur.
Referring to fig. 1, which is a schematic diagram illustrating inaccurate positioning of an object to be positioned according to an embodiment of the present application, fig. 1 illustrates a situation of position drift due to poor GNSS signal quality, and based on the schematic content in fig. 1, it is obvious that there are problems of position jump and discontinuous positioning in a positioning result.
In view of this, in the embodiment of the present application, the processing device positions an object to be positioned by fusing three types of data, namely, satellite observation data, velocity measurement data, and feature point coordinate data, so that excessive dependence on satellite data is avoided, and positioning accuracy and robustness are improved.
The preferred embodiments of the present application will be described in conjunction with the drawings of the specification, it should be understood that the preferred embodiments described herein are for purposes of illustration and explanation only and are not intended to limit the present application, and features of the embodiments and examples of the present application may be combined with each other without conflict.
Fig. 2a is a schematic view of an application scenario in the embodiment of the present application. The schematic diagram of the application scenario includes a satellite data server 210, a processing device 221 located on an object to be positioned 220, an image acquisition device 222 located on the object to be positioned 220, an inertial sensor 223 located on the object to be positioned 220, and a satellite positioning device 224 located on the object to be positioned 220, and the application operation interface 2210 can be logged in or opened through the processing device 221. The processing device 221 and the satellite data server 210 may communicate with each other through a communication network, where the communication network may be any network capable of establishing a communication connection, such as the fourth generation mobile communication technology (4G), the fifth generation mobile communication technology (5G), or wireless fidelity (Wi-Fi).
In the embodiment of the present application, the processing device 221 is an electronic device installed on the object to be positioned, and the electronic device may be a personal computer, a vehicle-mounted terminal, a tablet computer, a notebook computer, or the like. The processing device 221 receives image data acquired by the image acquisition device 222 on the object to be positioned 220, receives velocity measurement data acquired by the inertial sensor 223, and receives satellite observation data observed by the satellite positioning device 224. Data transmission between the processing device 221 and the image acquisition device 222, between the processing device 221 and the inertial sensor 223, and between the processing device 221 and the satellite positioning device 224 may be performed over a wired or a wireless connection, and the embodiments of the present application are not specifically limited herein.
The operation interface 2210 corresponds to different application scenarios in the embodiment of the present application, and may include different contents, and in some possible scenarios in the embodiment of the present application, as shown in fig. 2b, which is an operable interface schematic diagram in a scenario in which an object to be positioned is positioned in the embodiment of the present application, when a "positioning" indication is triggered in the operable interface, the current position information is automatically positioned, and the current positioning information is displayed on the operable interface, or, when a destination address is searched in a search box to hope to perform navigation positioning, a changed position may be positioned in real time while a route is automatically planned, until the navigation positioning is finished.
The satellite data server 210 may be an independent physical server, or may be a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud storage, cloud functions, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), big data, and artificial intelligence platforms.
In the embodiment of the present application, when a positioning operation instruction for the object to be positioned 220 is initiated in the operation interface 2210 on the processing device 221, the processing device 221 receives the broadcast real-time ephemeris information and the satellite observation data of the target reference station from the satellite data server 210, obtains the acquired image frames from the image acquisition device 222, obtains the acquired velocity measurement data from the inertial sensor 223, and obtains the observed satellite observation data from the satellite positioning device 224. The position information of the object to be positioned 220 is then fused and calculated based on the obtained types of data.
The following describes a positioning process in the embodiment of the present application with reference to the drawings, where the positioning process in the embodiment of the present application can be applied to the processing device 221 shown in fig. 2a, and the specific positioning process is as follows:
referring to fig. 3a, which is a schematic flow chart of a positioning process in an embodiment of the present application, the following detailed description is made with reference to fig. 3 a:
step 301: the processing equipment responds to a positioning request initiated on an operable page and aiming at an object to be positioned, and obtains initial positioning information of the object to be positioned, wherein the initial positioning information at least comprises pose information of the object to be positioned.
When the processing device determines that a component for initiating a positioning request is triggered on the operable page, it determines that a positioning request for an object to be positioned has been received, and then obtains initial positioning information of the object, which at least includes pose information of the object. Pose information is a collective term for position information and attitude information. Depending on the time at which the positioning request is initiated, the initial positioning information obtained by the processing device falls into the following two cases:
In case one, the processing device determines that the object to be positioned is being positioned for the first time.
Specifically, if the processing device determines that this is the first positioning of the object to be positioned, it acquires the first network positioning information of the processing device and determines it as the initial position information of the object to be positioned; it determines the initial attitude information of the object according to the deviation of the inertial coordinate system corresponding to the inertial sensor relative to the terrestrial coordinate system; and it takes the initial position information and the initial attitude information together as the initial positioning information of the object to be positioned. Both the processing device and the inertial sensor are installed on the object to be positioned.
It should be noted that, after the processing device is connected to the network, the location information determined by the background database according to the access location of the processing device in the network is acquired as the first network location information, where the first network location information can only represent the approximate location of the processing device and has a large location error.
In the embodiment of the application, because the processing device is installed on the object to be positioned, the network positioning information determined according to the condition that the processing device is accessed to the network can be regarded as the network positioning information of the object to be positioned.
In this embodiment of the present application, optionally, when the object to be positioned has a network connection function, the processing device may also directly obtain network positioning information determined for the object to be positioned itself, as the first network positioning information.
In case two, the processing device determines that this is not the first positioning of the object to be positioned.
And if the processing equipment determines that the object to be positioned is not primarily positioned, acquiring historical target positioning information obtained when the object to be positioned is positioned last time, and determining the historical target positioning information as the initial positioning information of the object to be positioned at the current moment.
Specifically, after the processing device determines that the object to be positioned has been previously subjected to positioning correction processing, based on the currently obtained positioning request, historical target positioning information obtained when the object to be positioned is positioned last time is obtained and is used as initial positioning information of the object to be positioned at present.
For example, assume that after receiving a positioning request for an object to be positioned, the processing device determines from the history record that the end time of the last positioning of the object was 13:26:27 on a certain date; the processing device then obtains the historical target positioning information recorded at 13:26:27 and uses it as the initial positioning information of the object to be positioned.
In this way, the initial positioning information of the object to be positioned is either its network positioning information or the position information recorded at the end of the last positioning. Given the wide application of current positioning technology, and the fact that the object may be moving each time it is positioned, determining the initial positioning information in this manner reduces the difficulty of positioning to a certain extent.
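The two cases above (first positioning vs. reuse of the last positioning result) can be sketched as follows. This is an illustrative sketch only; the `Pose` type, field names, and function signature are hypothetical, not the patent's actual implementation.

```python
# Sketch of the initial-positioning decision: case one uses the coarse
# network fix plus an IMU-derived attitude; case two reuses the historical
# target positioning information. All names here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pose:
    position: tuple   # e.g. (lat, lon, alt) or ECEF (x, y, z)
    attitude: tuple   # e.g. Euler angles (roll, pitch, yaw)

def initial_pose(last_fix: Optional[Pose], network_fix: tuple,
                 imu_attitude: tuple) -> Pose:
    """Return the initial positioning information for a new request."""
    if last_fix is not None:
        # Case two: previously positioned, reuse the last target result.
        return last_fix
    # Case one: first positioning, combine network fix with IMU attitude.
    return Pose(position=network_fix, attitude=imu_attitude)
```

A caller would invoke `initial_pose(None, …)` on the very first request and pass the stored last fix thereafter.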
Step 302: the processing equipment carries out positioning correction processing on the initial positioning information based on the acquired satellite observation data, the speed measurement data and the associated feature point coordinate data which are matched with the object to be positioned, and obtains target positioning information of the object to be positioned, wherein the feature point coordinate data are obtained after feature point matching is carried out on the associated image frame.
Wherein the positioning correction process includes: updating the initial positioning information based on the speed measurement data to obtain intermediate positioning information, and performing iterative correction on the intermediate positioning information by respectively adopting satellite observation data and characteristic point coordinate data to obtain target positioning information.
Referring to fig. 3b, which is a schematic diagram of the positioning correction processing process in the embodiment of the present application, the positioning correction processing process is described in detail below with reference to fig. 3b:
step 3021: the processing equipment acquires satellite observation data matched with an object to be positioned, acquires speed measurement data acquired by the inertial sensor, and determines characteristic point coordinate data based on image data acquired by the image acquisition equipment.
When acquiring the satellite observation data matched with the object to be positioned, the processing device obtains its second network positioning information and sends a data acquisition request to the satellite data server based on the second network positioning information, so that the satellite data server determines the target reference station corresponding to the object to be positioned based on the second network positioning information. The processing device then receives, through the satellite data server, the ephemeris information and the first-type satellite observation data sent by the target reference station, where the first-type satellite observation data at least includes the pseudo-range information and carrier phase observation values obtained after each positioning satellite is observed by the target reference station. The processing device then obtains the second-type satellite observation data observed by the satellite positioning device on the object to be positioned, and takes the ephemeris information, the first-type satellite observation data, and the second-type satellite observation data as the acquired satellite observation data matched with the object to be positioned.
Specifically, the second network positioning information is a network positioning result obtained at the current moment, the second network positioning information and the first network positioning information may refer to the same content, for example, both the second network positioning information and the first network positioning information may refer to network positioning information of the processing device, or may refer to different contents, for example, the first network positioning information is network positioning information directly corresponding to an object to be positioned, and the second network positioning information refers to network positioning information of the processing device.
The processing equipment sends a data acquisition request to a satellite data server based on second network positioning information representing the approximate position of the processing equipment by adopting network transmission modes such as 4G, 5G, WIFI and the like, so that the satellite data server determines a target reference station corresponding to an object to be positioned based on the second network positioning information, and acquires ephemeris information broadcasted by the satellite data server and first type satellite observation data observed by the target reference station, and meanwhile, the processing equipment acquires second type satellite observation data which are installed on the object to be positioned and observed by the satellite positioning equipment.
The satellite data server takes the reference station whose coverage area contains the second network positioning information as the target reference station. The satellite data server can acquire the satellite observation data corresponding to each reference station, as well as the ephemeris parameter tables (i.e., ephemeris information) of the different satellite navigation systems, and then broadcasts the real-time navigation ephemeris to the processing device requesting the data.
In particular, when the processing device positions a continuously moving object to be positioned, if it determines, based on the positioning information obtained, that the object has moved out of the range covered by the current target reference station, it needs to determine a new target reference station based on the current positioning information and obtain the satellite observation data observed by the new target reference station.
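The coverage check that triggers switching to a new target reference station can be sketched with a plain great-circle distance test. The coverage radius and coordinate convention are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch: decide whether the object has left the current
# reference station's coverage, so a new station must be requested.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 lat/lon points."""
    R = 6371000.0  # mean earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def needs_new_station(current_fix, station_pos, coverage_radius_m):
    """True when the current fix lies outside the station's coverage radius."""
    lat, lon = current_fix
    slat, slon = station_pos
    return haversine_m(lat, lon, slat, slon) > coverage_radius_m
```

With a 30 km coverage radius, a fix one degree of latitude (about 111 km) away from the station would trigger a switch.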
When the processing equipment acquires the speed measurement data acquired by the inertial sensor, the processing equipment is connected with the inertial sensor arranged on the object to be positioned in a wired or wireless connection mode, so that the speed measurement data acquired by the inertial sensor in real time is acquired, wherein the speed measurement data comprises angular speed measurement data and acceleration measurement data.
It should be noted that, since the inertial sensor is mounted on the object to be positioned, the speed measurement data acquired by the inertial sensor is the speed measurement data of the object to be positioned.
When determining the feature point coordinate data based on the image data acquired by the image acquisition device, the processing device is first connected with the image acquisition device over a wired or wireless connection to obtain the real-time image data acquired by the image acquisition device. It then performs framing on the denoised and undistorted image data, takes the image frame acquired at the current moment and the image frame acquired at the following positioning moment as the image frames matched with the object to be positioned, determines the feature points matched between the image frames, determines their coordinates, and takes the determined feature point coordinates as the feature point coordinate data associated with the object to be positioned.
Referring to fig. 4a, which is a schematic flowchart of a process for determining coordinate data of a feature point in an embodiment of the present application, a process for determining coordinate data of a feature point associated with an object to be positioned based on obtained image data will be specifically described below with reference to fig. 4 a:
step 1: the processing equipment acquires image data acquired by image acquisition equipment on an object to be positioned, performs denoising processing on the image data by adopting a wiener filtering processing mode, and performs distortion removing processing on the image data based on internal parameters of the image acquisition equipment to obtain processed image data.
The processing device acquires image data acquired by an image acquisition device installed on an object to be positioned, and specifically, the processing device may acquire image data in Red, Green, Blue (RGB) format acquired by the image acquisition device through a Universal Serial Bus (USB), or through a High Definition Multimedia Interface (HDMI) connection line, or through another manner capable of implementing image data transmission.
Furthermore, the processing device performs denoising processing on the image data by using a wiener filtering processing mode aiming at the image data acquired in real time, and simultaneously performs distortion removal processing on the image data according to the internal parameters of the image acquisition device to obtain the image data subjected to denoising and distortion removal processing.
It should be noted that the method of determining the internal parameters of the image acquisition device is a conventional technique in the art, and details of how the internal parameters are obtained are not described herein.
In this way, filtering and denoising the obtained image data reduces its noise, so that the subsequent processing can be carried out effectively on high-quality image data.
Step 2: the processing equipment carries out framing processing on the processed image data, obtains a first image frame acquired at the current moment and obtains a second image frame acquired at a positioning moment after the current moment.
The processing equipment frames the processed image data to obtain each processed image frame, and each image frame is associated with corresponding acquisition time.
For example, referring to fig. 4b, which is a schematic diagram of image frames before and after processing in the embodiment of the present application, assume that the image data acquired by the processing device within 1 s contains n image frames. As illustrated on the left side of fig. 4b, the n image frames I_1, I_2, I_3, …, I_n contain noise points; after the filtering, denoising, and distortion-removal processing, the noise points are largely removed, yielding the processed image data illustrated on the right side of fig. 4b.
To position the object to be positioned at the current moment, the processing device obtains the first image frame acquired at the current moment and the second image frame acquired at a positioning moment after the current moment, and takes the first image frame and the second image frame as the image frames associated with the object to be positioned, where the second image frame may be the image frame that is acquired after, and adjacent to, the first image frame among the continuously acquired image frames.
And step 3: and extracting feature points included in the first image frame and the second image frame respectively by adopting an image feature point extraction algorithm, screening out target feature points successfully matched in the first image frame and the second image frame by adopting a feature point matching algorithm, and taking coordinate data of the target feature points in the first image frame and the second image frame respectively as the acquired feature point coordinate data matched with the object to be positioned.
After the processing device determines the first image frame and the second image frame, an image coordinate system is established based on the image frames to represent the feature point position coordinates.
Referring to fig. 4c, which is a schematic diagram of the image coordinate system established in the embodiment of the present application, taking the obtained image frame I_n as an example, the image coordinate system may be established based on the edge of the image frame.
The processing device adopts an image feature point extraction algorithm to respectively perform feature extraction on the first image frame and the second image frame, and extracts the feature points included in each. The adopted image feature point extraction algorithm may be: the Scale-Invariant Feature Transform (SIFT) algorithm, the Speeded-Up Robust Features (SURF) algorithm, the Oriented FAST and Rotated BRIEF (ORB) algorithm, the Binary Robust Invariant Scalable Keypoints (BRISK) algorithm, the Binary Robust Independent Elementary Features (BRIEF) descriptor algorithm, or the Features from Accelerated Segment Test (FAST) corner detection algorithm.
Then, for each image I_i (i = 1, 2, …, n, where n is the total number of image frames) processed by the feature extraction algorithm, the feature points included in I_i may be expressed in the following form:

F_i = {N_i; (x_1, y_1), d_1; (x_2, y_2), d_2; …}

where N_i represents the total number of feature points in image I_i, (x_1, y_1) are the coordinates of feature point 1, and d_1 is the descriptor of feature point 1.
Further, the processing device screens out each target feature point successfully matched in the first image frame and the second image frame by using a feature point matching algorithm according to the feature points extracted from the first image frame and the second image frame respectively.
Specifically, the processing device may employ the Brute-Force matching algorithm, the k-Nearest Neighbors (KNN) classification algorithm, the Fast Library for Approximate Nearest Neighbors (FLANN)-based matching algorithm, or another matching algorithm. Feature point matching generally requires the following three steps: first, extracting the key points in the image frames, i.e., searching for pixels with certain characteristics in the images; second, calculating the feature point descriptors according to the obtained position information of the key points; and third, matching according to the feature point descriptors.
When matching, the Mahalanobis distance or the Hamming distance between two feature point descriptors can be used as the matching criterion: two feature points whose descriptor distance is below a correspondingly set threshold are determined as matched feature points, where each matched pair consists of the corresponding feature points and their coordinates in the first image frame and the second image frame.
Specifically, fig. 4d is a schematic diagram of matched feature points in the embodiment of the present application; fig. 4d shows the matching diagram obtained after the feature point matching algorithm is applied to the pair I_i and I_{i+1}. After matching is completed, 4 groups of successfully matched feature points are obtained, and each dotted line represents one group of matched feature points.
Therefore, by means of feature extraction and feature matching of the first image frame and the second image frame associated with the object to be positioned, feature points successfully matched in the first image frame and the second image frame can be determined, and then coordinate data of the feature points matched with the object to be positioned can be obtained, so that the pose change condition can be determined based on the matched feature points subsequently, and a basis is provided for subsequent positioning correction operation.
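The matching criterion described above (nearest descriptor under a Hamming-distance threshold, as in brute-force matching of binary descriptors such as ORB's) can be sketched in pure Python. The packed-integer descriptor representation and threshold value are illustrative assumptions, not the patent's implementation.

```python
# Sketch of brute-force feature matching by Hamming distance. Binary
# descriptors are represented here as Python ints (bit-packed); real
# systems use byte arrays (e.g. 256-bit ORB descriptors).

def hamming(d1: int, d2: int) -> int:
    """Hamming distance between two bit-packed binary descriptors."""
    return bin(d1 ^ d2).count("1")

def match_features(desc_a, desc_b, max_dist=10):
    """For each descriptor in frame A, take the nearest descriptor in
    frame B; keep the pair only if its distance is below the threshold."""
    matches = []
    for i, da in enumerate(desc_a):
        j, d = min(((j, hamming(da, db)) for j, db in enumerate(desc_b)),
                   key=lambda t: t[1])
        if d < max_dist:
            matches.append((i, j))
    return matches
```

Each returned pair `(i, j)` corresponds to one dotted line in fig. 4d: the coordinates of feature i in the first frame and feature j in the second frame form one group of feature point coordinate data.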
Step 3022: the processing equipment establishes a parameter matrix associated with the object to be positioned based on initial positioning information of the object to be positioned, speed measurement data, zero offset information of the inertial sensor and a carrier phase double-difference ambiguity parameter.
After the processing device acquires the initial positioning information of the object to be positioned, a related parameter matrix can be established based on the initial positioning information of the object to be positioned, the velocity measurement data acquired by the inertial sensor, the zero offset information of the inertial sensor and the carrier phase double-difference ambiguity parameter.
Specifically, the parameter matrix established by the processing device takes the following form:

x = [θ, v, p, b_g, b_a, N]^T

where x represents the established parameter matrix, θ is the attitude of the object to be positioned, v and p are the velocity information and position information of the object to be positioned in the Earth-Centered Earth-Fixed (ECEF) coordinate system, b_g and b_a are the zero biases of the gyroscope and the accelerometer of the inertial sensor, used to correct the angular-velocity and acceleration measurement deviations of the inertial sensor, and N denotes the carrier phase double-difference ambiguity parameters.
In the embodiment of the present application, the attitude of the object to be positioned is represented by the Euler angles between the three axes of the inertial sensor and the ECEF coordinate system, that is:

θ = [ψ, θ_y, φ]^T

where ψ, θ_y, and φ are the Euler angles by which the coordinate system of the three axes of the inertial sensor is rotated into ECEF about the z axis, the y axis, and the x axis, respectively. The transformation between the coordinate system of the three axes of the inertial sensor and ECEF can then be expressed in matrix form as:

R = R_z(ψ) · R_y(θ_y) · R_x(φ)
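The z-y-x Euler-angle composition just described can be written out numerically as follows. This is an illustrative sketch of the standard composition, not code from the patent; the argument order matches the z, y, x rotation sequence in the text.

```python
# Sketch: rotation matrix from the IMU body axes to ECEF, composed from
# z-, y- and x-axis Euler angles: R = Rz(psi) @ Ry(theta) @ Rx(phi).
import numpy as np

def rot_from_euler(psi, theta, phi):
    cz, sz = np.cos(psi), np.sin(psi)
    cy, sy = np.cos(theta), np.sin(theta)
    cx, sx = np.cos(phi), np.sin(phi)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rz @ Ry @ Rx
```

Zero angles give the identity, and any result is a proper rotation (determinant 1), which is a quick sanity check on the composition order.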
when the initial value of each parameter in the parameter matrix is determined, the pose parameter is determined by adopting the following formula based on the Euler angle between the inertial sensor and the ECEF:
Wherein the content of the first and second substances,representing the posture of an object to be positioned, wherein Log is Log operation of Liqun SO3,is composed ofAn antisymmetric matrix of (a);for the carrier phase double-difference ambiguity parameter, 1 denotes a satellite 1 set as a reference satellite, m denotes the total number of satellites used for positioning, and the reference satellite can be selected according to actual processing requirements. The initial position information in the parameter matrix is determined by the initial positioning information of the object to be positioned, the initial value of the speed information in the parameter matrix can be set to 0, the initial values of other parameters in the parameter matrix can also be set to 0, and the updating and the correction are carried out in the subsequent estimation calculation.
The incremental matrix δx of the parameter matrix x takes the form:

δx = [δθ, δv, δp, δb_g, δb_a, δN]^T
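The layout of the parameter matrix and its increment can be sketched as a flat state vector. The block sizes (3 each for attitude, velocity, position, and the two biases, plus m − 1 ambiguities) follow the text; the concrete numbers are illustrative assumptions.

```python
# Sketch: stack the parameter matrix x = [theta, v, p, b_g, b_a, N]^T.
# Per the text, position comes from the initial positioning information
# and the remaining blocks may start at zero, to be refined later.
import numpy as np

def make_state(att, vel, pos, bg, ba, ddamb):
    """Concatenate the parameter blocks into one flat state vector."""
    return np.concatenate([att, vel, pos, bg, ba, ddamb])

# Illustrative initial values (ECEF metres are made-up numbers).
x0 = make_state(att=np.zeros(3), vel=np.zeros(3),
                pos=np.array([-2764105.0, 4787733.0, 3170352.0]),
                bg=np.zeros(3), ba=np.zeros(3),
                ddamb=np.zeros(4))   # m - 1 ambiguities for m = 5 satellites
dx = np.zeros_like(x0)               # increment delta-x, same layout
```

With m = 5 satellites the state has 3 + 3 + 3 + 3 + 3 + 4 = 19 entries; the increment δx mirrors that layout exactly.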
step 3023: the processing equipment updates the initial positioning information of the object to be positioned based on the speed measurement data to obtain intermediate positioning information, and obtains an intermediate parameter matrix and an intermediate parameter covariance matrix obtained after updating the parameter matrix.
To update the initial positioning information of the object to be positioned based on the velocity measurement data and obtain the intermediate positioning information, the processing device first determines the attitude information of the object at the current moment based on the angular velocity measurement data in the velocity measurement data obtained at the current moment, the attitude information determined at the positioning moment before the current moment, and the time interval between the current moment and the previous positioning moment.
In the following description, taking the current time as t_k and the previous positioning time as t_{k-1} as an example, the process of updating the initial positioning information, the parameter matrix, and the parameter covariance matrix based on the velocity measurement data is described.
The velocity measurement data acquired by the inertial sensor at time t_k, including the angular velocity measurement data and the acceleration measurement data, are as follows:

ω(t_k) = [ω_x, ω_y, ω_z]^T, a(t_k) = [a_x, a_y, a_z]^T

where ω_x, ω_y, and ω_z are the decomposition results of the angular velocity measurement ω(t_k) along the x, y, and z axes of the inertial sensor, and a_x, a_y, and a_z are the decomposition results of the acceleration measurement a(t_k) along the x, y, and z axes of the inertial sensor.
Further, the processing device determines the posture information of the object to be positioned at the current time by using the following formula:
wherein the content of the first and second substances,is the value of the acceleration of the rotation of the earth,in order to update the time interval,is tk-1The time inertial sensor and the coordinate system transformation matrix of the ECEF are used for determining the time of the initial positioning,from a parameter matrixIs determined, andis tkThe coordinate system transformation matrix of the three axes of the moment inertial sensor and the ECEF can be further based onDetermining the value at t in the parameter matrixkAnd updating the attitude information of the object to be positioned.
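The exponential map Exp that propagates the attitude (the inverse of the Log operation mentioned earlier for SO(3)) can be sketched with the Rodrigues formula. The bias subtraction and the omission of the earth-rotation term here are simplifying assumptions for illustration, not the patent's exact mechanization.

```python
# Sketch: SO(3) exponential map (Rodrigues formula) and a simplified
# attitude propagation step using the bias-corrected gyro measurement.
import numpy as np

def so3_exp(w):
    """Map a rotation vector w (radians) to a rotation matrix."""
    angle = np.linalg.norm(w)
    if angle < 1e-12:
        return np.eye(3)
    k = w / angle
    # Antisymmetric (skew) matrix of the unit axis k.
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def update_attitude(R_prev, gyro, bias_g, dt):
    """Propagate the body-to-ECEF rotation over dt (earth rotation omitted)."""
    return R_prev @ so3_exp((gyro - bias_g) * dt)
```

For example, a rotation vector of magnitude π/2 about z maps a vector on the x axis onto the y axis, which checks the sign conventions of the skew matrix.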
Further, based on the acceleration measurement data in the velocity measurement data obtained at the current time t_k, the velocity information at the previous positioning time t_{k-1}, the gravity value information corresponding to t_k, and the time interval, the processing device determines the velocity information and position information at the current time t_k.

Specifically, the processing device first determines the velocity information at t_k based on the acceleration measurement data obtained at t_k, the velocity information at t_{k-1}, the gravity value information corresponding to t_k, and the time interval; it then determines the position information at t_k based on the velocity information at t_k, the velocity information at t_{k-1}, the time interval, and the position information at t_{k-1}.
In specific implementation, the processing device determines t by adopting the following formulakSpeed information at time:
in the formula, g(tk) is the gravity value at time tk in the ECEF coordinate system; v(tk-1) is the velocity information of the object to be positioned at time tk-1; v(tk) is the velocity information of the object to be positioned at time tk, namely the updated velocity information; and a(tk) is the acceleration measurement data of the object to be positioned at time tk.
After obtaining the velocity information at time tk, the processing device determines the position information of the object to be positioned based on the velocity information:
in the formula, r(tk-1) is the position information of the object to be positioned at time tk-1, and r(tk) is the position information of the object to be positioned at time tk, namely the updated position information of the object to be positioned.
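The velocity and position updates just described can be sketched as follows. This is a minimal illustration, not the source's implementation; in particular, the trapezoidal integration for the position is an assumption made because the text states that the position at tk depends on both the velocity at tk and the velocity at tk-1.

```python
# Minimal sketch of the velocity/position update described above.
# Assumption: trapezoidal integration for position, since the text says the
# position at t_k uses both v(t_k) and v(t_{k-1}).

def update_velocity(v_prev, accel, gravity, dt):
    # v(t_k) = v(t_{k-1}) + (a(t_k) + g(t_k)) * dt, applied per ECEF axis
    return [v + (a + g) * dt for v, a, g in zip(v_prev, accel, gravity)]

def update_position(r_prev, v_prev, v_curr, dt):
    # r(t_k) = r(t_{k-1}) + 0.5 * (v(t_{k-1}) + v(t_k)) * dt
    return [r + 0.5 * (vp + vc) * dt
            for r, vp, vc in zip(r_prev, v_prev, v_curr)]

v_new = update_velocity([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, -9.8], 0.1)
r_new = update_position([100.0, 0.0, 0.0], [0.0, 0.0, 0.0], v_new, 0.1)
```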
After obtaining the attitude, velocity and position information of the current time based on the speed measurement data collected at the current time, the processing device updates the corresponding parameters in the parameter matrix based on the determined attitude, velocity and position information corresponding to the current time to obtain an intermediate parameter matrix, and takes the attitude information and position information corresponding to the current time as the intermediate positioning information of the current time.
Further, the processing device constructs a state transition matrix corresponding to a Kalman filtering algorithm based on the acceleration measurement data and attitude information in the current-time speed measurement data, and updates an initial parameter covariance matrix according to the state transition matrix and an error matrix determined by attribute information of the inertial sensor, obtaining an intermediate parameter covariance matrix, wherein the initial parameter covariance matrix is the parameter covariance matrix obtained after the last positioning is completed.
Specifically, when the object to be positioned is positioned for the first time, the initial parameter covariance matrix is a set initial value of the covariance matrix, and may specifically be a preset diagonal matrix of a set dimension.
The parameter covariance matrix of the filtering algorithm is updated using the following formula:
wherein the first term represents the system state transition matrix in the filtering algorithm, and the second is the system noise, which can be directly obtained from the specification of the inertial sensor; the filtering algorithm may specifically be a Kalman filtering algorithm.
The system state transition matrix is obtained by the following formula:
wherein I3×3 is a 3 × 3 identity matrix, and F21 and F23 are variables used in calculating the state transition matrix.
wherein rs(tk) represents the distance between the current position of the object to be positioned and the geocenter, r(tk) represents the position information of the object to be positioned at time tk, and Q(t0, tk) is the system noise, which can be directly obtained from the inertial sensor's product specification, namely:
In the formula, the two spectral density terms are the system noise spectral densities of the accelerometer and the gyroscope, respectively, and can be obtained directly from the specification of the inertial sensor.
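The covariance update just described is the standard Kalman propagation step, which can be sketched as follows (the 3 × 3 dimension is purely illustrative; the real filter state is larger):

```python
import numpy as np

# Sketch of the covariance propagation described above:
#     P(t_k) = Phi @ P(t_{k-1}) @ Phi.T + Q
# Phi is the system state transition matrix; Q is the system noise matrix,
# obtainable from the inertial sensor's specification.

def propagate_covariance(P, Phi, Q):
    return Phi @ P @ Phi.T + Q

P0 = np.eye(3)             # previous parameter covariance matrix
Phi = np.eye(3)            # illustrative state transition matrix
Q = 0.01 * np.eye(3)       # illustrative system noise
P1 = propagate_covariance(P0, Phi, Q)
```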
Similarly, when the processing device receives the speed measurement data transmitted by the inertial sensor at the next time tk+1, the process of step 3023 is repeated.
Therefore, based on the speed observation data collected by the inertial sensor, the updating of the initial positioning information and of the initial covariance matrix is realized, which is equivalent to updating the position information of the object to be positioned to the position determined based on the data collected by the inertial sensor, so that the subsequent correction processing does not require large-amplitude adjustment, and the positioning precision is ensured.
Step 3024: the processing equipment respectively adopts the satellite observation data and the characteristic point coordinate data to carry out iterative correction on the intermediate positioning information to obtain target positioning information of the object to be positioned, and obtains a second parameter matrix obtained after correcting the intermediate parameter matrix and a second parameter covariance matrix obtained after correcting the intermediate parameter covariance matrix.
The processing equipment updates the initial position information of the object to be positioned based on the speed measurement data acquired by the inertial sensor to obtain intermediate positioning information, an intermediate parameter matrix and an intermediate parameter covariance matrix, and then iteratively corrects the intermediate positioning information, the intermediate parameter matrix and the intermediate parameter covariance matrix by respectively adopting satellite observation data and feature point coordinate data.
It should be noted that, in the embodiment of the present disclosure, when the processing device performs iterative correction, depending on the order in which the data are used, there are two iterative processing manners:
In the first manner, the processing device performs the first positioning correction based on the satellite observation data, and then performs the second positioning correction based on the feature point coordinate data.
Specifically, the processing device establishes a real-time dynamic RTK differential constraint relation based on satellite observation data, establishes a Kalman correction equation based on the RTK differential constraint relation, the intermediate parameter covariance matrix and a first increment matrix for correcting the intermediate parameter matrix, obtains a first parameter matrix obtained by correcting the intermediate parameter matrix and a first correction result of the intermediate positioning information, and obtains a first parameter covariance matrix obtained by correcting the intermediate parameter covariance matrix.
In specific implementation, in order to perform positioning based on the satellite observation data associated with the object to be positioned, the processing device needs to calculate information such as the satellite position, velocity, clock error and clock error rate in advance. In this embodiment of the application, the processing device performs this calculation based on the real-time navigation ephemeris information broadcast by the satellite data server and included in the satellite observation data, obtaining the satellite position, velocity, clock error rate and the like at the current time (the current time may be determined from the processing device's computing system time). The ephemeris information of a satellite represents a set of parameters for calculating the satellite position, and may be transmitted in the form of a binary stream through communication methods such as 4G, 5G or Wi-Fi.
Further, when constructing the RTK differential constraint relationship based on the target reference station included in the satellite observation data and the pseudo-range and carrier phase observation values obtained by its own observation, the processing device establishes, according to the satellite observation data and the intermediate positioning information of the object to be positioned, a residual matrix comprising a pseudo-range double-difference residual matrix and a carrier phase double-difference residual matrix as the established RTK differential constraint relationship.
Specifically, the processing device determines the position information of each positioning satellite according to the satellite observation data and acquires the position information of the target reference station. Based on the position information of each positioning satellite, the position information of the target reference station and the intermediate positioning information of the object to be positioned, it determines a first geometric distance between each positioning satellite and the target reference station and a second geometric distance between each positioning satellite and the object to be positioned, and then determines a pseudo-range double-difference estimation value and a carrier phase double-difference estimation value based on the first and second geometric distances. It further determines the pseudo-range double-difference observed values between the reference satellite and the other positioning satellites based on the pseudo-range information in the satellite observation data, and determines the carrier phase double-difference observed values of the target reference station and the object to be positioned based on the carrier phase observation values in the satellite observation data. Finally, it establishes the pseudo-range double-difference residual matrix based on the differences between the pseudo-range double-difference observed values and estimation values, establishes the carrier phase double-difference residual matrix based on the differences between the carrier phase double-difference observed values and estimation values, and thereby establishes a residual matrix comprising the pseudo-range double-difference residual matrix and the carrier phase double-difference residual matrix.
The formula for specifically establishing the residual matrix is as follows:
wherein the first group of terms represents the pseudo-range double-difference residuals, the second group represents the carrier phase double-difference residuals, and the geometric distance terms represent the distances between the processing device and each satellite; m is the number of positioning satellites involved in positioning the object to be positioned; one term represents the first geometric distance between satellite 1 and the target reference station b; other terms represent the pseudo-range double-difference observed value between satellite 1 and satellite 2 and the carrier phase double-difference observed value between satellite 1 and satellite 2, and so on; the remaining terms are the double-difference ionospheric delay and the double-difference tropospheric delay. Satellite 1 refers to the reference satellite. Since the calculation of the double-difference ionospheric delay and the double-difference tropospheric delay is a well-established technique in the art, it is not described in detail herein.
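The residual construction just described can be sketched as follows. This is an illustrative reduction to a single satellite pair; the function names, indices and numeric values are assumptions, not from the source.

```python
# Sketch of one double-difference residual: observations from the rover
# (object to be positioned) and the base (target reference station) are
# single-differenced between receivers, then differenced between the
# reference satellite (ref_idx) and another satellite (sat_idx). The
# residual subtracts the same double difference of estimated geometric
# distances, as described above.

def double_difference(rover, base, ref_idx, sat_idx):
    # between-receiver single differences, then between-satellite difference
    return (rover[sat_idx] - base[sat_idx]) - (rover[ref_idx] - base[ref_idx])

def dd_residual(obs_rover, obs_base, est_rover, est_base, ref_idx, sat_idx):
    # residual = observed double difference - estimated double difference
    return (double_difference(obs_rover, obs_base, ref_idx, sat_idx)
            - double_difference(est_rover, est_base, ref_idx, sat_idx))

# Illustrative pseudo-range values (metres), satellite 0 as reference:
res = dd_residual(
    obs_rover=[20000000.0, 21000000.5], obs_base=[20000010.0, 21000005.0],
    est_rover=[20000000.0, 21000000.0], est_base=[20000010.0, 21000005.0],
    ref_idx=0, sat_idx=1)
```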
The obtained pseudo-range double-difference residual matrix and carrier phase double-difference residual matrix are combined to obtain the residual matrix, and a Kalman correction equation is established between the residual matrix and a preset first incremental matrix:
wherein HRTK is the Jacobian matrix constructed in the Kalman filtering equation; one term represents the lever arm between the inertial sensor and the satellite antenna; another represents the unit observation vector from the processing device to satellite m; further terms represent the position estimate of the object to be positioned and the position of satellite m; and the final term is the carrier wavelength.
Then, based on the established Kalman correction equation, the first incremental matrix is solved as follows:
In the formula, the first term is the predicted value of the first parameter covariance matrix at time tk, which is specifically the obtained intermediate parameter covariance matrix; the second is the Kalman gain; RRTK(tk) is the measurement error matrix; the ambiguity filtering parameter is an integer, obtained by fixing the ambiguity with the MLAMBDA method; and the last term is the first parameter covariance matrix obtained after correction.
After obtaining the first incremental matrix, the processing device takes the superposition result of the first incremental matrix and the intermediate parameter matrix as the corrected intermediate parameter matrix, namely the first parameter matrix, and determines the parameters corresponding to the intermediate positioning information in the first parameter matrix as the first correction result of the intermediate positioning information.
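The Kalman correction step described above follows the standard gain/increment/covariance pattern and can be sketched as follows (dimensions and values are illustrative, not from the source):

```python
import numpy as np

# Sketch of the Kalman correction used above: given the predicted covariance
# P, observation Jacobian H, measurement error matrix R and residual vector
# z, compute the Kalman gain, the incremental matrix dx (superposed on the
# parameter matrix) and the corrected covariance.

def kalman_correct(P, H, R, z):
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    dx = K @ z                          # increment applied to the state
    P_new = (np.eye(P.shape[0]) - K @ H) @ P
    return dx, P_new

# One-dimensional illustrative case:
dx, P_new = kalman_correct(np.array([[1.0]]), np.array([[1.0]]),
                           np.array([[1.0]]), np.array([2.0]))
```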
After completing the first positioning correction based on the satellite observation data, the processing device performs the second positioning correction based on the feature point coordinate data.
Specifically, the processing device determines the pose increment information of the object to be positioned based on the feature point coordinate data; when determining the pose increment of the object to be positioned, the processing device determines the attitude increment information and position increment information of the object to be positioned based on the feature point coordinate data and the calibration parameters of the image acquisition device, using a random sample consensus algorithm and a normal distribution transform algorithm.
And then establishing a Kalman correction equation based on the pose increment information, the first parameter covariance matrix and a second increment matrix used for correcting the first parameter matrix, obtaining a second parameter matrix obtained after correcting the first parameter matrix and a second correction result of the intermediate positioning information, and obtaining a second parameter covariance matrix obtained after correcting the first parameter covariance matrix.
In particular, assume that the matched feature points between the first image frame obtained by the processing device at time tk and the second image frame obtained at time tk+1 have certain coordinates in the image at time tk and corresponding coordinates in the image at time tk+1; the following equations are then solved using the RANdom SAmple Consensus (RANSAC) and Normal Distribution Transform (NDT) algorithms:
wherein the first term represents the attitude variable of the coordinates in the camera coordinate system; K is a calibration parameter of the image acquisition device on the object to be positioned; one term represents the attitude increment between times tk and tk+1 in the image acquisition device coordinate system; other terms indicate the position increment of the vehicle between times tk and tk+1 in the image acquisition device coordinate system; and n represents the number of matched feature points.
The pose variables in the coordinate system corresponding to the image acquisition device can be obtained by the above calculation; meanwhile, the corresponding quantities in the inertial sensor coordinate system are calculated according to the following formula:
wherein the two transformation terms characterize the relative position relationship between the coordinate system corresponding to the image acquisition device and the coordinate system corresponding to the inertial sensor, that is, the transformation relationship between the two three-dimensional coordinate systems, which can be obtained by calibration in advance.
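Transferring a motion increment from the camera frame to the inertial-sensor frame via pre-calibrated extrinsics can be sketched as a similarity transform on 4 × 4 homogeneous matrices. The function names and the homogeneous-matrix formulation are illustrative assumptions, not the source's notation:

```python
import numpy as np

# Sketch: conjugate a camera-frame motion increment by the camera-to-IMU
# extrinsic transform T_ext, giving the same increment expressed in the
# inertial-sensor frame: T_imu = T_ext @ T_cam @ inv(T_ext).

def to_homogeneous(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def camera_increment_to_imu(T_cam, T_ext):
    return T_ext @ T_cam @ np.linalg.inv(T_ext)

# Illustrative case: 90-degree rotation about z in the camera frame,
# extrinsics a pure translation of 1 m along x.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
T_cam = to_homogeneous(Rz, np.zeros(3))
T_ext = to_homogeneous(np.eye(3), np.array([1.0, 0.0, 0.0]))
T_imu = camera_increment_to_imu(T_cam, T_ext)
```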
And then, the processing equipment determines pose increment information of the object to be positioned based on the feature point coordinate data, establishes a Kalman correction equation based on the pose increment information, the first parameter covariance matrix and a second increment matrix used for correcting the first parameter matrix, obtains a second parameter matrix obtained after correcting the first parameter matrix and a second correction result of the intermediate positioning information, and obtains a second parameter covariance matrix obtained after correcting the first parameter covariance matrix.
In specific implementation, the processing device obtains a corresponding relationship between the attitude incremental information and a preset second incremental matrix based on the following formula:
In the formula, one term is the inverse of the right Jacobian matrix of the SO(3) Lie group, and another is the right Jacobian matrix of the SO(3) Lie group itself; a further term is a parameter determined from the attitude; one term characterizes the transpose of the pose of the object to be positioned at time tk; and the last term is the observed value of the attitude variation.
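For reference, the right Jacobian of the SO(3) Lie group mentioned above has the standard closed form (a well-known Lie-group result, not taken from this document), where theta is the norm of the rotation vector phi and the bracket denotes its skew-symmetric matrix:

```latex
J_r(\phi) = I_{3\times3}
  - \frac{1-\cos\theta}{\theta^{2}}\,[\phi]_{\times}
  + \frac{\theta-\sin\theta}{\theta^{3}}\,[\phi]_{\times}^{2},
\qquad \theta = \lVert\phi\rVert .
```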
Further, the processing device constructs a correspondence between the position increment information and the second increment matrix based on the following formula:
wherein the first term indicates the position increment information; the second is a Jacobian matrix, obtained by differentiating the observation equation with respect to the parameters; another term represents the position increment between tk and tk+1 in the inertial sensor coordinates; and the last indicates the estimated position increment.
The respective intermediate parameters in the matrix are determined using the following formula:
In the formula, one term is the projection of the vehicle's gravity vector at time ti in the ECEF coordinate system, a(ti) is the acceleration measurement of the inertial sensor, and the remaining term is obtained by decomposition.
Then, the correspondence between the second incremental matrix and the pose increment information is established as follows:
then, based on the Kalman correction equation, a second incremental matrix is solvedThe following were used:
In the formula, the first term specifically refers to the first parameter covariance matrix; the second is the Kalman gain; the measurement error matrix is determined by the intrinsic properties of the measuring device and can generally be considered known; and the last term is the covariance matrix of the fusion filter after correction.
After obtaining the second incremental matrix, the processing device takes the superposition result of the second incremental matrix and the first parameter matrix as the corrected first parameter matrix, namely the second parameter matrix; the parameters corresponding to the pose information in the second parameter matrix are the second correction result corresponding to the intermediate positioning information.
In the second manner, the processing device performs the first positioning correction based on the feature point coordinate data, and then performs the second positioning correction based on the satellite observation data.
Specifically, the processing device may determine pose increment information of the object to be positioned based on the feature point coordinate data, establish a kalman correction equation based on the pose increment information, the intermediate parameter covariance matrix, and the first increment matrix for correcting the intermediate parameter matrix, obtain a first parameter matrix obtained by correcting the intermediate parameter matrix and a first correction result of the intermediate positioning information, and obtain a first parameter covariance matrix obtained by correcting the intermediate parameter covariance matrix;
and establishing a real-time dynamic RTK differential constraint relation based on the satellite observation data, establishing a Kalman correction equation based on the RTK differential constraint relation, the first parameter covariance matrix and a second increment matrix for correcting the first parameter matrix, obtaining a second parameter matrix obtained by correcting the first parameter matrix and a second correction result of the intermediate positioning information, and obtaining a second parameter covariance matrix obtained by correcting the first parameter covariance matrix.
The algorithm principle used in the positioning correction based on the satellite observation data is the same as in the first manner, as is the algorithm principle used in the positioning correction based on the feature point coordinate data, so the description is not repeated here.
Therefore, based on satellite observation data and characteristic point coordinate data, the positioning information is corrected twice, the positioning precision is improved, the defect that the positioning system excessively depends on a single positioning factor is overcome, and the robustness of the positioning system is improved.
Step 303: and the processing equipment presents the target positioning information corresponding to the object to be positioned on the operable page.
And after the processing equipment finishes the positioning correction processing on the object to be positioned, presenting the correspondingly obtained target positioning information on an operable page so as to visually display the position information of the object to be positioned.
It should be noted that, in the embodiment of the present application, the positioning process of the object to be positioned may be a continuous process: at each positioning time, the steps indicated in steps 301 to 303 are respectively adopted, so as to finally obtain the position information corresponding to each positioning time and further determine the motion trajectory and motion trend of the object to be positioned. Meanwhile, according to actual processing requirements, the calculated position information in one coordinate system can be converted to another coordinate system based on the conversion relationship between the coordinate systems, for example, converting the positioning information in the inertial sensor coordinate system to the terrestrial coordinate system, where conversion between coordinate systems is a conventional technique in the art and is not described in detail herein.
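As one common example of such a conversion (illustrative only; the source merely notes that coordinate conversions are conventional technology), ECEF coordinates can be converted to geodetic latitude/longitude/height on the WGS-84 ellipsoid with the standard iterative method:

```python
import math

# Standard iterative ECEF-to-geodetic conversion on the WGS-84 ellipsoid.
WGS84_A = 6378137.0                    # semi-major axis (m)
WGS84_F = 1.0 / 298.257223563          # flattening
WGS84_E2 = WGS84_F * (2.0 - WGS84_F)   # first eccentricity squared

def ecef_to_geodetic(x, y, z, iterations=10):
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    lat = math.atan2(z, p * (1.0 - WGS84_E2))  # initial latitude guess
    h = 0.0
    for _ in range(iterations):
        n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
        h = p / math.cos(lat) - n
        lat = math.atan2(z, p * (1.0 - WGS84_E2 * n / (n + h)))
    return math.degrees(lat), math.degrees(lon), h

# A point on the equator at the ellipsoid surface:
lat, lon, h = ecef_to_geodetic(6378137.0, 0.0, 0.0)
```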
Therefore, the target positioning information of the object to be positioned is displayed on the operable page, the positioning result of the object to be positioned can be visually presented, and a reliable basis is provided for determining the movement trend and the position information of the object to be positioned.
It should be noted that the technical solution provided in the present application may be applied to positioning vehicles, unmanned aerial vehicles, robots, and other scenes in which positioning of objects is required, and the following describes a positioning process of a vehicle in a scene in which the vehicle is positioned with reference to fig. 5a to 5 c.
In a scenario where the vehicle is located, the processing device that performs the location correction processing may specifically be an in-vehicle terminal on the vehicle.
The image acquisition device may be a vehicle-mounted camera; in the embodiment of the application, a front-mounted camera or a driving-recorder camera may be selected as the vehicle-mounted camera for collecting vehicle-mounted image data. Generally speaking, vehicle-mounted cameras mainly comprise interior, rear, front, side and panoramic cameras. In an automatic driving scene, front cameras are mainly monocular or binocular; a binocular camera has a better distance measurement function but needs to be arranged at two different positions. A panoramic camera uses a wide-angle lens; four such cameras are assembled around the vehicle and their images spliced into a surround view, and lane line perception can be realized with an additional algorithm. The rear-view camera is a wide-angle or fisheye lens, mainly a rear-mounted reversing lens. Generally, the more complex the functions to be satisfied, the greater the number of cameras required.
The satellite data server may be part of a positioning system composed of a plurality of functional technologies, and provides satellite observation data of reference stations and ephemeris information to the outside. The positioning system may be a product deeply integrating high and new technologies such as satellite positioning technology, computer network technology and digital communication technology. The system consists of five parts: a reference station network, a data processing center, a data transmission system, a positioning and navigation data broadcasting system, and a user application system, wherein all reference stations and the data processing center are connected into a whole through the data transmission system to form a special network.
Referring to fig. 5a, which is a flowchart of an algorithm for vehicle positioning in the embodiment of the present application, based on the flowchart illustrated in fig. 5a, it can be known that:
the vehicle to be positioned is provided with an inertial sensor, a vehicle-mounted camera and satellite positioning data, wherein acceleration and angular velocity measurement data are acquired by the inertial sensor, integration processing is carried out, and position, velocity and direction information of the vehicle to be positioned can be obtained; aiming at vehicle-mounted image data provided by a vehicle-mounted camera, estimating the pose change of a vehicle to be positioned by extracting and matching image characteristic points of the vehicle-mounted image data, and further constructing a constraint relation of the pose change of the vehicle; the satellite observation data can be provided based on the satellite positioning equipment, and the RTK differential constraint relation of the pseudo range and the carrier phase can be constructed.
Furthermore, in the embodiment of the application, a correction equation is constructed based on the Kalman filtering algorithm to realize correction of the various types of positioning information, which is equivalent to performing fusion filtering processing on them; the positioning information of the vehicle to be positioned is then determined based on the corrected positioning information, and the final positioning result is obtained.
Fig. 5b is a schematic diagram of a frame of a positioning system including various parts according to an embodiment of the present application.
The vehicle-mounted terminal can acquire data from multiple sources in the process of positioning the vehicle to be positioned, and the acquired data comprise: the first type of satellite observation data, namely the navigation ephemeris broadcast by the satellite data server and the observations of the target reference station; vehicle-mounted image data obtained from the vehicle-mounted camera; the second type of satellite observation data obtained from the satellite positioning equipment, comprising pseudo-range and carrier phase observation values; and the speed measurement data obtained from the inertial sensor, comprising angular velocity and acceleration measurement values.
The satellite data server can receive a data acquisition request which is sent by the vehicle-mounted terminal and carries network positioning information of the vehicle to be positioned, determine a target reference station to which the vehicle to be positioned belongs based on the network positioning information of the vehicle, and further send first satellite observation data obtained by observation of the target reference station and satellite navigation ephemeris (ephemeris information) to the vehicle-mounted terminal.
Referring to fig. 5c, which is a schematic flow chart of the positioning process in the embodiment of the present application, the positioning process is described below with reference to fig. 5 c:
step 501: the vehicle-mounted terminal sends a request for acquiring ephemeris and observation data to a satellite data server through a communication network.
Step 502: the vehicle-mounted terminal acquires the ephemeris information and first type satellite observation data broadcast by the satellite data server.
Step 503: and the vehicle-mounted terminal calculates the satellite position, clock error, speed and clock error change rate according to the ephemeris information.
The vehicle-mounted terminal may calculate the satellite position, clock bias, velocity and clock bias change rate based on the ephemeris information at any time before these quantities are used.
Step 504: the vehicle-mounted terminal establishes a parameter matrix comprising position information and attitude information.
Step 505: and the vehicle-mounted terminal assists the update of the motion state of the vehicle according to the speed measurement value acquired by the inertial sensor.
And the vehicle-mounted terminal assists the update of the motion state of the vehicle according to the angular velocity measurement data and the acceleration measurement data acquired by the inertial sensor. And then obtaining an updated intermediate parameter matrix and an updated intermediate parameter covariance matrix.
Step 506: and the vehicle-mounted terminal establishes RTK differential constraint according to the acquired ephemeris, the first type of satellite observation data and the second type of satellite observation data acquired by the vehicle-mounted satellite positioning equipment, and corrects the motion state of the vehicle by constructing a Kalman correction equation.
Step 507: and the vehicle-mounted terminal establishes a pose variable constraint relation according to vehicle-mounted image data sent by the vehicle-mounted camera, and corrects the running state of the vehicle by constructing a Kalman correction equation.
Step 508: and the vehicle-mounted terminal outputs the positioning information of the vehicle.
The execution sequence of step 506 and step 507 is not fixed; the order of execution can be determined according to the actual data acquisition situation.
For example, assuming that satellite observation data is acquired first, positioning correction is performed based on the operation defined in step 506, and then positioning correction is performed using the operation defined in step 507;
for example, if the in-vehicle image data is acquired first, the positioning information is corrected by the operation defined in step 507, and then the positioning correction is performed by the operation defined in step 506.
Therefore, in the vehicle positioning process, positioning of the vehicle can be realized based on the combined action of the RTK differential constraint established from satellite observation data, the speed measurement data collected by the inertial sensor, and the vehicle-mounted image data. In the positioning process, the vehicle motion state is updated with the aid of the speed measurement information collected by the inertial sensor; the RTK differential constraint is selectively constructed based on information such as ephemeris and satellite observation data, and Kalman filtering correction is performed on the positioning information, so that the positioning accuracy can reach centimeter level; image feature points can also be extracted from the vehicle-mounted image data and matched, the vehicle pose variation between adjacent image frames is estimated, and this pose variation is taken as an observed quantity to restrain the vehicle pose error and correct the positioning information again.
Referring to fig. 6, which is a schematic diagram of a logic structure of a positioning apparatus according to an embodiment of the present disclosure, the positioning apparatus 600 may include:
an obtaining unit 601, configured to obtain initial positioning information of an object to be positioned in response to a positioning request for the object to be positioned, where the positioning request is initiated on an operable page, and the initial positioning information at least includes pose information of the object to be positioned;
the processing unit 602 is configured to perform positioning correction processing on the initial positioning information based on the acquired satellite observation data, velocity measurement data and associated feature point coordinate data that are matched with the object to be positioned, so as to obtain target positioning information of the object to be positioned, where the feature point coordinate data is obtained by performing feature point matching on an associated image frame; wherein the positioning correction process includes: updating initial positioning information based on the speed measurement data to obtain intermediate positioning information, and performing iterative correction on the intermediate positioning information by respectively adopting satellite observation data and characteristic point coordinate data to obtain target positioning information;
the presenting unit 603 is configured to present the target positioning information corresponding to the object to be positioned on the operable page.
Optionally, when updating the initial positioning information based on the speed measurement data to obtain the intermediate positioning information, and iteratively correcting the intermediate positioning information by respectively using the satellite observation data and the feature point coordinate data to obtain the target positioning information, the processing unit 602 is configured to:
acquiring satellite observation data matched with an object to be positioned, acquiring speed measurement data acquired by an inertial sensor, and determining feature point coordinate data based on image data acquired by image acquisition equipment;
establishing a parameter matrix associated with the object to be positioned based on initial positioning information of the object to be positioned, speed measurement data of the object to be positioned, zero offset information of an inertial sensor and a carrier phase double-difference ambiguity parameter;
updating initial positioning information of an object to be positioned based on the speed measurement data to obtain intermediate positioning information, and obtaining an intermediate parameter matrix and an intermediate parameter covariance matrix obtained after updating the parameter matrix;
and respectively adopting satellite observation data and feature point coordinate data to carry out iterative correction on the intermediate positioning information to obtain target positioning information of the object to be positioned, and obtaining a second parameter matrix obtained after correcting the intermediate parameter matrix and a second parameter covariance matrix obtained after correcting the intermediate parameter covariance matrix.
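For illustration only, the parameter matrix described above can be sketched as a simple container bundling the pose, velocity, inertial-sensor zero offsets and carrier phase double-difference ambiguity parameters; all names and dimensions below are hypothetical stand-ins, not the claimed data structure:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical state container mirroring the parameter matrix described
# above: pose (position + attitude), velocity, inertial-sensor zero
# offsets, and one carrier phase double-difference ambiguity per
# satellite pair. Dimensions are illustrative.

@dataclass
class StateVector:
    position: List[float]                  # x, y, z
    attitude: List[float]                  # roll, pitch, yaw
    velocity: List[float]                  # vx, vy, vz
    gyro_bias: List[float] = field(default_factory=lambda: [0.0] * 3)
    accel_bias: List[float] = field(default_factory=lambda: [0.0] * 3)
    dd_ambiguities: List[float] = field(default_factory=list)

    def as_vector(self) -> List[float]:
        # flatten into the column vector consumed by the filter
        return (self.position + self.attitude + self.velocity
                + self.gyro_bias + self.accel_bias + self.dd_ambiguities)

s = StateVector(position=[0.0, 0.0, 0.0],
                attitude=[0.0, 0.0, 0.0],
                velocity=[1.0, 0.0, 0.0],
                dd_ambiguities=[12.0, -3.0])
print(len(s.as_vector()))   # 3+3+3+3+3+2 state entries
```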
Optionally, when iterative correction is performed on the intermediate positioning information by respectively using the satellite observation data and the feature point coordinate data, the processing unit 602 is configured to:
establishing a real-time dynamic RTK differential constraint relation based on satellite observation data, establishing a Kalman correction equation based on the RTK differential constraint relation, an intermediate parameter covariance matrix and a first increment matrix for correcting the intermediate parameter matrix, obtaining a first parameter matrix obtained after correcting the intermediate parameter matrix and a first correction result of intermediate positioning information, and obtaining a first parameter covariance matrix obtained after correcting the intermediate parameter covariance matrix;
and determining pose increment information of the object to be positioned based on the feature point coordinate data, establishing a Kalman correction equation based on the pose increment information, the first parameter covariance matrix and a second increment matrix used for correcting the first parameter matrix, obtaining a second parameter matrix obtained after correcting the first parameter matrix and a second correction result of the intermediate positioning information, and obtaining a second parameter covariance matrix obtained after correcting the first parameter covariance matrix.
Optionally, when iterative correction is performed on the intermediate positioning information by respectively using the satellite observation data and the feature point coordinate data, the processing unit 602 is configured to:
determining pose increment information of an object to be positioned based on the feature point coordinate data, establishing a Kalman correction equation based on the pose increment information, an intermediate parameter covariance matrix and a first increment matrix for correcting the intermediate parameter matrix, obtaining a first parameter matrix obtained after correcting the intermediate parameter matrix and a first correction result of the intermediate positioning information, and obtaining a first parameter covariance matrix obtained after correcting the intermediate parameter covariance matrix;
and establishing a real-time dynamic RTK differential constraint relation based on the satellite observation data, establishing a Kalman correction equation based on the RTK differential constraint relation, the first parameter covariance matrix and a second increment matrix for correcting the first parameter matrix, obtaining a second parameter matrix obtained after correcting the first parameter matrix and a second correction result of the intermediate positioning information, and obtaining a second parameter covariance matrix obtained after correcting the first parameter covariance matrix.
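For illustration only, the sequential correction described in both orderings above — one Kalman measurement update per observation source, with the corrected parameter covariance matrix of the first update feeding the second — can be sketched as follows; the two-state example, its matrices and its residual values are hypothetical stand-ins, not the claimed parameter matrix:

```python
# Minimal sketch of two sequential Kalman measurement updates: the
# covariance produced by the first (RTK-style) update is reused by the
# second (visual pose-increment) update. Pure-Python, 2x2 matrices.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def kalman_update(x, P, H, R, z_residual):
    # innovation covariance S = H P H^T + R (1x1 here for brevity)
    S = mat_mul(mat_mul(H, P), transpose(H))
    S[0][0] += R[0][0]
    # gain K = P H^T S^-1 (scalar inverse since S is 1x1)
    K = mat_mul(mat_mul(P, transpose(H)), [[1.0 / S[0][0]]])
    # state correction: x += K * residual
    x_new = [x[i] + K[i][0] * z_residual[0] for i in range(len(x))]
    # covariance correction: P = (I - K H) P
    KH = mat_mul(K, H)
    I_KH = [[(1.0 if i == j else 0.0) - KH[i][j] for j in range(len(x))]
            for i in range(len(x))]
    return x_new, mat_mul(I_KH, P)

# intermediate state [position, velocity] and its covariance
x = [0.0, 1.0]
P = [[1.0, 0.0], [0.0, 1.0]]

# first update: RTK-style position residual (observed - predicted)
x, P = kalman_update(x, P, H=[[1.0, 0.0]], R=[[0.25]], z_residual=[0.8])
# second update: visual pose-increment residual, reusing corrected P
x, P = kalman_update(x, P, H=[[1.0, 0.0]], R=[[0.5]], z_residual=[0.1])
print(round(x[0], 3), round(P[0][0], 3))
```

Each update shrinks the position variance, so the second observation source is weighted against an already tightened estimate, which is the point of performing the corrections iteratively rather than in one batch.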
Optionally, when the real-time dynamic RTK differential constraint relationship is established based on the satellite observation data, the processing unit 602 is configured to:
according to satellite observation data and intermediate positioning information of an object to be positioned, a residual matrix comprising a pseudo-range double-difference residual matrix and a carrier phase double-difference residual matrix is established and used as an established RTK differential constraint relation;
Optionally, when determining the pose increment information of the object to be positioned based on the feature point coordinate data, the processing unit 602 is configured to:
and determining the attitude increment information and the position increment information of the object to be positioned by adopting a random sample consensus (RANSAC) algorithm and a normal distributions transform (NDT) algorithm based on the feature point coordinate data and the calibration parameters of the image acquisition equipment.
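As an illustrative sketch only, the role of a random sample consensus step can be shown with a toy 2-D translation estimate between matched feature points; a real system would estimate a full 6-DoF pose increment from calibrated coordinates and refine it, e.g. with NDT, and all values below are hypothetical:

```python
import random

# Toy RANSAC: estimate a 2-D translation (a stand-in for the pose
# increment) from matched point pairs while tolerating outliers.

def ransac_translation(pairs, iters=50, tol=0.5, seed=0):
    rng = random.Random(seed)
    best_t, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(pairs)   # minimal sample: 1 pair
        tx, ty = x2 - x1, y2 - y1
        # count pairs consistent with this candidate translation
        inliers = sum(abs(bx - ax - tx) < tol and abs(by - ay - ty) < tol
                      for (ax, ay), (bx, by) in pairs)
        if inliers > best_inliers:
            best_t, best_inliers = (tx, ty), inliers
    return best_t

pairs = [((0, 0), (3, 1)), ((1, 2), (4, 3)), ((5, 5), (8, 6)),
         ((2, 2), (9, 9))]                       # last pair is an outlier
print(ransac_translation(pairs))
```

The outlier pair is rejected because the translation it proposes is consistent with only itself, while the true translation (3, 1) is supported by three pairs.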
Optionally, when a residual matrix including a pseudorange double-difference residual matrix and a carrier phase double-difference residual matrix is established according to satellite observation data and intermediate positioning information of an object to be positioned, the processing unit 602 is configured to:
determining the position information of each positioning satellite according to satellite observation data, acquiring the position information of a target reference station, respectively determining a first geometric distance between each positioning satellite and the target reference station based on the position information of each positioning satellite, the position information of the target reference station and the intermediate positioning information of an object to be positioned, respectively determining a second geometric distance between each positioning satellite and the object to be positioned, and determining a pseudo-range double-difference estimation value and a carrier phase double-difference estimation value based on the first geometric distance and the second geometric distance;
respectively determining pseudo-range double-difference observed values between a reference satellite and other positioning satellites observed by a target reference station and an object to be positioned based on pseudo-range information in satellite observation data, and respectively determining carrier-phase double-difference observed values between the reference satellite and other positioning satellites observed by the target reference station and the object to be positioned according to carrier-phase observed values in satellite observation data;
establishing a pseudo-range double-difference residual matrix based on a difference value between the pseudo-range double-difference observed value and the pseudo-range double-difference estimated value, and establishing a carrier phase double-difference residual matrix based on a difference value between the carrier phase double-difference observed value and the carrier phase double-difference estimated value;
and establishing a residual error matrix comprising a pseudo-range double-difference residual error matrix and a carrier phase double-difference residual error matrix.
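For illustration only, one entry of the pseudo-range double-difference residual matrix can be sketched as the difference between an observed double difference (built from measured pseudo-ranges) and an estimated double difference (built from geometric distances); the coordinates below are hypothetical:

```python
import math

def geom_dist(p, q):
    # Euclidean distance between two ECEF-style coordinates (metres)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def dd_estimate(sat, ref_sat, base, rover):
    # double difference of geometric ranges:
    # (rover->sat - base->sat) - (rover->ref - base->ref)
    single_sat = geom_dist(rover, sat) - geom_dist(base, sat)
    single_ref = geom_dist(rover, ref_sat) - geom_dist(base, ref_sat)
    return single_sat - single_ref

def dd_observe(pr, ref_pr):
    # same combination built from measured pseudo-ranges;
    # pr = (rover_measurement, base_measurement) for one satellite
    return (pr[0] - pr[1]) - (ref_pr[0] - ref_pr[1])

# hypothetical geometry: reference station, rover, two satellites
base    = (0.0, 0.0, 0.0)
rover   = (10.0, 0.0, 0.0)
ref_sat = (0.0, 0.0, 20_000_000.0)
sat     = (15_000_000.0, 0.0, 15_000_000.0)

est = dd_estimate(sat, ref_sat, base, rover)
# simulated observation = true double difference + a 3 cm error
obs = dd_observe((geom_dist(rover, sat) + 0.03, geom_dist(base, sat)),
                 (geom_dist(rover, ref_sat), geom_dist(base, ref_sat)))
residual = obs - est   # one entry of the pseudo-range DD residual matrix
print(round(residual, 3))
```

Double differencing cancels the satellite and receiver clock errors that are common to both receivers, which is why the residual reduces to the measurement error alone in this idealized example.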
Optionally, when obtaining initial positioning information of an object to be positioned, the obtaining unit 601 is configured to:
if it is determined that the object to be positioned is being positioned for the first time, acquiring first network positioning information of the processing device, determining the first network positioning information as the initial position information of the object to be positioned, determining the initial attitude information of the object to be positioned according to the deviation of the inertial coordinate system corresponding to the inertial sensor relative to the terrestrial coordinate system, and taking the initial position information and the initial attitude information as the initial positioning information of the object to be positioned, where the processing device and the inertial sensor are installed on the object to be positioned;
and if it is determined that the object to be positioned is not being positioned for the first time, acquiring the historical target positioning information obtained when the object to be positioned was positioned last time, and determining the historical target positioning information as the initial positioning information of the object to be positioned at the current moment.
Optionally, when acquiring satellite observation data matched with the object to be positioned, the processing unit 602 is configured to:
acquiring second network positioning information of the processing equipment, and sending a data acquisition request to a satellite data server based on the second network positioning information so that the satellite data server determines a target reference station corresponding to an object to be positioned based on the second network positioning information;
receiving ephemeris information and first type satellite observation data sent by a target reference station through a satellite data server, wherein the first type satellite observation data at least comprises: pseudo-range information and a carrier phase observation value obtained after each positioning satellite is observed through a target reference station;
and acquiring second-type satellite observation data observed by the satellite positioning device on the object to be positioned, and taking the ephemeris information, the first-type satellite observation data and the second-type satellite observation data as the acquired satellite observation data matched with the object to be positioned.
Optionally, when obtaining feature point coordinate data associated with an object to be positioned, the processing unit 602 is configured to:
acquiring image data collected by the image acquisition equipment on the object to be positioned, denoising the image data by means of Wiener filtering, and performing distortion removal processing on the image data based on the internal parameters of the image acquisition equipment to obtain processed image data;
performing frame division processing on the processed image data to obtain a first image frame acquired at the current moment and a second image frame acquired at a positioning moment after the current moment;
and extracting feature points included in the first image frame and the second image frame respectively by adopting an image feature point extraction algorithm, screening out target feature points successfully matched in the first image frame and the second image frame by adopting a feature point matching algorithm, and taking coordinate data of the target feature points in the first image frame and the second image frame respectively as the acquired feature point coordinate data matched with the object to be positioned.
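For illustration only, the feature point matching step can be sketched with a toy greedy nearest-neighbour matcher over binary descriptors; a real system would use an established extractor and matcher (e.g. ORB descriptors with a brute-force Hamming matcher), and the coordinates and descriptors below are hypothetical:

```python
def hamming(d1, d2):
    # Hamming distance between two binary descriptors (bit strings)
    return sum(b1 != b2 for b1, b2 in zip(d1, d2))

def match_features(frame1, frame2, max_dist=2):
    # frame = list of (coordinate, descriptor); greedy nearest-neighbour
    # matching with a distance threshold, standing in for a real matcher
    matches = []
    for xy1, d1 in frame1:
        best = min(frame2, key=lambda f: hamming(d1, f[1]))
        if hamming(d1, best[1]) <= max_dist:
            matches.append((xy1, best[0]))
    return matches

frame1 = [((120, 80), "10110010"), ((40, 200), "01100111")]
frame2 = [((123, 78), "10110011"),   # same point, slightly moved
          ((300, 10), "11111111")]

# coordinate pairs of successfully matched target feature points,
# i.e. the feature point coordinate data fed to the filter
matches = match_features(frame1, frame2)
print(matches)
```

Only the first feature survives the threshold; its coordinates in both frames form one entry of the feature point coordinate data.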
Optionally, when updating the initial positioning information based on the speed measurement data to obtain the intermediate positioning information, the processing unit 602 is configured to:
determining the attitude information of an object to be positioned at the current moment based on angular velocity measurement data in the velocity measurement data obtained at the current moment, the attitude information determined at a positioning moment before the current moment and the time interval between the current moment and the previous positioning moment;
determining the speed information of the current moment based on the acceleration measurement data in the speed measurement data obtained at the current moment, the speed information of the previous positioning moment, the gravity value information corresponding to the current moment and the time interval, and determining the position information of the current moment based on the speed information of the current moment, the speed information of the previous positioning moment, the time interval and the position information of the previous positioning moment;
and updating corresponding parameters in the parameter matrix based on the determined attitude information, speed information and position information corresponding to the current time to obtain an intermediate parameter matrix, and taking the attitude information and the position information corresponding to the current time as intermediate positioning information of the current time.
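For illustration only, the dead-reckoning update described above — attitude from angular rate, velocity from gravity-compensated acceleration, position from trapezoidal integration of the previous and current velocity — can be sketched in one dimension with hypothetical, uncalibrated values:

```python
# 1-D sketch of the prediction step: attitude, velocity and position
# are advanced over one IMU interval. All values are illustrative.

def imu_update(att_prev, vel_prev, pos_prev, gyro, accel, gravity, dt):
    # attitude: previous attitude advanced by angular rate * interval
    att = att_prev + gyro * dt
    # velocity: previous velocity plus gravity-compensated acceleration
    vel = vel_prev + (accel - gravity) * dt
    # position: trapezoidal integration of previous and current velocity
    pos = pos_prev + 0.5 * (vel_prev + vel) * dt
    return att, vel, pos

att, vel, pos = 0.0, 5.0, 100.0
for _ in range(10):                      # ten 0.1 s IMU epochs
    att, vel, pos = imu_update(att, vel, pos,
                               gyro=0.02, accel=9.91, gravity=9.81, dt=0.1)
print(round(att, 3), round(vel, 3), round(pos, 3))
```

The resulting attitude and position at the current moment are exactly the quantities taken as the intermediate positioning information, while the full triple updates the corresponding entries of the parameter matrix.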
Optionally, the processing unit 602 is further configured to:
constructing a state transition matrix corresponding to a Kalman filtering algorithm based on acceleration measurement data and attitude information in the current-time speed measurement data;
and updating the initial parameter covariance matrix according to the state transition matrix and the error matrix determined by the attribute information of the inertial sensor to obtain an intermediate parameter covariance matrix, wherein the initial parameter covariance matrix is the parameter covariance matrix obtained after the last positioning is completed.
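For illustration only, the covariance update described above follows the standard Kalman prediction form P' = F P F^T + Q, where F is the state transition matrix and Q the error matrix derived from the sensor's attribute information; the two-state example and all matrix values below are hypothetical:

```python
# Sketch of covariance propagation P' = F P F^T + Q for a two-state
# [position, velocity] filter. F and Q are illustrative placeholders.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

def propagate(P, F, Q):
    FPFt = mat_mul(mat_mul(F, P), transpose(F))
    return [[FPFt[i][j] + Q[i][j] for j in range(len(P))]
            for i in range(len(P))]

dt = 0.1
F = [[1.0, dt], [0.0, 1.0]]        # constant-velocity transition
Q = [[1e-4, 0.0], [0.0, 1e-4]]     # process noise from sensor specs
P = [[0.5, 0.0], [0.0, 0.2]]       # covariance after the last positioning

P = propagate(P, F, Q)             # intermediate parameter covariance
print(round(P[0][0], 4))
```

Propagation inflates the position variance by the velocity uncertainty carried through F plus the process noise Q, which is what the subsequent measurement updates then shrink.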
Having described the positioning method and apparatus of the exemplary embodiments of the present application, an electronic device according to another exemplary embodiment of the present application is next described.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method or program product. Accordingly, various aspects of the present application may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module" or "system."
Based on the same inventive concept as the method embodiments, an electronic device is further provided in the embodiments of the present application. Referring to fig. 7, which is a schematic diagram of a hardware structure of an electronic device to which the embodiments of the present application are applied, the electronic device 700 may at least include a processor 701 and a memory 702. The memory 702 stores program code, and when the program code is executed by the processor 701, the processor 701 is enabled to execute the steps of any one of the positioning methods described above.
In some possible implementations, a computing device according to the present application may include at least one processor and at least one memory. The memory stores program code which, when executed by the processor, causes the processor to perform the steps of the positioning method according to the various exemplary embodiments of the present application described above in this specification. For example, the processor may perform the steps shown in fig. 3 a.
A computing device 800 according to this embodiment of the present application is described below with reference to fig. 8. As shown in fig. 8, computing device 800 is embodied in the form of a general purpose computing device. Components of computing device 800 may include, but are not limited to: the at least one processing unit 801, the at least one memory unit 802, and a bus 803 that couples various system components including the memory unit 802 and the processing unit 801.
Bus 803 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The storage unit 802 may include readable media in the form of volatile memory, such as Random Access Memory (RAM) 8021 and/or cache storage unit 8022, and may further include Read Only Memory (ROM) 8023.
Storage unit 802 can also include a program/utility 8025 having a set (at least one) of program modules 8024, such program modules 8024 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The computing device 800 may also communicate with one or more external devices 804 (e.g., keyboard, pointing device, etc.), with one or more devices that enable a user to interact with the computing device 800, and/or with any devices (e.g., router, modem, etc.) that enable the computing device 800 to communicate with one or more other computing devices. Such communication may be through input/output (I/O) interfaces 805. Moreover, the computing device 800 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 806. As shown, the network adapter 806 communicates with other modules for the computing device 800 over the bus 803. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the computing device 800, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Based on the same inventive concept as the above method embodiments, the various aspects of the positioning method provided by the present application may also be implemented in the form of a program product including program code. When the program product runs on an electronic device, the program code is used for causing the electronic device to perform the steps in the positioning method according to the various exemplary embodiments of the present application described above in this specification; for example, the electronic device may perform the steps shown in fig. 3 a.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.