Automated vehicle radar system with automatic alignment for azimuth, elevation, and vehicle speed scale errors
1. A radar system (10) having an auto-alignment feature, the radar system (10) comprising a controller (34), the controller (34) configured to:
detect an object (20) present in a field of view (16) using a radar sensor (14);
determine a measured rate of change of distance (22), a measured azimuth angle (24), and a measured elevation angle (26) to each of at least two of the objects (20);
determine a measured speed (32) of the radar system (10) using a speed sensor (30); and
determine, based on the measured rate of change of distance (22), the measured azimuth angle (24), and the measured elevation angle (26) to each of the at least two objects (20), at least one of:
a speed scale error (36) of the measured speed (32);
an azimuthal misalignment (38) of the radar sensor (14); and
an elevation misalignment (40) of the radar sensor (14).
2. The radar system (10) of claim 1, wherein the controller (34) is configured to simultaneously determine at least two of:
the speed scale error (36) of the measured speed (32);
the azimuthal misalignment (38) of the radar sensor (14); and
the elevation misalignment (40) of the radar sensor (14).
3. The radar system (10) of claim 1 or 2, characterized in that the radar sensor (14) and the speed sensor (30) are configured to be mounted on a host vehicle (12).
4. The radar system (10) of claim 3, wherein the controller (34) is configured to output at least one of:
the measured rate of change of distance (22) to each of at least two of the objects (20) relative to the host vehicle (12);
the measured azimuth angle (24) to each of at least two of the objects (20) relative to the host vehicle (12);
the measured elevation angle (26) to each of at least two of the objects (20) relative to the host vehicle (12); and
the measured speed (32) of the radar system (10), which corresponds to a speed of the host vehicle (12).
5. The radar system (10) of claim 4, characterized in that the controller (34) is configured to determine, while the host vehicle (12) is traveling, at least one of:
the speed scale error (36) of the measured speed (32);
the azimuthal misalignment (38) of the radar sensor (14); and
the elevation misalignment (40) of the radar sensor (14).
6. The radar system (10) of claim 5, wherein the host vehicle (12) is moving while the host vehicle is traveling.
7. The radar system (10) of any of claims 1-6, wherein the controller (34) is further configured to determine an actual speed (42) of the radar system (10) based on the measured speed (32) and the speed scale error (36).
8. The radar system (10) of any of claims 1 to 7, characterized in that the controller (34) is further configured to determine an actual azimuth angle (44) to each of the at least two objects (20) based on the azimuth misalignment (38) and the measured azimuth angle (24) to each of the at least two objects (20).
9. The radar system (10) of any of claims 1 to 8, characterized in that the controller (34) is further configured to determine an actual elevation angle (46) to each of the at least two objects (20) based on the elevation misalignment (40) and the measured elevation angle (26) to each of the at least two objects (20).
10. The radar system (10) of any of claims 1 to 9, wherein each of the at least two objects (20) is characterized as a stationary object.
11. The radar system (10) of any of claims 1-10, wherein the controller (34) is further configured to:
determine a yaw rate (56) using a yaw rate sensor (52);
determine a sideslip angle (50) of the host vehicle (12) based on the yaw rate (56); and
determine the speed scale error (36), the azimuth misalignment (38), and the elevation misalignment (40) further based on the sideslip angle (50).
12. A method performed by the system of any of claims 1-11, the method comprising:
detecting an object (20) present in the field of view (16);
determining a measured rate of change of distance (22), a measured azimuth angle (24) and a measured elevation angle (26) to each of at least two of the objects (20);
determining the measured speed (32); and
determining, based on the measured rate of change of distance (22), the measured azimuth angle (24), and the measured elevation angle (26) to each of the at least two objects (20), at least one of the following:
a speed scale error (36) of the measured speed (32);
an azimuthal misalignment (38) of the system (10); and
an elevation misalignment (40) of the system (10).
13. A system (10) comprising:
means for detecting an object (20) present in a field of view (16) of the system (10);
means for determining a measured rate of change of distance (22), a measured azimuth angle (24) and a measured elevation angle (26) to each of at least two of said objects (20);
means for determining a measured speed (32) of the system (10); and
means for determining, based on the measured rate of change of distance (22), the measured azimuth angle (24), and the measured elevation angle (26) to each of the at least two objects (20), at least one of the following:
a speed scale error (36) of the measured speed (32);
an azimuthal misalignment (38) of the system (10); and
an elevation misalignment (40) of the system (10).
Background
Automotive radar sensors are known to require alignment with the vehicle chassis so that the locations of detected objects are accurately known. The alignment process performed when the vehicle is assembled does not compensate for pitch or elevation errors caused by heavy cargo, or for yaw or heading errors caused by misalignment of the wheels or chassis of the vehicle, which may result in the vehicle 'side-tracking' while driving.
Disclosure of Invention
According to one embodiment, a radar system with automatic alignment is provided that is suitable for use in an automotive vehicle. The system includes a radar sensor, a speed sensor, and a controller. The radar sensor is used to detect objects present in a field of view proximate to a host vehicle on which the radar sensor is mounted. The radar sensor is operable to determine a measured rate of change of range (dRm), a measured azimuth angle (Am), and a measured elevation angle (Em) to each of at least three objects present in the field of view. The speed sensor is used to determine a measured speed (Sm) of the host vehicle. The controller is in communication with the radar sensor and the speed sensor. The controller is configured to simultaneously determine a speed scale error (Bs) of the measured speed, an azimuth misalignment (Ba) of the radar sensor, and an elevation misalignment (Be) of the radar sensor based on the measured rate of change of range, the measured azimuth angle, and the measured elevation angle to each of the at least three objects while the host vehicle moves.
Further features and advantages will appear more clearly on reading the following detailed description of preferred embodiments, given purely by way of non-limiting example and with reference to the accompanying drawings.
Drawings
The invention will now be described, by way of example, with reference to the accompanying drawings, in which:
FIG. 1 is a diagram of a radar system with automatic alignment according to one embodiment;
FIG. 2 is a traffic scenario that may be encountered by the system of FIG. 1, according to one embodiment;
FIG. 3 is a diagram of an algorithm executed by the system of FIG. 1, according to one embodiment;
FIG. 4 is a graph of results of an implementation of the system of FIG. 1, according to one embodiment;
FIG. 5 is a graph of results of an implementation of the system of FIG. 1, according to one embodiment;
FIG. 6 is a graph of results of an implementation of the system of FIG. 1, according to one embodiment;
FIG. 7 is a graph of results of an implementation of the system of FIG. 1, according to one embodiment;
FIG. 8 is a graph of results of an implementation of the system of FIG. 1, according to one embodiment.
Detailed Description
Fig. 1 illustrates a non-limiting example of a radar system 10 (hereinafter the system 10). The system 10 is generally suitable for use in an automated vehicle (e.g., the host vehicle 12) and is equipped with an auto-alignment feature to align the radar sensor 14 with a reference frame established by the body of the host vehicle 12. The performance and utility of an automotive radar system are improved if the radar target tracker algorithm (hereinafter the tracker) has knowledge of (i.e., is programmed or calibrated with) the actual angular mounting orientation of the radar sensor relative to the field of view 16 viewed by the radar sensor and/or relative to the vehicle or structure in which the radar sensor is installed. Advantageously, this is done using an auto-alignment algorithm (hereinafter generally referred to as the algorithm 18) which determines the actual or true angular orientation of the radar sensor.
The actual angular orientation is typically a small deviation from the expected or typical orientation with which the tracker is pre-programmed. The auto-alignment algorithm described herein is intended for use on the host vehicle as it observes or tracks stationary objects or targets while traveling along a road. It has been observed that the auto-alignment algorithm described herein is an improvement over existing examples of auto-alignment algorithms, which take several minutes or more to complete the auto-alignment process, require a stationary host vehicle with a predetermined arrangement of reference targets, and/or are prone to error because they sequentially determine the correction factors needed to compensate for small deviations from an expected or typical orientation as the vehicle is driven, possibly introducing unknown errors.
Some known radar systems used on vehicles only perform azimuth auto-alignment, as those systems are only able to detect range and azimuth to a target or object. The radar system described herein is further capable of measuring elevation angles in addition to range and azimuth, and therefore elevation alignment is also desirable.
An automatic alignment method has been proposed that compares the detected range rate of a stationary target with the measured speed of the host vehicle and compensates the azimuth angle to the stationary target. However, measured speeds typically suffer from a 'speed ratio' or speed scale error, meaning that the measured speed is proportional to the actual speed with some percentage (e.g., 1%) error. This proportional error may be caused by, for example, worn tire rubber and/or wheels having non-standard radii. Depending on how the auto-alignment algorithm is configured, the effect of the speed scale error on the estimated misalignment angles may be significant.
The auto-alignment algorithm described herein estimates the speed scale error, the azimuth alignment error (azimuth misalignment), and the elevation alignment error (elevation misalignment) collectively or simultaneously. The simultaneous calculation is advantageous because it takes into account the cross-correlation of the errors. That is, the algorithm described herein is superior to algorithms that calculate these errors individually (e.g., one error after another). Separate or sequential calculations suffer from the cross-correlation of the errors since, for example, the azimuth misalignment depends on the other two errors. To minimize inaccuracies, multiple iterations may then be necessary, which is undesirably time consuming.
It is generally known to perform a static calibration for measuring radar sensor mounting angles using a stationary subject vehicle and a known set of reference targets (e.g., corner reflectors located at carefully measured positions in an open space around the vehicle). However, this technique is considered inadequate because the dynamic longitudinal axis of the subject vehicle is not readily determined from visual inspection of a stationary vehicle. For example, when a vehicle moves in a straight line down a road, it may 'side-navigate', meaning that the dynamic longitudinal axis of the moving vehicle may actually point in a direction quite different from the longitudinal axis determined from the visual symmetry of the vehicle body. As such, azimuth misalignment will occur regardless of how carefully the static measurements are made. Changes in cargo load may also affect the elevation angle of the radar sensor, which may be different than when the static corrections were made.
With continued reference to fig. 1, the radar sensor 14 is used to detect an example of an object 20 present in the field of view 16 proximate the host vehicle 12 in which the radar sensor 14 is installed. Radar sensor 14 is operable to determine or measure various values or variables from the return radar signals reflected by object 20, including but not limited to measured rate of change of range 22(dRm), measured azimuth angle 24(Am), and measured elevation angle 26(Em) to object 20. As will be described in more detail below, the algorithm 18 requires at least three (3) instances of the object 20 for automatic alignment, and thus each of the at least three (3) instances of the object 20 must be present in the field of view 16.
FIG. 2 illustrates a non-limiting example of a traffic scenario 28 that the host vehicle 12 may encounter when the system 10 attempts to automatically align the radar sensor 14. As will also be described below, the automatic alignment process implemented by the algorithm 18 is greatly simplified when each of the at least three objects is not moving (i.e., can be characterized as stationary). By way of example and not limitation, the objects 20 used by the system 10 as reference points for automatic alignment may include a stop sign 20A, a speed limit sign 20B, and/or a stopped vehicle 20C. By way of further example, an approaching vehicle 20D would not be a preferred example of an object 20 for automatic alignment unless the speed of the approaching vehicle 20D were known to the system 10, for example because the approach speed is communicated to the system 10 through vehicle-to-vehicle (V2V) communication, the configuration and operation of which is recognized by those skilled in the art.
The system 10 also includes a speed sensor 30 for indicating or determining a measured speed 32(Sm) of the host vehicle 12. By way of example and not limitation, the speed sensor 30 may be the same sensor used to determine what speed is indicated on a speedometer display (not shown) of the host vehicle 12, which determination will be based on the rotational speed of the wheels of the host vehicle, as will be appreciated by those skilled in the art.
The system 10 also includes a controller 34 in communication with the radar sensor 14 and the speed sensor 30. The controller 34 may include a processor (not specifically shown) such as a microprocessor or other control circuitry such as analog and/or digital control circuitry, including an Application Specific Integrated Circuit (ASIC) for processing data, as should be apparent to those skilled in the art. The controller 34 may include a memory (not specifically shown) including non-volatile memory, such as electrically erasable programmable read-only memory (EEPROM), for storing one or more routines, thresholds, and captured data. The one or more routines may be executed by the processor to perform the steps for determining an error correction factor or offset to automatically align the radar sensor 14 based on signals received by the controller 34, as described herein.
As part of the automatic alignment process, the controller 34 is programmed with the algorithm 18, and thus the controller 34 is configured to determine a speed scale error 36 (Bs) of the measured speed 32, an azimuth misalignment 38 (Ba) of the radar sensor 14, and an elevation misalignment 40 (Be) of the radar sensor 14, collectively or simultaneously, based on the measured rate of change in range 22, the measured azimuth angle 24, and the measured elevation angle 26 to each of the at least three instances of the object 20. Advantageously, the algorithm 18 performs automatic alignment of the radar sensor 14 while the host vehicle 12 moves. Note that the algorithm 18 described herein is advantageous over alignment schemes that align the radar sensor 14 only when the host vehicle is stopped and/or only when an arrangement of targets pre-positioned at known locations is presented, because the algorithm 18 is able to correct for dynamic conditions of the host vehicle 12, such as wheel misalignment affecting the azimuth and/or varying cargo loads affecting the elevation angle of the radar sensor 14.
Controller 34 may be further programmed or further configured to determine an actual velocity 42(Sa) based on measured velocity 32 and velocity scale error 36, an actual azimuth angle 44(Aa) to object 20 based on azimuth misalignment 38 and measured azimuth angle 24, and an actual elevation angle 46(Ea) to object 20 based on elevation misalignment 40 and measured elevation angle 26. Details of these calculations will also be presented below.
The algorithm 18 may collect a sufficient number of detections of the object 20 at a single instant, or may collect detections at numerous times. At some times, suitable detections may not be found and these times may be ignored. By collecting data at many times, the destructive effects of errors not included in the algorithmic model are 'averaged'. The data from these multiple time instants may be batch processed or a recursive filter may be used. In either case, the equations shown below form the core of the implementation, and one skilled in the art can successfully implement a batch or recursive form of the method.
The radar sensor 14 described herein is assumed to be mounted on the host vehicle 12 without loss of generality. A three-dimensional (3D) orthogonal cartesian coordinate system is used with the origin of coordinates located at the radar sensor 14. The positive x-axis is directed horizontally forward, parallel to the dynamic longitudinal axis of the vehicle. The positive y-axis is directed to the right side of the vehicle in the horizontal lateral direction. The positive z-axis points downward and is orthogonal to the x-axis and the y-axis.
The actual azimuth angle 44 of the line-of-sight vector of the radar sensor 14 is defined as the angle through which a vertical plane containing the positive x-axis must be rotated about the z-axis (using the sign convention defined by the right-hand rule) so that the rotated vertical plane contains the detection or line-of-sight vector. The actual elevation angle 46 (Ea) of the line-of-sight vector of the radar sensor 14 is defined as the angle through which the vector lying at the intersection of the x-y plane and the azimuth-rotated vertical plane must be rotated upward to coincide with the detection or line-of-sight vector. Detections above the x-y plane have a positive elevation angle. This convention is consistent with the right-hand rule about the y-axis.
The singularity of this representation of azimuth and elevation angles (e.g., at a point on the z-axis) is not a problem in automotive applications where the radar has a somewhat limited vertical field of view.
The 'actual' (i.e., free of measurement error) and 'measured' (i.e., as indicated by measurements made by the radar sensor 14) variable names or symbols used herein are defined as follows:
dRa(i), dRm(i): the actual range rate of change 48 and the measured range rate of change 22 for the ith object detection;
Aa(i), Am(i): the actual azimuth angle 44 and the measured azimuth angle 24 for the ith detection;
Ea(i), Em(i): the actual elevation angle 46 and the measured elevation angle 26 for the ith detection;
Ua(i), Va(i), Wa(i): the actual longitudinal, lateral, and vertical components of the actual velocity vector of the radar sensor 14 relative to the ground at the time of observing the ith detection;
Um(i), Vm(i), Wm(i): the measured longitudinal, lateral, and vertical components of the measured velocity vector of the radar sensor 14 relative to the ground at the time of observing the ith detection;
Ut(i), Vt(i), Wt(i): the longitudinal, lateral, and vertical components of the velocity vector, relative to the ground, of the object (i.e., target) indicated by the ith detection;
Ys(i): the sideslip angle 50 of the host vehicle 12 at the time of observing the ith object detection, where the sideslip angle is the angle between the horizontal host-vehicle velocity vector (i.e., the vector [Ua Va 0]) and the x-axis;
Ba: the bias error of the measured azimuth angle, i.e., the azimuth misalignment 38;
Be: the bias error of the measured elevation angle, i.e., the elevation misalignment 40; and
Bs: the speed scale error 36 of the host vehicle speed.
The error model considered here can be summarized as:
Am(i) = Aa(i) + Ba (model equation 1 for azimuthal misalignment)
Em(i) = Ea(i) + Be (model equation 2 for elevation misalignment)
and
Sm(i) = (1 + Bs) * Sa(i) (model equation 3 for the speed scale error when observing the ith detection)
In the models of azimuthal and elevation misalignment shown above, the misalignment is expressed as a constant bias error of the measured angle. In the speed scale error model, the measured speed is modeled as the actual speed corrupted by the speed scale error 36. Since a value of Bs = 0 corresponds to zero measurement error, the (1 + Bs) form of the scale factor is convenient.
The actual range rate 48 depends on the velocity vectors, relative to the ground, of the radar sensor 14 and of the detected instance of the object 20, along with the actual azimuth angle 44 and the actual elevation angle 46 of the detected object relative to the radar sensor 14. For the ith detection, equation 4 defines the actual range rate of change 48 as
dRa(i) = (Ut(i) - Ua(i)) * cos[Aa(i)] * cos[Ea(i)] + (Vt(i) - Va(i)) * sin[Aa(i)] * cos[Ea(i)] - (Wt(i) - Wa(i)) * sin[Ea(i)] (equation 4)
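As a concrete illustration of equation 4 and the axis convention defined above, the following sketch (Python; the function and argument names are assumptions introduced here for illustration, not part of the described system) computes the actual range rate as the projection of the target-minus-sensor velocity onto the line-of-sight unit vector.

```python
import math

def actual_range_rate(Ut, Vt, Wt, Ua, Va, Wa, Aa, Ea):
    """Equation 4: actual range rate dRa for one detection.

    Angles Aa (azimuth) and Ea (elevation) are in radians, using the text's
    convention: x forward, y right, z down, with positive elevation above the
    x-y plane, so the line-of-sight unit vector is
    [cos(Aa)*cos(Ea), sin(Aa)*cos(Ea), -sin(Ea)].
    """
    rel_vx = Ut - Ua  # relative longitudinal velocity (target minus sensor)
    rel_vy = Vt - Va  # relative lateral velocity
    rel_vz = Wt - Wa  # relative vertical velocity (positive down)
    return (rel_vx * math.cos(Aa) * math.cos(Ea)
            + rel_vy * math.sin(Aa) * math.cos(Ea)
            - rel_vz * math.sin(Ea))

# Example: a stationary target (Ut = Vt = Wt = 0) seen by a sensor moving
# forward at 20 m/s closes at roughly -20*cos(Aa)*cos(Ea) m/s.
print(actual_range_rate(0.0, 0.0, 0.0, 20.0, 0.0, 0.0, math.radians(10), math.radians(2)))
```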
Because the object 20 (i.e., the object of interest) is intended or considered to be stationary, the values of Ut(i), Vt(i), and Wt(i) are assumed to be equal to zero for all values of (i). Applying the error model defined above yields the following equation, which can be implemented in batch or recursive form at multiple time instants (as indicated in the foregoing) using multiple radar detections. As noted above, relative motion between the radar sensor 14 and the stationary target (the object 20) is necessary, so the actual longitudinal velocity Ua(i) of the radar sensor is assumed to be non-zero. Combining equations 1-4 yields equation 5, from which the errors Bs, Ba, and Be can be determined:
dRm(i) + Um(i) * cos[Am(i)] * cos[Em(i)] + Vm(i) * sin[Am(i)] * cos[Em(i)] = [H(i, 1) H(i, 2) H(i, 3)] * trans[Bs Ba Be] (equation 5)
where
H(i, 1) = Um(i) * cos[Am(i)] * cos[Em(i)] + Vm(i) * sin[Am(i)] * cos[Em(i)] (equation 6)
H(i, 2) = -Um(i) * sin[Am(i)] * cos[Em(i)] + Vm(i) * cos[Am(i)] * cos[Em(i)] (equation 7)
H(i, 3) = -Um(i) * cos[Am(i)] * sin[Em(i)] - Vm(i) * sin[Am(i)] * sin[Em(i)] (equation 8)
and trans[ ] is the matrix transposition operation (equation 9).
In the derivation of equation 5, the measured longitudinal and lateral velocities Um(i) and Vm(i) of the radar sensor are assumed to be subject to the same speed scale error as Sm(i), i.e., Um(i) = (1 + Bs) * Ua(i) and Vm(i) = (1 + Bs) * Va(i), and the actual and measured vertical velocities Wa(i) and Wm(i) of the radar sensor are assumed to be zero.
Equation 10 is a simplified version of equation 5, suitable for operating conditions in which the host vehicle is traveling in a straight line, i.e., the actual lateral velocity of the sensor is approximately zero. Equation 10 is derived from equation 5 by setting Vm(i) equal to zero and dividing by Um(i), thus
dRm(i)/Um(i) + cos[Am(i)] * cos[Em(i)] = [F(i, 1) F(i, 2) F(i, 3)] * trans[Bs Ba Be] (equation 10)
where
F(i, 1) = cos[Am(i)] * cos[Em(i)] (equation 11)
F(i, 2) = -sin[Am(i)] * cos[Em(i)] (equation 12)
and
F(i, 3) = -cos[Am(i)] * sin[Em(i)] (equation 13)
To solve equation 5, the following signals or values are required: A) the radar measurements dRm(i), Am(i), and Em(i), which are provided by the radar sensor, and B) the vehicle speed components Um(i) and Vm(i), recalling that Wm(i) = 0. The host vehicle module may directly measure the host vehicle speed Sm, but may not be able to directly measure the sideslip angle 50 (Ys). The sideslip angle 50 may be calculated based on a combination of other variables, such as the measured speed 32, a yaw rate sensor 52, a steering angle sensor 54, and the like. Accordingly, the system 10 may include a yaw rate sensor 52 for determining a yaw rate 56 of the host vehicle 12, and the controller 34 may be further configured to determine a sideslip angle 50 (Ys) of the host vehicle 12 based on the yaw rate 56, and to further determine the speed scale error 36, the azimuth misalignment 38, and the elevation misalignment 40 based on the sideslip angle 50. There are many ways to do this, as will be appreciated by those skilled in the art. Regardless of which method is used, the algorithm receives the output values of Sm and Ys from the host vehicle module. The measured vehicle speed components required in equation 5 are calculated using Um = Sm * cos[Ys] and Vm = Sm * sin[Ys]. Ys can be neglected if the host vehicle is traveling straight, or nearly straight, on a flat road, in which case the velocity components reduce to Um(i) = Sm(i) and Vm(i) = 0. A non-limiting example of a diagram 58 of the algorithm is shown in FIG. 3.
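As a simple illustration of how the speed components used in equation 5 can be formed from the host-vehicle outputs, consider the following minimal sketch; the function name is a hypothetical choice, and it assumes the sideslip angle Ys has already been estimated by the host vehicle module (or set to zero for straight-line travel).

```python
import math

def measured_velocity_components(Sm, Ys=0.0):
    """Return (Um, Vm) from the measured speed Sm and sideslip angle Ys (radians).

    Um = Sm * cos(Ys) and Vm = Sm * sin(Ys); when the host vehicle travels
    straight on a flat road, Ys may be neglected so that Um = Sm and Vm = 0.
    """
    return Sm * math.cos(Ys), Sm * math.sin(Ys)
```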
The batch least-squares problem can be formed by vertically stacking a number of instances of equation 5 or equation 10 to form an array with one equation for each ith detection. Thus, for the ith detection, Um(i), Vm(i), Wm(i), dRm(i), Am(i), and Em(i) are collected, for a total of N detections, where N is greater than or equal to three (N ≥ 3). For equation 5, the least-squares problem resulting in a batch solution takes the form:
D1 = H * P (equation 14)
where D1 is the N-by-1 vector formed by stacking the left-hand side of equation 5 for each of the N detections, H is the N-by-3 matrix whose ith row is [H(i, 1) H(i, 2) H(i, 3)], and
P = trans[Bs Ba Be] (equation 16)
The estimate of P (EP) is then obtained using equation 17:
EP = inv[trans[H] * H] * trans[H] * D1 (equation 17)
where inv[ ] is the matrix inversion operation.
For equation 10, the least-squares problem resulting in a batch solution may take the form of equation 18, in which equation 10 is stacked vertically for each of the N detections. Similar to equation 14, the way to solve equation 18 as a least-squares problem is to rewrite it in the form:
D2 = F * P (equation 19)
where D2 is the N-by-1 vector formed by stacking the left-hand side of equation 10 for each of the N detections, F is the N-by-3 matrix whose ith row is [F(i, 1) F(i, 2) F(i, 3)], and
P = trans[Bs Ba Be] (equation 22)
The estimate of P (EP) is then given by equation 23:
EP = inv[trans[F] * F] * trans[F] * D2 (equation 23)
The method of solving equation 5 may include the following steps (a code sketch illustrating the batch solution appears after this list):
a) collect the radar measurements dRm(i), Am(i), and Em(i), for i = 1, ... N, where N ≥ 3;
b) collect the vehicle module outputs Sm(i) and Ys(i), for i = 1, ... N;
c) determine the vehicle speed components Um(i) and Vm(i) using Um = Sm * cos[Ys] and Vm = Sm * sin[Ys], which can be simplified to Um(i) = Sm(i) and Vm(i) = 0 if the host vehicle 12 is moving straight; and
d) determine the estimate of P (EP) using equations 14 to 17.
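The following is a minimal, non-authoritative sketch of the batch solution of steps a) through d), using equations 5 through 8 and the least-squares estimate of equation 17; the function name and the use of NumPy's lstsq routine are implementation assumptions rather than part of the described system.

```python
import numpy as np

def estimate_alignment_errors(dRm, Am, Em, Um, Vm):
    """Batch least-squares estimate of [Bs, Ba, Be] per equations 5-8 and 14-17.

    dRm, Am, Em, Um, Vm are length-N arrays (N >= 3) holding the measured range
    rate, azimuth (rad), and elevation (rad) of stationary detections, and the
    measured longitudinal/lateral sensor velocities at those instants.
    """
    dRm, Am, Em, Um, Vm = map(np.asarray, (dRm, Am, Em, Um, Vm))
    cosA, sinA = np.cos(Am), np.sin(Am)
    cosE, sinE = np.cos(Em), np.sin(Em)

    # Left-hand side of equation 5, stacked into D1 (equation 14).
    D1 = dRm + Um * cosA * cosE + Vm * sinA * cosE

    # Regression matrix H, one row [H(i,1) H(i,2) H(i,3)] per detection.
    H = np.column_stack((
        Um * cosA * cosE + Vm * sinA * cosE,   # sensitivity to Bs
        -Um * sinA * cosE + Vm * cosA * cosE,  # sensitivity to Ba
        -Um * cosA * sinE - Vm * sinA * sinE,  # sensitivity to Be
    ))

    # EP = inv(trans(H)*H) * trans(H) * D1 (equation 17), computed via lstsq
    # for numerical robustness; requires sufficient angular diversity (rank 3).
    EP, *_ = np.linalg.lstsq(H, D1, rcond=None)
    Bs, Ba, Be = EP
    return Bs, Ba, Be
```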
In another embodiment of the algorithm 18, the batch solution shown above is solved at each time instant using only the detections from that time instant. This requires a test at each time instant to ensure that the least-squares problem is sufficiently well-behaved for a solution to be attempted. A simple form of the test requires a minimum number of detections with sufficient diversity in the detected or measured azimuth and elevation angles. The single-instant estimates of Bs, Ba, and Be are then used to drive a low-pass filter to produce slowly time-varying estimates of these parameters. This implementation has the benefit of being relatively simple, but has the disadvantage that it discards valid detection data at time instants where there are not enough detections to solve the single-instant problem.
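A sketch of this single-instant variant follows, assuming a simple first-order (exponential) low-pass filter and a minimal well-posedness test; the class name, filter gain, and rank check are illustrative assumptions. H and D1 are constructed per equations 5 through 8 (e.g., as in the previous sketch) from the detections of one time instant.

```python
import numpy as np

class SingleInstantAligner:
    """Filter single-instant [Bs, Ba, Be] estimates into slowly varying values."""

    def __init__(self, alpha=0.02):
        self.alpha = alpha        # low-pass filter gain (assumed value)
        self.state = np.zeros(3)  # filtered [Bs, Ba, Be]

    def update(self, H, D1):
        # Require at least three detections and enough angular diversity
        # (H full rank) before attempting a single-instant solution.
        if H.shape[0] < 3 or np.linalg.matrix_rank(H) < 3:
            return self.state     # skip this instant; keep previous estimate
        ep, *_ = np.linalg.lstsq(H, D1, rcond=None)
        self.state += self.alpha * (ep - self.state)
        return self.state
```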
The algorithm 18 may also be implemented as a recursive least-squares or Kalman filter. Implementations with a window of interest or with declining (fading) memory over longer intervals are possible. One skilled in the art can readily formulate such a filter based on the principal equation (equation 5) or the simplified equation (equation 10) shown above.
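One possible recursive least-squares formulation of equation 5 with exponential forgetting (providing the 'declining memory' mentioned above) is sketched below; the forgetting factor, initial covariance, and names are assumptions, and a Kalman-filter formulation would be an equally valid alternative.

```python
import numpy as np

class RecursiveAligner:
    """Recursive least squares on equation 5 with exponential forgetting."""

    def __init__(self, forgetting=0.999):
        self.lam = forgetting
        self.P = np.eye(3) * 1e3   # large initial covariance (weak prior)
        self.theta = np.zeros(3)   # current estimate of [Bs, Ba, Be]

    def update(self, h_row, d):
        """h_row: [H(i,1), H(i,2), H(i,3)] for one detection; d: LHS of equation 5."""
        h = np.asarray(h_row, dtype=float)
        Ph = self.P @ h
        gain = Ph / (self.lam + h @ Ph)         # RLS gain vector
        self.theta += gain * (d - h @ self.theta)
        self.P = (self.P - np.outer(gain, Ph)) / self.lam
        return self.theta
```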
The algorithm 18 uses raw or measured radar detections of the measured range rate 22, the measured azimuth angle 24, and the measured elevation angle 26 to targets (i.e., objects 20) that are considered stationary. The determination that a target or object is stationary relies on the speed signal from the host vehicle, which is assumed to be corrupted by the speed scale error 36, and on the measured angles, which are assumed to have bias errors due to misalignment. Fortunately, the stationary/moving determination is relatively insensitive to the small alignment errors assumed here. However, the same is not true of the speed scale error: it has been observed that a stationary target can be classified by the tracker as a moving target when the speed of the host vehicle is relatively high (e.g., greater than 100 kph). It is therefore preferable to perform self-alignment at speeds that are not too great in magnitude (e.g., less than 60 kph). At lower speeds, the magnitude of the vehicle speed scale error is small enough for stationary targets to be correctly classified as stationary; alternatively, the stationary/moving threshold can be increased with increasing vehicle speed in such a way as to account for the maximum expected level of speed scale error. The auto-alignment algorithm is most accurate when operating under conditions where the lateral and vertical components of the velocity of the radar sensor relative to the ground are nearly zero, so the ideal condition is a straight-line trajectory on smooth, flat asphalt.
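As an illustration of the speed-dependent stationary/moving threshold described above, the following minimal sketch classifies one detection for straight-line travel (Vm ≈ 0); the base threshold and the assumed maximum scale error are illustrative values only.

```python
import math

def is_stationary(dRm, Am, Em, Um, base_threshold=0.5, max_scale_error=0.05):
    """Classify one detection as stationary (straight-line travel, Vm ~ 0).

    The measured range rate is compared with the range rate predicted for a
    stationary target; the tolerance grows with speed so that the maximum
    expected speed scale error does not cause stationary targets to be
    misclassified as moving.
    """
    predicted = -Um * math.cos(Am) * math.cos(Em)  # expected dRm for a stationary target
    threshold = base_threshold + max_scale_error * abs(Um)
    return abs(dRm - predicted) < threshold
```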
Estimation problems such as the one described in this document depend on observability conditions for success. A parameter is observable if and only if sufficient information is present in the observed quantities to uniquely identify the parameter. In the batch formulation of the present algorithm, observability is related to the rank of the N-by-3 matrix H (more precisely, of its noise-free version), which needs to be 3. It has been found that the observability condition is met if there are at least three detections with sufficient azimuth and elevation angular diversity. The auto-alignment algorithm described herein therefore presumes a sufficiently rich and diverse detection set so that the parameters are observable.
The implementation of the algorithm described herein requires estimates of the three Cartesian components Um, Vm, and Wm of the velocity of the radar sensor relative to the ground. Although the measured vehicle speed signal is assumed to be available (possibly corrupted by the speed scale error), the measurement/estimation of these three quantities requires some model of the vehicle dynamics, as well as other sensors such as a yaw rate sensor, a pitch rate sensor, a steering wheel sensor, and the like. Well-known algorithms can be used for this purpose.
The algorithm 18 described herein is most useful when it provides a confidence indication in addition to the misalignment estimates. The confidence indicator signals to users of the misalignment estimates whether those estimates are ready to be used and trusted. The algorithm will typically start by providing estimates of the desired quantities that are slightly in error, but the error in the estimates should decrease rapidly to a steady-state level. Once this steady-state level is reached, the algorithm should signal a high confidence in the estimates. If a problem occurs and the estimates do not appear to converge to useful values, a low confidence should be signaled. A low confidence should also be signaled during the initial transient period prior to successful convergence.
Two schemes for identifying conditions of convergence or high confidence are now described. In one approach, short-term and long-term averages of the estimated deviation values are calculated; if these match, successful convergence is indicated. In another approach, the range rate residual error (i.e., the difference between the predicted range rate and the measured range rate for the stationary objects) is monitored. Ideally, the short-term average of these range rate residual errors converges to a minimum value, and successful convergence is indicated when that value is reached.
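A minimal sketch of the first confidence scheme, comparing short-term and long-term averages of the estimated deviations, is given below; the window lengths and the tolerance are assumed values, and the class and method names are illustrative.

```python
from collections import deque
import numpy as np

class ConvergenceMonitor:
    """Signal high confidence when short- and long-term averages of the
    estimated deviations [Bs, Ba, Be] agree within a tolerance."""

    def __init__(self, short_len=50, long_len=500, tol=0.002):
        self.short = deque(maxlen=short_len)
        self.long = deque(maxlen=long_len)
        self.tol = tol

    def update(self, estimate):
        est = np.asarray(estimate, dtype=float)
        self.short.append(est)
        self.long.append(est)
        if len(self.long) < self.long.maxlen:
            return False  # still in the initial transient: low confidence
        diff = np.mean(self.short, axis=0) - np.mean(self.long, axis=0)
        return bool(np.all(np.abs(diff) < self.tol))
```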
The algorithm 18 has been tested using simulated data (where the actual error parameter values are known) and using real sensor data (where the actual error parameters are not known).
FIG. 4 shows the results of 60 simulation runs, each representing a different level of actual simulated azimuth deviation between -3.0 and +3.0 degrees. In each simulation run, the simulated speed scale error was 5.0%, the simulated elevation angle deviation was 2.0 degrees, and the simulated range rate had a bias of -0.1 meters/second. The simulated azimuth, elevation, and range rate measurements are additionally corrupted by zero-mean Gaussian noise with standard deviations of 1.0 degree (azimuth), 2.0 degrees (elevation), and 0.1 meters/second (range rate). For each simulation run at a particular azimuth deviation level, enough data points are simulated to allow the algorithm estimate to converge. The actual or true azimuth deviation varies between -3.0 and +3.0 degrees, and the estimated azimuth deviation is generated by the algorithm 18. In this plot, the horizontal axis labeled "simulation index" represents the different simulation runs, each with a particular value of simulated azimuth deviation as given by the corresponding value of actual azimuth deviation.
Figs. 5-7 show estimates of the azimuth angle deviation, the elevation angle deviation, and the speed scale error, respectively, obtained from an exemplary radar sensor for a single data file. In these plots, the horizontal axis labeled "simulation index" represents time (expressed as the number of radar scans). Since this is real sensor data, the actual or true values of the error parameters are not known. The plots show visually reasonable convergence to values within the expected ranges.
Fig. 8 shows that the measurements are significantly improved when compensated by the obtained estimates, compared to the initial assumption of zero for all of the error parameters being estimated. Specifically, after compensating for the error parameters, the residual error (the difference between the measured range rate and the predicted range rate) is small.
Accordingly, a radar system (the system 10), a controller 34 for the system 10, and a method of operating the system 10 are provided that automatically align the radar sensor 14 on the host vehicle 12 by solving simultaneously (i.e., not individually or sequentially), while the host vehicle 12 is moving, for the errors affecting the measured range rate of change (dRm), the measured azimuth angle (Am), and the measured elevation angle (Em). Since all error sources are considered simultaneously, estimation schemes in which the quantities of interest are estimated jointly are generally preferred over the alternatives. Good estimates of the error parameters estimated by the algorithm 18 are crucial for tracking and fusion systems using radar sensors, as they allow the errors present in important quantities (vehicle speed, azimuth angle, and elevation angle) to be compensated.
While the present invention has been described in accordance with its preferred embodiments, it is not intended to be so limited, but rather only to the extent set forth in the following claims.