Method, device, equipment and storage medium for evaluating camera calibration position
1. A method for evaluating a camera calibration position of a target object, comprising:
acquiring a position of a reference region under an object coordinate system, wherein the object coordinate system is established based on a target object, the target object comprises at least a first camera and a second camera, and the first camera and the second camera are located at different positions of the target object;
acquiring a first image and a second image with the first camera and the second camera, respectively, wherein the first image and the second image are captured for the reference region, respectively;
determining a first image region corresponding to the reference region in the first image and a second image region corresponding to the reference region in the second image based on calibration positions of the first camera and the second camera in the object coordinate system and the position of the reference region, respectively; and
evaluating the calibration positions of the first camera and the second camera based on a difference between the first image region and the second image region.
2. The method of claim 1, wherein evaluating the calibration positions of the first camera and the second camera comprises:
determining a first feature value of the first image region and a second feature value of the second image region, respectively, based on pixel values of respective pixel points in the first image region and the second image region; and
evaluating the calibration positions of the first camera and the second camera based on a difference between the first feature value and the second feature value.
3. The method of claim 2,
wherein determining the first feature value comprises:
determining a plurality of first indicators for a first plurality of pixel values based on the first plurality of pixel values of a plurality of pixel points in the first image region; and
determining the first feature value based on the plurality of first indicators and weights corresponding to the plurality of first indicators; and
wherein determining the second feature value comprises:
determining a plurality of second indicators for a second plurality of pixel values based on the second plurality of pixel values of a plurality of pixel points in the second image region; and
determining the second feature value based on the plurality of second indicators and weights corresponding to the plurality of second indicators.
4. The method of claim 2, wherein acquiring the first image and the second image comprises:
photographing the reference region with the first camera and the second camera to determine a first original image and a second original image; and
transforming the first original image and the second original image to determine top-view images for the reference region in the first original image and the second original image, respectively.
5. The method of claim 1,
wherein determining the first image region comprises:
determining a first spatial transformation matrix of the first camera relative to the object coordinate system; and
determining the first image region based on the first spatial transformation matrix, a first parameter of the first camera, and the position of the reference region in the object coordinate system; and
wherein determining the second image region comprises:
determining a second spatial transformation matrix of the second camera relative to the object coordinate system; and
determining the second image region based on the second spatial transformation matrix, a second parameter of the second camera, and the position of the reference region in the object coordinate system.
6. The method of claim 2, wherein evaluating the calibration positions of the first camera and the second camera comprises:
in response to the difference being below a first threshold, determining a first evaluation score; and
in response to the difference being above the first threshold, determining a second evaluation score, wherein the second evaluation score is lower than the first evaluation score.
7. The method of claim 6, further comprising:
in response to the difference being above the first threshold and below a second threshold, adjusting the calibration positions of the first camera and the second camera in the object coordinate system;
obtaining the adjusted first and second image regions based on the adjusted calibration positions of the first and second cameras and the position of the reference region in the object coordinate system; and
updating the evaluation of the calibration positions of the first camera and the second camera based on a difference between the adjusted first image region and the adjusted second image region.
8. The method of claim 5, wherein
the first parameter comprises one or more of a focal length, a distortion, and a lens thickness of the first camera; and
the second parameter comprises one or more of a focal length, a distortion, and a lens thickness of the second camera.
9. The method of claim 3, wherein
the plurality of first indicators for the first plurality of pixel values comprise one or more of a mean, a maximum, a minimum, and a mean square error of the first plurality of pixel values; and
the plurality of second indicators for the second plurality of pixel values comprise one or more of a mean, a maximum, a minimum, and a mean square error of the second plurality of pixel values.
10. An apparatus for evaluating a camera calibration position of a target object, comprising:
a position acquisition module configured to acquire a position of a reference region in an object coordinate system, wherein the object coordinate system is established based on a target object, the target object comprises at least a first camera and a second camera, and the first camera and the second camera are at different positions of the target object;
an image acquisition module configured to acquire a first image and a second image with the first camera and the second camera, respectively, wherein the first image and the second image are captured for the reference region, respectively;
an image region determination module configured to determine a first image region in the first image corresponding to the reference region and a second image region in the second image corresponding to the reference region based on calibration positions of the first camera and the second camera in the object coordinate system and the position of the reference region, respectively; and
a calibration evaluation module configured to evaluate the calibration positions of the first camera and the second camera based on a difference between the first image region and the second image region.
11. The apparatus of claim 10, wherein the calibration evaluation module is further configured to:
determining a first feature value of the first image region and a second feature value of the second image region, respectively, based on pixel values of respective pixel points in the first image region and the second image region; and
evaluating the calibration positions of the first camera and the second camera based on a difference between the first feature value and the second feature value.
12. The apparatus of claim 11,
wherein determining the first feature value comprises:
determining a plurality of first indicators for a first plurality of pixel values based on the first plurality of pixel values of a plurality of pixel points in the first image region; and
determining the first feature value based on the plurality of first indicators and weights corresponding to the plurality of first indicators; and
wherein determining the second feature value comprises:
determining a plurality of second indicators for a second plurality of pixel values based on the second plurality of pixel values of a plurality of pixel points in the second image region; and
determining the second feature value based on the plurality of second indicators and weights corresponding to the plurality of second indicators.
13. The apparatus of claim 11, wherein acquiring the first image and the second image comprises:
photographing the reference region with the first camera and the second camera to determine a first original image and a second original image; and
transforming the first original image and the second original image to determine top-view images for the reference region in the first original image and the second original image, respectively.
14. The apparatus of claim 10,
wherein determining the first image region comprises:
determining a first spatial transformation matrix of the first camera relative to the object coordinate system; and
determining the first image region based on the first spatial transformation matrix, a first parameter of the first camera, and the position of the reference region in the object coordinate system; and
wherein determining the second image region comprises:
determining a second spatial transformation matrix of the second camera relative to the object coordinate system; and
determining the second image region based on the second spatial transformation matrix, a second parameter of the second camera, and the position of the reference region in the object coordinate system.
15. The apparatus of claim 11, wherein evaluating the calibration positions of the first camera and the second camera comprises:
in response to the difference being below a first threshold, determining a first evaluation score; and
in response to the difference being above the first threshold, determining a second evaluation score, wherein the second evaluation score is lower than the first evaluation score.
16. The apparatus of claim 15, further comprising:
an adjustment module configured to adjust the calibration positions of the first camera and the second camera in the object coordinate system in response to the difference being above the first threshold and below a second threshold;
an image region adjustment module configured to obtain the adjusted first and second image regions based on the adjusted calibration positions of the first and second cameras and the position of the reference region in the object coordinate system; and
an evaluation update module configured to update the evaluation of the calibration positions of the first camera and the second camera based on a difference between the adjusted first image region and the adjusted second image region.
17. The apparatus of claim 14, wherein
the first parameter comprises one or more of a focal length, a distortion, and a lens thickness of the first camera; and
the second parameter comprises one or more of a focal length, a distortion, and a lens thickness of the second camera.
18. The apparatus of claim 12, wherein
the plurality of first indicators for the first plurality of pixel values comprise one or more of a mean, a maximum, a minimum, and a mean square error of the first plurality of pixel values; and
the plurality of second indicators for the second plurality of pixel values comprise one or more of a mean, a maximum, a minimum, and a mean square error of the second plurality of pixel values.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-9.
21. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-9.
Background
In the field of intelligent transportation, several cameras are typically installed on a vehicle to assist its operation. For example, a lane keeping assist system installed in a vehicle is equipped with a plurality of cameras and issues an alarm to remind the driver to adjust the driving route once the images captured by the cameras show that the vehicle is not following the lane markings. In addition, in the field of unmanned driving, an important source of the data processed by the vehicle is the information captured by the cameras. Therefore, the position of a camera relative to the vehicle is crucial, and effectively evaluating the accuracy of the calibration position of the camera relative to the vehicle is a goal that designers desire to achieve.
Disclosure of Invention
The present disclosure provides a method, an apparatus, an electronic device, and a computer-readable storage medium for evaluating a camera calibration position of a target object.
According to a first aspect of the present disclosure, a method for evaluating a camera calibration position of a target object is provided. The method comprises the following steps: acquiring the position of a reference region under an object coordinate system, wherein the object coordinate system is established based on a target object, the target object comprises at least a first camera and a second camera, and the first camera and the second camera are positioned at different positions of the target object; acquiring a first image and a second image with the first camera and the second camera, respectively, wherein the first image and the second image are captured for the reference region, respectively; determining a first image region corresponding to the reference region in the first image and a second image region corresponding to the reference region in the second image based on the calibration positions of the first camera and the second camera in the object coordinate system and the position of the reference region, respectively; and evaluating the calibration positions of the first camera and the second camera based on a difference between the first image region and the second image region.
According to a second aspect of the present disclosure, an apparatus for evaluating a camera calibration position of a target object is provided. The apparatus includes: a position acquisition module configured to acquire a position of a reference region under an object coordinate system, wherein the object coordinate system is established based on a target object, the target object comprises at least a first camera and a second camera, and the first camera and the second camera are at different positions of the target object; an image acquisition module configured to acquire a first image and a second image with the first camera and the second camera, respectively, wherein the first image and the second image are captured for the reference region, respectively; an image region determination module configured to determine a first image region in the first image corresponding to the reference region and a second image region in the second image corresponding to the reference region based on the calibration positions of the first camera and the second camera in the object coordinate system and the position of the reference region, respectively; and a calibration evaluation module configured to evaluate the calibration positions of the first camera and the second camera based on a difference between the first image region and the second image region.
According to a third aspect of the present disclosure, there is provided an electronic device comprising one or more processors; and storage means for storing the one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the method according to the first aspect of the disclosure.
According to a fourth aspect of the present disclosure, there is provided a computer readable medium having stored thereon a computer program which, when executed by a processor, implements a method according to the first aspect of the present disclosure.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method according to the first aspect of the present disclosure.
According to embodiments of the present disclosure, when the camera calibration positions of the target object are evaluated, the same reference region is captured by different cameras, so that the accuracy of the camera calibration positions can be quantified.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 illustrates a schematic diagram of an example environment in which embodiments of the present disclosure can be implemented;
FIG. 2 shows a flow chart of a method for assessing a camera calibration position of a target object according to an example embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of images of the same reference area taken by different cameras according to an exemplary embodiment of the present disclosure;
FIG. 4 shows a block diagram of an apparatus for evaluating a camera calibration position of a target object according to an example embodiment of the present disclosure; and
FIG. 5 illustrates a block diagram of a computing device capable of implementing various embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
As described above, in the field of intelligent transportation, whether the position of a camera relative to the vehicle is accurate affects the experience of intelligent transportation users. If the position of the camera relative to the vehicle is inaccurate, using such inaccurate data in the operational calculations of the intelligent transportation system may lead to inaccurate results. In particular, in the field of unmanned driving, relying on these inaccurate results may cause serious safety accidents.
There are a number of methods in the prior art for calibrating the position of a camera relative to a vehicle. However, after calibration, the calibrated data may become inaccurate due to vibration and the like during the driving of the vehicle. In existing solutions, the user of the vehicle is required to periodically go to the vehicle's after-sales department to have the calibration position of the camera evaluated. This adds extra time cost for the user and greatly degrades the user experience.
In view of the above problems, embodiments of the present disclosure provide a solution for evaluating a camera calibration position of a target object. Exemplary embodiments of the present disclosure will be described in detail below with reference to fig. 1 to 5.
Fig. 1 illustrates a schematic diagram of an example environment 100 in which various embodiments of the present disclosure can be implemented. As shown in fig. 1, in environment 100, a vehicle 110 is on a site. For example, the vehicle 110 may be traveling on a road in a traffic network or parked in a parking lot. In the environment of FIG. 1, the calibration position of a camera on vehicle 110 may be evaluated.
In the context of the present disclosure, the term "vehicle" may take various forms. The vehicle 110 may be an electric vehicle, a fuel-powered vehicle, or a hybrid vehicle. In some embodiments, the vehicle 110 may be a car, truck, trailer, motorcycle, bus, agricultural vehicle, recreational vehicle, or construction vehicle, among others. In some embodiments, the vehicle 110 may take the form of, for example, a ship, aircraft, train, or the like. In some embodiments, the vehicle 110 may be a home vehicle, a passenger vehicle of an operational nature, or a freight vehicle of an operational nature, among others. In some embodiments, vehicle 110 may be a vehicle equipped with certain autonomous driving capabilities, where autonomous driving capabilities may include, but are not limited to, assisted driving capabilities, semi-autonomous driving capabilities, highly autonomous driving capabilities, or fully autonomous driving capabilities.
As shown in fig. 1, vehicle 110 is provided with a plurality of cameras, such as a left camera 132 located on the left side of vehicle 110, a front camera 134 located on the front side of vehicle 110, a right camera 136 located on the right side of vehicle 110, and a rear camera 138 located on the rear side of vehicle 110. It should be understood that the number and location of the cameras is merely illustrative and not limiting. More or fewer cameras may be disposed on vehicle 110, and some of these cameras may be located on the same side of vehicle 110. The present disclosure is not limited in this respect.
In some embodiments, the camera may be a fisheye camera having a viewing angle close to or equal to 180 °. Of course, the camera may be of other types as long as the camera can capture a clearly usable image.
Fig. 2 illustrates a flowchart of a method 200 for evaluating a camera calibration position according to some embodiments of the present disclosure. The method 200 may be performed by various types of computing devices in the vehicle 110.
At block 202, the position of the reference region 120 in an object coordinate system O-xyz established based on the target object is acquired.
In some embodiments, the target object may be the vehicle 110 shown in FIG. 1. The object coordinate system O-xyz may be established at the center of the rear axle of the vehicle 110. That is, the origin O of the object coordinate system O-xyz may be set at the center of the rear axle of the vehicle 110. However, it should be understood that such an arrangement is merely illustrative. The object coordinate system O-xyz may be set at any position of the vehicle 110 according to different usage scenarios. Embodiments of the present disclosure are not limited in this regard. Although the object coordinate system O-xyz is shown in the top view of FIG. 1 in two dimensions, it should be understood that the object coordinate system O-xyz is a three-dimensional coordinate system established on the vehicle 110. The z-axis of the three-dimensional coordinate system may be a direction parallel to the direction of gravity, and the x-axis and the y-axis of the three-dimensional coordinate system may be parallel to the width direction and the length direction of the vehicle. It should be understood that this is also merely illustrative, and the specific manner of establishing object coordinate system O-xyz is not limited by embodiments of the present disclosure.
In some embodiments, the reference region 120 may be a virtual region selected in the object coordinate system O-xyz, such as a cube. For example, in the embodiment of FIG. 1, reference region 120 is a cube with sides parallel to the axis of object coordinate system O-xyz. It should be understood that this is merely for computational simplicity. In other embodiments, the reference area 120 may be in other forms, such as a cylinder, a sphere, and so forth.
In other embodiments, the reference area 120 may also be an actual reference object in the same space as the vehicle 110, such as a road block, a road sign, a street light, a flag pole, a person, another vehicle, etc. The specific form of the reference area 120 is not limited by the embodiments of the present disclosure.
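As a purely illustrative sketch (the disclosure does not prescribe any particular data representation), a cube-shaped virtual reference region such as the one described above could be represented by its eight corner points in the object coordinate system O-xyz; the center coordinates and side length below are hypothetical values chosen for illustration.

```python
import numpy as np

def cube_corners(center, side):
    """Return the 8 corner points (in object coordinates O-xyz) of an
    axis-aligned cube with the given center and side length."""
    half = side / 2.0
    offsets = np.array([[dx, dy, dz]
                        for dx in (-half, half)
                        for dy in (-half, half)
                        for dz in (-half, half)])
    return np.asarray(center, dtype=float) + offsets  # shape (8, 3)

# Hypothetical example: a 0.5 m cube placed at the front left of the vehicle.
reference_region_corners = cube_corners(center=(-1.0, 3.0, 0.25), side=0.5)
```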
In some embodiments, as shown in fig. 1, the first and second cameras may be cameras located on adjacent sides of the vehicle 110, such as a left camera 132 located on the left side and a front camera 134 located on the front side of the vehicle 110, respectively. Alternatively, the first camera may be, for example, a right camera 136 located on the right side of the vehicle 110, and the second camera may be, for example, a rear camera 138 located on the rear side of the vehicle 110. It should be understood, however, that this is by way of illustration only and not by way of limitation. In other embodiments, the first camera and the second camera may also be cameras located on the same side of the vehicle 110. For example, if multiple cameras are mounted on the left side of the vehicle 110, the first and second cameras may also be cameras mounted at different locations on the left side of the vehicle 110. As another example, the first camera and the second camera may be mounted on different non-adjacent sides of the vehicle 110, for example, the first camera may be the left camera 132 of the vehicle and the second camera may be the right camera 136 of the vehicle, as long as such first camera and second camera can simultaneously capture the reference area 120 of the object coordinate system O-xyz.
For illustrative purposes only, embodiments of the present disclosure will continue to be described below with the left camera 132 and the front camera 134 as examples.
Referring back to fig. 2, at block 204, a first image 332 and a second image 334 are acquired for the reference region 120 with the left camera 132 and the front camera 134, respectively. Referring to fig. 1, the reference region 120 may be disposed at the front left of the vehicle 110 so that the left camera 132 and the front camera 134 can simultaneously photograph the reference region 120.
Fig. 3 shows a schematic diagram of a first image 332 and a second image 334 of the reference area 120 taken by the left camera 132 and the front camera 134 according to an exemplary embodiment of the present disclosure.
Referring back to FIG. 2, at block 206, a first image region 322 corresponding to the reference region 120 in the first image 332 and a second image region 324 corresponding to the reference region 120 in the second image 334 are determined based on the calibration positions of the left camera 132 and the front camera 134 in the object coordinate system O-xyz, and the position of the reference region 120 in the object coordinate system O-xyz, respectively.
The position of the left camera 132 relative to the object coordinate system O-xyz may be calibrated in various ways. That is, the transformation matrix from the left camera coordinate system established on the left camera 132 to the object coordinate system O-xyz is known. Similarly, the position of the front camera 134 relative to the object coordinate system O-xyz may be calibrated in various ways, and thus the transformation matrix from the front camera coordinate system established on the front camera 134 to the object coordinate system O-xyz is also known. Thus, referring to FIG. 1, from the calibration position of the left camera 132 and the acquired position of the reference region 120 relative to the object coordinate system O-xyz, the first image region 322 of the reference region 120 in the first image 332 captured by the left camera 132 can be determined. Similarly, based on the calibration position of the front camera 134 and the acquired position of the reference region 120 relative to the object coordinate system O-xyz, the second image region 324 of the reference region 120 in the second image 334 captured by the front camera 134 can be determined.
As shown in fig. 3, in some embodiments, the first image region 322 may be a rectangular region in the first image 332. Similarly, the second image region 324 may be a rectangular region in the second image 334. Although the first image area 322 and the second image area 324 are shown as having the same size, in other embodiments, the two may have different sizes.
Referring back to FIG. 2, at block 208, the calibration positions of the left camera 132 and the front camera 134 are evaluated based on the difference between the first image region 322 and the second image region 324.
In some embodiments, the pixel values of the respective pixel points in the first image region 322 and the second image region 324 may be obtained separately, and a corresponding evaluation score may be given based on the difference between the pixel values obtained from the two regions.
Various methods can be used to measure the difference between the first image region 322 and the second image region 324. In some embodiments, certain indicators may be computed from the pixel values obtained for each of the two regions, and these indicators may be combined to characterize the pixels of the respective region. For example, referring to fig. 3, if the first image region 322 corresponding to the reference region 120 in the first image 332 is a 4 × 4 region, the pixel values of its 16 pixel points are acquired in turn. In some embodiments, a pixel value may be the RGB value of a pixel point. After these pixel values have been acquired, statistical values of the pixel values may be computed as indicators of the pixel values acquired in the first image region 322, for example the maximum value Vmax, the minimum value Vmin, the mean value Vmean, the mean square error Vmse, and so on.
In some embodiments, these indicators may each be given a corresponding weight, and their weighted sum may be used as the feature value C1 of the first image region 322. For example, the feature value C1 may be calculated using formula (1):
C1 = w1·Vmax + w2·Vmin + w3·Vmean + w4·Vmse    (1)
where w1, w2, w3 and w4 are the weights corresponding to the maximum value Vmax, the minimum value Vmin, the mean value Vmean and the mean square error Vmse, respectively. It should be understood that although only the maximum value Vmax, the minimum value Vmin, the mean value Vmean and the mean square error Vmse are considered when calculating the feature value C1 of the first image region 322, more or fewer indicators may be included depending on the implementation scenario. The specific number and meaning of the indicators are not limited by the embodiments of the present disclosure. In addition, the weight wi corresponding to each indicator may also be adjusted according to the particular usage scenario.
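As an illustrative sketch only, the feature value of formula (1) could be computed from the pixel values of an image region as follows; the default weights are placeholders rather than values from the disclosure, and the mean square error Vmse is interpreted here as the mean squared deviation of the pixel values from their mean.

```python
import numpy as np

def feature_value(region_pixels, weights=(0.25, 0.25, 0.25, 0.25)):
    """Compute a feature value C = w1*Vmax + w2*Vmin + w3*Vmean + w4*Vmse
    from the pixel values of an image region, following formula (1).

    region_pixels: array of pixel values of the region (e.g. one channel).
    weights:       (w1, w2, w3, w4); illustrative defaults, tuned per scenario.
    """
    v = np.asarray(region_pixels, dtype=np.float64).ravel()
    v_max, v_min, v_mean = v.max(), v.min(), v.mean()
    v_mse = np.mean((v - v_mean) ** 2)  # assumption: MSE about the mean
    w1, w2, w3, w4 = weights
    return w1 * v_max + w2 * v_min + w3 * v_mean + w4 * v_mse
```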
Similar to the first image region 322, a feature value C2 associated with the second image region 324 may also be obtained. By calculating the difference between the feature value C1 of the first image region 322 and the feature value C2 of the second image region 324, the degree of difference between the images captured by the left camera 132 and the front camera 134, which are installed at different positions of the vehicle 110, can be evaluated to determine whether the calibration positions of the cameras are accurate.
For example, if the difference between the first feature value C1 and the second feature value C2 is below a first threshold, indicating that the calibration positions of the left camera 132 and the front camera 134 are relatively accurate, a higher evaluation score may be assigned. If the difference between the first feature value C1 and the second feature value C2 is above the first threshold, indicating that the calibration positions of the left camera 132 and the front camera 134 are relatively inaccurate, a lower evaluation score may be assigned. In some embodiments, if the difference between the first feature value C1 and the second feature value C2 is too large, a warning may be provided to a user or maintainer of the vehicle 110 to ensure safe driving of the vehicle 110.
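The threshold-based scoring and warning logic described above could be sketched as follows; the threshold values, scores, and return format are illustrative assumptions, not values from the disclosure.

```python
def evaluate_calibration(c1, c2, first_threshold=5.0, second_threshold=20.0):
    """Assign an evaluation score based on the difference between the two
    feature values C1 and C2; thresholds and scores are placeholders."""
    diff = abs(c1 - c2)
    if diff < first_threshold:
        return {"score": 100, "warn": False}  # calibration positions accurate
    if diff < second_threshold:
        return {"score": 60, "warn": False}   # inaccurate; candidate for fine tuning
    return {"score": 20, "warn": True}        # too large; warn user or maintainer
```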
According to the embodiment of the disclosure, whether the calibration position of the camera is accurate or not can be effectively quantified, so that the precision of the calibration position can be intuitively evaluated.
In some embodiments, the first image 332 and the second image 334 are acquired by pre-processing the captured images to convert them into top-view images. Specifically, the reference region 120 is photographed by the left camera 132 and the front camera 134, thereby acquiring a first original image and a second original image, respectively. The first and second original images are then transformed to determine the top-view image for the reference region 120 in the first and second original images, respectively. The pre-processing of the images may be accomplished using various methods known now or developed in the future, and the particular manner of pre-processing is not limited by the embodiments of the present disclosure. According to such an implementation, the distortion caused by the cameras being located at different positions of the vehicle 110 can be eliminated from the images captured of the same reference region 120, thereby further improving the usability of the captured images.
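As a sketch only, such a top-view (bird's-eye) transformation might be performed with a perspective warp, for example using OpenCV; the four correspondence points are assumed to come from the camera calibration and are not specified by the disclosure.

```python
import cv2
import numpy as np

def to_top_view(raw_image, src_points, dst_points, out_size):
    """Warp a raw camera image so that the reference region appears as if
    viewed from directly above.

    src_points: four pixel coordinates of the reference region in the raw image.
    dst_points: four corresponding coordinates in the desired top-view image.
    out_size:   (width, height) of the output top-view image.
    """
    matrix = cv2.getPerspectiveTransform(np.float32(src_points),
                                         np.float32(dst_points))
    return cv2.warpPerspective(raw_image, matrix, out_size)
```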
In some embodiments, determining the first image region 322 in the first image 332 that corresponds to the reference region 120 includes: a spatial transformation matrix T1 of the left camera coordinate system of the left camera 132 with respect to the object coordinate system O-xyz is determined and the first image region 322 is calculated from the spatial transformation matrix T1, the parameters of the left camera 132 and the position of the reference region 120 in the object coordinate system O-xyz. In some embodiments, determining the second image region 324 in the second image 334 that corresponds to the reference region 120 includes: a spatial transformation matrix T2 of the front camera coordinate system of the front camera 134 with respect to the object coordinate system O-xyz is determined and the second image region 324 is calculated from the spatial transformation matrix T2, the parameters of the front camera 134 and the position of the reference region 120 in the object coordinate system O-xyz.
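A minimal sketch of this projection step is shown below, assuming a pinhole camera model and OpenCV's cv2.projectPoints; the rotation/translation inputs are assumed to be derived from the camera's calibrated position (i.e. from the spatial transformation matrix), and the bounding-rectangle post-processing is an illustrative choice.

```python
import cv2
import numpy as np

def project_reference_region(corners_obj, rvec, tvec, camera_matrix, dist_coeffs):
    """Project the 3-D corner points of the reference region (given in the
    object coordinate system O-xyz) into one camera's image and return the
    bounding rectangle (x_min, y_min, x_max, y_max) of the projected points.

    rvec, tvec:    rotation and translation expressing the object-to-camera
                   transform, derived from the camera's calibration position.
    camera_matrix: 3x3 intrinsic matrix of the camera.
    dist_coeffs:   distortion coefficients of the camera.
    """
    pts, _ = cv2.projectPoints(np.float32(corners_obj), rvec, tvec,
                               camera_matrix, dist_coeffs)
    pts = pts.reshape(-1, 2)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    return int(x_min), int(y_min), int(x_max), int(y_max)
```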
In some embodiments, the parameters of the left camera 132 and/or the front camera 134 include one or more of the focal length, the distortion, and the thickness of the lens. It should be understood that the camera parameters listed herein are merely illustrative and not limiting.
In a further embodiment, if the determined difference between the feature value C1 of the first image region 322 and the feature value C2 of the second image region 324 indicates that the calibration positions of the left camera 132 and the front camera 134 are not sufficiently accurate, for example if the difference is above the first threshold but below a second threshold, the calibration positions may also be optimized by fine tuning.
When fine tuning is required, trial adjustments may first be made to the calibration positions of the left camera 132 and/or the front camera 134. In some embodiments, only one of the left camera 132 and the front camera 134 may be adjusted. In other embodiments, the left camera 132 and the front camera 134 may be adjusted simultaneously. Subsequently, an adjusted first image region 322' and an adjusted second image region 324' are obtained based on the adjusted calibration positions and the position of the reference region 120 in the object coordinate system O-xyz. Similar to the steps described above, the evaluation of the calibration positions of the left camera 132 and the front camera 134 is updated based on the difference between the adjusted first image region 322' and the adjusted second image region 324'. For example, an updated evaluation score may be determined based on this difference.
If the updated evaluation score is higher than the original evaluation score, i.e., the difference between the adjusted first image region 322' and the adjusted second image region 324' is smaller than the difference between the original first image region 322 and the original second image region 324, the trial adjustment of the calibration positions of the left camera 132 and the front camera 134 is meaningful. In this case, the original calibration positions may be replaced with the updated calibration positions, and the updated calibration positions may be used for subsequent processing. Such adjustments help to further optimize the calibration between the left camera 132 and the front camera 134.
If the updated evaluation score is lower than the original evaluation score, i.e., the difference between the adjusted first image region 322' and the adjusted second image region 324' is greater than the difference between the original first image region 322 and the original second image region 324, the trial adjustment of the calibration positions of the left camera 132 and the front camera 134 was unsuccessful. In this case, the original calibration positions are not updated or replaced, and the next trial adjustment can begin, as sketched below.
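As an illustrative sketch of the trial-adjustment loop described above: the helpers propose_adjustment and project_and_diff are hypothetical placeholders for, respectively, generating a small perturbation of the calibration positions and re-projecting the reference region and measuring the difference between the two image regions.

```python
def fine_tune(calib_left, calib_front, project_and_diff, propose_adjustment,
              n_trials=10):
    """Try small adjustments of the calibration positions and keep an
    adjustment only if it reduces the difference between the two image
    regions (i.e. improves the evaluation score).

    project_and_diff(calib_a, calib_b)   -> difference between the image regions
    propose_adjustment(calib_a, calib_b) -> (new_calib_a, new_calib_b)
    """
    best_diff = project_and_diff(calib_left, calib_front)
    for _ in range(n_trials):
        trial_left, trial_front = propose_adjustment(calib_left, calib_front)
        trial_diff = project_and_diff(trial_left, trial_front)
        if trial_diff < best_diff:  # trial adjustment was meaningful: keep it
            calib_left, calib_front, best_diff = trial_left, trial_front, trial_diff
        # otherwise the original calibration positions are kept for the next trial
    return calib_left, calib_front, best_diff
```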
According to such an implementation, the calibration positions of the cameras on the vehicle 110 may be optimized. Such optimization may occur at any time during the life cycle of the vehicle 110, for example when the vehicle 110 is parked in a parking lot or traveling in a traffic lane, so real-time optimization can be achieved efficiently. This means that the user of the vehicle 110 does not need to spend additional time having the calibration positions of the cameras adjusted, thereby greatly improving the user experience.
Although aspects of the present disclosure are described above with reference to the left camera 132 and the front camera 134, it should be understood that three or more cameras on the vehicle 110 may also be considered when evaluating the calibration positions of the cameras, as long as these cameras can simultaneously capture images of the same reference region 120. A detailed description thereof is omitted here.
Fig. 4 schematically shows a block diagram of an apparatus 400 for evaluating a camera calibration position of a target object according to an exemplary embodiment of the present disclosure. Specifically, the apparatus 400 includes: a position acquisition module 402 configured to acquire a position of the reference region in an object coordinate system, wherein the object coordinate system is established based on a target object, the target object includes at least a first camera and a second camera, and the first camera and the second camera are at different positions of the target object; an image acquisition module 404 configured to acquire a first image and a second image with a first camera and a second camera, respectively, wherein the first image and the second image are captured for a reference area, respectively; an image region determination module 406 configured to determine a first image region corresponding to the reference region in the first image and a second image region corresponding to the reference region in the second image based on the calibration positions of the first camera and the second camera in the object coordinate system and the position of the reference region, respectively; and a calibration evaluation module 408 configured to evaluate the calibration positions of the first camera and the second camera based on the difference of the first image area and the second image area.
In some embodiments, the calibration evaluation module is further configured to: determining a first feature value of the first image region and a second feature value of the second image region, respectively, based on pixel values of respective pixel points in the first image region and the second image region; and evaluating the calibration positions of the first camera and the second camera based on a difference between the first feature value and the second feature value.
In some embodiments, determining the first feature value comprises: determining a plurality of first indicators for a first plurality of pixel values based on the first plurality of pixel values of a plurality of pixel points in the first image region; and determining the first feature value based on the plurality of first indicators and weights corresponding to the plurality of first indicators; and determining the second feature value comprises: determining a plurality of second indicators for a second plurality of pixel values based on the second plurality of pixel values of a plurality of pixel points in the second image region; and determining the second feature value based on the plurality of second indicators and the weights corresponding to the plurality of second indicators.
In some embodiments, acquiring the first image and the second image comprises: shooting a reference area by using a first camera and a second camera to determine a first original image and a second original image; and transforming the first and second original images to determine top view images in the first and second original images, respectively, for the reference region.
In some embodiments, determining the first image region comprises: determining a first spatial transformation matrix of the first camera relative to the object coordinate system; determining a first image area based on the first spatial transformation matrix, a first parameter of the first camera and the position of the reference area in the object coordinate system; and determining the second image region comprises: determining a second spatial transformation matrix of the second camera relative to the object coordinate system; and determining a second image region based on the second spatial transformation matrix, the second parameter of the second camera, and the position of the reference region in the object coordinate system.
In some embodiments, evaluating the nominal positions of the first camera and the second camera comprises: in response to the difference being below a first threshold, determining a first evaluation score; and in response to the difference being above the first threshold, determining a second evaluation score, wherein the second evaluation score is lower than the first evaluation score.
In some embodiments, the apparatus 400 further comprises: an adjustment module configured to adjust the calibration positions of the first camera and the second camera in the object coordinate system in response to the difference being above a first threshold and below a second threshold; an image region adjustment module configured to obtain adjusted first and second image regions based on the adjusted calibration positions of the first and second cameras and the position of the reference region in the object coordinate system; and an evaluation updating module configured to update the evaluation of the calibration positions of the first camera and the second camera based on the difference of the adjusted first image area and the second image area.
In some embodiments, the first parameter comprises one or more of a focal length, distortion, thickness of a lens of the first camera; and the second parameter comprises one or more of a focal length, distortion, thickness of a lens of the second camera.
In some embodiments, the plurality of first indices for the first plurality of pixel values comprise one or more of a mean, a maximum, a minimum, a mean square error of the first plurality of pixel values, and the plurality of second indices for the second plurality of pixel values comprise one or more of a mean, a maximum, a minimum, a mean square error of the second plurality of pixel values.
According to the technical solution of the embodiments of the present disclosure, the same reference region 120 is photographed by at least two cameras on the vehicle 110, and based on the difference between the captured images, the accuracy of the calibration positions of the cameras is quantified in real time and the calibration positions can be adjusted, thereby ensuring the accuracy of the driving data. The solution of the present disclosure can be used not only for manually driven vehicles but also for unmanned vehicles.
In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of the users involved comply with the provisions of relevant laws and regulations and do not violate public order and good customs.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 5 illustrates a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the device 500 comprises a computing unit 501, which may perform various appropriate actions and processes in accordance with a computer program stored in a Read-Only Memory (ROM) 502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the device 500 can also be stored. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
A number of components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, or the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 501 performs the various methods and processes described above, such as the method 200. For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into RAM 503 and executed by the computing unit 501, one or more steps of the method 200 described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the method 200 by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.