Forward intersection measurement method and system based on structural parameters of a vision system
1. A forward intersection measurement method based on structural parameters of a vision system is characterized by comprising the following steps:
respectively expressing main target point vectors when the carrier is positioned at an actual position in a world coordinate system according to the posture of the target relative to an image plane and a preset position of the carrier in a visual system; wherein, the vector of the main target point when the carrier is positioned at the actual position is the vector from the origin of the world coordinate system to the main target point when the carrier is positioned at the actual position; the main target point refers to a projection point of a main point of a visual sensor in the visual system on a target through a main optical axis;
determining a fitting function between the focal length and the focal-cut distance based on the focal length, the focal point and the exterior orientation elements in the structural parameters of the vision system, the gyro value of a gyro device in the vision system, and the main target point vector, expressed in the world coordinate system, when the carrier is located at the actual position;
after a plurality of target images including the point to be measured are obtained by using the vision sensor, stereo field error correction is carried out on the corresponding image point to be measured of the point to be measured in each target image, and each image point vector to be measured after the stereo field error correction is expressed in the world coordinate system; in any two target images, the postures and focus vectors of the points to be measured relative to the vision system are different; the vision sensor is arranged on the carrier; the focus vector refers to a vector from the rotation center of the carrier to the focus;
performing forward intersection measurement on the point to be measured based on the focal-cut distance obtained through the fitting function, the main optical axis rotation radius in the structural parameters of the vision system, the gyro value of the gyro device, and each image point to be measured expressed in the world coordinate system after the stereo field error correction, so as to obtain a coordinate value of the point to be measured in the world coordinate system;
the main optical axis rotation radius is determined by tangent common-sphere intersection transformation based on the main optical axis; the point to be measured is any point in a predetermined measurement space; the origin of the world coordinate system is located at the rotation center of the carrier, and the directions of the coordinate axes of the world coordinate system are determined according to the directions of the coordinate axes when the gyro device is located at the initial position.
2. The forward intersection measurement method based on structural parameters of a vision system according to claim 1, wherein the expressing, in a world coordinate system, a main target point vector when the carrier is located at an actual position according to the posture of the target relative to the image plane and the preset position of the carrier in the vision system comprises:
calibrating the focal length and the principal point coordinates of the vision system;
obtaining the exterior orientation elements between each target and the image plane in any posture; wherein m is the identifier of the target, m = 1, 2, 3, …; n is the identifier of the posture of the target, n = 1, 2, 3, …; the position of each target is predetermined, and the posture of each target is acquired by resection or from the gyro device when the carrier is at the actual position;
calculating a rotation transformation matrix from the coordinate system of the target identified as m in posture n to the world coordinate system
wherein the matrices in the product respectively represent: a rotation transformation matrix from the coordinate system of the target in posture n to the coordinate system of the vision sensor; a rotation transformation matrix from the coordinate system of the vision sensor to the coordinate system of the carrier at the actual position; a rotation transformation matrix from the coordinate system of the carrier at the actual position corresponding to the target in posture n to the coordinate system of the gyro device at its initial position; and a rotation transformation matrix from the coordinate system of the gyro device at its initial position to the world coordinate system; the carrier is a pan-tilt head or a mechanical arm, and the gyro device and the vision sensor are carried by the carrier; the vision sensor comprises a lens; the coordinate system of the carrier at the preset position is consistent with the world coordinate system; the actual position of the carrier is determined based on the gyro value of the gyro device; the coordinate system of the gyro device at the initial position is determined according to a navigation module built into the gyro device;
according to the preset position of the carrier, acquiring a rotation transformation matrix from the preset position of the carrier corresponding to the target identified as m in posture n to the actual position of the carrier
wherein the matrices respectively represent: a rotation transformation matrix from the preset position of the carrier corresponding to the target identified as m in posture n to the position of the carrier at the actual position; a rotation transformation matrix from the initial position of the gyro device to the preset position of the carrier; and a rotation transformation matrix from the initial position of the gyro device corresponding to the target in posture n to the main target point when the carrier is at the actual position;
when the carrier is positioned at the preset position, the vector from the origin O_W of the world coordinate system to the tangent point P_p of the main optical axis and the main-optical-axis rotating sphere, and the vector from the origin O_W of the world coordinate system to the focal point F_p corresponding to the tangent point P_p, are expressed in the world coordinate system; wherein ρ_0 is the main optical axis rotation radius and d_z is the focal-cut distance; the focal-cut distance is the distance between a tangent point and its corresponding focal point; the main-optical-axis rotating sphere is determined based on the tangent common-sphere intersection;
based on theAnd saidIn the world coordinate axis, the origin O of the world coordinate system is definedWA main target point A when the carrier is positioned at a preset positionpIs represented as
for the target identified as m, the rotation matrix from the preset-position vector in the world coordinate system to the vector from the origin O_W of the world coordinate system to the main target point A_m when the carrier is at the actual position is:
the main target point vector is thereby rotated from its initial position to its actual position;
The translation vector between the target coordinate system and the world coordinate system is expressed as:
wherein the symbols above denote the respective components; A_m is taken as the origin O_tm of the coordinate system of target m; the main target point A_m when the target identified as m is in a given posture and the corresponding focal point F_m lie on the same main optical axis; A_mF_m denotes the distance between the focal point F_m and the corresponding main target point A_m; the translation between the world coordinate system and the target coordinate system is represented accordingly.
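The chained rotation transformations of claim 2 compose by matrix multiplication. The sketch below is illustrative only: the angles, and the simplification that every factor is a z-axis rotation, are hypothetical; in practice each matrix comes from the exterior orientation elements, hand-eye calibration, or the gyro value.

```python
import numpy as np

def rot_z(angle_rad):
    """Elementary rotation about the z-axis (one Euler factor)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Hypothetical factors of the chain in claim 2:
#   target(m, posture n) -> vision sensor -> carrier actual position
#   -> gyro initial position -> world coordinate system
R_target_to_sensor = rot_z(np.deg2rad(10.0))   # from exterior orientation elements
R_sensor_to_carrier = rot_z(np.deg2rad(-5.0))  # from hand-eye calibration
R_carrier_to_gyro0 = rot_z(np.deg2rad(20.0))   # from the gyro value
R_gyro0_to_world = np.eye(3)                   # world axes follow the gyro initial position

# The target-to-world rotation is the product of the individual factors.
R_target_to_world = (R_gyro0_to_world @ R_carrier_to_gyro0
                     @ R_sensor_to_carrier @ R_target_to_sensor)

# A point in the target coordinate system maps to world axes
# (up to the translation between the two origins) as:
p_target = np.array([0.1, 0.2, 1.5])
p_world = R_target_to_world @ p_target
```

Because each factor is orthonormal, the product remains a proper rotation, which is what lets the claim chain the matrices freely.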
3. The method according to claim 2, wherein the determining a fitting function between the focal length and the focal-cut distance based on the focal length, the focal point and the exterior orientation elements in the structural parameters of the vision system, the gyro value of the gyro device in the vision system, and the main target point vector, expressed in the world coordinate system, when the carrier is located at the actual position comprises:
for each target, the vision sensor acquires, at any target focal length, a plurality of sample images of the target in arbitrary postures; in any two sample images of the target, the distance and the posture of the target relative to the vision sensor are different;
respectively acquiring, based on the sample images of each posture of each target acquired at each target focal length, the solution of the focal-cut distance d_z corresponding to each target focal length, which specifically includes:
for the target identified as m, based on the sample images of posture n acquired at any target focal length, obtaining the coordinates, in the coordinate system of the target identified as m, of the main target point A_n on the target in posture n when the carrier is at the actual position;
based on the coordinates of the main target point A_n in the coordinate system of the target identified as m, expressing the coordinates of the main target point A_n in the world coordinate system;
based on each sample image, acquired at the target focal length, of the target identified as m in posture n, acquiring the coordinates, in the coordinate system of the target, of the focal point F_n corresponding to the tangent point of the main optical axis and the main-optical-axis rotating sphere when the carrier is at the actual position;
based on the coordinates of the focal point F_n in the coordinate system of the target, expressing the coordinates of the focal point F_n in the world coordinate system;
based on the sample images of the target in posture n acquired at the target focal length, expressing in the world coordinate system the tangent point P_n of the main optical axis and the main-optical-axis rotating sphere corresponding to the main target point A_n and the focal point F_n;
wherein the main target point A_n, the focal point F_n and the tangent point P_n are all located on the same main optical axis;
for the target identified as m in posture n, based on the coordinates of the main target point A_n in the world coordinate system, the coordinates of the focal point F_n in the world coordinate system and the coordinates of the tangent point P_n, a tangent-point three-point collinearity equation is obtained:
wherein λ represents a constant;
substituting the coordinates of the main target point A_n, the focal point F_n and the tangent point P_n, expressed in terms of the main optical axis rotation radius ρ_0 and the focal-cut distance d_z, into the tangent-point three-point collinearity equation to obtain:
wherein:
performing a Taylor expansion of the equation to be solved, retaining the first-order term, to obtain:
wherein:
and
wherein:
wherein
wherein the symbols above represent the terms of the matrix;
the error equation is expressed as:
wherein the constant-term vector is L_n and the coefficient matrix is Λ_n;
solving the nth iteration correction: X_n = (Λ_n^T Λ_n)^(-1) Λ_n^T L_n;
resolving the correction term X_n = [d(d_z), dρ_0]^T of the error equation, and obtaining through iterative operation the solutions of the focal-cut distance d_z and the main optical axis rotation radius ρ_0 corresponding to the target focal length;
obtaining each focal-cut distance d_z corresponding to each target focal length, and obtaining the fitting function between the focal length f and the focal-cut distance d_z based on the focal-cut distance d_z corresponding to each target focal length.
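The least-squares correction X_n = (Λ_n^T Λ_n)^(-1) Λ_n^T L_n and the final fitting step of claim 3 can be sketched as follows. The sample (f, d_z) pairs are placeholders, not measured data, and the quadratic fitting function is an assumption; the claim does not fix the form of the fit.

```python
import numpy as np

def iterate_correction(Lambda, L):
    """One least-squares correction X = (L^T L)^-1 L^T l, solved via the
    normal equations, as in the claim's iterative adjustment step."""
    return np.linalg.solve(Lambda.T @ Lambda, Lambda.T @ L)

# Hypothetical per-target-focal-length solutions (f_i, d_z_i) recovered by the
# iterative adjustment; the numbers are illustrative placeholders.
f_samples = np.array([8.0, 12.0, 16.0, 25.0, 35.0])   # focal lengths (mm)
dz_samples = np.array([1.9, 2.6, 3.4, 5.1, 7.0])      # focal-cut distances (mm)

# Fit d_z as a low-order polynomial of f; a linear or quadratic model is an
# assumption here, since the patent only requires *a* fitting function.
coeffs = np.polyfit(f_samples, dz_samples, deg=2)
fit_dz = np.poly1d(coeffs)

# Interpolate the focal-cut distance for a calibrated focal length.
dz_at_20mm = fit_dz(20.0)
```

With the fitting function in hand, any calibrated focal length f yields its focal-cut distance d_z directly, which is what claim 5 relies on.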
4. The method as claimed in claim 3, wherein the acquiring a plurality of target images including the point to be measured by using a vision sensor in the vision system, performing stereo field error correction on the image point to be measured corresponding to the point to be measured in each target image, and expressing each image point to be measured after the stereo field error correction in the world coordinate system specifically comprises:
acquiring a plurality of target images comprising points to be measured by using a vision sensor in a vision system;
determining, based on each target image, the image coordinates in the vision sensor coordinate system of the image point to be measured corresponding to the point to be measured in that target image;
correcting the image coordinates of each image point to be measured by using previously acquired image-space vertical-axis correction data W = [W_x, W_y]^T, to obtain first coordinates of each image point to be measured in the vision sensor coordinate system after the image-space vertical-axis correction; and correcting the coordinates of each image point to be measured by using axial correction data a, to obtain second coordinates of each image point to be measured in the vision sensor coordinate system after the correction;
expressing, in the world coordinate system, the preset-position vector from the origin of the world coordinate system to the origin of the image plane coordinate system as [0, ρ_0, -f-d_z]^T, and then converting the first coordinates and the second coordinates in the vision sensor coordinate system into the world coordinate system, to obtain third coordinates and fourth coordinates, in the world coordinate system, of each image point to be measured corresponding to the point to be measured.
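A minimal sketch of the correction-and-transform step of claim 4. The additive form of the vertical-axis correction W = [W_x, W_y]^T and of the axial correction a is an assumption (the patent does not state the functional form), as are all numeric values.

```python
import numpy as np

def correct_image_point(xy, W, a):
    """Apply the image-space vertical-axis correction W and the axial
    correction a to a measured image point (assumed additive forms)."""
    first = np.asarray(xy, dtype=float) + np.asarray(W, dtype=float)  # vertical-axis corrected
    second = first + a                                                # axially corrected
    return first, second

def image_point_in_world(xy_corrected, R_sensor_to_world, rho0, f, dz):
    """Express a corrected image point in the world coordinate system using
    the preset-position vector [0, rho0, -f-dz]^T from the world origin to
    the image-plane origin, as recited in claim 4."""
    plane_origin = np.array([0.0, rho0, -f - dz])
    p_sensor = np.array([xy_corrected[0], xy_corrected[1], 0.0])  # point lies on the image plane
    return R_sensor_to_world @ p_sensor + plane_origin

# Illustrative usage with hypothetical values (identity sensor-to-world rotation).
first, second = correct_image_point((1.0, 2.0), W=(0.1, -0.2), a=0.05)
world_pt = image_point_in_world(second, np.eye(3), rho0=1.0, f=2.0, dz=0.5)
```

The first and second corrected coordinates here correspond to the third and fourth world-frame coordinates of the claim once transformed.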
5. The method as claimed in claim 4, wherein the performing forward intersection measurement on the point to be measured based on the fitting function, the structural parameters of the vision system, and each image point to be measured expressed in the world coordinate system after the stereo field error correction, to obtain the coordinate value of the point to be measured in the world coordinate system specifically comprises:
calibrating the focal length of the vision sensor, and determining the focal length f of the vision sensor;
determining, based on the predetermined fitting function between the focal length f and the focal-cut distance d_z, the focal-cut distance d_z corresponding to the focal length f;
based on the vector from the origin O_W of the world coordinate system to the focal point F_p when the carrier is at the preset position, the vector from the origin O_W of the world coordinate system to the focal point corresponding to the point to be measured when the carrier is at the actual position is expressed; wherein the expression uses the rotation transformation matrix from the preset position of the carrier to the main target point when the carrier is at the actual position;
based on the coordinates of the point B_n to be measured in the world coordinate system, the third coordinates in the world coordinate system of each image point to be measured corresponding to the point B_n to be measured, and the focal point F_n corresponding to the point B_n to be measured, obtaining a three-point collinearity equation:
obtaining by solution:
Taylor expansion is performed to obtain:
wherein:
the error equation is expressed as:
wherein the constant-term vector is L_n and the coefficient matrix is Λ_n;
the nth iteration correction is solved as: χ_n = (Λ_n^T Λ_n)^(-1) Λ_n^T L_n;
according to the nth iteration correction χ_n, iteratively operating to obtain the first solution of the point B_n to be measured corresponding to the third coordinates;
based on the point B_n to be measured, the fourth coordinates in the world coordinate system of each image point to be measured corresponding to the point B_n to be measured, and the focal point F_n corresponding to the point B_n to be measured, obtaining, by the same method, the second solution of the point B_n to be measured corresponding to the fourth coordinates;
determining, according to the first solution and the second solution, the coordinate value of the point B_n to be measured in the world coordinate system.
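At its core, the forward intersection of claim 5 finds the point nearest to the rays running from each focal point through its corrected image point, both expressed in world coordinates. The sketch below uses a standard least-squares ray-intersection formulation rather than the claim's exact collinearity adjustment, with hypothetical coordinates.

```python
import numpy as np

def intersect_rays(origins, directions):
    """Least-squares 'intersection' of several 3-D rays: the point minimizing
    the sum of squared distances to all rays (a common forward-intersection
    formulation; illustrative, not the claim's exact adjustment)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, dtype=float) / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector onto the plane normal to the ray
        A += P
        b += P @ np.asarray(o, dtype=float)
    return np.linalg.solve(A, b)

# Two rays: from each focal point through the corresponding corrected image
# point (both already expressed in the world coordinate system).
F1 = np.array([0.0, 1.0, -0.5])
F2 = np.array([0.5, 1.0, -0.5])
B_true = np.array([0.2, 4.0, 1.0])           # hypothetical point to be measured
B = intersect_rays([F1, F2], [B_true - F1, B_true - F2])
```

In the claim's formulation, the third- and fourth-coordinate solves each produce one such estimate (the first and second solutions), which are then combined into the final coordinate value.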
6. The forward intersection measurement method based on structural parameters of a vision system according to claim 5, wherein the vision system comprises two or more of the vision sensors;
correspondingly, the acquiring a plurality of target images including points to be measured by using the vision sensor in the vision system specifically includes:
and synchronously acquiring a plurality of target images comprising the points to be measured by utilizing the vision sensors, and synchronously and parallelly processing the image data of the target images acquired by the vision sensors.
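The synchronous acquisition with parallel per-image processing recited in claim 6 can be sketched with a thread pool; the capture and processing functions below are stand-ins (a real system would trigger all sensors on a shared sync signal and run the corner extraction and corrections per image).

```python
import concurrent.futures as cf

def capture(sensor_id):
    """Stand-in for a per-sensor image grab."""
    return {"sensor": sensor_id, "image": [[0] * 4] * 3}  # dummy 3x4 frame

def process(frame):
    """Stand-in for per-image processing (point extraction, corrections)."""
    return frame["sensor"], len(frame["image"])

with cf.ThreadPoolExecutor() as pool:
    frames = list(pool.map(capture, range(2)))   # synchronous acquisition
    results = list(pool.map(process, frames))    # parallel image processing
```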
7. The forward intersection measurement method based on structural parameters of a vision system according to claim 5, wherein the vision system comprises two or more of the vision sensors, and the points to be measured are dynamic points;
correspondingly, after the forward intersection measurement is performed on the point to be measured to obtain the coordinate value of the point to be measured in the world coordinate system, the method further includes:
and obtaining the motion vector of the point to be measured based on the coordinate of the point to be measured in the world coordinate system independently determined by each vision sensor.
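For the dynamic points of claim 7, each vision sensor's independently measured coordinates at successive times yield a motion vector. A minimal sketch, where the simple finite-difference estimator is an assumption:

```python
import numpy as np

def motion_vector(coords_t0, coords_t1, dt):
    """Displacement and mean velocity of a dynamic point between two forward
    intersection measurements taken dt apart."""
    d = np.asarray(coords_t1, dtype=float) - np.asarray(coords_t0, dtype=float)
    return d, d / dt

# Illustrative usage with hypothetical world coordinates from one sensor.
disp, vel = motion_vector([0.0, 0.0, 0.0], [1.0, 2.0, 3.0], dt=2.0)
```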
8. A forward intersection measurement system based on structural parameters of a vision system, comprising:
the coordinate expression module is used for respectively expressing a main target point vector when the carrier is positioned at an actual position in a world coordinate system according to the posture of the target in the visual system relative to the image plane and the preset position of the carrier; wherein, the vector of the main target point when the carrier is positioned at the actual position is the vector from the origin of the world coordinate system to the main target point when the carrier is positioned at the actual position; the main target point refers to a projection point of a main point of a visual sensor in the visual system on a target through a main optical axis;
a function determining module, configured to determine a fitting function between the focal length and the focal-cut distance based on the focal length, the focal point and the exterior orientation elements in the structural parameters of the vision system, the gyro value of a gyro device in the vision system, and the main target point vector when the carrier is located at the actual position in the world coordinate system;
the error correction module is used for performing stereo field error correction on the image points to be measured corresponding to the points to be measured in each target image after acquiring a plurality of target images comprising the points to be measured by using the vision sensor, and expressing each image point vector to be measured after the stereo field error correction in the world coordinate system; in any two target images, the postures and focus vectors of the points to be measured relative to the vision system are different; the vision sensor is arranged on the carrier; the focus vector refers to a vector from the rotation center of the carrier to the focus;
the intersection measurement module is used for performing front intersection measurement on the point to be measured on the basis of the focal cut distance obtained through the fitting function, the main optical axis rotating radius in the structural parameters of the visual system, the gyro value of the gyro device and each image point to be measured expressed in the world coordinate system and subjected to stereo field error correction, so as to obtain the coordinate value of the point to be measured in the world coordinate system;
the main optical axis rotation radius is determined by tangent common-sphere intersection transformation based on the main optical axis; the point to be measured is any point in a predetermined measurement space; the origin of the world coordinate system is positioned at the rotation center of the carrier, and the directions of all coordinate axes of the world coordinate system are determined according to the directions of all coordinate axes when the gyro device is positioned at the initial position; the vision system includes one or more vision sensors.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the forward intersection measurement method based on structural parameters of a vision system according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the steps of the forward intersection measurement method based on structural parameters of a vision system according to any one of claims 1 to 7.
Background
With the expanding application of machine vision and AI technology in the field of precision agriculture, the requirements for high-precision, control-point-free measurement in the automated, intelligent operations of agriculture and other industries are becoming increasingly demanding.
Forward intersection photogrammetry based on monocular or multi-camera systems typically requires control points or assistance from other sensors. However, on the one hand, ideal target placement is usually not possible in agricultural or other real-world scenarios; on the other hand, relying on additional sensors not only raises equipment cost, but the environmental conditions of the scene also limit the applicable scenarios and the measurement accuracy of the vision system. In summary, existing forward intersection measurement methods have difficulty accurately obtaining the position of a point to be measured without control points.
Disclosure of Invention
The invention provides a forward intersection measurement method and system based on structural parameters of a vision system, which are used for overcoming the defect in the prior art that the position of a point to be measured is difficult to obtain accurately without control points, and for achieving more accurate acquisition of the position of the point to be measured, based on the structural parameters of the vision system, without control points.
The invention provides a forward intersection measurement method based on structural parameters of a vision system, which comprises the following steps:
respectively expressing main target point vectors when the carrier is positioned at an actual position in a world coordinate system according to the posture of the target relative to an image plane and a preset position of the carrier in a visual system; wherein, the vector of the main target point when the carrier is positioned at the actual position is the vector from the origin of the world coordinate system to the main target point when the carrier is positioned at the actual position; the main target point refers to a projection point of a main point of a visual sensor in the visual system on a target through a main optical axis;
determining a fitting function between the focal length and the focal-cut distance based on the focal length, the focal point and the exterior orientation elements in the structural parameters of the vision system, the gyro value of a gyro device in the vision system, and the main target point vector, expressed in the world coordinate system, when the carrier is located at the actual position;
after a plurality of target images including the point to be measured are obtained by using the vision sensor, stereo field error correction is carried out on the corresponding image point to be measured of the point to be measured in each target image, and each image point vector to be measured after the stereo field error correction is expressed in the world coordinate system; in any two target images, the postures and focus vectors of the points to be measured relative to the vision system are different; the vision sensor is arranged on the carrier; the focus vector refers to a vector from the rotation center of the carrier to the focus;
performing forward intersection measurement on the point to be measured based on the focal-cut distance obtained through the fitting function, the main optical axis rotation radius in the structural parameters of the vision system, the gyro value of the gyro device, and each image point to be measured expressed in the world coordinate system after the stereo field error correction, so as to obtain a coordinate value of the point to be measured in the world coordinate system;
the main optical axis rotation radius is determined by tangent common-sphere intersection transformation based on the main optical axis; the point to be measured is any point in a predetermined measurement space; the origin of the world coordinate system is located at the rotation center of the carrier, and the directions of the coordinate axes of the world coordinate system are determined according to the directions of the coordinate axes when the gyro device is located at the initial position.
The invention also provides a forward intersection measurement system based on the structural parameters of a vision system, which comprises:
the coordinate expression module is used for respectively expressing main target point vectors when the carrier is positioned at an actual position in a world coordinate system according to the posture of the target in the vision system relative to the image plane and the preset position of the carrier; wherein the vector of the main target point when the carrier is positioned at the actual position is the vector from the origin of the world coordinate system to the main target point when the carrier is positioned at the actual position; the main target point refers to a projection point of a main point of a vision sensor in the vision system on a target through the main optical axis;
the function determining module is used for determining a fitting function between the focal length and the focal-cut distance based on the focal length, the focal point and the exterior orientation elements in the structural parameters of the vision system, the gyro value of a gyro device in the vision system, and a main target point vector when the carrier is located at an actual position in the world coordinate system;
the error correction module is used for performing stereo field error correction on the image points to be measured corresponding to the points to be measured in each target image after acquiring a plurality of target images comprising the points to be measured by using the vision sensor, and expressing each image point vector to be measured after the stereo field error correction in the world coordinate system; in any two target images, the postures and focus vectors of the points to be measured relative to the vision system are different; the vision sensor is arranged on the carrier; the focus vector refers to a vector from the rotation center of the carrier to the focus;
the intersection measurement module is used for carrying out forward intersection measurement on the point to be measured based on the fitting function, the visual system structure parameters and each image point to be measured expressed in the world coordinate system and subjected to stereo field error correction, so as to obtain a coordinate value of the point to be measured in the world coordinate system; wherein the vision system comprises one or more vision sensors;
the main optical axis rotation radius is determined by tangent common-sphere intersection transformation based on the main optical axis; the point to be measured is any point in a predetermined measurement space; the origin of the world coordinate system is located at the rotation center of the carrier, and the directions of the coordinate axes of the world coordinate system are determined according to the directions of the coordinate axes when the gyro device is located at the initial position.
The invention also provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to realize the steps of the vision system structure parameter-based forward intersection measurement method.
The present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the forward intersection measurement method based on structural parameters of a vision system as described in any one of the above.
According to the forward intersection measurement method and system based on the structural parameters of a vision system, a world coordinate system is established according to the structural parameters of the vision system without requiring control points, and forward intersection measurement is performed on the point to be measured. On the basis of linear-space analysis, a method for forward intersection measurement of spatial points based on the structural parameters of the vision system (or an equivalent hand-eye system) is introduced, so that more accurate photogrammetry of the point to be measured is realized; experiments show that the error between the coordinates of the point to be measured obtained by this method and the true coordinates is about 2.5%.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of the forward intersection measurement method based on structural parameters of a vision system according to the present invention;
FIG. 2 is a schematic diagram of focal-cut distance calculation for multi-target intersection in the forward intersection measurement method based on structural parameters of a vision system according to the present invention;
FIG. 3 is a schematic diagram of measuring a point to be measured by the forward intersection measurement method based on structural parameters of a vision system according to the present invention;
FIG. 4 is a second flowchart of the method for forward intersection measurement based on the structural parameters of the vision system according to the present invention;
FIG. 5 is a schematic structural diagram of the forward intersection measurement system based on structural parameters of a vision system according to the present invention;
fig. 6 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Fig. 1 is a schematic flow chart of a forward intersection measurement method based on structural parameters of a vision system according to the present invention. The forward intersection measurement method based on the structural parameters of the vision system according to the invention is described below with reference to fig. 1. As shown in fig. 1, the method includes: step 101, respectively expressing, in a world coordinate system, the main target point vector when the carrier is located at the actual position according to the posture of a target relative to the image plane and a preset position of the carrier in the vision system; wherein the main target point vector when the carrier is located at the actual position is the vector from the origin of the world coordinate system to the main target point when the carrier is located at the actual position; the main target point refers to the projection point, through the main optical axis, of the principal point of a vision sensor in the vision system on a target.
It should be noted that, before the forward intersection measurement method based on the structural parameters of the vision system is performed, a plurality of targets in arbitrary postures can be laid out at preset positions.
For any target, after the coordinate system of the target is defined as the coordinate system of the point to be measured, the image points, in the vision sensor of the vision system, of the actual points on the target, together with the corresponding focal points, can be obtained.
The actual points on the target are all points of the target other than the main target point.
Step 102, determining a fitting function between the focal length and the focal-cut distance based on the focal length, the focal point and the exterior orientation elements in the structural parameters of the vision system, the gyro value of a gyro device in the vision system, and the main target point vector, expressed in the world coordinate system, when the carrier is located at the actual position.
Step 103, acquiring a plurality of target images including the point to be measured by using a vision sensor in the vision system, performing stereo field error correction on the corresponding image point to be measured in each target image, and expressing each image point to be measured after the stereo field error correction in the world coordinate system; in any two target images, the postures and focus vectors of the point to be measured relative to the vision system are different; the vision sensor is disposed on the carrier; the focus vector refers to the vector from the rotation center of the carrier to the focal point.
It should be noted that the visual sensor in the embodiment of the present invention may be a video camera.
Step 104, performing forward intersection measurement on the point to be measured based on the focal-cut distance obtained through the fitting function, the main optical axis rotation radius in the structural parameters of the vision system, the gyro value of the gyro device, and each image point to be measured, expressed in the world coordinate system, after the stereo field error correction, to obtain the coordinate values of the point to be measured in the world coordinate system.
The main optical axis rotation radius is determined by tangent common-sphere intersection transformation based on the main optical axis; the main optical axis and the focal length are determined based on a vision sensor; the point to be measured is any point in a predetermined measurement space; the origin of the world coordinate system is positioned at the rotation center of the carrier of the visual system, and the directions of all coordinate axes of the world coordinate system are determined according to the directions of all coordinate axes when the gyro device is positioned at the initial position.
It should be noted that the carrier may be a pan-tilt or a robotic arm. The visual sensor may be a camera or the like that can capture images.
In the process of measuring with a pan-tilt (or a mechanical arm) as the carrier of a camera, the effect of each vector in the process is analyzed: when the focal-cut distance is fixed, the focus vector is uniquely determined by the normal vector at the tangent point of the main optical axis (namely, the rotation radius) and the vector from the tangent point to the focal point (the focal-cut distance). Therefore, the rotation center of the pan-tilt (or the mechanical arm) is taken as the origin of the world coordinate system. By performing forward intersection based on this world coordinate system, no target is required, but the focal-cut distance and the main optical axis rotation radius need to be accurately measured.
For this purpose, the coordinates of the center of the sphere can be obtained by geometric algebraic transformation, i.e., by algebraic transformation of the beam intersection into tangential co-spherical intersection. Through research on intersection data, it is found that through proper setting of experimental conditions, the precise structural parameters of the visual system can be directly obtained, including: focal-cut distance and primary optical axis radius of rotation.
After the accurate focal-cut distance and main optical axis rotation radius are obtained, a world coordinate system can be established using these two structural parameters, and the focus vector and the image point vector can be expressed in the world coordinate system for forward intersection.
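The relation described above can be sketched numerically. The following is an illustrative sketch, not the patent's exact axis convention: with the world origin at the carrier's rotation center, the tangent point is placed at radius ρ0 on an assumed axis, and the focal point sits a focal-cut distance dz further along an assumed optical-axis direction; a carrier rotation R moves the whole focus vector rigidly.

```python
import numpy as np

def focus_vector(rho0, dz, R=np.eye(3)):
    """Illustrative focus vector: tangent point at [0, rho0, 0] on the
    rotation sphere (assumed layout), focal point dz further along an
    assumed optical-axis direction [0, 0, 1], rotated by the carrier
    rotation R."""
    tangent_point = np.array([0.0, rho0, 0.0])   # on the rotation sphere
    axis_dir = np.array([0.0, 0.0, 1.0])         # assumed optical-axis direction
    return R @ (tangent_point + dz * axis_dir)
```

A rotation of the carrier changes the direction of the focus vector but not its length, which is why the pair (ρ0, dz) fixes the focus vector for every carrier attitude.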
According to the embodiment of the invention, a world coordinate system is established according to the structural parameters of the vision system, and forward intersection measurement is carried out on the point to be measured. On the basis of linear space analysis, a method for forward intersection measurement of spatial points based on the structural parameters of a vision system (or an equivalent hand-eye system) is introduced, realizing more accurate photogrammetry of the point to be measured; experiments show that the error between the coordinates of the point to be measured obtained by the forward intersection measurement method based on the structural parameters of the vision system and the real coordinates is about 2.5%.
Based on the content of each embodiment, respectively expressing, in the world coordinate system, the main target point vector when the carrier is located at the actual position according to the posture of the target relative to the image plane and the preset position of the carrier in the vision system specifically includes: calibrating the focal length and the principal point coordinates of the vision sensor.
The exterior orientation elements between each target and the image plane in any posture are obtained; wherein m is the identifier of the target, m = 1, 2, 3, …; n is the identifier of the target posture, n = 1, 2, 3, …; the position of each target is predetermined, and the posture of each target is acquired by resection or from the gyro when the carrier is in the actual position.
It should be noted that there may be six exterior orientation elements, three of which describe the focus coordinates of the vision system; the other three are angular elements that describe the spatial attitude of the photographic beam.
A rotation transformation matrix between the coordinate system of the target identified as m in the n-th posture and the world coordinate system is obtained.
Wherein the first factor is the rotation transformation matrix from the coordinate system of the target in the n-th posture to the coordinate system of the vision sensor; the second is the rotation transformation matrix from the coordinate system of the vision sensor to the coordinate system of the carrier; the third is the rotation transformation matrix from the coordinate system of the carrier corresponding to the target in the n-th posture to the coordinate system of the gyro device at its initial position; and the fourth is the rotation transformation matrix from the coordinate system of the gyro device at its initial position to the world coordinate system. The carrier is a pan-tilt or a mechanical arm, and the gyro device and the vision sensor are carried on the carrier; the vision sensor comprises a lens; the coordinate system of the preset position of the carrier is consistent with the world coordinate system; the actual position of the carrier is determined according to the gyro value of the gyro device; and the coordinate system of the gyro device at the initial position is determined according to the built-in navigation module of the gyro device.
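The chain of four rotations above composes by matrix multiplication. A minimal sketch follows; the matrix names are placeholders, since the patent's own symbols are not recoverable from the text.

```python
import numpy as np

def rot_z(theta):
    """Elementary rotation about the z-axis (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def target_to_world(R_t2c, R_c2b, R_b2i, R_i2w):
    """Chain the four rotations named in the text:
    target -> vision sensor -> carrier -> gyro initial position -> world.
    Matrices apply right-to-left, so the target-to-sensor rotation is
    the rightmost factor."""
    return R_i2w @ R_b2i @ R_c2b @ R_t2c
```

Because each factor is orthonormal with determinant 1, the composed target-to-world matrix is again a proper rotation.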
According to the preset position of the carrier, the rotation transformation matrix from the coordinate system of the actual position of the carrier to the coordinate system of the preset position of the carrier is obtained.
Wherein the first matrix represents the rotation transformation from the preset position of the carrier, corresponding to the target identified as m in the n-th posture, to the main target point when the carrier is located at the actual position; the second is the rotation transformation matrix from the initial position of the gyro device to the preset position of the gyro device; and the third represents the rotation transformation matrix from the initial position of the gyro device, corresponding to the target in the n-th posture, to the main target point when the carrier is located at the actual position. The main target point is the projection point, through the main optical axis, of the principal point of the vision sensor on the target.
It should be noted that the posture of each target can be obtained on the basis of the output values of the gyro module when the carrier is located at the actual position.
Fig. 2 is a schematic diagram of the focal-cut distance calculation for multi-target intersection in the forward intersection measurement method based on the structural parameters of the vision system according to the present invention. As shown in FIG. 2, An and An-1 respectively represent the main target points on the target identified as m in the n-th and (n-1)-th postures; An-i and An-j respectively represent the main target points on the target identified as m-1 in the (n-i)-th and (n-j)-th postures; Otm represents the origin of the coordinate system of the target identified as m, and Xtm, Ytm and Ztm respectively represent the directions of the three coordinate axes of that coordinate system; Oi represents the origin of the coordinate system of the gyro device at the initial position, and Xi, Yi and Zi respectively represent the directions of the three coordinate axes of the navigation module's coordinate system; Ow represents the rotation center of the carrier and also the origin of the world coordinate system, and Xw, Yw and Zw are the directions of the three coordinate axes of the world coordinate system; Obn is the origin of the coordinate system of the gyro device when the carrier is located at the actual position, and Xbn, Ybn and Zbn are the directions of the three coordinate axes of that coordinate system.
Ocn is the origin of the image plane coordinate system when the carrier is located at the actual position, and Xcn, Ycn and Zcn are the directions of the three coordinate axes of that coordinate system; Pn-1, Pn, Pn-i and Pn-j are the tangent points of the main optical axis with the main-optical-axis rotation sphere; Fn and Fn-1 are the back focal points of the point groups and image point groups, in the n-th and (n-1)-th postures respectively, on the target identified as m; Fn-i and Fn-j are the back focal points of the point groups and image point groups, in the (n-i)-th and (n-j)-th postures, on the target identified as m-1.
When the carrier is at the preset position, the vector from the origin Ow of the world coordinate system to the tangent point Pp of the main optical axis with the main-optical-axis rotation sphere is expressed in the world coordinate system, as is the vector from the origin Ow to the focal point Fp corresponding to the tangent point Pp; where ρ0 is the main optical axis rotation radius and dz is the focal-cut distance; the focal-cut distance is the distance between a tangent point and its corresponding focal point; the main-optical-axis rotation sphere is determined based on the common-sphere tangency.
It should be noted that the forward intersection measurement method based on the structural parameters of the vision system of the present invention involves two intersections. First, the conventional rear intersection (resection) of the target points and image points, from which the exterior orientation elements and attitude angles between each target and the image plane in any posture can be determined; here AnFn represents the distance between the focal point and the corresponding main target point. Second, using the exterior orientation elements and the gyro values, the main target point vector when the carrier is located at any actual position and its related quantities are expressed, such as the two rotation transformation matrices (the rotation transformation matrix from the coordinate system of the target to the world coordinate system, and the rotation transformation matrix between the coordinate system of the actual position of the carrier and the coordinate system of the preset position).
Based on these rotation transformation matrices, the vector from the origin Ow of the world coordinate system to the main target point Ap when the carrier is at the preset position is expressed in the world coordinate system.
For the target identified as m, the rotation matrix, in the world coordinate system, that takes the vector from the origin Ow of the world coordinate system to the main target point Am from the preset position of the carrier to the actual position of the carrier is:
vector of main target pointFrom an initial position to
The translation vector between the target coordinate system and the world coordinate system is expressed as:
Wherein the first two terms represent the components of the transformation; Am is located at the origin Otm of the coordinate system of the target identified as m; when the target identified as m is in a given posture, the focal point Fm corresponding to the main target point Am and the main target point Am are located on the same main optical axis; AmFm represents the distance between the focal point Fm and the corresponding main target point Am; and the last term represents the translation between the world coordinate system and the target coordinate system.
According to the embodiment of the invention, the vector of the main target point when the carrier is positioned at the actual position is expressed in the world coordinate system based on the rotation transformation matrix among the coordinate systems, each vector can be expressed in the same world coordinate system, a data basis can be provided for acquiring the coordinate of the point to be measured in the world coordinate system, and the consistency of photogrammetry can be improved.
Based on the content of each embodiment, determining the fitting function between the focal length and the focal-cut distance based on the focal length, the focal point and the exterior orientation elements in the structural parameters of the vision system, the gyro value of the gyro device in the vision system, and the main target point vector, expressed in the world coordinate system, when the carrier is located at the actual position specifically includes: for each target, the vision sensor acquires a plurality of sample images of the target in arbitrary postures at each of several target focal lengths; in any two sample images of a target, the distance and posture of the target relative to the vision sensor are different.
The solution for each focal-cut distance dz corresponding to each target focal length is acquired based on the sample images of each target acquired at that target focal length, which specifically includes: for the target identified as m, acquiring the transformation matrix of the posture coordinate system relative to the world coordinate system based on the sample image of the n-th posture acquired at any target focal length, and obtaining the coordinates, in the target coordinate system, of the main target point An on the target in the n-th posture when the carrier is located at the actual position.
Based on the coordinates of the main target point An in the coordinate system of the target identified as m, the coordinates of the main target point An are expressed in the world coordinate system.
For the target identified as m, based on each sample image of the target in the n-th posture acquired at the target focal length, the coordinates, in the target coordinate system, of the focal point Fn corresponding to the tangent point of the main optical axis with the main-optical-axis rotation sphere when the carrier is located at the actual position can be acquired; wherein the focal point Fn corresponds to the main target point An, the two being located on the same main optical axis.
Based on the coordinates of the focal point Fn in the coordinate system of the target identified as m, the coordinates of the focal point Fn are expressed in the world coordinate system.
Based on the sample image, acquired at the target focal length, of the target identified as m in the n-th posture, the tangent point Pn of the main optical axis with the main-optical-axis rotation sphere corresponding to the main target point An and the focal point Fn is expressed in the world coordinate system.
Wherein the main target point An, the focal point Fn and the tangent point Pn are all located on the same main optical axis.
For the target identified as m in the n-th posture, based on the coordinates of the main target point An in the world coordinate system, the coordinates of the focal point Fn in the world coordinate system, and the coordinates of the tangent point Pn, a three-point collinearity equation is obtained:
where λ represents a constant.
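The condition that An, Fn and Pn lie on one main optical axis can be checked numerically: the three points are collinear exactly when the cross product of two difference vectors vanishes. A minimal sketch (the point names follow the text; the numeric layout is illustrative):

```python
import numpy as np

def collinear(a, f, p, tol=1e-9):
    """True when main target point a, focal point f and tangent point p
    lie on one line: the cross product of (f - a) and (p - a) must be
    (numerically) zero."""
    a, f, p = map(np.asarray, (a, f, p))
    return np.linalg.norm(np.cross(f - a, p - a)) < tol
```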
The coordinates of the main target point An, the focal point Fn and the tangent point Pn, expressed in terms of the main optical axis rotation radius ρ0 and the focal-cut distance dz, are substituted into the tangency three-point collinearity equation, obtaining:
wherein:
To solve the equation, a Taylor expansion is performed, keeping the first-order term, which gives:
wherein:
and
wherein:
wherein:
wherein the above symbols represent the entries of the matrix.
The error equation is expressed as:
wherein the constant term vector is Ln and the coefficient matrix is Λn.
The n-th iteration correction is solved as Xn = (ΛnᵀΛn)⁻¹ΛnᵀLn.
According to Xn = [d(dz), dρ0]ᵀ, the solution of the focal-cut distance dz corresponding to the target focal length is obtained through iterative operation.
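The update Xn = (ΛnᵀΛn)⁻¹ΛnᵀLn is a Gauss-Newton step. A generic sketch follows; the collinearity residual itself is not recoverable from the text, so a residual callable is passed in as an assumption, and the Jacobian is formed numerically.

```python
import numpy as np

def gauss_newton(residual, x0, n_iter=20):
    """Generic Gauss-Newton iteration matching the text's update
    Xn = (Lam^T Lam)^-1 Lam^T Ln, with Lam the Jacobian of the residual
    and Ln the (negated) residual vector."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        Ln = -residual(x)                         # constant-term vector
        eps = 1e-7                                # numerical-Jacobian step
        Lam = np.column_stack([
            (residual(x + eps * np.eye(len(x))[i]) - residual(x)) / eps
            for i in range(len(x))
        ])
        dx, *_ = np.linalg.lstsq(Lam, Ln, rcond=None)  # normal-equation solve
        x = x + dx
        if np.linalg.norm(dx) < 1e-12:
            break
    return x
```

For the patent's two unknowns, x would be [dz, ρ0] and the residual would come from the linearized collinearity equations.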
In addition, the method of obtaining the focal-cut distance dz and the main optical axis rotation radius ρ0 is not limited to the above; other solution methods can also be applied to the present invention.
After each focal-cut distance dz corresponding to each target focal length is acquired, the fitting function between the focal length f and the focal-cut distance dz is obtained based on these pairs of target focal length and focal-cut distance dz.
It should be noted that a difference mapping matrix and a fitting function dz(f) between the focal length f and the focal-cut distance dz can be established. The specific fitting method between the focal length f and the focal-cut distance dz is not particularly limited in the embodiment of the present invention.
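Since the fitting method is left open, one simple choice is a low-order polynomial fit over the calibrated (f, dz) samples; this is an assumption for illustration, not the patent's prescribed method.

```python
import numpy as np

def fit_dz_of_f(focal_lengths, dz_values, degree=2):
    """Fit the fitting function dz(f) from calibrated (f, dz) samples
    with a low-order polynomial (one plausible choice)."""
    coeffs = np.polyfit(focal_lengths, dz_values, degree)
    return np.poly1d(coeffs)
```

Given the fitted model, the focal-cut distance for any calibrated focal length f is simply `model(f)`.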
It should be noted that the invention directly uses the two structural parameters of the vision system as variables to express the main target point vector, introduces the gyro value to jointly express the main target point vector when the carrier is located at any set actual position, and uses this main target point vector as the translation vector for conversion into the world coordinate system. Tangent lines are then intersected with a sphere to obtain the structural parameters: the focal-cut distance dz and the main optical axis rotation radius ρ0. In the prior art, the coordinates of the rotation center are obtained by directly intersecting the front main target point, the first back focal point and the tangent point, and the rotation radius is then obtained; no gyro value is introduced for the measurement.
According to the embodiment of the invention, by acquiring the fitting function between the focal length and the focal-cut distance, when forward intersection measurement is carried out on any point to be measured in the predetermined measurement space, the focal-cut distance is quickly determined from the focal length and the acquired fitting function, and the coordinates of the point to be measured in the world coordinate system are acquired more accurately and more quickly based on the image points, the focal-cut distance and the main optical axis rotation radius.
Based on the content of the above embodiment, after acquiring a plurality of target images including the point to be measured by using a vision sensor in the vision system, performing stereo field error correction on the corresponding image points to be measured in each target image and expressing each image point to be measured after the stereo field error correction in the world coordinate system specifically includes: acquiring a plurality of target images including the point to be measured by using a vision sensor in the vision system.
Based on each target image, the image coordinates, in the vision sensor coordinate system, of the corresponding image point to be measured in each target image are determined.
Using the previously acquired image-space vertical-axis correction data W = [Wx, Wy]ᵀ, the image coordinates of the image points to be measured are corrected to obtain the first coordinates, after image-space vertical-axis correction, of each image point to be measured in the vision sensor coordinate system; the coordinates of each image point to be measured are then corrected using the axial correction data a to obtain the corrected second coordinates of each image point to be measured in the vision sensor coordinate system.
The preset-position vector from the origin of the world coordinate system to the origin of the image plane coordinate system is expressed in the world coordinate system as [0, ρ0, -f-dz]ᵀ; the first coordinates and second coordinates in the vision sensor coordinate system are then converted into the world coordinate system to obtain the third coordinates and fourth coordinates, in the world coordinate system, of each image point to be measured.
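A sketch of this correction-and-transform step follows. The image-plane origin offset [0, ρ0, -f-dz] is taken from the text; the exact correction formulas are not recoverable from it, so simple additive corrections are assumed here for illustration.

```python
import numpy as np

def image_point_to_world(xy, Wxy, a_corr, f, dz, rho0, R=np.eye(3)):
    """Apply the vertical-axis correction W = [Wx, Wy] and an axial
    correction a_corr (both assumed additive) to sensor-frame image
    coordinates, attach them to the image-plane origin at
    [0, rho0, -f - dz], and rotate into the world frame with R."""
    x, y = xy
    x_c = x + Wxy[0]
    y_c = y + Wxy[1] + a_corr            # assumed additive corrections
    plane_origin = np.array([0.0, rho0, -f - dz])
    sensor_point = plane_origin + np.array([x_c, y_c, 0.0])
    return R @ sensor_point
```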
According to the embodiment of the invention, performing stereo field error correction on the image points to be measured and converting them into the world coordinate system can further improve the accuracy of the acquired coordinates of the point to be measured in the world coordinate system.
Based on the content of the above embodiment, performing forward intersection measurement on the point to be measured based on the fitting function, the structural parameters of the vision system, and each image point to be measured expressed in the world coordinate system after the stereo field error correction to obtain the coordinate values of the point to be measured in the world coordinate system specifically includes: calibrating the focal length of the vision sensor to determine the focal length f of the vision sensor.
Based on the predetermined fitting function between the focal length f and the focal-cut distance dz, the focal-cut distance dz corresponding to the focal length f is determined.
Based on the vector from the origin Ow of the world coordinate system to the focal point Fp when the carrier is at the preset position, the vector from the origin Ow of the world coordinate system to the focal point corresponding to the point to be measured when the carrier is at the actual position is expressed; wherein the rotation matrix converts the preset position of the carrier to the actual position of the carrier.
FIG. 3 is a schematic diagram of the measurement of a point to be measured by the forward intersection measurement method based on the structural parameters of the vision system. As shown in FIG. 3, based on the coordinates of the point to be measured Bn in the world coordinate system, the third coordinates, in the world coordinate system, of each image point to be measured corresponding to the point Bn, and the focal point Fn corresponding to the point Bn, a three-point collinearity equation is obtained:
Solving gives:
A Taylor expansion is performed to obtain:
wherein:
the error equation is expressed as:
wherein the constant term vector is Ln and the coefficient matrix is Λn.
The n-th iteration correction is solved as: χn = (ΛnᵀΛn)⁻¹ΛnᵀLn.
According to the n-th iteration correction χn, the first solution of the point to be measured Bn corresponding to the third coordinates is obtained through iterative operation.
Similarly, based on the fourth coordinates, in the world coordinate system, of each image point to be measured corresponding to the point to be measured Bn, and the focal point Fn corresponding to the point Bn, the second solution of the point to be measured Bn corresponding to the fourth coordinates is obtained.
According to the first solution and the second solution, the coordinate values of the point to be measured Bn in the world coordinate system are determined.
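The text determines Bn's coordinates "according to" the first and second solutions without fixing the combination rule; taking their mean is one plausible choice, shown here purely as an assumption rather than the patent's stated method.

```python
import numpy as np

def combine_solutions(sol1, sol2):
    """Combine the two forward-intersection solutions; averaging is an
    assumed rule, not one fixed by the text."""
    return 0.5 * (np.asarray(sol1, float) + np.asarray(sol2, float))
```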
According to the embodiment of the invention, forward intersection measurement of any point to be measured is carried out by multi-image bundle adjustment based on the acquired fitting function and the expression of each vector in the world coordinate system, and the coordinates of the point to be measured in the world coordinate system are obtained, so that the point to be measured can be photogrammetrically measured more accurately and efficiently based on the structural parameters of the vision system without any forward control point.
Based on the disclosure of the above embodiments, the vision system includes two or more vision sensors.
Correspondingly, acquiring a plurality of target images including the point to be measured by using the vision sensors in the vision system specifically includes: synchronously acquiring a plurality of target images including the point to be measured with each vision sensor, and processing the image data of the target images acquired by each vision sensor synchronously and in parallel.
According to the embodiment of the invention, more data can be acquired through the plurality of vision sensors, so that the coordinates of the point to be measured in the world coordinate system can be acquired more accurately based on more data.
Based on the content of the above embodiments, the vision system includes two or more vision sensors, and the point to be measured may be a dynamic point.
Correspondingly, after performing forward intersection measurement on the point to be measured and obtaining the coordinate values of the point to be measured in the world coordinate system, the method further includes: based on the coordinates of the point to be measured in the world coordinate system independently determined by each vision sensor, and in combination with the world coordinates obtained by the other vision sensors, further obtaining the motion vector of the point to be measured.
Specifically, for each vision sensor in the vision system, the vision sensor can independently acquire a plurality of target images including points to be measured.
The coordinates of a point to be measured in the world coordinate system can be independently determined based on each target image acquired by each vision sensor.
The motion vector of the point to be measured is obtained based on the differences between the coordinates of the point to be measured in the world coordinate system independently determined by each vision sensor.
According to the embodiment of the invention, the dynamic point to be measured is independently measured by the plurality of vision sensors, so that the motion vector of the dynamic point to be measured can be obtained.
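The motion-vector step reduces to differencing world-frame measurements of the dynamic point. A minimal sketch; how the per-sensor measurements are time-tagged is not specified in the text, so the optional time step is an illustrative reading.

```python
import numpy as np

def motion_vector(p_t0, p_t1, dt=None):
    """Difference of two world-frame measurements of the dynamic point;
    if a time step dt is supplied, the displacement is scaled to a
    velocity vector."""
    d = np.asarray(p_t1, float) - np.asarray(p_t0, float)
    return d / dt if dt else d
```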
FIG. 4 is a second flowchart of the forward intersection measurement method based on the structural parameters of the vision system according to the present invention. As shown in fig. 4, after the forward intersection measurement method based on the structural parameters of the vision system is started, the main target point vector when the carrier is located at the actual position is first expressed in the world coordinate system according to the posture of the target relative to the image plane and the preset position of the carrier in the vision system.
The following need to be acquired: the intrinsic parameters of the vision sensor; the exterior orientation elements between each target and the image plane in any posture; the rotation transformation matrix from the coordinate system of the target to the world coordinate system; and the rotation transformation matrix from the coordinate system of the carrier at the actual position to the coordinate system of the carrier at the preset position.
The main target point vector when the carrier is located at the actual position is then expressed in the world coordinate system by using the rotation transformation matrices and the structural parameters.
Secondly, the multi-target collinearity equations are intersected, and the focal-cut distance and the main optical axis rotation radius are accurately calculated. This specifically includes: converting the front main target point vector corresponding to the image principal point of the sample image of each target in any posture from the coordinate system of the target to the world coordinate system; establishing collinearity-equation common-sphere intersections respectively for the multiple target postures, and accurately calculating the focal-cut distance and the main optical axis rotation radius; and calculating the focal-cut distances at different focal lengths respectively, and fitting to establish the difference mapping matrix and the fitting function between the focal length and the focal-cut distance.
Next, the stereo field error of the image points is corrected and the image points are converted into world coordinates. This specifically includes: correcting, with the image-space vertical-axis correction data and the axial correction data respectively, the two image point vectors to be measured generated by the point to be measured in the target images; and converting the two image point vectors to be measured into the pan-tilt coordinate system.
Finally, forward intersection measurement is performed for any point to be measured in the measurement space. This specifically includes: calibrating the focal length of the vision sensor used for measurement, and acquiring the focal-cut distance at that focal length from the fitting function between the focal length and the focal-cut distance, together with the main optical axis rotation radius; expressing, in the world coordinate system, the focal point when the carrier is at the preset position by using the known structural parameters of the vision system; expressing the world coordinates of the focal point and the image points at each actual position based on the actual position and the preset position; obtaining the collinearity condition equations based on the point to be measured and the corresponding focal points and image points; and obtaining the coordinates of the point to be measured in the world coordinate system.
Fig. 5 is a schematic structural diagram of a forward intersection measurement system based on structural parameters of a vision system according to the present invention. The forward intersection measurement system based on the structural parameters of the vision system provided by the present invention is described below with reference to fig. 5; the forward intersection measurement system described below and the forward intersection measurement method based on the structural parameters of the vision system described above may be referred to correspondingly. As shown in fig. 5, the system includes: a coordinate expression module 501, a function determination module 502, an error correction module 503, and an intersection measurement module 504.
A coordinate expression module 501, configured to respectively express, in a world coordinate system, the main target point vector when the carrier is located at an actual position according to the posture of the target in the visual system relative to the image plane and the preset position of the carrier; wherein the main target point vector when the carrier is located at the actual position is the vector from the origin of the world coordinate system to the main target point when the carrier is located at the actual position; the main target point refers to the projection point of the main point of the visual sensor in the visual system onto the target plane through the main optical axis;
a function determination module 502, configured to determine a fitting function between the focal length and the focal cut distance based on the focal length, the focus and the external orientation elements in the structural parameters of the visual system, the gyro value of the gyro device in the visual system, and the main target point vector when the carrier expressed in the world coordinate system is located at the actual position.
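Once the module above has produced calibrated (focal length, focal cut distance) pairs, the fitting function itself can be obtained by ordinary polynomial fitting. The sketch below is illustrative only: the sample values are made-up placeholders, and a quadratic is one plausible choice of model; the patent does not specify the functional form.

```python
import numpy as np

# Calibrated sample pairs (focal length f, focal cut distance s).
# These values are made-up placeholders for illustration.
f_samples = np.array([8.0, 12.0, 16.0, 25.0, 35.0])
s_samples = np.array([9.1, 13.4, 17.6, 27.2, 37.9])

# Fit a low-order polynomial s = g(f); quadratic is an assumed choice.
coeffs = np.polyfit(f_samples, s_samples, 2)
fit = np.poly1d(coeffs)

# At measurement time, the focal cut distance for the calibrated focal
# length is read off the fitted function:
s_at_20 = fit(20.0)
```

The fitted function then plays the role described in the intersection measurement module: given the calibrated focal length of the vision sensor, it returns the focal cut distance used in the forward intersection computation.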
The error correction module 503 is configured to, after acquiring a plurality of target images including the point to be measured by using the vision sensor, perform stereo field error correction on the image point to be measured corresponding to the point to be measured in each target image, and express each image point to be measured after the stereo field error correction in the world coordinate system; in any two target images, the postures and focus vectors of the point to be measured relative to the visual system are different; the vision sensor is disposed on the carrier; the focus vector refers to the vector from the rotation center of the carrier to the focus.
And the intersection measurement module 504 is configured to perform forward intersection measurement on the to-be-measured point based on the focal cut distance obtained through the fitting function, the main optical axis rotation radius in the structural parameter of the visual system, the gyro value of the gyro device, and each to-be-measured image point expressed in the world coordinate system and subjected to the stereo field error correction, so as to obtain a coordinate value of the to-be-measured point in the world coordinate system.
The main optical axis rotation radius is determined by tangent common-sphere intersection transformation based on the main optical axis; the point to be measured is any point in a predetermined measurement space; the origin of the world coordinate system is positioned at the rotation center of the carrier of the visual system, and the directions of all coordinate axes of the world coordinate system are determined according to the directions of all coordinate axes when the gyro device is positioned at the initial position.
According to the embodiment of the invention, a world coordinate system is established from the structural parameters of the visual system and forward intersection measurement is performed on the point to be measured. On the basis of linear space analysis, a method for forward intersection measurement of spatial points based on the structural parameters of a visual system (or an equivalent hand-eye system) is introduced, realizing standard-free and more accurate photogrammetry of the point to be measured. Experiments show that the error between the coordinates of the point to be measured obtained by this forward intersection measurement method and the true coordinates is about 2.5%.
Fig. 6 illustrates a physical structure diagram of an electronic device, which, as shown in fig. 6, may include: a processor 610, a communications interface 620, a memory 630 and a communication bus 640, wherein the processor 610, the communications interface 620 and the memory 630 communicate with each other via the communication bus 640. The processor 610 may invoke logic instructions in the memory 630 to perform a forward intersection measurement method based on structural parameters of a vision system, the method comprising: respectively expressing, in a world coordinate system, main target point vectors when the carrier is located at an actual position according to the posture of a target in a visual system relative to an image plane and the preset position of the carrier; determining a fitting function between the focal length and the focal cut distance based on the focal length, the focal point and the external orientation elements in the structural parameters of the visual system, the gyro value of a gyro device in the visual system and the main target point vector when the carrier expressed in the world coordinate system is located at the actual position; after a plurality of target images including the point to be measured are obtained by using a vision sensor, performing stereo field error correction on the corresponding image points to be measured in each target image, and expressing each image point to be measured after the stereo field error correction in the world coordinate system; and performing forward intersection measurement on the point to be measured based on the focal cut distance obtained by the fitting function, the main optical axis rotation radius in the structural parameters of the visual system, the gyro value of the gyro device and each image point to be measured expressed in the world coordinate system and subjected to the stereo field error correction, so as to obtain the coordinate values of the point to be measured in the world coordinate system.
In addition, the logic instructions in the memory 630 may be implemented in the form of software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program codes.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the forward intersection measurement method based on structural parameters of a vision system provided by the above methods, the method comprising: respectively expressing, in a world coordinate system, main target point vectors when the carrier is located at an actual position according to the posture of a target in a visual system relative to an image plane and the preset position of the carrier; determining a fitting function between the focal length and the focal cut distance based on the focal length, the focal point and the external orientation elements in the structural parameters of the visual system, the gyro value of a gyro device in the visual system and the main target point vector when the carrier expressed in the world coordinate system is located at the actual position; after a plurality of target images including the point to be measured are obtained by using a vision sensor, performing stereo field error correction on the corresponding image points to be measured in each target image, and expressing each image point to be measured after the stereo field error correction in the world coordinate system; and performing forward intersection measurement on the point to be measured based on the focal cut distance obtained by the fitting function, the main optical axis rotation radius in the structural parameters of the visual system, the gyro value of the gyro device and each image point to be measured expressed in the world coordinate system and subjected to the stereo field error correction, so as to obtain the coordinate values of the point to be measured in the world coordinate system.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the forward intersection measurement method based on structural parameters of a vision system provided above, the method comprising: respectively expressing, in a world coordinate system, main target point vectors when the carrier is located at an actual position according to the posture of a target in a visual system relative to an image plane and the preset position of the carrier; determining a fitting function between the focal length and the focal cut distance based on the focal length, the focal point and the external orientation elements in the structural parameters of the visual system, the gyro value of a gyro device in the visual system and the main target point vector when the carrier expressed in the world coordinate system is located at the actual position; after a plurality of target images including the point to be measured are obtained by using a vision sensor, performing stereo field error correction on the corresponding image points to be measured in each target image, and expressing each image point to be measured after the stereo field error correction in the world coordinate system; and performing forward intersection measurement on the point to be measured based on the focal cut distance obtained by the fitting function, the main optical axis rotation radius in the structural parameters of the visual system, the gyro value of the gyro device and each image point to be measured expressed in the world coordinate system and subjected to the stereo field error correction, so as to obtain the coordinate values of the point to be measured in the world coordinate system.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.