Intelligent forking method and system
1. An intelligent forking method is characterized by comprising the following steps:
acquiring a picture of the working area with a camera and extracting features to obtain an initial first image coordinate of the object to be conveyed and an initial third image coordinate of an obstacle in the working area, and obtaining a second coordinate of the conveying terminal point;
acquiring the initial position coordinates of the fork claw, converting the initial first image coordinate and the initial third image coordinate into a first initial coordinate and a third initial coordinate through the conversion from image coordinates to actual three-dimensional coordinates, and planning a first path from the fork claw to the object to be conveyed;
the fork claw runs along the first path while a second picture of the working area and the real-time position coordinates of the fork claw are collected in real time; the installation position of the camera at the tail end of the robot, namely the transformation relation between the camera coordinate system and the robot fork claw flange coordinate system, is obtained by a hand-eye calibration method, the relative position relation between the camera coordinate system and the world coordinate system is determined, and the pose of the fork claw in the robot world coordinate system is determined through this transformation;
extracting features according to the second picture acquired in real time to acquire a first image coordinate of the object to be conveyed, a second image coordinate of a conveying terminal point and a third image coordinate of an obstacle in a working area;
respectively converting the first image coordinate, the second image coordinate and the third image coordinate into a first coordinate, a second coordinate and a third coordinate, and updating the first path in real time;
the fork claw moves along the first path updated in real time to the point of the object to be conveyed and forks the object;
and planning a second path to the conveying terminal point in real time according to the second coordinate and the third coordinate, and finishing conveying according to the second path.
2. The method according to claim 1, characterized in that said conversion of image coordinates into actual three-dimensional coordinates comprises in particular the steps of:
converting the image coordinates to Cartesian coordinates:

$$x = (u - u_0)\,dx, \qquad y = (v - v_0)\,dy$$

establishing a camera coordinate system by using the video camera, and converting the Cartesian coordinates into camera coordinates:

$$x = f\,\frac{x_C}{z_C}, \qquad y = f\,\frac{y_C}{z_C}, \qquad z = f$$

converting the camera coordinates to actual three-dimensional coordinates:

$$\begin{bmatrix} x_C \\ y_C \\ z_C \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^\top & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

wherein (u, v) is an image coordinate, (x, y) is a two-dimensional Cartesian coordinate, (x, y, z) is a three-dimensional Cartesian coordinate, (u0, v0) is the initial coordinate of the image coordinate system, dx and dy are the physical sizes of a pixel along the two image axes, (xC, yC, zC) is the camera coordinate, (xw, yw, zw) is the coordinate of the point w in the actual three-dimensional coordinate system, the R matrix is a rotation matrix, the T matrix is a translation matrix, and f is the focal length of the camera.
3. The method of claim 1, wherein path planning is performed by a SLAM optimization method based on a probabilistic model, the probabilistic model being:
$$p(A_k \mid B) = \frac{p(B \mid A_k)\,p(A_k)}{\sum_{i=1}^{m} p(B \mid A_i)\,p(A_i)}$$

wherein p(Ak) is the probability of event Ak, p(B|Ak) is the probability of event B given that event Ak has occurred, p(Ak|B) is the probability of event Ak given that event B has occurred, and m is the number of all possible events.
4. The method according to claim 1, wherein the camera is an industrial camera with a CCD and/or CMOS photosensitive chip.
5. The method of claim 1, wherein the camera is mounted on the fork claw.
6. An intelligent forking system, comprising:
the data acquisition module is used for acquiring a working area picture in real time by using the camera, extracting features, acquiring a first image coordinate of an object to be conveyed and a third image coordinate of an obstacle in the working area in real time, acquiring position coordinates of the fork claw and the camera in real time, acquiring a transformation relation of a flange coordinate system of the fork claw and the camera, and acquiring a second coordinate of a conveying terminal point;
the coordinate conversion module is connected with the data acquisition module and used for converting the first image coordinate and the third image coordinate into a first coordinate and a third coordinate through conversion from the image coordinate to an actual three-dimensional coordinate;
and the path planning module is connected with the coordinate conversion module and the data acquisition module, and is used for planning a first path from the fork claw to the object to be transported according to the first coordinate and the third coordinate updated in real time and the transformation relation between the fork claw and the flange coordinate system of the camera, and, after the fork claw forks the object at the point to be transported, for planning a second path from the object to be transported to the transport destination according to the third coordinate, the second coordinate and the same transformation relation.
7. The system according to claim 6, wherein the conversion of the image coordinates into actual three-dimensional coordinates in the coordinate conversion module comprises in particular the steps of:
converting the image coordinates to Cartesian coordinates:

$$x = (u - u_0)\,dx, \qquad y = (v - v_0)\,dy$$

establishing a camera coordinate system by using the video camera, and converting the Cartesian coordinates into camera coordinates:

$$x = f\,\frac{x_C}{z_C}, \qquad y = f\,\frac{y_C}{z_C}, \qquad z = f$$

converting the camera coordinates to actual three-dimensional coordinates:

$$\begin{bmatrix} x_C \\ y_C \\ z_C \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^\top & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

wherein (u, v) is an image coordinate, (x, y) is a two-dimensional Cartesian coordinate, (x, y, z) is a three-dimensional Cartesian coordinate, (u0, v0) is the initial coordinate of the image coordinate system, dx and dy are the physical sizes of a pixel along the two image axes, (xC, yC, zC) is the camera coordinate, (xw, yw, zw) is the coordinate of the point w in the actual three-dimensional coordinate system, the R matrix is a rotation matrix, the T matrix is a translation matrix, and f is the focal length of the camera.
8. The system of claim 6, wherein the path planning module performs path planning by a SLAM optimization method based on a probabilistic model, the probabilistic model being:
$$p(A_k \mid B) = \frac{p(B \mid A_k)\,p(A_k)}{\sum_{i=1}^{m} p(B \mid A_i)\,p(A_i)}$$

wherein p(Ak) is the probability of event Ak, p(B|Ak) is the probability of event B given that event Ak has occurred, p(Ak|B) is the probability of event Ak given that event B has occurred, and m is the number of all possible events.
9. The system according to claim 6, wherein the camera is an industrial camera with a CCD and/or CMOS photosensitive chip.
10. The system of claim 6, wherein the camera is mounted on the fork claw.
Background
Material and cargo handling in large enterprise plants is an important part of industrial production. Loading, unloading and carrying of materials are auxiliary links in the production process of manufacturing enterprises, yet they are indispensable for connecting processes, workshops and factories with one another.
The conventional industry often uses manually operated forklifts to carry goods, which suffers from low labor efficiency, large potential safety hazards and a series of related problems. In recent years, driven by the construction of intelligent production factories, research on material handling has developed towards computer-controlled, automatically identifying robots, raising the design and manufacturing level of material handling and transportation to a new stage.
Forking is a common method of goods transportation. The intelligent forklift has great advantages in sorting speed, management software platform, warehouse entry and exit speed, management efficiency and user experience, and is being introduced into more and more intelligent transportation scenes. It needs no human driver; it combines bar code technology, wireless local area network technology and data acquisition technology, and uses navigation modes such as electromagnetic induction, machine vision and laser radar together with auxiliary RFID identification, so that it can run on complex paths, reliably track multiple stations, and remain convenient to operate. The intelligent forklift is more intelligent and flexible, with the basic characteristics of low cost, high efficiency and safe operation. It can meet the customized requirements of enterprises and replace manual carrying or manually operated forklifts, with obvious advantages.
In view of this, the present invention designs an intelligent robot forking method positioned as economical, practical and appropriately advanced, built on artificial intelligence, computer control, intelligent mobile robots, visual servoing and other advanced technologies and related products. Through positioning, navigation and image recognition, it realizes the automated processes of stable clamping, loading, carrying, unloading and warehousing of products, thereby improving production efficiency and reducing the probability of safety accidents.
Disclosure of Invention
In view of the above, an objective of the present invention is to provide an intelligent forking method, which can fully automatically fork an object to be transported and transport the object to a destination.
In order to achieve the purpose, the technical scheme of the invention is as follows: an intelligent forking method comprises the following steps:
acquiring a picture of the working area with a camera and extracting features to obtain an initial first image coordinate of the object to be conveyed and an initial third image coordinate of an obstacle in the working area, and obtaining a second coordinate of the conveying terminal point;
acquiring the initial position coordinates of the fork claw, converting the initial first image coordinate and the initial third image coordinate into a first initial coordinate and a third initial coordinate through the conversion from image coordinates to actual three-dimensional coordinates, and planning a first path from the fork claw to the object to be conveyed;
the fork claw runs along the first path while a second picture of the working area and the real-time position coordinates of the fork claw are collected in real time. Since the robot drives the camera to move with it, the relative relation between the camera coordinate system and the robot world coordinate system changes continuously, while the relative position relation between the camera and the robot fork claw remains unchanged because the camera is rigidly attached. Therefore, the installation position of the camera at the tail end of the robot, namely the transformation relation between the camera coordinate system and the robot fork claw flange coordinate system, is obtained by a hand-eye calibration method; the relative position relation between the camera coordinate system and the world coordinate system is then determined, and the pose of the fork claw in the robot world coordinate system is determined through this transformation;
Extracting features according to the second picture acquired in real time to acquire a first image coordinate of the object to be conveyed, a second image coordinate of a conveying terminal point and a third image coordinate of an obstacle in a working area;
respectively converting the first image coordinate, the second image coordinate and the third image coordinate into a first coordinate, a second coordinate and a third coordinate, and updating the first path in real time;
the fork claw moves along the first path updated in real time to the point of the object to be conveyed and forks the object;
and planning a second path to the conveying terminal point in real time according to the second coordinate and the third coordinate, and finishing conveying according to the second path.
Further, the conversion of the image coordinates to actual three-dimensional coordinates specifically includes the steps of:
converting the image coordinates to Cartesian coordinates:

$$x = (u - u_0)\,dx, \qquad y = (v - v_0)\,dy$$

establishing a camera coordinate system by using the video camera, and converting the Cartesian coordinates into camera coordinates:

$$x = f\,\frac{x_C}{z_C}, \qquad y = f\,\frac{y_C}{z_C}, \qquad z = f$$

converting the camera coordinates to actual three-dimensional coordinates:

$$\begin{bmatrix} x_C \\ y_C \\ z_C \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^\top & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

wherein (u, v) is an image coordinate, (x, y) is a two-dimensional Cartesian coordinate, (x, y, z) is a three-dimensional Cartesian coordinate, (u0, v0) is the initial coordinate of the image coordinate system, dx and dy are the physical sizes of a pixel along the two image axes, (xC, yC, zC) is the camera coordinate, (xw, yw, zw) is the coordinate of the point w in the actual three-dimensional coordinate system, the R matrix is a rotation matrix, the T matrix is a translation matrix, and f is the focal length of the camera.
Further, path planning is carried out through a SLAM optimization method based on a probability model, the probability model being:

$$p(A_k \mid B) = \frac{p(B \mid A_k)\,p(A_k)}{\sum_{i=1}^{m} p(B \mid A_i)\,p(A_i)}$$

wherein p(Ak) is the probability of event Ak, p(B|Ak) is the probability of event B given that event Ak has occurred, p(Ak|B) is the probability of event Ak given that event B has occurred, and m is the number of all possible events.
Further, the camera is an industrial camera with a CCD and/or CMOS photosensitive chip.
Further, the camera is mounted on the fork claw.
The invention also aims to provide an intelligent forking system which can be used for automatically transporting cargoes in a factory.
In order to achieve the purpose, the technical scheme of the invention is as follows: an intelligent forking system comprising:
the data acquisition module is used for acquiring a working area picture in real time by using the camera, extracting features, acquiring a first image coordinate of an object to be conveyed and a third image coordinate of an obstacle in the working area in real time, acquiring position coordinates of the fork claw and the camera in real time, acquiring a transformation relation of a flange coordinate system of the fork claw and the camera, and acquiring a second coordinate of a conveying terminal point;
the coordinate conversion module is connected with the data acquisition module and used for converting the first image coordinate and the third image coordinate into a first coordinate and a third coordinate through conversion from the image coordinate to an actual three-dimensional coordinate;
and the path planning module is connected with the coordinate conversion module and the data acquisition module, and is used for planning a first path from the fork claw to the object to be transported according to the first coordinate and the third coordinate updated in real time and the transformation relation between the fork claw and the flange coordinate system of the camera, and, after the fork claw forks the object at the point to be transported, for planning a second path from the object to be transported to the transport destination according to the third coordinate, the second coordinate and the same transformation relation.
Further, the conversion of the image coordinates to the actual three-dimensional coordinates in the coordinate conversion module specifically includes the following steps:
converting the image coordinates to Cartesian coordinates:

$$x = (u - u_0)\,dx, \qquad y = (v - v_0)\,dy$$

establishing a camera coordinate system by using the video camera, and converting the Cartesian coordinates into camera coordinates:

$$x = f\,\frac{x_C}{z_C}, \qquad y = f\,\frac{y_C}{z_C}, \qquad z = f$$

converting the camera coordinates to actual three-dimensional coordinates:

$$\begin{bmatrix} x_C \\ y_C \\ z_C \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^\top & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

wherein (u, v) is an image coordinate, (x, y) is a two-dimensional Cartesian coordinate, (x, y, z) is a three-dimensional Cartesian coordinate, (u0, v0) is the initial coordinate of the image coordinate system, dx and dy are the physical sizes of a pixel along the two image axes, (xC, yC, zC) is the camera coordinate, (xw, yw, zw) is the coordinate of the point w in the actual three-dimensional coordinate system, the R matrix is a rotation matrix, the T matrix is a translation matrix, and f is the focal length of the camera.
Further, the path planning module plans the path by using a SLAM optimization method based on a probabilistic model, where the probabilistic model is:
$$p(A_k \mid B) = \frac{p(B \mid A_k)\,p(A_k)}{\sum_{i=1}^{m} p(B \mid A_i)\,p(A_i)}$$

wherein p(Ak) is the probability of event Ak, p(B|Ak) is the probability of event B given that event Ak has occurred, p(Ak|B) is the probability of event Ak given that event B has occurred, and m is the number of all possible events.
Further, the camera is an industrial camera with a CCD and/or CMOS photosensitive chip.
Further, the camera is mounted on the fork claw.
Compared with the prior art, the invention has the following advantages:
The invention provides an intelligent forking method and system which can move freely in the working area, avoid obstacles in real time, run on complex paths and reliably track multiple stations, and are convenient to operate, while meeting the requirements of automatically grabbing the object to be transported, safely transferring it to a temporary storage point, placing it at an appointed position, and the like.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. The drawings in the following description are examples of the invention and it will be clear to a person skilled in the art that other drawings can be derived from them without inventive exercise.
FIG. 1 is a block diagram of an intelligent forking system of the present invention;
FIG. 2 is a schematic diagram of the transformation of image coordinates to actual three-dimensional coordinates according to the present invention;
FIG. 3 is a process diagram of the transformation relationship between the fork and the flange coordinate system of the camera according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the scope of protection of the present invention.
The examples are given for better illustration of the invention and are not intended to limit the invention to them. Those skilled in the art may therefore make insubstantial modifications and adaptations of the embodiments in light of the above description and still remain within the scope of the invention.
Example 1
Referring to fig. 1, a diagram of an intelligent forking system according to the present invention is shown, the system including: the data acquisition module 1 is used for acquiring a working area picture in real time by using a camera, extracting features, acquiring a first image coordinate of an object to be conveyed and a third image coordinate of an obstacle in the working area in real time, acquiring position coordinates of a fork claw and the camera in real time, acquiring a transformation relation of the fork claw and a flange coordinate system of the camera, and acquiring a second coordinate of a conveying terminal point;
in this embodiment, the camera is an industrial camera with a CCD and/or CMOS photosensitive chip, and the camera is mounted on the fork claw.
The coordinate conversion module 2 is connected with the data acquisition module 1 and is used for converting the first image coordinate and the third image coordinate into a first coordinate and a third coordinate through the conversion from the image coordinate to the actual three-dimensional coordinate;
in this embodiment, the conversion from the image coordinate to the actual three-dimensional coordinate specifically includes the following steps:
converting the image coordinates to Cartesian coordinates:

$$x = (u - u_0)\,dx, \qquad y = (v - v_0)\,dy$$

establishing a camera coordinate system by using the video camera, and converting the Cartesian coordinates into camera coordinates:

$$x = f\,\frac{x_C}{z_C}, \qquad y = f\,\frac{y_C}{z_C}, \qquad z = f$$

converting the camera coordinates to actual three-dimensional coordinates:

$$\begin{bmatrix} x_C \\ y_C \\ z_C \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^\top & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$

wherein (u, v) is an image coordinate, (x, y) is a two-dimensional Cartesian coordinate, (x, y, z) is a three-dimensional Cartesian coordinate, (u0, v0) is the initial coordinate of the image coordinate system, dx and dy are the physical sizes of a pixel along the two image axes, (xC, yC, zC) is the camera coordinate, (xw, yw, zw) is the coordinate of the point w in the actual three-dimensional coordinate system, the R matrix is a rotation matrix, the T matrix is a translation matrix, and f is the focal length of the camera.
The path planning module 3 is connected with the coordinate conversion module 2 and the data acquisition module 1, and is used for planning a first path from the fork claw to the object to be transported according to the first coordinate and the third coordinate updated in real time and the transformation relation between the fork claw and the flange coordinate system of the camera, and, after the fork claw forks the object at the point to be transported, for planning a second path from the object to be transported to the transport destination according to the third coordinate, the second coordinate and the same transformation relation.
Further, the path planning module plans the path by using a SLAM optimization method based on a probability model, wherein the probability model is as follows:
$$p(A_k \mid B) = \frac{p(B \mid A_k)\,p(A_k)}{\sum_{i=1}^{m} p(B \mid A_i)\,p(A_i)}$$

wherein p(Ak) is the probability of event Ak, p(B|Ak) is the probability of event B given that event Ak has occurred, p(Ak|B) is the probability of event Ak given that event B has occurred, and m is the number of all possible events.
Example 2
Based on the system of embodiment 1, the present embodiment provides an intelligent forking method, including the following steps:
S1: acquiring a picture of the working area with a camera and extracting features to obtain an initial first image coordinate of the object to be conveyed and an initial third image coordinate of an obstacle in the working area, and obtaining a second coordinate of the conveying terminal point;
in this embodiment, the camera is an industrial camera with a CCD and/or CMOS photosensitive chip, and the camera is mounted on the fork claw, which may be a mechanical fork claw commonly used at present.
S2: acquiring the initial position coordinates of the fork claw, converting the initial first image coordinate and the initial third image coordinate into a first initial coordinate and a third initial coordinate through the conversion from image coordinates to actual three-dimensional coordinates, and planning a first path from the fork claw to the object to be conveyed;
In this embodiment, a camera is used to collect a scene image of the field work, feature extraction is performed on the image, the deviation of the workpiece coordinate system is calculated by an internal algorithm, and the data are then transmitted to the robot to guide it in establishing a new workpiece coordinate system; the specific process principle is shown in fig. 2:
The image coordinate system (u, v) is a two-dimensional plane coordinate system defined on the image. In image descriptions it is mainly expressed in pixels, but it can also be expressed in units of actual physical length, i.e. as Cartesian coordinates. As shown in fig. 2, the initial coordinate of the image coordinate system (u, v) is (u0, v0), with the coordinate-axis directions as indicated in the figure. The origin of the actual physical coordinate system (x, y) is the center point O of the physical image, corresponding to (u0, v0), the midpoint of the maxima of the two axes in pixel units. Its coordinate-axis directions coincide with the pixel coordinate axes, and the Cartesian coordinates may take negative values. The transformation relation between the two coordinates of any point in the image is:

$$x = (u - u_0)\,dx, \qquad y = (v - v_0)\,dy$$

where dx and dy are the physical sizes of a pixel along the two image axes.
The above formula can be expressed as a homogeneous coordinate matrix:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
A camera coordinate system (XC, YC, ZC) can then be established. Its origin is the optical center point OC of the lens; its ZC axis (the lens axis) is perpendicular to the image plane and passes through the center point O of the image coordinate system; its XC and YC axes are respectively parallel to the x and y axes of the image coordinate system. An outside point W has the coordinates (XC, YC, ZC) in the camera coordinate system, and its projected point m in the image coordinate system has the coordinates (um, vm), or equivalently (xm, ym). The conversion between points in the Cartesian coordinate system and points in the camera coordinate system is obtained as follows.
The coordinate value of point W in the camera coordinate system shown in fig. 2 is (XC, YC, ZC), and its image point m has the coordinates (x, y, z) in the Cartesian coordinate system of the image. From the geometric relation the following formula is obtained:

$$x = f\,\frac{X_C}{Z_C}, \qquad y = f\,\frac{Y_C}{Z_C}, \qquad z = f$$
where f is the focal length of the industrial camera; the coordinate value z = f follows from the similar-triangle principle. The above formula is expressed as a homogeneous matrix equation:

$$Z_C \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix}$$
The actual three-dimensional coordinate system (XW, YW, ZW) is a reference coordinate system arbitrarily set by the user, generally set, in millimeter units, at a position convenient for describing the position of the object and for calculation. To describe the pose parameters of the object to be transported, this embodiment selects the robot coordinate system as the actual three-dimensional coordinate system, which also reduces the transformation calculations between the two coordinate systems. As shown in the figure, point W has the coordinate value (xw, yw, zw) in the actual three-dimensional coordinate system (XW, YW, ZW); its conversion to a coordinate value in the camera coordinate system (XC, YC, ZC) is described by the homogeneous coordinate transformation formula:

$$\begin{bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^\top & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
wherein the R matrix is a rotation matrix and the T matrix is a translation matrix, and [R | T] is a 3 × 4 matrix. The pixel coordinates of any known point in the image can then be converted by the above equations into the corresponding actual three-dimensional coordinate value:

$$Z_C \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/dx & 0 & u_0 \\ 0 & f/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}$$
wherein ZC is a constant, namely the value of the ZC-axis coordinate of the point W in the camera coordinate system.
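To make the chain of transformations concrete, the following is a minimal Python/NumPy sketch of the back-projection from a pixel to actual three-dimensional coordinates. It assumes the intrinsic parameters (f, dx, dy, u0, v0) and the extrinsic parameters (R, T) are already known from calibration, and that the depth ZC of the point in the camera frame is given, for example from the known height of the work plane; all numeric values and names are illustrative rather than taken from this embodiment.

```python
import numpy as np

def pixel_to_world(u, v, z_c, K, R, T):
    """Back-project pixel (u, v) with known camera-frame depth z_c,
    inverting  z_c * [u, v, 1]^T = K @ (R @ Xw + T)."""
    uv1 = np.array([u, v, 1.0])
    Xc = z_c * np.linalg.inv(K) @ uv1   # ray in the camera frame, scaled to z_c
    Xw = np.linalg.inv(R) @ (Xc - T)    # invert the rigid world-to-camera transform
    return Xw

# Illustrative intrinsics: f = 8 mm, pixel size dx = dy = 0.005 mm,
# principal point (u0, v0) = (640, 480)
f, dx, dy, u0, v0 = 8.0, 0.005, 0.005, 640.0, 480.0
K = np.array([[f / dx, 0.0,    u0],
              [0.0,    f / dy, v0],
              [0.0,    0.0,    1.0]])
R = np.eye(3)                  # assumed extrinsics for this example
T = np.zeros(3)
print(pixel_to_world(700.0, 500.0, 1500.0, K, R, T))  # world coordinates in mm
```

Selecting the robot coordinate system as the actual three-dimensional coordinate system, as this embodiment does, makes the resulting point directly usable for path planning.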
In this embodiment, the coordinate values of any point in space described by two different coordinate systems are different; the mapping that converts the coordinate value of a point from one coordinate system into the other is called a coordinate transformation.
Further, in this embodiment the image processing algorithm uses the Canny algorithm to obtain an edge image of the object to be transported, the Hough transform to detect its attitude, and the Hu moment algorithm to detect the coordinates of its center of gravity. Because the captured image is affected by the external environment, the traditional Canny algorithm is improved with adaptive edge detection, so that the acquired image can be processed into a good edge image in real time in a changing environment. Finally, the image position and attitude parameters of the object to be transported are obtained by processing the image, the actual pose parameters of the container in the robot coordinate system are obtained through the coordinate transformation from the image coordinate system to the actual three-dimensional coordinate system, feedback is provided for intelligent transportation by the transport robot, and an initial, obstacle-avoiding first path is planned from the first coordinate and the third coordinate in the same coordinate system.
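As an illustration of this processing chain, a minimal OpenCV sketch follows: Canny for the edge image, the Hough transform for the attitude, and image/Hu moments for the center of gravity. The file name and thresholds are assumptions, and the adaptive threshold selection described above is simplified here to fixed values.

```python
import cv2
import numpy as np

img = cv2.imread("workpiece.png", cv2.IMREAD_GRAYSCALE)  # illustrative path
assert img is not None, "image not found"
blur = cv2.GaussianBlur(img, (5, 5), 0)

# Edge image of the object to be transported (fixed thresholds for brevity)
edges = cv2.Canny(blur, 50, 150)

# Hough transform: the dominant line angle approximates the object's attitude
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=120)
if lines is not None:
    theta = float(lines[0][0][1])  # angle of the strongest line, in radians
    print("estimated attitude angle (deg):", np.degrees(theta))

# Image moments give the centre-of-gravity pixel coordinates
m = cv2.moments(edges, binaryImage=True)
if m["m00"] > 0:
    u_c, v_c = m["m10"] / m["m00"], m["m01"] / m["m00"]
    print("centre of gravity (u, v):", u_c, v_c)

hu = cv2.HuMoments(m).flatten()  # Hu invariants, useful for shape matching
```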
Further, SLAM refers to simultaneous localization and mapping. In this embodiment, path planning is performed by a SLAM optimization method based on a probabilistic model, the probabilistic model being:

$$p(A_k \mid B) = \frac{p(B \mid A_k)\,p(A_k)}{\sum_{i=1}^{m} p(B \mid A_i)\,p(A_i)}$$

wherein p(Ak) is the probability of event Ak, p(B|Ak) is the probability of event B given that event Ak has occurred, p(Ak|B) is the probability of event Ak given that event B has occurred, and m is the number of all possible events.
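As a sketch of how such a model is applied, the snippet below performs a single Bayesian update over m discrete hypotheses, for example candidate poses in an occupancy grid; the prior and likelihood values are made up for illustration.

```python
import numpy as np

def bayes_update(prior, likelihood):
    """p(Ak | B) = p(B | Ak) p(Ak) / sum_i p(B | Ai) p(Ai)."""
    joint = likelihood * prior   # p(B | Ak) p(Ak) for each hypothesis k
    return joint / joint.sum()   # normalise by the total probability of B

prior = np.array([0.25, 0.25, 0.25, 0.25])       # p(Ak), m = 4 hypotheses
likelihood = np.array([0.10, 0.60, 0.20, 0.10])  # p(B | Ak) for observation B
posterior = bayes_update(prior, likelihood)
print(posterior)  # the second hypothesis becomes the most probable pose
```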
S3: the fork claw operates according to the first path, a second picture of the working area and real-time position coordinates of the fork claw are collected in real time, and a transformation relation between the fork claw and a flange coordinate system of the camera is obtained;
In practical use, since the robot fork claw carries the camera as it moves, the relative relation between the camera coordinate system and the actual three-dimensional coordinate system of the robot changes continuously, while the relative position relation between the camera and the robot actuator remains unchanged because the camera is rigidly attached. The purpose of this step is therefore to obtain the installation position of the camera at the end of the robot, i.e. the transformation relation between the camera coordinate system and the robot end-flange coordinate system. The varying pose relation between the camera coordinate system and the actual three-dimensional coordinates of the robot can then be obtained from the current pose of the robot end-flange coordinate system and this calibration result. The calibration method generally adopted is as follows: the robot is adjusted so that the camera shoots the same target in different poses, and the transformation parameters of the camera relative to the robot end are obtained from the robot poses and the external parameters of the camera relative to the target.
Specifically, four coordinate systems are referenced in this step, namely a base coordinate system, a fork-claw coordinate system, a camera coordinate system, and a calibration object coordinate system, as shown in fig. 3.
Here baseHcal represents the conversion relation from the base coordinate system to the calibration object coordinate system, comprising a rotation matrix and a translation vector, and camHtool represents the conversion relation from the camera coordinate system to the fork claw coordinate system. These two conversion relations are invariant during the movement of the fork claw. camHcal can be obtained by camera calibration, and baseHtool can be read from the robot system.
The fork claw is then controlled to move from position 1 to position 2. At position 1:
base = baseHtool (1)* tool(1)
tool(1) = inv(camHtool)*cam(1)
cam(1) = camHcal(1)*obj
combining the above three formulas:
base = baseHtool (1)* inv(camHtool)* camHcal(1)*obj
After the fork claw moves to position 2:
base = baseHtool (2)* inv(camHtool)* camHcal(2)*obj
Since base and obj are fixed:
baseHtool (1)* inv(camHtool)* camHcal(1)=baseHtool (2)* inv(camHtool)* camHcal(2)
camHcal can be obtained as external parameters through camera calibration, and baseHtool is known and can be read out from a common robot, so camHtool is the only unknown. By teaching several groups of data at different camera positions with the hand-eye setup, several groups of linear over-determined equation systems are obtained, and cv::solve of OpenCV can be called to solve them for the camHtool matrix.
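For reference, recent OpenCV versions (4.1 and later) wrap this over-determined AX = XB solution in cv2.calibrateHandEye, so the per-group cv::solve calls need not be assembled by hand. The sketch below synthesizes consistent test poses from a made-up ground-truth camHtool and recovers it; in practice the baseHtool poses would be read from the robot and the camHcal poses obtained from camera calibration.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)

def rand_T():
    """Random homogeneous transform (illustrative test data only)."""
    R, _ = cv2.Rodrigues(rng.normal(size=3))
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = rng.normal(size=3)
    return T

X_true = rand_T()      # ground-truth camHtool, used only to build test data
base_T_obj = rand_T()  # fixed pose of the calibration object in the base frame

R_tool2base, t_tool2base, R_obj2cam, t_obj2cam = [], [], [], []
for _ in range(10):                      # ten taught fork-claw poses
    base_T_tool = rand_T()               # baseHtool, read from the robot
    # camHcal consistent with base and obj being fixed:
    cam_T_obj = np.linalg.inv(base_T_tool @ X_true) @ base_T_obj
    R_tool2base.append(base_T_tool[:3, :3])
    t_tool2base.append(base_T_tool[:3, 3].reshape(3, 1))
    R_obj2cam.append(cam_T_obj[:3, :3])
    t_obj2cam.append(cam_T_obj[:3, 3].reshape(3, 1))

# Solves the over-determined hand-eye system (Tsai-Lenz method by default)
R_cam2tool, t_cam2tool = cv2.calibrateHandEye(
    R_tool2base, t_tool2base, R_obj2cam, t_obj2cam)
print(np.allclose(R_cam2tool, X_true[:3, :3], atol=1e-6))  # True for consistent data
```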
S4: extracting features according to the second picture acquired in real time to acquire a first image coordinate of the object to be conveyed, a second image coordinate of a conveying terminal point and a third image coordinate of an obstacle in a working area;
In the actual carrying process, the positions of the fork claw and the camera change continuously, so the acquired picture information differs; the picture is therefore updated in real time, and the image coordinates of the various obstacles and of the object to be carried are extracted;
S5: respectively converting the first image coordinate, the second image coordinate and the third image coordinate into a first coordinate, a second coordinate and a third coordinate, and updating the first path in real time;
In this step, referring to steps S2 and S3, the different coordinate systems are converted into the same coordinate system, and the first path from the fork claw to the object to be transported is planned according to the first coordinate and the third coordinate updated in real time and the transformation relation between the fork claw and the flange coordinate system of the camera;
S6: the fork claw moves along the first path updated in real time to the point of the object to be carried and forks it;
S7: planning a second path to the conveying terminal point in real time according to the second coordinate and the third coordinate, and finishing conveying according to the second path.
Since the object to be transported needs to be carried to the destination after the forking in step S6 is completed, steps S2 and S3 are repeated, and the second path from the object to be transported to the transport destination is planned according to the third coordinate, the second coordinate and the conversion relation between the fork claw and the flange coordinate system of the camera, thereby completing the transportation.
Preferably, after the transport is completed, the fork claw can also be moved by the same procedure to its assigned idle position.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and those skilled in the art can make various modifications without departing from the spirit and scope of the present invention.