Real-time obstacle avoidance system based on RGB-D


1. An RGB-D based real-time obstacle avoidance system, the system comprising:

an image acquisition module, a calibration module, a common obstacle detection module, a special obstacle detection module, a verification module, an obstacle avoidance decision module, and a movement control module; wherein:

the image acquisition module is used for acquiring the original depth information and the RGB information of obstacles from the RGB-D camera and outputting them respectively to the calibration module, the common obstacle detection module, and the special obstacle detection module;

the calibration module is used for processing the depth information acquired by the image acquisition module to calibrate the ground background and the camera installation angle, so as to obtain the ground depth information and the camera installation angle information;

the common obstacle detection module is used for processing the depth information and the RGB information output by the image acquisition module to extract obstacle candidates, which undergo a secondary judgment by the verification module to ensure they are true obstacles rather than false detections, so that obstacles are finally detected accurately and their position information is output;

the special obstacle detection module is used for detecting pedestrians and other robots from the depth information and the RGB information output by the image acquisition module and outputting their position information;

the obstacle avoidance decision module is used for processing obstacle position information output by the common obstacle detection module and the special obstacle detection module to generate an obstacle avoidance strategy;

the movement control module is used for controlling the movement of the robot according to the obstacle avoidance strategy output by the obstacle avoidance decision module, so as to realize the obstacle avoidance function.

2. The RGB-D based real-time obstacle avoidance system according to claim 1, wherein: the RGB-D camera is installed at the top of the mobile transfer robot, and its inclination angle ensures that no part of the robot body appears within the camera's field of view.

3. The RGB-D based real-time obstacle avoidance system according to claim 1, wherein: the real-time obstacle avoidance system adopts multithreading, the modules compute in parallel, and the detection frame rate is adjusted automatically according to the running speed of the mobile transfer robot.

4. The RGB-D based real-time obstacle avoidance system according to claim 1, wherein the calibration module acquires the ground depth information and the camera installation angle information through the following steps:

Step 1-1: the RGB-D camera is aimed at flat ground, and depth information is acquired through the image acquisition module.

Step 1-2: camera installation angle calibration starts: four pixels are randomly selected in the middle of the depth map, and the Y values of the corresponding points in the depth camera coordinate system are solved by conversion.

Step 1-3: a candidate camera mounting angle is swept starting from 0 degrees, incremented by 1 degree each time, until 180 degrees is reached.

Step 1-4: the Y values of the four pixels in the horizontal depth camera coordinate system are calculated from the candidate camera mounting angle. If the absolute differences between the Y values of the four pixels are within a threshold range, the angle is saved; otherwise the process returns to step 1-2. Here the horizontal depth camera coordinate system is obtained by rotating the depth camera coordinate system about its origin until the Z axis is parallel to the ground.

Step 1-5: after the angle calibration is finished, calibration of the ground background information starts.

Step 1-6: a depth map is obtained; depth values exceeding 7000 are limited by threshold filtering, and holes are filled: pixels whose depth value is zero are set to the upper limit of the depth value.

Step 1-7: 600 processed depth-map frames are accumulated; for each pixel, the mean depth value, the corresponding Y value in the horizontal depth camera coordinate system, and the maximum difference among the pixels in each row are calculated and stored respectively.

5. The RGB-D based real-time obstacle avoidance system according to claim 4, wherein the Y values of the four pixels in the horizontal depth camera coordinate system in steps 1-2 and 1-4 are calculated as follows:

$$
\begin{bmatrix} X_h \\ Y_h \\ Z_h \end{bmatrix}
=
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix}
Z_d\, K_d^{-1}
\begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix},
\qquad
K_d =
\begin{bmatrix} f_x & 0 & C_x \\ 0 & f_y & C_y \\ 0 & 0 & 1 \end{bmatrix},
$$

so that in particular

$$
Y_h = \frac{(v_d - C_y)\,Z_d}{f_y}\cos\theta - Z_d\sin\theta,
$$

wherein $K_d$ is the depth camera intrinsic matrix that converts the camera coordinate system to the pixel coordinate system; $(f_x, f_y)$ express the relationship between the camera coordinate system and the imaging plane, i.e. the scaling along the $u$ axis and the $v$ axis; $(C_x, C_y)$ is the camera optical center; $\theta$ is the mounting tilt angle of the camera (right-handed coordinate system); $(X_h, Y_h, Z_h)$ are the three-dimensional coordinates of depth-image pixel $(u_d, v_d)$ in the horizontal depth camera coordinate system; and $Z_d$ is the depth value at pixel $(u_d, v_d)$.

6. The RGB-D based real-time obstacle avoidance system according to claim 1, wherein: the common obstacle detection module comprises a preprocessing module, a contour extraction module and an obstacle coordinate output module;

the preprocessing module comprises the following steps:

step 2-1: three consecutive depth maps are acquired from the image acquisition module.

Step 2-2: twofold downsampling and morphological dilation are applied to the three acquired depth maps.

Step 2-3: within the set ROI, each of the three depth maps is differenced against the calibrated ground background information; points whose difference satisfies the threshold are set to 255, and points that do not satisfy it are set to 0.

Step 2-4: the second binarized depth map is taken as binary map P1, and the three maps are superimposed to obtain binary map P2.

Step 2-5: morphological closing is applied to the obtained binary maps P1 and P2.

The contour extraction module comprises the following steps:

Step 3-1: the obstacle contours and convex hull information in binary map P1 and binary map P2 are calculated respectively.

Step 3-2: the areas of the convex hulls in binary map P1 and binary map P2 and the pixel positions of the hull centers are calculated respectively; different area thresholds are set according to the interval in which each hull's center pixel lies, and convex hulls whose area falls below the applicable threshold are filtered out.

Step 3-3: the Y value of each hull's center pixel in the horizontal depth camera coordinate system is compared with the stored ground Y value at the same pixel; hulls whose Y is smaller than the ground Y by at least a certain threshold are retained, otherwise they are filtered out.

The obstacle coordinate output module includes the steps of:

Step 4-1: the minimum bounding rectangles of the convex hulls remaining after the filtering in steps 3-2 and 3-3 are calculated.

Step 4-2: overlapping rectangles are removed and adjacent connected rectangles are merged.

Step 4-3: when multiple rectangles exist, only the rectangle with the smallest mean depth value is kept.

Step 4-4: the rectangle on the depth map is mapped into the RGB map.

The mapping formula is as follows:

$$
Z_d \begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix} = K_d \begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix},
\qquad
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix} + T,
\qquad
Z_c \begin{bmatrix} u_c \\ v_c \\ 1 \end{bmatrix} = K_c \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix},
$$

where $Z_d$ is the depth value at pixel $(u_d, v_d)$; $(X_d, Y_d, Z_d)$ are the three-dimensional coordinates of depth-map pixel $(u_d, v_d)$ in the depth camera coordinate system; $(X_c, Y_c, Z_c)$ are its three-dimensional coordinates in the RGB camera coordinate system; $(u_c, v_c)$ is the corresponding pixel coordinate on the RGB image; $R$ and $T$ are the rotation matrix and translation matrix from the depth camera to the RGB camera, respectively; and $K_d$ and $K_c$ are the intrinsic matrices of the depth camera and the RGB camera.

Step 4-5: the area of each rectangle is calculated, and the RGB rectangles whose area satisfies the threshold condition are sent to the verification module to verify whether they contain ground or an obstacle.

Step 4-6: if the verification module outputs 0, the object in the rectangle is judged to be a non-obstacle and the process returns to step 4-1; if it outputs 1, the object in the rectangle is an obstacle and step 4-7 is executed.

Step 4-7: the X-axis value of every pixel in the obstacle rectangle is calculated in the horizontal depth camera coordinate system; pixels whose X-axis value lies outside the threshold range are removed; among the remaining pixels the nearest obstacle distance is found and output to the obstacle avoidance decision module.

7. The RGB-D based real-time obstacle avoidance system according to claim 1, wherein: the processing procedure of the verification module comprises the following steps:

Step 5-1: the trained binary Ghost classification model is loaded, an inference engine is built with TensorRT, the engine is saved, and the engine is loaded and deployed in C++.

Step 5-2: when image data is received, the engine performs inference: 0 is output if the content is judged to be a non-obstacle, otherwise 1 is output for an obstacle.

8. The RGB-D based real-time obstacle avoidance system according to claim 1, wherein: the special obstacle detection module detects obstacles through the following steps:

Step 6-1: a trained YOLOv5 model (whose recognition categories are pedestrians and mobile transfer robots) is loaded, an inference engine is built through TensorRT, and the engine is saved.

Step 6-2: the engine is loaded and deployed, and RGB images from the image acquisition module are received for inference.

Step 6-3: the pixel coordinates of the upper-left corner and the length and width of each target rectangle in the RGB image are obtained after inference.

Step 6-4: the RGB pixels are converted to depth-image pixels, and the three-dimensional coordinates of the pixels within the rectangle in the depth camera coordinate system are obtained.

Step 6-5: pixels whose X-axis value in the horizontal depth camera coordinate system does not satisfy the threshold are removed; the remaining pixels are filtered to obtain the three-dimensional coordinates of the obstacle points in the depth camera coordinate system, which are output to the obstacle avoidance decision module.

9. The RGB-D based real-time obstacle avoidance system according to claim 1, wherein: the obstacle avoidance decision module sets different obstacle avoidance ranges for the common and special obstacle detection modules respectively, and each range is divided into three obstacle avoidance grade areas: first-grade avoidance decelerates the robot by 30 percent, second-grade avoidance decelerates it by 60 percent, and third-grade avoidance stops it directly.

10. The RGB-D based real-time obstacle avoidance system of claim 9, wherein the obstacle avoidance decision module processes the obstacle information to make an obstacle avoidance decision through the following specific process:

Step 7-1: the obstacle coordinate information output by the common and special obstacle detection modules is received.

Step 7-2: if an obstacle appears in the third-grade obstacle avoidance area, the obstacle information output by both the common and special obstacle detection modules is processed; if obstacles appear in the first- or second-grade areas, the information output by the special obstacle detection module is processed preferentially.

Step 7-3: median filtering and amplitude-limiting filtering are applied to the obstacle distance information.

Step 7-4: based on the finally processed obstacle distance information, the grade area in which the obstacle lies is determined; the corresponding obstacle avoidance strategy is selected according to that area and output to the movement control module, which controls the mobile transfer robot to execute the strategy, finally realizing the obstacle avoidance function.

Background

At present, intelligent unmanned transfer robots are developing rapidly, and mobile robot technology is closely tied to industrial production. In the complex and changeable environment of a real factory, accurately detecting obstacles and effectively avoiding them is one of the basic capabilities a mobile robot requires. Critical obstacles such as people demand extra attention to avoid accidents. In the prior art, obstacle avoidance usually depends on sensors such as laser radar and ultrasonic radar. However, laser radar is expensive, and the sizes of obstacles that ultrasonic radar can detect are very limited.

Compared with sensors such as laser radar and ultrasonic radar, a vision camera is inexpensive, can obtain RGB information and depth information of the whole view plane in real time, and has advantages such as a wide detection range and a large information capacity, so it is widely used in visual obstacle avoidance technology. At present, mobile robots commonly use binocular cameras, RGB-D cameras, and TOF cameras for visual obstacle avoidance. Compared with binocular cameras, RGB-D and TOF cameras are little affected by object color and can obtain high-resolution depth maps, so the RGB-D camera is widely applied to visual obstacle detection. However, as is well known, visual image processing demands fast computing power; current obstacle avoidance methods for mobile robots using an RGB-D camera suffer from insufficient processing speed and neglect the avoidance of spatial obstacles. Meanwhile, visual obstacle detection is easily affected by the external environment, has a high false detection rate, and cannot apply different avoidance strategies to different obstacles. Therefore, beyond the existing visual obstacle avoidance methods, a real-time obstacle avoidance system based on an RGB-D camera needs to be provided.

Disclosure of Invention

Aiming at the problems that existing mobile robots do not consider spatial obstacles, are easily affected by the external environment, suffer a high false detection rate, and lack real-time performance, the invention provides a real-time obstacle avoidance system based on RGB-D. The system can make different obstacle avoidance decisions for obstacles of different priorities; can set different ROI (region of interest) areas according to different terrains to reduce the amount of calculation; performs a secondary judgment on top of obstacle detection, which improves detection precision and reduces the false detection rate of obstacle detection; and, through multithreaded parallel processing, all modules of the system achieve good real-time performance.

The purpose of the invention is realized by the following technical scheme: the invention provides a real-time obstacle avoidance system based on RGB-D, which comprises:

an image acquisition module, a calibration module, a common obstacle detection module, a special obstacle detection module, a verification module, an obstacle avoidance decision module, and a movement control module; wherein:

the image acquisition module is used for acquiring the original depth information and the RGB information of obstacles from the RGB-D camera and outputting them respectively to the calibration module, the common obstacle detection module, and the special obstacle detection module;

the calibration module is used for processing the depth information acquired by the image acquisition module to calibrate the ground background and the camera installation angle, so as to obtain the ground depth information and the camera installation angle information;

the common obstacle detection module is used for processing the depth information and the RGB information output by the image acquisition module to extract obstacle candidates, which undergo a secondary judgment by the verification module to ensure they are true obstacles rather than false detections, so that obstacles are finally detected accurately and their position information is output;

the special obstacle detection module is used for detecting pedestrians and other robots from the depth information and the RGB information output by the image acquisition module and outputting their position information;

the obstacle avoidance decision module is used for processing obstacle position information output by the common obstacle detection module and the special obstacle detection module to generate an obstacle avoidance strategy;

the movement control module is used for controlling the movement of the robot according to the obstacle avoidance strategy output by the obstacle avoidance decision module, so as to realize the obstacle avoidance function.

Further, the RGB-D camera is installed at the top of the mobile transfer robot, and its inclination angle ensures that no part of the robot body appears within the camera's field of view.

Furthermore, the real-time obstacle avoidance system adopts multithreading, the modules compute in parallel, and the detection frame rate is adjusted automatically according to the running speed of the mobile transfer robot.

Further, the calibration module acquires the ground depth information and the camera installation angle information through the following steps:

Step 1-1: the RGB-D camera is aimed at flat ground, and depth information is acquired through the image acquisition module.

Step 1-2: camera installation angle calibration starts: four pixels are randomly selected in the middle of the depth map, and the Y values of the corresponding points in the depth camera coordinate system are solved by conversion.

Step 1-3: a candidate camera mounting angle is swept starting from 0 degrees, incremented by 1 degree each time, until 180 degrees is reached.

Step 1-4: the Y values of the four pixels in the horizontal depth camera coordinate system are calculated from the candidate camera mounting angle. If the absolute differences between the Y values of the four pixels are within a threshold range, the angle is saved; otherwise the process returns to step 1-2. Here the horizontal depth camera coordinate system is obtained by rotating the depth camera coordinate system about its origin until the Z axis is parallel to the ground.

Step 1-5: after the angle calibration is finished, calibration of the ground background information starts.

Step 1-6: a depth map is obtained; depth values exceeding 7000 are limited by threshold filtering, and holes are filled: pixels whose depth value is zero are set to the upper limit of the depth value.

Step 1-7: 600 processed depth-map frames are accumulated; for each pixel, the mean depth value, the corresponding Y value in the horizontal depth camera coordinate system, and the maximum difference among the pixels in each row are calculated and stored respectively.

Further, the Y values of the four pixels in the horizontal depth camera coordinate system in step 1-2 and step 1-4 are calculated as follows:

$$
\begin{bmatrix} X_h \\ Y_h \\ Z_h \end{bmatrix}
=
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix}
Z_d\, K_d^{-1}
\begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix},
\qquad
K_d =
\begin{bmatrix} f_x & 0 & C_x \\ 0 & f_y & C_y \\ 0 & 0 & 1 \end{bmatrix},
$$

so that in particular

$$
Y_h = \frac{(v_d - C_y)\,Z_d}{f_y}\cos\theta - Z_d\sin\theta,
$$

wherein $K_d$ is the depth camera intrinsic matrix that converts the camera coordinate system to the pixel coordinate system; $(f_x, f_y)$ express the relationship between the camera coordinate system and the imaging plane, i.e. the scaling along the $u$ axis and the $v$ axis; $(C_x, C_y)$ is the camera optical center; $\theta$ is the mounting tilt angle of the camera (right-handed coordinate system); $(X_h, Y_h, Z_h)$ are the three-dimensional coordinates of depth-image pixel $(u_d, v_d)$ in the horizontal depth camera coordinate system; and $Z_d$ is the depth value at pixel $(u_d, v_d)$.

Further, the common obstacle detection module comprises a preprocessing module, a contour extraction module and an obstacle coordinate output module;

the preprocessing module comprises the following steps:

step 2-1: three consecutive depth maps are acquired from the image acquisition module.

Step 2-2: twofold downsampling and morphological dilation are applied to the three acquired depth maps.

Step 2-3: within the set ROI, each of the three depth maps is differenced against the calibrated ground background information; points whose difference satisfies the threshold are set to 255, and points that do not satisfy it are set to 0.

Step 2-4: the second binarized depth map is taken as binary map P1, and the three maps are superimposed to obtain binary map P2.

Step 2-5: morphological closing is applied to the obtained binary maps P1 and P2.

The contour extraction module comprises the following steps:

Step 3-1: the obstacle contours and convex hull information in binary map P1 and binary map P2 are calculated respectively.

Step 3-2: the areas of the convex hulls in binary map P1 and binary map P2 and the pixel positions of the hull centers are calculated respectively; different area thresholds are set according to the interval in which each hull's center pixel lies, and convex hulls whose area falls below the applicable threshold are filtered out.

Step 3-3: the Y value of each hull's center pixel in the horizontal depth camera coordinate system is compared with the stored ground Y value at the same pixel; hulls whose Y is smaller than the ground Y by at least a certain threshold are retained, otherwise they are filtered out.

The obstacle coordinate output module includes the steps of:

Step 4-1: the minimum bounding rectangles of the convex hulls remaining after the filtering in steps 3-2 and 3-3 are calculated.

Step 4-2: overlapping rectangles are removed and adjacent connected rectangles are merged.

Step 4-3: when multiple rectangles exist, only the rectangle with the smallest mean depth value is kept.

Step 4-4: the rectangle on the depth map is mapped into the RGB map.

The mapping formula is as follows:

$$
Z_d \begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix} = K_d \begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix},
\qquad
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix} + T,
\qquad
Z_c \begin{bmatrix} u_c \\ v_c \\ 1 \end{bmatrix} = K_c \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix},
$$

where $Z_d$ is the depth value at pixel $(u_d, v_d)$; $(X_d, Y_d, Z_d)$ are the three-dimensional coordinates of depth-map pixel $(u_d, v_d)$ in the depth camera coordinate system; $(X_c, Y_c, Z_c)$ are its three-dimensional coordinates in the RGB camera coordinate system; $(u_c, v_c)$ is the corresponding pixel coordinate on the RGB image; $R$ and $T$ are the rotation matrix and translation matrix from the depth camera to the RGB camera, respectively; and $K_d$ and $K_c$ are the intrinsic matrices of the depth camera and the RGB camera.

Step 4-5: the area of each rectangle is calculated, and the RGB rectangles whose area satisfies the threshold condition are sent to the verification module to verify whether they contain ground or an obstacle.

Step 4-6: if the verification module outputs 0, the object in the rectangle is judged to be a non-obstacle and the process returns to step 4-1; if it outputs 1, the object in the rectangle is an obstacle and step 4-7 is executed.

Step 4-7: the X-axis value of every pixel in the obstacle rectangle is calculated in the horizontal depth camera coordinate system; pixels whose X-axis value lies outside the threshold range are removed; among the remaining pixels the nearest obstacle distance is found and output to the obstacle avoidance decision module.

Further, the processing procedure of the verification module comprises the following steps:

Step 5-1: the trained binary Ghost classification model is loaded, an inference engine is built with TensorRT, the engine is saved, and the engine is loaded and deployed in C++.

Step 5-2: when image data is received, the engine performs inference: 0 is output if the content is judged to be a non-obstacle, otherwise 1 is output for an obstacle.
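For reference, a minimal sketch of what loading and deploying a serialized TensorRT engine in C++ (steps 5-1 and 5-2) can look like; the engine file name and logger are assumptions for this example, the classification model itself and the input/output buffer handling are not shown, and error handling is omitted:

```cpp
#include <NvInfer.h>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cerr << msg << std::endl;
    }
};

int main() {
    // Step 5-1: read the engine serialized offline ("classifier.engine" is a
    // placeholder name) and deserialize it into an executable CUDA engine.
    std::ifstream file("classifier.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());
    Logger logger;
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size());
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();

    // Step 5-2 would copy the candidate image into the engine's GPU input buffer,
    // run the inference context on a CUDA stream, and threshold the returned score
    // into 0 (non-obstacle) or 1 (obstacle).
    (void)context;
    return 0;
}
```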

Further, the special obstacle detection module detects obstacles through the following steps:

Step 6-1: a trained YOLOv5 model (whose recognition categories are pedestrians and mobile transfer robots) is loaded, an inference engine is built through TensorRT, and the engine is saved.

Step 6-2: the engine is loaded and deployed, and RGB images from the image acquisition module are received for inference.

Step 6-3: the pixel coordinates of the upper-left corner and the length and width of each target rectangle in the RGB image are obtained after inference.

Step 6-4: the RGB pixels are converted to depth-image pixels, and the three-dimensional coordinates of the pixels within the rectangle in the depth camera coordinate system are obtained.

Step 6-5: pixels whose X-axis value in the horizontal depth camera coordinate system does not satisfy the threshold are removed; the remaining pixels are filtered to obtain the three-dimensional coordinates of the obstacle points in the depth camera coordinate system, which are output to the obstacle avoidance decision module.

Furthermore, the obstacle avoidance decision module sets different obstacle avoidance ranges for the common and special obstacle detection modules respectively, and each range is divided into three obstacle avoidance grade areas: first-grade avoidance decelerates the robot by 30 percent, second-grade avoidance decelerates it by 60 percent, and third-grade avoidance stops it directly.

Further, the obstacle avoidance decision module processes the obstacle information to make an obstacle avoidance decision through the following specific process:

Step 7-1: the obstacle coordinate information output by the common and special obstacle detection modules is received.

Step 7-2: if an obstacle appears in the third-grade obstacle avoidance area, the obstacle information output by both the common and special obstacle detection modules is processed; if obstacles appear in the first- or second-grade areas, the information output by the special obstacle detection module is processed preferentially.

Step 7-3: median filtering and amplitude-limiting filtering are applied to the obstacle distance information.

Step 7-4: based on the finally processed obstacle distance information, the grade area in which the obstacle lies is determined; the corresponding obstacle avoidance strategy is selected according to that area and output to the movement control module, which controls the mobile transfer robot to execute the strategy, finally realizing the obstacle avoidance function.

The invention has the beneficial effects that: the invention provides a reliable, fast, high-precision, low-false-detection real-time obstacle avoidance system. The system can make different obstacle avoidance decisions for obstacles of different priorities; can set different ROI (region of interest) areas according to different terrains to reduce the amount of calculation; performs a secondary judgment on top of obstacle detection, which improves detection precision and reduces the false detection rate of obstacle detection; and, through multithreaded parallel processing, all modules of the system achieve good real-time performance.

Drawings

Fig. 1 is a general flow chart of the implementation of the present invention.

Fig. 2 is a schematic view of the intelligent mobile transfer robot, the camera mounting, and the camera coordinate system.

Fig. 3 is the preprocessing flow chart of the common obstacle detection module.

Fig. 4 is the obstacle distance output flow chart of the common obstacle detection module.

Fig. 5 is the flow chart of the special obstacle detection module.

Fig. 6 is a flow chart of an obstacle avoidance decision module.

Detailed Description

The technical solutions of the present invention are further described in detail below with reference to the accompanying drawings, but the scope of the present invention is not limited to the following.

The invention provides a real-time obstacle avoidance system based on RGB-D. As shown in Fig. 1, the system comprises an image acquisition module, a calibration module, a common obstacle detection module, a verification module, a special obstacle detection module, an obstacle avoidance decision module, and a movement control module.

As shown in Fig. 2, in actual engineering the RGB-D camera sensor is mounted on the top of the intelligent transfer robot and tilted downward, so that no part of the vehicle body appears within the camera's field of view.

The calibration module is used for processing the depth information obtained by the image acquisition module to calibrate the ground background and the camera installation angle, so as to obtain the ground depth information and the camera installation angle information;

the common obstacle detection module is used for processing the depth information and the RGB information output by the image acquisition module to extract obstacles, which undergo a secondary judgment by the verification module, so that obstacles are finally detected accurately and their position information is output;

the verification module is used for performing a secondary judgment on the obstacles detected by the common obstacle detection module to ensure they are true obstacles rather than false detections;

the special obstacle detection module is used for detecting pedestrians and other transfer robots and outputting their specific position information;

the obstacle avoidance decision module is used for processing the obstacle position information output by the common and special obstacle detection modules and instructing the mobile transfer robot to execute obstacle avoidance operations, so as to realize the obstacle avoidance function;

the movement control module is used for processing the obstacle avoidance strategy output by the obstacle avoidance decision module to control the movement of the robot and realize the obstacle avoidance function.

the system firstly operates a calibration module to calibrate ground background information and an installation angle and stores data in an industrial personal computer of the intelligent transfer robot, meanwhile, a light filter is attached to a camera to filter visible light in actual engineering application, and light rays emitted by no light source in an operation field can be guaranteed to be directly emitted to the camera.

The calibration module acquires the ground depth information and the camera installation angle information through the following steps:

Step 1-1: the camera is aimed at flat ground, and depth information is acquired through the image acquisition module.

Step 1-2: camera installation angle calibration starts: four pixels are randomly selected in the middle of the depth map, and the Y values of the corresponding spatial points in the depth camera coordinate system are solved by conversion.

Step 1-3: a candidate camera mounting angle is swept starting from 0 degrees, incremented by 1 degree each time, until 180 degrees is reached.

Step 1-4: the Y values of the four pixels in the horizontal depth camera coordinate system are calculated from this angle.

Step 1-5: if the absolute differences between the Y values of the four pixels are within the threshold range, the angle is saved; otherwise the process returns to step 1-3. Here the horizontal depth camera coordinate system is obtained by rotating the depth camera coordinate system about its origin until the Z axis is parallel to the ground.

Step 1-6: after the angle calibration is finished, calibration of the ground background information starts.

Step 1-7: a depth map is obtained; depth values exceeding 7000 are limited by threshold filtering and holes are filled: pixels whose depth value is zero are set to the depth upper limit, and in practice depth values exceeding 7000 can simply be set to 7000.

Step 1-8: 600 processed depth-map frames are accumulated; for each pixel, the mean depth value, the corresponding Y value in the horizontal depth camera coordinate system, and the maximum difference among the pixels in each row are calculated and stored respectively.

Specifically, the Y value of a pixel in the horizontal depth camera coordinate system in step 1-4 is calculated as follows:

$$
\begin{bmatrix} X_h \\ Y_h \\ Z_h \end{bmatrix}
=
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix}
Z_d\, K_d^{-1}
\begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix},
\qquad
K_d =
\begin{bmatrix} f_x & 0 & C_x \\ 0 & f_y & C_y \\ 0 & 0 & 1 \end{bmatrix},
$$

so that in particular

$$
Y_h = \frac{(v_d - C_y)\,Z_d}{f_y}\cos\theta - Z_d\sin\theta,
$$

wherein $K_d$ is the depth camera intrinsic matrix that converts the camera coordinate system to the pixel coordinate system; $(f_x, f_y)$ express the relationship between the camera coordinate system and the imaging plane, i.e. the scaling along the $u$ axis and the $v$ axis; $(C_x, C_y)$ is the camera optical center; $\theta$ is the mounting tilt angle of the camera (right-handed coordinate system); $(X_h, Y_h, Z_h)$ are the three-dimensional coordinates of depth-image pixel $(u_d, v_d)$ in the horizontal depth camera coordinate system; and $Z_d$ is the depth value at pixel $(u_d, v_d)$.
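As a concrete illustration of steps 1-2 to 1-5 and the formula above, the following C++ sketch back-projects a depth pixel, rotates it into the horizontal frame, and sweeps the candidate tilt angle from 0 to 180 degrees until the four sampled ground points share nearly the same Y. The intrinsics and the spread threshold are placeholder values for the example, not the system's calibrated parameters.

```cpp
#include <algorithm>
#include <array>
#include <cmath>

// Placeholder intrinsics and threshold -- illustrative values only.
const double fx = 580.0, fy = 580.0, Cx = 320.0, Cy = 240.0;
const double kYSpreadThresh = 20.0;  // allowed spread of Y over the four samples (mm)
const double kPi = 3.14159265358979323846;

// Y coordinate of depth pixel (u, v) with depth Z in the horizontal depth camera
// frame, i.e. the camera frame rotated about its X axis by the tilt angle theta.
double horizontalY(double u, double v, double Z, double theta) {
    double Yd = (v - Cy) * Z / fy;                      // back-projection: Y in camera frame
    return std::cos(theta) * Yd - std::sin(theta) * Z;  // Y component after rotation about X
}

// Steps 1-3 to 1-5: sweep candidate angles from 0 to 180 degrees in 1-degree steps and
// keep the first angle for which the four ground samples are level within the threshold.
double calibrateTilt(const std::array<std::array<double, 3>, 4>& samples) {  // each {u, v, Z}
    for (int deg = 0; deg <= 180; ++deg) {
        double theta = deg * kPi / 180.0;
        double ymin = 1e18, ymax = -1e18;
        for (const auto& s : samples) {
            double y = horizontalY(s[0], s[1], s[2], theta);
            ymin = std::min(ymin, y);
            ymax = std::max(ymax, y);
        }
        if (ymax - ymin < kYSpreadThresh) return theta;  // flat ground: save this angle
    }
    return -1.0;  // no angle fits: re-select the four sample pixels and try again
}
```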

After calibration is finished, the subsequent common obstacle detection, special obstacle detection, and obstacle avoidance decision processing can be carried out. Common obstacle detection and special obstacle detection run simultaneously without interfering with each other.

According to the depth information of the RGB-D camera, the common obstacle detection module segments obstacles in the preprocessing module, extracts obstacle contours and convex hulls in the contour extraction module, and finally outputs the three-dimensional position information of obstacles through the obstacle coordinate output module.

The common obstacle detection module comprises a preprocessing module, a contour extraction module and an obstacle coordinate output module.

Fig. 3 shows the flow of the preprocessing module and the contour extraction module: the depth map sent by the image acquisition module is received and downsampled twofold to reduce the amount of data to process, and the original depth map is then split by different thresholds into two working depth maps, one responsible for detecting low, short obstacles and the other for detecting taller obstacles, which reduces the false detection rate and improves detection efficiency.

The specific processing steps are as follows:

step 2-1: three consecutive depth maps are acquired from the image acquisition module.

Step 2-2: twofold downsampling and morphological dilation are applied to the three acquired depth maps.

Step 2-3: within the set ROI, each of the three depth maps is differenced against the calibrated ground background information; points whose difference satisfies the threshold are set to 255, and points that do not satisfy it are set to 0.

Step 2-4: the second binarized depth map is taken as binary map P1, and the three maps are superimposed to obtain binary map P2.

The principle of superimposing the three depth maps is as follows:

the number of the same pixel point is set to 255 when the number of the same pixel point is only 255, and the value of the same pixel point is set to 0 if the number of the 255 is less than 3.

Step 2-5: morphological closing is applied to the obtained binary maps P1 and P2.

Step 2-6: the obstacle contours and convex hull information in each binary map are calculated respectively.

Step 2-7: the areas of the convex hulls and the pixel positions of the hull centers in the two binary maps are calculated respectively; different area thresholds are set according to the interval in which each hull's center pixel lies, and convex hulls whose area falls below the applicable threshold are filtered out.

Step 2-8: the Y value of each hull's center pixel in the horizontal depth camera coordinate system is compared with the stored ground Y value at the same pixel; hulls whose Y is smaller than the ground Y by at least a certain threshold are retained, otherwise they are filtered out.
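As an OpenCV-based sketch of steps 2-1 to 2-8 (including the three-frame superposition rule above): the code below binarizes three consecutive depth frames against the calibrated ground background inside the ROI, superimposes them into P2, closes both maps, and extracts contours with convex hulls. The difference threshold and structuring-element size are assumed example values standing in for the per-deployment thresholds described in the text.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Assumed difference threshold (mm); the real system tunes thresholds per deployment.
const float kDiffThresh = 80.0f;

// Steps 2-2 to 2-5: twofold downsampling, dilation, background differencing inside the
// ROI, binarization, superposition of the three frames, and morphological closing.
void preprocess(const std::vector<cv::Mat>& depth3,  // three consecutive CV_32F depth maps
                const cv::Mat& background,           // calibrated ground depth, full size
                const cv::Rect& roi,                 // ROI in downsampled coordinates
                cv::Mat& P1, cv::Mat& P2) {
    cv::Mat bg;
    cv::pyrDown(background, bg);                     // background at the working resolution
    std::vector<cv::Mat> bin(3);
    for (int i = 0; i < 3; ++i) {
        cv::Mat d;
        cv::pyrDown(depth3[i], d);                   // twofold downsampling
        cv::dilate(d, d, cv::Mat());                 // morphological dilation
        cv::Mat diff = bg(roi) - d(roi);             // obstacles rise above the ground plane
        cv::threshold(diff, bin[i], kDiffThresh, 255, cv::THRESH_BINARY);
        bin[i].convertTo(bin[i], CV_8U);             // 255 where the threshold is met, else 0
    }
    P1 = bin[1].clone();                             // the second binarized frame is P1
    cv::bitwise_and(bin[0], bin[1], P2);             // P2 keeps a pixel only if all three
    cv::bitwise_and(P2, bin[2], P2);                 // frames mark it as foreground
    cv::Mat k = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
    cv::morphologyEx(P1, P1, cv::MORPH_CLOSE, k);    // morphological closing
    cv::morphologyEx(P2, P2, cv::MORPH_CLOSE, k);
}

// Step 2-6: obstacle contours and convex hulls of one binary map; the hulls are then
// filtered by area (step 2-7) and by ground Y comparison (step 2-8).
void extractHulls(const cv::Mat& binary, std::vector<std::vector<cv::Point>>& hulls) {
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    hulls.resize(contours.size());
    for (size_t i = 0; i < contours.size(); ++i)
        cv::convexHull(contours[i], hulls[i]);
}
```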

Once obstacle convex hull information is available, calculation of the three-dimensional coordinates of obstacles within the obstacle avoidance space range begins.

Fig. 4 is a flowchart of the obstacle coordinate output module, which includes the following steps:

Step 3-1: the trained binary Ghost classification model is loaded, an inference engine is built with TensorRT, and the engine is saved.

Step 3-2: the engine is loaded and deployed in C++.

Step 3-3: the minimum bounding rectangles of the convex hulls remaining after filtering are calculated.

Step 3-4: overlapping rectangles are removed and adjacent connected rectangles are merged.

Step 3-5: when multiple rectangles exist, only the rectangle with the smallest mean depth value is kept.

Step 3-6: the rectangle on the depth map is mapped into the RGB map.

The mapping formula is as follows:

$$
Z_d \begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix} = K_d \begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix},
\qquad
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix} + T,
\qquad
Z_c \begin{bmatrix} u_c \\ v_c \\ 1 \end{bmatrix} = K_c \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix},
$$

where $Z_d$ is the depth value at pixel $(u_d, v_d)$; $(X_d, Y_d, Z_d)$ are the three-dimensional coordinates of depth-map pixel $(u_d, v_d)$ in the depth camera coordinate system; $(X_c, Y_c, Z_c)$ are its three-dimensional coordinates in the RGB camera coordinate system; $(u_c, v_c)$ is the corresponding pixel coordinate on the RGB image; $R$ and $T$ are the rotation matrix and translation matrix from the depth camera to the RGB camera, respectively; and $K_d$ and $K_c$ are the intrinsic matrices of the depth camera and the RGB camera. A sketch of this mapping, together with the nearest-distance computation of step 3-10, is given after the step list below.

Step 3-7: the area of each rectangle is calculated, and the RGB rectangles whose area satisfies the condition are sent to the verification module.

Step 3-8: when the image data enters the verification module, the engine deployed in step 3-2 performs inference: 0 is output for a non-obstacle, otherwise 1 is output for an obstacle.

Step 3-9: if the verification module outputs 0, the object in the rectangle is judged to be a non-obstacle and the process returns to step 3-3; if it outputs 1, the object in the rectangle is an obstacle and step 3-10 is executed.

Step 3-10: the X-axis value of every pixel in the obstacle rectangle is calculated in the horizontal depth camera coordinate system; pixels whose X-axis value lies outside the threshold range are removed; among the remaining pixels the nearest obstacle distance is found and output to the obstacle avoidance decision module.
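To make steps 3-6 and 3-10 concrete, the following sketch implements the mapping formula above and the nearest-distance extraction using OpenCV matrix types. The corridor half-width is an assumed parameter; in the real system the calibration matrices come from the camera calibration and the lateral threshold from the robot's geometry.

```cpp
#include <opencv2/core.hpp>
#include <algorithm>
#include <cmath>
#include <limits>

// Step 3-6: map depth-map pixel (ud, vd) with depth Zd to RGB pixel coordinates.
// Kd, Kc are the depth/RGB intrinsic matrices; R, T are the depth-to-RGB extrinsics.
cv::Point2d depthPixelToRgb(double ud, double vd, double Zd,
                            const cv::Matx33d& Kd, const cv::Matx33d& Kc,
                            const cv::Matx33d& R, const cv::Vec3d& T) {
    cv::Vec3d Pd = Zd * (Kd.inv() * cv::Vec3d(ud, vd, 1.0));  // depth camera coordinates
    cv::Vec3d Pc = R * Pd + T;                                // RGB camera coordinates
    cv::Vec3d uvw = Kc * Pc;                                  // project onto the RGB image
    return {uvw[0] / uvw[2], uvw[1] / uvw[2]};                // divide by Zc to get (uc, vc)
}

// Step 3-10: nearest obstacle distance inside a rectangle of the depth map, keeping only
// pixels whose lateral X (horizontal frame) lies inside the corridor the robot sweeps.
double nearestObstacle(const cv::Mat& depth, const cv::Rect& box,
                       double fx, double fy, double Cx, double Cy, double theta,
                       double xHalfWidth) {  // assumed corridor half-width (mm)
    double best = std::numeric_limits<double>::max();
    for (int v = box.y; v < box.y + box.height; ++v) {
        for (int u = box.x; u < box.x + box.width; ++u) {
            double Z = depth.at<float>(v, u);
            if (Z <= 0.0) continue;                    // hole: no depth measurement
            double X = (u - Cx) * Z / fx;              // X is unchanged by the X-axis tilt
            if (std::abs(X) > xHalfWidth) continue;    // outside the threshold range
            double Zh = std::sin(theta) * (v - Cy) * Z / fy + std::cos(theta) * Z;
            best = std::min(best, Zh);                 // forward distance, horizontal frame
        }
    }
    return best;  // max double if every pixel was filtered out
}
```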

Special obstacle detection is performed simultaneously.

Fig. 5 is a flowchart of the special obstacle detection module.

The special obstacle detection module detects obstacles through the following steps:

Step 4-1: a trained YOLOv5 model (whose recognition categories are pedestrians and transfer robots) is loaded; an inference engine is built through TensorRT and the engine is saved.

Step 4-2: the engine is loaded and deployed, and RGB pictures from the image acquisition module are received for inference.

Step 4-3: the pixel coordinates of the upper-left corner and the length and width of each target rectangle in the RGB image are obtained after inference.

Step 4-4: the RGB pixels are converted to depth-image pixels, and the three-dimensional coordinates of the pixels within the rectangle in the horizontal depth camera coordinate system are obtained.

Step 4-5: pixels whose X-axis value in the horizontal depth camera coordinate system does not satisfy the threshold are removed; the remaining pixels are filtered to obtain the three-dimensional coordinates of the obstacle's closest point in the horizontal depth camera coordinate system, which are output to the obstacle avoidance decision module.

The obstacle avoidance decision module sets different obstacle avoidance ranges for the common and special obstacle detection modules respectively, and each range is divided into three obstacle avoidance grade areas: first-grade avoidance decelerates the robot by 30 percent, second-grade avoidance decelerates it by 60 percent, and third-grade avoidance stops it directly.

As shown in Fig. 6, the obstacle avoidance decision module processes the obstacle avoidance information through the following specific steps:

Step 5-1: the obstacle coordinate information output by the common and special obstacle detection modules is received.

Step 5-2: if an obstacle appears in the third-grade obstacle avoidance area, the obstacle information output by both the common and special obstacle detection modules is processed; if obstacles appear in the first- or second-grade areas, the information output by the special obstacle detection module is processed preferentially.

Step 5-3: median filtering and amplitude-limiting filtering are applied to the obstacle distance information.

Step 5-4: based on the finally processed obstacle distance information, the grade area in which the obstacle lies is determined; the corresponding obstacle avoidance strategy is selected according to that area and output to the movement control module, which controls the transfer robot to execute the strategy, finally realizing the obstacle avoidance function.
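A compact sketch of steps 5-2 to 5-4: amplitude-limiting and median filtering of the incoming distance, then mapping the filtered distance onto the three avoidance grades. The zone boundaries and filter parameters are assumed example values; the actual system uses different ranges for common and special obstacles.

```cpp
#include <algorithm>
#include <cmath>
#include <deque>
#include <vector>

// Assumed grade boundaries (mm) and filter parameter -- illustrative values only.
const double kGrade1 = 3000.0, kGrade2 = 2000.0, kGrade3 = 1000.0;
const double kMaxJump = 500.0;  // amplitude-limiting filter: largest plausible change per frame

enum class Action { None, Slow30, Slow60, Stop };

class AvoidanceDecision {
    std::deque<double> hist_;  // recent distances for the median filter
    double last_ = -1.0;
public:
    Action update(double rawDist) {
        // Step 5-3, amplitude-limiting (clamp) filter: reject implausible jumps.
        if (last_ >= 0.0 && std::abs(rawDist - last_) > kMaxJump) rawDist = last_;
        last_ = rawDist;
        // Step 5-3, median filter over the last five samples.
        hist_.push_back(rawDist);
        if (hist_.size() > 5) hist_.pop_front();
        std::vector<double> s(hist_.begin(), hist_.end());
        std::nth_element(s.begin(), s.begin() + s.size() / 2, s.end());
        double d = s[s.size() / 2];
        // Step 5-4: map the filtered distance to a grade area and its strategy.
        if (d < kGrade3) return Action::Stop;    // third grade: stop directly
        if (d < kGrade2) return Action::Slow60;  // second grade: decelerate by 60%
        if (d < kGrade1) return Action::Slow30;  // first grade: decelerate by 30%
        return Action::None;                     // outside the obstacle avoidance range
    }
};
```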

The above description is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principle of the present invention, and such modifications and variations should also be regarded as falling within the protection scope of the present invention.
