Detection method, detection device, computer equipment and storage medium
1. A method of detection, comprising:
acquiring first point cloud data in a target environment and target geometric characteristics of a target object;
screening first target point cloud data belonging to the target object from the first point cloud data based on the target geometric features, and performing object segmentation on the first point cloud data except the first target point cloud data to obtain second target point cloud data of other objects except the target object;
determining positional relationship information of each other object with the target object based on the second target point cloud data and the first target point cloud data;
and screening, from the other objects based on the positional relationship information, a first obstacle that is located on the target object and exceeds the target object by at least a preset distance.
2. The method of claim 1, wherein the screening, from the first point cloud data, the first target point cloud data belonging to the target object based on the target geometric features comprises:
acquiring a preset fitting range of the target geometric features;
and fitting first point cloud data belonging to the target object based on the fitting range and the target geometric characteristics, and determining the first target point cloud data.
3. The method according to claim 1 or 2, wherein the other objects include an object located between two adjacent target objects;
the object segmentation is performed on the first point cloud data except the first target point cloud data to obtain second target point cloud data of other objects except the target object, and the method comprises the following steps:
screening second point cloud data located between two adjacent target objects from the first point cloud data based on the first target point cloud data;
and carrying out object segmentation on the second point cloud data to obtain second target point cloud data of other objects except the target object.
4. The method according to claim 1 or 2, wherein the object segmentation of the first point cloud data except the first target point cloud data to obtain the second target point cloud data of the object except the target object comprises:
determining edge position information of the target object based on the first target point cloud data;
based on the edge position information, point cloud data which are not located between two adjacent target objects are removed from the first point cloud data, and the remaining point cloud data after removal are used as third point cloud data;
and carrying out object segmentation on the third point cloud data to obtain second target point cloud data of other objects except the target object.
5. The method according to claim 1 or 2, characterized in that the positional relationship information comprises at least one of: the other object is on the target object; the other objects are located on the target object and beyond the target object; the other object is not on the target object.
6. The method of claim 5, wherein the screening the other objects for a first obstacle that is located on the target object and exceeds the target object by at least a preset distance based on the positional relationship information comprises:
determining the length of the other object exceeding the target object in a preset direction under the condition that the position relation information indicates that the other object is positioned on the target object and exceeds the target object;
and when the length is greater than the preset distance, taking the other object as the first obstacle.
7. The method of claim 5, further comprising, after determining positional relationship information of each other object to the target object based on the second target point cloud data and the first target point cloud data:
and if the position relation information indicates that the other object is not on the target object, taking the other object outside the target object as a second obstacle.
8. A detection device, comprising:
the data acquisition module is used for acquiring first point cloud data in a target environment and target geometric characteristics of a target object;
the screening and dividing module is used for screening first target point cloud data belonging to the target object from the first point cloud data based on the target geometric characteristics, and performing object division on the first point cloud data except the first target point cloud data to obtain second target point cloud data of other objects except the target object;
an information determination module for determining positional relationship information of each other object and the target object based on the second target point cloud data and the first target point cloud data;
and the object screening module is used for screening a first obstacle which is positioned on the target object and exceeds the target object by at least a preset distance from the other objects based on the position relation information.
9. A computer device, comprising a processor and a memory, the memory storing machine-readable instructions executable by the processor, the processor being configured to execute the machine-readable instructions stored in the memory, wherein when the machine-readable instructions are executed by the processor, the processor performs the steps of the detection method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when executed by a computer device, performs the steps of the detection method according to any one of claims 1 to 7.
Background
When a robot travels in a narrow shelf aisle, goods that protrude beyond the shelves on either side of the robot may collide with it and disrupt its normal operation. In general, the protruding portion of the goods is small in volume and closely attached to the shelf. Clustering the collected point cloud data cannot separate such a small, closely attached protruding portion from the shelf; the protruding portion can only be treated as one body with the shelf. Accurate segmentation therefore cannot be achieved, the robot's environmental perception is degraded, and accurate obstacle avoidance becomes impossible; when a collision occurs, the robot is easily damaged and its maintenance cost increases.
Disclosure of Invention
The embodiment of the disclosure at least provides a detection method, a detection device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a detection method, including:
acquiring first point cloud data in a target environment and target geometric characteristics of a target object;
screening first target point cloud data belonging to the target object from the first point cloud data based on the target geometric features, and performing object segmentation on the first point cloud data except the first target point cloud data to obtain second target point cloud data of other objects except the target object;
determining positional relationship information of each other object with the target object based on the second target point cloud data and the first target point cloud data;
and screening, from the other objects based on the positional relationship information, a first obstacle that is located on the target object and exceeds the target object by at least a preset distance.
In an optional embodiment, the screening, from the first point cloud data, first target point cloud data belonging to the target object based on the target geometric feature includes:
acquiring a preset fitting range of the target geometric features;
and fitting first point cloud data belonging to the target object based on the fitting range and the target geometric characteristics, and determining the first target point cloud data.
In an alternative embodiment, the other objects include an object located between two adjacent target objects;
the object segmentation is performed on the first point cloud data except the first target point cloud data to obtain second target point cloud data of other objects except the target object, and the method comprises the following steps:
screening second point cloud data located between two adjacent target objects from the first point cloud data based on the first target point cloud data;
and carrying out object segmentation on the second point cloud data to obtain second target point cloud data of other objects except the target object.
In an optional embodiment, the object segmentation on the first point cloud data except the first target point cloud data to obtain second target point cloud data of objects except the target object includes:
determining edge position information of the target object based on the first target point cloud data;
based on the edge position information, point cloud data which are not located between two adjacent target objects are removed from the first point cloud data, and the remaining point cloud data after removal are used as third point cloud data;
and carrying out object segmentation on the third point cloud data to obtain second target point cloud data of other objects except the target object.
In an optional embodiment, the location relation information includes at least one of: the other object is on the target object; the other objects are located on the target object and beyond the target object; the other object is not on the target object.
In an optional embodiment, the screening, from the other objects, a first obstacle that is located on the target object and exceeds the target object by at least a preset distance based on the positional relationship information includes:
determining the length of the other object exceeding the target object in a preset direction under the condition that the position relation information indicates that the other object is positioned on the target object and exceeds the target object;
and when the length is greater than the preset distance, taking the other object as the first obstacle.
In an optional embodiment, after determining the position relationship information of each other object and the target object based on the second target point cloud data and the first target point cloud data, the method further includes:
and if the position relation information indicates that the other object is not on the target object, taking the other object outside the target object as a second obstacle.
In an alternative embodiment, the target object comprises a shelf; the target geometric features comprise features of a preset plane of the shelf; and the preset plane comprises a plane of the shelf that is perpendicular to the goods entrance/exit direction and/or to the horizontal plane.
In an alternative embodiment, the target object comprises the ground; the target geometric feature comprises a ground plane feature; the detection method further comprises the following steps:
and screening a second obstacle on the ground from the other objects based on the position relation information of the other objects and the ground.
In an optional embodiment, after the screening, based on the positional relationship information, a first obstacle that is located on the target object and exceeds the target object by a preset distance, the method further includes:
determining first location information of the first obstacle in the target environment based on second target point cloud data of the first obstacle to cause a mobile device to avoid the first obstacle based on the first location information; and/or
Determining second position information of the second obstacle in the target environment based on second target point cloud data of the second obstacle, so that a mobile device avoids the second obstacle based on the second position information.
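The screening logic described above (deciding whether an object rests on the target object, measuring how far it protrudes, and comparing that length against the preset distance) can be sketched in a few lines. This is a minimal illustration only, not the disclosure's implementation: it assumes a left shelf whose inner preset plane is at `x = inner_x` in the robot coordinate system and measures protrusion along the x axis; the function name and threshold value are hypothetical.

```python
import numpy as np

PRESET_DISTANCE = 0.05   # hypothetical threshold, metres

def classify_object(cluster, inner_x, preset_distance=PRESET_DISTANCE):
    """Classify one segmented object (an N x 3 point array) against a left
    shelf whose inner preset plane is x = inner_x (shelf body at x < inner_x).

    Returns ('second_obstacle', 0.0) if the object is not on the shelf,
    ('first_obstacle', overhang) if it is on the shelf and protrudes past the
    inner plane by more than preset_distance, else ('on_shelf', overhang).
    """
    on_shelf = cluster[:, 0].min() < inner_x      # part of the object lies behind the plane
    overhang = cluster[:, 0].max() - inner_x      # protrusion into the aisle (may be <= 0)
    if not on_shelf:
        return "second_obstacle", 0.0
    if overhang > preset_distance:
        return "first_obstacle", overhang
    return "on_shelf", max(overhang, 0.0)
```

Here "second_obstacle" corresponds to the case where the positional relationship information indicates the object is not on the target object, such as goods suspended between the shelves or lying on the ground.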
In a second aspect, an embodiment of the present disclosure further provides a detection apparatus, including:
the data acquisition module is used for acquiring first point cloud data in a target environment and target geometric characteristics of a target object;
the screening and dividing module is used for screening first target point cloud data belonging to the target object from the first point cloud data based on the target geometric characteristics, and performing object division on the first point cloud data except the first target point cloud data to obtain second target point cloud data of other objects except the target object;
an information determination module for determining positional relationship information of each other object and the target object based on the second target point cloud data and the first target point cloud data;
and the object screening module is used for screening a first obstacle which is positioned on the target object and exceeds the target object by at least a preset distance from the other objects based on the position relation information.
In an optional embodiment, the screening and segmenting module is configured to obtain a preset fitting range of the target geometric feature; and fitting first point cloud data belonging to the target object based on the fitting range and the target geometric characteristics, and determining the first target point cloud data.
In an alternative embodiment, the other objects include an object located between two adjacent target objects;
the screening and dividing module is used for screening second point cloud data positioned between two adjacent target objects from the first point cloud data based on the first target point cloud data; and carrying out object segmentation on the second point cloud data to obtain second target point cloud data of other objects except the target object.
In an optional embodiment, the filtering and segmenting module is configured to determine edge position information of the target object based on the first target point cloud data; based on the edge position information, point cloud data which are not located between two adjacent target objects are removed from the first point cloud data, and the remaining point cloud data after removal are used as third point cloud data; and carrying out object segmentation on the third point cloud data to obtain second target point cloud data of other objects except the target object.
In an optional embodiment, the location relation information includes at least one of: the other object is on the target object; the other objects are located on the target object and beyond the target object; the other object is not on the target object.
In an optional implementation manner, the object screening module is configured to determine, when the positional relationship information indicates that the other object is located on the target object and exceeds the target object, a length by which the other object exceeds the target object in a preset direction; and when the length is greater than the preset distance, take the other object as the first obstacle.
In an optional embodiment, the object screening module is further configured to, after the positional relationship information of each other object and the target object is determined based on the second target point cloud data and the first target point cloud data, take the other object outside the target object as a second obstacle if the positional relationship information indicates that the other object is not on the target object.
In an alternative embodiment, the target object comprises a shelf; the target geometric features comprise features of a preset plane of the shelf; and the preset plane comprises a plane of the shelf that is perpendicular to the goods entrance/exit direction and/or to the horizontal plane.
In an alternative embodiment, the target object comprises the ground; the target geometric feature comprises a ground plane feature; the object screening module is further configured to screen a second obstacle located on the ground from the other objects based on the information of the position relationship between the other objects and the ground.
In an optional implementation manner, the detection apparatus further includes an obstacle avoidance processing module, configured to, after a first obstacle that is located on the target object and exceeds the target object by a preset distance is screened from the other objects based on the positional relationship information, determine first position information of the first obstacle in the target environment based on second target point cloud data of the first obstacle, so that a mobile apparatus avoids the first obstacle based on the first position information; and/or determining second position information of the second obstacle in the target environment based on second target point cloud data of the second obstacle, so that the mobile device avoids the second obstacle based on the second position information.
In a third aspect, the present disclosure further provides a computer device, comprising a processor and a memory, where the memory stores machine-readable instructions executable by the processor; the processor is configured to execute the machine-readable instructions stored in the memory, and when the machine-readable instructions are executed by the processor, the processor performs the steps in the first aspect or in any one of the possible implementations of the first aspect.
In a fourth aspect, this disclosure also provides a computer-readable storage medium having a computer program stored thereon, where the computer program is executed to perform the steps in the first aspect or any one of the possible implementation manners of the first aspect.
For the description of the effects of the detection apparatus, the computer device, and the computer-readable storage medium, reference is made to the description of the detection method, which is not repeated herein.
According to the detection method, the detection apparatus, the computer device and the storage medium provided by the embodiments of the present disclosure, first point cloud data in the target environment and target geometric features of the target object are acquired; first target point cloud data belonging to the target object are screened from the first point cloud data based on the target geometric features, and object segmentation is performed on the first point cloud data except the first target point cloud data to obtain second target point cloud data of other objects except the target object; positional relationship information of each other object and the target object is determined based on the second target point cloud data and the first target point cloud data; and a first obstacle that is located on the target object and exceeds the target object by at least a preset distance is screened from the other objects based on the positional relationship information. In contrast to the prior art, in which a protruding portion of the goods that is small in volume and closely attached to the shelf cannot be separated, the present disclosure uses a target geometric feature that is unique to the target object and excludes features not belonging to it; for example, the plane feature of a shelf does not include features of small-volume goods protruding beyond the shelf. By using the target geometric features of the target object, the first target point cloud data belonging to the target object and the second target point cloud data belonging to other objects can be accurately segmented; then, by using the positional relationship information of each other object and the target object, it can be accurately determined which other objects are first obstacles.
Further, the detection method, the detection device, the computer device and the storage medium provided by the embodiment of the disclosure are realized by obtaining a preset fitting range of the target geometric features; and fitting the first point cloud data belonging to the target object based on the fitting range and the target geometric characteristics, and determining the first target point cloud data. The first point cloud data belonging to the target object are fitted in the fitting range, and the accuracy of the determined first target point cloud data can be further improved.
Further, according to the detection method, the detection apparatus, the computer device and the storage medium provided by the embodiments of the present disclosure, in the case that the other objects include an object located between two adjacent target objects, second point cloud data located between the two adjacent target objects is screened from the first point cloud data based on the first target point cloud data, and object segmentation is performed on the second point cloud data to obtain second target point cloud data of other objects except the target object. The second point cloud data is a subset of the first point cloud data except the first target point cloud data, and only the screened second point cloud data is subjected to object segmentation, which reduces the amount of point cloud data to be segmented.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for use in the embodiments will be briefly described below. The drawings herein are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It is appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope, for those skilled in the art will be able to derive additional related drawings therefrom without inventive effort.
Fig. 1 shows a flow chart of a detection method provided by an embodiment of the present disclosure;
FIG. 2 illustrates a schematic diagram of a target environment provided by an embodiment of the present disclosure;
fig. 3 illustrates a flow chart for fitting first point cloud data and determining first target point cloud data provided by an embodiment of the present disclosure;
fig. 4 is a schematic flow chart illustrating environmental perception and obstacle avoidance performed by the robot according to the embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a detection apparatus provided by an embodiment of the present disclosure;
fig. 6 shows a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of embodiments of the present disclosure, as generally described and illustrated herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
Furthermore, the terms "first," "second," and the like in the description and in the claims, and in the drawings described above, in the embodiments of the present disclosure are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein.
Reference herein to "a plurality or a number" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Research shows that when a robot travels in a narrow shelf aisle, goods that protrude beyond the shelves on either side of the robot may collide with it and disrupt its normal operation. In general, the protruding portion of the goods is small in volume and closely attached to the shelf. Clustering the collected point cloud data cannot separate such a small, closely attached protruding portion from the shelf; the protruding portion can only be treated as one body with the shelf. Accurate segmentation therefore cannot be achieved, the robot's environmental perception is degraded, and accurate obstacle avoidance becomes impossible; when a collision occurs, the robot is easily damaged and its maintenance cost increases.
Based on the above-described research, the present disclosure provides a detection method, apparatus, computer device, and storage medium that utilize a target geometric feature that belongs to a unique geometric feature of a target object, excluding features that do not belong to the target object, such as shelf plane features that do not include features of smaller volume of goods beyond the shelf. By using the target geometric characteristics of the target object, the first target point cloud data belonging to the target object and the second target point cloud data belonging to other objects can be accurately segmented, and then, by using the position relation information of each other object and the target object, which other objects belong to the first obstacle can be accurately distinguished.
The above drawbacks are findings obtained by the inventor through practice and careful study; therefore, the process of discovering the above problems, as well as the solutions that the present disclosure proposes for them below, should be regarded as the inventor's contribution to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
It should be noted that specific terms mentioned in the embodiments of the present disclosure include:
1. The K-means clustering algorithm is an iteratively solved cluster analysis algorithm. To divide the data into K groups, K objects are randomly selected as initial cluster centers, the distance between each object and each seed cluster center is calculated, and each object is assigned to the cluster center nearest to it.
2. DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a representative density-based clustering algorithm. Unlike partitioning and hierarchical clustering methods, it defines a cluster as the largest set of density-connected points; it can thus partition regions of sufficiently high density into clusters and find clusters of arbitrary shape in a spatial database containing noise.
3. RANSAC is an abbreviation of Random Sample Consensus; it is an algorithm that estimates the parameters of a mathematical model from a set of sample data containing outliers, so as to obtain valid sample data.
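As a concrete illustration of the RANSAC idea above (not the implementation used by the disclosure), the following self-contained sketch repeatedly hypothesizes a plane from three random points and keeps the hypothesis with the most inliers; all names and parameter values are illustrative.

```python
import numpy as np

def fit_plane_ransac(points, dist_thresh=0.01, iters=300, seed=0):
    """Basic RANSAC plane fit: hypothesize a plane normal . p + d = 0 from 3
    random points, count inliers within dist_thresh, keep the best model."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]              # plane passes through the first sample point
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

# 100 points on the plane x = 1 plus 20 scattered outliers.
gen = np.random.default_rng(1)
on_plane = np.column_stack([np.ones(100), gen.uniform(0, 1, 100), gen.uniform(0, 1, 100)])
cloud = np.vstack([on_plane, gen.uniform(0, 2, size=(20, 3))])
(normal, d), inliers = fit_plane_ransac(cloud)
```

Because the outliers never form a denser consensus than the true plane, the recovered normal is (approximately) the plane's x-axis normal despite the contaminated input, which is exactly the property the disclosure relies on when fitting shelf or ground planes.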
To aid understanding of the embodiments, an application scenario of the detection method disclosed in the embodiments of the present disclosure is first described in detail. The detection method may be applied to a scenario in which a robot performs environment perception and obstacle avoidance: specifically, the robot travels in narrow aisles, precisely distinguishes the shelves on both sides from the goods protruding beyond those shelves, and treats the protruding goods as obstacles. A sensor may be mounted on the robot to detect environmental information in the robot's operating space, including state information of the goods on the shelves on both sides of the narrow aisle and information about suspended obstacles between those shelves, where the goods state information includes at least one of the following: the goods are inside the shelf (not protruding beyond it); the goods protrude beyond the shelf; the goods are outside the shelf (for example, on the ground, or above the ground between the shelves on both sides of the narrow aisle while falling from a shelf).
The execution subject of the detection method provided by the embodiments of the present disclosure is generally a computer device with certain computing power, and in some possible implementations, the detection method may be implemented by a processor calling a computer readable instruction stored in a memory.
The following describes the detection method provided by the embodiments of the present disclosure by taking the execution subject as a computer device as an example.
Referring to the above description of an application scenario of the detection method disclosed in the embodiment of the present disclosure, as shown in fig. 1, it is a flowchart of the detection method provided in the embodiment of the present disclosure, where the method includes steps S101 to S104, where:
s101: and acquiring first point cloud data in the target environment and target geometric characteristics of the target object.
In this step, the target environment may include an environment where the target object is located, for example, in the field of warehouse logistics, a scene where the robot runs in a warehouse is taken as an example, and the target environment includes a robot, a shelf, goods, other devices and equipment, and the like. Additionally, the target object may include a shelf and/or a floor.
The first point cloud data may be point cloud data of the target environment collected by a sensor mounted on the robot, where each point in the first point cloud data has unique position coordinates in the robot coordinate system. Here, the sensor may be installed at the top of the robot's gantry to collect information about the target environment in the robot's operating space, specifically the environmental information ahead of the robot and the state of the goods on the shelves on both sides of the robot. The sensor may be a three-dimensional perception sensor, including but not limited to a time-of-flight camera, a structured-light camera, a binocular camera, a lidar sensor, a millimeter-wave radar sensor, and the like.
In this step, the target geometric feature may include a feature of a preset plane of the shelf and/or a ground plane feature, where the preset plane may include a plane of the shelf perpendicular to the goods entrance/exit direction and/or to the horizontal plane. Fig. 2 is a top-view schematic diagram of a target environment, in which 21 denotes the robot, 22 denotes the robot's forward direction, 23 denotes the shelf on the robot's left, 24 denotes the shelf on the robot's right, 25 denotes the goods entrance/exit direction, and 26 denotes a preset plane of a shelf, which may be the right-hand plane of the left shelf 23, i.e., preset plane 26-A, or the left-hand plane of the right shelf 24, i.e., preset plane 26-B.
S102: Screen first target point cloud data belonging to the target object from the first point cloud data based on the target geometric features, and perform object segmentation on the first point cloud data other than the first target point cloud data to obtain second target point cloud data of other objects except the target object.
First, based on the target geometric features, an equation describing the target geometric features of the target object can be established; the first point cloud data that belong to the target object and satisfy the target geometric features are screened from the first point cloud data and fitted, and the first target point cloud data are thereby determined. Then, the first target point cloud data are clustered, segmented out, and stored separately.
Continuing with the above example, first, a general plane equation may be established in the robot coordinate system o-xyz, i.e.

ax + by + cz + d = 0

where a, b, and c respectively represent the coefficients of x, y, and z; x represents the coordinate value in the x-axis direction, y the coordinate value in the y-axis direction, and z the coordinate value in the z-axis direction; d is the constant term; and M represents the distance between the preset plane 26-A and the preset plane 26-B. It should be noted that, in general, the robot travel path is the center line between the preset plane 26-A and the preset plane 26-B. Referring to fig. 2, the right direction of the robot is taken as the positive x-axis direction of the robot coordinate system o-xyz, the forward direction of the robot as the positive y-axis direction, and the vertically upward direction as the positive z-axis direction.
Then, based on the feature of the preset plane 26-A of the robot's left shelf, a plane equation belonging to the preset plane 26-A can be established in the robot coordinate system o-xyz. Since the preset plane 26-A is parallel to the yoz plane and passes through the point (-M/2, 0, 0) of the robot coordinate system o-xyz, the plane equation of the preset plane 26-A is

x = -M/2
Likewise, based on the feature of the preset plane 26-B of the robot's right shelf, a plane equation belonging to the preset plane 26-B can be established in the robot coordinate system o-xyz. Since the preset plane 26-B is parallel to the yoz plane and passes through the point (M/2, 0, 0) of the robot coordinate system o-xyz, the plane equation of the preset plane 26-B is

x = M/2
Taking the preset plane 26-A and the preset plane 26-B as examples, in a specific implementation, based on the position coordinates of the first point cloud data in the robot coordinate system and according to the plane equations of the preset planes 26-A and 26-B, the first target point cloud data belonging to the preset plane 26-A and the first target point cloud data belonging to the preset plane 26-B are screened from the first point cloud data, and each set is separated from the first point cloud data and stored separately.
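The screening step above can be sketched as follows. This is a minimal illustration, not part of the original disclosure: the function name, the (N, 3) array layout, and the inlier tolerance `tol` are assumptions.

```python
import numpy as np

def split_shelf_planes(points, m, tol=0.02):
    """Split a point cloud into preset-plane points and the rest.

    points: (N, 3) array in the robot frame o-xyz (x right, y forward, z up).
    m:      distance between preset plane 26-A (x = -m/2) and 26-B (x = +m/2).
    tol:    assumed inlier tolerance in metres.
    """
    x = points[:, 0]
    on_a = np.abs(x + m / 2.0) < tol   # first target point cloud data of 26-A
    on_b = np.abs(x - m / 2.0) < tol   # first target point cloud data of 26-B
    rest = ~(on_a | on_b)              # first point cloud data left for S102
    return points[on_a], points[on_b], points[rest]
```

The three returned arrays correspond to the two separately stored sets of first target point cloud data and the remaining first point cloud data passed on to the segmentation step.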
After the first target point cloud data are segmented out, it is detected, among the first point cloud data remaining between two adjacent target objects, whether the distance from each point to the first target point cloud data exceeds at least the preset distance; the first point cloud data exceeding at least the preset distance are clustered and object segmentation is performed to obtain second target point cloud data of obstacles. Specifically, the KMeans algorithm or DBSCAN may be used to analyze the first point cloud data exceeding the preset distance: according to the similarity principle, points with high similarity are divided into the same cluster to obtain clustered objects, and the clustered objects are segmented to obtain the second target point cloud data belonging to obstacles.
It should be noted that the position coordinates of the separately stored first target point cloud data in the robot coordinate system are known, so the distance from each remaining point located between two adjacent target objects to the first target point cloud data can be calculated from these known position coordinates.
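A sketch of this distance-then-cluster step follows. For self-containment it uses a tiny radius-based connected-components routine as a simplified stand-in for the DBSCAN/KMeans step the text names; the `eps` value and the point-to-plane distance along x are illustrative assumptions.

```python
import numpy as np

def cluster_by_radius(points, eps):
    """Tiny stand-in for the DBSCAN step: connected components in which
    neighbouring points closer than eps are linked (no noise handling)."""
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        stack, labels[i] = [i], current
        while stack:
            j = stack.pop()
            near = np.linalg.norm(points - points[j], axis=1) < eps
            for k in np.flatnonzero(near):
                if labels[k] == -1:
                    labels[k] = current
                    stack.append(k)
        current += 1
    return labels

def segment_obstacle_clusters(points, plane_x, preset_distance, eps=0.1):
    """Cluster the points whose distance from the shelf plane x = plane_x
    exceeds the preset distance; each cluster is one candidate obstacle."""
    d = np.abs(points[:, 0] - plane_x)       # point-to-plane distance
    candidates = points[d > preset_distance]
    if len(candidates) == 0:
        return []
    labels = cluster_by_radius(candidates, eps)
    return [candidates[labels == k] for k in range(labels.max() + 1)]
```

Each returned array is the second target point cloud data of one segmented object.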
Illustratively, other objects include all objects other than the target object, for example, in the case that the target object is a shelf and/or a ground, the other objects may include goods located on the shelf, or goods located on the ground, or other obstacles located on the ground, or obstacles suspended above a roadway between adjacent shelves, etc. The second target point cloud data may include point cloud data of goods on shelves, or point cloud data of goods on the ground, or point cloud data of other obstacles on the ground, or point cloud data of obstacles suspended above a roadway between adjacent shelves, or the like.
S103: Determine the position relationship information of each other object and the target object based on the second target point cloud data and the first target point cloud data.
In this step, the position relationship information may include at least one of the following: the other object is on the target object; the other object is located on the target object and extends beyond it; the other object is not on the target object.
For example, the first target point cloud data may be the point cloud data of the preset plane 26-A of the shelf, and the second target point cloud data may be the point cloud data belonging to the goods 231 on the left shelf 23. Based on the position coordinates of the first and second target point cloud data in the robot coordinate system, the position coordinates of the point cloud data of each item of goods on the left shelf 23 and the position coordinates of the preset plane 26-A are determined, and the position relationship information between each item of goods and the left shelf 23 is then determined: goods on the left shelf 23, such as goods 231-A; goods located on the left shelf 23 and extending beyond it, such as goods 231-B.
S104: Screen, from the other objects and based on the position relationship information, a first obstacle that is located on the target object and exceeds the target object by at least the preset distance.
In a specific implementation, when the position relationship information indicates that another object is located on the target object and extends beyond it, the length by which the other object exceeds the target object in the preset direction is determined; when this length is greater than the preset distance, the other object is taken as a first obstacle.
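This check can be sketched as follows; the function, the `side` argument, and the use of the x coordinate as the preset direction (27/28 in fig. 2) are assumptions for the fig. 2 geometry, not part of the original disclosure.

```python
import numpy as np

def is_first_obstacle(object_points, plane_x, preset_distance, side="left"):
    """Return True when the object protrudes past the shelf plane x = plane_x
    by more than preset_distance in the preset direction.

    side="left":  plane 26-A, protrusion measured along +x (direction 27).
    side="right": plane 26-B, protrusion measured along -x (direction 28).
    """
    x = np.asarray(object_points)[:, 0]
    protrusion = np.max(x - plane_x) if side == "left" else np.max(plane_x - x)
    return bool(protrusion > preset_distance)
```

For example, goods whose farthest point lies 0.1 m past plane 26-A are a first obstacle under a 0.05 m preset distance but not under a 0.2 m one.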
Here, the preset distance may be determined according to the specific application, and the embodiments of the present disclosure do not particularly limit it. Without departing from the scope of the present disclosure, a person skilled in the art may modify the preset distance in various ways, and such modifications shall fall within the scope of the present disclosure.
Illustratively, referring to fig. 2, the preset direction may include the cargo entrance/exit direction; for the preset plane 26-A the preset direction may be the direction indicated by 27, and for the preset plane 26-B the direction indicated by 28. Taking the preset plane 26-A as reference, the goods that are on the left shelf 23 and exceed the preset plane 26-A by at least the preset distance are taken as a first obstacle, such as the part of goods 231-B protruding beyond the preset plane 26-A. Here, the preset distance may be determined based on the distance M between the preset planes 26-A and 26-B and the width L occupied by the robot in the lane between them; for example, the preset distance may be (M - L) / 2.
Then, based on the second target point cloud data of the first obstacle, first position information of the first obstacle in the target environment is determined, so that the mobile device avoids the first obstacle based on the first position information.
For example, the robot may perform obstacle avoidance processing according to first position information of the first obstacle in the operating environment, stop operating in time, and send a signal indicating that the first obstacle is present and position coordinates of the first obstacle in the target environment to the service desk.
In some embodiments, in a case where the positional relationship information indicates that the other object is not on the target object, the other object outside the target object is taken as the second obstacle. Second position information of the second obstacle in the target environment may then be determined based on second target point cloud data of the second obstacle to cause the mobile device to avoid the second obstacle based on the second position information.
For example, the case where the position relationship information indicates that the other object is not on the target object may include the case where the other object is located on the ground between the preset planes 26-A and 26-B, or the case where it is suspended in the air between them; the other object in these cases is taken as a second obstacle. The robot may then perform obstacle avoidance according to the second position information of the second obstacle in the operating environment, stop in time, and send to the service desk a signal indicating that a second obstacle has appeared together with its position coordinates in the target environment.
In some embodiments, in the case that the target object includes the ground surface and the target geometric feature includes the ground plane feature, the second obstacle located on the ground surface may be further screened from other objects based on the information of the position relationship between the other objects and the ground surface.
Here, the method for screening the second obstacle on the ground from the other objects may refer to the method for screening the first obstacle in step S104. In implementation, based on the ground plane feature, a plane equation belonging to the ground may be established in the robot coordinate system o-xyz; since the ground is parallel to the xoy plane and passes through the origin (0, 0, 0) of the robot coordinate system o-xyz, the plane equation of the ground is

Z = 0
Then, the second target point cloud data located on the ground, that is, the point cloud data whose distance to the ground is zero, may be determined based on the position coordinates of the second target point cloud data in the robot coordinate system and the ground plane equation Z = 0, and the other objects corresponding to the second target point cloud data located on the ground are taken as second obstacles.
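A small sketch of this ground test follows; the tolerance `tol` is an assumption added for sensor noise, whereas with ideal data the distance would be exactly zero as the text states.

```python
import numpy as np

def is_on_ground(cluster, tol=0.01):
    """True when the object's point cloud touches the ground plane Z = 0,
    i.e. the smallest |z| in the cluster is within tol of the plane."""
    cluster = np.asarray(cluster)
    return bool(np.min(np.abs(cluster[:, 2])) < tol)
```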
The above steps S101 to S104 utilize the target geometric features, which are geometric features unique to the target object and do not include features of objects that do not belong to it; for example, the shelf plane feature does not include the features of small goods protruding beyond the shelf. By using the target geometric features of the target object, the first target point cloud data belonging to the target object and the second target point cloud data belonging to other objects can be accurately segmented; then, using the position relationship information of each other object with the target object, it can be accurately distinguished which other objects belong to the first obstacle.
Regarding the screening in step S102 of first target point cloud data belonging to the target object from the first point cloud data based on the target geometric features, in a possible implementation the first point cloud data that belong to the target object and satisfy the target geometric features are fitted. Specifically, as shown in fig. 3, the flow of fitting the first point cloud data and determining the first target point cloud data includes steps S301 to S302:
S301: Acquire a preset fitting range of the target geometric features.
S302: Fit the first point cloud data belonging to the target object based on the fitting range and the target geometric features, and determine the first target point cloud data.
Here, the fitting range is a range defined for the RANSAC algorithm. The RANSAC algorithm is used to fit the first point cloud data that belong to the target object and satisfy the target geometric features; restricting the RANSAC fit to the fitting range can further improve the accuracy of the determined first target point cloud data.
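A minimal RANSAC plane-fitting sketch follows; the iteration count and the distance threshold standing in for the preset fitting range are assumed values, and the function returns only the inlier mask rather than a full model.

```python
import numpy as np

def ransac_plane(points, dist_thresh=0.02, iters=200, rng=None):
    """Minimal RANSAC plane fit: repeatedly sample 3 points, form the plane
    through them from their normal, and keep the plane with the most inliers
    within dist_thresh (the preset fitting range)."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                    # degenerate (collinear) sample
            continue
        n = n / norm
        d = np.abs((points - p0) @ n)      # point-to-plane distances
        inliers = d < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

Applied to the shelf case, the inliers of the winning plane are the first target point cloud data of one preset plane.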
Regarding the object segmentation in step S102 of the first point cloud data other than the first target point cloud data to obtain second target point cloud data of other objects, in a possible embodiment the segmentation of other objects may be performed only on the first point cloud data at the desired positions, which reduces the amount of object segmentation and improves segmentation efficiency. In a specific implementation, first, second point cloud data located between two adjacent target objects can be screened from the first point cloud data based on the first target point cloud data; then, object segmentation is performed on the second point cloud data to obtain the second target point cloud data of other objects except the target object.
Illustratively, it is detected whether the distance between the second point cloud data and the first target point cloud data exceeds at least the preset distance; the second point cloud data exceeding at least the preset distance are clustered and object segmentation is performed to obtain second target point cloud data of obstacles.
Here, the second point cloud data are a subset of the first point cloud data other than the first target point cloud data. For example, when the first target point cloud data of the preset planes 26-A and 26-B are known, the position coordinates of the two planes can be determined; the second point cloud data located between the preset planes 26-A and 26-B are then screened out from the first point cloud data and clustered, the second target point cloud data belonging to other objects are determined, and the second target point cloud data are segmented from the second point cloud data in preparation for determining the position relationship information between the other objects and the target object.
Here, object segmentation is performed only on the screened second point cloud data, and the amount of object segmentation can be reduced and the segmentation efficiency can be improved as compared with the object segmentation processing performed on the first point cloud data other than the first target point cloud data.
Regarding the object segmentation in step S102 of the first point cloud data other than the first target point cloud data to obtain second target point cloud data of other objects, in another possible implementation, unnecessary obstacle point cloud data in the first point cloud data may be removed first, and the retained first point cloud data then segmented. In a specific implementation, edge position information of the target object is determined based on the first target point cloud data; based on the edge position information, point cloud data not located between two adjacent target objects are removed from the first point cloud data, and the remaining point cloud data are taken as third point cloud data; object segmentation is then performed on the third point cloud data to obtain second target point cloud data of other objects except the target object.
Here, the obstacle point cloud data may be the point cloud data not located between two adjacent target objects; for example, as shown in fig. 2, the obstacle point cloud data may be all point cloud data other than the point cloud data between the preset plane 26-A and the preset plane 26-B (the latter including the point cloud data of the preset planes 26-A and 26-B themselves).
The edge position information of the target object includes position information of the preset plane 26-a and position information of the preset plane 26-B. The third point cloud data includes point cloud data between the preset plane 26-A and the preset plane 26-B.
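This removal step can be sketched as a simple crop between the two edge planes; the `margin` parameter is an assumption, and with margin = 0 the plane points at x = ±m/2 themselves are kept, consistent with the parenthetical above.

```python
import numpy as np

def crop_to_lane(points, m, margin=0.0):
    """Remove obstacle (clutter) point cloud data outside the lane: keep only
    points between the edge planes x = -m/2 and x = +m/2, inclusive, as the
    third point cloud data."""
    x = points[:, 0]
    keep = (x >= -m / 2.0 - margin) & (x <= m / 2.0 + margin)
    return points[keep]
```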
By removing the obstacle point cloud data first and then segmenting, the influence of the obstacle point cloud data on the object segmentation of the third point cloud data can be avoided, and the accuracy of object segmentation improved.
Taking an application scenario in which the robot performs environment perception and obstacle avoidance as an example, steps S101 to S104 are explained in detail. As shown in fig. 4, which is a schematic flow diagram of the robot performing environment perception and obstacle avoidance, the method includes steps S401 to S406:
S401: The robot collects first point cloud data through a sensor installed at the top of the robot gantry.
S402: Fit and segment, based on the shelf plane feature, the first target point cloud data of the preset planes of the shelves in the first point cloud data.
S403: Judge whether the distance from the first point cloud data between the preset planes of two adjacent shelves to the preset plane exceeds at least the preset distance; if yes, execute step S404; if not, return to step S401.
S404: Cluster the first point cloud data exceeding at least the preset distance and perform object segmentation to obtain second target point cloud data of other objects.
S405: Take other objects whose protrusion exceeds the preset distance but is smaller than the preset threshold as first obstacles on the shelf, and other objects whose protrusion is greater than or equal to the preset threshold as second obstacles on the ground and/or suspended above the roadway between adjacent shelves.
S406: Send the position information of the first obstacle and/or the second obstacle in the target environment to the robot, so that the robot performs obstacle avoidance processing based on the obstacle position information.
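Steps S402 to S405 for the left shelf plane x = -M/2 can be sketched end to end as follows; the tolerance `tol` and the naming are assumptions, and the clustering of candidates in S404 is omitted here for brevity. The S405 split uses the protrusion distance against the preset threshold, as described above.

```python
import numpy as np

def detect_obstacles(points, m, preset_distance, preset_threshold, tol=0.02):
    """Sketch of S402-S405 for preset plane 26-A (x = -m/2).

    Returns (first_obstacle_points, second_obstacle_points):
    protrusion in (preset_distance, preset_threshold) -> first obstacle
    (goods on the shelf); protrusion >= preset_threshold -> second obstacle
    (on the ground or suspended in the lane).
    """
    x = points[:, 0]
    shelf = np.abs(x + m / 2.0) < tol              # S402: plane inliers
    d = x + m / 2.0                                # protrusion past 26-A
    in_lane = (~shelf) & (d > 0) & (x < m / 2.0)
    protruding = in_lane & (d > preset_distance)   # S403 candidates
    first = protruding & (d < preset_threshold)    # S405: first obstacles
    second = protruding & (d >= preset_threshold)  # S405: second obstacles
    return points[first], points[second]
```

In S406, the coordinates of the returned points would then be sent to the robot for obstacle avoidance.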
It will be understood by those skilled in the art that, in the method of the present disclosure, the order in which the steps are written does not imply a strict execution order or any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible inherent logic.
Based on the same inventive concept, a detection device corresponding to the detection method is also provided in the embodiments of the present disclosure, and since the principle of solving the problem of the detection device in the embodiments of the present disclosure is similar to the detection method in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 5, a schematic diagram of a detection apparatus provided in an embodiment of the present disclosure is shown. The apparatus includes: a data acquisition module 501, a screening and segmentation module 502, an information determination module 503, and an object screening module 504; wherein:
a data obtaining module 501, configured to obtain first point cloud data in a target environment and a target geometric feature of a target object;
a screening and segmentation module 502, configured to screen first target point cloud data belonging to the target object from the first point cloud data based on the target geometric features, and perform object segmentation on the first point cloud data other than the first target point cloud data to obtain second target point cloud data of other objects except the target object;
an information determining module 503, configured to determine, based on the second target point cloud data and the first target point cloud data, position relationship information of each other object and the target object;
an object screening module 504, configured to screen, based on the positional relationship information, a first obstacle that is located on the target object and exceeds the target object by at least a preset distance from the other objects.
In an optional implementation, the screening and segmentation module 502 is configured to obtain a preset fitting range of the target geometric features, fit the first point cloud data belonging to the target object based on the fitting range and the target geometric features, and determine the first target point cloud data.
In an alternative embodiment, the other objects include an object located between two adjacent target objects;
the screening and segmentation module 502 is configured to screen, based on the first target point cloud data, second point cloud data located between two adjacent target objects from the first point cloud data, and perform object segmentation on the second point cloud data to obtain second target point cloud data of other objects except the target object.
In an optional embodiment, the screening and segmentation module 502 is configured to determine edge position information of the target object based on the first target point cloud data; remove, based on the edge position information, point cloud data not located between two adjacent target objects from the first point cloud data, taking the remaining point cloud data as third point cloud data; and perform object segmentation on the third point cloud data to obtain second target point cloud data of other objects except the target object.
In an optional embodiment, the position relationship information includes at least one of the following: the other object is on the target object; the other object is located on the target object and extends beyond it; the other object is not on the target object.
In an optional implementation, the object screening module 504 is configured to determine, when the position relationship information indicates that the other object is located on the target object and extends beyond it, the length by which the other object exceeds the target object in the preset direction, and, when the length is greater than the preset distance, take the other object as the first obstacle.
In an optional implementation, the object screening module 504 is further configured to, after determining the position relationship information of each other object and the target object based on the second target point cloud data and the first target point cloud data, regarding other objects outside the target object as second obstacles if the position relationship information indicates that the other objects are not on the target object.
In an alternative embodiment, the target object comprises a shelf; the target geometric features comprise features of a preset plane of the shelf; and the preset plane comprises a plane of the shelf perpendicular to the cargo entrance/exit direction and/or to the horizontal plane.
In an alternative embodiment, the target object comprises the ground; the target geometric feature comprises a ground plane feature; the object screening module 504 is further configured to screen a second obstacle located on the ground from the other objects based on the information of the position relationship between the other objects and the ground.
In an optional embodiment, the detection apparatus further includes an obstacle avoidance processing module 505, configured to, after a first obstacle that is located on the target object and exceeds the target object by a preset distance is screened from the other objects based on the positional relationship information, determine first position information of the first obstacle in the target environment based on second target point cloud data of the first obstacle, so that a mobile apparatus avoids the first obstacle based on the first position information; and/or determining second position information of the second obstacle in the target environment based on second target point cloud data of the second obstacle, so that the mobile device avoids the second obstacle based on the second position information.
The description of the processing flow of each module in the detection apparatus and the interaction flow between each module may refer to the related description in the above detection method embodiment, and will not be described in detail here.
An embodiment of the present disclosure further provides a computer device, as shown in fig. 6, which is a schematic structural diagram of a computer device provided in an embodiment of the present disclosure, and includes:
a processor 61 and a memory 62; the memory 62 stores machine-readable instructions executable by the processor 61, the processor 61 being configured to execute the machine-readable instructions stored in the memory 62, the processor 61 performing the following steps when the machine-readable instructions are executed by the processor 61:
S101: Acquire first point cloud data in the target environment and target geometric features of the target object.
S102: Screen first target point cloud data belonging to the target object from the first point cloud data based on the target geometric features, and perform object segmentation on the first point cloud data other than the first target point cloud data to obtain second target point cloud data of other objects except the target object.
S103: Determine the position relationship information of each other object and the target object based on the second target point cloud data and the first target point cloud data.
S104: Screen, from the other objects and based on the position relationship information, a first obstacle that is located on the target object and exceeds the target object by at least the preset distance.
The memory 62 includes an internal memory 621 and an external memory 622; the internal memory 621 temporarily stores operation data of the processor 61 and data exchanged with the external memory 622, such as a hard disk, and the processor 61 exchanges data with the external memory 622 through the internal memory 621.
For the specific execution process of the instruction, reference may be made to the steps of the detection method described in the embodiments of the present disclosure, and details are not described here.
The embodiments of the present disclosure also provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, executes the steps of the detection method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the detection method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute the steps of the detection method described in the above method embodiments, which may be referred to specifically for the above method embodiments, and are not described herein again.
The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a software development kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiment, and is not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implementing, and for example, a plurality of units or components may be combined, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used for illustrating the technical solutions of the present disclosure and not for limiting the same, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive of the technical solutions described in the foregoing embodiments or equivalent technical features thereof within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.