Method and device for measuring damaged area of vehicle

Document No.: 9384 · Publication date: 2021-09-17

1. A method of measuring a damaged area of a vehicle, comprising:

acquiring an image to be processed of a vehicle;

obtaining a damaged area of the vehicle in the image to be processed according to the image to be processed;

obtaining first position information of key points included in the image to be processed according to the image to be processed; wherein the key points are points set at preset positions on the 3D model of the vehicle;

acquiring three-dimensional coordinates of key points included in the image to be processed in the 3D model, and acquiring distances between the key points according to the three-dimensional coordinates of the key points in the 3D model;

obtaining at least one key point group according to the distance; the key point group comprises at least three key points, and the union of the at least one key point group comprises all key points on the image to be processed;

for each key point group, if the key points included in the key point group are coplanar, fitting the key points included in the key point group to obtain a first fitting plane;

determining a transformation relation between the image to be processed and the first fitting plane according to the key points included in the image to be processed and the first position information;

obtaining a projection area of the damage area in the first fitting plane according to the transformation relation;

and measuring the projection area to obtain a measurement result.

2. The method of claim 1, wherein determining that the key points included in the key point group are coplanar comprises:

judging whether the distance between a key point included in the key point group and a first plane is smaller than or equal to a preset threshold value, wherein the first plane is a plane determined by at least three key points in the key point group;

and if the distance between the key points included in the key point group and the first plane is smaller than or equal to the preset threshold, determining that the key points included in the key point group are coplanar.

3. The method according to claim 1, wherein the determining a transformation relation between the image to be processed and a first fitting plane according to the key points included in the image to be processed and the first position information comprises:

acquiring second position information of key points included in the image to be processed in the first fitting plane;

and determining a transformation relation between the image to be processed and the first fitting plane according to the first position information and the second position information.

4. The method of claim 1, wherein the obtaining a projection area of the damaged area in the first fitting plane according to the transformation relation comprises:

acquiring contour points of the damaged area and third position information of the contour points in the image to be processed;

according to the third position information and the transformation relation, fourth position information of the contour point in the first fitting plane is obtained;

and determining the projection area according to the fourth position information.

5. The method according to any one of claims 1-4, wherein the obtaining a damaged area of the vehicle in the image to be processed from the image to be processed comprises:

inputting the image to be processed into a first neural network model to obtain the damaged area; the first neural network model is used for acquiring a damage area of the vehicle in the image.

6. The method according to any one of claims 1-4, wherein the obtaining, according to the image to be processed, first position information of key points included in the image to be processed comprises:

marking key points in the image to be processed;

inputting the image to be processed after the key points are marked into a second neural network model to obtain first position information of the key points in the image to be processed; the second neural network model is used for obtaining the position information of the key points in the image.

7. The method of any of claims 1-4, wherein the measurement result comprises at least one of a length, a width, and an area of the damaged region;

wherein the length of the damage region is the length of the minimum circumscribed rectangle of the projection region;

the width of the damage region is the width of the minimum circumscribed rectangle of the projection region;

the area of the damage region is the area of the projection region.

8. A vehicle damage area measurement device, comprising:

the image acquisition module is used for acquiring an image to be processed of the vehicle;

the first area determining module is used for obtaining a damaged area of the vehicle in the image to be processed according to the image to be processed;

the position acquisition module is used for acquiring first position information of key points included in the image to be processed according to the image to be processed; wherein the key points are points set at preset positions on the 3D model of the vehicle;

the distance acquisition module is used for acquiring three-dimensional coordinates of key points included in the image to be processed in the 3D model and acquiring distances among the key points according to the three-dimensional coordinates of the key points in the 3D model;

a key point group obtaining module, configured to obtain at least one key point group according to the distance; the key point group comprises at least three key points, and the union of the at least one key point group comprises all key points on the image to be processed;

the judging module is used for judging whether the key points included in the key point groups are coplanar or not for each key point group;

the fitting module is used for fitting the key points included in the key point group to obtain a first fitting plane when the key points included in the key point group are coplanar;

the relation acquisition module is used for determining a transformation relation between the image to be processed and the first fitting plane according to the key points included in the image to be processed and the first position information;

the second region determining module is used for obtaining a projection region of the damage region in the first fitting plane according to the transformation relation;

and the measuring module is used for measuring the projection area to obtain a measuring result.

9. The apparatus according to claim 8, wherein the determining module is specifically configured to:

judging whether the distance between a key point included in the key point group and a first plane is smaller than or equal to a preset threshold value, wherein the first plane is a plane determined by at least three key points in the key point group;

and if the distance between the key points included in the key point group and the first plane is smaller than or equal to the preset threshold, determining that the key points included in the key point group are coplanar.

10. The apparatus according to claim 8, wherein the relationship obtaining module is specifically configured to:

acquiring second position information of key points included in the image to be processed in the first fitting plane;

and determining a transformation relation between the image to be processed and the first fitting plane according to the first position information and the second position information.

11. The apparatus of claim 8, wherein the second region determining module is specifically configured to:

acquiring contour points of the damaged area and third position information of the contour points in the image to be processed;

according to the third position information and the transformation relation, fourth position information of the contour point in the first fitting plane is obtained;

and determining the projection area according to the fourth position information.

12. The apparatus according to any one of claims 8-11, wherein the first region determining module is specifically configured to:

inputting the image to be processed into a first neural network model to obtain the damaged area; the first neural network model is used for acquiring a damage area of the vehicle in the image.

13. The apparatus according to any one of claims 8 to 11, wherein the location acquisition module is specifically configured to:

marking key points in the image to be processed;

inputting the image to be processed after the key points are marked into a second neural network model to obtain first position information of the key points in the image to be processed; the second neural network model is used for obtaining the position information of the key points in the image.

14. The apparatus of any one of claims 8-11, wherein the measurement result comprises at least one of a length, a width, and an area of the damage region;

wherein the length of the damage region is the length of the minimum circumscribed rectangle of the projection region;

the width of the damage region is the width of the minimum circumscribed rectangle of the projection region;

the area of the damage region is the area of the projection region.

15. An electronic device, comprising:

at least one processor; and a memory communicatively coupled to the at least one processor;

wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.

16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-7.

17. A computer program product comprising a computer program which, when executed by a processor, implements the method of any one of claims 1-7.

Background

In recent years, as the number of motor vehicles in use has grown, the number of vehicles damaged in traffic accidents and similar incidents has remained high. After a motor vehicle is damaged, the damaged area usually must be measured as a basis for the insurance company's claim settlement.

At present, damage assessment of a damaged vehicle area is usually carried out by an investigator who surveys and judges the accident scene in person.

Because survey personnel must work on site, labor costs are high, the process takes a long time, and traffic congestion is easily caused. Moreover, manual on-site assessment usually determines the damaged area only qualitatively, so the efficiency and accuracy of vehicle damage assessment are low.

Disclosure of Invention

The invention provides a method and a device for measuring a damaged area of a vehicle, which improve the damage assessment precision and efficiency of the damaged area.

In a first aspect, the present invention provides a method for measuring a damaged area of a vehicle, comprising:

acquiring an image to be processed of a vehicle;

obtaining a damaged area of the vehicle in the image to be processed according to the image to be processed;

obtaining first position information of key points included in the image to be processed according to the image to be processed; wherein the key points are points set at preset positions on the 3D model of the vehicle;

determining a transformation relation between the image to be processed and a first fitting plane according to the key points included in the image to be processed and the first position information; the first fitting plane is determined according to key points included in the image to be processed on the 3D model;

obtaining a projection area of the damage area in the first fitting plane according to the transformation relation;

and measuring the projection area to obtain a measurement result.

Optionally, the determining, according to the keypoints and the first position information included in the image to be processed, a transformation relationship between the image to be processed and a first fitting plane includes:

determining the first fitting plane according to key points included in the image to be processed;

acquiring second position information of key points included in the image to be processed in the first fitting plane;

and determining a transformation relation between the image to be processed and the first fitting plane according to the first position information and the second position information.
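The text does not name the specific form of the transformation relation. When the key points lie on a plane, the mapping between their image coordinates (first position information) and their coordinates in the fitting plane (second position information) is commonly modeled as a planar homography; as a non-authoritative sketch under that assumption, the homography can be estimated from at least four correspondences with the direct linear transform (DLT). The function name below is ours, not the patent's:

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate a 3x3 homography H with dst ~ H @ src via the DLT algorithm.

    src_pts, dst_pts: (N, 2) arrays of matched points, N >= 4
    (e.g. first and second position information of the same key points).
    """
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Each correspondence contributes two linear constraints on h.
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1
```

In practice a robust estimator (e.g. RANSAC around the same DLT core) would be used to tolerate mislocated key points; the sketch omits that for brevity.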

Optionally, the determining the first fitting plane according to the key points included in the image to be processed includes:

obtaining a plurality of standard fitting planes of the 3D model, wherein the standard fitting planes are fitting planes determined according to at least three preset key points on the 3D model;

and determining the first fitting plane in the plurality of standard fitting planes according to key points included in the image to be processed.

Optionally, the obtaining of the plurality of standard fitting planes of the 3D model includes:

acquiring three-dimensional coordinates of a plurality of preset key points on the 3D model;

obtaining the distances among the preset key points according to the three-dimensional coordinates of the preset key points;

obtaining at least one key point group according to the distance; the key point group comprises at least three preset key points, and the union of the at least one key point group comprises all the preset key points on the 3D model;

for each key point group, if the preset key points included in the key point group are coplanar, fitting the preset key points included in the key point group to obtain a standard fitting plane.
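The grouping-and-fitting steps above can be sketched in NumPy as follows. This is a minimal illustration, assuming the claim-2-style coplanarity test (the distance of every group member to a plane determined by three of the group's points, compared against a preset threshold) and a least-squares plane fit via SVD; the helper names and the default threshold are hypothetical:

```python
import numpy as np

def plane_from_points(p0, p1, p2):
    """Plane through three non-collinear points: (unit normal, point on plane)."""
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n), p0

def points_coplanar(points, tol=1e-3):
    """Claim-2-style test: every point of the group lies within `tol`
    of the plane determined by the group's first three points."""
    normal, origin = plane_from_points(*points[:3])
    dists = np.abs((points - origin) @ normal)
    return bool(np.all(dists <= tol))

def fit_plane(points):
    """Least-squares plane fit: the centroid plus, as the normal, the
    right singular vector with the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    return Vt[-1], centroid
```

Groups would be formed from the pairwise distances (e.g. nearest neighbors on the 3D model); only groups passing `points_coplanar` are fitted into standard fitting planes.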

Optionally, the determining the first fitting plane in the plurality of standard fitting planes according to the keypoints included in the image to be processed includes:

and determining the first fitting plane according to the identification of the key points included in the image to be processed and the identification of the preset key points included in each standard fitting plane.

Optionally, the obtaining a projection region of the damaged region in the first fitting plane according to the transformation relation includes:

acquiring contour points of the damaged area and third position information of the contour points in the image to be processed;

according to the third position information and the transformation relation, fourth position information of the contour point in the first fitting plane is obtained;

and determining the projection area according to the fourth position information.
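Assuming the transformation relation is represented as a 3x3 homography matrix H (the text leaves the representation open), mapping the contour points' third position information to fourth position information in the first fitting plane is a single homogeneous-coordinate operation:

```python
import numpy as np

def project_points(H, pts):
    """Map image points (third position information) into the fitting plane
    (fourth position information) with a 3x3 homography H."""
    pts = np.asarray(pts, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = homo @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # divide out the projective scale
```

The projection region is then simply the polygon whose vertices are the mapped contour points.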

Optionally, the obtaining the damaged area of the vehicle in the image to be processed according to the image to be processed includes:

inputting the image to be processed into a first neural network model to obtain the damaged area; the first neural network model is used for acquiring a damage area of the vehicle in the image.

Optionally, the obtaining, according to the image to be processed, first position information of a key point included in the image to be processed, includes:

marking key points in the image to be processed;

inputting the image to be processed after the key point is calibrated into a second neural network model to obtain first position information of the key point in the image to be processed; the second neural network model is used for obtaining the position information of the key points in the image.

Optionally, the measurement result includes at least one of a length, a width, and an area of the damaged region;

wherein the length of the damage region is the length of the minimum circumscribed rectangle of the projection region;

the width of the damage region is the width of the minimum circumscribed rectangle of the projection region;

the area of the damage region is the area of the projection region.
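The three measurements can be computed directly from the projection region's contour: the area with the shoelace formula, and the length and width from a minimum-area circumscribed rectangle. A sketch, with hypothetical helper names; for simplicity the rectangle search scans the contour's own edge directions, which is exact when the contour is convex (a convex hull would be taken first otherwise):

```python
import numpy as np

def shoelace_area(poly):
    """Area of a simple polygon given as an (N, 2) array of ordered vertices."""
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def min_area_rect(poly):
    """(length, width) of the minimum-area circumscribed rectangle,
    found by aligning an axis with each contour edge in turn."""
    pts = np.asarray(poly, dtype=float)
    best = None
    n = len(pts)
    for i in range(n):
        edge = pts[(i + 1) % n] - pts[i]
        theta = np.arctan2(edge[1], edge[0])
        c, s = np.cos(-theta), np.sin(-theta)
        rot = pts @ np.array([[c, -s], [s, c]]).T  # rotate edge onto x-axis
        w, h = rot.max(axis=0) - rot.min(axis=0)
        if best is None or w * h < best[0]:
            best = (w * h, max(w, h), min(w, h))
    return best[1], best[2]
```

With the projection plane expressed in metric units, these values give the quantitative length, width, and area of the damaged region.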

In a second aspect, the present invention provides a vehicle damage region measurement apparatus, comprising:

the image acquisition module is used for acquiring an image to be processed of the vehicle;

the first area determining module is used for obtaining a damaged area of the vehicle in the image to be processed according to the image to be processed;

the position acquisition module is used for acquiring first position information of key points included in the image to be processed according to the image to be processed; wherein the key points are points set at preset positions on the 3D model of the vehicle;

the relation acquisition module is used for determining a transformation relation between the image to be processed and a first fitting plane according to the key points included in the image to be processed and the first position information; the first fitting plane is determined according to key points included in the image to be processed on the 3D model;

the second region determining module is used for obtaining a projection region of the damage region in the first fitting plane according to the transformation relation;

and the measuring module is used for measuring the projection area to obtain a measuring result.

Optionally, the relationship obtaining module is specifically configured to:

determining the first fitting plane according to key points included in the image to be processed;

acquiring second position information of key points included in the image to be processed in the first fitting plane;

and determining a transformation relation between the image to be processed and the first fitting plane according to the first position information and the second position information.

Optionally, the relationship obtaining module is specifically configured to:

obtaining a plurality of standard fitting planes of the 3D model, wherein the standard fitting planes are fitting planes determined according to at least three preset key points on the 3D model;

and determining the first fitting plane in the plurality of standard fitting planes according to key points included in the image to be processed.

Optionally, the relationship obtaining module is specifically configured to:

acquiring three-dimensional coordinates of a plurality of preset key points on the 3D model;

obtaining the distances among the preset key points according to the three-dimensional coordinates of the preset key points;

obtaining at least one key point group according to the distance; the key point group comprises at least three preset key points, and the union of the at least one key point group comprises all the preset key points on the 3D model;

for each key point group, if the preset key points included in the key point group are coplanar, fitting the preset key points included in the key point group to obtain a standard fitting plane.

Optionally, the relationship obtaining module is specifically configured to:

and determining the first fitting plane according to the identification of the key points included in the image to be processed and the identification of the preset key points included in each standard fitting plane.

Optionally, the second region determining module is specifically configured to:

acquiring contour points of the damaged area and third position information of the contour points in the image to be processed;

according to the third position information and the transformation relation, fourth position information of the contour point in the first fitting plane is obtained;

and determining the projection area according to the fourth position information.

Optionally, the first region determining module is specifically configured to:

inputting the image to be processed into a first neural network model to obtain the damaged area; the first neural network model is used for acquiring a damage area of the vehicle in the image.

Optionally, the position obtaining module is specifically configured to:

marking key points in the image to be processed;

inputting the image to be processed after the key point is calibrated into a second neural network model to obtain first position information of the key point in the image to be processed; the second neural network model is used for obtaining the position information of the key points in the image.

Optionally, the measurement result includes at least one of a length, a width, and an area of the damaged region;

wherein the length of the damage region is the length of the minimum circumscribed rectangle of the projection region;

the width of the damage region is the width of the minimum circumscribed rectangle of the projection region;

the area of the damage region is the area of the projection region.

In a third aspect, the present invention provides a vehicle damage region measurement apparatus comprising a processor and a memory. The memory is used for storing instructions, and the processor is used for executing the instructions stored in the memory, so that the device can execute the method for measuring the damaged area of the vehicle provided by any one of the embodiments of the first aspect.

In a fourth aspect, an embodiment of the present application provides a storage medium, comprising a readable storage medium and a computer program, the computer program being used to implement the method for measuring a damaged area of a vehicle provided in any one of the embodiments of the first aspect.

In a fifth aspect, the present application provides a program product including a computer program (i.e., executing instructions), the computer program being stored in a readable storage medium. The computer program may be read from a readable storage medium by at least one processor, and the execution of the computer program by the at least one processor causes the apparatus to implement the method for measuring a damaged area of a vehicle provided in any of the embodiments of the first aspect described above.

The invention provides a method and a device for measuring a damaged area of a vehicle, comprising the following steps: acquiring an image to be processed of a vehicle; obtaining a damaged area of the vehicle in the image to be processed according to the image to be processed; obtaining first position information of key points included in the image to be processed according to the image to be processed; determining a transformation relation between the image to be processed and the first fitting plane according to the key points and the first position information included in the image to be processed; the first fitting plane is determined according to key points included in the image to be processed on the 3D model; obtaining a projection area of the damage area in the first fitting plane according to the transformation relation; and measuring the projection area to obtain a measurement result. The projection area corresponding to the damage area is obtained in the plane fitted on the local outer surface of the vehicle, the quantitative value of the damage area can be obtained by measuring the projection area, and the damage assessment precision and efficiency of the damage area are improved.

Drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.

FIG. 1 is a flow chart of a method for measuring a damaged area of a vehicle according to an embodiment of the present invention;

FIG. 2 is a schematic diagram illustrating the positions of a portion of predetermined key points on a 3D model of a vehicle according to the present invention;

FIG. 3 is a schematic diagram illustrating the positions of a further portion of predetermined key points on a 3D model of a vehicle according to the present invention;

FIG. 4 is a schematic diagram of a to-be-processed image provided by the present invention;

FIG. 5 is a flowchart of determining a transformation relationship between an image to be processed and a first fitting plane according to an embodiment of the present invention;

FIG. 6 is a flowchart of obtaining a plurality of standard fitting planes for a 3D model according to an embodiment of the present invention;

FIG. 7 is a schematic diagram of preset keypoints in a standard fit plane according to an embodiment of the present invention;

fig. 8 is a schematic structural diagram of a vehicle damage area measurement device according to an embodiment of the present invention;

fig. 9 is a schematic structural diagram of a vehicle damaged area measuring device according to a second embodiment of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

The method and the device for measuring the vehicle damage area can be applied to a vehicle damage assessment scene.

Fig. 1 is a flowchart of a method for measuring a damaged area of a vehicle according to an embodiment of the present invention. The method provided by this embodiment may be performed by a device for measuring the damaged area of a vehicle. As shown in fig. 1, the method for measuring a damaged area of a vehicle according to this embodiment may include:

and S101, acquiring a to-be-processed image of the vehicle.

The image to be processed of the vehicle is obtained by shooting a damaged area of the vehicle. The to-be-processed image of the vehicle includes a damaged area of the vehicle.

And S102, obtaining a damaged area of the vehicle in the image to be processed according to the image to be processed.

Optionally, in an implementation manner, obtaining a damaged area of a vehicle in the image to be processed according to the image to be processed may include:

and processing the image to be processed by adopting an image processing method to obtain a damaged area in the image to be processed.

Because image processing methods are of many types and mature in application, obtaining the damaged area of the vehicle in the image by an image processing method is easy to implement.

In the present embodiment, the type of the image processing method is not limited.

Optionally, in another implementation, obtaining the damaged area of the vehicle in the image to be processed according to the image to be processed may include:

and inputting the image to be processed into the first neural network model to obtain a damaged area. The first neural network model is used for acquiring a damage area of the vehicle in the image.

Specifically, the first neural network model may be trained in advance. The input of the first neural network model is a damage assessment image of the vehicle, and the output is a damage area of the vehicle in the image.

The damaged area of the vehicle in the image is obtained through the neural network algorithm, and the processing efficiency of determining the damaged area is improved.

S103, obtaining first position information of key points included in the image to be processed according to the image to be processed.

The key points are points arranged at preset positions on the 3D model of the vehicle.

Specifically, key points are preset at positions with obvious features on the 3D model of the vehicle. Optionally, each preset key point has a unique identifier. In the present invention, in order to distinguish between a key point preset on a 3D model and a key point included in an image to be processed, the key point preset on the 3D model may be referred to as a preset key point. After obtaining the key points included in the image to be processed, the position information of the key points in the image to be processed, which is referred to as first position information, can be obtained.

Alternatively, the position information may include coordinate values. The unit of the coordinate value may be a length unit or a pixel unit.

Exemplarily, fig. 2 is a schematic position diagram of a part of preset key points on the 3D model of the vehicle provided by the present invention, and fig. 3 is a schematic position diagram of another part of preset key points on the 3D model of the vehicle provided by the present invention.

As shown in FIGS. 2 and 3, 63 preset key points are set on the 3D model of the vehicle, identified as 0-62. FIGS. 2 and 3 show the preset key points identified as 0-31. By the symmetry of the vehicle, the preset key points identified as 1-31 are positioned symmetrically to the preset key points identified as 32-62 (not shown).

For example, the preset key points marked as 3-6 can identify the area where the front right lamp of the vehicle is located. For another example, the preset key points marked as 15-16 can identify the area where the handle of the front door on the right side of the vehicle is located.

By setting preset key points in the 3D model of the vehicle, different areas on the vehicle can be identified by a combination of the preset key points.

It should be noted that, the position and the number of the preset key points are not limited in this embodiment. In order to facilitate vehicle damage assessment, more preset key points can be set in the area which is easy to damage so as to refine the partition granularity of the area. Fewer preset key points can also be set in areas where damage is not likely to occur, so as to reduce the number of key points.

Exemplarily, fig. 4 is a schematic diagram of an image to be processed provided by the present invention.

As shown in fig. 4, the image to be processed shows the right rear of the vehicle. The image to be processed includes a damaged area 41. The key points included in the image to be processed are the preset key points identified as 18, 24, and 25. The first position information of the key point identified as 18 may be denoted (x18-1, y18-1), the first position information of the key point identified as 24 is denoted (x24-1, y24-1), and the first position information of the key point identified as 25 is denoted (x25-1, y25-1).

It should be noted that, the present embodiment does not limit the position of the coordinate axis in the image to be processed.

It should be noted that the present embodiment is not limited to the method for constructing the 3D model of the vehicle.

For example, the following describes constructing a 3D model of a vehicle using the Structure from Motion (SfM) algorithm.

The SfM algorithm is an offline algorithm for three-dimensional reconstruction from a collection of unordered images. First, a series of images of the vehicle are taken from different angles. Then, matching points (corresponding points) are found between pairs of these images. Relative depth information is obtained from the parallax between the matching points, and a 3D model of the vehicle is thereby constructed.
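As a toy illustration of the parallax-to-depth idea only (this is the rectified-stereo simplification, not the full SfM pipeline with bundle adjustment; the function name and numbers are hypothetical):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """For a rectified image pair, depth = focal_length * baseline / disparity.

    focal_px: focal length in pixels; baseline_m: camera separation in meters;
    disparity_px: horizontal offset of a matching point between the two images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

A larger parallax (disparity) thus means the matched point is closer to the cameras, which is the relative depth cue SfM exploits across many views.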

Optionally, in S103, obtaining first position information of a key point included in the image to be processed according to the image to be processed may include:

Key points are calibrated in the image to be processed.

And inputting the image to be processed after the key point is calibrated into a second neural network model to obtain first position information of the key point in the image to be processed. And the second neural network model is used for acquiring the position information of the key point in the image.

Specifically, the second neural network model may be trained in advance. The input of the second neural network model is the damage assessment image of the vehicle with the marked key points, and the output is the position information of the key points in the image.

The first position information of the key points is obtained through a neural network algorithm, which improves the processing efficiency of obtaining the position information.

And S104, determining a transformation relation between the image to be processed and the first fitting plane according to the key points and the first position information included in the image to be processed.

The first fitting plane is determined according to key points included in the image to be processed on the 3D model.

Specifically, the first fitting plane is a fitting plane determined by key points included in the image to be processed on the 3D model, and the three-dimensional curved surface on the 3D model can be mapped into a two-dimensional plane. Illustratively, as shown in fig. 4, the first fitting plane may be a fitting plane determined according to preset key points of the markers 18, 24, and 25. After the key points included in the image to be processed are obtained, the transformation relation between the image to be processed and the first fitting plane can be determined according to the first position information of the key points. In this way, points, lines or regions in the image to be processed can be mapped into the first fitted plane according to the transformation relation. Correspondingly, points, lines or regions in the first fitting plane may also be mapped into the image to be processed.

And S105, obtaining a projection area of the damage area in the first fitting plane according to the transformation relation.

And S106, measuring the projection area to obtain a measurement result.

The outer surface of a vehicle is an irregular curved surface, and it is difficult to accurately measure a damaged area on such a surface. The image obtained by photographing the vehicle is a two-dimensional plane image, so an accurate measurement of a damaged area on the curved surface cannot be obtained directly from the image to be processed. In this embodiment, by establishing a transformation relation between the image to be processed and the first fitting plane, the two-dimensional plane of the image to be processed, the 3D model of the vehicle, and the two-dimensional plane fitted to the local outer surface of the vehicle are associated. By projecting the damaged area of the image to be processed into the first fitting plane, a measurement of the damaged area on the vehicle body can be obtained in the plane fitted to the local outer surface of the vehicle. Quantitative calculation of the damaged area is thus realized, which improves the accuracy and efficiency of damage assessment while reducing its difficulty and labor cost.

In the present embodiment, the execution order of S102 to S104 is not limited.

Optionally, please refer to fig. 5. Fig. 5 is a flowchart of determining a conversion relationship between an image to be processed and a first fitting plane according to an embodiment of the present invention. As shown in fig. 5, in S104, determining a transformation relationship between the image to be processed and the first fitting plane according to the key points and the first position information included in the image to be processed may include:

s1041, determining a first fitting plane according to the key points included in the image to be processed.

And S1042, acquiring second position information of the key points included in the image to be processed in the first fitting plane.

And S1043, determining a transformation relation between the image to be processed and the first fitting plane according to the first position information and the second position information.

This is illustrated by way of example.

As shown in fig. 4, the identifiers of the key points included in the image to be processed are 18, 24 and 25, and their first position information is (x18-1, y18-1), (x24-1, y24-1) and (x25-1, y25-1) respectively. Assume that the second position information of the key points with identifiers 18, 24 and 25 in the first fitting plane is (x18-2, y18-2), (x24-2, y24-2) and (x25-2, y25-2) respectively. Then the transformation relation between the image to be processed and the first fitting plane can be determined from the point pairs {(x18-1, y18-1), (x18-2, y18-2)}, {(x24-1, y24-1), (x24-2, y24-2)} and {(x25-1, y25-1), (x25-2, y25-2)}, where each matching point pair is determined according to the identifier of the key point.

Optionally, in S1043, if the number of key points included in the image to be processed is 3, the transformation relation may be determined as an affine transformation matrix. If the number of key points included in the image to be processed is greater than 3, the transformation relation may be determined as a homography transformation matrix.
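As a minimal NumPy sketch of this step (the helper names `fit_affine` and `fit_homography` are illustrative, not part of the embodiment; an implementation could equally use OpenCV's `cv2.getAffineTransform` and `cv2.findHomography`):

```python
import numpy as np

def fit_affine(src, dst):
    """Affine transform (2x3) from >=3 matching point pairs, least squares."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])  # design rows [x, y, 1]
    M, _, _, _ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T  # dst ~ M @ [x, y, 1]

def fit_homography(src, dst):
    """Homography (3x3) from >=4 matching point pairs via the DLT method."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    H = Vt[-1].reshape(3, 3)  # null-space vector of the DLT system
    return H / H[2, 2]
```

For the three matched key points of fig. 4 the affine branch applies; with four or more matched key points the homography branch would be used instead.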

The following describes implementations of S1041 and S1042.

Optionally, in an implementation manner, in S1041, determining a first fitting plane according to a keypoint included in the image to be processed may include:

and acquiring a plurality of standard fitting planes of the 3D model, wherein the standard fitting planes are determined according to at least three preset key points on the 3D model.

And determining a first fitting plane in the plurality of standard fitting planes according to key points included in the image to be processed.

Specifically, in this implementation, once the 3D model of the vehicle is selected, the positions of the preset key points set on the 3D model are also fixed. Therefore, a plurality of standard fitting planes can be predetermined according to the preset key points on the 3D model. For example, as shown in fig. 2, the preset key points with identifiers 21-23 can determine one standard fitting plane, and the preset key points with identifiers 14-17 can determine another. Each standard fitting plane is generated by fitting at least 3 preset key points.

Correspondingly, S1042, acquiring second position information of the keypoint included in the image to be processed in the first fitting plane, may include:

and acquiring the position information of the preset key points included in the first fitting plane.

And obtaining second position information according to the key points included in the image to be processed and the position information of the preset key points included in the first fitting plane.

Specifically, a plurality of standard fitting planes may be predetermined according to preset key points on the 3D model. The position information of the preset key points included in each standard fitting plane in the standard fitting plane may be obtained in advance. Therefore, according to the key points included in the image to be processed and the position information of the preset key points in the first fitting plane obtained in advance, the second position information of the key points included in the image to be processed in the first fitting plane can be obtained.

Optionally, determining a first fitting plane in the plurality of standard fitting planes according to the keypoints included in the image to be processed may include:

and determining a first fitting plane according to the identification of the key points included in the image to be processed and the identification of the preset key points included in each standard fitting plane.

The implementation mode reduces the difficulty of determining the first fitting plane and acquiring the second position information.

Optionally, fig. 6 is a flowchart for acquiring multiple standard fitting planes of the 3D model according to the embodiment of the present invention. As shown in fig. 6, obtaining a plurality of standard fit planes for the 3D model may include:

s601, obtaining three-dimensional coordinates of a plurality of preset key points on the 3D model.

Specifically, preset key points are preset on a 3D model of the vehicle. The three-dimensional coordinates of the preset key points in the three-dimensional coordinate system can be obtained according to the 3D model. In this embodiment, the directions of the coordinate axes and the positions of the origins in the three-dimensional coordinate system are not limited. For example, as shown in FIG. 2, the origin of the three-dimensional coordinate system may be the projected point of the center point of the vehicle in a plane defined by the bottom surface of the wheel. The positive direction of the X axis points to the left side of the vehicle, the positive direction of the Y axis is vertical upward, and the positive direction of the Z axis points to the head of the vehicle. For another example, the origin of the three-dimensional coordinate system may be the center point of the vehicle.

S602, obtaining the distances among the preset key points according to the three-dimensional coordinates of the preset key points.

After the three-dimensional coordinates of the preset key points are obtained, the distance between any two preset key points can be obtained. The distance calculation method is not limited in this embodiment. For example, the Euclidean distance between two preset key points can be obtained by formula (1):

d(x, y) = sqrt( Σᵢ (xᵢ − yᵢ)² )    (1)

where x and y represent two preset key points, xᵢ represents the i-th three-dimensional coordinate of the preset key point x, and yᵢ represents the i-th three-dimensional coordinate of the preset key point y.
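Formula (1) can be sketched in NumPy as follows (the function name is illustrative):

```python
import numpy as np

def keypoint_distance(p, q):
    """Euclidean distance of formula (1) between two preset key points
    given their three-dimensional coordinates on the vehicle 3D model."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sqrt(np.sum((p - q) ** 2)))
```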

And S603, obtaining at least one key point group according to the distance.

The key point groups comprise at least three preset key points, and the union of at least one key point group comprises all the preset key points on the 3D model.

Specifically, each key point group may include at least 3 preset key points, and the union of all the key point groups includes all the preset key points. A given preset key point may belong to more than one key point group. For example, as shown in fig. 2, the preset key points with identifiers 19, 20, 14 and 18 may form one key point group, and the preset key points with identifiers 15, 16, 14 and 11 may form another.

It should be noted that, in this embodiment, the method for acquiring the key point group is not limited.

The K-nearest neighbor (kNN) algorithm is described as an example.

The k nearest neighbours of a sample are the k samples closest to it, and each sample can be represented by its k nearest neighbours. The core idea of the kNN algorithm is that the k nearest neighbours of a sample in feature space mostly belong to the same class. In this embodiment, the kNN algorithm may be adopted to search for the k preset key points closest to each preset key point. For example, as shown in fig. 2, for the preset key point with identifier 19, the kNN algorithm may yield the group of preset key points with identifiers 19, 20, 14 and 18.
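A brute-force sketch of this grouping step, assuming key points are given as 3D coordinates and identified by their array index (`knn_groups` is an illustrative name, not from the embodiment); the union of these groups trivially covers all key points, as S603 requires:

```python
import numpy as np

def knn_groups(points, k=3):
    """Form one key point group per point: the point plus its k nearest
    neighbours (brute-force kNN; identifiers are array indices)."""
    pts = np.asarray(points, float)
    # Pairwise Euclidean distance matrix between all preset key points.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    groups = []
    for i in range(len(pts)):
        group = np.argsort(d[i])[:k + 1]  # includes point i itself (distance 0)
        groups.append(sorted(group.tolist()))
    return groups
```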

And S604, for each key point group, if the preset key points included in the key point group are coplanar, fitting the preset key points included in the key point group to obtain a standard fitting plane.

Optionally, the preset keypoints included in the keypoint group are coplanar, and may include:

each preset key point included in the key point group has a distance from the first plane smaller than or equal to a preset threshold. The first plane may be a plane determined by at least three preset keypoints in the keypoint group.

It should be noted that, in this embodiment, a specific value of the preset threshold is not limited.

It should be noted that, the algorithm for obtaining the standard fitting plane according to the key point group is not limited in this embodiment.

For example, a Random Sample Consensus (RANSAC) algorithm may be used to obtain, for each coplanar key point group, the plane that best fits its key points; this plane is called a standard fitting plane.
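A self-contained sketch of the RANSAC plane fit together with the distance-to-plane coplanarity test described above; the helper names and the 0.05 threshold are illustrative, not taken from the embodiment:

```python
import numpy as np

def plane_from_points(p0, p1, p2):
    """Unit normal n and offset d of the plane through three points (n·x = d),
    or None if the points are (nearly) collinear."""
    n = np.cross(p1 - p0, p2 - p0)
    norm = np.linalg.norm(n)
    if norm < 1e-12:  # degenerate sample
        return None
    n = n / norm
    return n, float(n @ p0)

def ransac_plane(points, threshold=0.05, iterations=200, seed=0):
    """RANSAC plane fit for a key point group; a point counts as coplanar
    when its distance to the candidate plane is <= threshold."""
    pts = np.asarray(points, float)
    rng = np.random.default_rng(seed)
    best = (None, None, np.zeros(len(pts), dtype=bool))
    for _ in range(iterations):
        i, j, k = rng.choice(len(pts), size=3, replace=False)
        fit = plane_from_points(pts[i], pts[j], pts[k])
        if fit is None:
            continue
        n, d = fit
        inliers = np.abs(pts @ n - d) <= threshold
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best
```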

By adopting RANSAC algorithm, the robustness of the standard fitting plane is improved.

After the plurality of standard fitting planes of the 3D model are obtained through S601 to S604, the preset key points included in each standard fitting plane may be projected from the 3D model onto that plane, thereby obtaining the position information of those preset key points in the standard fitting plane.

Illustratively, fig. 7 is a schematic diagram of preset keypoints in a standard fitting plane according to an embodiment of the present invention. As shown in fig. 7, after the preset key points of the identifiers 13 to 21 are projected onto the standard fitting plane, a Principal Component Analysis (PCA) method may be adopted, and two-dimensional coordinates are retained to obtain coordinates of the preset key points of the identifiers 13 to 21 in the standard fitting plane. The coordinates may be in centimeters. For example, the coordinates of the preset keypoints of the identifier 13 in the standard fit plane may be labeled 13(4, -104).
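The projection-plus-PCA step of fig. 7 can be sketched as follows (illustrative helper; a library implementation such as scikit-learn's `PCA` would also serve). The two retained principal directions span the fitting plane, so in-plane distances between key points are preserved:

```python
import numpy as np

def project_to_plane_2d(points):
    """Project 3D key points onto their best-fit plane and keep the two
    principal in-plane coordinates (PCA via SVD of the centered cloud)."""
    pts = np.asarray(points, float)
    centered = pts - pts.mean(axis=0)
    # The first two right-singular vectors span the fitted plane.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:2].T
```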

Optionally, in another implementation manner, in S1041, determining a first fitting plane according to the keypoints included in the image to be processed may include:

Three-dimensional coordinates, in the 3D model, of the key points included in the image to be processed are obtained.

And obtaining the distance between the key points according to the three-dimensional coordinates of the key points included in the image to be processed in the 3D model.

And fitting the key points included in the image to be processed to obtain a first fitting plane.

Reference may be made to the above description of obtaining multiple standard fitting planes of a 3D model, which has similar principles and will not be described herein again.

Optionally, in S105, obtaining a projection region of the damaged region in the first fitting plane according to the transformation relationship may include:

and acquiring the contour points of the damaged area and third position information of the contour points in the image to be processed.

And obtaining fourth position information of the contour point in the first fitting plane according to the third position information and the transformation relation.

And determining the projection area according to the fourth position information.
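These steps can be sketched as a single mapping of the contour points through the transformation relation, here assumed to be a 3x3 homography matrix `H` (illustrative helper name):

```python
import numpy as np

def project_contour(contour_xy, H):
    """Map contour points of the damage region (third position information)
    into the first fitting plane (fourth position information) via a 3x3
    homography H."""
    pts = np.asarray(contour_xy, float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # de-homogenize
```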

Optionally, in S106, the measurement result includes at least one of a length, a width, and an area of the damaged region.

The length of the damage area is the length of the minimum circumscribed rectangle of the projection area.

The width of the damage region is the width of the minimum bounding rectangle of the projection region.

The area of the damage region is the area of the projection region.

Alternatively, the area of the projection region may be calculated by a triangular patch method: the projection region is divided into a plurality of triangular patches, and the sum of the areas of these patches is the total area of the projection region, which is taken as the area of the damaged region.
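The triangular patch method can be sketched as a fan triangulation of the projected contour (illustrative helper; assumes the projection region is a simple polygon with vertices in order):

```python
import numpy as np

def polygon_area(vertices):
    """Area of the projection region: split the polygon into triangular
    patches fanned from the first vertex and sum their signed areas."""
    v = np.asarray(vertices, float)
    total = 0.0
    for i in range(1, len(v) - 1):
        a, b = v[i] - v[0], v[i + 1] - v[0]
        total += 0.5 * (a[0] * b[1] - a[1] * b[0])  # signed triangle area
    return abs(total)
```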

For the length and width of the damaged area: illustratively, as shown in fig. 4, the rectangle 42 is the minimum bounding rectangle of the damaged area 41. Fig. 4 only shows a minimum bounding rectangle schematically; in the embodiment of the present application, the minimum bounding rectangle is actually that of the projection area in the first fitting plane.

Illustratively, according to the method for measuring the damaged area of the vehicle provided by the embodiment of the application, the damaged area 41 has a length of 3.446 decimeters, a width of 2.555 decimeters and an area of 6.95 square decimeters.

It can be seen that the present embodiment provides a method for measuring a damaged area of a vehicle, including: acquiring an image to be processed of a vehicle; obtaining a damaged area of the vehicle in the image to be processed according to the image to be processed; obtaining first position information of key points included in the image to be processed; determining a transformation relation between the image to be processed and a first fitting plane according to the key points included in the image to be processed and the first position information; obtaining a projection area of the damaged area in the first fitting plane according to the transformation relation; and measuring the projection area to obtain a measurement result. With this method, a fitting plane for measuring the damaged area can be obtained by planarly approximating the local outer surface of the vehicle. By acquiring the position information of the key points included in the image to be processed, the transformation relation between the image to be processed and the fitting plane determined by those key points can be obtained. Further, by projecting the damaged area of the image to be processed into the fitting plane, the measurement result of the damaged area on the vehicle body can be obtained in the plane fitted to the local outer surface of the vehicle. Quantitative calculation of the damaged area is thus realized, and the accuracy and efficiency of damage assessment are improved.

Fig. 8 is a schematic structural diagram of a vehicle damaged area measuring device according to an embodiment of the present invention. The device for measuring the damaged area of a vehicle provided in this embodiment is used for executing the method for measuring the damaged area of a vehicle provided in any one of the embodiments shown in fig. 1-7. As shown in fig. 8, the device for measuring a damaged area of a vehicle provided in this embodiment may include:

and the image acquisition module 81 is used for acquiring the image to be processed of the vehicle.

And the first region determining module 82 is used for obtaining the damaged region of the vehicle in the image to be processed according to the image to be processed.

And the position obtaining module 83 is configured to obtain, according to the image to be processed, first position information of a key point included in the image to be processed. The key points are points arranged at preset positions on the 3D model of the vehicle.

And the relationship obtaining module 84 is configured to determine a transformation relationship between the image to be processed and the first fitting plane according to the key points and the first position information included in the image to be processed. The first fitting plane is determined according to key points included in the image to be processed on the 3D model.

And a second region determining module 85, configured to obtain a projection region of the damage region in the first fitting plane according to the transformation relation.

And the measuring module 86 is used for measuring the projection area to obtain a measuring result.

Optionally, the relationship obtaining module 84 is specifically configured to:

and determining a first fitting plane according to key points included in the image to be processed.

And acquiring second position information of the key points included in the image to be processed in the first fitting plane.

And determining a transformation relation between the image to be processed and the first fitting plane according to the first position information and the second position information.

Optionally, the relationship obtaining module 84 is specifically configured to:

and acquiring a plurality of standard fitting planes of the 3D model, wherein the standard fitting planes are determined according to at least three preset key points on the 3D model.

And determining a first fitting plane in the plurality of standard fitting planes according to key points included in the image to be processed.

Optionally, the relationship obtaining module 84 is specifically configured to:

and acquiring three-dimensional coordinates of a plurality of preset key points on the 3D model.

And obtaining the distances among the preset key points according to the three-dimensional coordinates of the preset key points.

At least one keypoint group is obtained from the distance. The keypoint groups comprise at least three preset keypoints, and the union of at least one keypoint group comprises all the preset keypoints on the 3D model.

For each key point group, if the preset key points included in the key point group are coplanar, fitting the preset key points included in the key point group to obtain a standard fitting plane.

Optionally, the relationship obtaining module 84 is specifically configured to:

and determining a first fitting plane according to the identification of the key points included in the image to be processed and the identification of the preset key points included in each standard fitting plane.

Optionally, the second area determining module 85 is specifically configured to:

and acquiring the contour points of the damaged area and third position information of the contour points in the image to be processed.

And obtaining fourth position information of the contour point in the first fitting plane according to the third position information and the transformation relation.

And determining the projection area according to the fourth position information.

Optionally, the first region determining module 82 is specifically configured to:

and inputting the image to be processed into the first neural network model to obtain a damaged area. The first neural network model is used for acquiring a damaged area of the vehicle in the image.

Optionally, the position obtaining module 83 is specifically configured to:

Key points are calibrated in the image to be processed.

And inputting the image to be processed after the key point is calibrated into a second neural network model to obtain first position information of the key point in the image to be processed. The second neural network model is used for acquiring the position information of the key points in the image.

Optionally, the measurement result includes at least one of a length, a width, and an area of the damaged region.

Wherein, the length of the damage area is the length of the minimum circumscribed rectangle of the projection area.

The width of the damage area is the width of the minimum circumscribed rectangle of the projection area.

The area of the damage region is the area of the projection region.

The device for measuring the damaged area of the vehicle provided by the embodiment is used for executing the method for measuring the damaged area of the vehicle provided by any one of the embodiments shown in fig. 1-7. The specific implementation and technical effects are similar, and are not described herein again.

Fig. 9 is a schematic structural diagram of a vehicle damaged area measuring device according to a second embodiment of the present invention. As shown, the vehicle damage region measurement device includes a processor 91 and a memory 92. The memory 92 is configured to store instructions, and the processor 91 is configured to execute the instructions stored in the memory 92, so as to enable the vehicle damage area measurement apparatus to perform the vehicle damage area measurement method provided in any one of the embodiments shown in fig. 1 to 7. The specific implementation and technical effects are similar, and are not described herein again.

It should be noted that the division of the modules of the above apparatus is only a logical division, and the actual implementation may be wholly or partially integrated into one physical entity, or may be physically separated. And these modules can be realized in the form of software called by processing element; or may be implemented entirely in hardware; and part of the modules can be realized in the form of calling software by the processing element, and part of the modules can be realized in the form of hardware. For example, the determining module may be a processing element separately set up, or may be implemented by being integrated in a chip of the apparatus, or may be stored in a memory of the apparatus in the form of program code, and the function of the determining module is called and executed by a processing element of the apparatus. Other modules are implemented similarly. In addition, all or part of the modules can be integrated together or can be independently realized. The processing element described herein may be an integrated circuit having signal processing capabilities. In implementation, each step of the above method or each module above may be implemented by an integrated logic circuit of hardware in a processor element or an instruction in the form of software.

For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), among others. For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor that can call program code. As another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).

In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may take, in whole or in part, the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the application occur in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.

Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.

Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the embodiments of the present invention, and are not limited thereto; although embodiments of the present invention have been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
