Digital photogrammetry method, electronic equipment and system
1. A digital photogrammetry method, comprising:
acquiring a first image and a second image, wherein the first image and the second image both comprise a target object and a first object with known actual size; the shooting positions of the first image and the second image are different, and the shooting directions of the first image and the second image are different;
constructing a digital three-dimensional space according to the first image and the second image;
determining the distance between the two end points of the target object based on the digital three-dimensional space and a scaling of the dimensions of the digital three-dimensional space to the real three-dimensional space, wherein the scaling is related to the position of the first object in the first image, the position of the first object in the second image, and the actual dimension S1 of the first object.
2. The method of claim 1, further comprising:
determining a sky direction of the digital three-dimensional space according to the ground of the digital three-dimensional space and the photographing center of the first image, or according to the ground of the digital three-dimensional space and the photographing center of the second image; the ground of the digital three-dimensional space is a plane with the most densely distributed discrete points in the digital three-dimensional space;
determining a height of the target object based on the digital three-dimensional space, the scaling, a ground of the digital three-dimensional space, and a sky direction of the digital three-dimensional space.
3. The method according to claim 1 or 2, wherein before said determining the distance between the two end points of the target object according to the digital three-dimensional space and the scaling of the dimensions of the digital three-dimensional space to the real three-dimensional space, the method further comprises:
acquiring the position of the first object in the first image, the position of the first object in the second image, and the actual size S1 of the first object;
calculating a dimension S2 of the first object in the digital three-dimensional space based on the position of the first object in the first image and the position of the first object in the second image;
calculating the scaling of the dimensions of the digital three-dimensional space to the real three-dimensional space based on the actual dimension S1 of the first object and the dimension S2 of the first object in the digital three-dimensional space.
4. The method of claim 3, wherein the acquiring the position of the first object in the first image, the position of the first object in the second image, and the actual size S1 of the first object comprises:
receiving input of the position of the first object in the first image, the position of the first object in the second image, and the actual size S1 of the first object;
alternatively, identifying the position of the first object in the first image and the position of the first object in the second image, and looking up the actual size S1 of the first object.
5. The method of claim 4, wherein the first object is a marker post designed in segments, the first object comprising at least a first black segment, a first white segment, a first color segment, a second white segment, a second color segment, a third white segment, and a second black segment, arranged in sequence; wherein the colors of the first color segment and the second color segment are a pair of complementary colors;
the actual dimension S1 of the first object is the length between two end points of the first object, one end point of the first object is located at the boundary of the first black segment and the first white segment, and the other end point of the first object is located at the boundary of the third white segment and the second black segment;
the positions of the first object in the first image are the positions of two end points of the first object in the first image; the positions of the first object in the second image are the positions of two end points of the first object in the second image.
6. The method of claim 5, wherein the identifying the location of the first object in the first image and the location of the first object in the second image comprises:
identifying the first and second color segments in the first image and a first region in the first image having a straight line feature; identifying the first and second color segments in the second image and a second region in the second image having a straight line feature;
automatically determining the positions of the two end points of the first object in the first image according to the first color segment and the second color segment in the first image, the first region in the first image, and the position relation of each color segment in the first object; and automatically determining the positions of the two end points of the first object in the second image according to the first color segment and the second color segment in the second image, the second region in the second image, and the position relation of each color segment in the first object.
7. The method according to claim 5 or 6, wherein the first color segment is a red segment and the second color segment is a cyan segment;
or, the first color segment is a magenta segment, and the second color segment is a green segment.
8. The method of any one of claims 1-7, wherein determining the distance between the two endpoints of the target object based on the digital three-dimensional space and a scaling of the dimensions of the digital three-dimensional space to a real three-dimensional space comprises:
determining a distance between two end points of the target object according to the digital three-dimensional space, the scaling, the positions of the two end points of the target object in the first image, and the positions of the two end points of the target object in the second image.
9. The method of any one of claims 2-8, wherein said determining the height of the target object based on the digital three-dimensional space, the scaling, a ground of the digital three-dimensional space, and a sky direction of the digital three-dimensional space comprises:
determining the height of the target object based on the digital three-dimensional space, the scaling, a ground of the digital three-dimensional space, a sky direction of the digital three-dimensional space, a position of a top of the target object in the first image, and a position of the top of the target object in the second image.
10. A measuring device, comprising:
an acquisition unit configured to acquire a first image and a second image, both of which include a target object and a first object of which actual size is known; the shooting positions of the first image and the second image are different, and the shooting directions of the first image and the second image are different;
a construction unit configured to construct a digital three-dimensional space according to the first image and the second image;
a determining unit configured to determine a distance between two end points of the target object based on the digital three-dimensional space and a scaling of the dimensions of the digital three-dimensional space to a real three-dimensional space, wherein the scaling is related to the position of the first object in the first image, the position of the first object in the second image, and the actual dimension S1 of the first object.
11. The measurement device according to claim 10, wherein the determination unit is further configured to:
determining a sky direction of the digital three-dimensional space according to the ground of the digital three-dimensional space and the photographing center of the first image, or according to the ground of the digital three-dimensional space and the photographing center of the second image; the ground of the digital three-dimensional space is a plane with the most densely distributed discrete points in the digital three-dimensional space;
determining a height of the target object based on the digital three-dimensional space, the scaling, a ground of the digital three-dimensional space, and a sky direction of the digital three-dimensional space.
12. The measuring device according to claim 10 or 11,
before the determination unit determines the distance between the two end points of the target object based on the digital three-dimensional space and the scaling of the dimensions of the digital three-dimensional space to the real three-dimensional space,
the acquiring unit is further configured to acquire the position of the first object in the first image, the position of the first object in the second image, and the actual size S1 of the first object;
the determining unit is further configured to calculate a size S2 of the first object in the digital three-dimensional space according to the position of the first object in the first image and the position of the first object in the second image, and to calculate the scaling of the dimensions of the digital three-dimensional space to the real three-dimensional space based on the actual size S1 of the first object and the size S2 of the first object in the digital three-dimensional space.
13. The measurement apparatus according to claim 12, wherein in the process of acquiring the position of the first object in the first image, the position of the first object in the second image, and the actual size S1 of the first object, the acquiring unit is specifically configured to:
receiving input of the position of the first object in the first image, the position of the first object in the second image, and the actual size S1 of the first object;
alternatively, identifying the position of the first object in the first image and the position of the first object in the second image, and looking up the actual size S1 of the first object.
14. The measuring device according to claim 13, wherein the first object is a marker post designed in segments, the first object comprising at least a first black segment, a first white segment, a first color segment, a second white segment, a second color segment, a third white segment, and a second black segment, arranged in sequence; wherein the colors of the first color segment and the second color segment are a pair of complementary colors;
the actual dimension S1 of the first object is the length between two end points of the first object, one end point of the first object is located at the boundary of the first black segment and the first white segment, and the other end point of the first object is located at the boundary of the third white segment and the second black segment;
the positions of the first object in the first image are the positions of two end points of the first object in the first image; the positions of the first object in the second image are the positions of two end points of the first object in the second image.
15. The measurement device according to claim 14, wherein in the process of the acquiring unit identifying the position of the first object in the first image and the position of the first object in the second image, the acquiring unit is further specifically configured to:
identifying the first and second color segments in the first image and a first region in the first image having a straight line feature; identifying the first and second color segments in the second image and a second region in the second image having a straight line feature;
automatically determining the positions of the two end points of the first object in the first image according to the first color segment and the second color segment in the first image, the first region in the first image, and the position relation of each color segment in the first object; and automatically determining the positions of the two end points of the first object in the second image according to the first color segment and the second color segment in the second image, the second region in the second image, and the position relation of each color segment in the first object.
16. The measuring device according to claim 14 or 15, wherein the first color segment is a red segment and the second color segment is a cyan segment;
or, the first color segment is a magenta segment, and the second color segment is a green segment.
17. The measurement device according to any one of claims 10-16, wherein in the process of the determination unit determining the distance between the two end points of the target object based on the digital three-dimensional space and the scaling of the dimensions of the digital three-dimensional space to the real three-dimensional space, the determination unit is specifically configured to:
determining a distance between two end points of the target object according to the digital three-dimensional space, the scaling, the positions of the two end points of the target object in the first image, and the positions of the two end points of the target object in the second image.
18. The measurement device according to any one of claims 11-17, wherein in the process of the determination unit determining the height of the target object from the digital three-dimensional space, the scaling, the ground of the digital three-dimensional space, and the sky direction of the digital three-dimensional space, the determination unit is specifically configured to:
determining the height of the target object based on the digital three-dimensional space, the scaling, a ground of the digital three-dimensional space, a sky direction of the digital three-dimensional space, a position of a top of the target object in the first image, and a position of the top of the target object in the second image.
19. A server, comprising one or more processors, one or more memories, and one or more communication interfaces, the one or more memories and the one or more communication interfaces being coupled to the one or more processors, the one or more memories being configured to store computer program code, the computer program code comprising computer instructions that, when read by the one or more processors from the one or more memories, cause the server to perform the digital photogrammetry method of any one of claims 1-9.
20. A computer storage medium comprising computer instructions which, when run on a server, cause the server to perform the digital photogrammetry method of any of claims 1-9.
Background
Survey work for communication base stations, including measurement of indoor and outdoor equipment sizes, heights, deployment spacing, and the like, is an important prerequisite for site design, equipment deployment, material delivery, risk detection, and other work on communication base stations. The traditional survey means are outdoor ruler measurement and indoor hand-drawing, which suffer from problems such as a large workload, poor reliability, and high danger. For this reason, operators, tower vendors, and the like increasingly use digital photogrammetry methods to complete survey work for communication base stations.
Digital photogrammetry processes digital images or digitized images with a computer and uses computer vision to replace the stereoscopic measurement and interpretation performed by human eyes, so as to automatically extract geometric and physical information.
Measurement schemes based on ground sequence images are commonly used. A surveying person takes a large number of photos or videos containing control points with a mobile phone, a camera, or the like, and then performs relevant data processing on the photos or videos. It should be noted that, before taking the photos or videos, the surveying person needs to lay out a plurality of targets representing control points following strict distance and orientation relationships. The target placement process is complicated and unreliable. In addition, arranging a plurality of targets requires a certain amount of space and is difficult to implement in a narrow space, on a sloping roof, and in other scenes.
Therefore, a reliable digital photogrammetry method which is convenient to operate and suitable for wider measurement scenes is needed.
Disclosure of Invention
The digital photogrammetry method, electronic equipment, and system provided in the embodiments of the present application can reduce the operation complexity of image acquisition, expand the applicable scenes of digital photogrammetry, and increase the reliability of measurement.
In order to achieve the above object, the embodiments of the present application provide the following technical solutions:
in a first aspect, the present application provides a method, including: acquiring a first image and a second image, wherein the first image and the second image both comprise a target object and a first object with a known actual size; the shooting positions of the first image and the second image are different, and the shooting directions of the first image and the second image are different; constructing a digital three-dimensional space according to the first image and the second image; and determining the distance between the two end points of the target object according to the digital three-dimensional space and a scaling of the dimensions of the digital three-dimensional space to the real three-dimensional space, wherein the scaling is related to the position of the first object in the first image, the position of the first object in the second image, and the actual dimension S1 of the first object.
Wherein the digital three-dimensional space is presented in the form of a three-dimensional point cloud. The three-dimensional point cloud is a set of point data of the surfaces of objects in a three-dimensional space, and can reflect the contours of the object surfaces. The two end points of the target object refer to the two end points of the distance to be measured. It should be noted that there may be one target object, in which case the distance between the two end points of the target object may be the height, length, width, or the like of the target object; or there may be two target objects, in which case the distance between the two end points may be the distance between the two target objects, and so on.
Compared with the prior art in which a target with a plurality of control points is required to be arranged by a measurer according to a strict distance relation and a strict azimuth relation in advance, and a large number of pictures or videos containing the target are shot, the operation of shooting the images twice in different shooting positions and different shooting directions is more convenient. In addition, the size scaling of the digital three-dimensional space is determined by using the actual size of the first object, so that the reliability of measurement is improved. Furthermore, since images can be shot at different shooting positions and in different shooting directions in scenes such as a narrow machine room and an inclined roof, the measurement method provided by the embodiment of the application can be applied to wider measurement scenes.
In a possible implementation, the method further includes: determining the sky direction of the digital three-dimensional space according to the ground of the digital three-dimensional space and the photographing center of the first image, or according to the ground of the digital three-dimensional space and the photographing center of the second image; the ground of the digital three-dimensional space is a plane with the most densely distributed discrete points in the digital three-dimensional space; and determining the height of the target object according to the digital three-dimensional space, the scaling, the ground of the digital three-dimensional space, and the sky direction of the digital three-dimensional space.
The plane with the most densely distributed three-dimensional discrete points is the plane containing the largest number of point data per unit space.
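By way of a non-limiting illustration (not part of the claimed method), the densest plane may be searched for with a simple RANSAC-style procedure over the three-dimensional point cloud; the array layout, tolerance value, and function name below are assumptions made only for this sketch.

import numpy as np

def densest_plane(points, n_iters=500, tol=0.05, seed=None):
    # points: (N, 3) array of discrete points of the digital three-dimensional space.
    # Returns (normal, d) of the plane n.x + d = 0 supported by the most points,
    # i.e. the plane with the largest number of point data per unit space.
    rng = np.random.default_rng(seed)
    best = (None, None, -1)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample, skip
            continue
        normal = normal / norm
        d = -normal.dot(p0)
        count = int(np.count_nonzero(np.abs(points @ normal + d) < tol))
        if count > best[2]:
            best = (normal, d, count)
    return best[0], best[1]

# The sky direction can then be taken as the plane normal oriented toward the
# photographing center C of the first (or second) image:
#   sky = normal if normal.dot(C) + d > 0 else -normal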
The height measurement implementation enables the embodiment of the present application to be applicable to more measurement scenes. For example, in some scenes where the target object is high or the bottom of the target object is occluded, the height of the target object can still be calculated from the photographed top of the target object. In addition, in the scheme provided by the embodiment of the present application, the height of the target object can be measured by marking only the top position in the first image and the second image; the bottom position does not need to be marked, which simplifies the operation of the measuring person.
In one possible implementation, before determining the distance between the two end points of the target object according to the digital three-dimensional space and the scaling of the dimensions of the digital three-dimensional space to the real three-dimensional space, the method further comprises: acquiring the position of the first object in the first image, the position of the first object in the second image, and the actual size S1 of the first object; calculating a dimension S2 of the first object in the digital three-dimensional space based on the position of the first object in the first image and the position of the first object in the second image; and calculating the scaling of the dimensions of the digital three-dimensional space to the real three-dimensional space based on the actual dimension S1 of the first object and the dimension S2 of the first object in the digital three-dimensional space.
The position of the first object in the first image may be a coordinate of an image point of the first object in the first image. Similarly, the position of the first object in the second image may be the coordinates of the image point of the first object in the second image.
Thus, a method is provided that calculates the size S2 of the first object in the digital three-dimensional space and derives the scaling from the known actual size S1 of the first object. Because the actual size of the first object is accurate data, the accuracy of the calculated scaling is improved, which in turn improves the reliability of measurement.
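A minimal sketch of this calculation is given below, assuming that the two end points of the first object have already been reconstructed as 3D coordinates in the digital three-dimensional space (for example by triangulating their positions in the first and second images); the function name is illustrative only.

import numpy as np

def scaling_from_first_object(endpoint_a, endpoint_b, actual_size_s1):
    # endpoint_a, endpoint_b: 3D coordinates of the two end points of the first
    # object in the digital three-dimensional space.
    # actual_size_s1: known real-world length S1 of the first object.
    s2 = np.linalg.norm(np.asarray(endpoint_a, float) - np.asarray(endpoint_b, float))
    return actual_size_s1 / s2               # scaling of digital size to real size

# Any digital-space distance can then be converted to a real distance, e.g. for
# the two end points of the target object:
#   real_distance = scale * np.linalg.norm(target_a - target_b)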
In one possible implementation, the acquiring the position of the first object in the first image, the position of the first object in the second image, and the actual size S1 of the first object includes: receiving input of the position of the first object in the first image, the position of the first object in the second image, and the actual size S1 of the first object; alternatively, identifying the position of the first object in the first image and the position of the first object in the second image, and looking up the actual size S1 of the first object.
Thus, a method is provided in which a measuring person can manually mark the positions of the two end points of the first object and enter the actual size of the first object, and a method is also provided in which the server automatically identifies the positions of the two end points of the first object and looks up the actual size of the first object. These options enrich the ways of acquiring the positions of the two end points of the first object and the actual size of the first object.
In a possible implementation manner, the first object is a marker post designed in segments, and the first object at least includes a first black segment, a first white segment, a first color segment, a second white segment, a second color segment, a third white segment, and a second black segment, which are sequentially arranged; wherein the color of the first color segment and the color of the second color segment are a pair of complementary colors; the actual dimension S1 of the first object is the length between two end points of the first object, one end point of the first object is located at the boundary of the first black segment and the first white segment, and the other end point of the first object is located at the boundary of the third white segment and the second black segment; the positions of the first object in the first image are the positions of two end points of the first object in the first image; the positions of the first object in the second image are the positions of the two end points of the first object in the second image.
Therefore, the specially designed marker post can be used as the first object, which makes it convenient for the server to automatically recognize the two end points of the first object and simplifies the operation of the measuring person.
In one possible implementation, identifying the location of the first object in the first image and the location of the first object in the second image includes: identifying a first color segment and a second color segment in the first image, and a first region in the first image having a straight line feature; identifying a first color segment and a second color segment in the second image, and a second region having a straight line feature in the second image; automatically determining the positions of the two end points of the first object in the first image according to the first color segment and the second color segment in the first image, the first region with the straight line feature in the first image, and the position relation of each color segment in the first object; and automatically determining the positions of the two end points of the first object in the second image according to the first color segment and the second color segment in the second image, the second region with the straight line feature in the second image, and the position relation of each color segment in the first object. Thus, a method of identifying the two end points of the first object is provided.
In one particular implementation, a filter may be employed to determine the regions having straight line features from the first image and the second image. The filter may be the real part of a two-dimensional Gabor function.
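A minimal sketch of such a filter is given below; it builds the real part of a two-dimensional Gabor kernel and keeps the maximum response over several orientations, so that elongated straight-line structures (such as the edges of the marker post) produce high values. The parameter values are illustrative assumptions, not values taken from this application.

import numpy as np
from scipy.signal import convolve2d

def gabor_real_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5):
    # Real part of the two-dimensional Gabor function, oriented at angle theta (radians).
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * x_t / lambd)

def straight_line_response(gray_image, orientations=8):
    # Maximum filter response over several orientations; large values indicate
    # regions with straight line features.
    responses = [np.abs(convolve2d(gray_image,
                                   gabor_real_kernel(theta=np.pi * k / orientations),
                                   mode="same", boundary="symm"))
                 for k in range(orientations)]
    return np.max(responses, axis=0)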
In one possible implementation, the first color segment is a red segment and the second color segment is a cyan segment; alternatively, the first color segment is a magenta segment and the second color segment is a green segment.
It should be noted that when the shooting light is insufficient, blue and black are not easily distinguished in the captured image, and when the shooting light is too bright, yellow and white are not easily distinguished. Therefore, neither the first color segment nor the second color segment is blue or yellow.
In one possible implementation, calculating a scaling ratio of the size of the digital three-dimensional space to the real three-dimensional space according to the position of the first object in the first image, the position of the first object in the second image, and the actual size S1 of the first object includes: calculating a dimension S2 of the first object in the digital three-dimensional space based on the position of the first object in the first image and the position of the first object in the second image; the scaling of the dimensions of the digital three-dimensional space to the real three-dimensional space is calculated based on the actual dimension of the first object S1 and the dimension of the first object in the digital three-dimensional space S2. Thus, a method is provided that specifically calculates the scaling of the dimensions of the digital three-dimensional space to the real three-dimensional space.
In one possible implementation, determining a distance between two end points of the target object according to the digital three-dimensional space and a scaling ratio of the size of the digital three-dimensional space to the size of the real three-dimensional space includes: determining the distance between the two end points of the target object according to the digital three-dimensional space, the scaling, the positions of the two end points of the target object in the first image and the positions of the two end points of the target object in the second image.
In one possible implementation, determining the height of the target object according to the digital three-dimensional space, the scaling, the ground of the digital three-dimensional space, and the sky direction of the digital three-dimensional space includes: determining the height of the target object according to the digital three-dimensional space, the scaling, the ground of the digital three-dimensional space, the sky direction of the digital three-dimensional space, the position of the top end of the target object in the first image, and the position of the top end of the target object in the second image.
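As an illustrative sketch only, once the top of the target object has been triangulated into the digital three-dimensional space, its height can be obtained as the signed distance from that point to the ground plane, measured along the sky direction and converted to real units with the scaling; the plane representation and function name below are assumptions made for the example.

import numpy as np

def target_height(top_point, ground_normal, ground_d, sky_direction, scaling):
    # top_point: 3D coordinates of the top of the target object in the digital space.
    # ground_normal, ground_d: ground plane n.x + d = 0 in the digital space.
    # sky_direction: vector pointing from the ground toward the sky.
    # scaling: digital-to-real scaling derived from the first object.
    n = np.asarray(ground_normal, float)
    n = n / np.linalg.norm(n)
    d = ground_d
    if np.dot(n, sky_direction) < 0:         # orient the normal toward the sky
        n, d = -n, -d
    digital_height = float(np.dot(n, top_point) + d)
    return scaling * digital_height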
In a second aspect, a measuring apparatus is provided, which includes: an acquisition unit configured to acquire a first image and a second image, both of which include a target object and a first object of which the actual size is known; the shooting positions of the first image and the second image are different, and the shooting directions of the first image and the second image are different; a construction unit configured to construct a digital three-dimensional space according to the first image and the second image; and a determining unit configured to determine a distance between the two end points of the target object based on the digital three-dimensional space and a scaling of the dimensions of the digital three-dimensional space to the real three-dimensional space, wherein the scaling is related to the position of the first object in the first image, the position of the first object in the second image, and the actual dimension S1 of the first object.
In a possible implementation manner, the determining unit is further configured to: determining the sky direction of the digital three-dimensional space according to the ground of the digital three-dimensional space and the photographing center of the first image or according to the ground of the digital three-dimensional space and the photographing center of the second image; the ground of the digital three-dimensional space is a plane with the most densely distributed discrete points in the digital three-dimensional space; and determining the height of the target object according to the digital three-dimensional space, the scaling, the ground of the digital three-dimensional space and the sky direction of the digital three-dimensional space.
In one possible implementation, before the determining unit determines the distance between the two end points of the target object according to the digital three-dimensional space and the scaling of the dimensions of the digital three-dimensional space to the real three-dimensional space, the acquiring unit is further configured to acquire the position of the first object in the first image, the position of the first object in the second image, and the actual size S1 of the first object; and the determining unit is further configured to calculate a size S2 of the first object in the digital three-dimensional space according to the position of the first object in the first image and the position of the first object in the second image, and to calculate the scaling of the dimensions of the digital three-dimensional space to the real three-dimensional space based on the actual size S1 of the first object and the size S2 of the first object in the digital three-dimensional space.
In one possible implementation manner, in the process of acquiring the position of the first object in the first image, the position of the first object in the second image, and the actual size S1 of the first object, the acquiring unit is specifically configured to: receiving input of the position of the first object in the first image, the position of the first object in the second image, and the actual size S1 of the first object; alternatively, identifying the position of the first object in the first image and the position of the first object in the second image, and looking up the actual size S1 of the first object.
In a possible implementation manner, the first object is a marker post designed in segments, and the first object at least includes a first black segment, a first white segment, a first color segment, a second white segment, a second color segment, a third white segment, and a second black segment, which are sequentially arranged; wherein the color of the first color segment and the color of the second color segment are a pair of complementary colors; the actual dimension S1 of the first object is the length between two end points of the first object, one end point of the first object is located at the boundary of the first black segment and the first white segment, and the other end point of the first object is located at the boundary of the third white segment and the second black segment; the positions of the first object in the first image are the positions of two end points of the first object in the first image; the positions of the first object in the second image are the positions of the two end points of the first object in the second image.
In a possible implementation manner, in the process that the obtaining unit identifies the position of the first object in the first image and the position of the first object in the second image, the obtaining unit is further specifically configured to: identifying a first color segment and a second color segment in the first image, and a first region in the first image having a straight line feature; identifying a first color segment and a second color segment in the second image, and a second region having a straight line feature in the second image; automatically determining the positions of the two end points of the first object in the first image according to the first color segment and the second color segment in the first image, the first region in the first image, and the position relation of each color segment in the first object; and automatically determining the positions of the two end points of the first object in the second image according to the first color segment and the second color segment in the second image, the second region in the second image, and the position relation of each color segment in the first object.
In one possible implementation, the first color segment is a red segment and the second color segment is a cyan segment; alternatively, the first color segment is a magenta segment and the second color segment is a green segment.
In one possible implementation manner, in the process that the determining unit calculates the scaling ratio of the size of the digital three-dimensional space to the size of the real three-dimensional space according to the position of the first object in the first image, the position of the first object in the second image, and the actual size S1 of the first object, the determining unit is specifically configured to: calculating a dimension S2 of the first object in the digital three-dimensional space based on the position of the first object in the first image and the position of the first object in the second image; and calculating the scaling of the dimensions of the digital three-dimensional space to the real three-dimensional space based on the actual dimension S1 of the first object and the dimension S2 of the first object in the digital three-dimensional space.
In a possible implementation manner, in the process that the determining unit determines the distance between the two end points of the target object according to the digital three-dimensional space and the scaling ratio of the sizes of the digital three-dimensional space and the real three-dimensional space, the determining unit is specifically configured to: determining the distance between the two end points of the target object according to the digital three-dimensional space, the scaling, the positions of the two end points of the target object in the first image and the positions of the two end points of the target object in the second image.
In one example, the positions of the two end points of the target object in the first image and the positions of the two end points of the target object in the second image, which are manually marked by the measuring person, can be received, and the distance between the two end points of the target object can be calculated according to the position information, the digital three-dimensional space, and the scaling.
It should be noted that there may be one target object, in which case the distance between the two end points of the target object may be the height, length, width, or the like of the target object; or there may be two target objects, in which case the distance between the two end points may be the distance between the two target objects, and so on. For example, the measurement method can be used in the digital survey of telecommunication base stations to acquire information such as equipment size, cable length, and installation spacing. It can also be used in other engineering measurement or in daily life, such as measuring the distance between buildings.
In one possible implementation manner, in the process that the determining unit determines the height of the target object according to the digital three-dimensional space, the scaling ratio, the ground of the digital three-dimensional space, and the sky direction of the digital three-dimensional space, the determining unit is specifically configured to: determining the height of the target object according to the digital three-dimensional space, the scaling, the ground of the digital three-dimensional space, the sky direction of the digital three-dimensional space, the position of the top end of the target object in the first image, and the position of the top end of the target object in the second image.
In one example, the position of the top end of the target object in the first image and the position of the top end of the target object in the second image, which are manually marked by the measuring person, can be received, and the height of the target object can be calculated according to the position information, the digital three-dimensional space, and the ground and the sky direction of the digital three-dimensional space. The measuring method provided by the embodiment of the present application can be applied to a telecommunication base station digital surveying scene to obtain the heights of a distant high tower and of the various devices on the tower. It can also be applied to other engineering measurement or daily life, such as building height measurement.
A third aspect provides a server comprising one or more processors, one or more memories, and one or more communication interfaces, the one or more memories, the one or more communication interfaces being coupled with the one or more processors, the one or more memories being configured to store computer program code, the computer program code comprising computer instructions that, when read by the one or more processors from the one or more memories, cause the server to perform the method as described in the above aspects and any one of the possible implementations.
A fourth aspect provides a chip system comprising a processor, which when executing instructions performs the method as described in the above aspects and any one of the possible implementations thereof.
In a fifth aspect, a computer storage medium is provided, comprising computer instructions, which, when executed on a server, cause the server to perform the method as described in the above aspect and any one of its possible implementations.
A sixth aspect provides a computer program product for causing a computer to perform the method as described in the above aspects and any one of the possible implementations when the computer program product runs on the computer.
Drawings
Fig. 1 is a schematic structural diagram of a communication system according to an embodiment of the present application;
fig. 2A is a schematic diagram of an image acquisition method according to an embodiment of the present disclosure;
fig. 2B is a schematic diagram of another image acquisition method provided in the embodiment of the present application;
fig. 2C is a schematic diagram of another image acquisition method provided in the embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 5A is a schematic flowchart of a digital photogrammetry method according to an embodiment of the present application;
fig. 5B is a schematic diagram illustrating a method for calculating a distance between two end points of a target object according to an embodiment of the present disclosure;
fig. 6A-6H are schematic user interface diagrams of some electronic devices provided by embodiments of the present application;
FIG. 7A is a schematic flow chart illustrating another digital photogrammetry method according to an embodiment of the present application;
FIG. 7B is a schematic diagram of a method for calculating a height of a target object according to an embodiment of the present disclosure;
FIG. 8 is a schematic view of a post according to an embodiment of the present disclosure;
FIG. 9 is a schematic view of a color wheel according to an embodiment of the present application;
FIG. 10 is a schematic view of another post provided in accordance with an embodiment of the present application;
fig. 11A to 11D are schematic diagrams illustrating a method for identifying a post according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of a chip system according to an embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of an apparatus according to an embodiment of the present disclosure.
Detailed Description
In the description of the embodiments of the present application, "/" means "or" unless otherwise specified; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects, and means that there may be three relationships; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone.
In the following, the terms "first", "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature.
In the description of the embodiments of the present application, "a plurality" means two or more unless otherwise specified. In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
Fig. 1 shows a communication system according to an embodiment of the present application. The communication system comprises a first electronic device 100, a server 200, and a second electronic device 300. In some examples, the second electronic device 300 may be the same device as the first electronic device 100. In other examples, the first electronic device 100, the server 200, and the second electronic device 300 may be the same device. That is, all the steps in the embodiment of the present application are performed by one device, such as a terminal.
The first electronic device 100 is a device with a camera, and can be used to capture an image of a target. For example, the first electronic device 100 may be a mobile phone, a tablet computer, a camera, a wearable electronic device, and the like, and the specific form of the first electronic device 100 is not particularly limited in this application.
Specifically, the surveying person may use the first electronic device 100 to take a first image and a second image in different shooting directions at different shooting positions, wherein the first image and the second image each include the target object. The shooting position may be the position of the optical center (or photographing center) of the camera of the first electronic device 100 when the image is shot. Capturing the first image and the second image in different shooting directions means that different shooting points are selected around the subject (for example, the target object) to capture the first image and the second image, and the two shooting points are not on the same straight line with the target object. Under the condition that the shooting distance and the shooting height are not changed, different side images of the target object can be captured in different shooting directions. Alternatively, taking the first image and the second image in different shooting directions can be understood as follows: the lines connecting a point Q on the target object to the shooting centers of the two images form an included angle, and the angle is neither zero nor 180 degrees. The point Q on the target object may be any point on the target object, for example, either of the two end points of the target object, a vertex of the target object, or the like.
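A minimal numeric check of this condition is sketched below, assuming the point Q and the two shooting centers are available as 3D coordinates; the function name is illustrative only.

import numpy as np

def included_angle_deg(q, p1, p2):
    # Angle (in degrees) at point Q between the lines Q-P1 and Q-P2, where P1 and
    # P2 are the shooting centers of the two images; a usable pair of images
    # requires this angle to be neither 0 nor 180 degrees.
    v1 = np.asarray(p1, float) - np.asarray(q, float)
    v2 = np.asarray(p2, float) - np.asarray(q, float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))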
It should be noted that the first image and the second image, taken at different shooting positions and in different shooting directions, may form a stereo pair from which a digital three-dimensional space is subsequently constructed, and the distance between the two end points of the target object and the height of the target object are calculated based on the principle of triangulation.
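For illustration only, a standard linear (DLT) triangulation of one image point observed in both images is sketched below, assuming that the 3x4 projection matrices of the two images are known (for example, recovered while constructing the digital three-dimensional space); it is not asserted that this is the exact reconstruction used in this application.

import numpy as np

def triangulate_point(proj1, proj2, uv1, uv2):
    # proj1, proj2: 3x4 projection matrices of the first and second images.
    # uv1, uv2: pixel coordinates (u, v) of the same point in each image.
    # Returns the corresponding 3D point in the digital three-dimensional space.
    u1, v1 = uv1
    u2, v2 = uv2
    a = np.vstack([u1 * proj1[2] - proj1[0],
                   v1 * proj1[2] - proj1[1],
                   u2 * proj2[2] - proj2[0],
                   v2 * proj2[2] - proj2[1]])
    _, _, vt = np.linalg.svd(a)
    x = vt[-1]
    return x[:3] / x[3]                      # de-homogenize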
For example, as shown in fig. 2A, a measuring person may take a first image of the object 21 to be measured at a certain position with the first electronic device 100. When the first image is captured, the camera of the first electronic device 100 is located at the first position P1. Then, the measuring person moves to another position and takes a second image there. When the second image is captured, the camera of the first electronic device 100 is located at the second position P2. The lines connecting the point Q1 on the object to be measured to the point P1 and to the point P2 form an included angle α1, where the included angle α1 is neither zero nor 180 degrees. The measurement method shown in fig. 2A may be used in outdoor measurement scenes. For example, a measurer can face a target object (such as a house or an iron tower), move left and right to take two pictures, and measure the target object at a distance, where the distance between the two shooting positions is several meters to tens of meters.
For another example, as shown in fig. 2B, the measuring person may hold the first electronic device 100 overhead to take a first image of the target object 22. When the first image is captured using the first electronic device 100, the camera of the first electronic device 100 is located at the first position P3. Then, the measuring person holds the first electronic device 100 at the waist to take a second image of the target object. When the second image is captured using the first electronic device 100, the camera of the first electronic device 100 is located at the second position P4. The lines connecting the point Q2 on the target object to the point P3 and to the point P4 form an included angle α2, where the included angle α2 is neither zero nor 180 degrees. The method shown in fig. 2B can be applied to measurement in a narrow space, for example, a narrow machine room; the distance between the two shooting points is about 0.4-1 meter, and a target object within 10 meters can be measured.
For another example, as shown in fig. 2C, the measuring person may extend an arm to one side of the body to capture a first image of the target object 22. When the first image is captured using the first electronic device 100, the camera of the first electronic device 100 is located at the first position P5. The measuring person then extends the arm to the other side of the body and takes a second image of the target object. When the second image is captured using the first electronic device 100, the camera of the first electronic device 100 is located at the second position P6. The lines connecting the point Q3 on the target object to the point P5 and to the point P6 form an included angle α3, where the included angle α3 is neither zero nor 180 degrees. The method shown in fig. 2C may be applied to scenarios where the measuring person cannot move easily, such as on a tower or a roof. The measuring person can extend the arms leftward and rightward to take the two photographs; the distance between the two shooting points is about 1-2.5 meters, and a target object within 20-50 meters can be measured.
In some examples, the first electronic device 100 may also receive information, input by the measuring person, of the first object with a known size, such as the information of the two end points of the first object in the first image and the second image and the actual size of the first object. The first image and the second image each comprise the first object.
Then, the first electronic device 100 transmits the captured first and second images to the server 200. The server 200 constructs a digital three-dimensional space from the first image and the second image. It should be noted that the ratio of the size of each object in the digital three-dimensional space to the size of the corresponding object in the real three-dimensional world is the same for all objects, the relative positional relationship between the objects in the digital three-dimensional space is the same as that between the objects in the real three-dimensional world, and the ratio of the distance between objects in the digital three-dimensional space to the distance between the corresponding objects in the real three-dimensional world is also the same.
The server 200 may identify information of the first object of known size in the first image and the second image or receive information of the first object of known size from the first electronic device 100. The server 200 may calculate the scaling of the digital three-dimensional space to the real three-dimensional world based on the actual size of the first object and the size of the first object in the digital three-dimensional space. Thus, the server 200 may calculate the distance between the two end points of the target object based on the scaling and the digital three-dimensional space. In one implementation, the server 200 may receive the two-endpoint information of the target object sent by the second electronic device 300, and the server 200 may calculate the distance between the two end points according to the two-endpoint information of the target object and the scaling. In another implementation, the server 200 may send the digital three-dimensional space and the scaling to the second electronic device 300. The second electronic device 300 may then calculate the distance between the two end points of the target object according to the information of the two end points of the target object input by the measuring person.
The second electronic device 300 is a device having a display screen and an input device, and can display the first image and the second image and receive information of the target object input by the measuring person according to the first image and the second image. For example, the second electronic device 300 may be a mobile phone, a tablet PC, a Personal Computer (PC), a Personal Digital Assistant (PDA), a netbook, etc., and the specific form of the second electronic device 300 is not particularly limited in this application. In some examples, the second electronic device 300 may be the same device as the first electronic device 100.
In other embodiments of the present application, the server 200 may further determine that the plane with the most densely distributed three-dimensional discrete points in the digital three-dimensional space is the ground, and further determine the ground height and the sky direction. Thus, the server 200 may calculate the height of the target object, i.e., the distance from the top of the target object to the ground, according to the scaling, the digital three-dimensional space, the ground height, and the sky direction. In one implementation, the server 200 may receive the top-end information of the target object transmitted by the second electronic device 300. The server 200 may calculate the height of the target object according to the top-end information of the target object, the digital three-dimensional space, the ground height, and the sky direction. In another implementation, the server 200 may send the digital three-dimensional space and related parameters (the scaling, the ground height, and the sky direction) to the second electronic device 300. The second electronic device 300 may then calculate the height of the target object according to the top-end information of the target object input by the measuring person and the digital three-dimensional space.
In summary, in the measurement method provided by the embodiment of the present application, a measurer can use the first electronic device 100 to shoot the first image and the second image in different shooting directions at two different shooting positions. A digital three-dimensional space is then constructed based on the first image and the second image. Then, according to the actual size of the object with a known size in the first image and the second image, the scaling of the digital three-dimensional space to the real three-dimensional world is obtained. The distance between the two end points of the target object in the first image and the second image is then calculated based on the scaling and the digital three-dimensional space. The ground in the digital three-dimensional space can also be identified as the plane where the three-dimensional discrete points are most densely distributed, so that the height of the target object in the first image and the second image can be calculated. Compared with the prior art, in which a measuring person needs to arrange targets with a plurality of control points according to strict distance and azimuth relationships in advance and shoot a large number of photos or videos containing the targets, shooting images at different shooting positions and in different shooting directions in the embodiment of the present application is convenient and highly reliable. In addition, determining the scaling of the size of the digital three-dimensional space by using the actual size of the first object also helps to improve the reliability of measurement. Furthermore, since images can be shot at different shooting positions and in different shooting directions in scenes such as a narrow machine room and an inclined roof, the measurement method provided by the embodiment of the present application can be applied to a wider range of measurement scenes.
Referring to fig. 3, fig. 3 is a schematic structural diagram of the first electronic device 100.
The first electronic device 100 may include a processor 110, an internal memory 121, a Universal Serial Bus (USB) interface 130, a camera 150, a display screen 160, and the like. Optionally, the first electronic device 100 may further include one or more of an external memory interface 120, a charging management module 140, a power management module 141, and a battery 142.
Among other things, processor 110 may include one or more processing units, such as: the processor 110 may include one or more of an Application Processor (AP), a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU). The different processing units may be separate devices or may be integrated into one or more processors. In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, a general-purpose input/output (GPIO) interface, and/or a Universal Serial Bus (USB) interface, etc. The processor 110 is communicatively coupled to other devices (e.g., internal memory 121, camera 150, display screen 160, etc.) through the one or more interfaces.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the first electronic device 100, and may also be used to transmit data between the first electronic device 100 and a peripheral device.
In this embodiment, the first electronic device 100 may send the captured first image and the captured second image to the server 200 through the USB interface 130, and may also send to the server 200 the positions of the two end points or of the top of the target object in the first image, the positions of the two end points or of the top of the target object in the second image, and the like, received from the user's marks. In other examples, the first electronic device 100 may further send to the server 200, through the one or more interfaces, the user-input positions of the two end points of the first object with known size in the first image, the positions of the two end points of the first object with known size in the second image, and the actual size of the first object.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and does not limit the structure of the first electronic device 100. In other embodiments of the present application, the first electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The first electronic device 100 implements the display function through the GPU, the display screen 160, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 160 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The first electronic device 100 may implement a photographing function through the ISP, the camera 150, the GPU, the display screen 160, the application processor, and the like.
The ISP is used to process the data fed back by the camera 150. For example, when a photo is taken, the shutter is opened, light is transmitted to the photosensitive element of the camera through the lens, the optical signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP, which processes it and converts it into an image visible to the naked eye. The ISP can also perform algorithm optimization on the noise, brightness, and skin color of the image, and can optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 150.
The camera 150 is used to capture still images or video. The object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then passed to the ISP, where it is converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the first electronic device 100 may include 1 or N cameras 150, N being a positive integer greater than 1.
In the embodiment of the present application, the first electronic device 100 may call the camera 150 to capture the first image and the second image containing the target object in different capturing directions at different capturing positions. A capturing position here refers to the position of the optical center of the camera 150.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the first electronic device 100. The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The charging management module 140 is configured to receive charging input from a charger. The power management module 141 is used to connect the battery 142, the charging management module 140, and the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 160, the camera 150, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the first electronic device 100. In other embodiments of the present application, the first electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Referring to fig. 4, fig. 4 shows a schematic diagram of a server 200, where the server 200 includes one or more processors 210, one or more external memories 220, and one or more communication interfaces 230. Optionally, the server 200 may also include an input device 240 and an output device 250.
The processor 210, the external memory 220, the communication interface 230, the input device 240, and the output device 250 are connected by a bus. The processor 210 may include a general-purpose Central Processing Unit (CPU), a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Graphics Processing Unit (GPU), a neural Network Processing Unit (NPU), or one or more integrated circuits for controlling the execution of the programs of the present disclosure.
Generally, an internal memory may be disposed in the processor and may be used to store computer-executable program code, including instructions. The internal memory may include a program storage area and a data storage area. The program storage area may store an operating system and the algorithm models needed in the embodiment of the present application, for example, an algorithm model for identifying the first object, an algorithm for constructing a digital three-dimensional space from the first image and the second image, an algorithm for solving the scaling of the digital three-dimensional space from the actual size of the first object, and an algorithm for identifying the plane in the digital three-dimensional space where the three-dimensional discrete points are most densely distributed. The data storage area may store data created during use of the server 200 (a three-dimensional discrete point cloud of the digital three-dimensional space, the actual size of the first object, a parameter of the ground position of the digital three-dimensional space, a parameter of the sky direction of the digital three-dimensional space, etc.). In addition, the internal memory may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one of a magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like. The processor 210 executes the various functional applications and data processing of the server 200 by executing the instructions stored in the internal memory. In one example, the processor 210 may include multiple CPUs, and the processor 210 may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor here may refer to one or more devices, circuits, or processing cores that process data (e.g., computer program instructions).
The communication interface 230 may be used to communicate with other devices or communication networks, such as Ethernet or a wireless local area network (WLAN).
The output device 250 is in communication with the processor 210 and may display information in a variety of ways. For example, the output device may be a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) Display device, a Cathode Ray Tube (CRT) Display device, a projector (projector), or the like.
The input device 240 is in communication with the processor 210 and may receive user input in a variety of ways. For example, the input device may be a mouse, a keyboard, a touch screen device, or a sensing device, among others.
It should be noted that the structure of the second electronic device 300 can be understood with reference to the structure of the first electronic device 100 in fig. 3. It should be understood that the second electronic device 300 may include more or fewer components than the first electronic device 100, or combine certain components, or split certain components, or have a different arrangement of components, which is not limited in the embodiment of the present application. In some examples, the second electronic device 300 may be the same device as the first electronic device 100.
In order to facilitate understanding of technical solutions provided by the embodiments of the present application, description will be made on technical terms related to the embodiments of the present application.
Free network adjustment: adjustment refers to reasonably distributing the accidental errors of the observed values, correcting the systematic errors of the observed values in advance, and controlling the gross errors of the observed values by adopting a certain observation scheme and adjustment method. In a general adjustment algorithm, the control network is fixed to known datum data on the basis of known starting data. When the network does not have the necessary starting data, it is called a free network, and the adjustment method used when no starting data are available is free network adjustment.
Space backward intersection (space resection): a method of calculating the exterior orientation elements of an image according to the collinearity equations by using at least three control points (or connection points) on the image that are not on a straight line.
Space forward intersection (space intersection): a method of determining the object space coordinates of a point (coordinates in a certain temporary three-dimensional coordinate system or in the ground survey coordinate system) from the interior and exterior orientation elements of the stereopair formed by the left and right images and the measured image coordinates of the homonymous image points.
Shooting position: a shooting position herein may be understood as the position of the optical center (or shooting center) of the camera of the first electronic device 100 when the image is shot.
Shooting direction: capturing the first image and the second image in different shooting directions herein means that, with the subject (e.g., the target object) as the center, two different shooting points are selected around it to capture the first image and the second image, and the two shooting points are not on the same straight line with the target object. With the shooting distance and shooting height unchanged, different shooting directions show different sides of the target object. Alternatively, the first image and the second image being taken in different shooting directions can be understood as meaning that the lines connecting a point Q on the target object with the shooting centers of the two images form an angle that is neither zero nor 180 degrees. The point Q on the target object may be any point on the target object, for example, either of the two end points of the target object, a vertex of the target object, or the like.
The technical solutions in the following embodiments can be implemented in a communication system as shown in fig. 1, and the technical solutions provided in the embodiments of the present application are described in detail below with reference to the drawings.
As shown in fig. 5A, a flowchart of a digital photogrammetry method provided in the embodiment of the present application is specifically as follows:
S501, acquiring a first image and a second image, wherein the first image and the second image both comprise a target object, the shooting positions of the first image and the second image are different, and the shooting directions of the first image and the second image are different.
In some embodiments, the measuring person may carry the first electronic device 100 and take the first image and the second image of the target object in different shooting directions at different shooting positions. The meaning of the shooting position and the shooting direction can be referred to the above description, and is not described herein again. Then, the first and second images captured by the first electronic device 100 are transmitted to the server 200, and the server 200 performs subsequent data processing.
The first electronic device 100 may be a device convenient to carry or a device commonly used by a measurer, such as a mobile phone, a tablet computer, a camera, a wearable device with a camera, or a device connected with a camera. Therefore, special measuring equipment is avoided being used in the measuring process, the measuring cost is reduced, and the carrying of measuring personnel is facilitated.
A cellular phone is taken as an example of the first electronic device 100 for illustration.
The measuring personnel can open the measuring application in the mobile phone and call the camera to take a picture. In the shooting process, the mobile phone can display some guide information to prompt a measurer to shoot a first image and a second image at two different shooting positions in different shooting directions. For example, the presentation information 601 shown in fig. 6A, the presentation information 602 shown in fig. 6B, the presentation information 603 shown in fig. 6C, and the presentation information 604 shown in fig. 6D. Then, the mobile phone may upload the captured first image and second image to the server 200 for processing.
Further, in one example, if the angle formed at a point Q on the target object by the lines connecting Q with the shooting centers of the two images is controlled to be greater than 5 degrees and less than 60 degrees, the measurement error can be reduced to 2% or less. In another example, it is assumed that the shooting center P1 of the first image is at a distance D from a point Q on the target object. The point Q on the target object may be any point on the target object, for example, either of the two end points of the target object, a vertex of the target object, or the like. If the distance between the shooting center P2 of the second image and P1 is controlled to be greater than D/20 and less than D, the measurement error can likewise be reduced to 2% or less.
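These shooting-geometry constraints can be checked numerically. The following is a minimal sketch, not part of the original disclosure, that computes the intersection angle at a point Q and the baseline between the two shooting centers; the function and variable names (check_shooting_geometry, p1, p2, q) are illustrative assumptions.

```python
# Check the two constraints described above: the angle at Q between the rays to the
# two shooting centers should lie in (5°, 60°), and the baseline |P1P2| in (D/20, D).
import numpy as np

def check_shooting_geometry(p1, p2, q, min_deg=5.0, max_deg=60.0):
    p1, p2, q = map(np.asarray, (p1, p2, q))
    v1, v2 = p1 - q, p2 - q                      # rays from Q to the two shooting centers
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    d = np.linalg.norm(v1)                       # distance D from P1 to Q
    baseline = np.linalg.norm(p2 - p1)           # distance between the two shooting centers
    ok = (min_deg < angle < max_deg) and (d / 20.0 < baseline < d)
    return angle, baseline, ok

# Example: shooting positions about 1 m apart, target point roughly 5 m away.
print(check_shooting_geometry([0, 0, 0], [1, 0, 0], [2.5, 5.0, 0]))
```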
S502, constructing a digital three-dimensional space according to the first image and the second image.
In some embodiments of the present application, the server 200 may first identify rigid, invariant regions in the first image and the second image as effective regions, or identify non-rigid, changeable objects in the first image and the second image as invalid regions. Subsequent data processing is performed only on the effective regions in the first image and the second image. On the one hand, excluding the regions of changeable objects in the first image and the second image improves the accuracy of the subsequent feature point matching. On the other hand, since only the effective regions in the first image and the second image are processed subsequently, the amount of data to be processed is greatly reduced, which improves the data processing efficiency.
In some examples of this embodiment, the sky, water surface, pedestrians, vehicles, etc. in the image may all be considered changeable objects. Therefore, the server 200 may perform full-element classification on the received first image and second image by using a semantic segmentation method, and identify changeable objects, i.e., invalid areas, in the first image and the second image. The server 200 may then add a gray mask to the inactive areas in the first and second images to mask the inactive areas in the first and second images.
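The masking step can be sketched as follows, assuming a per-pixel label map produced by some semantic segmentation model; the class names and the helper mask_invalid_areas are illustrative assumptions, not part of the original disclosure.

```python
# Cover pixels belonging to changeable classes (sky, water, pedestrian, vehicle)
# with a gray mask so that only the effective regions are processed afterwards.
import numpy as np

CHANGEABLE_CLASSES = {"sky", "water", "pedestrian", "vehicle"}   # illustrative label names

def mask_invalid_areas(image, label_map, class_names, gray=128):
    # class_names: array of strings indexed by label id; label_map: HxW array of label ids.
    masked = image.copy()
    invalid = np.isin(class_names[label_map], list(CHANGEABLE_CLASSES))
    masked[invalid] = gray            # gray out the invalid (changeable) regions
    return masked
```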
The semantic segmentation method includes, but is not limited to, using the open-source deep learning model DeepLab-v3, and the like, which is not limited in this embodiment.
Optionally, the server 200 may perform color equalization on the images of the identified invariant regions (i.e., the effective regions) in the first image and the second image. In a specific implementation, algorithms such as histogram stretching, histogram regularization, and gamma transformation may be applied to the effective regions in the first image and the second image to achieve color equalization.
The histogram stretching method is taken as an example. In one example, the server 200 computes a gray-level histogram for the image of the effective region in the first image. The gray-level histogram is obtained by counting the occurrence frequency of all pixels of the image according to their gray values. Then, based on the gray-level histogram of the image of the effective region in the first image, the server 200 discards a portion of the pixels with the largest gray values (for example, 0.5% of the total pixels of the effective region image) and/or a portion of the pixels with the smallest gray values (for example, 0.5% of the total pixels of the effective region image), thereby obtaining the truncation thresholds. A linear transformation is then constructed from the truncation thresholds to stretch the gray-level histogram, achieving color equalization of the image of the effective region in the first image. In a similar manner, the server 200 may color-equalize the effective region image in the second image.
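A minimal sketch of this histogram stretching, assuming the effective region is given as a grayscale array and that 0.5% of the darkest and brightest pixels are discarded; the function name is illustrative.

```python
import numpy as np

def stretch_histogram(gray, clip_percent=0.5):
    lo = np.percentile(gray, clip_percent)           # truncation threshold (dark side)
    hi = np.percentile(gray, 100.0 - clip_percent)   # truncation threshold (bright side)
    # Linear transformation built from the truncation thresholds.
    stretched = (gray.astype(np.float32) - lo) * 255.0 / max(hi - lo, 1e-6)
    return np.clip(stretched, 0, 255).astype(np.uint8)
```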
It should be noted that performing color equalization on the effective regions in the first image and the second image helps reduce the influence of environmental factors during shooting (e.g., weather conditions, lighting conditions) and of the different camera specifications used for shooting, and improves the accuracy of the subsequent feature point matching and dense matching.
Further, the server 200 performs feature point matching, free network adjustment, and dense matching on the effective region images of the first image and the second image, resulting in a digital three-dimensional space.
Feature point matching comprises feature extraction, feature description, and feature matching. Specifically, the server 200 extracts feature points from the effective region image of the first image and the effective region image of the second image after the above-described processing, and then describes each feature point separately. Each feature point in the effective region image of the first image is compared with each feature point in the effective region image of the second image in terms of similarity. Feature points whose similarity is higher than a threshold A are judged to be the same feature point (i.e., homonymous feature points), completing the feature matching. It should be understood that the server 200 may match the feature points in the first image and the second image using any technique known in the related art, which is not specifically limited in the embodiment of the present application. For example, the server 200 may perform feature description using feature description operators such as the scale-invariant feature transform (SIFT) or speeded up robust features (SURF). For another example, the server 200 may perform feature matching using a least squares method.
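A hedged sketch of feature point matching with SIFT descriptors and a ratio test; OpenCV is assumed to be available, and the ratio threshold 0.75 is an illustrative value, not one taken from the original disclosure.

```python
import cv2

def match_feature_points(img1_gray, img2_gray, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1_gray, None)   # feature extraction + description
    kp2, des2 = sift.detectAndCompute(img2_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    # Keep pairs whose similarity clearly exceeds that of the second-best candidate.
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return kp1, kp2, good
```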
The feature points determined to have the same name in the effective region image of the first image and the effective region image of the second image form the connection points of the two images, and the server 200 performs gross error rejection and error distribution, i.e., free network adjustment, based on the connection points. Then, space resection is performed using the connection points after the free network adjustment to obtain the relative exterior orientation elements of the two images. Next, epipolar images are constructed based on the relative exterior orientation elements of the two images, and dense matching, i.e., pixel-by-pixel space forward intersection, is performed to obtain a three-dimensional dense point cloud, forming the digital three-dimensional space. A three-dimensional point cloud is a set of point data on the surface of an object in three-dimensional space and can be used to reflect the surface contour of the object. The points obtained by this method are numerous and dense, i.e., a dense point cloud.
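The space forward intersection at the core of the dense matching can be sketched as follows, assuming the projection matrices P1 and P2 encode the relative exterior orientation elements recovered above; these matrices and the function name are assumptions of this example, not values from the disclosure.

```python
import cv2
import numpy as np

def forward_intersection(P1, P2, pts1, pts2):
    # pts1, pts2: 2xN arrays of homonymous image points (pixels) in the two images.
    pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)
    return (pts4d[:3] / pts4d[3]).T        # Nx3 discrete points of the point cloud
```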
It should be noted that, in the embodiment of the present application, the digital three-dimensional space obtained by free network adjustment, dense matching, and other processing based on the first image and the second image does not coincide with the real three-dimensional world in position, size, or orientation. However, the size of each object in the digital three-dimensional space bears the same proportion to the actual size of that object in the real three-dimensional world, and the distances between objects in the digital three-dimensional space bear the same proportion to the distances between the corresponding objects in the real three-dimensional world.
In other embodiments of the present application, during feature point matching, the server 200 may further verify, according to the matching results, whether the first image and the second image captured by the measuring person meet the shooting requirements. If they do not, the first electronic device 100 may display a related prompt or play a related voice prompt to ask the measurer to shoot the first image and the second image again, or to shoot the second image again.
For example, if it is detected that the homonymous feature points are not extracted or the number of extracted homonymous feature points is less than a threshold B (e.g., 10 to 100) in the effective areas of the first image and the second image, the server 200 may prompt the measurer through the first electronic device 100, "please re-photograph the first image and the second image, and ensure that the two shots are aligned with the same target".
For another example, if it is detected that the image point position deviation of the extracted feature points of the same name on the two images is generally smaller than the threshold C (for example, 5 to 20 pixels) in the effective areas of the first image and the second image, the server 200 may prompt the measurer through the first electronic device 100 to "please shoot the first image and the second image again, and ensure shooting at different positions".
The image point position deviation ΔP of a homonymous feature point (denoted as point P) on the first image and the second image can be calculated by the following formula:
ΔP = sqrt((Px1 − Px2)² + (Py1 − Py2)²)
where (Px1, Py1) are the pixel coordinates of point P in the first image, and (Px2, Py2) are the pixel coordinates of point P in the second image.
S503, acquiring the actual size S1 of the first object and the position information of the first object in the first image and the second image; calculating the size of the first object in the digital three-dimensional space according to the position information of the first object in the first image and the second image S2; wherein the first object is included in both the first image and the second image.
In some embodiments of the present application, the measuring person may also input information about the object of known size in the first image and the second image through the first electronic device 100, for example, the positions of the two end points of the object of known size in the first image, the positions of the two end points of the object of known size in the second image, and the size of the first object in real space, i.e., the actual size S1. In one specific example, the measuring person may identify a first object of known size in the first image and the second image, mark the two end points of the first object in the first image and the second image, respectively, and input the size of the first object through the first electronic device 100. In another specific example, the measuring person may also place a first object of known size within the field of view of the camera while taking the first image and the second image; that is, the first object is captured in both the first image and the second image. After the first image and the second image are taken, the measuring person may mark the two end points of the first object in the first image and the second image, respectively, and input the actual size of the first object through the first electronic device 100.
Then, the first electronic device 100 sends the positions of the two end points of the first object in the first image (e.g., the coordinates of the pixels in the first image), the positions of the two end points of the first object in the second image (e.g., the coordinates of the pixels in the second image), and the true size S1 of the first object to the server 200. The server 200 may first calculate coordinates of the two end points of the first object in the digital three-dimensional space according to the positions of the two end points of the first object in the first image and the positions of the two end points in the second image, and then calculate a distance between the two coordinates, i.e., the size S2 of the first object in the digital three-dimensional space.
Still take a mobile phone as an example of the first electronic device 100 for illustration. The mobile phone can display the first image and the second image at the same time, or display the first image and the second image sequentially, so that the measuring staff can mark the positions of two end points (namely four positions) of the first object with known size on the two images respectively. In some examples, after the measurement person marks the position of an end point in one of the images, the mobile phone may also make an auxiliary line in the other image according to the position of the end point in the image and the geometric relationship of the stereopair to help the measurement person mark the position of the end point in the other image.
For example, because the display screen of the mobile phone is small, the first image and the second image may be displayed in sequence. As shown in fig. 6E and 6F, the first image and the second image displayed on the mobile phone each include a sheet of A4 paper of known length, the long side of the A4 paper being 29.7 cm. The measuring person can then mark the two end points of the long side of the A4 paper in the first image and the second image, respectively. As shown in fig. 6E, the measuring person may first mark the two end points E1 and F1 of the long side of the A4 paper in the first image. Of course, the first image can also be enlarged before marking the two end points E1 and F1, so that the marking is more accurate. Then, after switching to the second image, the mobile phone can draw auxiliary lines in the second image based on the two end points E1 and F1 marked in the first image and the geometric relationship of the stereopair. As shown in fig. 6F, the dotted line (1) is the auxiliary line corresponding to the end point E1, and the measurer can mark the corresponding end point E2 in the second image along this auxiliary line. The dotted line (2) is the auxiliary line corresponding to the end point F1, along which the measurer can mark the corresponding end point F2 in the second image. The mobile phone then determines the pixel coordinates of E1 and F1 in the first image, and of E2 and F2 in the second image. The mobile phone can also prompt the measuring person to input the value of the actual size S1 of the first object, and then sends the image point coordinates of E1, F1, E2, and F2 and the value of the actual size S1 to the server 200 for subsequent processing.
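One way such an auxiliary line could be obtained is from the epipolar geometry of the stereopair, as in the following hedged sketch; the fundamental matrix F is assumed to have been estimated from the matched feature points (e.g., with cv2.findFundamentalMat), and the function name is illustrative.

```python
import cv2
import numpy as np

def epipolar_line_in_second_image(F, point_in_first):
    pt = np.array([[point_in_first]], dtype=np.float32)          # shape (1, 1, 2)
    line = cv2.computeCorrespondEpilines(pt, 1, F).reshape(3)    # a*x + b*y + c = 0
    return line   # draw this line in the second image as the auxiliary line
```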
In other embodiments of the present application, the server 200 may also identify the two end points of a first object of known size in the first image and the second image, and the actual size S1 between the two end points of the first object. In some examples, a pole of fixed length whose appearance is easily recognized by the server 200 may be designed as the first object. When the measuring person takes the first image and the second image, the pole is placed within the shooting range of the first electronic device 100, so that the pole is included in both the captured first image and the captured second image.
The designed pole may be a rod-shaped object with several colors distributed in alternating segments. The server 200 can determine the positions of the end points of the pole by automatically locking onto specific colors of the rod-shaped object in the first image and the second image, and the actual size S1 between the two end points of the pole is known. The design of the pole and the method for identifying its two end points are described in detail below and are not repeated here.
In still other embodiments of the present application, the server 200 may first identify the two end points of the first object with known size in the first image and the second image. If the identification fails, the measuring person may be prompted through the first electronic device 100 to manually input the information about the first object of known size, such as the positions of the two end points of the first object in the first image, the positions of the two end points of the first object in the second image, and the actual size S1 of the first object. Alternatively, after identifying the two end points of the first object with known size in the first image and the second image, the server 200 may prompt the measuring person through the first electronic device 100 to check and confirm the identified information of the first object, so as to ensure the accuracy of the identification result.
S504, obtaining the scaling between the digital three-dimensional space and the real three-dimensional world according to the actual size S1 of the first object and the size S2 of the first object in the digital three-dimensional space.
In some embodiments of the present application, the server 200 obtains the scaling ratio S1/S2 between the digital three-dimensional space and the real space from the size S2 of the first object in the digital three-dimensional space and the actual size S1 of the first object in the real three-dimensional world.
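A minimal sketch of steps S503 and S504, assuming the two end points of the first object have already been intersected into the digital three-dimensional space; the function and variable names are illustrative.

```python
import numpy as np

def scale_ratio(endpoint_a_3d, endpoint_b_3d, actual_size_s1):
    s2 = np.linalg.norm(np.asarray(endpoint_a_3d) - np.asarray(endpoint_b_3d))
    return actual_size_s1 / s2          # scaling of the digital space to the real space
```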
S505, determining the distance between the two end points of the target object according to the digital three-dimensional space, the scaling, the first image, and the second image.
In one example, the server 200 may receive the positions of the two end points of the target object transmitted by the second electronic device 300, including the pixel coordinates of the two end points of the target object in the first image and their pixel coordinates in the second image. Space forward intersection is performed on the image point coordinates of the two end points to calculate their coordinates in the digital three-dimensional space. As shown in fig. 5B, point U (x3, y3, z3) and point V (x4, y4, z4) are the coordinates of the two end points of the target object in the digital three-dimensional space calculated by the server 200. The distance between points U and V in the digital three-dimensional space is calculated from their coordinates, and the actual distance between the two end points is then obtained from this distance and the scaling.
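A minimal sketch of this step, assuming points U and V have been obtained by space forward intersection of the marked end points; the coordinate values in the example call are purely illustrative.

```python
import numpy as np

def real_distance(u_3d, v_3d, scale):
    digital_distance = np.linalg.norm(np.asarray(u_3d) - np.asarray(v_3d))
    return digital_distance * scale     # actual distance between the two end points

# Example with the points of fig. 5B: U(x3, y3, z3), V(x4, y4, z4).
print(real_distance([1.0, 2.0, 0.5], [1.0, 5.0, 0.5], scale=0.02))
```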
In another example, the server 200 may also send the calculated digital three-dimensional space and scaling to the second electronic device 300. The second electronic device 300 receives the marks of the two end points of the target object input by the measuring person and calculates the distance between the two end points of the target object from the digital three-dimensional space and the scaling.
Still take a mobile phone as an example of the second electronic device 300 for illustration. The mobile phone can display the first image and the second image at the same time, or display the first image and the second image successively, so that the measuring staff can mark the positions of two end points of the target object on the two images respectively (namely, four positions in total). The specific marking method is the same as the method for marking the two end points of the first object in step S503, and is not described herein again. For example, as shown in fig. 6G, the measuring person marks two end points S1 and R1 of the target object in the first image. As shown in fig. 6H, the measuring person marks two end points S2 and R2 of the target object in the second image. Then, the second electronic device 300 sends the coordinates of the image points of S1, R1, S2 and R2 to the server 200, so that the server 200 can perform subsequent processing to measure the length of the display. In some examples, the second electronic device 300 may be the same device as the first electronic device 100.
It should be noted that the target object here may be one, and the distance between two end points of the target object may be the height, length, width, and the like of the target object. The target objects may be two, and the distance between the two end points of the target objects may be the distance between the two target objects, and the like. For example, the measurement method can be used in the digital survey of telecommunication base stations, and is used for acquiring information such as equipment size, cable length, installation spacing and the like. And the method can also be used for other engineering measurement or daily life, such as measuring the building distance and the like.
In summary, in the measurement method provided by the embodiment of the application, a measurer can use the first electronic device 100 to shoot the first image and the second image in different shooting directions at two different shooting positions. A digital three-dimensional space is then constructed from the first image and the second image, and the scaling between the digital three-dimensional space and the real three-dimensional world is obtained from the actual size of the object with known size in the first image and the second image. The distance between the two end points of the target object in the first image and the second image can then be calculated based on the digital three-dimensional space and the scaling. Compared with the prior art, in which a measuring person needs to arrange in advance a target with a plurality of control points according to a strict distance and azimuth relation and shoot a large number of photos or videos containing the target, the embodiment of the application is more convenient and more reliable. In addition, determining the scaling of the digital three-dimensional space by using the actual size of the first object also helps to improve the reliability of the measurement. Moreover, since images can be shot at different shooting positions and in different shooting directions even in scenes such as a narrow machine room or an inclined roof, the measurement method provided by the embodiment of the application can be applied to a wider range of measurement scenes.
In some scenarios in which the height of the target object is measured, the first electronic device 100 may not be able to capture an image including both the top end and the bottom end of the target object, for example because the target object is tall or its bottom end is blocked by other objects. Therefore, the embodiment of the present application further provides a digital photogrammetry method, which can identify the plane with the densest three-dimensional discrete points in the digital three-dimensional space based on the digital three-dimensional space obtained in step S502 and the scaling obtained in step S504, and determine that this plane is the ground. The sky direction is then determined from the normal vector of the ground and the shooting positions of the first image and the second image. The server 200 can then calculate the distance from the top end of the target object to the ground, i.e., the height of the target object, from the top end of the target object, the ground position, and the sky direction, thereby expanding the usage scenarios of the measurement method provided by the embodiment of the present application. In addition, when calculating the height of the target object, only the top-end position needs to be marked in the first image and the second image; the bottom-end position does not need to be marked.
Specifically, fig. 7A is a schematic flowchart of another digital photogrammetry method according to the embodiment of the present application. The measuring method includes the above steps S501 to S504 as well as steps S701 to S702, which are specifically as follows:
S701, identifying the plane with the densest distribution of discrete points in the digital three-dimensional space as the ground, and further determining the sky direction in the digital three-dimensional space.
Generally, in the real three-dimensional world, the ground is the plane with the richest texture and the most rigid objects distributed on it. Thus, the region with the richest texture in the first image and the second image can be regarded as the ground, and in the digital three-dimensional space constructed from the first image and the second image, the plane in which the three-dimensional discrete points are most densely distributed can be regarded as the ground. The plane with the densest three-dimensional discrete point distribution is the plane containing the largest number of point data per unit space. After the ground in the digital three-dimensional space is determined, the normal vector of the ground gives the sky direction or the gravity direction. Furthermore, since the shooting center of the first image or the second image must lie above the ground, the sky direction in the digital three-dimensional space can be determined.
In one specific example, the server 200 may determine the ground and sky directions in the digital three-dimensional space using the following steps:
Step a: determine the shooting center of the first image and the shooting center of the second image in the digital three-dimensional space. The midpoint of the line connecting the shooting centers of the two images is denoted as point O (Ox, Oy, Oz), and a virtual sphere is constructed with point O as the sphere center.
Step b: grid the virtual sphere by longitude and latitude. The longitude is denoted Lon with a value range of (−180°, 180°), and the latitude is denoted Lat with a value range of (−90°, 90°). With 1° as the sampling interval, there are 360 × 180 grid points on the virtual sphere in total. Of course, other sampling intervals may also be used, and the number of grid points on the virtual sphere is not limited in this application.
Step c: with the sphere center O as the starting point, draw rays to the grid points on the spherical surface, forming 360 × 180 vectors, denoted V(Lon, Lat), representing 360 × 180 directions.
Step d: with the virtual sphere center O as the starting point, arrange, along each direction V(Lon, Lat), n (for example, 10) virtual cylinders of height m (for example, 0.2 m) and radius r (for example, 50 m) at equal intervals, denoted C(Lon, Lat, i), where i is the cylinder index, i ∈ {1, 2, ..., 10}. It should be noted that the height and radius of the virtual cylinders are designed according to the dimensions of the real three-dimensional world, so in the digital three-dimensional space they are divided by the scaling ratio S1/S2. Of course, the height and radius of the virtual cylinders may also be designed directly according to the dimensions of the digital three-dimensional space, in which case they need not be divided by the scaling ratio S1/S2; this is not limited in the embodiment of the present application.
Step e: calculate the number of three-dimensional discrete points inside the bounding box of each of the 360 × 180 × n virtual cylinders formed in steps c and d, and record the direction V(Lon, Lat) and the index i_mark corresponding to the bounding box containing the largest number of points. The sky direction is then the direction opposite to that direction, and the ground lies at a distance of m × i_mark from the virtual sphere center O.
It is understood that other methods may be used to determine the ground and sky directions in the digital three-dimensional space, which is not specifically limited in the embodiments of the present application.
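A hedged sketch of steps a–e above: grid a virtual sphere around the midpoint O of the two shooting centers, stack n virtual cylinders along each direction, and keep the direction whose cylinders contain the most discrete points. Approximating each cylinder's bounding box by distance tests against the cylinder axis, and the straightforward (slow) nested loops, are implementation assumptions of this sketch, not part of the disclosure.

```python
import numpy as np

def find_ground_direction(points, o, scale, n=10, m=0.2, r=50.0, step_deg=1.0):
    m_d, r_d = m / scale, r / scale                  # convert real-world sizes to digital space
    best = (-1, None, None)                          # (count, direction, index)
    rel = points - o                                 # discrete points relative to sphere center O
    lons = np.deg2rad(np.arange(-180, 180, step_deg))
    lats = np.deg2rad(np.arange(-90, 90, step_deg))
    for lon in lons:
        for lat in lats:
            v = np.array([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])
            t = rel @ v                              # signed distance along the direction
            radial = np.linalg.norm(rel - np.outer(t, v), axis=1)
            for i in range(1, n + 1):                # i-th virtual cylinder along this direction
                inside = (radial <= r_d) & (t >= (i - 1) * m_d) & (t < i * m_d)
                count = int(inside.sum())
                if count > best[0]:
                    best = (count, v, i)
    _, v_ground, i_mark = best
    sky_direction = -v_ground                        # sky is opposite to the densest direction
    ground_distance = m_d * i_mark                   # distance from O to the ground
    return v_ground, sky_direction, ground_distance
```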
S702, determining the distance from the top end of the target object to the ground as the height of the target object according to the position of the ground, the direction of the sky, the digital three-dimensional space, the first image and the second image.
In one example, the server 200 may receive the position of the top end of the target object transmitted by the second electronic device 300, including the pixel coordinates of the top end of the target object in the first image and the pixel coordinates of the top end of the target object in the second image. Space forward intersection is performed on the image point coordinates of the top end of the target object to calculate the coordinates of the top end of the target object in the digital three-dimensional space. As shown in fig. 7B, point T (x1, y1, z1) is the coordinate of the top end of the target object in the digital three-dimensional space calculated by the server 200, and point G (x2, y2, z2) is a point taken arbitrarily on the ground of the digital three-dimensional space. The line connecting the top-end point T and point G can be used as the hypotenuse, and the height H in the vertical (sky) direction as the right-angle side, to construct a right triangle (or right trapezoid). Solving this triangle according to geometric principles yields the length H of the right-angle side, which is the height of the target object.
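A minimal sketch of step S702, assuming T is the intersected top point, G is any point taken on the ground plane, and sky_dir is the sky-direction vector determined in S701; the names are illustrative.

```python
import numpy as np

def target_height(t_3d, g_3d, sky_dir, scale):
    sky = np.asarray(sky_dir, dtype=float)
    sky /= np.linalg.norm(sky)                                      # unit sky direction
    h_digital = np.dot(np.asarray(t_3d) - np.asarray(g_3d), sky)    # right-angle side H
    return h_digital * scale                                        # height in the real world
```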
In another example, the server 200 may also transmit the calculated digital three-dimensional space, the scaling, the ground position, the sky direction, and other information to the second electronic device 300. The second electronic device 300 receives the mark of the top end of the target object input by the measuring person and calculates the height of the target object from the digital three-dimensional space and these parameters.
Still take a mobile phone as an example of the second electronic device 300 for illustration. The mobile phone can display the first image and the second image at the same time or sequentially display the first image and the second image so that the measuring personnel can mark the positions of the top end of the target object on the two images respectively (namely, two positions in total). The specific marking method is the same as the method for marking the two end points of the first object in step S503, and is not described herein again. In some examples, the second electronic device 300 may be the same device as the first electronic device 100.
Therefore, the measuring method provided by the embodiment of the application can be applied to digital surveying of telecommunication base stations, for obtaining the heights of the various devices on a distant tall tower and of the tower itself. It can also be applied to other engineering measurements or in daily life, such as measuring the height of a building.
The method by which the server 200 identifies the two end points of a target pole is described in detail below in connection with a specific pole design.
Fig. 8 is a schematic view of a target pole according to an embodiment of the present application. The pole is a rod-shaped object designed with four-color segmentation, where the four colors are black, white, color 1, and color 2. Color 1 and color 2 are different from each other, and neither is black or white. Color 1 and color 2 may be a pair of complementary colors selected from the hue circle; fig. 9 is a schematic diagram of a 24-color hue circle. For example, color 1 and color 2 may be red and cyan, or magenta and green. Of course, color 1 and color 2 may also be two colors close to a pair of complementary colors; for example, if color 1 is red, color 2 may be cyan-blue or cyan-green. The greater the distinction between the two chosen colors, the more accurately the server 200 can identify them.
It should be noted that when insufficient light is captured, blue and black colors are not easily distinguished in the captured image. When the shooting light is too bright, yellow and white are not easy to distinguish. Thus, neither color 1 nor color 2 is blue or yellow.
In some examples, the color segments on the target pole are arranged in the order: black, white, color 1, white, color 2, white, black. In this design, the intersection points of the black segments and the white segments (i.e., point A and point B) are the two end points that the server 200 needs to recognize, and on the pole the distance between these two end points is the actual size S1 of the first object. For example, the segments of the respective colors between the two end points are of equal length, a first length S0, e.g., 10 centimeters. Then S1 = 5 × S0, and the total length of the pole is greater than 5 × S0.
In one example, the pole may be made of plastic or carbon, which is resistant to deformation and non-conductive. The diameter of the pole may be 1 to 2.5 cm. The pole is a straight rod, including but not limited to a cylinder, an elliptic cylinder, a triangular prism, a quadrangular prism, etc. In another example, the pole may also be designed to be foldable, i.e., the pole may be divided into at least two sections connected by bolts or rubber bands, facilitating assembly and disassembly.
Fig. 10 is a schematic view of another exemplary target pole according to the present application. Compared with the pole of fig. 8, the pole of fig. 10 adds a white segment outside the black segment at each end (here, the black segments at the two ends are also of length S0). In this design, the intersection points of the black and white segments at the two ends of the pole (i.e., points C and D) are the two end points that the server 200 needs to recognize, and on the pole the distance between these two end points is the actual size S1 of the first object. For example, the segments of the respective colors between the two end points are of equal length, a first length S0, e.g., 10 centimeters. Then S1 = 7 × S0, and the total length of the pole is greater than 7 × S0. The embodiment of the present application does not limit the specific form of the pole.
In some embodiments of the present application, the measuring person may place a target pole as shown in fig. 8 or fig. 10 within the viewing range of the camera when taking the first image and the second image with the first electronic device 100. The pole is then included in both the first image and the second image.
Then, for each of the first image and the second image, the server 200 may determine the approximate position of the pole in the image using a combination of deep learning and morphological methods, and then lock the center line of the pole according to its straight-line features and color features. Next, the two end points of the pole, i.e., the two end points of the first object, are precisely located according to the center line of the pole, the known segmentation relationship of the colors on the pole, and the gray-level variation in the image.
In the following, taking the target pole shown in fig. 10 as an example, and with reference to fig. 11A to 11D, the method by which the server 200 identifies the two end points of the pole is described in detail. The method specifically includes:
Step a: predict the position ranges of the pole in the first image and the second image, respectively, and record them as the first range.
In a specific implementation, the two images may be sliced, that is, each image is cut into small pieces, i.e., slice images. Each slice image may, for example, have a size of 500 × 500 pixels, and a certain overlap, for example 50%, may be kept between a slice image and the slice images around it. The slicing increases the proportion of pixels occupied by the pole in each slice image, which helps target detection.
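A hedged sketch of this slicing: cut the image into 500 × 500 tiles with roughly 50% overlap; the tile size and overlap follow the example values in the text, and the function name is illustrative.

```python
def slice_image(image, tile=500, overlap=0.5):
    step = int(tile * (1.0 - overlap))
    h, w = image.shape[:2]
    slices = []
    for y in range(0, max(h - tile, 0) + 1, step):
        for x in range(0, max(w - tile, 0) + 1, step):
            slices.append(((x, y), image[y:y + tile, x:x + tile]))
    return slices    # list of (top-left corner, slice image) pairs
```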
Then, a deep learning method may be used to perform target detection on the slice images of the first image and the slice images of the second image, respectively, to obtain the approximate positions of the pole in the first image and the second image. Models for target detection include, but are not limited to, Mask R-CNN and the like.
It should be noted that, before performing target detection, the server 200 may pre-process the first image and the second image. For example, ordinary optical cameras generally exhibit imaging distortion, so distortion correction needs to be performed on each image using the camera's intrinsic parameters. For another example, if the camera of the first electronic device 100 uses a fisheye lens, a spherical-to-central-projection perspective transformation needs to be performed on each image. The distortion correction and the perspective transformation ensure that the pole appears as a straight line in the image and is not bent by projection deformation.
Optionally, a morphological operation of "erosion first and then dilation" may further be applied to the image regions of the pole identified by the deep-learning-based method in the first image and the second image, so as to remove small-area noise points and to expand and connect the predicted range of the pole, thereby better constraining the positions of the pole in the first image and the second image. The scale of the morphological dilation should be larger than the scale of the morphological erosion; for example, the dilation scale is 30 pixels and the erosion scale is 10 pixels.
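A hedged sketch of this clean-up, assuming the predicted pole region is given as a binary mask; the kernel sizes follow the example values above, and rectangular kernels are an assumption of this sketch.

```python
import cv2
import numpy as np

def refine_target_mask(mask, erode_px=10, dilate_px=30):
    erode_k = np.ones((erode_px, erode_px), np.uint8)
    dilate_k = np.ones((dilate_px, dilate_px), np.uint8)
    cleaned = cv2.erode(mask, erode_k)      # remove small-area noise points
    return cv2.dilate(cleaned, dilate_k)    # expand and connect the predicted range
```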
For example, fig. 11A shows an example of the first image or the second image containing a pole. After the processing of step a, the approximate position of the pole in the image can be obtained, such as the white area shown in fig. 11B.
Step b: determine, from the images within the first range in the first image and the second image, the regions having straight-line features; these regions form a more precise position range of the pole, recorded as the second range. The second range is smaller than the first range and is contained within the first range.
In one specific implementation, a filter may be used to determine the regions having straight-line features in the first image and the second image. The filter may be the real part of a two-dimensional Gabor function, constructed as:
Gabor_{λ,σ,γ,θ,φ}(x, y) = exp(−(x′² + γ²·y′²) / (2σ²)) · cos(2π·x′/λ + φ), with x′ = x·cos θ + y·sin θ and y′ = −x·sin θ + y·cos θ,
where (x, y) is the position in the two-dimensional filter; Gabor_{λ,σ,γ,θ,φ}(x, y) is the value of the Gabor filter at that position; λ is the wavelength of the sinusoidal function, 10 < λ < 20; σ is the standard deviation of the Gaussian function, 3 < σ < 6; γ is the aspect ratio of the Gaussian function in the x and y directions, γ = 1; φ is the initial phase of the sinusoid, φ = 0; and θ is the direction of the Gabor kernel function. In some examples of the application, a set of directions may be selected for the Gabor kernel; for example, 9 directions, θ = 0°, 20°, 40°, 60°, 80°, 100°, 120°, 140°, 160°, i.e., 9 Gabor filters are constructed.
After the filters are constructed, the first image and the second image are processed with them to extract the straight-line features in the images. For example, the size of the Gabor filter window is set to any odd number between 21 and 51. The images within the first range in the first image and the second image are then filtered repeatedly in a sliding-window manner, giving the Gabor feature value of the pixel in row i and column j in direction θ, denoted TGabor-θ(i, j). The final Gabor feature value is the maximum of the absolute values of the Gabor feature values over the several directions (for example, 9 directions):
TGabor(i, j) = max over θ of |TGabor-θ(i, j)|.
after the above calculation, the region where the Gabor feature value is greater than the threshold value D (e.g., 100) is a region where the straight line feature is significant, and may be regarded as a more accurate position range of the target. For example, as shown in fig. 11C, the white area in the image is an area where the straight line feature determined in step b is obvious.
Further, the overlap of the approximate range of the pole determined in step a and the straight-line region determined in step b is taken as the more precise position range of the pole.
Step c: identify the color features of the pole from the images within the second range in the first image and the second image, and, combined with the design of the pole, identify the two end points of the pole.
First, superpixel segmentation is performed on the images within the second range in the first image and the second image, i.e., pixels of similar color and texture in the image are grouped into superpixels. This effectively suppresses the imaging noise of the optical lens and helps lock the color patches in the image, where a color patch is a region having a specific color feature. Superpixel segmentation algorithms include, but are not limited to, simple linear iterative clustering (SLIC), the mean-shift algorithm, and the like. The size of a superpixel is, for example, 50 to 100 pixels.
Then, the superpixel-segmented image is converted from the RGB space to the HSL (hue, saturation, lightness) space, and two color patches, i.e., the color patch corresponding to color 1 and the color patch corresponding to color 2 (for example, a red patch and a blue-green patch), are extracted from the images within the second range in the first image and the second image by hue thresholding.
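A hedged sketch of this color-patch extraction: segment the second-range image into superpixels with SLIC, convert to HSL (HLS in OpenCV), and keep the superpixels whose mean hue falls in the hue range of color 1 or color 2. The hue ranges below (roughly red and cyan on OpenCV's 0–180 hue scale) and the segment count are illustrative assumptions, not values from the text.

```python
import cv2
import numpy as np
from skimage.segmentation import slic

def extract_color_patches(rgb, hue_ranges=((0, 10), (85, 100)), n_segments=400):
    labels = slic(rgb, n_segments=n_segments, compactness=10)   # superpixel segmentation
    hls = cv2.cvtColor(rgb, cv2.COLOR_RGB2HLS)
    hue = hls[..., 0].astype(np.float32)                        # OpenCV hue is in [0, 180)
    mask = np.zeros(hue.shape, dtype=bool)
    for lab in np.unique(labels):
        mean_hue = hue[labels == lab].mean()
        if any(lo <= mean_hue <= hi for lo, hi in hue_ranges):
            mask |= labels == lab
    return mask                                                 # pixels of the two color patches
```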
Then, on the center line of the identified marker post, within a specific range on either side of the two identified color blocks, the position where the gray value changes the most is searched for and determined as the boundary point between a black block and a white block. Since the gray value of black in the image is close to and slightly above 0, and the gray value of white is close to and slightly below 255, the gray-value change at a black/white boundary is close to and slightly smaller than 255; therefore, the position of the maximum gray-value change is the boundary point between the black block and the white block.
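The search for a black/white boundary point can be illustrated as follows, assuming the gray values have already been sampled along the center line of the marker post within the search range; the 1-D profile and the function name are assumptions made for the example.

```python
import numpy as np

def boundary_point_index(gray_profile):
    # gray_profile: 1-D array of gray values (0-255) sampled along the center line.
    diffs = np.abs(np.diff(gray_profile.astype(np.float32)))  # gray-value change between neighbors
    # At a black/white boundary the change is close to, and slightly below, 255,
    # so the index of the maximum change marks the boundary point.
    return int(np.argmax(diffs))
```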
For example, if the target shown in fig. 8 is used, the position with the largest gray-value change, for example point A, can be determined on the extension line on one side of the line connecting the centers of gravity of the two color blocks, and the position with the largest gray-value change on the extension line on the other side, for example point B, can be determined likewise. Optionally, the two determined end points may be further verified according to the positional relationship of the blocks in the marker post. For example, the center of gravity of the color-1 block is 2 × s0 away from the center of gravity of the color-2 block. If the distance from the center of the color-1 block to point A is 1.5 × s0, or the distance from the center of the color-2 block to point A is 2.5 × s0, point A is considered to be accurately identified. If the distance from point B to the center of the color-1 block is 2.5 × s0, or the distance from point B to the center of the color-2 block is 1.5 × s0, point B is considered to be accurately identified.
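The optional verification for the fig. 8 target could look like the sketch below; the tolerance `tol` and the function name are illustrative assumptions, since distances measured in an image will only approximately equal 1.5 × s0 or 2.5 × s0.

```python
import numpy as np

def point_a_is_valid(a, c1, c2, s0, tol=0.2):
    # a: candidate end point; c1, c2: centers of gravity of the color-1 and color-2
    # blocks; s0: segment length, all expressed in the same (pixel) coordinates.
    d1 = np.linalg.norm(np.asarray(a, float) - np.asarray(c1, float))
    d2 = np.linalg.norm(np.asarray(a, float) - np.asarray(c2, float))
    # Point A is accepted if it lies at roughly the expected distance from either centroid.
    return abs(d1 - 1.5 * s0) <= tol * s0 or abs(d2 - 2.5 * s0) <= tol * s0
```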
For another example, if the target shown in fig. 10 is used, two positions with the largest gray-value change, for example point A and point C, can be determined on the extension line on one side of the line connecting the centers of gravity of the two color blocks. Further, point C can be determined as an end point of the marker post because the distance from point C to the centers of gravity of the color blocks is larger than the distance from point A to them. Optionally, whether point C is accurately identified may be further determined from the positional relationship between point A and point C on the marker post. For example, the center of gravity of the color-1 block is 2 × s0 away from the center of gravity of the color-2 block; if the distance between point C and point A is s0, point C can be considered to be correctly identified. On the extension line on the other side of the line connecting the centers of gravity of the two color blocks, two positions with the largest gray-value change, for example point B and point D, can be determined; similarly, point D may be determined as the other end point of the target. Optionally, whether point D is accurately identified may be further determined from the positional relationship between point B and point D. Of course, other methods may also be used to verify the accuracy of point C or point D identified by the server 200, which is not limited in the embodiments of the present application. Of course, the distance between point A and point B may also be defined as the distance between the two end points of the target that the server 200 needs to identify, which is likewise not limited in the embodiments of the present application.
The target shown in fig. 10 is still used as an example. As shown in fig. 11D, in the region 1001 to be identified determined in step b, the block 1002 corresponding to color 1 and the block 1003 corresponding to color 2 are identified. Then, on the extension lines of the line connecting the center of gravity of block 1002 and the center of gravity of block 1003, the four positions with the largest gray-value change are found, namely point A, point C, point B, and point D. Further, according to the positional relationship of the segments, point C and point D can be determined to be the two end points of the marker post.
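For the fig. 10 target, the selection of the outer end point on one side and its optional check against s0 might be sketched as follows; the function name and tolerance are assumptions made only for illustration.

```python
import numpy as np

def pick_outer_endpoint(candidates, patch_midpoint, s0, tol=0.2):
    # candidates: the two positions of maximal gray-value change on one side (e.g. A and C);
    # patch_midpoint: midpoint of the two color-block centers of gravity.
    dist = lambda p: np.linalg.norm(np.asarray(p, float) - np.asarray(patch_midpoint, float))
    inner, outer = sorted(candidates, key=dist)  # the farther point (C or D) is the end point
    # Optional check: the outer point should be about s0 away from the inner one.
    ok = abs(np.linalg.norm(np.asarray(outer, float) - np.asarray(inner, float)) - s0) <= tol * s0
    return outer, ok
```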
Therefore, when the server 200 can recognize the two end points of the marker post as the two end points of the first object, the measuring person does not need to mark the two end points of the first object in the first image and the second image separately through the first electronic device 100. This simplifies the measuring person's operation and makes the measurement more automated.
The above embodiments are described by taking as an example the construction of a digital three-dimensional space from the first image and the second image, the determination of the scaling, the ground position, the sky direction, and the like between the digital three-dimensional space and the real three-dimensional world, and finally the direct calculation of the distance between the two end points of the target object, or of the height of the target object, from the digital three-dimensional space, the scaling, the ground position, the sky direction, and the like. Based on the same inventive concept, after the digital three-dimensional space is constructed from the first image and the second image and the scaling, ground position, sky direction, and other information relating it to the real three-dimensional world are determined, the digital three-dimensional space may instead be scaled, translated, rotated, and so on, so that it is adjusted to be consistent with the real three-dimensional world. Then, the distance between the two end points of the target object, or the height of the target object, is calculated from the adjusted digital three-dimensional space. This is not limited in the embodiments of the present application.
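As a minimal sketch of this alternative, the digital three-dimensional space (treated as a point cloud) can be adjusted with a similarity transform; the rotation R, translation t, and scaling factor are assumed to have been derived beforehand from the sky direction, the ground position, and the first object's size, and the function name is illustrative.

```python
import numpy as np

def align_to_real_world(points, scale, R, t):
    # points: (N, 3) array of discrete points in the digital three-dimensional space;
    # scale: scaling between the digital and real spaces; R: 3x3 rotation; t: translation.
    # After this adjustment, distances and heights can be read off the point cloud directly.
    return scale * (points @ np.asarray(R).T) + np.asarray(t)
```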
An embodiment of the present application further provides a chip system. As shown in fig. 12, the chip system includes at least one processor 1101 and at least one interface circuit 1102. The processor 1101 and the interface circuit 1102 may be interconnected by wires. For example, the interface circuit 1102 may be used to receive signals from other devices (e.g., a memory of the server 200). As another example, the interface circuit 1102 may be used to send signals to other devices (e.g., the processor 1101). Illustratively, the interface circuit 1102 may read instructions stored in the memory and send the instructions to the processor 1101. The instructions, when executed by the processor 1101, may cause the server 200 to perform the various steps performed by the server 200 in the embodiments described above. Of course, the chip system may further include other discrete devices, which is not specifically limited in the embodiments of the present application.
It is to be understood that the above-mentioned terminal and the like include corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as combinations of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present embodiments.
In the embodiments of the present application, the terminal and the like may be divided into functional modules according to the above method examples; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of the modules in the embodiments of the present application is schematic and is merely a logical functional division; there may be other division manners in actual implementation.
In the case where each functional module is divided according to its corresponding function, fig. 13 shows another possible schematic structural diagram of the server involved in the above embodiments. The server 200 includes an obtaining unit 1301, a constructing unit 1302, and a determining unit 1303.
The acquiring unit 1301 is configured to acquire a first image and a second image, where the first image and the second image both include a target object and a first object with a known actual size; the shooting positions of the first image and the second image are different, and the shooting directions of the first image and the second image are different.
A constructing unit 1302, configured to construct a digital three-dimensional space according to the first image and the second image.
A determining unit 1303, configured to determine a distance between two end points of the target object according to the digital three-dimensional space and a scaling of the sizes of the digital three-dimensional space and a real three-dimensional space, where the scaling is related to the position of the first object in the first image, the position of the first object in the second image, and the actual size S1 of the first object.
Further, the determining unit 1303 is further configured to: determining a plane with the most dense discrete points in the digital three-dimensional space as the ground of the digital three-dimensional space; determining a sky direction of the digital three-dimensional space according to the ground of the digital three-dimensional space and the photographing center of the first image, or according to the ground of the digital three-dimensional space and the photographing center of the second image; determining an altitude of the target object based on the digital three-dimensional space, the scaling, a ground of the digital three-dimensional space, and a sky direction of the digital three-dimensional space.
For all relevant contents of the steps in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules; details are not described here again. In the case of an integrated unit, the obtaining unit 1301 may be the communication interface 230 of the server 200. The constructing unit 1302 and the determining unit 1303 may be integrated together and may be the processor 210 of the server 200.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or make a contribution to the prior art, or all or part of the technical solutions may be implemented in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: flash memory, removable hard drive, read only memory, random access memory, magnetic or optical disk, and the like.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.