Vanishing point estimation method and device


1. A method of vanishing point estimation, the method comprising:

acquiring a first intersection set, wherein the first intersection set comprises the intersection points of the straight lines on which any two line segments in a first video picture lie;

selecting, from the first intersection set, intersection points whose positional relationship with a first vanishing point meets a preset condition, to obtain a second intersection set, wherein the first vanishing point is a vanishing point estimated according to a second video picture, and the second video picture is the video picture of the frame preceding the first video picture; and

estimating a vanishing point of the first video picture according to the second intersection set.

2. The method of claim 1, wherein the selecting, from the first intersection set, intersection points whose positional relationship with a first vanishing point meets a preset condition to obtain a second intersection set comprises:

mapping each intersection point included in the first intersection set onto a Gaussian sphere to obtain a first mapping point of each intersection point on the Gaussian sphere;

determining a spherical cap on the Gaussian sphere by taking a direction vector of the first vanishing point on the Gaussian sphere as a cone axis and a first angle as a cone angle; and

forming the second intersection set from the intersection points corresponding to the first mapping points located in the spherical cap.

3. The method of claim 1, wherein the estimating a vanishing point of the first video picture according to the second intersection set comprises:

mapping each intersection point in the second intersection set onto a tangent plane of a Manhattan sphere to obtain a second mapping point corresponding to each intersection point;

clustering the second mapping points corresponding to the intersection points to obtain clustered mapping points; and

estimating a vanishing point of the first video picture according to the clustered mapping points.

4. The method of claim 3, wherein the mapping each intersection point in the second intersection set onto a tangent plane of a Manhattan sphere to obtain a second mapping point corresponding to each intersection point comprises:

mapping, according to the first vanishing point, a second vanishing point, and a third vanishing point, each intersection point included in the second intersection set onto the Manhattan sphere to obtain a third mapping point corresponding to each intersection point, wherein the second vanishing point and the third vanishing point are the other two vanishing points estimated according to the second video picture;

determining, according to the third mapping point corresponding to each intersection point, the direction vector of each intersection point on the Manhattan sphere; and

mapping, according to the direction vector of each intersection point, each intersection point onto a tangent plane of the Manhattan sphere to obtain the second mapping point corresponding to each intersection point.

5. The method of claim 3, wherein the estimating a vanishing point of the first video picture according to the clustered mapping points comprises:

mapping the clustered mapping points onto the Manhattan sphere to obtain third mapping points of the clustered mapping points; and

mapping the third mapping points of the clustered mapping points onto the Gaussian sphere to obtain a vanishing point of the first video picture.

6. The method of any one of claims 1 to 5, wherein the first video picture comprises three vanishing points, and

after the estimating a vanishing point of the first video picture according to the second intersection set, the method further comprises:

performing orthogonalization correction on the three vanishing points included in the first video picture to obtain three mutually orthogonal vanishing points.

7. The method of claim 6, wherein the performing orthogonalization correction on the three vanishing points included in the first video picture to obtain three mutually orthogonal vanishing points comprises:

acquiring a direction vector of each of the three vanishing points on a Gaussian sphere;

decomposing a direction matrix to obtain a first correction matrix and a second correction matrix, wherein the direction matrix is obtained based on the direction vector of each vanishing point; and

acquiring, according to the first correction matrix and the second correction matrix, the three mutually orthogonal vanishing points included in the first video picture.

8. An apparatus for vanishing point estimation, the apparatus comprising:

an acquisition module, configured to acquire a first intersection set, wherein the first intersection set comprises the intersection points of the straight lines on which any two line segments in the first video picture lie;

a selection module, configured to select, from the first intersection set, intersection points whose positional relationship with a first vanishing point meets a preset condition, to obtain a second intersection set, wherein the first vanishing point is a vanishing point estimated according to a second video picture, and the second video picture is the video picture of the frame preceding the first video picture; and

an estimation module, configured to estimate a vanishing point of the first video picture according to the second intersection set.

9. The apparatus of claim 8, wherein the selection module is configured to:

map each intersection point included in the first intersection set onto a Gaussian sphere to obtain a first mapping point of each intersection point on the Gaussian sphere;

determine a spherical cap on the Gaussian sphere by taking a direction vector of the first vanishing point on the Gaussian sphere as a cone axis and a first angle as a cone angle; and

form the second intersection set from the intersection points corresponding to the first mapping points located in the spherical cap.

10. The apparatus of claim 8, wherein the estimation module is configured to:

map each intersection point in the second intersection set onto a tangent plane of a Manhattan sphere to obtain a second mapping point corresponding to each intersection point;

cluster the second mapping points corresponding to the intersection points to obtain clustered mapping points; and

estimate a vanishing point of the first video picture according to the clustered mapping points.

Background

In perspective projection, objects extending to infinity in a real scene are imaged within a limited range; for example, two (or more) parallel straight lines intersect at a point after perspective projection, and this point is called a vanishing point. Vanishing point information is widely applied in image-based scene analysis, such as image-based scene reconstruction and calibration of the intrinsic parameters of an image acquisition device.

To obtain the vanishing points of an image effectively, existing methods often search for the final vanishing point among a large number of hypothesized vanishing points, for example by exhaustive search. When vanishing points are estimated by such algorithms, the required amount of computation is large and the computational complexity is high.

Disclosure of Invention

The embodiments of the present application provide a vanishing point estimation method and device to reduce computational complexity. The technical solution is as follows:

in one aspect, an embodiment of the present application provides a vanishing point estimation method, where the method includes:

acquiring a first intersection set, wherein the first intersection set includes the intersection points of the straight lines on which any two line segments in a first video picture lie;

selecting, from the first intersection set, intersection points whose positional relationship with a first vanishing point meets a preset condition, to obtain a second intersection set, wherein the first vanishing point is a vanishing point estimated according to a second video picture, and the second video picture is the video picture of the frame preceding the first video picture; and

estimating a vanishing point of the first video picture according to the second intersection set.

Optionally, the selecting, from the first intersection set, intersection points whose positional relationship with the first vanishing point meets a preset condition to obtain a second intersection set includes:

mapping each intersection point included in the first intersection set onto a Gaussian sphere to obtain a first mapping point of each intersection point on the Gaussian sphere;

determining a spherical cap on the Gaussian sphere by taking a direction vector of the first vanishing point on the Gaussian sphere as a cone axis and a first angle as a cone angle; and

forming the second intersection set from the intersection points corresponding to the first mapping points located in the spherical cap.

Optionally, the estimating a vanishing point of the first video picture according to the second intersection set includes:

mapping each intersection point in the second intersection set onto a tangent plane of a Manhattan sphere to obtain a second mapping point corresponding to each intersection point;

clustering the second mapping points corresponding to the intersection points to obtain clustered mapping points; and

estimating a vanishing point of the first video picture according to the clustered mapping points.

Optionally, the mapping each intersection point in the second intersection set onto a tangent plane of a Manhattan sphere to obtain a second mapping point corresponding to each intersection point includes:

mapping, according to the first vanishing point, a second vanishing point, and a third vanishing point, each intersection point included in the second intersection set onto the Manhattan sphere to obtain a third mapping point corresponding to each intersection point, wherein the second vanishing point and the third vanishing point are the other two vanishing points estimated according to the second video picture;

determining, according to the third mapping point corresponding to each intersection point, the direction vector of each intersection point on the Manhattan sphere; and

mapping, according to the direction vector of each intersection point, each intersection point onto a tangent plane of the Manhattan sphere to obtain the second mapping point corresponding to each intersection point.

Optionally, the estimating a vanishing point of the first video picture according to the clustered mapping points includes:

mapping the clustered mapping points onto the Manhattan sphere to obtain third mapping points of the clustered mapping points; and

mapping the third mapping points of the clustered mapping points onto the Gaussian sphere to obtain a vanishing point of the first video picture.

Optionally, the first video picture includes three vanishing points, and

after the estimating a vanishing point of the first video picture according to the second intersection set, the method further includes:

performing orthogonalization correction on the three vanishing points included in the first video picture to obtain three mutually orthogonal vanishing points.

Optionally, the performing orthogonalization correction on the three vanishing points included in the first video picture to obtain three mutually orthogonal vanishing points includes:

acquiring a direction vector of each of the three vanishing points on a Gaussian sphere;

decomposing a direction matrix to obtain a first correction matrix and a second correction matrix, wherein the direction matrix is obtained based on the direction vector of each vanishing point; and

acquiring, according to the first correction matrix and the second correction matrix, the three mutually orthogonal vanishing points included in the first video picture.

In another aspect, an embodiment of the present application provides an apparatus for vanishing point estimation, the apparatus including:

an acquisition module, configured to acquire a first intersection set, wherein the first intersection set includes the intersection points of the straight lines on which any two line segments in the first video picture lie;

a selection module, configured to select, from the first intersection set, intersection points whose positional relationship with a first vanishing point meets a preset condition, to obtain a second intersection set, wherein the first vanishing point is a vanishing point estimated according to a second video picture, and the second video picture is the video picture of the frame preceding the first video picture; and

an estimation module, configured to estimate a vanishing point of the first video picture according to the second intersection set.

Optionally, the selection module is configured to:

map each intersection point included in the first intersection set onto a Gaussian sphere to obtain a first mapping point of each intersection point on the Gaussian sphere;

determine a spherical cap on the Gaussian sphere by taking a direction vector of the first vanishing point on the Gaussian sphere as a cone axis and a first angle as a cone angle; and

form the second intersection set from the intersection points corresponding to the first mapping points located in the spherical cap.

Optionally, the estimation module is configured to:

map each intersection point in the second intersection set onto a tangent plane of a Manhattan sphere to obtain a second mapping point corresponding to each intersection point;

cluster the second mapping points corresponding to the intersection points to obtain clustered mapping points; and

estimate a vanishing point of the first video picture according to the clustered mapping points.

Optionally, the estimation module is configured to:

map, according to the first vanishing point, the second vanishing point, and the third vanishing point, each intersection point included in the second intersection set onto the Manhattan sphere to obtain a third mapping point corresponding to each intersection point, wherein the second vanishing point and the third vanishing point are the other two vanishing points estimated according to the second video picture;

determine, according to the third mapping point corresponding to each intersection point, the direction vector of each intersection point on the Manhattan sphere; and

map, according to the direction vector of each intersection point, each intersection point onto a tangent plane of the Manhattan sphere to obtain the second mapping point corresponding to each intersection point.

Optionally, the estimation module is configured to:

map the clustered mapping points onto the Manhattan sphere to obtain third mapping points of the clustered mapping points; and

map the third mapping points of the clustered mapping points onto the Gaussian sphere to obtain a vanishing point of the first video picture.

Optionally, the first video picture includes three vanishing points, and

the device further includes:

a correction module, configured to perform orthogonalization correction on the three vanishing points included in the first video picture to obtain three mutually orthogonal vanishing points.

Optionally, the correction module is configured to:

acquire a direction vector of each of the three vanishing points on a Gaussian sphere;

decompose a direction matrix to obtain a first correction matrix and a second correction matrix, wherein the direction matrix is obtained based on the direction vector of each vanishing point; and

acquire, according to the first correction matrix and the second correction matrix, the three mutually orthogonal vanishing points included in the first video picture.

In another aspect, the present application provides an electronic device including a processor and a memory, which may be connected through a bus system. The memory is configured to store programs, instructions, or code, and the processor is configured to execute the programs, instructions, or code in the memory to implement the above method.

In another aspect, the present application provides a computer program product comprising a computer program stored in a computer readable storage medium and loaded by a processor to implement the above method.

In another aspect, the present application provides a non-transitory computer-readable storage medium storing a computer program that is loaded by a processor to perform the above method.

The technical scheme provided by the embodiment of the application can have the following beneficial effects:

A second intersection set is obtained by selecting, from the first intersection set, the intersection points whose positional relationship with the first vanishing point meets a preset condition, where the first vanishing point is a vanishing point estimated according to a second video picture, and the second video picture is the video picture of the frame preceding the first video picture. Because the number of intersection points in the second intersection set is greatly reduced, estimating the vanishing point of the first video picture according to the second intersection set reduces the amount of computation.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.

Fig. 1 is a schematic diagram of a network architecture provided in an embodiment of the present application;

fig. 2 is a flowchart of a method of vanishing point estimation provided by an embodiment of the present application;

fig. 3 is a flowchart of another vanishing point estimation method provided by an embodiment of the present application;

fig. 4 is a flowchart of another vanishing point estimation method provided by an embodiment of the present application;

fig. 5 is a schematic diagram of a Gaussian sphere and a spherical cap provided by an embodiment of the present application;

fig. 6 is a schematic structural diagram of an apparatus for vanishing point estimation provided by an embodiment of the present application;

fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.

The above drawings show specific embodiments of the present application, which are described in more detail below. These drawings and the written description are not intended to limit the scope of the inventive concept in any manner, but rather to illustrate the inventive concept to those skilled in the art by reference to specific embodiments.

Detailed Description

Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with certain aspects of the present application, as detailed in the appended claims.

Referring to fig. 1, an embodiment of the present application provides a network architecture that may include a camera. The camera may be deployed indoors to shoot a video, and the video includes successive frames of video pictures.

While the camera shoots the video, the camera can estimate the vanishing points included in each frame of video picture in the video. Alternatively,

the network architecture may further include a terminal or a server, where a network connection is established between the terminal and the camera, or between the server and the camera. The terminal can receive the video sent by the camera and estimate the vanishing points included in each frame of video picture in the video. Alternatively, the server can receive the video sent by the camera and estimate the vanishing points included in each frame of video picture in the video.

How the camera, terminal, or server estimates the vanishing points in the video pictures is described in detail in the following embodiments and is not detailed here. After the camera or the terminal estimates the vanishing points in a video picture, the vanishing points can be processed to achieve image rectification of the video picture, thereby improving its quality.

Optionally, the terminal may be a computer or a tablet computer.

Referring to fig. 2, an embodiment of the present application provides a method of vanishing point estimation, which may be applied to the network architecture shown in fig. 1 and includes:

step 201: and acquiring a first intersection set, wherein the first intersection set comprises the intersection of straight lines where any two line segments in the first video picture are located.

Step 202: and selecting an intersection point of which the position relation with the first vanishing point meets a preset condition from the first intersection point set to obtain a second intersection point set, wherein the first vanishing point is a vanishing point estimated according to a second video picture, and the second video picture is a previous frame video picture of the first video picture.

Step 203: and estimating a vanishing point of the first video picture according to the second intersection point set.

In this embodiment of the application, intersection points whose positional relationship with a first vanishing point meets a preset condition are selected from the first intersection set to obtain a second intersection set, where the first vanishing point is a vanishing point estimated according to a second video picture and the second video picture is the video picture of the frame preceding the first video picture. The number of intersection points in the second intersection set is therefore greatly reduced, and estimating the vanishing point of the first video picture according to the second intersection set reduces the amount of computation.

Referring to fig. 3, an embodiment of the present application provides a vanishing point estimation method, which may be applied to the network architecture shown in fig. 1; the method may be executed by a camera, a terminal, or a server, hereinafter referred to as the device. When the device acquires a video, the method is used to estimate the vanishing points in the first frame of video picture of the video. The method includes the following steps:

step 301: and acquiring a third intersection set, wherein the third intersection set comprises N intersections, and the N intersections comprise intersections of straight lines where any two line segments in the first frame of video picture of the video are located.

In this step, the first frame of video picture of the video is processed by a line segment detection algorithm to obtain, for each line segment in the picture, the positions of its first endpoint and second endpoint in the image coordinate system of the picture. The equation of the straight line on which each line segment lies is obtained from the positions of the first endpoint and the second endpoint of that line segment. The intersection point of the straight lines on which any two line segments lie is then obtained from the two straight-line equations, yielding the third intersection set.

Optionally, the line segment detection algorithm may be the Line Segment Detector (LSD) algorithm.

For any line segment in the first frame of video picture, the position of its first endpoint includes the abscissa and ordinate of the first endpoint in the image coordinate system, and the position of its second endpoint includes the abscissa and ordinate of the second endpoint. The straight-line equation of the line on which the segment lies is therefore obtained from the abscissa and ordinate of the first endpoint and the abscissa and ordinate of the second endpoint.

Optionally, before the straight-line equations are obtained, the length of each line segment is computed from the positions of its first and second endpoints. Line segments whose length exceeds a length threshold are selected from the line segments included in the first frame of video picture, and the straight-line equation is obtained only for each selected line segment.
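As an illustration of this step, both the straight line through two endpoints and the intersection of two straight lines can be computed as cross products in homogeneous coordinates. A minimal Python sketch, assuming segments are given as pairs of (x, y) endpoints; the function names and the 20-pixel length threshold are illustrative, not from the patent:

```python
import numpy as np

def line_through(p1, p2):
    """Coefficients (a, b, c) of the line a*x + b*y + c = 0 through two
    endpoints, as the cross product of their homogeneous coordinates."""
    return np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])

def first_intersections(segments, min_length=20.0):
    """Intersections of the supporting lines of every pair of segments;
    segments shorter than min_length pixels are discarded first."""
    kept = [s for s in segments
            if np.hypot(s[1][0] - s[0][0], s[1][1] - s[0][1]) > min_length]
    lines = [line_through(p1, p2) for p1, p2 in kept]
    points = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            p = np.cross(lines[i], lines[j])   # homogeneous intersection
            if abs(p[2]) > 1e-12:              # skip (near-)parallel lines
                points.append((p[0] / p[2], p[1] / p[2]))
    return points
```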

Step 302: mapping each intersection point included in the third intersection set onto the Gaussian sphere to obtain a first mapping point of each intersection point on the Gaussian sphere.

Before this step is executed, a Gaussian sphere is generated. The radius of the Gaussian sphere is one unit length, and the coordinates of any point on the sphere include a longitude and a latitude. The sphere is divided at equal intervals along its longitude direction and at equal intervals along its latitude direction, so that a plurality of grids are formed on the Gaussian sphere.

Each grid on the Gaussian sphere occupies a longitude interval of equal length and a latitude interval of equal length.

After the grids are divided on the Gaussian sphere, an initial score is set for each grid. The initial score of each grid can be a number such as 0, 1, or 2.
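A minimal sketch of the grid bookkeeping just described; the grid resolution of 360 × 180 cells and the initial score of 0 are assumed values for illustration:

```python
import numpy as np

N_LON, N_LAT = 360, 180          # assumed grid resolution
INITIAL_SCORE = 0                # one of the options named in the text

# One score per grid cell on the Gaussian sphere.
grid_scores = np.full((N_LON, N_LAT), INITIAL_SCORE, dtype=float)

def grid_index(lon, lat):
    """Cell containing the point at (longitude, latitude), in radians,
    with longitude in [-pi, pi) and latitude in [-pi/2, pi/2]."""
    a = int((lon + np.pi) / (2.0 * np.pi) * N_LON) % N_LON
    b = min(int((lat + np.pi / 2.0) / np.pi * N_LAT), N_LAT - 1)
    return a, b
```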

In this step, for any intersection point in the third intersection set, the three-dimensional coordinates of the first mapping point of that intersection point on the Gaussian sphere are obtained through the following first formula, based on the position of the intersection point and the intrinsic parameters of the camera.

The first formula is:

$$(X,\ Y,\ Z) \;=\; \frac{(x - c_x,\ \ y - c_y,\ \ f)}{\sqrt{(x - c_x)^2 + (y - c_y)^2 + f^2}}$$

In the first formula, X, Y, and Z are the three-dimensional coordinates of the first mapping point, and x and y are the abscissa and ordinate included in the position of the intersection point. $c_x$, $c_y$, and f are intrinsic parameters of the camera: $c_x$ and $c_y$ are the abscissa and ordinate of the camera's central pixel, and f is the focal length of the camera.

The three-dimensional coordinates of the first mapping point are then converted, through a second formula, into the coordinates of the first mapping point on the Gaussian sphere, which include the longitude and the latitude of the first mapping point.

The second formula is:

$$\lambda = \operatorname{atan2}(Y,\ X), \qquad \phi = \arcsin(Z)$$

In the second formula, λ and φ are the longitude and latitude of the first mapping point, respectively.

In this step, the coordinates of the first mapping point of each intersection point on the Gaussian sphere include a longitude and a latitude, so the grid in which each first mapping point is located can be determined from the coordinates of that first mapping point.
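A sketch of the first and second formulas as reconstructed above; the reconstruction assumes the standard pinhole back-projection onto the unit sphere, consistent with the stated variable definitions:

```python
import numpy as np

def to_gaussian_sphere(x, y, cx, cy, f):
    """Assumed first formula: back-project the image point (x, y) through
    the intrinsics (cx, cy, f) and normalize onto the unit Gaussian sphere."""
    v = np.array([x - cx, y - cy, f], dtype=float)
    X, Y, Z = v / np.linalg.norm(v)
    # Assumed second formula: longitude and latitude of the mapping point.
    lon = np.arctan2(Y, X)
    lat = np.arcsin(Z)
    return lon, lat
```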

Step 303: obtaining the score of each grid on the Gaussian sphere according to the first mapping point of each intersection point on the Gaussian sphere.

In this step, for the intersection point of the straight lines on which any two line segments in the third intersection set lie, the grid in which the corresponding first mapping point is located is determined from the coordinates of the first mapping point, and the current score of that grid is obtained. A first score of the first mapping point is calculated from the length of each of the two line segments, the included angle between the straight lines on which they lie, and the current score of the grid, and the current score of the grid is then updated to the first score.

A higher grid score indicates that more mapping points fall in that grid.

The above operations are performed for each intersection point in the third intersection set, yielding the score of each grid on the Gaussian sphere.

Optionally, the included angle between the straight lines on which the two line segments lie is calculated from the two straight-line equations, and the first score of the first mapping point is calculated from the lengths of the two line segments, the included angle, and the current score of the grid through the following third formula.

the third formula is: s (λ, Φ) ═ S (a, b) + l1*l2*sin2θ

In the third formula, S (λ, φ) is the first score of the first mapping point, S (a, b) is the current score of the grid, (a, b) is the coordinates of the grid, l1And l2The lengths of the two line segments are respectively, and theta is an included angle between straight lines where the two line segments are located; λ, φ are the longitude and latitude of the first mapping point, respectively.
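In code, the voting of the third formula is one accumulation per pair of line segments (the grid resolution constants repeat the assumed values from the earlier sketch):

```python
import numpy as np

N_LON, N_LAT = 360, 180   # same assumed resolution as in the grid sketch

def vote(grid_scores, lon, lat, l1, l2, theta):
    """Third formula: add the line-pair contribution l1 * l2 * sin(theta)**2
    to the current score S(a, b) of the grid containing the mapping point
    at (lon, lat), and return the updated score S(lon, lat)."""
    a = int((lon + np.pi) / (2.0 * np.pi) * N_LON) % N_LON
    b = min(int((lat + np.pi / 2.0) / np.pi * N_LAT), N_LAT - 1)
    grid_scores[a, b] += l1 * l2 * np.sin(theta) ** 2
    return grid_scores[a, b]
```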

Step 304: for any intersection point in the third intersection set, acquiring K vanishing point combinations corresponding to the intersection point according to the first mapping point of the intersection point, wherein each vanishing point combination includes three direction vectors on the Gaussian sphere, and K is an integer greater than 1.

The third intersection set includes N intersection points, so N × K vanishing point combinations are obtained in this step.

In this step, the K vanishing point combinations corresponding to the intersection point can be obtained through the following operations 3041 to 3043:

3041: acquiring a first direction vector of the intersection point on the Gaussian sphere according to the coordinates of the first mapping point of the intersection point, wherein the starting point of the first direction vector is the center of the Gaussian sphere and its end point is the first mapping point of the intersection point.

3042: determining the plane that passes through the center of the Gaussian sphere and is perpendicular to the first direction vector, obtaining the circle in which the plane intersects the Gaussian sphere, sampling K sampling points at equal intervals on the circle, and acquiring a second direction vector of each of the K sampling points.

For any sampling point, the starting point of its second direction vector is the center of the Gaussian sphere, and the end point is the sampling point itself.

3043: for any one of the K sampling points, acquiring a third direction vector from the first direction vector and the second direction vector of the sampling point, and forming the vanishing point combination corresponding to the sampling point from the first direction vector, the second direction vector of the sampling point, and the third direction vector.

The third direction vector is $v_3 = v_1 \times v_2$, where $v_1$ is the first direction vector and $v_2$ is the second direction vector. Operation 3043 is repeated for every sampling point to obtain the K vanishing point combinations corresponding to the intersection point.

For any vanishing point combination, its first direction vector, second direction vector, and third direction vector correspond to three spherical points on the Gaussian sphere: a first spherical point corresponding to the first direction vector, a second spherical point corresponding to the second direction vector, and a third spherical point corresponding to the third direction vector. The first spherical point is the first mapping point of the intersection point.

Operations 3041 to 3043 above are performed for each intersection point in the third intersection set, so that K vanishing point combinations are obtained for each intersection point, giving N × K vanishing point combinations in total.
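A sketch of operations 3041 to 3043; the orthonormal-basis construction used to parameterize the circle is an implementation choice, not something the text specifies:

```python
import numpy as np

def vanishing_point_combinations(v1, K):
    """Operations 3041-3043: sample K points at equal intervals on the
    great circle cut by the plane through the sphere centre perpendicular
    to the first direction vector v1, and form K (v1, v2, v3) combinations
    with v3 = v1 x v2."""
    # Build an orthonormal basis (e1, e2) of the plane perpendicular to v1.
    helper = np.array([1.0, 0.0, 0.0]) if abs(v1[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(v1, helper)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(v1, e1)
    combinations = []
    for t in np.linspace(0.0, 2.0 * np.pi, K, endpoint=False):
        v2 = np.cos(t) * e1 + np.sin(t) * e2   # second direction vector
        combinations.append((v1, v2, np.cross(v1, v2)))
    return combinations
```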

Step 305: for any vanishing point combination among the N × K vanishing point combinations, obtaining the score of that vanishing point combination.

A vanishing point combination includes the first direction vector of a first spherical point, the second direction vector of a second spherical point, and the third direction vector of a third spherical point. The coordinates on the Gaussian sphere of the first spherical point, the second spherical point, and the third spherical point of the combination are acquired from its first, second, and third direction vectors, respectively. From the coordinates of the three spherical points, the score of the first grid in which the first spherical point is located, the score of the second grid in which the second spherical point is located, and the score of the third grid in which the third spherical point is located are obtained. The score of the vanishing point combination is then obtained from the scores of the first grid, the second grid, and the third grid.

Optionally, the score of the vanishing point combination is obtained from the scores of the first grid, the second grid, and the third grid through the following fourth formula; that is, the three grid scores are accumulated to obtain the score of the vanishing point combination.

The fourth formula is:

$$S_H = S_1 + S_2 + S_3$$

In the fourth formula, $S_H$ is the score of the vanishing point combination, and $S_1$, $S_2$, and $S_3$ are the scores of the first grid, the second grid, and the third grid, respectively.

Step 306: estimating three vanishing points in the first frame of video picture according to the score of each of the N × K vanishing point combinations.

In this step, the vanishing point combination with the maximum score is selected from the N × K vanishing point combinations; it includes a first spherical point corresponding to its first direction vector, a second spherical point corresponding to its second direction vector, and a third spherical point corresponding to its third direction vector. The vanishing point corresponding to the first spherical point, the vanishing point corresponding to the second spherical point, and the vanishing point corresponding to the third spherical point in the first frame of video picture are determined, yielding the three vanishing points of the first frame of video picture.

Optionally, the three-dimensional coordinates of the first spherical point can be obtained from its longitude and latitude through the second formula, and the position in the first frame of video picture of the vanishing point corresponding to the first spherical point can then be determined through the first formula, yielding the first vanishing point of the first frame of video picture.

Likewise, the three-dimensional coordinates of the second spherical point can be obtained from its longitude and latitude through the second formula, and the position in the first frame of video picture of the vanishing point corresponding to the second spherical point can then be determined through the first formula, yielding the second vanishing point of the first frame of video picture.

Similarly, the three-dimensional coordinates of the third spherical point can be obtained from its longitude and latitude through the second formula, and the position in the first frame of video picture of the vanishing point corresponding to the third spherical point can then be determined through the first formula, yielding the third vanishing point of the first frame of video picture.

In this embodiment of the application, the intersection points of the straight lines on which any two line segments in the first frame of video picture of the video lie are acquired, yielding a third intersection set including N intersection points. The scores of N × K vanishing point combinations are obtained from the third intersection set, and three vanishing points in the first frame of video picture are estimated according to the score of each of the N × K vanishing point combinations. The three vanishing points included in the first frame of video picture can thus be estimated from the first frame of video picture alone.

Referring to fig. 4, an embodiment of the present application provides a vanishing point estimation method, which may be applied to the network architecture shown in fig. 1; the method may be executed by a camera, a terminal, or a server, hereinafter referred to as the device. When the device acquires a video, the method is used to estimate the vanishing points in any frame of video picture after the first frame of the video. For convenience of description, any frame of video picture after the first frame is referred to as the first video picture; that is, the first video picture is the second, third, fourth, … frame of video picture of the video. The method includes the following steps:

step 401: and acquiring a first intersection set, wherein the first intersection set comprises M intersections, and the M intersections are intersections of straight lines where any two line segments in the first video picture are located.

In this step, the first video picture is processed by a line segment detection algorithm to obtain, for each line segment in the first video picture, the positions of its first endpoint and second endpoint in the image coordinate system of the first video picture. The equation of the straight line on which each line segment lies is obtained from the positions of the first endpoint and the second endpoint of that line segment. The intersection point of the straight lines on which any two line segments lie is then obtained from the two straight-line equations, yielding the first intersection set.

For any line segment in the first video picture, the position of its first endpoint includes the abscissa and ordinate of the first endpoint in the image coordinate system, and the position of its second endpoint includes the abscissa and ordinate of the second endpoint. The straight-line equation of the line on which the segment lies is therefore obtained from the abscissa and ordinate of the first endpoint and the abscissa and ordinate of the second endpoint.

Optionally, before the straight-line equations are obtained, the length of each line segment is computed from the positions of its first and second endpoints. Line segments whose length exceeds a length threshold are selected from the line segments included in the first video picture, and the straight-line equation is obtained only for each selected line segment.

Step 402: selecting, from the first intersection set, intersection points whose positional relationship with a first vanishing point meets a preset condition, to obtain a second intersection set, wherein the first vanishing point is a vanishing point estimated according to a second video picture, and the second video picture is the video picture of the frame preceding the first video picture.

Optionally, the second video picture is the video picture of the frame immediately preceding the first video picture. For example, if the first video picture is the second frame of video picture of the video, the second video picture is the first frame; if the first video picture is the third frame, the second video picture is the second frame; and if the first video picture is the fourth frame, the second video picture is the third frame.

When the first video picture is the second frame of video picture of the video, so that the second video picture is the first frame, the three vanishing points in the second video picture can be estimated by the method of the embodiment shown in fig. 3.

In this step, the second intersection set can be obtained through the following operations 4021 to 4023:

4021: mapping each intersection point included in the first intersection set onto the Gaussian sphere to obtain a first mapping point of each intersection point on the Gaussian sphere.

In this step, the coordinates of the first mapping point of each intersection point on the Gaussian sphere include a longitude and a latitude. Each intersection point included in the first intersection set can be mapped onto the Gaussian sphere through the first formula and the second formula described above; the detailed mapping process is not repeated here.

4022: determining a spherical cap on the Gaussian sphere by taking the direction vector of a first vanishing point on the Gaussian sphere as the cone axis and a first angle as the cone angle, wherein the first vanishing point is a vanishing point estimated according to a second video picture, and the second video picture is the video picture of the frame preceding the first video picture.

The first video picture is the i-th frame of video picture of the video, where i = 2, 3, 4, …, and the second video picture is the (i−1)-th frame of video picture of the video.

In this step, the spherical cap can be determined through the following first and second steps:

First step: selecting one vanishing point from the three vanishing points included in the second video picture as the first vanishing point, and mapping the first vanishing point onto the Gaussian sphere to obtain a first mapping point of the first vanishing point on the Gaussian sphere.

Second step: determining a spherical cap on the Gaussian sphere by taking the direction vector of the first vanishing point on the Gaussian sphere as the cone axis and the first angle as the cone angle.

In the second step, the direction vector of the first vanishing point is acquired; its starting point is the center of the Gaussian sphere and its end point is the first mapping point of the first vanishing point. Referring to fig. 5, a circle is determined on the Gaussian sphere by taking the direction vector of the first vanishing point as the cone axis and the first angle as the cone angle. The circle divides the Gaussian sphere into two spherical caps, and the cap with the smaller area is selected.

4023: forming the second intersection set from the intersection points corresponding to the first mapping points located in the spherical cap.
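Operations 4021 to 4023 reduce to an angular test against the cone axis. The sketch below treats the first angle as the half-angle between the cone axis and the cone surface; the text leaves this convention open:

```python
import numpy as np

def second_intersection_set(points, directions, v_first, first_angle):
    """Operations 4021-4023: keep the intersection points whose first
    mapping points lie in the spherical cap around the first vanishing
    point, i.e. whose unit direction vectors are within first_angle of
    the cap's cone axis v_first (also a unit vector)."""
    kept = []
    for point, d in zip(points, directions):
        angle = np.arccos(np.clip(np.dot(d, v_first), -1.0, 1.0))
        if angle <= first_angle:
            kept.append(point)
    return kept
```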

Step 403: mapping each intersection point in the second intersection set onto a tangent plane of the Manhattan sphere to obtain a second mapping point corresponding to each intersection point included in the second intersection set.

Before this step is performed, a Manhattan sphere is established. The second mapping point corresponding to each intersection point included in the second intersection set is then acquired through the following operations 4031 and 4032:

4031: mapping, according to the first vanishing point, the second vanishing point, and the third vanishing point, each intersection point included in the second intersection set onto the Manhattan sphere to obtain a third mapping point corresponding to each intersection point, wherein the second vanishing point and the third vanishing point are the other two vanishing points estimated according to the second video picture.

In this step, the first vanishing point, the second vanishing point, and the third vanishing point are mapped onto the Gaussian sphere to obtain the first mapping point of each of them, from which the direction vector of the first vanishing point, the direction vector of the second vanishing point, and the direction vector of the third vanishing point are acquired. According to these three direction vectors, the first mapping point of each intersection point included in the second intersection set is mapped onto the Manhattan sphere, yielding the third mapping point corresponding to each intersection point.

For the first mapping point of any intersection point included in the second intersection set, the corresponding third mapping point is obtained through the following fifth formula.

The fifth formula is:

$$v'_k = R^{T} v_k, \qquad R = [\,v_1 \ \ v_2 \ \ v_3\,]$$

In the fifth formula, $v_k$ is the coordinate vector of the first mapping point of the intersection point; $v_1$, $v_2$, and $v_3$ are the direction vectors of the first, second, and third vanishing points, respectively; the matrix $R$ is obtained by combining these three direction vectors; $R^T$ is the transpose of $R$; and $v'_k$ is the third mapping point corresponding to the intersection point.

4032: mapping each intersection point onto a tangent plane of the Manhattan sphere according to the third mapping point corresponding to each intersection point, obtaining the second mapping point corresponding to each intersection point.

For the third mapping point corresponding to any intersection point, that third mapping point is mapped onto a tangent plane of the Manhattan sphere through the following sixth formula, obtaining the second mapping point corresponding to the intersection point.

The sixth formula is:

$$m'_k = \left( \frac{v'_{k,x}}{v'_{k,z}},\ \frac{v'_{k,y}}{v'_{k,z}} \right)$$

In the sixth formula, $m'_k$ is the coordinate of the second mapping point corresponding to the intersection point, which is a two-dimensional coordinate, and $v'_{k,x}$, $v'_{k,y}$, and $v'_{k,z}$ are the x-, y-, and z-coordinates of the third mapping point corresponding to the intersection point.

Step 404: clustering the second mapping points corresponding to the intersection points to obtain a clustered mapping point.

In this step, the second mapping points corresponding to the intersection points can be clustered through the following seventh formula to obtain the clustered mapping point.

The seventh formula is:

In the seventh formula, $s'_j$ is the clustered mapping point, $e$ is the natural constant, $c$ is a weighting constant, $m'_j$ is the coordinate of the j-th second mapping point, which is a two-dimensional coordinate comprising an abscissa $m'_{j,x}$ and an ordinate $m'_{j,y}$, and $T$ is the number of second mapping points.

Step 405: estimating a first vanishing point of the first video picture according to the clustered mapping point.

In this step, the clustered mapping point is mapped onto the Manhattan sphere to obtain the third mapping point of the clustered mapping point; the third mapping point is then mapped onto the Gaussian sphere to obtain the first mapping point of the clustered mapping point, and the point corresponding to this first mapping point in the first video picture is the first vanishing point of the first video picture.

The clustered mapping point is a point on a tangent plane of the Manhattan sphere, so it can be inversely mapped onto the Manhattan sphere through the sixth formula to obtain its third mapping point. The third mapping point can be inversely mapped onto the Gaussian sphere through the fifth formula to obtain the first mapping point of the clustered mapping point, and the corresponding point in the first video picture is then obtained through the second formula and the first formula.

The first video picture includes three vanishing points, as does the second video picture; besides the first vanishing point, the second video picture includes a second vanishing point and a third vanishing point. For the second vanishing point of the second video picture, a second vanishing point of the first video picture is estimated through steps 402 to 405 above; for the third vanishing point of the second video picture, a third vanishing point of the first video picture is likewise estimated through steps 402 to 405.

Step 406: performing orthogonalization correction on the three vanishing points included in the first video picture to obtain three mutually orthogonal vanishing points.

In this step, the orthogonalization correction can be performed through the following operations 4061 to 4063:

4061: acquiring the direction vector, on the Gaussian sphere, of each of the three vanishing points included in the first video picture.

For the direction vector of any vanishing point, the starting point of the direction vector is the center of the Gaussian sphere and the end point is the first mapping point of that vanishing point on the Gaussian sphere. Suppose the direction vectors of the three vanishing points are denoted $v_1$, $v_2$, and $v_3$, respectively.

4062: decomposing a direction matrix to obtain a first correction matrix and a second correction matrix, wherein the direction matrix is obtained based on the direction vector of each vanishing point.

Optionally, the direction vectors of the three vanishing points are assembled into the direction matrix, which can be expressed as $V = [\,v_1 \ \ v_2 \ \ v_3\,]$.

Optionally, singular value decomposition (SVD) may be used to decompose the direction matrix into a left singular matrix and a right singular matrix; the left singular matrix and the right singular matrix are the first correction matrix and the second correction matrix, respectively.

4063: acquiring, according to the first correction matrix and the second correction matrix, the three mutually orthogonal vanishing points included in the first video picture.

The three mutually orthogonal vanishing points included in the first video picture can be obtained through the following eighth formula.

The eighth formula is:

In the eighth formula, $\lambda_1$, $\lambda_2$, and $\lambda_3$ are weight coefficients, and $v'_1$, $v'_2$, and $v'_3$ are the three mutually orthogonal vanishing points.

In this embodiment of the application, a first intersection set is acquired, which includes M intersection points, the intersection points of the straight lines on which any two line segments in the first video picture lie. A spherical cap is determined on the Gaussian sphere based on the direction vector of the first vanishing point, and the intersection points whose first mapping points lie in the cap form the second intersection set. Compared with the first intersection set, the number of intersection points included in the second intersection set is greatly reduced, so the amount and the complexity of computation can be reduced when the vanishing points included in the first video picture are estimated based on the second intersection set.

The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.

Referring to fig. 6, an embodiment of the present application provides an apparatus 500 for vanishing point estimation, the apparatus 500 including:

an obtaining module 501, configured to acquire a first intersection set, wherein the first intersection set includes the intersection points of the straight lines on which any two line segments in the first video picture lie;

a selecting module 502, configured to select, from the first intersection set, intersection points whose positional relationship with a first vanishing point meets a preset condition, to obtain a second intersection set, wherein the first vanishing point is a vanishing point estimated according to a second video picture, and the second video picture is the video picture of the frame preceding the first video picture; and

an estimating module 503, configured to estimate a vanishing point of the first video picture according to the second intersection set.

Optionally, the selecting module 502 is configured to:

map each intersection point included in the first intersection set onto a Gaussian sphere to obtain a first mapping point of each intersection point on the Gaussian sphere;

determine a spherical cap on the Gaussian sphere by taking a direction vector of the first vanishing point on the Gaussian sphere as a cone axis and a first angle as a cone angle; and

form the second intersection set from the intersection points corresponding to the first mapping points located in the spherical cap.

Optionally, the estimating module 503 is configured to:

mapping each intersection point in the second intersection point set to a tangent plane of a Manhattan spherical surface to obtain a second mapping point corresponding to each intersection point;

clustering the second mapping points corresponding to each intersection point to obtain clustered mapping points;

and estimating a vanishing point of the first video picture according to the cluster mapping points.
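The embodiment does not name a clustering algorithm for the second mapping points. As an illustration only, a simple greedy radius-based grouping in the tangent plane could look like the sketch below, where the radius parameter is an assumption; each returned center plays the role of a cluster mapping point.

```python
import numpy as np

def cluster_mapped_points(points_2d, radius):
    # points_2d: (N, 2) second mapping points in the tangent plane.
    remaining = list(range(len(points_2d)))
    centers = []
    while remaining:
        seed = points_2d[remaining[0]]
        members = [i for i in remaining
                   if np.linalg.norm(points_2d[i] - seed) <= radius]
        # The mean of each group serves as a cluster mapping point.
        centers.append(points_2d[members].mean(axis=0))
        remaining = [i for i in remaining if i not in members]
    return np.asarray(centers)
```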

Optionally, the estimating module 503 is configured to:

according to the first vanishing point, the second vanishing point and the third vanishing point, mapping each intersection point included in the second intersection point set to a Manhattan system spherical surface to obtain a third mapping point corresponding to each intersection point, wherein the second vanishing point and the third vanishing point are the other two vanishing points estimated according to the second video picture;

according to the third mapping point corresponding to each intersection point, respectively determining the direction vector of each intersection point in the Manhattan system spherical surface;

and mapping each intersection point to a tangent plane of the Manhattan system spherical surface according to the direction vector of each intersection point to obtain a second mapping point corresponding to each intersection point.
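One common way to realize this mapping is a gnomonic projection onto the plane tangent to the sphere at the direction vector of the first vanishing point; the embodiment only states that the intersections are mapped to a tangent plane, so the choice of tangent point and projection in the sketch below are assumptions.

```python
import numpy as np

def map_to_tangent_plane(dirs, v1):
    # dirs: (N, 3) unit direction vectors of the intersections on the
    # Manhattan-system sphere; v1: unit vector at the tangent point.
    # Build an orthonormal basis (e1, e2) of the tangent plane at v1.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(v1 @ helper) > 0.9:  # avoid a near-parallel helper vector
        helper = np.array([0.0, 1.0, 0.0])
    e1 = np.cross(v1, helper)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(v1, e1)
    # Gnomonic projection: scale each ray so it meets the plane v1 . x = 1.
    # Directions nearly orthogonal to v1 project toward infinity; they are
    # assumed to have been removed earlier by the arc-surface selection.
    proj = dirs / (dirs @ v1)[:, None]
    return np.column_stack([proj @ e1, proj @ e2])
```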

Optionally, the estimating module 503 is configured to:

mapping the cluster mapping points to the Manhattan system spherical surface to obtain third mapping points of the cluster mapping points;

and mapping a third mapping point of the cluster mapping points to the Gaussian spherical surface to obtain a vanishing point of the first video picture.
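Conversely, a cluster mapping point can be lifted from the tangent plane back to a unit direction vector on the spherical surface; the sketch below inverts the gnomonic projection assumed earlier and must use the same tangent-plane basis.

```python
import numpy as np

def lift_from_tangent_plane(points_2d, v1):
    # Rebuild the same tangent-plane basis used by map_to_tangent_plane.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(v1 @ helper) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    e1 = np.cross(v1, helper)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(v1, e1)
    # A plane point (a, b) corresponds to v1 + a * e1 + b * e2 in 3D;
    # normalizing returns it to the unit sphere.
    dirs = v1 + points_2d[:, :1] * e1 + points_2d[:, 1:] * e2
    return dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
```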

Optionally, the first video picture includes three vanishing points, and

the apparatus 500 further comprises:

and the correction module is used for carrying out orthogonalization correction on the three vanishing points included in the first video picture to obtain the three vanishing points with orthogonality.

Optionally, the correction module is configured to:

acquiring a direction vector, in the Gaussian spherical surface, of each of the three vanishing points;

decomposing a direction matrix to obtain a first correction matrix and a second correction matrix, wherein the direction matrix is obtained based on the direction vector of each vanishing point;

and acquiring three vanishing points with orthogonality included in the first video picture according to the first correction matrix and the second correction matrix.

In the embodiment of the application, the selecting module selects, from the first intersection set, intersection points whose positional relationship with a first vanishing point satisfies a preset condition to obtain a second intersection set, where the first vanishing point is a vanishing point estimated according to a second video picture, and the second video picture is the video picture of the frame preceding the first video picture. Because the number of intersection points in the second intersection set is greatly reduced, the estimating module estimates the vanishing point of the first video picture according to the second intersection set with a reduced amount of computation.

With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.

Fig. 7 shows a block diagram of an electronic device 600 according to an exemplary embodiment of the present invention. The electronic device 600 may be the terminal or the server mentioned in the above embodiments, and may be a portable mobile terminal, such as a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The electronic device 600 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.

In general, the electronic device 600 includes: a processor 601 and a memory 602.

The processor 601 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 601 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 601 may also include a main processor and a coprocessor, where the main processor, also called a CPU (Central Processing Unit), is a processor for processing data in the awake state, and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 601 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.

The memory 602 may include one or more computer-readable storage media, which may be non-transitory. The memory 602 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 602 is used to store at least one instruction for execution by processor 601 to implement the method of vanishing point estimation provided by the method embodiments herein.

In some embodiments, the electronic device 600 may further optionally include: a peripheral interface 603 and at least one peripheral. The processor 601, memory 602, and peripheral interface 603 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 603 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 604, a touch screen display 605, a camera 606, an audio circuit 607, a positioning component 608, and a power supply 609.

The peripheral interface 603 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 601 and the memory 602. In some embodiments, the processor 601, memory 602, and peripheral interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.

The radio frequency circuit 604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 604 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 604 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 604 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 604 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 604 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.

The display 605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 605 is a touch display screen, the display 605 also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 601 as a control signal for processing. At this point, the display 605 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 605, disposed on the front panel of the electronic device 600; in other embodiments, there may be at least two displays 605, respectively disposed on different surfaces of the electronic device 600 or in a foldable design; in still other embodiments, the display 605 may be a flexible display disposed on a curved or folded surface of the electronic device 600. The display 605 may even be arranged in a non-rectangular irregular pattern, i.e., an irregularly shaped screen. The display 605 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).

The camera assembly 606 is used to capture images or video. Optionally, the camera assembly 606 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the electronic device 600 and the rear camera is disposed on the back of the electronic device. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fusion shooting functions. In some embodiments, the camera assembly 606 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.

Audio circuitry 607 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input them to the processor 601 for processing or to the radio frequency circuit 604 to realize voice communication. For stereo capture or noise reduction purposes, there may be multiple microphones, disposed at different locations of the electronic device 600. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 601 or the radio frequency circuit 604 into sound waves. The speaker may be a traditional diaphragm speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, audio circuitry 607 may also include a headphone jack.

The positioning component 608 is used to locate the current geographic location of the electronic device 600 to implement navigation or LBS (Location Based Service). The positioning component 608 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.

The power supply 609 is used to supply power to the various components in the electronic device 600. The power supply 609 may be an alternating-current supply, a direct-current supply, a disposable battery, or a rechargeable battery. When the power supply 609 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. A wired rechargeable battery is charged through a wired line, and a wireless rechargeable battery is charged through a wireless coil. The rechargeable battery may also support fast-charge technology.

In some embodiments, the electronic device 600 also includes one or more sensors 610. The one or more sensors 610 include, but are not limited to: acceleration sensor 611, gyro sensor 612, pressure sensor 613, fingerprint sensor 614, optical sensor 615, and proximity sensor 616.

The acceleration sensor 611 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the electronic device 600. For example, the acceleration sensor 611 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 601 may control the touch screen display 605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 611. The acceleration sensor 611 may also be used for acquisition of motion data of a game or a user.

The gyro sensor 612 may detect a body direction and a rotation angle of the electronic device 600, and the gyro sensor 612 and the acceleration sensor 611 may cooperate to acquire a 3D motion of the user on the electronic device 600. The processor 601 may implement the following functions according to the data collected by the gyro sensor 612: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.

The pressure sensor 613 may be disposed on a side frame of the electronic device 600 and/or beneath the touch display screen 605. When the pressure sensor 613 is disposed on a side frame of the electronic device 600, it can detect the user's holding signal on the electronic device 600, and the processor 601 performs left-right hand recognition or shortcut operations according to the holding signal collected by the pressure sensor 613. When the pressure sensor 613 is disposed beneath the touch display screen 605, the processor 601 controls the operable controls on the UI according to the user's pressure operation on the touch display screen 605. The operable controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.

The fingerprint sensor 614 is used for collecting a fingerprint of a user, and the processor 601 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 identifies the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 601 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 614 may be disposed on the front, back, or side of the electronic device 600. When a physical button or vendor Logo is provided on the electronic device 600, the fingerprint sensor 614 may be integrated with the physical button or vendor Logo.

The optical sensor 615 is used to collect the ambient light intensity. In one embodiment, processor 601 may control the display brightness of touch display 605 based on the ambient light intensity collected by optical sensor 615. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 605 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 605 is turned down. In another embodiment, the processor 601 may also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.

The proximity sensor 616, also referred to as a distance sensor, is typically disposed on the front panel of the electronic device 600. The proximity sensor 616 is used to capture the distance between the user and the front face of the electronic device 600. In one embodiment, when the proximity sensor 616 detects that the distance between the user and the front face of the electronic device 600 gradually decreases, the processor 601 controls the touch display screen 605 to switch from the bright-screen state to the screen-off state; when the proximity sensor 616 detects that the distance between the user and the front face of the electronic device 600 gradually increases, the processor 601 controls the touch display screen 605 to switch from the screen-off state to the bright-screen state.

Those skilled in the art will appreciate that the configuration shown in fig. 7 does not constitute a limitation of the electronic device 600, and may include more or fewer components than those shown, or combine certain components, or employ a different arrangement of components.

Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.

It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
