Multi-vehicle speed measurement method based on video compression domain
1. A multi-vehicle speed measurement method based on a video compression domain is characterized by comprising the following steps:
step 1, extracting a motion vector MV from a video code stream;
step 2, initializing a camera;
1) analyzing lane lines and lane areas;
2) calculating a homography conversion matrix H for converting the actual distance and the real distance of the pixel;
3) learning the maximum vehicle speed analyzable by the camera;
step 3, preprocessing a motion vector MV;
eliminating motion vectors MV outside the lane area, and processing only macroblocks with non-zero motion vectors MV inside the lane area;
detecting a moving target in a time-space domain:
judging whether a non-zero macroblock in the lane area is a moving target according to the characteristic that vehicle motion is continuous and smooth, and setting a threshold to determine whether the current macroblock under test, MBc, is a macroblock MBreal of a moving vehicle;
step 4, marking a moving target;
dividing vehicles by lane lines:
when one target frame crosses two lanes, dividing the target frame into two parts along the lane line, and marking each target frame as { center, box, linenum, mvs }; center is the center of gravity of the target frame and contains its coordinates; box is the size of the target frame, comprising the coordinates of the upper left corner and the length and width; linenum is the lane number; mvs is the set of non-zero motion vectors MV in the target frame;
merging the target frames:
accumulating the non-zero motion vectors MV in the marked target frame to form a projection MV which is marked as MVpro;
judging the similarity and proximity of the MVpro of each pair of target frames: first judging similarity; if the MVpro are similar, then judging proximity; if the target frames are also close, merging them into the target frame of a single vehicle;
finally merging into a plurality of target frames so as to determine the positions of a plurality of vehicles;
step 5, tracking the moving target;
projecting the target frame obtained in step 4 into an adjacent frame according to its projection MV, and predicting its position in the adjacent frame as the predicted target frame;
marking the predicted target frame as A and the adjacent-frame target frame as B, and calculating their intersection-over-union as IOU = area(A ∩ B) / area(A ∪ B);
if the intersection-over-union is larger than 0, determining that the two frames are the same object, adding the two target frames to the target-frame matching-pair list, and deleting them from the list to be searched, until the search is complete;
step 6, calculating the speed;
calculating the instantaneous speed: using the target-frame matching-pair list obtained in step 5, finding the matched pixel positions of the corresponding vehicle by feature matching;
calculating the feature matching points of the current target frame and the matched target frame, obtaining feature points of the corresponding areas using SIFT, and obtaining by matching the best pair of coordinates (xi, yi), (xi+1, yi+1) among all the feature matching points;
converting the coordinates (x, y) into real-distance coordinates (x', y') through the homography conversion matrix H obtained in step 2, obtaining the time from the frame rate fps, calculating the displacement from the real distance between the two points, and calculating the actual speed by dividing the displacement by the time;
the actual speed is calculated as Vi = sqrt((x'i+1 − x'i)² + (y'i+1 − y'i)²) · fps, where Vi is the instantaneous speed at frame i, i denotes the i-th frame in which the vehicle appears, and 1/fps is the duration of one frame;
set stop-line and calculate average velocity:
according to the vehicle driving direction, setting the stop line at the position 15% of the frame away from the exit edge along the driving direction; when the vehicle's target frame crosses the stop line, stopping the tracking and calculating and recording the average speed of that target frame;
the average velocity is calculated as Vavg = (1/n) · Σ Vi, where n is the number of frames in which the vehicle's instantaneous speed was recorded.
2. The multi-vehicle speed measurement method based on a video compression domain according to claim 1, wherein, when extracting the motion vectors MV from the video code stream, the macroblock size is normalized to 4 × 4: macroblocks other than 4 × 4 are split into n macroblocks of 4 × 4, and each split macroblock keeps the original MV.
3. The video compression domain-based multi-vehicle speed measurement method according to claim 1, wherein the method for analyzing lane lines and lane areas comprises:
after the camera is fixedly installed, it records for a period of time; mean filtering eliminates the influence of foreground regions, yielding a picture of the static background, from which the lane information is obtained;
applying canny edge segmentation to the static-background picture to extract the lane lines;
after edge segmentation, the parts belonging to straight lines are analyzed with the Hough transform to obtain the lanes.
4. The video compression domain-based multi-vehicle speed measurement method according to claim 1, wherein the method for learning the maximum vehicle speed Vmax analyzable by the camera comprises: assuming the vehicle enters the camera view in one frame and exits it in the next frame, the maximum vehicle speed Vmax is calculated from the length of the camera's shooting area and the duration of one frame.
5. The video compression domain-based multi-vehicle speed measurement method according to claim 1, wherein the method for detecting the moving target in the time-space domain in the step 3 comprises the following steps:
step a, spatial domain processing: whether the current macroblock MBc under test is an isolated point is determined from the motion vectors MV in its 8-neighborhood; if more than 5 macroblocks in the neighborhood have non-zero motion vectors MV, the macroblock is considered non-isolated; otherwise it is considered an isolated point and its motion vector MV is set to zero;
step b, time domain processing:
let macroblock MBc be the macroblock to be analyzed and MVc be its motion vector MV; MBref is the macroblock obtained by projecting MBc into the reference frame, and MVref is its motion vector MV;
the macroblock MBc processed in step a is further processed as follows:
back-projecting the macroblock MBc to be analyzed, together with its MVc, onto the reference frame to generate MBref; MBref overlaps at most four blocks of the reference frame; the size of MVref is calculated from the MVs of the overlapped blocks, weighted by overlap area; MVref is then compared with MVc, and if the two are similar, MBc and its MVc are considered to truly reflect a moving vehicle, and MBc is marked as MBreal.
6. The video compression domain-based multi-vehicle speed measurement method according to claim 1, wherein the calculation method for forming the projection MV in the step 4 is as follows:
MVpro = (1/n) · Σ MVi, MVi ∈ mvs
where n denotes the number of non-zero motion vectors MV.
7. The video compression domain-based multi-vehicle speed measurement method according to claim 6, wherein the similarity calculation method in step 4 comprises:
Scos = (MVcpro · MVnpro) / (|MVcpro| · |MVnpro|)
wherein MVcpro is the projection MV of the current target frame, MVnpro is the projection MV of another target frame, and Scos is their cosine similarity;
setting a threshold value b, and if Scos > b, considering the values to be similar;
the proximity degree judgment method adopts a distance calculation method:
setting the coordinates of the first target frame as (X1min, X1max, Y1min, Y1max) and the coordinates of the second target frame as (X2min, X2max, Y2min, Y2max);
dx=Min(|X1min-X2min|,|X1max-X2max|,|X1min-X2max|,|X1max-X2min|);
dy=Min(|Y1min-Y2min|,|Y1max-Y2max|,|Y1min-Y2max|,|Y1max-Y2min|);
dx is the x-axis distance of the two target frames, and dy represents the y-axis distance of the two target frames;
if dx is less than 75 and dy is less than 75, the two target frames are considered close and are merged into one target frame by enlarging to their joint bounding box;
after all target frames are merged, the projection MV within each is calculated; the final form of a target frame is { center, box, linenum, mvpro }.
8. A camera deployed on a highway for multi-vehicle speed measurement, characterized in that: the speed measurement adopts the multi-vehicle speed measurement method based on a video compression domain according to any one of claims 1 to 7.
Background
With the development of society, expressways now extend in all directions and bring great convenience to travelers. However, speeding occurs on expressways and is one of the main causes of expressway accidents, posing serious safety hazards. At present, many speed measurement devices are deployed on expressways to detect speeding violations, including electromagnetic-coil, radar, and laser speed measurement. These devices must work together with a camera to determine whether a vehicle is speeding. They not only require additional equipment but also carry high installation and maintenance costs, which has motivated video-image speed measurement methods. Such a method requires only camera support, calculates the vehicle speed by image processing, and can be applied in the same scenarios as traditional speed measurement.
Current video-image speed measurement methods fall into two types: one selects certain vehicle information as a feature (such as the license plate) for identification and tracking; the other designates a certain area of the image as a monitoring point and measures the speed when a target passes through it. The former can reach high precision but must fully decode the video and search the whole frame for vehicles, which is computationally complex. The latter reduces computation, but the moments at which a vehicle enters and leaves the boundary are hard to determine, which increases error and lowers accuracy, and measurement becomes difficult when many vehicles enter the area.
Current video-image speed measurement first finds the vehicles in each frame and records their features (such as license plate, vehicle type, and color), then matches the same vehicle across frames by these features, records its inter-frame displacement, and calculates its speed. Vehicles must therefore be searched globally in every video frame; since modern surveillance video has high resolution, whole-frame search demands high computing power and causes problems such as high latency and instability, whereas vehicle speed measurement requires stable real-time performance to warn of speeding or danger. A method is therefore needed that quickly searches for and tracks vehicles and calculates their speed, solving the latency caused by full-decoding computation.
Disclosure of Invention
The invention aims to provide a multi-vehicle speed measuring method based on a video compression domain aiming at the defects of the prior art.
In order to achieve the purpose, the invention adopts the technical scheme that:
the invention provides a multi-vehicle speed measuring method based on a video compression domain in a first aspect, which comprises the following steps:
step 1, extracting a motion vector MV from a video code stream;
step 2, initializing a camera;
1) analyzing lane lines and lane areas;
2) calculating a homography conversion matrix H for converting between pixel distances and real-world distances;
3) learning the maximum vehicle speed analyzable by the camera;
step 3, preprocessing a motion vector MV;
eliminating motion vectors MV outside the lane area, and processing only macroblocks with non-zero motion vectors MV inside the lane area;
detecting a moving target in a time-space domain:
judging whether a non-zero macroblock in the lane area is a moving target according to the characteristic that vehicle motion is continuous and smooth, and setting a threshold to determine whether the current macroblock under test, MBc, is a macroblock MBreal of a moving vehicle;
step 4, marking a moving target;
dividing vehicles by lane lines:
when one target frame crosses two lanes, dividing the target frame into two parts along the lane line, and marking each target frame as { center, box, linenum, mvs }; center is the center of gravity of the target frame and contains its coordinates; box is the size of the target frame, comprising the coordinates of the upper left corner and the length and width; linenum is the lane number; mvs is the set of non-zero motion vectors MV in the target frame;
merging the target frames:
accumulating the non-zero motion vectors MV in the marked target frame to form a projection MV which is marked as MVpro;
judging the similarity and proximity of the MVpro of each pair of target frames: first judging similarity; if the MVpro are similar, then judging proximity; if the target frames are also close, merging them into the target frame of a single vehicle;
finally merging into a plurality of target frames so as to determine the positions of a plurality of vehicles;
step 5, tracking the moving target;
projecting the target frame obtained in step 4 into an adjacent frame according to its projection MV, and predicting its position in the adjacent frame as the predicted target frame;
marking the predicted target frame as A and the adjacent-frame target frame as B, and calculating their intersection-over-union as IOU = area(A ∩ B) / area(A ∪ B);
if the intersection-over-union is larger than 0, determining that the two frames are the same object, adding the two target frames to the target-frame matching-pair list, and deleting them from the list to be searched, until the search is complete;
step 6, calculating the speed;
calculating the instantaneous speed: using the target-frame matching-pair list obtained in step 5, finding the matched pixel positions of the corresponding vehicle by feature matching;
calculating the feature matching points of the current target frame and the matched target frame, obtaining feature points of the corresponding areas using SIFT, and obtaining by matching the best pair of coordinates (xi, yi), (xi+1, yi+1) among all the feature matching points;
converting the coordinates (x, y) into real-distance coordinates (x', y') through the homography conversion matrix H obtained in step 2, obtaining the time from the frame rate fps, calculating the displacement from the real distance between the two points, and calculating the actual speed by dividing the displacement by the time;
the actual speed is calculated as Vi = sqrt((x'i+1 − x'i)² + (y'i+1 − y'i)²) · fps, where Vi is the instantaneous speed at frame i, i denotes the i-th frame in which the vehicle appears, and 1/fps is the duration of one frame;
set stop-line and calculate average velocity:
according to the vehicle driving direction, setting the stop line at the position 15% of the frame away from the exit edge along the driving direction; when the vehicle's target frame crosses the stop line, stopping the tracking and calculating and recording the average speed of that target frame;
the average velocity is calculated as Vavg = (1/n) · Σ Vi, where n is the number of frames in which the vehicle's instantaneous speed was recorded.
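The two speed formulas above can be sketched in Python; the function names, metre units, and fps value are illustrative assumptions, not prescribed by the source:

```python
def instantaneous_speed(p1, p2, fps):
    """Vi: real-world displacement of the matched feature pair over one
    frame interval (1/fps seconds). p1, p2 are (x', y') positions in
    metres after the homography conversion."""
    d = ((p2[0] - p1[0]) ** 2 + (p2[1] - p1[1]) ** 2) ** 0.5
    return d * fps  # metres per second

def average_speed(speeds):
    """Average of the per-frame instantaneous speeds recorded until the
    vehicle crosses the stop line."""
    return sum(speeds) / len(speeds)
```

For example, a 1 m real-world displacement between consecutive frames at 25 fps corresponds to 25 m/s.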
based on the above, when the motion vector MV is extracted from the video code stream, the macroblock size is normalized to 4 × 4, the macroblocks other than 4 × 4 are split into n macroblocks of 4 × 4, and the original MV size is used for the split macroblocks.
Based on the above, the method for analyzing the lane line and the lane area includes:
after the camera is fixedly installed, it records for a period of time; mean filtering eliminates the influence of foreground regions, yielding a picture of the static background, from which the lane information is obtained;
applying canny edge segmentation to the static-background picture to extract the lane lines;
after edge segmentation, the parts belonging to straight lines are analyzed with the Hough transform to obtain the lanes.
Based on the above, the method for learning the maximum vehicle speed Vmax analyzable by the camera comprises: assuming the vehicle enters the camera view in one frame and exits it in the next frame, the maximum vehicle speed Vmax is calculated from the length of the camera's shooting area and the duration of one frame.
Based on the above, the method for detecting the moving target in the time-space domain in the step 3 comprises the following steps:
step a, spatial domain processing: whether the current macroblock MBc under test is an isolated point is determined from the motion vectors MV in its 8-neighborhood; if more than 5 macroblocks in the neighborhood have non-zero motion vectors MV, the macroblock is considered non-isolated; otherwise it is considered an isolated point and its motion vector MV is set to zero;
step b, time domain processing:
let macroblock MBc be the macroblock to be analyzed and MVc be its motion vector MV; MBref is the macroblock obtained by projecting MBc into the reference frame, and MVref is its motion vector MV;
the macroblock MBc processed in step a is further processed as follows:
back-projecting the macroblock MBc to be analyzed, together with its MVc, onto the reference frame to generate MBref; MBref overlaps at most four blocks of the reference frame; the size of MVref is calculated from the MVs of the overlapped blocks, weighted by overlap area; MVref is then compared with MVc, and if the two are similar, MBc and its MVc are considered to truly reflect a moving vehicle, and MBc is marked as MBreal.
Based on the above, the calculation method for forming the projection MV in step 4 is as follows:
MVpro = (1/n) · Σ MVi, MVi ∈ mvs
where n denotes the number of non-zero motion vectors MV.
Based on the above, the similarity calculation method in step 4 comprises:
Scos = (MVcpro · MVnpro) / (|MVcpro| · |MVnpro|)
wherein MVcpro is the projection MV of the current target frame, MVnpro is the projection MV of another target frame, and Scos is their cosine similarity;
setting a threshold value b, and if Scos > b, considering the values to be similar;
the proximity degree judgment method adopts a distance calculation method:
setting the coordinates of the first target frame as (X1min, X1max, Y1min, Y1max) and the coordinates of the second target frame as (X2min, X2max, Y2min, Y2max);
dx=Min(|X1min-X2min|,|X1max-X2max|,|X1min-X2max|,|X1max-X2min|);
dy=Min(|Y1min-Y2min|,|Y1max-Y2max|,|Y1min-Y2max|,|Y1max-Y2min|);
dx is the x-axis distance of the two target frames, and dy represents the y-axis distance of the two target frames;
if dx is less than 75 and dy is less than 75, the two target frames are considered close and are merged into one target frame by enlarging to their joint bounding box;
after all target frames are merged, the projection MV within each is calculated; the final form of a target frame is { center, box, linenum, mvpro }.
The second aspect of the invention provides a camera deployed on a highway for multi-vehicle speed measurement; the camera adopts the above multi-vehicle speed measurement method based on the video compression domain.
Compared with the prior art, the invention has prominent substantive characteristics and remarkable progress, and particularly has the following beneficial effects:
(1) the method combines scene information and, by separating multiple vehicles in the compressed domain, solves the problem that adjacent vehicles are easily merged into the same vehicle;
(2) the invention obtains the MVs by parsing the video code stream to analyze vehicle position information, avoiding whole-frame vehicle search under full decoding and improving real-time performance;
(3) the invention only uses the MV information in the compressed domain, and can be widely applied to different video coding standards;
(4) the invention calculates the vehicle speed by using the condition of the pixel domain matching points, does not need specific characteristics of the vehicle and is not influenced by the position of the vehicle in the image.
(5) The road area is learned in the camera initialization process, a large amount of non-road area information can be eliminated in subsequent data processing and identification, and the data volume and the processing complexity are greatly reduced.
Drawings
FIG. 1 is an illustration of macroblock normalization in the method of the present invention.
FIG. 2 is a lane segmentation chart (1, 2, 3 represent three different lane lines) in the method of the present invention.
FIG. 3 is a diagram illustrating a current frame projected onto a reference frame in the method of the present invention.
FIG. 4 is an exemplary diagram of a target block in the method of the present invention.
Fig. 5 is an exemplary diagram of a target frame divided by a lane line (left side is an original image, and right side is a divided image) in the method of the present invention.
FIG. 6 is a schematic diagram of a method for calculating the distance between target frames according to the present invention.
Fig. 7 is a schematic diagram of a stop line set in the method of the present invention.
Detailed Description
The technical solution of the present invention is further described in detail by the following embodiments.
Example 1
The embodiment provides a multi-vehicle speed measuring method based on a video compression domain, which comprises the following steps:
step 1, extracting a motion vector MV from a video code stream;
As shown in fig. 1, when extracting motion vectors MV from the video stream, the macroblock size is normalized to 4 × 4: macroblocks other than 4 × 4 are split into n macroblocks of 4 × 4, and each split macroblock keeps the original MV. This normalization simplifies computation, and since all macroblocks become 4 × 4 blocks, macroblock positions are also easier to locate. For example, a 16 × 16 macroblock is first split into four 8 × 8 macroblocks, and each 8 × 8 macroblock is then split into four 4 × 4 macroblocks; other sizes are handled similarly.
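A minimal sketch of this normalization, assuming macroblocks are represented as (x, y, width, height, MV) tuples — a representation chosen here for illustration, not taken from the source:

```python
def normalize_macroblock(x, y, w, h, mv):
    """Split a w x h macroblock at pixel position (x, y) into 4 x 4
    sub-blocks; each sub-block inherits the parent block's motion
    vector, mirroring the normalization described above."""
    blocks = []
    for by in range(y, y + h, 4):
        for bx in range(x, x + w, 4):
            blocks.append((bx, by, 4, 4, mv))
    return blocks

# A 16 x 16 macroblock yields 16 sub-blocks of 4 x 4, all carrying
# the original MV (here (3, -1) as an example).
sub = normalize_macroblock(0, 0, 16, 16, (3, -1))
```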
Step 2, initializing a camera; when the speed is actually measured, the step only needs to be operated once before use;
1) analyzing lane lines and lane areas;
after the camera is fixedly installed, it records for a period of time; mean filtering eliminates the influence of foreground regions, yielding a picture of the static background, from which the lane information is obtained;
extracting the lane lines from the static-background picture using canny edge segmentation; since the lane lines are light white and the road surface is dark gray, the edge change is significant; when using canny edge segmentation, the low threshold is set to 100 and the high threshold to 300; in addition, since roads also contain stop lines, zebra crossings and other markings, the lane line positions are marked before segmentation to avoid their interference;
after edge segmentation, the parts belonging to straight lines are analyzed with the Hough transform; since lane lines may be incomplete in the image, the fitted straight lines are extended across the whole image; the separated regions are taken as lane parts, and n lane lines form n − 1 lanes, as shown in fig. 2.
2) Calculating a homography conversion matrix H for converting the actual distance and the real distance of the pixel;
the distances between lane lines (horizontal distance) and between lane lines and sidewalks (vertical distance) are obtained from road standards, and the real-world distances of the corresponding areas are marked; at least four non-collinear points are determined from the horizontal and vertical distances, and the homography conversion matrix H converts between pixel distances and real-world distances;
Homography conversion is used to describe the position mapping of an object between the world coordinate system and the pixel coordinate system; the homography conversion matrix H is set as:
H = [h11 h12 h13; h21 h22 h23; h31 h32 h33]
assuming that the real position coordinates are (x', y', 1) and the pixel coordinates are (x, y, 1), the conversion formula is:
(x', y', 1)T = H · (x, y, 1)T (up to scale)
the entry h33 of the homography conversion matrix H is usually fixed to 1 so that the matrix is normalized; the remaining 8 parameters can then be solved by listing eight equations from at least four pairs of matching points.
3) Learning the maximum vehicle speed analyzable by the camera;
The maximum vehicle speed is denoted Vmax: if a vehicle enters the camera view in one frame and exits it in the next frame, its speed is Vmax, which is calculated from the length of the camera's shooting area and the duration of one frame. Any object moving slower than Vmax is recorded by the camera in more than 2 frames, i.e., its motion speed can be analyzed and estimated and its tracking completed by the present method.
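As a toy illustration of this bound (the 40 m view length and 25 fps are made-up numbers, not from the source):

```python
def max_analyzable_speed(view_length_m, fps):
    """Vmax in m/s: a vehicle must appear in at least two frames, so it
    may cross at most the full view length in one frame interval
    (1/fps seconds)."""
    return view_length_m * fps

# e.g. a 40 m field of view at 25 fps:
v_max = max_analyzable_speed(40, 25)
```

Any vehicle slower than this bound is guaranteed to be captured in more than two frames, so its motion can be estimated.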
Step 3, preprocessing a motion vector MV;
eliminating the motion vector MV not in the lane area, and only processing the macro block of the non-zero motion vector MV in the lane area;
detecting a moving target in a time-space domain:
judging whether a non-zero macroblock in the lane area is a moving target according to the characteristic that vehicle motion is continuous and smooth, and setting a threshold to determine whether the current macroblock under test, MBc, is a macroblock MBreal of a moving vehicle;
the specific method for detecting the moving target in the time-space domain comprises the following steps:
step a, spatial domain processing: whether the current macroblock MBc under test is an isolated point is determined from the motion vectors MV in its 8-neighborhood; if more than 5 macroblocks in the neighborhood have non-zero motion vectors MV, the macroblock is considered non-isolated; otherwise it is considered an isolated point and its motion vector MV is set to zero;
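A sketch of this 8-neighborhood filtering, assuming the MV field is stored as a dict keyed by macroblock grid position (an assumed data layout):

```python
def spatial_filter(mv_grid):
    """Zero out isolated non-zero-MV macroblocks: a block is kept only
    if more than 5 of its 8 neighbours also carry a non-zero MV.
    mv_grid: dict mapping (row, col) -> (mvx, mvy)."""
    def nonzero(p):
        return mv_grid.get(p, (0, 0)) != (0, 0)
    out = dict(mv_grid)
    for (r, c), mv in mv_grid.items():
        if mv == (0, 0):
            continue
        neigh = sum(nonzero((r + dr, c + dc))
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
        if neigh <= 5:              # isolated point: suppress its MV
            out[(r, c)] = (0, 0)
    return out

# A dense 3x3 patch keeps its centre block; a lone block far away
# is zeroed as an isolated point.
grid = {(r, c): (1, 1) for r in range(3) for c in range(3)}
grid[(10, 10)] = (1, 1)
filtered = spatial_filter(grid)
```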
step b, time domain processing:
As shown in fig. 3, the currently detected macroblock MBc is back-projected into its reference frame (e.g., the previous frame) to analyze whether the MV of MBc is a real MV caused by object motion or a pseudo MV generated by encoding. Let macroblock MBc be the macroblock to be analyzed and MVc be its motion vector MV; MBref is the macroblock obtained by projecting MBc into the reference frame, and MVref is its motion vector MV;
the macroblock MBc processed in step a is further processed as follows:
back-projecting the macroblock MBc to be analyzed, together with its MVc, onto the reference frame to generate MBref; MBref overlaps at most four blocks of the reference frame; the size of MVref is calculated from the MVs of the overlapped blocks, weighted by overlap area; MVref is then compared with MVc, and if the two are similar, MBc and its MVc are considered to truly reflect a moving vehicle, and MBc is marked as MBreal.
The calculation of the time domain processing can be expressed as follows:
MBref = MBc − MVc / k
MVref = (Σ si · MVi) / (Σ si)
TH = |MVc − MVref| / k
where si denotes the overlap area between macroblock i in the reference frame and the projection MBref; MVi denotes the MV of macroblock i; k is a coefficient reflecting the pixel precision of the MVs, i.e., it converts the MVs into pixel-based vectors; TH measures whether MVc and MVref are close: if TH < a, they are considered similar, where a > 0 is a threshold that can be set differently for different scenes; the smaller the threshold, the stricter the test.
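The back-projection consistency test can be sketched as follows; the quarter-pel coefficient k = 4 and the threshold a = 2 are example values chosen here, not prescribed by the source:

```python
def temporal_check(mvc, overlaps, k=4, a=2.0):
    """Back-projection consistency test for one macroblock.
    mvc: (mvx, mvy) of the block under test, in 1/k-pel units.
    overlaps: list of (area, (mvx, mvy)) for the up-to-four reference
    blocks that the back-projected MBref covers.
    Returns True if MBc is judged to reflect real vehicle motion."""
    total = sum(area for area, _ in overlaps)
    # area-weighted reference MV (MVref in the formulas above)
    mvref_x = sum(area * mv[0] for area, mv in overlaps) / total
    mvref_y = sum(area * mv[1] for area, mv in overlaps) / total
    # distance between MVc and MVref, converted to pixels via k
    th = ((mvc[0] - mvref_x) ** 2 + (mvc[1] - mvref_y) ** 2) ** 0.5 / k
    return th < a
```

A block whose MV agrees with the area-weighted MV of the reference blocks passes the test; a wildly different MV is rejected as a coding artifact.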
Step 4, marking a moving target;
dividing vehicles with different lane lines:
When vehicles run side by side in adjacent lanes, compressed-domain information easily connects the two adjacent targets, so one target frame may contain multiple targets. The lane areas obtained in step 2 are therefore analyzed independently, avoiding interference when two vehicles run side by side. When one target frame crosses two lanes, it is divided into two parts along the lane line and marked as shown in figs. 4 and 5: the part whose center of gravity lies in lane 1 is assigned lane line 1, and the part whose center of gravity lies in lane 2 is assigned lane line 2; the two are processed independently without interfering with each other. The final target frame is { center, box, linenum, mvs }; center is the center of gravity of the target frame and contains its coordinates; box is the size of the target frame, comprising the coordinates of the upper left corner and the length and width; linenum is the lane number; mvs is the set of non-zero motion vectors MV in the target frame;
merging the target frames:
Since the macroblocks with non-zero motion vectors MV are not necessarily contiguous, the white areas belonging to the same vehicle in the binary image may be broken, i.e., the same vehicle may yield several target frames. These target frames are therefore merged: frames of the same object are necessarily adjacent and, belonging to the same object, their motion vectors MV are also similar, so they can be merged by distance and similarity; after merging, different vehicles and their positions in the same frame can be distinguished. The non-zero motion vectors MV in a marked target frame are accumulated to form a projection MV, denoted MVpro;
the similarity and the proximity of the MVpro of each pair of target frames are judged: first the similarity is judged; if similar, the proximity is then judged; if the target frames are also close, they are combined into the target frame of one vehicle. The frames are finally combined into a plurality of target frames, determining the positions of a plurality of vehicles.
Specifically, the projection MV is calculated as:
MVpro = (1/n) · Σ MVi, MVi ∈ mvs
where n denotes the number of non-zero motion vectors MV in the target frame.
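The accumulation step above can be sketched as follows (an illustrative sketch, not code from the patent; it assumes MVpro is the mean of the non-zero motion vectors, consistent with the count n mentioned in the text, and that each MV is an (dx, dy) pair):

```python
def projection_mv(mvs):
    """mvs: list of (dx, dy) non-zero motion vectors of one target frame.

    Returns MVpro, the mean motion vector of the frame.
    """
    n = len(mvs)
    sx = sum(dx for dx, _ in mvs)
    sy = sum(dy for _, dy in mvs)
    return (sx / n, sy / n)
```

For example, two macro blocks with vectors (2, 0) and (4, 2) yield MVpro = (3.0, 1.0).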
The similarity calculation method comprises the following steps:
Scos = (MVcpro · MVnpro) / (|MVcpro| · |MVnpro|)
wherein MVcpro is the projection MV of the current target frame, MVnpro is the projection MV of another target frame, and Scos is the cosine similarity;
a threshold value b is set, and if Scos > b, the two projection MVs are considered similar.
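A minimal sketch of the similarity test for two 2-D projection MVs (the threshold value 0.9 used below is an assumed default; the patent only names it b):

```python
import math

def cosine_similarity(mv_a, mv_b):
    """Scos between two projection MVs given as (dx, dy) pairs."""
    dot = mv_a[0] * mv_b[0] + mv_a[1] * mv_b[1]
    na = math.hypot(mv_a[0], mv_a[1])
    nb = math.hypot(mv_b[0], mv_b[1])
    if na == 0 or nb == 0:
        return 0.0  # degenerate: a zero vector has no direction
    return dot / (na * nb)

def similar(mv_a, mv_b, b=0.9):
    """True when the two projection MVs point in nearly the same direction."""
    return cosine_similarity(mv_a, mv_b) > b
```

Parallel vectors give Scos = 1, perpendicular ones give Scos = 0, so vehicles moving in the same direction pass the test while crossing motion does not.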
The proximity degree is determined by a distance calculation, as shown in fig. 6:
let the first target frame span [X1min, X1max] on the x-axis and [Y1min, Y1max] on the y-axis, and let the second target frame span [X2min, X2max] and [Y2min, Y2max];
dx=Min(|X1min-X2min|,|X1max-X2max|,|X1min-X2max|,|X1max-X2min|);
dy=Min(|Y1min-Y2min|,|Y1max-Y2max|,|Y1min-Y2max|,|Y1max-Y2min|);
dx is the x-axis distance of the two target frames, and dy represents the y-axis distance of the two target frames;
if dx is less than 75 and dy is less than 75, the two frames are considered close, the frame is enlarged to cover both, and the two target frames are combined into one target frame;
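The proximity test and merge step can be sketched as follows (illustrative only; boxes are assumed to be (xmin, ymin, xmax, ymax) tuples in pixels, and 75 is the threshold stated in the text):

```python
def box_distance(b1, b2):
    """Axis distances between two boxes (xmin, ymin, xmax, ymax),
    using the patent's minimum-over-edge-differences rule."""
    x1min, y1min, x1max, y1max = b1
    x2min, y2min, x2max, y2max = b2
    dx = min(abs(x1min - x2min), abs(x1max - x2max),
             abs(x1min - x2max), abs(x1max - x2min))
    dy = min(abs(y1min - y2min), abs(y1max - y2max),
             abs(y1min - y2max), abs(y1max - y2min))
    return dx, dy

def merge_if_close(b1, b2, thresh=75):
    """Combine two target frames into one when both axis distances
    fall below the threshold; otherwise return None."""
    dx, dy = box_distance(b1, b2)
    if dx < thresh and dy < thresh:
        # enlarge to the bounding box covering both frames
        return (min(b1[0], b2[0]), min(b1[1], b2[1]),
                max(b1[2], b2[2]), max(b1[3], b2[3]))
    return None
```

Two 32-pixel boxes separated by a 16-pixel gap, for instance, merge into a single frame spanning both.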
and after all the target frames are combined, the projection MV inside each target frame is calculated, the final form of the target frame being { center, box, linenum, mvpro }.
Step 5, tracking the moving target;
since the lens is still, only the moving objects change between two adjacent frames, so the intersection-over-union ratio can be used to determine the tracking. A target frame of one frame is projected into the adjacent frame by its projection MV to obtain a predicted target frame; the intersection ratio between the predicted target frame and the target frames of the adjacent frame is calculated, a matching pair is recorded if the intersection ratio is greater than 0, and matched pairs are not judged repeatedly. Since the lane line where each vehicle is located has already been calculated, tracking is performed per lane line, and the lane areas do not influence each other.
The specific method comprises the following steps: the target frame obtained in step 4 is shifted by its projection MV (MVpro) into an adjacent frame (the next frame, or alternatively the previous frame) to serve as a predicted target frame, predicting the position of the target frame in that frame; because only the position moves, it suffices to shift the upper-left-corner coordinate of the target frame box by the size of the corresponding projection MV;
marking the predicted target frame as A and the adjacent-frame target frame as B, the intersection ratio IOU of A and B is calculated as IOU = Area(A ∩ B) / Area(A ∪ B); if the intersection ratio is greater than 0, the two frames are determined to be the same object, the two target frames are added to the target frame matching pair list and deleted from the list to be searched, until the search of the matching pair list is finished;
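The projection-and-match step can be sketched as follows (illustrative; boxes are assumed to be (xmin, ymin, xmax, ymax) tuples, IoU computed in the standard way):

```python
def iou(a, b):
    """Intersection over union of boxes (xmin, ymin, xmax, ymax)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def predict_box(box, mvpro):
    """Shift a target frame by its projection MV into the adjacent frame."""
    dx, dy = mvpro
    return (box[0] + dx, box[1] + dy, box[2] + dx, box[3] + dy)
```

A predicted frame and an adjacent-frame detection with any positive overlap (IOU > 0) are recorded as a matching pair.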
step 6, calculating the speed;
calculating the instantaneous speed:
firstly, the target frame matching pair list obtained in step 5 is traversed: because the compressed-domain information is incomplete, the center of gravity of the vehicle target frame is not necessarily the centroid of the vehicle, and the speed calculated from the target frame center has a large error; therefore a feature matching method is used to find the matched pixel positions of the corresponding vehicle. Because only the target area is searched, and one vehicle occupies only about 10 percent of the screen (less when the camera is far away), the calculation amount is greatly reduced compared with searching the whole frame;
then the feature matching points of the current target frame and the matched target frame are calculated: SIFT is used to obtain the feature points of the corresponding areas, and matching yields the optimal pair of coordinates (xi, yi) and (xi+1, yi+1) among all feature matching points; in other embodiments, the LIFT method, the ORB method, or a neural network method may also be used to calculate the matching points between adjacent frames;
the coordinates (x, y) are converted into real-distance coordinates (x', y') through the homography conversion matrix H obtained in step 2; the time is obtained from the frame rate fps, the displacement is calculated from the real distance between the two points, and the actual displacement speed is obtained by dividing the displacement by the time;
the formula for calculating the actual displacement speed is as follows, where Vi is the instantaneous speed of the frame, i represents the ith frame of the vehicle, and 1/fps is the time of one frame:
Vi = sqrt((x'i+1 - x'i)^2 + (y'i+1 - y'i)^2) / (1/fps)
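The homography conversion and speed computation can be sketched as follows (an illustrative sketch: H is assumed to be a 3x3 matrix as nested lists mapping pixel coordinates to real-world metres in homogeneous form, and the instantaneous speed is displacement divided by the frame time 1/fps):

```python
import math

def to_world(H, x, y):
    """Apply the 3x3 homography H to pixel (x, y), normalising by w."""
    X = H[0][0] * x + H[0][1] * y + H[0][2]
    Y = H[1][0] * x + H[1][1] * y + H[1][2]
    W = H[2][0] * x + H[2][1] * y + H[2][2]
    return X / W, Y / W

def instant_speed(H, p0, p1, fps):
    """Speed between matched points p0 = (x_i, y_i) and p1 = (x_i+1, y_i+1)
    of adjacent frames: real-world displacement divided by 1/fps."""
    x0, y0 = to_world(H, *p0)
    x1, y1 = to_world(H, *p1)
    return math.hypot(x1 - x0, y1 - y0) * fps
```

With a purely scaling H of 0.1 m per pixel, a matched point moving 50 pixels between frames at 25 fps corresponds to 5 m per frame, i.e. 125 m/s (the numbers are illustrative only).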
set stop-line and calculate average velocity:
as shown in fig. 7, according to the vehicle driving direction, the stop line is set at a position 15% away from the exit edge of the frame in the driving direction; the target frame of a vehicle stops being tracked when it crosses the stop line, and its average speed is calculated as the record;
the average velocity is calculated as:
example 2
This embodiment provides a camera deployed on a highway for measuring the speed of multiple vehicles, the speed measurement adopting the video-compression-domain-based multi-vehicle speed measurement method described in embodiment 1.
The camera provided by this embodiment extracts the video code stream and analyzes it in the compressed domain to obtain the corresponding vehicle information; only this vehicle information needs to be processed, the vehicle does not need to be located by full-frame decoding, and transmitting all the data to a background server for calculation is avoided.
Finally, it should be noted that the above examples are only intended to illustrate the technical solution of the present invention and not to limit it; although the present invention has been described in detail with reference to preferred embodiments, those skilled in the art will understand that modifications or equivalent substitutions may be made to the specific embodiments or to part of the technical features of the invention without departing from the spirit of the present invention, all of which shall fall within the scope defined by the appended claims.