Intelligent weighing management system based on image processing


1. An intelligent weighing management system based on image processing, characterized in that the system comprises:

a trajectory acquisition unit, configured to acquire a side image containing a vehicle body and a vehicle head, and to interconnect corner key points of the vehicle head in the side image to obtain a plurality of connecting lines; to acquire, based on an image coordinate system, two vehicle-head side lines and two vehicle-head front lines according to the included angles between the connecting lines and the coordinate axes; and to construct a vehicle information sequence from the center point of the vehicle-head front line, the included angle between the vehicle-head front line and the coordinate axis, the length ratio of the vehicle-head side lines, and the weighing entry point, so as to obtain an initial curve trajectory corresponding to the side image;

a trajectory updating unit, configured to predict a sub-curve trajectory from the bounding-box change between the vehicle bounding boxes of adjacent side images, to segment the initial curve trajectory based on the sub-curve trajectory to obtain a segmented curve trajectory, and to update the initial curve trajectory with the sub-curve trajectory and the segmented curve trajectory to obtain a new curve trajectory;

an image classification unit, configured to obtain a trajectory parameter threshold from the position changes of pixel points between the initial curve trajectory and the new curve trajectory, and to classify the side images into image sets by combining the area intersection ratio of the target detection regions in adjacent frames with the trajectory parameter threshold; and

an object detection unit, configured to perform the corresponding target detection according to the different image sets.

2. The system of claim 1, wherein acquiring, by the trajectory acquisition unit, the two vehicle-head side lines and the two vehicle-head front lines according to the included angles between the connecting lines and the coordinate axes comprises:

taking the two connecting lines with the smallest included angle to the vertical axis as the vehicle-head side lines, and taking the two connecting lines that have the smallest included angle to the horizontal axis and are parallel to each other as the vehicle-head front lines.

3. The system of claim 1, wherein classifying, by the image classification unit, the side images into image sets by combining the area intersection ratio of the target detection regions in adjacent frames with the trajectory parameter threshold comprises:

when the area intersection ratio is less than or equal to the trajectory parameter threshold, storing the side image of the next frame in the image set containing the side image of the previous frame; otherwise, constructing a new image set from the side image of the next frame.

4. The system of claim 1 or 2, wherein the center point used by the trajectory acquisition unit is the center point with the largest ordinate among the center points of the two vehicle-head front lines.

5. The system of claim 1, wherein the target detection comprises: detection of the license plate, of the interaction between the driver and the card-swiping machine, and of the interaction between the wheels and the weighing boundary.

6. The system of claim 1, wherein segmenting, by the trajectory updating unit, the initial curve trajectory based on the sub-curve trajectory to obtain the segmented curve trajectory comprises:

obtaining a length factor from the lengths of the sub-curve trajectory and the initial curve trajectory, and segmenting the initial curve trajectory by the length factor to obtain a plurality of first curve trajectories; and

taking the first curve trajectory closest to the sub-curve trajectory as the segmented curve trajectory.

7. The system of claim 1, wherein obtaining, by the image classification unit, the trajectory parameter threshold from the position changes of pixel points between the initial curve trajectory and the new curve trajectory comprises:

within a time period T, acquiring a first position change of each pixel point from the initial curve trajectory at each moment and the positions of the pixel points in the corresponding new curve trajectory;

obtaining a second position change of each pixel point from the first position changes between adjacent frames, and forming a change matrix from the second position changes of the pixel points, arranged in time order within the time period T; and

obtaining the trajectory parameter threshold from the change matrix.

8. The system of claim 1, wherein updating, by the trajectory updating unit, the initial curve trajectory with the sub-curve trajectory and the segmented curve trajectory to obtain the new curve trajectory comprises:

updating the initial curve trajectory by combining a forgetting coefficient and a memory coefficient to obtain the new curve trajectory.

Background

During vehicle weighing, the image to be processed depends on where the vehicle currently is: before weighing, the license plate information of the vehicle is of interest; during weighing, the movement of the vehicle body is of interest; and so on. The prior art generally triggers detection with photoelectric gates or similar sensors, whose deployment and maintenance are complex and whose cost is excessive. When detection is triggered visually instead, the detection moment can only be determined by processing the continuous video frames one by one, which introduces a large amount of redundant computation.

Disclosure of Invention

In order to solve the above technical problems, an object of the present invention is to provide an intelligent weighing management system based on image processing, which adopts the following technical solutions:

the embodiment of the invention provides an intelligent weighing management system based on image processing, which comprises:

a trajectory acquisition unit, configured to acquire a side image containing a vehicle body and a vehicle head, and to interconnect corner key points of the vehicle head in the side image to obtain a plurality of connecting lines; to acquire, based on an image coordinate system, two vehicle-head side lines and two vehicle-head front lines according to the included angles between the connecting lines and the coordinate axes; and to construct a vehicle information sequence from the center point of the vehicle-head front line, the included angle between the vehicle-head front line and the coordinate axis, the length ratio of the vehicle-head side lines, and the weighing entry point, so as to obtain an initial curve trajectory corresponding to the side image;

a trajectory updating unit, configured to predict a sub-curve trajectory from the bounding-box change between the vehicle bounding boxes of adjacent side images, to segment the initial curve trajectory based on the sub-curve trajectory to obtain a segmented curve trajectory, and to update the initial curve trajectory with the sub-curve trajectory and the segmented curve trajectory to obtain a new curve trajectory;

an image classification unit, configured to obtain a trajectory parameter threshold from the position changes of pixel points between the initial curve trajectory and the new curve trajectory, and to classify the side images into image sets by combining the area intersection ratio of the target detection regions in adjacent frames with the trajectory parameter threshold; and

an object detection unit, configured to perform the corresponding target detection according to the different image sets.

Preferably, acquiring, by the trajectory acquisition unit, the two vehicle-head side lines and the two vehicle-head front lines according to the included angles between the connecting lines and the coordinate axes comprises:

taking the two connecting lines with the smallest included angle to the vertical axis as the vehicle-head side lines, and taking the two connecting lines that have the smallest included angle to the horizontal axis and are parallel to each other as the vehicle-head front lines.

Preferably, classifying, by the image classification unit, the side images into image sets by combining the area intersection ratio of the target detection regions in adjacent frames with the trajectory parameter threshold comprises:

when the area intersection ratio is less than or equal to the trajectory parameter threshold, storing the side image of the next frame in the image set containing the side image of the previous frame; otherwise, constructing a new image set from the side image of the next frame.

Preferably, the center point used by the trajectory acquisition unit is the center point with the largest ordinate among the center points of the two vehicle-head front lines.

Preferably, the target detection comprises: detection of the license plate, of the interaction between the driver and the card-swiping machine, and of the interaction between the wheels and the weighing boundary.

Preferably, segmenting, by the trajectory updating unit, the initial curve trajectory based on the sub-curve trajectory to obtain the segmented curve trajectory comprises:

obtaining a length factor from the lengths of the sub-curve trajectory and the initial curve trajectory, and segmenting the initial curve trajectory by the length factor to obtain a plurality of first curve trajectories; and

taking the first curve trajectory closest to the sub-curve trajectory as the segmented curve trajectory.

Preferably, obtaining, by the image classification unit, the trajectory parameter threshold from the position changes of pixel points between the initial curve trajectory and the new curve trajectory comprises:

within a time period T, acquiring a first position change of each pixel point from the initial curve trajectory at each moment and the positions of the pixel points in the corresponding new curve trajectory;

obtaining a second position change of each pixel point from the first position changes between adjacent frames, and forming a change matrix from the second position changes of the pixel points, arranged in time order within the time period T; and

obtaining the trajectory parameter threshold from the change matrix.

Preferably, updating, by the trajectory updating unit, the initial curve trajectory with the sub-curve trajectory and the segmented curve trajectory to obtain the new curve trajectory comprises:

updating the initial curve trajectory by combining a forgetting coefficient and a memory coefficient to obtain the new curve trajectory.

The embodiment of the invention has at least the following beneficial effects: the initial curve trajectory is refined by segmenting the curve trajectory, generating a sub-curve trajectory, and combining a forgetting coefficient with a memory coefficient, yielding an accurate new curve trajectory; the images are then classified according to the position changes of the pixel points between the initial curve trajectory and the new curve trajectory, and the corresponding targets are detected according to the different image sets.

Drawings

In order to illustrate the embodiments of the present invention and the technical solutions of the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained from them by those skilled in the art without creative effort.

Fig. 1 is a block diagram of an intelligent weighing management system based on image processing according to an embodiment of the present invention.

Detailed Description

To further explain the technical means adopted by the present invention to achieve its intended objects and their effects, the embodiments, structures, features and functions of the intelligent weighing management system based on image processing are described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments.

Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.

The following describes a specific scheme of the intelligent weighing management system based on image processing in detail with reference to the accompanying drawings.

The embodiment of the invention is aimed at the following specific scenario: in a road vehicle weighing scene, there is usually a high pole near the weighing facility, and a camera deployed on the high pole can cover the weighing facility and the scene around it.

Preferably, the camera in the embodiment of the invention is an RGB camera with a fixed pose.

Referring to fig. 1, an embodiment of the present invention provides an intelligent weighing management system based on image processing, which includes a trajectory acquisition unit 10, a trajectory update unit 20, an image classification unit 30, and an object detection unit 40.

The trajectory acquisition unit 10 is configured to acquire a side image containing a vehicle body and a vehicle head, and to interconnect corner key points of the vehicle head in the side image to obtain a plurality of connecting lines; to acquire, based on an image coordinate system, two vehicle-head side lines and two vehicle-head front lines according to the included angles between the connecting lines and the coordinate axes; and to construct a vehicle information sequence from the center point of the vehicle-head front line, the included angle between the vehicle-head front line and the coordinate axis, the length ratio of the vehicle-head side lines, and the weighing entry point, so as to obtain an initial curve trajectory corresponding to the side image.

Specifically, side images containing the head and body of the vehicle are acquired with the RGB camera to obtain a video sequence of the vehicle. Each single-frame side image in the video sequence is first coarsely partitioned: a region of interest is drawn at half the image height, the region above this line is taken as the first rough sensing area, and the side images in which the vehicle lies completely inside the first rough sensing area are selected as key frame images.
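As a minimal sketch of this key-frame selection, assuming the vehicle is located in each frame by some detector that returns an axis-aligned box in pixel coordinates (the detector and the box format are assumptions, not part of the embodiment):

```python
def is_key_frame(image_height, vehicle_box):
    """True if the vehicle box lies entirely inside the first rough sensing area,
    i.e. the region of interest above half the image height."""
    x1, y1, x2, y2 = vehicle_box            # (left, top, right, bottom) in pixels
    return y2 <= image_height / 2           # vehicle completely within the upper half

# hypothetical per-frame boxes from a video sequence, image height 480 px
frames = {0: (100, 40, 300, 200), 1: (100, 60, 300, 230), 2: (100, 80, 300, 260)}
key_frames = [idx for idx, box in frames.items() if is_key_frame(480, box)]  # -> [0, 1]
```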

The key frame image is fed into a vehicle-information-perception encoder and decoder, which output a vehicle key-point heat map. Because the vehicle-head information remains visible under the different poses of the vehicle body, the vehicle key points are the 4 corner key points of the vehicle head. Interconnecting the 4 corner key points yields six connecting lines. Based on the image coordinate system, the two connecting lines with the smallest included angle to the vertical axis are taken as the vehicle-head side lines, and the two connecting lines that have the smallest included angle to the horizontal axis and are parallel to each other are taken as the vehicle-head front lines. The center point with the largest ordinate among the center points of the two vehicle-head front lines, denoted (x_s, y_s), is selected as the starting point; the included angle θ between the vehicle-head front line and the horizontal axis and the length ratio η between the two vehicle-head side lines serve as orientation information; and the weighing entry point (x_e, y_e) is the target point. From these, the vehicle information sequence [x_s, y_s, θ, η, x_e, y_e] is constructed. The single-frame key frame image and its corresponding vehicle information sequence are then passed through an initial-trajectory-generation encoder and decoder to obtain the initial curve trajectory.
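A sketch of how the vehicle information sequence [x_s, y_s, θ, η, x_e, y_e] could be assembled from the 4 corner key points, under the assumptions that the key points come from the key-point decoder above, that the weighing entry point is known from the fixed camera pose, and that the parallelism check on the front lines is omitted for brevity:

```python
import numpy as np
from itertools import combinations

def vehicle_info_sequence(corner_pts, entry_pt):
    """corner_pts: the 4 vehicle-head corner key points as (x, y) image coordinates.
    entry_pt: the weighing entry point (x_e, y_e).
    Returns the vehicle information sequence [x_s, y_s, theta, eta, x_e, y_e]."""
    lines = list(combinations(corner_pts, 2))                     # the six connecting lines

    def angle_to(line, axis):                                     # angle between a line and an axis
        v = np.subtract(line[1], line[0])
        cos = abs(np.dot(v, axis)) / (np.linalg.norm(v) + 1e-9)
        return np.arccos(np.clip(cos, 0.0, 1.0))

    y_axis, x_axis = np.array([0.0, 1.0]), np.array([1.0, 0.0])
    side_lines = sorted(lines, key=lambda l: angle_to(l, y_axis))[:2]   # vehicle-head side lines
    front_lines = sorted(lines, key=lambda l: angle_to(l, x_axis))[:2]  # vehicle-head front lines

    centers = [np.mean(l, axis=0) for l in front_lines]
    x_s, y_s = max(centers, key=lambda c: c[1])                   # center point with the largest ordinate
    theta = angle_to(front_lines[0], x_axis)                      # angle of a front line to the x axis
    lens = [np.linalg.norm(np.subtract(l[1], l[0])) for l in side_lines]
    eta = lens[0] / (lens[1] + 1e-9)                              # length ratio of the two side lines
    return [x_s, y_s, theta, eta, entry_pt[0], entry_pt[1]]
```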

It should be noted that, given the starting point, the target point and the orientation information, a curve trajectory can also be obtained with a Bezier curve.
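For reference, a minimal quadratic-Bezier sketch that builds such a trajectory from the starting point, the target point and the orientation angle; the rule used to place the single control point (along the orientation direction, at half the start-target distance) is an assumption:

```python
import numpy as np

def bezier_trajectory(start, target, theta, num_points=100, reach=0.5):
    """Quadratic Bezier curve from start to target; the control point is placed
    from the start point along direction theta, at a fraction `reach` of the
    start-target distance (placement rule is an assumption)."""
    p0, p2 = np.asarray(start, float), np.asarray(target, float)
    d = np.linalg.norm(p2 - p0)
    p1 = p0 + reach * d * np.array([np.cos(theta), np.sin(theta)])    # control point from orientation
    t = np.linspace(0.0, 1.0, num_points)[:, None]
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2     # (num_points, 2) trajectory

trajectory = bezier_trajectory(start=(320, 400), target=(500, 120), theta=np.deg2rad(30))
```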

The trajectory updating unit 20 is configured to predict a sub-curve trajectory from a bounding box change between vehicle bounding boxes in adjacent side images, segment the initial curve trajectory based on the sub-curve trajectory to obtain a segmented curve trajectory, and update the initial curve trajectory using the sub-curve trajectory and the segmented curve trajectory to obtain a new curve trajectory.

Specifically, since it is difficult for a deep neural network to predict a curve trajectory accurately, especially a long one, the prediction of the initial curve trajectory is imprecise. The initial curve trajectory and the corresponding timestamp are therefore fed into the trajectory updating unit 20, which, according to the timestamp t of the initial curve trajectory, selects the (t-1)-th and t-th key frame images for processing and carries out forgetting-based optimization of the initial curve trajectory. The optimization method is as follows:

1) An image processing unit, used for acquiring the bounding-box change information.

The key frame image is fed into a semantic segmentation encoder and decoder, which output a semantic segmentation image. Since the vehicle bounding box comprises a vehicle-body bounding box and a vehicle-head bounding box, the semantic segmentation categories include vehicle-body pixel points, vehicle-head pixel points, road, and so on; the minimum bounding rectangles of the vehicle-body pixel points and of the vehicle-head pixel points are taken as the vehicle-body bounding box and the vehicle-head bounding box respectively.

It should be noted that, because the vehicle body appears in two parts, such as the side of the vehicle body and its upper side, the embodiment of the present invention retains only the single largest vehicle-body bounding box.

The bounding-box change information is acquired from the (t-1)-th and t-th key frame images. It comprises the change of the vehicle-body bounding box and the change of the vehicle-head bounding box, where each change consists of the change in bounding-box size and the change in the position of the bounding-box center point.
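A sketch of the bounding-box change computation, assuming the vehicle-body and vehicle-head masks are binary images produced by the semantic segmentation decoder (the mask format and the concatenated layout of the change vector are assumptions):

```python
import cv2
import numpy as np

def bbox_from_mask(mask):
    """Minimum bounding rectangle (x, y, w, h) of the non-zero pixels of a binary mask."""
    pts = cv2.findNonZero(mask.astype(np.uint8))
    return cv2.boundingRect(pts)

def bbox_change(prev_mask, curr_mask):
    """Bounding-box change between two adjacent key frames: size change and
    center-point displacement, concatenated into one vector."""
    x0, y0, w0, h0 = bbox_from_mask(prev_mask)
    x1, y1, w1, h1 = bbox_from_mask(curr_mask)
    size_change = np.array([w1 - w0, h1 - h0], dtype=float)
    center_change = np.array([(x1 + w1 / 2) - (x0 + w0 / 2),
                              (y1 + h1 / 2) - (y0 + h0 / 2)], dtype=float)
    return np.concatenate([size_change, center_change])

# change information for body and head bounding boxes between frames t-1 and t:
# change_info = np.concatenate([bbox_change(body_prev, body_curr), bbox_change(head_prev, head_curr)])
```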

2) A trajectory processing unit, used for acquiring the sub-curve trajectory and the segmented curve trajectory from the bounding-box change information.

The bounding-box change information is fed into a fully-connected layer to predict the sub-curve trajectory; the sub-curve trajectory does not need to run from the starting point to the target point, but only from the current position point to an optimal predicted position point. The length of the sub-curve trajectory is counted in pixel points; according to the length of the sub-curve trajectory, the length factor closest to the length of the initial curve trajectory is searched for downwards, and the initial curve trajectory is segmented by this length factor to obtain a plurality of first curve trajectories. The Euclidean distances between the first curve trajectories and the sub-curve trajectory are then computed, and the first curve trajectory closest to the sub-curve trajectory is taken as the segmented curve trajectory.
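A sketch of this segmentation step under stated assumptions: both curves are ordered arrays of pixel positions, the length factor is obtained by rounding down the ratio of the two curve lengths, and the distance between two curves is the mean point-wise Euclidean distance after resampling. None of these choices is fixed by the embodiment beyond "searched downwards" and "Euclidean distance".

```python
import numpy as np

def segment_initial_curve(initial_curve, sub_curve):
    """initial_curve, sub_curve: (N, 2) arrays of pixel positions along each curve.
    Splits the initial curve into pieces of roughly the sub-curve's pixel length
    and returns the piece closest to the sub-curve (mean Euclidean distance)."""
    sub_len, init_len = len(sub_curve), len(initial_curve)
    factor = max(init_len // sub_len, 1)             # length factor, searched downwards (assumed floor)
    pieces = np.array_split(initial_curve, factor)   # first curve trajectories

    def mean_distance(piece):
        # resample the piece to the sub-curve's length before comparing point-wise
        idx = np.linspace(0, len(piece) - 1, sub_len).astype(int)
        return np.mean(np.linalg.norm(piece[idx] - sub_curve, axis=1))

    return min(pieces, key=mean_distance)            # segmented curve trajectory
```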

3) A forgetting unit, used for acquiring the forgetting coefficient.

A segmented-curve polynomial is fitted to the segmented curve trajectory to obtain the coefficient of each term of the polynomial, forming a first coefficient sequence; the number of polynomial terms is fixed to k. The sub-curve trajectory, the segmented curve trajectory and the first coefficient sequence of the segmented-curve polynomial are fed into a first fully-connected layer, which outputs, through a sigmoid function, a forgetting coefficient for each coefficient of the segmented-curve polynomial. During training, the first fully-connected layer is jointly supervised by the forgetting unit, the memory unit and the prediction unit.
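A sketch of the forgetting unit, assuming the curve trajectories are fitted as polynomials y = f(x) with a fixed number of terms k, and that the first fully-connected layer consumes the sub-curve points, the segmented-curve points and the first coefficient sequence flattened into one feature vector (the input layout and dimensions are assumptions):

```python
import numpy as np
import torch
import torch.nn as nn

K = 4  # fixed number of polynomial terms (degree K - 1)

def fit_coefficients(curve, k=K):
    """Polynomial fit y = f(x) over the curve's pixel positions; returns k coefficients."""
    x, y = curve[:, 0], curve[:, 1]
    return np.polyfit(x, y, deg=k - 1)

class ForgettingUnit(nn.Module):
    """First fully-connected layer: outputs one forgetting coefficient per polynomial coefficient."""
    def __init__(self, in_dim, k=K):
        super().__init__()
        self.fc = nn.Linear(in_dim, k)

    def forward(self, features):
        return torch.sigmoid(self.fc(features))      # forgetting coefficients in (0, 1)

# hypothetical usage: flatten sub-curve, segmented curve and the first coefficient sequence
# features = torch.cat([torch.from_numpy(np.asarray(a)).float().flatten()
#                       for a in (sub_curve, seg_curve, fit_coefficients(seg_curve))])
# alpha = ForgettingUnit(in_dim=features.numel())(features)
```

The memory unit described next follows the same pattern, with the sub-curve polynomial's coefficient sequence as input and a second fully-connected layer producing the memory coefficients.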

4) A memory unit, used for acquiring the memory coefficient.

Similarly, a sub-curve polynomial is fitted to the sub-curve trajectory to obtain the coefficient of each term, forming a second coefficient sequence. The sub-curve trajectory, the segmented curve trajectory and the second coefficient sequence of the sub-curve polynomial are fed into a second fully-connected layer, which outputs, through a sigmoid function, a memory coefficient for each coefficient of the sub-curve polynomial. During training, the second fully-connected layer is jointly supervised by the forgetting unit, the memory unit and the prediction unit.

5) A prediction unit, used for obtaining the updated new curve trajectory.

The forgetting coefficients and the memory coefficients are used as the weights of the coefficients of the segmented-curve polynomial and of the sub-curve polynomial respectively, and the weighted sum of the coefficients gives the updated new curve trajectory.
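The prediction step then reduces to a per-coefficient weighted sum; a minimal sketch (evaluating the new polynomial back into (x, y) pixel points is added here purely for illustration):

```python
import numpy as np

def update_trajectory(seg_coeffs, sub_coeffs, alpha, beta, x_range):
    """Weighted addition of the segmented-curve and sub-curve polynomial coefficients,
    with the forgetting coefficients alpha and memory coefficients beta as weights."""
    new_coeffs = alpha * seg_coeffs + beta * sub_coeffs
    x = np.asarray(x_range, dtype=float)
    return np.stack([x, np.polyval(new_coeffs, x)], axis=1)   # new curve trajectory as (x, y) points
```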

The new curve trajectory, the semantic segmentation image and the vehicle-body bounding-box change information are fed as inputs into a bounding-box-prediction encoder and decoder, which output a predicted semantic segmentation image; the predicted bounding box is obtained as its minimum bounding rectangle. A joint loss function of the forgetting unit, the memory unit and the prediction unit is constructed from the predicted bounding box, the forgetting coefficient and the memory coefficient: L = L_bbox + L_f, where L_f = |α + β − 1| is the loss on the forgetting and memory coefficients, α is the forgetting coefficient, β is the memory coefficient, and L_bbox is the predicted-bounding-box loss.

It should be noted that the embodiment of the present invention uses the real bounding box of the (t+1)-th key frame image as the label, and the intersection-over-union between the predicted bounding box and the real bounding box is used as the predicted-bounding-box loss.
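A sketch of the joint loss in PyTorch, taking the predicted-bounding-box loss as 1 − IoU between the predicted and real boxes (the precise form of L_bbox beyond "intersection over union" is an assumption):

```python
import torch

def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) tensors."""
    x1 = torch.max(box_a[0], box_b[0]); y1 = torch.max(box_a[1], box_b[1])
    x2 = torch.min(box_a[2], box_b[2]); y2 = torch.min(box_a[3], box_b[3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def joint_loss(pred_box, real_box, alpha, beta):
    """L = L_bbox + L_f, with L_f = |alpha + beta - 1| summed over the k coefficients."""
    l_bbox = 1.0 - iou(pred_box, real_box)          # assumption: IoU-based box loss
    l_f = torch.abs(alpha + beta - 1.0).sum()       # loss on forgetting and memory coefficients
    return l_bbox + l_f
```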

The image classification unit 30 is configured to obtain a trajectory parameter threshold from a position change of a pixel point between the initial curve trajectory and the new curve trajectory, and perform image set classification on the side images by combining an area intersection ratio of target detection regions in adjacent frames and the trajectory parameter threshold.

Specifically, because the trajectory updating unit 20 updates the initial curve trajectory segment by segment, the position updated on the curve trajectory differs from moment to moment, and some positions may be updated repeatedly. Taking the initial point of the initial curve trajectory as the starting point, pixel points are marked along the initial curve trajectory, and the largest number of pixel points contained in the initial curve trajectory at any moment within the time period T is taken as N. The position change of the n-th pixel point is represented through its neighbourhood: the first position change of the n-th pixel point is obtained from the initial curve trajectory of the (t-1)-th key frame image and the corresponding new curve trajectory, and likewise from the initial curve trajectory of the t-th key frame image and the corresponding new curve trajectory; the second position change of the n-th pixel point is then obtained from its first position change in frame t-1 and its first position change in frame t. The second position changes of the N pixel points within the time period T are arranged in time order to form a change matrix of size (T-1)×N, which is fed into a set-parameter encoder and a fully-connected layer to output a predicted value of the trajectory parameter.
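A sketch of how the (T−1)×N change matrix can be assembled, assuming the N marked pixel positions on the initial and updated trajectories are available at each of the T moments as (T, N, 2) arrays; the first position change is reduced here to the Euclidean displacement of each marked pixel, since the embodiment's neighbourhood-based representation is not fully specified:

```python
import numpy as np

def change_matrix(initial_curves, new_curves):
    """initial_curves, new_curves: (T, N, 2) arrays of the N marked pixel positions
    on the initial and the updated curve trajectory at each of T moments."""
    first_change = np.linalg.norm(new_curves - initial_curves, axis=2)   # (T, N) per-moment change
    second_change = first_change[1:] - first_change[:-1]                 # change between adjacent frames
    return second_change                                                 # change matrix, shape (T-1, N)

# the change matrix is then fed into the set-parameter encoder and a fully-connected
# layer to regress the trajectory parameter threshold (networks omitted in this sketch).
```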

The curve trajectory information in the key frame image and the vehicle bounding-box information are fed, as two inputs, into a first ROI-extraction encoder and a second ROI-extraction encoder respectively; their feature tensors are combined, and an ROI region is output through an ROI-extraction decoder. The ROI region is the target detection region, which determines the target to be detected while the vehicle is at different positions: before the vehicle reaches the weighing area, the detection target is the license plate; when the vehicle reaches the weighing area, the detection target is the interaction between the driver and the card-swiping machine; while the vehicle is being weighed, the detection target is the interaction between the wheels and the weighing boundary; and so on.

The predicted value of the trajectory parameter is taken as the trajectory parameter threshold, and the area intersection ratio of the target detection regions in two adjacent frames is calculated. When the area intersection ratio is less than or equal to the trajectory parameter threshold, the next key frame image is stored in the image set containing the previous key frame image; otherwise, a new image set is constructed from the next key frame image. Classifying all key frame images in the video sequence in this way yields a plurality of image sets.
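A sketch of the classification rule, assuming each key frame carries one target detection region as an axis-aligned box (x1, y1, x2, y2); following the rule above, adjacent frames stay in the same image set while the area intersection ratio of their regions does not exceed the trajectory parameter threshold, and a new set is started otherwise:

```python
def area_iou(box_a, box_b):
    """Area intersection ratio (intersection over union) of two (x1, y1, x2, y2) regions."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def classify_key_frames(frames, regions, threshold):
    """Group key frames into image sets by the IoU of adjacent target detection regions."""
    image_sets = [[frames[0]]]
    for i in range(1, len(frames)):
        if area_iou(regions[i - 1], regions[i]) <= threshold:
            image_sets[-1].append(frames[i])     # keep in the current image set
        else:
            image_sets.append([frames[i]])       # start a new image set
    return image_sets
```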

The object detection unit 40 is configured to implement corresponding object detection according to different image sets.

Specifically, the corresponding image set is retrieved according to the client's requirements to detect the corresponding target. For example, the image set whose detection target is the license plate is sent to a license plate recognition unit, which performs license plate detection through optical character recognition; the image set whose detection target is an interaction is sent to an interaction detection unit, which can detect the interaction through an i-CAN network.

In summary, the embodiment of the present invention provides an intelligent weighing system based on image processing. The trajectory acquisition unit 10 obtains a vehicle information sequence from the collected side image and thereby an initial curve trajectory corresponding to the side image. The trajectory updating unit 20 predicts a sub-curve trajectory from the bounding-box change between the vehicle bounding boxes of adjacent side images, segments the initial curve trajectory based on the sub-curve trajectory to obtain a segmented curve trajectory, and updates the initial curve trajectory with the sub-curve trajectory and the segmented curve trajectory to obtain a new curve trajectory. The image classification unit 30 classifies the side images into image sets according to the position changes of the pixel points between the initial curve trajectory and the new curve trajectory. The object detection unit 40 performs the corresponding target detection according to the different image sets. By segmenting the curve trajectory, generating a sub-curve trajectory, and combining a forgetting coefficient with a memory coefficient, the initial curve trajectory is refined into an accurate new curve trajectory; classifying the images by the position changes of the pixel points between the initial curve trajectory and the new curve trajectory, and detecting the corresponding target per image set, guarantees the accuracy of the image data and improves the efficiency of image data interaction.

It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.

The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.

The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
