Transformer substation indicator lamp state identification method
1. A transformer substation indicator lamp state identification method is characterized by comprising the following steps:
collecting an original inspection image;
processing the original inspection image based on a registration technology to obtain a training image;
performing data expansion on the training image;
performing deep learning training on the training image after data expansion to obtain a weight file and a network structure file;
carrying out quantization compression operation on the weight file and the network structure file to obtain a WK weight file;
transplanting the WK weight file into a camera for deep learning to obtain a deep learning network model;
and identifying the state of the substation indicator lamp through a deep learning network model.
2. The substation indicator lamp state identification method according to claim 1, wherein the training image is obtained by the following process:
screening and classifying the original inspection image;
and marking the position information of the indicator lamp in the screened and classified image by a registration technology.
3. The substation indicator lamp state identification method according to claim 1, wherein performing data expansion on the training image comprises:
carrying out random transformation processing on the training image to obtain a random transformation image;
and performing brightness transformation, contrast transformation or color transformation on the random transformation image to obtain a training image after data expansion.
4. The substation indicator lamp state identification method according to claim 3, wherein the random transformation process comprises a rotation transformation, a flipping transformation, a scaling transformation, a translation transformation and a shear transformation.
5. The substation indicator lamp state identification method according to claim 1, wherein the deep learning training process is as follows:
carrying out image size transformation on all the training images subjected to data expansion;
training the training image after size transformation through a Caffe deep learning framework to obtain a model weight file and a network structure file.
6. The substation indicator lamp state identification method according to claim 5, wherein a backbone network model in the Caffe deep learning framework is an AlexNet network;
the activation function of the AlexNet network is
F(x)=x*sigmoid(β*x) (1)
Where x represents the convolution output and β represents the activation coefficient.
7. The substation indicator lamp state identification method according to claim 6, wherein the number of neurons of FC6 in the AlexNet network is 512, and the number of neurons of FC7 in the AlexNet network is 1024.
8. The substation indicator lamp state recognition method according to claim 1, further comprising testing a deep learning network model trained in a camera, comprising the steps of:
acquiring continuous frame images through a camera after deep learning;
marking an interested area in a to-be-predicted image;
inputting a to-be-predicted image into a registration system of a camera, and acquiring the position information of an indicator lamp in an interested area;
inputting the position information of the indicator light into a trained deep learning network model in the camera, and predicting the state of the indicator light;
and comparing the prediction result of the state of the indicator light with the real state, and judging whether the state is accurate or not.
9. A substation indicator lamp state identification system, the system comprising:
an acquisition module: used for collecting an original inspection image;
a first obtaining module: used for processing the original inspection image based on a registration technology to obtain a training image;
a data expansion module: used for performing data expansion on the training image;
a second obtaining module: used for performing deep learning training on the training image after data expansion to obtain a weight file and a network structure file;
a third obtaining module: used for performing a quantization compression operation on the weight file and the network structure file to obtain a WK weight file;
a deep learning module: used for transplanting the WK weight file into a camera for deep learning to obtain a deep learning network model;
an identification module: used for identifying the substation indicator lamp state through the deep learning network model.
10. A computer-readable storage medium on which a computer program is stored which, when executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
Background
In the power inspection system, indicator lamps, pressure plates and air switches are very important inspection objects. Indicator lamps are used for judging whether the corresponding equipment works normally. At present, indicator lamp inspection still relies on manual patrols; compared with pressure plates and air switches, indicator lamps are small and easy to overlook, which places very high demands on inspection personnel. Meanwhile, abnormal indicator lamp states account for only a small proportion of inspections, so manual inspection wastes human resources.
With the rapid development of artificial intelligence in recent years, the technology has been successfully applied in various industries. In the electric power system, automatic inspection based on deep learning can reduce labor cost, improve inspection efficiency, and promote the stable and safe operation of the transformer substation.
In the prior art, when indicator lamps are identified intelligently, a deep learning object detection network is usually used to locate and classify all indicator lamps on the whole switch cabinet. In practice, however, only the indicator lamps at specific positions on the switch cabinet need to be identified, and the prior art cannot intelligently identify indicator lamps within a specified region.
In the prior art, a mobile device (a mobile phone, tablet or USB camera) collects inspection images and uploads them over an optical cable to a server for model analysis, and the analysis result is returned to the device. This approach depends heavily on network quality and transmission equipment and cannot achieve real-time detection.
Disclosure of Invention
Aiming at the defects of the prior art, the invention aims to provide a transformer substation indicator lamp state identification method to solve the problem of low identification efficiency in the prior art.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
a transformer substation indicator lamp state identification method comprises the following steps:
collecting an original inspection image;
processing the original inspection image based on a registration technology to obtain a training image;
performing data expansion on the training image;
performing deep learning training on the training image after data expansion to obtain a weight file and a network structure file;
carrying out quantization compression operation on the weight file and the network structure file to obtain a WK weight file;
transplanting the WK weight file into a camera for deep learning to obtain a deep learning network model;
and identifying the state of the substation indicator lamp through a deep learning network model.
Further, the process of acquiring the training image is as follows:
screening and classifying the original inspection image;
and marking the position information of the indicator lamp in the screened and classified image by a registration technology.
Further, performing data expansion on the training image, including:
carrying out random transformation processing on the training image to obtain a random transformation image;
and performing brightness transformation, contrast transformation or color transformation on the random transformation image to obtain a training image after data expansion.
Further, the random transformation process includes a rotation transformation, a flipping transformation, a scaling transformation, a translation transformation, and a shear transformation.
Further, the process of the deep learning training is as follows:
carrying out image size transformation on all the training images subjected to data expansion;
training the training image after size transformation through a Caffe deep learning framework to obtain a model weight file and a network structure file.
Further, a backbone network model in the Caffe deep learning framework is an AlexNet network;
the activation function of the AlexNet network is
F(x)=x*sigmoid(β*x) (1)
Where x represents the convolution output and β represents the activation coefficient.
Furthermore, the number of neurons of FC6 in the AlexNet network is 512, and the number of neurons of FC7 is 1024.
Further, the method also comprises the step of testing the trained deep learning network model in the camera, and the method comprises the following steps:
acquiring continuous frame images through a camera after deep learning;
marking an interested area in a to-be-predicted image;
inputting a to-be-predicted image into a registration system of a camera, and acquiring the position information of an indicator lamp in an interested area;
inputting the position information of the indicator light into a trained deep learning network model in the camera, and predicting the state of the indicator light;
and comparing the prediction result of the state of the indicator light with the real state, and judging whether the state is accurate or not.
A substation indicator light status identification system, the system comprising:
an acquisition module: used for collecting an original inspection image;
a first obtaining module: used for processing the original inspection image based on a registration technology to obtain a training image;
a data expansion module: used for performing data expansion on the training image;
a second obtaining module: used for performing deep learning training on the training image after data expansion to obtain a weight file and a network structure file;
a third obtaining module: used for performing a quantization compression operation on the weight file and the network structure file to obtain a WK weight file;
a deep learning module: used for transplanting the WK weight file into a camera for deep learning to obtain a deep learning network model;
an identification module: used for identifying the substation indicator lamp state through the deep learning network model.
A computer-readable storage medium on which a computer program is stored which, when executed by a processor, carries out the steps of the method described above.
Compared with the prior art, the invention has the following beneficial effects:
in the prior art, a locatable marker, such as a two-dimensional code or a reference mark such as a black border line, is usually pasted on the switch cabinet. The marker is used to locate the switch cabinet in the original inspection image acquired by the camera, offset correction is performed to obtain the coordinates of the whole cabinet, and a target detection algorithm such as SSD, YOLO or Faster R-CNN then searches the whole cabinet to obtain the positions of all indicator lamps. In practice, however, usually only a specific indicator lamp on the switch cabinet needs to be located and its state judged. To address this, the image registration technology adopted in the invention directly obtains the position of the specific indicator lamp to be predicted on the switch cabinet, which improves inspection efficiency and reduces labor cost. In the prior art, the original inspection image is usually acquired by a movable device such as a mobile phone, tablet or USB camera, transmitted to a server, and the prediction result fed back; by contrast, the invention completes acquisition and prediction inside the camera itself.
Drawings
FIG. 1 is a flow diagram of model migration;
fig. 2 is a registration flow chart;
FIG. 3 is a model training flow diagram;
FIG. 4 is a flow chart of model prediction.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
As shown in fig. 1 to 4, a method for identifying the status of a substation indicator lamp includes the following steps: model training:
s1, collecting original inspection image data by using a camera.
The images acquired by the inspection equipment are used as original training images. An original inspection image usually covers the whole switch cabinet area and contains multiple devices: indicator lamps, pressure plates, operating buttons, knobs, etc. The indicator lamps are located and cropped to obtain their position information.
S2. Crop the original inspection image data based on the registration technology and label it manually.
Based on the original inspection images obtained in S1, manual screening is first performed. Switch cabinet images of the same object or scene taken at different times, with different sensors, from different viewing angles and under different shooting conditions are classified into one category. Manual labeling with the LabelImg tool yields the position information Z1 = [X1, Y1, X2, Y2] of a single indicator lamp, where (X1, Y1) are the coordinates of the upper-left corner of indicator lamp A1 and (X2, Y2) the coordinates of its lower-right corner; [X1, Y1, X2, Y2] thus gives the position of A1 in the original inspection image. The positions of indicator lamps A2, A3, … are obtained in the same way. Within each image category, the registration technology maps indicator lamp A1, as captured under different acquisition angles and illumination, to the corresponding indicator lamp A'1 in images of the same category, with position information Z'1 = [X'1, Y'1, X'2, Y'2], where (X'1, Y'1) and (X'2, Y'2) are the upper-left and lower-right corners of A'1. Finally, training images cropped to contain only indicator lamp A'1 are obtained and stored in classes according to the indicator lamp state. The registration technology reduces the manual labeling workload and avoids labeling errors caused by human mistakes.
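The transfer of a labeled box to a registered image can be sketched as follows, assuming the registration step yields a 3 × 3 homography H between the labeled reference image and a new image (numpy only; not the camera's actual registration interface):

```python
import numpy as np

def transfer_box(box, H):
    """Map a [x1, y1, x2, y2] box through a 3x3 homography H and
    return the axis-aligned bounding box of the mapped corners."""
    x1, y1, x2, y2 = box
    corners = np.array([[x1, y1, 1], [x2, y1, 1],
                        [x2, y2, 1], [x1, y2, 1]], dtype=float).T
    mapped = H @ corners
    mapped = mapped[:2] / mapped[2]   # perspective divide
    xs, ys = mapped
    return [xs.min(), ys.min(), xs.max(), ys.max()]
```

With the identity homography the box is unchanged; a pure translation shifts it by the translation vector.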
S3. Perform data expansion on the original indicator lamp training data obtained in S2.
First, random rotation, flipping, scaling, translation and shear transformations are applied to each image, yielding three randomly transformed images. Then random brightness, contrast and color transformations are applied to these three images. These operations expand the roughly three thousand one hundred original indicator lamp training images stored in S2 to twenty-five thousand, greatly enriching the training set and improving the robustness and stability of the model.
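The photometric and flip portions of this expansion can be sketched in numpy (the transform ranges and the eight copies per image are illustrative assumptions, not the method's exact parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_flip(img):
    # Randomly mirror the image horizontally and/or vertically.
    if rng.random() < 0.5:
        img = img[:, ::-1]
    if rng.random() < 0.5:
        img = img[::-1, :]
    return img

def random_brightness(img, low=0.7, high=1.3):
    # Scale pixel intensities by a random factor and clip to [0, 255].
    factor = rng.uniform(low, high)
    return np.clip(img.astype(float) * factor, 0, 255).astype(np.uint8)

def random_contrast(img, low=0.7, high=1.3):
    # Stretch or compress intensities around the image mean.
    factor = rng.uniform(low, high)
    mean = img.mean()
    return np.clip((img.astype(float) - mean) * factor + mean,
                   0, 255).astype(np.uint8)

def augment(img, n=8):
    # Produce n augmented copies of one training image.
    return [random_contrast(random_brightness(random_flip(img)))
            for _ in range(n)]
```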
S4. Perform deep learning training on the two indicator lamp state training sets (on and off) obtained in S3.
D1. First, perform image size transformation on all training data in the training set so that both the height H and the width W become 128, i.e. the input image size is 128 × 128.
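The size transformation in D1 can be sketched as a nearest-neighbour resize; a real pipeline would typically use OpenCV or Caffe's data transformer, so this numpy version is only illustrative:

```python
import numpy as np

def resize_nearest(img, size=128):
    """Nearest-neighbour resize of an H x W (x C) image to size x size."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    return img[rows][:, cols]
```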
D2. Since the model weight file will subsequently be transplanted to the camera, the invention trains the model with the Caffe deep learning framework. Compared with frameworks such as TensorFlow and PyTorch, Caffe has a clear structure, high readability and fast speed, and common camera platforms on the market support Caffe models.
The backbone network is an AlexNet network with five convolutional layers and three fully connected layers. The invention makes two optimizations to the original AlexNet structure:
1. The ReLU activation function is replaced by the Swish activation function (equation (1) gives the Swish calculation). Tests show that, with the training data set unchanged, replacing ReLU with Swish in the network structure improves the accuracy of the deep learning network on the verification set by 1.4%. Since the Caffe framework has no Swish activation function interface, the activation function needs to be implemented manually.
F(x)=x*sigmoid(β*x) (1)
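Because Caffe provides no Swish interface, the function and its derivative must be implemented manually; a numpy sketch of the forward pass of equation (1) and the gradient needed for backpropagation:

```python
import numpy as np

def swish(x, beta=1.0):
    """Swish activation F(x) = x * sigmoid(beta * x)  -- equation (1)."""
    return x / (1.0 + np.exp(-beta * x))

def swish_grad(x, beta=1.0):
    """Derivative for the backward pass:
    F'(x) = beta * F(x) + sigmoid(beta * x) * (1 - beta * F(x))."""
    s = 1.0 / (1.0 + np.exp(-beta * x))
    f = x * s
    return beta * f + s * (1.0 - beta * f)
```

For large positive x, F(x) approaches x (ReLU-like); for negative x it decays smoothly to zero instead of being clipped.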
2. The numbers of neurons of FC6 and FC7 are changed to 512 and 1024, respectively. The reason is that the network weight file will be transplanted to the camera, whose memory is limited. The unmodified AlexNet network produces a weight file of 64 MB; after FC6 and FC7 are reduced to 512 and 1024 neurons, the weight file shrinks to 16 MB, relieving memory pressure on the camera chip so that the model can subsequently run stably on the camera, while the accuracy on the training and verification sets drops only slightly.
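A rough sanity check of the savings from shrinking FC6/FC7 can be computed directly; the flattened feature size of 9216 (classic 227 × 227 AlexNet) and the 2-class FC8 are assumed values here, and the exact file sizes in the text also depend on the 128 × 128 input:

```python
# Compare fully connected parameter counts before and after the FC6/FC7
# reduction, under the assumed dimensions above.
FLAT, CLASSES = 9216, 2

def fc_params(fc6, fc7):
    # weights + biases of FC6, FC7 and FC8
    return (FLAT * fc6 + fc6) + (fc6 * fc7 + fc7) + (fc7 * CLASSES + CLASSES)

orig = fc_params(4096, 4096)   # original AlexNet: FC6 = FC7 = 4096
slim = fc_params(512, 1024)    # modified network: FC6 = 512, FC7 = 1024
print(f"original FC parameters: {orig:,}")
print(f"reduced  FC parameters: {slim:,} ({orig / slim:.1f}x fewer)")
```

The fully connected block shrinks by roughly an order of magnitude, consistent with the direction (if not the exact figures) of the 64 MB → 16 MB reduction described above.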
And S5, carrying out quantization compression operation on the CaffeModel weight file and the network structure file generated in the S4.
The deep learning model acceleration engine in the camera uses parameter compression to reduce bandwidth occupation. To improve the compression rate, the fully connected layer parameters are sparsified, and quantized calculation is performed in a low-bandwidth mode to minimize the bandwidth required by the system.
Meanwhile, images collected by the camera are input in BGR format and normalized, with the numerical normalization parameter set to 1/255.0.
Based on these compression parameter settings, the model is quantized and compressed into a weight file in WK format; the file size shrinks from 16 MB to 4 MB, greatly reducing camera memory occupancy.
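The principle behind the 16 MB → 4 MB reduction can be illustrated with a minimal linear 8-bit quantization sketch (the camera vendor's actual WK conversion tool is not shown here; this only demonstrates the 4x shrink from float32 to uint8 plus a small reconstruction error):

```python
import numpy as np

def quantize_uint8(w):
    """Linear (asymmetric) 8-bit quantization of a float weight tensor."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 or 1.0   # guard against constant tensors
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    # Reconstruct approximate float weights from the 8-bit codes.
    return q.astype(np.float32) * scale + lo
```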
S6. Transplant the WK weight file generated in S5 into the deep learning acceleration engine in the camera.
H1. Load the model and parse the network model. H2. Obtain the size of each auxiliary memory segment for the given network task. H3. Perform CNN-type network prediction with multi-node input and output. H4. Input feature maps of multiple nodes. H5. Unload the model. H6. Query whether the task is completed. H7. Record TskBuf address information. H8. Remove TskBuf address information.
Model testing:
C1. The camera captures successive frame images for use. C2. Personnel mark the region of interest in the inspection image to be predicted. C3. Input the inspection image into the registration system embedded in the camera, and extract the position information of the indicator lamp region of interest using the image registration technology.
M1. Feature detection: detect salient, distinctive structures, including edges, contours, intersections and corners; each key point is represented by a descriptor. M2. Feature matching via invariant descriptors: compute the descriptor distances between corresponding keypoints in the two images and return the K best matches with minimum distance for each keypoint. M3. Estimate a transformation model from the established correspondences. M4. Find the corresponding region in the image to be predicted based on the sample image.
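Step M3, estimating the transformation model from matched keypoints, can be sketched as a least-squares fit in numpy; the affine (rather than fully projective) model and the point arrays are illustrative assumptions:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2D affine transform mapping src points onto dst points.
    src, dst: (N, 2) arrays of matched keypoint coordinates, N >= 3."""
    n = len(src)
    A = np.hstack([src, np.ones((n, 1))])        # homogeneous (N, 3)
    # Solve A @ X ~= dst for X (3 x 2), then return the 2 x 3 affine matrix.
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return X.T

def apply_affine(M, pts):
    # Apply a 2 x 3 affine matrix M to an (N, 2) array of points.
    return pts @ M[:, :2].T + M[:, 2]
```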
C4. Input the position information of the indicator lamp to be predicted, extracted in C3, into the deep learning network model trained in the camera, and predict the state of the indicator lamp.
F1. Resize the cropped indicator lamp image to 128 × 128 so that it matches the size of the training images, and perform image normalization with the numerical normalization parameter set to 1/255.0.
F2. Input the test image obtained in F1 into the trained network model. The last fully connected layer of the deep learning network outputs probability values P0 and P1 for the two indicator lamp states, representing the probabilities that the indicator lamp is off and on, respectively. The larger of the two, Pmax, is returned.
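Converting the two final outputs into P0 and P1 and returning Pmax amounts to a two-class softmax; a minimal numpy sketch (the state names are labels assumed from the text, not identifiers from the method):

```python
import numpy as np

STATES = ("off", "on")

def predict_state(logits):
    """Turn the two final-layer outputs into probabilities P0 (off) and
    P1 (on), and return the most likely state with its probability Pmax."""
    z = np.asarray(logits, dtype=float)
    z -= z.max()                          # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()       # softmax over the two states
    i = int(p.argmax())
    return STATES[i], float(p[i])
```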
C5. The prediction result of the indicator lamp state is overlaid on the video stream for display and uploaded to the application system.
Beyond the technical scheme described above, the backbone of the indicator lamp state recognition deep learning network model of the invention may alternatively be VGG-16, GoogLeNet or ResNet.
The core idea of VGG-16 is small convolution kernels: 3 × 3 kernels are used throughout the network. Compared with larger kernels, a cascade of small convolutions has fewer parameters and more nonlinear transformations. However, since VGG-16 ends with three fully connected layers, its overall parameter count is still large.
GoogLeNet abandons the single-path architecture of traditional networks such as AlexNet and VGG and adopts a new deep learning architecture, Inception. It has no fully connected layers, which saves computation and many parameters; its weight file has one-twelfth the parameters of AlexNet.
ResNet was inspired by the Inception module in GoogLeNet and adopts a multi-path architecture. Its core module is the residual block, whose stacked layers explicitly fit a residual mapping instead of fitting the desired feature map directly.
The invention discloses a transformer substation indicator lamp state identification method. The trained deep learning network model is transplanted into an industrial camera; inspection images of the indicator lamps acquired by the camera are input to the network model, the prediction result of the indicator lamp state is returned, and the result is uploaded to the application system through the MQTT or TCP protocol.
The invention transplants and embeds the trained image algorithm model into the camera, which completes inspection image acquisition, image prediction, result feedback and related operations locally, overcoming the poor real-time performance of the prior art. Meanwhile, a camera costs far less than a server, so equipment cost is reduced.
The method combines target positioning and target identification: a bounding box of the region to be predicted is manually marked in one image to be predicted, the designated regions of subsequent offset images are cropped through the image registration technology to obtain the indicator lamp position information, and the deep learning network model then completes the target identification task. Since no deep learning model is used in the target positioning stage, the time to extract the location of the indicator lamp to be predicted is reduced compared with the prior art.
In the method, a bounding box of the region to be predicted is first manually marked in the image to be predicted, and the designated region is cropped from subsequent offset images through the image registration technology. This scheme extracts the position information of a specific indicator lamp from the original inspection image, reducing the complexity of manual work.
The invention provides a transformer substation indicator lamp state identification method which uses deep learning to reduce erroneous operations caused by human factors, improves substation inspection efficiency and reduces labor cost.
A substation indicator light status identification system, the system comprising:
an acquisition module: used for collecting an original inspection image;
a first obtaining module: used for processing the original inspection image based on a registration technology to obtain a training image;
a data expansion module: used for performing data expansion on the training image;
a second obtaining module: used for performing deep learning training on the training image after data expansion to obtain a weight file and a network structure file;
a third obtaining module: used for performing a quantization compression operation on the weight file and the network structure file to obtain a WK weight file;
a deep learning module: used for transplanting the WK weight file into a camera for deep learning to obtain a deep learning network model;
an identification module: used for identifying the substation indicator lamp state through the deep learning network model.
A substation indicator light status identification system, the system comprising a processor and a storage medium;
the storage medium is used for storing instructions;
the processor is used for operating according to the instruction to execute the steps of the method.
A computer-readable storage medium on which a computer program is stored which, when executed by a processor, carries out the steps of the above-mentioned method.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting the same, and although the present invention is described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that: modifications and equivalents may be made to the embodiments of the invention without departing from the spirit and scope of the invention, which is to be covered by the claims.