Equipment identification model combination method and system for machine room inspection robot
1. An equipment identification model combination method for a machine room inspection robot is characterized by comprising the following specific steps:
S1, performing recognition model training on all equipment in the machine room to obtain an equipment recognition model corresponding to each equipment model;
S2, arranging the equipment identification models sequentially from top to bottom in a drag-and-drop combination mode;
S3, combining a plurality of photos taken of the same cabinet through image stitching to obtain a complete picture of the cabinet, and distinguishing each equipment area from top to bottom in the picture by image processing;
S4, matching the equipment identification models obtained by the drag-and-drop arrangement to the actual areas of the equipment;
S5, using the recognition model corresponding to each piece of equipment to respectively recognize the state of that equipment.
2. The method as claimed in claim 1, wherein in S1 the arrangement of the equipment identification models is realized by custom-defining, through a drag-and-drop combination, the top-to-bottom placement order of the equipment in the cabinet to obtain an equipment model arrangement result.
3. The method as claimed in claim 2, wherein the step S2 of arranging the equipment identification models sequentially from top to bottom by a drag-and-drop combination comprises:
S201, stitching the images to obtain a complete cabinet photo;
S202, performing area boundary identification on the complete cabinet photo to preliminarily distinguish each equipment area from top to bottom in the whole cabinet;
S203, matching each equipment area of the cabinet, in sequence, to a single model in the model combination corresponding to the cabinet;
S204, respectively performing state identification on the corresponding equipment with each single equipment identification model in the cabinet's equipment identification model combination.
4. The method as claimed in claim 3, wherein in S202 each area corresponds to a separate piece of equipment, and the partitioned areas of different equipment in the cabinet picture are identified in sequence from top to bottom.
5. The method as claimed in claim 4, wherein in S204 each piece of equipment is identified individually, and the relevant state of the equipment is identified using the cabinet identification model combination obtained from the drag-and-drop arrangement.
6. An equipment identification model combination system for a machine room inspection robot, characterized in that the system specifically comprises a model building module, a drag arrangement module, a combined identification module, a model confirmation module and a state identification module:
a model building module: performing identification model training on all equipment in the machine room to obtain an equipment identification model corresponding to each equipment model;
a drag arrangement module: arranging the equipment identification models sequentially from top to bottom in a drag-and-drop combination mode;
a combined identification module: combining a plurality of photos taken of the same cabinet by image stitching to obtain a complete picture of the cabinet, and distinguishing each equipment area from top to bottom in the picture by image processing;
a model confirmation module: matching the equipment identification models obtained by the drag-and-drop arrangement to the actual areas of the equipment;
a state identification module: respectively recognizing the state of each piece of equipment using the recognition model corresponding to that equipment.
7. The system as claimed in claim 6, wherein the model building module custom-arranges, through a drag-and-drop combination, the top-to-bottom placement order of the equipment in the cabinet to obtain an equipment model arrangement result, thereby realizing the arrangement of the equipment identification models.
8. The system according to claim 7, wherein the drag arrangement module specifically comprises an image stitching module, an area identification module, a sequence identification module and an equipment identification module:
an image stitching module: stitching the images to obtain a complete cabinet photo;
an area identification module: performing area boundary identification on the complete cabinet photo to preliminarily distinguish each equipment area from top to bottom in the whole cabinet;
a sequence identification module: matching each equipment area of the cabinet, in sequence, to a single model in the model combination corresponding to the cabinet;
an equipment identification module: respectively performing state identification on the corresponding equipment with each single equipment identification model in the cabinet's equipment identification model combination.
9. The system of claim 8, wherein each area handled by the area identification module corresponds to a separate piece of equipment, and the area identification module identifies the partitioned areas of different equipment in the cabinet picture in sequence from top to bottom.
10. The system of claim 9, wherein the equipment identification module identifies each piece of equipment individually, and identifies the relevant state of the equipment using the cabinet identification model combination obtained from the drag-and-drop arrangement.
Background
A machine room inspection robot can now efficiently replace human workers in inspecting a machine room. However, a cabinet photo taken by the inspection robot is usually not a complete picture of the cabinet, so several photos must be combined. Taking a single cabinet as the training object affects recognition accuracy, and because the equipment (servers) in a cabinet usually looks very similar, a model trained on the whole cabinet has even lower recognition accuracy. On the other hand, if a particular cabinet is used as the training object, the resulting model is not flexible enough, and since the equipment in any given cabinet of the machine room changes frequently, the training cost per cabinet is high. The invention therefore provides an equipment identification model combination method and system for a machine room inspection robot to solve these problems.
Disclosure of Invention
In view of the problems in the prior art, the invention provides an equipment identification model combination method and system for a machine room inspection robot. The adopted technical solution is as follows: an equipment identification model combination method for a machine room inspection robot comprises the following specific steps:
S1, performing recognition model training on all equipment in the machine room to obtain an equipment recognition model corresponding to each equipment model;
S2, arranging the equipment identification models sequentially from top to bottom in a drag-and-drop combination mode;
S3, combining a plurality of photos taken of the same cabinet through image stitching to obtain a complete picture of the cabinet, and distinguishing each equipment area from top to bottom in the picture by image processing;
S4, matching the equipment identification models obtained by the drag-and-drop arrangement to the actual areas of the equipment;
S5, using the recognition model corresponding to each piece of equipment to respectively recognize the state of that equipment.
In S1, the top-to-bottom placement order of the equipment in the cabinet is custom-defined through a drag-and-drop combination to obtain an equipment model arrangement result, thereby realizing the arrangement of the equipment identification models.
The specific steps of S2, arranging the equipment identification models sequentially from top to bottom in a drag-and-drop combination manner, include:
S201, stitching the images to obtain a complete cabinet photo;
S202, performing area boundary identification on the complete cabinet photo to preliminarily distinguish each equipment area from top to bottom in the whole cabinet;
S203, matching each equipment area of the cabinet, in sequence, to a single model in the model combination corresponding to the cabinet;
S204, respectively performing state identification on the corresponding equipment with each single equipment identification model in the cabinet's equipment identification model combination.
In S202, each area corresponds to a separate piece of equipment, and the partitioned areas of different equipment in the cabinet picture are identified in sequence from top to bottom.
In S204, each piece of equipment is identified independently, and the relevant state of the equipment is identified using the cabinet identification model combination obtained from the drag-and-drop arrangement.
An equipment identification model combination system for a machine room inspection robot specifically comprises a model building module, a drag arrangement module, a combined identification module, a model confirmation module and a state identification module:
a model building module: performing identification model training on all equipment in the machine room to obtain an equipment identification model corresponding to each equipment model;
a drag arrangement module: arranging the equipment identification models sequentially from top to bottom in a drag-and-drop combination mode;
a combined identification module: combining a plurality of photos taken of the same cabinet by image stitching to obtain a complete picture of the cabinet, and distinguishing each equipment area from top to bottom in the picture by image processing;
a model confirmation module: matching the equipment identification models obtained by the drag-and-drop arrangement to the actual areas of the equipment;
a state identification module: respectively recognizing the state of each piece of equipment using the recognition model corresponding to that equipment.
The model building module custom-arranges, through a drag-and-drop combination, the top-to-bottom placement order of the equipment in the cabinet to obtain an equipment model arrangement result, realizing the arrangement of the equipment identification models.
The drag arrangement module specifically comprises an image stitching module, an area identification module, a sequence identification module and an equipment identification module:
an image stitching module: stitching the images to obtain a complete cabinet photo;
an area identification module: performing area boundary identification on the complete cabinet photo to preliminarily distinguish each equipment area from top to bottom in the whole cabinet;
a sequence identification module: matching each equipment area of the cabinet, in sequence, to a single model in the model combination corresponding to the cabinet;
an equipment identification module: respectively performing state identification on the corresponding equipment with each single equipment identification model in the cabinet's equipment identification model combination.
Each area handled by the area identification module corresponds to an independent piece of equipment, and the partitioned areas of different equipment in the cabinet picture are identified from top to bottom.
The equipment identification module identifies each piece of equipment independently, and identifies the relevant state of the equipment using the cabinet identification model combination obtained from the drag-and-drop arrangement.
The invention has the following beneficial effects: each piece of equipment in the machine room is trained with a convolutional neural network to obtain a corresponding model capable of accurately identifying the state of a single piece of equipment, including information such as the positions of the equipment's indicator lights and the meanings of those lights, and the obtained identification model corresponds to the actual equipment model; a plurality of photos taken of the same cabinet are combined by image stitching to obtain a complete picture of the cabinet; the obtained equipment identification models are matched to the actual equipment models; and when the equipment in the cabinet changes, for example through replacement or relocation, the equipment combination of the cabinet only needs to be rearranged by dragging. The establishment of the equipment identification models is therefore more flexible, which reduces the training cost of the models and improves their working efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of the method of the present invention; FIG. 2 is a schematic diagram of the system of the present invention.
Detailed Description
The present invention is further described below in conjunction with the following figures and specific examples so that those skilled in the art may better understand the present invention and practice it, but the examples are not intended to limit the present invention.
Embodiment one:
an equipment identification model combination method for a machine room inspection robot comprises the following specific steps:
S1, performing recognition model training on all equipment in the machine room to obtain an equipment recognition model corresponding to each equipment model;
S2, arranging the equipment identification models sequentially from top to bottom in a drag-and-drop combination mode;
S3, combining a plurality of photos taken of the same cabinet through image stitching to obtain a complete picture of the cabinet, and distinguishing each equipment area from top to bottom in the picture by image processing;
S4, matching the equipment identification models obtained by the drag-and-drop arrangement to the actual areas of the equipment;
S5, respectively recognizing the state of each piece of equipment using the recognition model corresponding to that equipment;
Further, in S1 the top-to-bottom placement order of the equipment in the cabinet is custom-defined through a drag-and-drop combination to obtain an equipment model arrangement result, so as to realize the arrangement of the equipment identification models;
Further, the specific steps of S2, arranging the equipment identification models sequentially from top to bottom in a drag-and-drop combination manner, include:
S201, stitching the images to obtain a complete cabinet photo;
S202, performing area boundary identification on the complete cabinet photo to preliminarily distinguish each equipment area from top to bottom in the whole cabinet;
S203, matching each equipment area of the cabinet, in sequence, to a single model in the model combination corresponding to the cabinet;
S204, respectively performing state identification on the corresponding equipment with each single equipment identification model in the cabinet's equipment identification model combination;
Further, in S202 each area corresponds to an independent piece of equipment, and the partitioned areas of different equipment in the cabinet picture are identified in sequence from top to bottom;
Further, in S204 each piece of equipment is identified individually, and the relevant state of the equipment is identified using the cabinet identification model combination obtained from the drag-and-drop arrangement;
the method comprises the steps of firstly, training each device in a machine room by using a convolutional neural network training method to obtain a corresponding model capable of accurately identifying the state of a single device, including relevant information such as the corresponding position of a device indicator light and the meaning of the indicator light, and enabling the obtained identification model to correspond to the actual device model;
the foreground interface can realize self-defined arrangement of the arrangement sequence of the equipment from top to bottom in the cabinet in a dragging type combination mode, and realize arrangement of the equipment identification models according to the arrangement result of the equipment models, namely the equipment identification model combination in sequence in a certain cabinet;
A plurality of photos of a given cabinet taken by the machine room inspection robot are combined by image stitching to obtain a complete picture of the whole cabinet, and the stitched cabinet image is recognized with the dragged combination of models. In the first step, only the partitioned areas of the different equipment, from top to bottom in the cabinet picture, are identified, each area corresponding to one independent piece of equipment; in the second step, according to the different equipment identified in the first step, each piece of equipment is identified individually with the dragged cabinet identification model combination, so that the relevant state of the equipment is recognized.
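The two-step recognition above could be sketched roughly as follows, again only under stated assumptions: OpenCV's generic stitcher merges the robot's partial photos, a simple horizontal edge-projection heuristic (an assumption standing in for the disclosed image-processing technique) splits the cabinet image into top-to-bottom equipment areas, and each area is passed to the model at the same position in the dragged combination.

# Illustrative sketch of the two-step recognition: stitch, split into
# top-to-bottom equipment areas, then apply the matching model from the
# dragged combination to each area.
import cv2
import torch
from torchvision import transforms

def stitch_cabinet(photos):
    stitcher = cv2.Stitcher_create()
    status, panorama = stitcher.stitch(photos)
    if status != cv2.Stitcher_OK:
        raise RuntimeError("image stitching failed")
    return panorama

def split_equipment_areas(cabinet_img, edge_thresh=20):
    # Assumed heuristic: rows with many edge pixels belong to equipment,
    # rows with few edge pixels are treated as boundaries between devices.
    gray = cv2.cvtColor(cabinet_img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    busy = (edges.sum(axis=1) / 255) > edge_thresh
    areas, start = [], None
    for y, flag in enumerate(busy):
        if flag and start is None:
            start = y                                # equipment area begins
        elif not flag and start is not None:
            areas.append(cabinet_img[start:y])       # equipment area ends
            start = None
    if start is not None:
        areas.append(cabinet_img[start:])
    return areas                                     # top-to-bottom order

def recognise_cabinet(photos, combination, registry):
    panorama = stitch_cabinet(photos)
    areas = split_equipment_areas(panorama)
    tf = transforms.Compose([transforms.ToPILImage(),
                             transforms.Resize((224, 224)),
                             transforms.ToTensor()])
    results = []
    # Areas and the dragged combination are matched purely by their order.
    for area, model_id in zip(areas, combination):
        model = registry[model_id]
        model.eval()
        rgb = cv2.cvtColor(area, cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            logits = model(tf(rgb).unsqueeze(0))
        results.append((model_id, int(logits.argmax())))
    return results

# e.g. recognise_cabinet(robot_photos, cabinet_a_combination, model_registry)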
When the equipment in the cabinet changes, for example through replacement or relocation, the equipment combination of the cabinet only needs to be rearranged by dragging.
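Continuing the hypothetical combination above, such a change only edits the ordered list; the per-equipment recognition models themselves are reused as they are:

# Hypothetical edit of the dragged combination after a hardware change;
# the new identifier is assumed to have a trained model in the registry.
cabinet_a_combination[1] = "server_model_z"          # equipment in slot 2 replaced
cabinet_a_combination.insert(2, "switch_model_v2")   # a new device added below it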
Embodiment two:
An equipment identification model combination system for a machine room inspection robot specifically comprises a model building module, a drag arrangement module, a combined identification module, a model confirmation module and a state identification module:
a model building module: performing identification model training on all equipment in the machine room to obtain an equipment identification model corresponding to each equipment model;
a drag arrangement module: arranging the equipment identification models sequentially from top to bottom in a drag-and-drop combination mode;
a combined identification module: combining a plurality of photos taken of the same cabinet by image stitching to obtain a complete picture of the cabinet, and distinguishing each equipment area from top to bottom in the picture by image processing;
a model confirmation module: matching the equipment identification models obtained by the drag-and-drop arrangement to the actual areas of the equipment;
a state identification module: respectively recognizing the state of each piece of equipment using the recognition model corresponding to that equipment;
Furthermore, the model building module custom-arranges, through a drag-and-drop combination, the top-to-bottom placement order of the equipment in the cabinet to obtain an equipment model arrangement result, realizing the arrangement of the equipment identification models.
Further, the drag arrangement module specifically comprises an image stitching module, an area identification module, a sequence identification module and an equipment identification module:
an image stitching module: stitching the images to obtain a complete cabinet photo;
an area identification module: performing area boundary identification on the complete cabinet photo to preliminarily distinguish each equipment area from top to bottom in the whole cabinet;
a sequence identification module: matching each equipment area of the cabinet, in sequence, to a single model in the model combination corresponding to the cabinet;
an equipment identification module: respectively performing state identification on the corresponding equipment with each single equipment identification model in the cabinet's equipment identification model combination;
Furthermore, each area handled by the area identification module corresponds to an independent piece of equipment, and the partitioned areas of different equipment in the cabinet picture are identified from top to bottom;
Furthermore, the equipment identification module identifies each piece of equipment individually, and identifies the relevant state of the equipment using the cabinet identification model combination obtained from the drag-and-drop arrangement.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.