Joint evaluation method and device of dual-network model and storage medium

Document No.: 8501  Publication date: 2021-09-17

1. A joint evaluation method for a dual-network model, characterized by comprising the following steps:

acquiring a plurality of images captured by a vehicle-mounted BSD camera;

recognizing each image with both a semantic segmentation model and a target detection model, and determining whether a valid target exists, i.e., a target to which the two models assign the same category and corresponding positions;

scoring each image according to whether the valid target exists in the image and, where it exists, whether it matches an alarm target labeled within the alarm region of the same image;

and aggregating the scores of the plurality of images, and determining an evaluation result for the semantic segmentation model and the target detection model according to the aggregated score.

2. The method of claim 1, wherein scoring each image according to whether the valid target exists and, where it exists, whether it matches an alarm target labeled within the alarm region of the same image comprises:

if a valid target in an image does not match any alarm target labeled in the alarm region of the same image, assigning the image a low score;

if a valid target in an image matches an alarm target labeled in the alarm region of the same image, assigning the image a high score;

if an alarm target labeled in the alarm region of an image was not determined to be a valid target, assigning the image a low score;

and aggregating all scores assigned within each image.

3. The method of claim 1, wherein scoring each image according to whether the valid target exists and, where it exists, whether it matches an alarm target labeled within the alarm region of the same image comprises:

dividing the alarm region of each image into a plurality of regions according to degree of danger, and setting a different weight for each region;

and scoring each image according to whether a valid target exists in each region of the image, whether the valid target (where it exists) matches an alarm target labeled in the corresponding region of the same image, and the region weights.

4. The method according to claim 3, wherein scoring each image according to whether a valid target exists in each region, whether the valid target (where it exists) matches an alarm target labeled in the corresponding region of the same image, and the region weights comprises:

in a region of an image, if a valid target does not match any alarm target labeled in the same region, assigning the region a low score;

in a region of an image, if a valid target matches an alarm target labeled in the same region, assigning the region a high score;

if an alarm target labeled in a region of an image was not determined to be a valid target, assigning the region a low score;

and, for each image, calculating the image score from the region weights and region scores.

5. The method according to any one of claims 1-4, wherein the plurality of images are labeled with the category and position of each alarm target;

and wherein, before scoring each image according to whether the valid target exists and, where it exists, whether it matches an alarm target labeled within the alarm region of the same image, the method further comprises:

if a valid target in an image has the same category as, and a position corresponding to, an alarm target labeled in the alarm region of the same image, determining that the valid target matches the alarm target;

and if a valid target in an image differs in category from, or does not correspond in position to, the alarm target labeled in the alarm region of the same image, determining that the valid target does not match the alarm target.

6. The method of claim 5, wherein determining that a valid target matches an alarm target when it has the same category as, and a position corresponding to, an alarm target labeled in the alarm region of the same image comprises:

if the category of a valid target in an image is the same as that of an alarm target labeled in the alarm region of the same image, and the intersection-over-union (IoU) of their detection boxes exceeds a set value, determining that the valid target matches the alarm target.

7. The method according to any one of claims 1-4, further comprising, before recognizing each image with the semantic segmentation model and the target detection model:

training on a training set to obtain a plurality of semantic segmentation models and a plurality of target detection models;

freely combining the semantic segmentation models and the target detection models to obtain a plurality of groups, each group consisting of one semantic segmentation model and one target detection model;

and performing the subsequent image recognition operations on each group.

8. The method of claim 7, further comprising, after scoring each image according to whether the valid target exists and whether it matches an alarm target labeled in the alarm region of the same image:

selecting, from the plurality of groups of semantic segmentation and target detection models, a combination whose evaluation result meets the requirements;

and performing an actual road test on the combination to obtain a road test evaluation result.

9. An electronic device, characterized in that the electronic device comprises:

a processor and a memory;

the processor is configured to perform the steps of the joint evaluation method of a dual-network model according to any one of claims 1 to 8 by calling a program or instructions stored in the memory.

10. A computer-readable storage medium, characterized in that it stores a program or instructions for causing a computer to execute the steps of the joint evaluation method of a dual-network model according to any one of claims 1 to 8.

Background

Vehicle-mounted BSD (Blind Spot Detection) cameras (hereinafter, cameras) are installed on both sides of the rear of a vehicle to monitor the blind zones on both rear sides while the vehicle is driving. When a target object, such as another vehicle or a pedestrian, is detected in an image captured by the camera, an alarm prompt is issued.

Currently, image processing is generally performed by a semantic segmentation model and a target detection model to identify target objects. The accuracy of these two models directly determines the accuracy of recognition; inaccurate models lead to false alarms or missed alarms. Conventional model evaluation methods are either road testing or comparing the predicted bounding boxes of detected targets against labeled bounding boxes to compute precision and recall. Road testing has high labor and time costs and low reliability, and careless mistakes easily bias the final evaluation result. Bounding-box evaluation is suitable only for a single model and cannot measure the results of two models jointly: single-model evaluation provides no intuitive information about which detected objects are correct or wrong, and when the two models differ in accuracy on different objects, the accuracy of their combination can be unsatisfactory even if each model's overall accuracy is high. Single-model evaluation is also inflexible, since it cannot accumulate and count test information for a region of interest across multiple images. Furthermore, quantitative and qualitative assessments are often mixed together, leading to vague metrics.

In view of the above, the present invention is particularly proposed.

Disclosure of Invention

To solve the above technical problems, the present invention provides a joint evaluation method, device, and storage medium for a dual-network model, which realize joint evaluation of a semantic segmentation model and a target detection model.

An embodiment of the present invention provides a joint evaluation method for a dual-network model, comprising the following steps:

acquiring a plurality of images captured by a vehicle-mounted BSD camera;

recognizing each image with both a semantic segmentation model and a target detection model, and determining whether a valid target exists, i.e., a target to which the two models assign the same category and corresponding positions;

scoring each image according to whether the valid target exists in the image and, where it exists, whether it matches an alarm target labeled within the alarm region of the same image;

and aggregating the scores of the plurality of images, and determining an evaluation result for the semantic segmentation model and the target detection model according to the aggregated score.

An embodiment of the present invention provides an electronic device, including:

a processor and a memory;

the processor is configured to execute the steps of the joint evaluation method of the dual network model according to any embodiment by calling a program or instructions stored in the memory.

Embodiments of the present invention provide a computer-readable storage medium, which stores a program or instructions for causing a computer to execute the steps of the joint evaluation method for dual-network models according to any embodiment.

The embodiment of the invention has the following technical effects:

in the embodiments, a plurality of images are input into a semantic segmentation model and a target detection model respectively, and it is determined whether valid targets exist, i.e., targets to which both models assign the same category and corresponding positions. In other words, the outputs of the two models are fused to decide whether a valid target exists, so that objects both models recognize correctly are retained and wrongly recognized objects are filtered out. The recognition capability of the two models for alarm targets within the alarm region is then jointly evaluated according to whether each valid target matches an alarm target labeled in the alarm region of the same image.

Drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.

Fig. 1 is a flowchart of a joint evaluation method for a dual-network model according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of the alarm area division provided by the embodiment of the invention;

fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

The joint evaluation method for a dual-network model provided by embodiments of the present invention is mainly applicable to jointly evaluating a semantic segmentation model and a target detection model in a blind-zone early-warning scenario. The method may be executed by an electronic device integrated into the vehicle-mounted BSD camera, or by an electronic device that is independent of the camera and communicatively connected to it.

Fig. 1 is a flowchart of a joint evaluation method for a dual-network model according to an embodiment of the present invention. Referring to fig. 1, the joint evaluation method of the dual-network model specifically includes:

and S110, acquiring a plurality of images shot by the vehicle-mounted BSD camera.

And starting the BSD system, and shooting a plurality of images through the vehicle-mounted BSD camera in a vehicle running state or a static state.

S120, recognizing each image with a semantic segmentation model and a target detection model, and determining whether valid targets with the same category and corresponding positions exist.

The alarm targets within the alarm regions of the plurality of images are labeled (with category and position, the position represented by a rectangular box) to form a test set. Here, an alarm target is a preset target that should trigger alarm information, such as a pedestrian or a rider. The vehicle-mounted BSD cameras are mounted at the left and right rear of the vehicle body; the field of view is shown in FIG. 2. The alarm region (also called the region of interest or blind zone) is a rectangular area within a set distance (e.g., 1.5 meters) of the vehicle body, i.e., the areas numbered 1 to 9 in FIG. 2. The position, shape, and size of the alarm region may be set according to the actual situation; FIG. 2 is only an example. The alarm region is mapped from the world coordinate system into each image to obtain the alarm region in the image, and the alarm regions are consistent across the plurality of images.

The plurality of images are input sequentially into the semantic segmentation model to obtain the category of each pixel output by the model, e.g., pixels belonging to a pedestrian, a rider, a vehicle, or the background. Meanwhile, the plurality of images are input sequentially into the target detection model to obtain the category and position of each target object output by the model. The position of a target object is the position of the rectangle circumscribing the object.

This embodiment does not restrict the structures of the semantic segmentation model and the target detection model: the semantic segmentation model may be U-Net, FCN, or SegNet, and the target detection model may be Faster R-CNN, SSD, or YOLO. This embodiment aims at a joint evaluation of two models to obtain a better-performing group of models, which requires candidate models of different accuracy to choose from. Specifically, before the images are input into the semantic segmentation model and the target detection model, a plurality of semantic segmentation models and a plurality of target detection models are obtained through training on a training set. The semantic segmentation models and target detection models may use different or identical training sets, and the multiple semantic segmentation/target detection models may have different structures, or the same structure but different numbers of training iterations. The plurality of semantic segmentation models and target detection models are then freely combined to obtain a plurality of groups; for example, 2 semantic segmentation models and 2 target detection models can be freely combined into 4 groups. The subsequent image input operations are then performed for each group, and each group of models is jointly evaluated according to its output results.
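As an illustrative sketch only, the free combination of segmentation and detection models into groups can be expressed with a Cartesian product; the model names below are placeholders, not identifiers from the source:

```python
from itertools import product

# Placeholder identifiers; real code would hold trained model objects.
segmentation_models = ["unet_a", "segnet_a"]
detection_models = ["yolo_a", "ssd_a"]

# Freely combine every segmentation model with every detection model:
# 2 segmentation models x 2 detection models = 4 groups to evaluate.
model_groups = list(product(segmentation_models, detection_models))
```

Each resulting pair is then run through the recognition and scoring steps below, and the pairs are ranked by their aggregated scores.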

From the category of each pixel, the positions of the pixels belonging to a given category can be obtained; that is, pixels of the same color represent one object. A target object represents its position with a rectangular box and carries its category as an attribute. By matching the categories and positions of the objects output by the two models, it can be determined whether objects with the same category and corresponding positions exist. For convenience of description, a target with the same category and corresponding positions in the outputs of the two models is called a valid target.

In a practical application scenario, whether a valid target exists can be determined accurately by mapping the position of each target object onto the semantic segmentation result and computing the proportion of pixels of the same category within that position. Specifically, for each target object output by the target detection model, the categories of the pixels covered by the object's rectangular box are first obtained; some of these pixels may share the object's category while others may not. Next, the target pixels whose category matches the target object are selected from the covered pixels, e.g., the pixels belonging to the same rider. If no target pixels are found, the target object differs in category from every pixel it covers and is not a valid target. If the proportion of target pixels exceeds a set value, e.g., more than 80% of all pixels inside the rectangular box, the target object is determined to be a valid target; otherwise, the target object differs in category from most of the pixels it covers and is not a valid target.
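The valid-target check described above (detection box mapped onto the segmentation result, same-category pixel ratio compared against a set value such as 80%) can be sketched as follows; the function name and the mask representation are illustrative assumptions, not part of the specification:

```python
def is_valid_target(seg_mask, box, category, ratio_threshold=0.8):
    # seg_mask: per-pixel class map (list of rows) from the segmentation model.
    # box: (x1, y1, x2, y2) rectangle from the target detection model.
    # The object is a valid target when more than ratio_threshold of the
    # pixels inside its box share its category (80% in the text's example).
    x1, y1, x2, y2 = box
    pixels = [seg_mask[y][x] for y in range(y1, y2) for x in range(x1, x2)]
    if not pixels:
        return False
    same_class = sum(1 for c in pixels if c == category)
    return same_class / len(pixels) > ratio_threshold
```

With a 10x10 mask whose central 6x6 block belongs to class 1, a box tight around the block passes the check, while the full-image box (only 36% class-1 pixels) fails.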

A valid target is therefore an object recognized by both the semantic segmentation model and the target detection model at the same position and with the same category; determining a valid target shows, at a minimum, that the two models produce a joint recognition result with a certain level of accuracy.

S130, scoring each image according to whether the valid target exists in the image and, where it exists, whether it matches an alarm target labeled in the alarm region of the same image.

S140, aggregating the scores of the plurality of images, and determining the evaluation result of the semantic segmentation model and the target detection model according to the aggregated score.

Per the description above, some images are determined to contain a valid target and some are not. When a valid target exists, it must be further matched against the alarm targets labeled in the alarm region of the same image, because a valid target may be a harmless object such as a lane line, or an object located outside the vehicle's alarm region that poses no danger; the valid targets therefore need to be screened against the labeled alarm targets. If a valid target has the same category as, and a position corresponding to, an alarm target labeled in the alarm region of the same image, the image is given a high score, indicating a good joint evaluation result for the two models. If the valid target differs from the alarm target in category or position, or an alarm target was not determined to be a valid target, the image is given a low score, indicating a poor joint evaluation result. The scores of the plurality of images are aggregated; they reflect the dual-network model's correct, missed, and wrong recognition of targets and can serve as the evaluation result of the dual-network model.

The above is a qualitative assessment of the two models, made by determining whether a valid target exists and whether it matches an alarm target. The two models can be assessed quantitatively by computing scores as follows. Optionally, if a valid target in an image does not match any alarm target labeled in the alarm region of the same image, i.e., the category differs or the position does not correspond, the two models recognized it wrongly (a false recognition) and the image is assigned a low score. If a valid target matches a labeled alarm target, i.e., same category and corresponding position, the two models recognized it correctly and the image is assigned a high score. If a labeled alarm target in the alarm region of an image was not determined to be a valid target, the two models missed it (a missed recognition) and the image is assigned a low score. All scores assigned within an image are summed so that each image has one score, and the scores of the plurality of images are aggregated as the joint evaluation result of the semantic segmentation model and the target detection model. "Low" and "high" are relative: the scores may be all positive, all negative, or a mix of positive and negative values. A high total score can therefore be obtained only when both models identify, and fully identify, the alarm targets within the alarm region.
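A minimal sketch of this quantitative per-image scoring follows. The source fixes only that matched targets score high and that false or missed recognitions score low; the concrete values t=1, p=-1, n=-1 and the function name are assumptions for illustration:

```python
def score_image(num_valid, num_alarm, num_matched, t=1.0, p=-1.0, n=-1.0):
    # num_valid: valid targets found in the image by the two models.
    # num_alarm: alarm targets labeled in the image's alarm region.
    # num_matched: valid targets matched to labeled alarm targets.
    # t, p, n: assumed scores for a correct recognition, a false
    # recognition, and a missed recognition (not fixed by the source).
    false_recognitions = num_valid - num_matched    # valid but no alarm match
    missed_recognitions = num_alarm - num_matched   # alarm target not found
    return t * num_matched + p * false_recognitions + n * missed_recognitions
```

For example, an image with 3 valid targets, 2 labeled alarm targets, and 2 matches has one false recognition and no missed recognition; summing the per-image scores over all images gives the joint evaluation result.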

Optionally, when multiple groups of models participate in the joint evaluation, the method further includes, after S140: selecting, from the multiple groups of semantic segmentation and target detection models, a combination whose evaluation result meets the requirements; and performing an actual road test on the combination to obtain a road test evaluation result.

Several top-ranked combinations may meet the requirements. These combinations are each applied to a BSD product, vehicles carrying the products are driven on a road on which objects that should or should not trigger an alarm have been placed in advance, and whether alarm information (e.g., voice or light alarms) is generated is tested, yielding the road test evaluation results. Performing the road test on top of the joint evaluation effectively reduces the amount of data involved in road testing, and combining the joint evaluation with the road test evaluation selects the best-performing combination more effectively.

In this embodiment, a plurality of images are input into a semantic segmentation model and a target detection model respectively, and it is determined whether valid targets exist, i.e., targets to which both models assign the same category and corresponding positions. In other words, the outputs of the two models are fused to decide whether a valid target exists, so that objects both models recognize correctly are retained and wrongly recognized objects are filtered out. The recognition capability of the two models for alarm targets within the alarm region is then jointly evaluated according to whether each valid target matches an alarm target labeled in the alarm region of the same image.

This embodiment and the following ones impose differentiated requirements on the two models' performance in different regions: in more dangerous regions the models must perform better, so that alarms are triggered with high probability and driving safety is ensured, while poor performance in relatively safe regions does not significantly affect the evaluation result. A combination obtained by such a joint evaluation is better suited to practical situations. Specifically, the alarm region of each image is divided into a plurality of regions according to degree of danger, and a different weight is set for each region; each image is then scored according to whether a valid target exists in each region, whether the valid target (where it exists) matches an alarm target labeled in the corresponding region of the same image, and the region weights.

The degree of danger characterizes the probability of a traffic accident; FIG. 2 is a schematic diagram of the alarm region division provided by the embodiment of the invention. The closer to the vehicle body, the more easily a traffic accident occurs and the higher the degree of danger; the inner-wheel-difference area near the front and middle of the vehicle is likewise accident-prone and of higher danger. The alarm region can therefore be divided into 9 parts according to lateral distance from the vehicle body and longitudinal position (vehicle front, middle, and rear): regions at a greater lateral distance are given smaller weights, regions at the front and middle are given larger weights, and regions at the rear are given smaller weights. Alternatively, the image positions of targets involved in historical traffic accidents can be counted, and positions with more accidents given higher weights. Then, in a region of an image, if a valid target does not match the alarm target labeled in the same region, i.e., the category differs or the position does not correspond, the region is given a low score; if a valid target matches an alarm target labeled in the same region, i.e., same category and corresponding position, the region is given a high score; and if an alarm target labeled in a region was not determined to be a valid target, the region is given a low score. For each image, the image score is calculated from the region weights and region scores, and the scores of the plurality of images are aggregated as the joint evaluation result of the semantic segmentation model and the target detection model. "Low" and "high" are relative: the scores may be all positive, all negative, or a mix of positive and negative values.

In one example, if a valid target does not match an alarm target, a region score p is assigned; if a valid target matches an alarm target, a region score t is assigned; and if an alarm target is not determined to be a valid target, a region score n is assigned. Suppose the alarm region is divided into 3 regions A, B, and C with weights W_A, W_B, and W_C respectively. In region A there are j1 valid targets matching alarm targets, j2 valid targets not matching any alarm target, and j3 alarm targets not determined to be valid targets; in region B the corresponding counts are k1, k2, and k3; and in region C they are m1, m2, and m3. The scores of an image can then be written as:

S_TP = t * (W_A * j1 + W_B * k1 + W_C * m1), the total score of correct alarms in the three regions of the image;

S_FP = p * (W_A * j2 + W_B * k2 + W_C * m2), the total score of false recognitions in the three regions;

S_FN = n * (W_A * j3 + W_B * k3 + W_C * m3), the total score of missed recognitions in the three regions;

S_total = S_TP + S_FP + S_FN, the total score of the image; and

Y = the sum of S_total over the r images, the total score of the plurality of images.

The pair of models with the highest total score Y is optimal.
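The weighted total in this example can be sketched as follows; the region weights, counts, and the score values t=1, p=-1, n=-1 are illustrative assumptions, not values fixed by the source:

```python
def weighted_image_score(stats, weights, t=1.0, p=-1.0, n=-1.0):
    # stats[region] = (matched, unmatched, missed) counts for that region,
    # i.e. (j1, j2, j3) in the example; weights[region] = W for that region.
    # Implements S_total = sum over regions of W * (t*j1 + p*j2 + n*j3).
    total = 0.0
    for region, (matched, unmatched, missed) in stats.items():
        total += weights[region] * (t * matched + p * unmatched + n * missed)
    return total

# Example with three regions A, B, C and assumed weights.
weights = {"A": 0.5, "B": 0.3, "C": 0.2}
stats = {"A": (2, 1, 0), "B": (1, 0, 1), "C": (0, 0, 0)}
image_score = weighted_image_score(stats, weights)
# 0.5*(2-1) + 0.3*(1-1) + 0.2*0 = 0.5
```

Summing this per-image score over the r images gives the total score Y used to rank the model pairs.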

Per the description of the above embodiments, the alarm targets in the alarm region are known in advance for the plurality of images input to the semantic segmentation and target detection models; they are ground-truth information in the images and can be obtained through manual labeling. If a valid target in an image has the same category as, and a position corresponding to, an alarm target labeled in the alarm region of the same image, the valid target matches the alarm target; if it differs in category, or its position does not correspond, it does not match. When both the valid target and the alarm target represent their positions with rectangular boxes, the Intersection over Union (IoU) of two boxes of the same category is calculated; if it exceeds a set value, e.g., 70%, the two boxes overlap sufficiently and the positions are deemed to correspond.
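The IoU computation used by this matching rule can be sketched for axis-aligned boxes as follows; this is an illustrative sketch, not the patented implementation:

```python
def iou(box_a, box_b):
    # Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2).
    # Two same-category targets are deemed to correspond in position when
    # this value exceeds a set threshold (70% in the text's example).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Identical boxes give an IoU of 1.0, half-overlapping boxes of equal size give 1/3, and disjoint boxes give 0.0, so a 70% threshold demands substantial overlap between the valid target and the labeled alarm target.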

Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 3, the electronic device 400 includes one or more processors 401 and memory 402.

The processor 401 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 400 to perform desired functions.

Memory 402 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by processor 401 to implement the joint evaluation method for dual network models of any of the embodiments of the present invention described above and/or other desired functions. Various contents such as initial external parameters, threshold values, etc. may also be stored in the computer-readable storage medium.

In one example, the electronic device 400 may further include: an input device 403 and an output device 404, which are interconnected by a bus system and/or other form of connection mechanism (not shown). The input device 403 may include, for example, a keyboard, a mouse, and the like. The output device 404 can output various information to the outside, including warning prompt information, braking force, etc. The output devices 404 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.

Of course, for simplicity, only some of the components of the electronic device 400 relevant to the present invention are shown in fig. 3, omitting components such as buses, input/output interfaces, and the like. In addition, electronic device 400 may include any other suitable components depending on the particular application.

In addition to the above-described methods and apparatus, embodiments of the present invention may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of the joint evaluation method of a dual network model provided by any of the embodiments of the present invention.

The computer program product may write program code for carrying out operations of embodiments of the present invention in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.

Furthermore, embodiments of the present invention may also be a computer-readable storage medium having stored thereon computer program instructions, which, when executed by a processor, cause the processor to perform the steps of the joint evaluation method of a dual network model provided by any of the embodiments of the present invention.

The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

It is to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to limit the scope of the present application. As used in the specification and claims of this application, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other like elements in a process, method, or apparatus that comprises the element.

It is further noted that the terms "center," "upper," "lower," "left," "right," "vertical," "horizontal," "inner," "outer," and the like indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience and simplicity of description; they do not indicate or imply that the referenced devices or elements must have a particular orientation or be constructed and operated in a particular orientation, and thus should not be construed as limiting the invention. Unless expressly stated or limited otherwise, the terms "mounted," "connected," "coupled," and the like are to be construed broadly and encompass, for example, fixed, removable, or integral connections; the connections may be mechanical or electrical; they may be direct, indirect through intervening media, or internal communications between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art on a case-by-case basis.

Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them; while the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the technical solutions of the embodiments of the present invention.
