Method and device for identifying style of ceramic sanitary appliance and storage medium
1. A style identification method of ceramic sanitary ware is characterized by comprising the following steps:
acquiring an image set of the ceramic sanitary appliance; the image set includes a top view, a bottom view, and a side view of the ceramic sanitary appliance;
inputting the image set into a trained style recognition model to obtain a classification result;
the method for acquiring the style recognition model after training comprises the following steps:
inputting the training data into the style recognition model to obtain the positioning position and the style type of the ceramic sanitary appliance;
training a style recognition model according to the positioning position, the style type and a target loss function to obtain a trained style recognition model; the first weight of the style category in the target loss function is greater than the second weight of the positioning position.
2. The method for identifying the style of the ceramic sanitary ware according to claim 1, wherein: the method for inputting the training data into the style recognition model to obtain the positioning position and the style type of the ceramic sanitary appliance comprises the following steps:
inputting the training data into a style recognition model for feature extraction to obtain a plurality of feature maps under different scales;
generating a plurality of default frames with different sizes on each feature map, and predicting each default frame to obtain a prediction result; the default frame is used for positioning the ceramic sanitary appliance, and the prediction result comprises a style category;
and performing compensation processing on the default frame to obtain a positioning position.
3. The method for identifying the style of ceramic sanitary ware according to claim 2, wherein: the compensating the default frame to obtain a positioning position includes:
acquiring a compensation error of the default frame; the compensation error comprises a central point abscissa compensation value of the default frame, a central point ordinate compensation value of the default frame, a width compensation value of the default frame and a height compensation value of the default frame;
determining the horizontal coordinate of the center point of the default frame after compensation according to the horizontal coordinate of the current center point of the default frame and the product of the current width of the default frame and the center point abscissa compensation value of the default frame;
determining the center point vertical coordinate of the default frame after compensation according to the current center point vertical coordinate of the default frame and the product of the current height of the default frame and the center point vertical coordinate compensation value of the default frame;
determining the width of the default frame after compensation according to the current width of the default frame and the width compensation value of the default frame;
and determining the height of the default frame after compensation according to the current height of the default frame and the height compensation value of the default frame.
4. The method for identifying the style of ceramic sanitary ware according to claim 2, wherein: the predicting each default frame to obtain a prediction result includes:
predicting each default frame, and determining a plurality of category probabilities corresponding to each default frame; the class probability represents the probability of different styles;
and determining a finally reserved default frame through non-maximum suppression according to the category probability, and taking the style category corresponding to the maximum category probability in the finally reserved default frame as a prediction result of the finally reserved default frame.
5. The method for identifying the style of ceramic sanitary ware according to claim 2, wherein: the training data includes a first label and a second label, and the training of the style recognition model according to the positioning position, the style category and the target loss function to obtain the trained style recognition model includes:
determining a matching value according to the matching result of the positioning position and the first label; the first label comprises a real position of the ceramic sanitary appliance;
determining a first loss parameter by adopting a first loss function according to the style type and the matching value;
determining a second loss parameter by adopting a second loss function according to the matching value, the positioning position, the first label and the second label; the second label comprises a real style category of the ceramic sanitary appliance;
weighting the first loss parameter, the first weight, the second loss parameter and the second weight, and determining a loss value of the target loss function according to a weighting result;
and training the style recognition model according to the loss value, and updating model parameters of the style recognition model to obtain the trained style recognition model.
6. The method for identifying the style of ceramic sanitary ware according to claim 5, wherein: determining a matching value according to a matching result of the positioning position and the first tag, including:
when the intersection ratio of the positioning position and the first label is greater than or equal to an intersection threshold value, determining that the matching value is 1;
or,
and when the intersection ratio of the positioning position and the first label is smaller than an intersection threshold value, determining that the matching value is 0.
7. The method for identifying the style of ceramic sanitary ware according to claim 6, wherein: the determining a loss value of the target loss function according to the weighting result includes:
determining a total number of default boxes for which the match value is 1;
and determining the loss value of the target loss function according to the ratio of the weighting result to the total number.
8. A style identification device of ceramic sanitary ware is characterized by comprising:
the acquisition module is used for acquiring an image set of the ceramic sanitary appliance; the set of images includes a top view, a bottom view, and a side view of the product;
the classification module is used for inputting the image set to the trained style recognition model to obtain a classification result;
the method for acquiring the style recognition model after training comprises the following steps:
inputting the training data into the style recognition model to obtain the positioning position and the style type of the ceramic sanitary appliance;
training a style recognition model according to the positioning position, the style type and a target loss function to obtain a trained style recognition model; the first weight of the style category in the target loss function is greater than the second weight of the positioning position.
9. A style identification device of a ceramic sanitary appliance, characterized by comprising a processor and a memory;
the memory stores a program;
the processor executes the program to implement the method of any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the storage medium stores a program which, when executed by a processor, implements the method according to any one of claims 1-7.
Background
The production of ceramic sanitary ware is a traditional industry. With the rising level of automation and semi-automation in manufacturing in recent years, the use of robots has also spread to the ceramic sanitary ware industry. However, because of the particularities of the manufacturing process and production flow of ceramic sanitary ware, robots still face certain limitations in processes such as glaze spraying and polishing. A traditional ceramic sanitary ware production line is characterized by small batches, mixed production and the like, yet current robotic glaze-spraying and polishing technologies require the motion trajectory and posture parameters of the manipulator to be taught in advance for each specific ceramic sanitary ware product, and the corresponding teaching parameters must be loaded whenever the product style changes. Therefore, for a given ceramic sanitary ware product, the model of the product must be identified before the robot can act.
Existing ceramic product style identification mainly relies on bar code scanning and RFID radio frequency identification. Both require a person, before the process, to bind the corresponding bar code or RFID chip to the ceramic product, so they fundamentally belong to manual identification, and they are cumbersome to operate and inefficient. In addition, subsequent processes that change the appearance of the ceramic product, such as kiln firing or glaze spraying, can damage the bar code or RFID chip and cause identification failures in later processes, and enterprises must continuously pay for these consumables, so the cost is high.
Disclosure of Invention
In view of the above, in order to solve the above technical problems, the present invention provides a method, an apparatus and a storage medium for identifying styles of ceramic sanitary wares, which can improve classification efficiency and reduce cost.
The technical scheme adopted by the invention is as follows:
a style identification method of ceramic sanitary ware comprises the following steps:
acquiring an image set of the ceramic sanitary appliance; the image set includes a top view, a bottom view, and a side view of the ceramic sanitary appliance;
inputting the image set into a trained style recognition model to obtain a classification result;
the method for acquiring the style recognition model after training comprises the following steps:
inputting the training data into the style recognition model to obtain the positioning position and the style type of the ceramic sanitary appliance;
training a style recognition model according to the positioning position, the style type and a target loss function to obtain a trained style recognition model; the first weight of the style category in the target loss function is greater than the second weight of the positioning position.
Further, the inputting of the training data into the style recognition model to obtain the positioning position and the style type of the ceramic sanitary appliance includes:
inputting the training data into a style recognition model for feature extraction to obtain a plurality of feature maps under different scales;
generating a plurality of default frames with different sizes on each feature map, and predicting each default frame to obtain a prediction result; the default frame is used for positioning the ceramic sanitary appliance, and the prediction result comprises a style category;
and performing compensation processing on the default frame to obtain a positioning position.
Further, the performing compensation processing on the default frame to obtain a positioning position includes:
acquiring a compensation error of the default frame; the compensation error comprises a central point abscissa compensation value of the default frame, a central point ordinate compensation value of the default frame, a width compensation value of the default frame and a height compensation value of the default frame;
determining the horizontal coordinate of the center point of the default frame after compensation according to the horizontal coordinate of the current center point of the default frame and the product of the current width of the default frame and the center point abscissa compensation value of the default frame;
determining the center point vertical coordinate of the default frame after compensation according to the current center point vertical coordinate of the default frame and the product of the current height of the default frame and the center point vertical coordinate compensation value of the default frame;
determining the width of the default frame after compensation according to the current width of the default frame and the width compensation value of the default frame;
and determining the height of the default frame after compensation according to the current height of the default frame and the height compensation value of the default frame.
Further, the predicting each default frame to obtain a prediction result includes:
predicting each default frame, and determining a plurality of category probabilities corresponding to each default frame; the class probability represents the probability of different styles;
and determining a finally reserved default frame through non-maximum suppression according to the category probability, and taking the style category corresponding to the maximum category probability in the finally reserved default frame as a prediction result of the finally reserved default frame.
Further, the training data includes a first label and a second label, and the training of the style identification model according to the positioning position, the style category, and the target loss function to obtain the trained style identification model includes:
determining a matching value according to the matching result of the positioning position and the first label; the first label comprises a real position of the ceramic sanitary appliance;
determining a first loss parameter by adopting a first loss function according to the style type and the matching value;
determining a second loss parameter by adopting a second loss function according to the matching value, the positioning position, the first label and the second label; the second label comprises a real style category of the ceramic sanitary appliance;
weighting the first loss parameter, the first weight, the second loss parameter and the second weight, and determining a loss value of the target loss function according to a weighting result;
and training the style recognition model according to the loss value, and updating model parameters of the style recognition model to obtain the trained style recognition model.
Further, the determining a matching value according to the matching result between the positioning location and the first tag includes:
when the intersection ratio of the positioning position and the first label is greater than or equal to an intersection threshold value, determining that the matching value is 1;
or,
and when the intersection ratio of the positioning position and the first label is smaller than an intersection threshold value, determining that the matching value is 0.
Further, the determining the loss value of the target loss function according to the weighting result includes:
determining a total number of default boxes for which the match value is 1;
and determining the loss value of the target loss function according to the ratio of the weighting result to the total number.
The invention also provides a style identification device of the ceramic sanitary appliance, which comprises:
the acquisition module is used for acquiring an image set of the ceramic sanitary appliance; the set of images includes a top view, a bottom view, and a side view of the product;
the classification module is used for inputting the image set to the trained style recognition model to obtain a classification result;
the method for acquiring the style recognition model after training comprises the following steps:
inputting the training data into the style recognition model to obtain the positioning position and the style type of the ceramic sanitary appliance;
training a style recognition model according to the positioning position, the style type and a target loss function to obtain a trained style recognition model; the first weight of the style category in the target loss function is greater than the second weight of the positioning position.
The invention also provides a device, comprising a processor and a memory;
the memory stores a program;
the processor executes the program to implement the method.
The present invention also provides a computer-readable storage medium storing a program which, when executed by a processor, implements the method.
The invention has the following beneficial effects: according to the method, a top view, a bottom view and a side view of the ceramic sanitary appliance are acquired, and the classification result of the ceramic sanitary appliance is obtained by using the trained style recognition model. By combining images of the ceramic sanitary appliance from multiple angles, the method can identify not only ceramic sanitary appliances of a single style with large appearance differences, but also ceramic sanitary appliances whose appearance differences are not obvious or which come in many styles; no manual identification is required and, unlike bar codes or RFID chips, no consumables are needed, so efficiency is improved and cost is reduced. In addition, the style recognition model is trained according to the positioning position, the style category and the target loss function to obtain the trained style recognition model, and the first weight of the style category in the target loss function is greater than the second weight of the positioning position, so that the training of the style recognition model pays more attention to the style category, which improves the classification performance of the trained style recognition model.
Drawings
FIG. 1 is a schematic flow chart illustrating steps of a method for identifying styles of ceramic sanitary wares according to the present invention;
FIG. 2 is a diagram illustrating steps of obtaining a trained style recognition model according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating steps for obtaining a positioning location and a style type during a training process according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating the regression and classification part of the style recognition model according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
As shown in fig. 1, an embodiment of the present invention provides a method for identifying a style of a ceramic sanitary appliance, including steps S100-S200:
S100, acquiring an image set of the ceramic sanitary appliance.
In an embodiment of the invention, the image set includes a top view, a bottom view and a side view of the ceramic sanitary appliance. Specifically, three cameras are arranged around the ceramic sanitary appliance on the production line so as to respectively capture its top view, bottom view and side view, thereby collecting images of the ceramic sanitary appliance from different viewing angles in real time.
And S200, inputting the image set into the trained style recognition model to obtain a classification result.
Specifically, the top view, bottom view and side view in the image set are input into the trained style recognition model, and the trained style recognition model outputs a classification result which represents the style category of the current ceramic sanitary appliance. It should be noted that, after the style category is obtained, the ceramic sanitary ware can be further sorted and placed, or subjected to different treatments, according to the obtained style category. The trained style recognition model can be arranged on an industrial personal computer in communication connection with the three cameras; after the classification result is obtained, the industrial personal computer controls the corresponding equipment to sort the ceramic sanitary ware, so that style classification of the ceramic sanitary ware is realized.
In the embodiment of the invention, the image set can be denoised before being input. Optionally, the denoising is performed with a Gaussian filter kernel of a certain size (e.g., 3 × 3).
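For ease of understanding, a minimal denoising sketch is given below; Python and OpenCV are assumed purely for illustration, since this embodiment does not prescribe a particular implementation.

```python
# Illustrative only: this embodiment specifies a Gaussian filter kernel of a certain
# size (e.g., 3 x 3) but no particular library; OpenCV/Python are assumptions.
import cv2

def denoise_view(image):
    """Smooth a single view (top, bottom or side) with a 3 x 3 Gaussian kernel."""
    # sigmaX = 0 lets OpenCV derive the standard deviation from the kernel size.
    return cv2.GaussianBlur(image, (3, 3), 0)

def denoise_image_set(image_set):
    """Apply the same denoising to every view in the image set before inference."""
    return {view: denoise_view(img) for view, img in image_set.items()}
```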
As shown in fig. 2, optionally, the step of acquiring the trained style recognition model includes steps S300-S400:
S300, inputting the training data into the style recognition model to obtain the positioning position and the style type of the ceramic sanitary appliance.
Specifically, the training data is an image sample set containing image samples such as top views, bottom views and side views of ceramic sanitary appliances; the style recognition model refers to a model that can locate the ceramic sanitary appliance in an image sample and output its style type. After the training data are input into the style recognition model, the style recognition model outputs the positioning position and the style type of the ceramic sanitary appliance.
Optionally, the image samples may be cleaned and expanded after acquisition. For example, data cleaning specifically means finding and deleting images that do not show a ceramic sanitary ware product, images whose shooting quality is too poor to judge the product style, and images of the same target shot repeatedly in the same posture at the same time; data expansion specifically means expanding the image samples through strategies such as brightness adjustment, contrast adjustment and noise addition. It can be understood that, after data cleaning and data expansion, the image samples are classified to obtain training data containing a consistent-feature sample set and a style sample set. Specifically, the consistent-feature sample set comprises image samples in which ceramic sanitary wares of different styles have the same appearance at the same viewing angle; these are recorded as first image samples and carry an annotated first label, which comprises the minimum circumscribed rectangular frame of the ceramic sanitary ware (representing its real position) and a feature name. The style sample set comprises the remaining image samples outside the consistent-feature sample set; these are recorded as second image samples and carry an annotated second label, which comprises the minimum circumscribed rectangular frame of the ceramic sanitary ware (representing its real position) and the real style category. The first label and the second label constitute the label information of the training data.
For example, when the side appearances of ceramic sanitary wares of styles A and B are completely the same, i.e., their side views are identical, the side view carries no distinguishing feature between styles A and B. In this case, the side views of styles A and B are classified into the consistent-feature sample set, and the feature name of the first label is marked as "AB side". Such classification improves the recognition accuracy of the style recognition model and reduces the risk of misrecognition, because if the side views of styles A and B were labelled separately as "A side" and "B side", the model parameters of the style recognition model would become confused during training: the style recognition model would try to distinguish two identical images, leading to severe overfitting of the model parameters and ultimately reducing the recognition accuracy of the style recognition model.
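For illustration only, the two kinds of labels described above might be organized as records of the following form; the field names and values are hypothetical and are not taken from this embodiment.

```python
# Hypothetical label records; field names and values are illustrative only.
first_label = {                        # consistent-feature sample set (first image sample)
    "bbox": (120, 80, 460, 400),       # minimum circumscribed rectangular frame (real position)
    "feature_name": "AB side",         # shared feature name for the identical A/B side view
}
second_label = {                       # style sample set (second image sample)
    "bbox": (120, 80, 460, 400),       # minimum circumscribed rectangular frame (real position)
    "style_category": "A",             # real style category of the ceramic sanitary appliance
}
```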
As shown in fig. 3, specifically, step S300 includes the following steps S311 to S313:
S311, inputting the training data into the style recognition model for feature extraction to obtain a plurality of feature maps under different scales.
Specifically, in the style recognition model, the feature extraction part may use a VGG16 network, which provides a basis for fast transfer learning; the regression and classification part is shown in fig. 4. The 14 × 14 × 512 feature map obtained after processing by the VGG16 network is convolved by four groups of 3 × 3 convolution kernels, finally yielding feature maps at five scales, namely 14 × 14 × 512, 7 × 7 × 1024, 5 × 5 × 256, 3 × 3 × 256 and 1 × 1 × 256.
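A minimal sketch of such a multi-scale feature extractor is given below, assuming Python with PyTorch/torchvision (>= 0.13); the strides and paddings of the extra convolutions are assumptions chosen only so that the five feature-map sizes above are reproduced, and are not limitations of this embodiment.

```python
# Sketch under stated assumptions; not a definitive implementation of this embodiment.
import torch.nn as nn
from torchvision.models import vgg16

class MultiScaleBackbone(nn.Module):
    """VGG16-based feature extraction producing five feature maps at different scales."""
    def __init__(self):
        super().__init__()
        # VGG16 up to conv5_3 (before the last max-pool): a 224 x 224 input gives 14 x 14 x 512.
        # Pretrained ImageNet weights may be loaded here to support transfer learning.
        self.base = vgg16(weights=None).features[:30]
        # Four groups of 3 x 3 convolutions (strides/paddings are assumptions).
        self.extra1 = nn.Conv2d(512, 1024, 3, stride=2, padding=1)  # 14 x 14 -> 7 x 7
        self.extra2 = nn.Conv2d(1024, 256, 3, stride=1, padding=0)  # 7 x 7   -> 5 x 5
        self.extra3 = nn.Conv2d(256, 256, 3, stride=1, padding=0)   # 5 x 5   -> 3 x 3
        self.extra4 = nn.Conv2d(256, 256, 3, stride=1, padding=0)   # 3 x 3   -> 1 x 1
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        f1 = self.base(x)                 # 14 x 14 x 512
        f2 = self.relu(self.extra1(f1))   # 7 x 7 x 1024
        f3 = self.relu(self.extra2(f2))   # 5 x 5 x 256
        f4 = self.relu(self.extra3(f3))   # 3 x 3 x 256
        f5 = self.relu(self.extra4(f4))   # 1 x 1 x 256
        return [f1, f2, f3, f4, f5]
```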
And S312, generating a plurality of default frames with different sizes on each feature map, and predicting each default frame to obtain a prediction result.
Specifically, the feature maps are located and classified through a location classifier: for each feature map at each scale, a plurality of default boxes with different sizes, for example four default boxes with different sizes, are generated, and prediction is then performed on each default box to determine a prediction result. The default box is used for positioning the ceramic sanitary appliance in the image, and the prediction result comprises the style category of the ceramic sanitary appliance. Optionally, each default box is described by four items, namely the center point abscissa of the default box, the center point ordinate of the default box, the width of the default box and the height of the default box, each expressed in terms of scale(i)*x and scale(i)*y, where x and y denote the abscissa and ordinate of a coordinate point on the i-th feature map, and scale(i) denotes the ratio of the width and height of the VGG16 input layer (for example 224) to the width and height of the current feature map.
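Purely as an illustration of how such default boxes could be laid out, a sketch follows; the four aspect ratios are assumptions (the exact four sizes are not fully specified here), and only scale(i) = 224 / (size of the i-th feature map) follows the description above.

```python
# Illustrative default-box layout; the aspect ratios are assumptions.
def generate_default_boxes(feature_map_sizes=(14, 7, 5, 3, 1), input_size=224,
                           aspect_ratios=((1.0, 1.0), (1.0, 2.0), (2.0, 1.0), (1.5, 1.5))):
    """Return (center_x, center_y, width, height) boxes in input-image coordinates."""
    boxes = []
    for fm_size in feature_map_sizes:
        scale = input_size / fm_size          # ratio of VGG16 input size to feature-map size
        for y in range(fm_size):
            for x in range(fm_size):
                cx, cy = scale * x, scale * y  # center point derived from the coordinate point (x, y)
                for rw, rh in aspect_ratios:   # four default boxes of different sizes per point
                    boxes.append((cx, cy, scale * rw, scale * rh))
    return boxes
```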
Optionally, the predicting each default frame in step S312 to obtain a prediction result includes the following steps S3121-S3122:
S3121, predicting each default frame, and determining a plurality of category probabilities corresponding to each default frame.
In the embodiment of the invention, the category probabilities represent the probabilities of the different styles and the probability of the background. Specifically, each default box is predicted, and each default box corresponds to a plurality of category probabilities. For example, taking three styles as an example, each default box obtains category probabilities C0, C1, C2 and C3, where C0 represents the probability of the background and C1, C2 and C3 represent the probabilities of the three different styles, respectively.
And S3122, determining a finally reserved default frame through non-maximum suppression according to the category probability, and taking the style category corresponding to the maximum category probability in the finally reserved default frame as a prediction result of the finally reserved default frame.
Specifically, according to the category probabilities obtained in step S3121, the default frames to be finally retained among the default frames on each feature map are determined by non-maximum suppression (NMS), and the style category corresponding to the maximum category probability of a finally retained default frame is used as the prediction result of that default frame.
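A minimal sketch of the non-maximum suppression step is given below; standard NMS is assumed, and the IoU threshold of 0.5 is an assumption rather than a value fixed by this embodiment.

```python
# Standard NMS sketch; the overlap threshold is an assumption.
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def non_maximum_suppression(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring default boxes, discarding ones that overlap a kept box too much."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
            keep.append(i)
    return keep  # indices of the finally retained default boxes
```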
And S313, performing compensation processing on the default frame to obtain a positioning position.
Specifically, the premise of the compensation processing in this step is that the maximum category probability of the finally retained default frame is not C0, i.e., the default frame is not predicted as background.
Specifically, step S313 includes the following steps S3131-S3135:
S3131, obtaining a compensation error of the default frame.
In the embodiment of the invention, the acquired compensation error comprises a center point abscissa compensation value dx of the default frame, a center point ordinate compensation value dy of the default frame, a width compensation value dw of the default frame and a height compensation value dh of the default frame.
S3132, determining the center point abscissa of the default frame after compensation according to the current center point abscissa of the default frame and the product of the current width of the default frame and the center point abscissa compensation value of the default frame.
Specifically, assume that the current center point abscissa of the default frame is Px, the current center point ordinate is Py, the current width is Pw and the current height is Ph. The compensated center point abscissa P'x is then determined as: P'x = Px + Pw * dx.
S3133, determining the center point ordinate of the default frame after compensation according to the current center point ordinate of the default frame and the product of the current height of the default frame and the center point ordinate compensation value of the default frame.
The compensated center point ordinate P'y is determined as: P'y = Py + Ph * dy.
s3134, determining the width of the default frame after compensation according to the current width of the default frame and the width compensation value of the default frame.
Similarly, the compensated width P'w is determined from the current width Pw and the width compensation value dw.
And S3135, determining the height of the default frame after compensation according to the current height of the default frame and the height compensation value of the default frame.
The compensated height P'h is determined from the current height Ph and the height compensation value dh.
It should be noted that the positioning position comprises the compensated center point abscissa P'x, the compensated center point ordinate P'y, the compensated width P'w and the compensated height P'h.
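The following sketch shows one way the compensation values may be applied. The additive form for the center coordinates follows the description above; the exponential form for width and height is the common SSD-style parameterization and is an assumption, since the exact formula is not reproduced here.

```python
# Sketch of the compensation (decoding) step; the width/height form is an assumption.
import math

def compensate_default_box(px, py, pw, ph, dx, dy, dw, dh):
    """Apply compensation errors (dx, dy, dw, dh) to a default box (px, py, pw, ph)."""
    cx = px + pw * dx        # compensated center point abscissa (per the description above)
    cy = py + ph * dy        # compensated center point ordinate (per the description above)
    w = pw * math.exp(dw)    # assumed SSD-style width compensation
    h = ph * math.exp(dh)    # assumed SSD-style height compensation
    return cx, cy, w, h      # the positioning position
```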
S400, training the style recognition model according to the positioning position, the style type and the target loss function to obtain the trained style recognition model.
Specifically, the style recognition model is trained by using a target loss function set in the style recognition model, and model parameters are updated in the training process, so that the trained style recognition model is obtained. In the embodiment of the invention, the first weight of the style type in the target loss function is greater than the second weight of the positioning position.
Specifically, step S400 includes the following steps S411-S415:
S411, determining a matching value according to the matching result of the positioning position and the first label.
Specifically, the step S411 of determining a matching value according to a matching result between the positioning location and the first tag includes the following steps S4111 or S4112:
S4111, when the intersection ratio of the positioning position and the first label is greater than or equal to the intersection threshold value, determining that the matching value is 1.
Specifically, when the intersection over union (IoU) of the positioning position and the first label is greater than or equal to the intersection threshold, the matching value δ is determined to be 1.
S4112, when the intersection ratio of the positioning position and the first label is smaller than the intersection threshold, determining that the matching value is 0.
Specifically, when the intersection ratio of the positioning position and the first label is smaller than the intersection threshold, the matching value δ is determined to be 0.
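A minimal sketch of this matching step follows; the value of the intersection threshold (0.5 below) is an assumption, as this embodiment does not fix it.

```python
# Match-value sketch; boxes are (x1, y1, x2, y2), the intersection threshold is assumed.
def matching_value(positioning_position, real_position, intersection_threshold=0.5):
    """Return delta = 1 if the positioning position overlaps the real position enough, else 0."""
    ix1 = max(positioning_position[0], real_position[0])
    iy1 = max(positioning_position[1], real_position[1])
    ix2 = min(positioning_position[2], real_position[2])
    iy2 = min(positioning_position[3], real_position[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = ((positioning_position[2] - positioning_position[0])
              * (positioning_position[3] - positioning_position[1]))
    area_r = ((real_position[2] - real_position[0])
              * (real_position[3] - real_position[1]))
    union = area_p + area_r - inter
    return 1 if inter / max(union, 1e-9) >= intersection_threshold else 0
```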
And S412, determining a first loss parameter by adopting a first loss function according to the style type and the matching value.
Specifically, the first loss function includes, but is not limited to, a Softmax cross-entropy loss function; the Softmax cross-entropy loss function is adopted to determine the first loss parameter Lconf(δ, c) according to the style category and the matching value, where δ is the matching value and c is the style category.
And S413, determining a second loss parameter by adopting a second loss function according to the matching value, the positioning position, the first label and the second label.
Specifically, the second loss function includes, but is not limited to, a Smooth L1 loss function; the Smooth L1 loss function is adopted to determine the second loss parameter Lloc(δ, l, g) according to the matching value, the positioning position, the first label and the second label, where δ is the matching value, l is the positioning position, and g is the label information comprising the first label and the second label.
And S414, weighting the first loss parameter, the first weight, the second loss parameter and the second weight, and determining the loss value of the target loss function according to the weighting result.
Specifically, the first weight W1 is greater than the second weight W2; for example, the first weight W1 is 2 and the second weight W2 is 1.
Specifically, the step S414 of determining the loss value of the target loss function according to the weighting result includes the following steps S4141-S4142:
S4141, determining the total number N of the default frames with the matching value of 1.
Optionally, the total number N is the total number of default boxes with a matching value of 1 on the feature map at different scales after NMS processing.
S4142, determining the loss value of the target loss function according to the ratio of the weighting result to the total number.
Specifically, the target loss function L(δ, c, l, g) is:
L(δ, c, l, g) = (1/N) * (W1 * Lconf(δ, c) + W2 * Lloc(δ, l, g)).
That is, the first loss parameter Lconf(δ, c) is weighted by the first weight W1, the second loss parameter Lloc(δ, l, g) is weighted by the second weight W2, and the loss value of the target loss function is the ratio of this weighting result to the total number N.
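A minimal PyTorch-style sketch of this weighted target loss is given below, using Softmax cross-entropy for Lconf and Smooth L1 for Lloc as mentioned above; whether Lconf runs over all default boxes or only the matched ones, and the exact tensor layout, are assumptions rather than limitations of this embodiment.

```python
# Sketch of the weighted target loss L = (W1 * Lconf + W2 * Lloc) / N; details are assumptions.
import torch
import torch.nn.functional as F

def target_loss(class_logits, class_targets, pred_positions, real_positions, match_values,
                w1=2.0, w2=1.0):
    """class_logits: (num_boxes, num_classes); positions: (num_boxes, 4); match_values: 0/1 per box."""
    matched = match_values.bool()
    n = matched.sum().clamp(min=1).float()        # total number N of default boxes with delta = 1
    # Lconf: Softmax cross-entropy over the style categories (background included).
    l_conf = F.cross_entropy(class_logits, class_targets, reduction="sum")
    # Lloc: Smooth L1 between positioning positions and real positions, matched boxes only.
    l_loc = F.smooth_l1_loss(pred_positions[matched], real_positions[matched], reduction="sum")
    return (w1 * l_conf + w2 * l_loc) / n         # first weight W1 > second weight W2
```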
And S415, training the style recognition model according to the loss value, and updating model parameters of the style recognition model to obtain the trained style recognition model.
Specifically, a loss value is determined by using the target loss function, the style recognition model is trained according to the loss value, model parameters are updated during training, and when the loss value is less than or equal to a loss threshold value, the trained style recognition model is determined from the currently updated model parameters. Optionally, the model parameters include, but are not limited to, parameters related to data processing (or pre-processing), parameters related to the training process, or parameters related to the network. For example, parameters related to data processing (or pre-processing) include, but are not limited to, data enrichment parameters, feature normalization and scaling parameters, and batch normalization (BN) parameters; parameters related to the training process include, but are not limited to, the training momentum, learning rate, decay function, weight initialization and regularization-related methods; network-related parameters include, but are not limited to, the choice of classifier, the number of neurons, the number of filters and the number of network layers.
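As an illustration of the training and stopping criterion described above, a sketch follows; the optimizer, learning rate, loss threshold and the compute_loss hook are hypothetical choices, not requirements of this embodiment.

```python
# Hypothetical training-loop sketch; optimizer, threshold and compute_loss hook are assumptions.
import torch

def train_style_recognition_model(model, data_loader, loss_threshold=0.05,
                                  max_epochs=100, learning_rate=1e-3):
    """Update model parameters until the loss value drops to the loss threshold."""
    optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9)
    for _ in range(max_epochs):
        epoch_loss = 0.0
        for images, targets in data_loader:
            optimizer.zero_grad()
            loss = model.compute_loss(images, targets)   # hypothetical hook returning L(delta, c, l, g)
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        epoch_loss /= max(len(data_loader), 1)
        if epoch_loss <= loss_threshold:                 # stop once the loss value <= loss threshold
            break
    return model                                          # the trained style recognition model
```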
To sum up, the invention acquires images from multiple camera viewing angles and identifies the styles of ceramic sanitary ware products based on deep convolutional neural network technology, which greatly improves the convenience, identification range and accuracy of ceramic sanitary ware style identification. The advantages include the following:
1) Convenient to use: because machine vision technology is adopted, the problems of methods such as bar code scanning and RFID radio frequency identification, which require manual identification and one-time-use consumables, are solved; only manual labeling and model training are required for a new style of product before production, and no manual marking of each ceramic sanitary ware product is needed during production; the cameras, the industrial personal computer and the recognition model can all be reused, and no disposable articles are consumed.
2) Wide identification range: the method uses three cameras to photograph the ceramic sanitary ware product from three different viewing angles, is suitable for identifying products with only slight differences, and breaks through the limitation that a single-camera identification mode can only identify one side of a product, so that automatic identification of ceramic sanitary ware styles can be applied flexibly and reliably on actual production lines.
3) Accurate identification: the method is based on a customized target detection network and uses a weighted target loss function in which different first and second weights are set, so that classification performance is emphasized on the basis of satisfying the basic positioning function, thereby ensuring the accuracy of product identification.
The invention also provides a style identification device of the ceramic sanitary appliance, which comprises:
the acquisition module is used for acquiring an image set of the ceramic sanitary appliance; the image set includes a top view, a bottom view and a side view of the ceramic sanitary appliance;
the classification module is used for inputting the image set into the trained style recognition model to obtain a classification result;
the method for acquiring the style recognition model after training comprises the following steps:
inputting the training data into the style recognition model to obtain the positioning position and the style type of the ceramic sanitary appliance;
training the style recognition model according to the positioning position, the style type and the target loss function to obtain a trained style recognition model; the first weight of the style category in the target loss function is greater than the second weight of the positioning position.
The contents in the above method embodiments are all applicable to the present apparatus embodiment, the functions specifically implemented by the present apparatus embodiment are the same as those in the above method embodiments, and the advantageous effects achieved by the present apparatus embodiment are also the same as those achieved by the above method embodiments.
The embodiment of the invention also provides a style identification device of the ceramic sanitary appliance, which comprises a processor and a memory;
the memory is used for storing programs;
the processor is used for executing programs to realize the style identification method of the ceramic sanitary appliance of the embodiment of the invention. The device provided by the embodiment of the invention can realize the function of style identification of the ceramic sanitary appliance. The device can be any intelligent terminal such as a mobile phone, a tablet Personal computer, a Personal Digital Assistant (PDA for short), a Point of Sales (POS for short), a vehicle-mounted computer, and the like.
The contents in the above method embodiments are all applicable to the present apparatus embodiment, the functions specifically implemented by the present apparatus embodiment are the same as those in the above method embodiments, and the advantageous effects achieved by the present apparatus embodiment are also the same as those achieved by the above method embodiments.
Embodiments of the present invention further provide a computer-readable storage medium, which stores a program, where the program is executed by a processor to implement the method for identifying the style of the ceramic sanitary appliance according to the aforementioned embodiments of the present invention.
Embodiments of the present invention also provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of style identification of a ceramic sanitary fixture of the aforementioned embodiments of the invention.
The terms "first," "second," "third," "fourth," and the like in the description of the application and the above-described figures, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form. Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment. In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes multiple instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method of the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing programs, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.