Label recommendation method and device, training method of label recommendation model, and medium

Document No.: 7778 · Published: 2021-09-17 · Views: 51 · Chinese

1. A tag recommendation method for multimedia data, comprising:

acquiring a reference label and at least two candidate labels;

determining semantic similarity between the reference label and the candidate label;

determining a co-occurrence probability between the reference tag and the candidate tag, the co-occurrence probability being used to describe a probability that the reference tag and the candidate tag belong to tags of the same multimedia data;

and selecting a target label from the at least two candidate labels according to the semantic similarity and the co-occurrence probability.

2. The method of claim 1, wherein the selecting a target tag from the at least two candidate tags according to the semantic similarity and the co-occurrence probability comprises:

inputting the semantic similarity and the co-occurrence probability into a pre-trained label recommendation model, and obtaining a function value of an objective function output by the pre-trained label recommendation model, wherein the pre-trained label recommendation model learns the corresponding relation among the semantic similarity, the co-occurrence probability, and the function value of the objective function corresponding to the semantic similarity and the co-occurrence probability;

and selecting a target label from the at least two candidate labels according to the function value.

3. The method of claim 2, wherein the selecting a target tag from the at least two candidate tags according to the function value comprises:

sorting the function values in order of magnitude;

and taking a preset number of top-ranked candidate tags as the target tags.

4. The tag recommendation method for multimedia data according to claim 2, further comprising, before said obtaining the reference tag and the at least two candidate tags:

acquiring at least two sample words from a news corpus and acquiring at least two sample labels of the multimedia data;

analyzing the semantic similarity among the sample words and the semantic similarity among the sample labels, both serving as sample semantic similarities;

analyzing the co-occurrence probability among the sample labels and taking the co-occurrence probability as a sample co-occurrence probability;

training an initial label recommendation model according to the sample semantic similarity and the sample co-occurrence probability until a loss value between a target value of a target function output by the initial label recommendation model and a calibration value meets a set condition, at which point training of the label recommendation model is complete.

5. The tag recommendation method for multimedia data according to claim 4, wherein said initial tag recommendation model comprises a first tag recommendation model and a second tag recommendation model, said objective function comprises: a first function corresponding to the first tag recommendation model and a second function corresponding to the second tag recommendation model, the first tag recommendation model and the second tag recommendation model sharing a word embedding layer.

6. The method of claim 5, wherein training the initial tag recommendation model according to the sample semantic similarity and the sample co-occurrence probability until the loss value between the target value of the objective function output by the initial tag recommendation model and the calibration value meets the set condition, whereupon training of the tag recommendation model is complete, comprises:

training the first label recommendation model according to the sample semantic similarity, and acquiring a first function value of a first function output by the first label recommendation model;

training the second label recommendation model according to the sample co-occurrence probability, and acquiring a second function value of a second function output by the second label recommendation model;

and taking the sum of the first function value and the second function value as the target value until the loss value between the target value and the calibration value meets the set condition, whereupon training of the label recommendation model is complete.

7. The method of claim 4, further comprising, before the acquiring at least two sample words from the news corpus:

acquiring a large volume of news data;

obtaining a corresponding news text according to the news data;

performing word segmentation processing on the news text to obtain at least two word segments;

and constructing the news corpus from the at least two word segments.

8. The tag recommendation method for multimedia data according to any of claims 1-7, wherein said determining a co-occurrence probability between said reference tag and said candidate tag comprises:

and taking pointwise mutual information between the reference label and the candidate label as the co-occurrence probability, wherein the pointwise mutual information is used to describe the correlation between the reference label and the candidate label.

9. A training method of a label recommendation model is characterized by comprising the following steps:

acquiring at least two sample words and acquiring at least two sample labels of the multimedia data;

analyzing the semantic similarity among the sample words and the semantic similarity among the sample labels, both serving as sample semantic similarities;

analyzing the co-occurrence probability among the sample labels and taking the co-occurrence probability as a sample co-occurrence probability;

and training a label recommendation model according to the sample semantic similarity and the sample co-occurrence probability.

10. The method for training the label recommendation model according to claim 9, wherein the training the label recommendation model according to the sample semantic similarity and the sample co-occurrence probability comprises:

training an initial label recommendation model according to the sample semantic similarity and the sample co-occurrence probability until a loss value between a target value of a target function output by the initial label recommendation model and a calibration value meets a set condition, at which point training of the label recommendation model is complete.

11. The method for training the tag recommendation model according to claim 10, wherein the initial tag recommendation model comprises a first tag recommendation model and a second tag recommendation model, and the objective function comprises: a first function corresponding to the first tag recommendation model and a second function corresponding to the second tag recommendation model, the first tag recommendation model and the second tag recommendation model sharing a word embedding layer.

12. The method for training the label recommendation model according to claim 11, wherein training the initial label recommendation model according to the sample semantic similarity and the sample co-occurrence probability until the loss value between the target value of the objective function output by the initial label recommendation model and the calibration value satisfies the set condition, whereupon training of the label recommendation model is complete, comprises:

training the first label recommendation model according to the sample semantic similarity, and acquiring a first function value of a first function output by the first label recommendation model;

training the second label recommendation model according to the sample co-occurrence probability, and acquiring a second function value of a second function output by the second label recommendation model;

and taking the sum of the first function value and the second function value as the target value until the loss value between the target value and the calibration value meets the set condition, whereupon training of the label recommendation model is complete.

13. The training method of the label recommendation model according to any one of claims 9-12, wherein the analyzing the co-occurrence probability among the sample labels and taking it as the sample co-occurrence probability comprises:

and taking pointwise mutual information among the sample labels as the sample co-occurrence probability, wherein the pointwise mutual information is used to describe the correlation among the sample labels.

14. A tag recommendation apparatus for multimedia data, comprising:

a first acquisition module, configured to acquire a reference label and at least two candidate labels;

a first determining module for determining semantic similarity between the reference label and the candidate label;

a second determining module, configured to determine a co-occurrence probability between the reference tag and the candidate tag, where the co-occurrence probability is used to describe a probability that the reference tag and the candidate tag belong to tags of the same multimedia data;

and the recommendation module is used for selecting a target label from the at least two candidate labels according to the semantic similarity and the co-occurrence probability.

15. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method for tag recommendation for multimedia data according to any one of claims 1 to 8, or the steps of the method for training a tag recommendation model according to any one of claims 9 to 13.

Background

With the arrival of the video era, in addition to the traditional picture-and-text self-media platforms such as public accounts, short-video channels that individual users can share at any time, such as WeChat Channels, Douyin short video, and Kuaishou short video, have emerged, emphasizing short videos and short texts in information streams (feeds) as the main mode of content expression rather than long-form text. To improve the information environment and group information by topic, users can add topic tags (hashtags) to a published information stream. Currently, topic tags on a video-channel platform are basically labeled manually by the user when uploading an information stream; to improve labeling efficiency, the platform provides automatic topic-tag recommendation, that is, when the user inputs one topic tag, the platform automatically recommends related topic tags.

When recommending topic tags, the related art represents each topic tag as an N-dimensional word vector using Google's word2vec (a model for generating word vectors) trained on open news corpora, then computes the cosine distance between the word vectors of different topic tags, determines the degree of correlation between the tags from that distance, and recommends potential topic tags according to the correlation.

However, the topic-tag word vectors produced by native Google word2vec are a byproduct of training a language model on an unsupervised news corpus, so they capture the grammatical similarity of words. Words with high vector similarity may merely co-occur often in news text, or be grammatically similar, or even be antonyms, hyponyms, or homonyms. For example, under native Google word2vec the word vectors of the words "Beijing" and "Shanghai" may be very similar, yet "Beijing" and "Shanghai" rarely appear together in the same video-channel information stream. In short, word vectors obtained directly from native Google word2vec rest on a distributional assumption inconsistent with that of the topic tags of video-channel information streams, and the weak constraint of the language-model loss function cannot effectively steer the learned vector distribution toward the co-occurrence distribution those topic tags require, so topic-tag recommendation based on this method has low accuracy.

Disclosure of Invention

Therefore, in view of the above technical problems, it is necessary to provide a tag recommendation method, a tag recommendation device, a training method of a tag recommendation model, and a medium that can effectively improve tag recommendation accuracy.

A tag recommendation method for multimedia data, comprising:

acquiring a reference label and at least two candidate labels;

determining semantic similarity between the reference label and the candidate label;

determining a co-occurrence probability between the reference tag and the candidate tag, wherein the co-occurrence probability is used for describing the probability that the reference tag and the candidate tag belong to the tags of the same multimedia data;

and selecting a target label from the at least two candidate labels according to the semantic similarity and the co-occurrence probability.

A training method of a label recommendation model comprises the following steps:

acquiring at least two sample words and acquiring at least two sample labels of the multimedia data;

analyzing the semantic similarity among the sample words and the semantic similarity among the sample labels, both serving as sample semantic similarities;

analyzing the co-occurrence probability among the sample labels and taking the co-occurrence probability as a sample co-occurrence probability;

and training a label recommendation model according to the sample semantic similarity and the sample co-occurrence probability.

A tag recommendation apparatus for multimedia data, comprising:

a first acquisition module, used for acquiring a reference label and at least two candidate labels;

the first determining module is used for determining semantic similarity between the reference label and the candidate label;

the second determining module is used for determining the co-occurrence probability between the reference label and the candidate label, wherein the co-occurrence probability is used for describing the probability that the reference label and the candidate label belong to tags of the same multimedia data;

and the recommendation module is used for selecting the target label from the at least two candidate labels according to the semantic similarity and the co-occurrence probability.

A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned tag recommendation method for multimedia data, or carries out the steps of the above-mentioned training method of a tag recommendation model.

According to the label recommendation method, the label recommendation device, the label recommendation model training method and the label recommendation model training medium, the reference label and the at least two candidate labels are obtained, the semantic similarity between the reference label and the candidate labels is determined, the co-occurrence probability between the reference label and the candidate labels is determined, and the target label is selected from the at least two candidate labels according to the semantic similarity and the co-occurrence probability. Because the semantic similarity represents the similarity of the labels in the semantics, and the co-occurrence probability represents the probability of the labels appearing in the same multimedia data, the target labels obtained based on the semantic similarity and the co-occurrence probability are more consistent with the distribution hypothesis of the multimedia data, and the obtained target labels are more accurate.

Drawings

FIG. 1 is a diagram of an embodiment of an application environment of a tag recommendation method for multimedia data;

FIG. 2 is a flow diagram of a tag recommendation method for multimedia data in one embodiment;

FIG. 3 is a diagram illustrating tagging of a hashtag in a video number stream according to an embodiment;

FIG. 4 is a schematic flow chart illustrating the process of selecting a target tag from at least two candidate tags according to an embodiment;

FIG. 5 is a schematic flow chart illustrating training of a tag recommendation model in one embodiment;

FIG. 6 is a diagram illustrating a tag recommendation model in one embodiment;

FIG. 7 is a flowchart illustrating a method for training a tag recommendation model according to an embodiment;

FIG. 8 is a block diagram showing the structure of a tag recommendation apparatus for multimedia data according to an embodiment;

fig. 9 is a block diagram showing the construction of a tag recommendation apparatus for multimedia data according to another embodiment;

fig. 10 is a block diagram showing a configuration of a tag recommendation apparatus for multimedia data according to still another embodiment.

Detailed Description

In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.

Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human Intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. Machine Learning (ML) is an important direction of artificial intelligence, and mainly studies how a computer simulates or realizes human Learning behaviors to acquire new knowledge or skills and reorganize an existing knowledge structure to continuously improve the performance of the computer.

Label recommendation can be regarded as a branch of machine learning. When recommending tags, the related art represents each topic tag as an N-dimensional word vector using Google's word2vec (a model for generating word vectors) trained on open news corpora, computes the cosine distance between the word vectors of different topic tags, determines the degree of correlation between tags from that distance, and recommends potential topic tags accordingly. However, since the topic-tag word vectors given by native Google word2vec are a byproduct of training a language model on an unsupervised news corpus and capture the grammatical similarity of words, words with high similarity may merely co-occur often in news text, or be grammatically similar, or even be antonyms, hyponyms, or homonyms, resulting in low accuracy of topic-tag recommendation. On this basis, the present application provides a tag recommendation method for multimedia data that can effectively improve the accuracy of topic-tag recommendation. The tag recommendation method for multimedia data provided by the present application can be applied in the application environment shown in fig. 1.
The terminal 102 and the server 104 communicate through a network. The server 104 obtains a reference tag input by a user on the terminal 102 and at least two candidate tags from a database, determines the semantic similarity between the reference tag and the candidate tags, and determines the co-occurrence probability between them, the co-occurrence probability describing the probability that the reference tag and a candidate tag belong to tags of the same multimedia data. The server then selects a target tag from the at least two candidate tags according to the semantic similarity and the co-occurrence probability, and sends the target tag to the terminal 102, where it is displayed for the user to select. The terminal 102 may be, but is not limited to, a personal computer, a notebook computer, a smartphone, or a tablet computer; the server 104 may be implemented by an independent server or by a cluster of at least two servers.

In one embodiment, as shown in fig. 2, a tag recommendation method for multimedia data is provided, which is described by taking the method as an example applied to the server in fig. 1, and includes the following steps:

in step 202, a reference tag and at least two candidate tags are obtained.

The reference label refers to the label provided by the user for the current multimedia data; a candidate tag refers to a tag of previously existing multimedia data, and may specifically be a tag, provided by the server, corresponding to already-existing multimedia data, which is not limited herein.

Take the example of a user publishing an information stream with WeChat Channels. Referring to fig. 3, after the user uploads a short video about pocket pie through the WeChat Channels feature of the WeChat app on the terminal 102, the user may enter a short text below the short video, such as "pocket pie is too good to eat", and may also enter a topic tag for the short video, such as "pocket pie", which serves as the reference tag. It is understood that, to distinguish the short text from the topic tags, a separator such as "#" may be added at both ends of each topic tag, so that the system and the user can quickly and accurately identify the short text and the topic tags corresponding to the short video.

After the user inputs a topic tag such as "pocket pie" for the short video, the terminal 102 sends the topic tag "pocket pie" to the server 104 as the reference tag. After receiving the reference tag, the server 104 obtains from the database the topic tags corresponding to all existing multimedia data, such as "home dishes", "breakfast cake", "gourmet food", "Chinese hamburger", "lunch", and "dinner", and uses them as candidate tags.

Step 204, determining semantic similarity between the reference label and the candidate label.

Semantic similarity refers to the degree to which two words can replace each other in different contexts without changing the syntactic and semantic structure of the text: the higher the possibility of such replacement, the higher the similarity between the two words, and vice versa.

In the present application, two different labels are each represented as an N-dimensional word vector, the cosine distance between the two word vectors is calculated, and the semantic similarity between the two labels is determined from the cosine distance. Of course, the semantic similarity between two different tags may also be obtained in other ways known in the art, which are not limited herein.

After obtaining the reference tag, such as the topic tag "pocket pie", and at least two candidate tags, such as "home dishes", "breakfast cake", "gourmet food", "Chinese hamburger", "lunch", and "dinner", the server 104 may first represent the reference tag and each candidate tag as an N-dimensional (e.g., 50-dimensional) word vector, then calculate the cosine distance between the reference tag and each candidate tag, and determine the semantic similarity between them from the cosine distance.
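As a minimal sketch of the cosine computation described above (the `embed` table and its low-dimensional vectors are hypothetical stand-ins for the N-dimensional word vectors; this is an illustration, not the patented implementation):

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two word vectors: dot(u, v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 4-dimensional embeddings standing in for N-dimensional vectors.
embed = {
    "pocket pie":     [0.9, 0.1, 0.3, 0.0],
    "breakfast cake": [0.8, 0.2, 0.4, 0.1],
    "lunch":          [0.2, 0.9, 0.1, 0.3],
}

reference = "pocket pie"
for candidate in ("breakfast cake", "lunch"):
    sim = cosine_similarity(embed[reference], embed[candidate])
    print(candidate, round(sim, 3))
```

With these made-up vectors, "breakfast cake" scores higher than "lunch", mirroring the intuition that semantically closer tags have smaller cosine distance.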

At step 206, a co-occurrence probability between the reference tag and the candidate tag is determined.

The co-occurrence probability describes the probability that the reference tag and a candidate tag belong to tags of the same multimedia data, i.e., the probability that they appear simultaneously in the same multimedia data. When the reference label is "pocket pie" and the candidate label is "gourmet food", since pocket pie is a kind of food, the probability that the two appear simultaneously in the same multimedia data is high; when the reference label is "pocket pie" and the candidate label is "modeling", the two have no direct relationship and hardly ever appear in the same multimedia data at the same time, that is, the probability is close to zero.

In one embodiment, determining the co-occurrence probability between the reference tag and a candidate tag comprises: taking the pointwise mutual information (PMI) between the reference label and the candidate label as the co-occurrence probability, the pointwise mutual information describing the correlation between the reference label and the candidate label.

PMI is derived from mutual information in information theory: mutual information measures the correlation between two random variables, while pointwise mutual information measures the correlation between two specific outcomes, such as two words — here, between the reference label and a candidate label. Since the co-occurrence probability describes the probability that the reference tag and the candidate tag belong to tags of the same multimedia data, and pointwise mutual information quantifies the degree of correlation between them, the pointwise mutual information between the reference tag and the candidate tag can be used as the co-occurrence probability.

The basic principle of pointwise mutual information is simple, and the corresponding formula is as follows:

PMI(x; y) = log( p(x, y) / ( p(x) p(y) ) )    (1)

wherein p(x, y) denotes the probability that object x and object y occur together, p(x) the probability that object x occurs alone, and p(y) the probability that object y occurs alone. If object x and object y are uncorrelated, then p(x, y) = p(x) p(y) and the pointwise mutual information PMI(x; y) is 0; the stronger the correlation between object x and object y, the larger p(x, y) is relative to p(x) p(y), and the larger PMI(x; y) becomes. As can be seen from equation (1), PMI(x; y) is essentially the conditional probability p(x | y) of x occurring given y, divided by the marginal probability p(x) of x occurring on its own — or, equivalently, p(y | x) divided by p(y) — and it is symmetric, that is, PMI(x; y) = PMI(y; x).

In the present application, the pointwise mutual information between two different tags can be obtained by:

PMI(X; Y) = log( (n / N) / ( (n1 / N) (n2 / N) ) ) = log( n N / (n1 n2) )    (2)

where PMI(X; Y) denotes the pointwise mutual information between tag X and tag Y, n the frequency with which tag X and tag Y co-occur in all existing multimedia data, N the number of tags of all existing multimedia data, n1 the frequency with which tag X occurs alone, and n2 the frequency with which tag Y occurs alone.

For example, take the two topic labels "Chinese hamburger" and "gourmet food". Assume that in all existing multimedia data "Chinese hamburger" and "gourmet food" appear together in 10 items, "Chinese hamburger" appears in 15 items, "gourmet food" appears in 20 items, and all existing multimedia data carry N topic labels in total. The pointwise mutual information between the two is then PMI = log( (10/N) / ( (15/N)(20/N) ) ) = log( 10N / 300 ). The larger the calculated pointwise mutual information, the more often the two topic tags appear together, i.e., the larger the co-occurrence probability; in other words, when channel owners publish information streams tagged "Chinese hamburger", they will with high probability also tag them "gourmet food".
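The count-based PMI of formula (2) can be sketched in a few lines (illustrative only; the counts mirror the "Chinese hamburger" / "gourmet food" example above, and the total tag count N = 3000 is an assumed value):

```python
import math

def pmi(n_xy, n_x, n_y, n_total):
    # Pointwise mutual information from frequencies, per formula (2):
    # PMI(X; Y) = log( (n_xy / N) / ((n_x / N) * (n_y / N)) )
    p_xy = n_xy / n_total
    p_x = n_x / n_total
    p_y = n_y / n_total
    return math.log(p_xy / (p_x * p_y))

# "Chinese hamburger" and "gourmet food" co-occur in 10 items and appear
# in 15 and 20 items respectively; assume 3000 tags exist in total.
score = pmi(n_xy=10, n_x=15, n_y=20, n_total=3000)
print(round(score, 3))  # log(10 * 3000 / (15 * 20)) = log(100) ≈ 4.605
```

Note that the total count N cancels only partially: PMI grows with log(nN / (n1 n2)), so rarer tags that still co-occur receive higher scores.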

After obtaining the reference tag, such as the topic tag "pocket pie", and at least two candidate tags, such as "home dishes", "breakfast cake", "gourmet food", "Chinese hamburger", "lunch", and "dinner", the server 104 may first obtain from the database the occurrence frequency of the reference tag, the occurrence frequency of each candidate tag, and the frequency with which the reference tag appears in the same multimedia data as each candidate tag, and then compute the co-occurrence probability between the reference tag and each candidate tag via the above formula (2), using these frequencies together with the number of tags of all multimedia data.

And 208, selecting a target label from the at least two candidate labels according to the semantic similarity and the co-occurrence probability.

After obtaining the semantic similarity and the co-occurrence probability between the reference tag, such as the topic tag "pocket pie", and each candidate tag, such as "home dishes", "breakfast cake", "gourmet food", "Chinese hamburger", "lunch", and "dinner", the server 104 can determine, according to the semantic similarity and the co-occurrence probability, the candidate tags that are semantically most similar to the reference tag and that often co-occur with it in the same multimedia data, such as "home dishes", "breakfast cake", "gourmet food", and "Chinese hamburger", as the target tags.

According to the tag recommendation method for the multimedia data, the target tag is selected from at least two candidate tags according to the semantic similarity between the reference tag and the candidate tags and the co-occurrence probability between the reference tag and the candidate tags, wherein the semantic similarity represents the semantic similarity of the tags, and the co-occurrence probability represents the probability of the tags appearing in the same multimedia data, so that the target tag obtained based on the semantic similarity and the co-occurrence probability is more consistent with the distribution hypothesis of the multimedia data, and the obtained target tag is more accurate.

In one embodiment, referring to fig. 4, selecting a target tag from at least two candidate tags according to semantic similarity and co-occurrence probability includes:

step S402, the semantic similarity and the co-occurrence probability are input into the pre-trained label recommendation model, and a function value of the objective function output by the pre-trained label recommendation model is obtained. The pre-trained label recommendation model learns semantic similarity, co-occurrence probability and the corresponding relation between the functional values of the objective functions corresponding to the semantic similarity and the co-occurrence probability.

And step S404, selecting a target label from the at least two candidate labels according to the function value.

That is, after obtaining the semantic similarity and the co-occurrence probability between the reference tag, such as the topic tag "pocket cake", and each candidate tag, such as the topic tags "family dish", "breakfast cake", "gourmet", "Chinese hamburger", "dinner", etc., the server 104 may select the target tag from the at least two candidate tags, according to the semantic similarity and the co-occurrence probability, through the tag recommendation model obtained by pre-training. Specifically, the server 104 may input the obtained semantic similarity and co-occurrence probability between the reference tag and each candidate tag into the pre-trained tag recommendation model, obtain the corresponding function value computed by the objective function in the tag recommendation model, and then select the target tag from the at least two candidate tags according to the function value.

In one embodiment, selecting the target label from the at least two candidate labels according to the function value includes: sorting the function values in order of magnitude; and taking the preset number of top-ranked candidate tags as the target tags.

For example, after the semantic similarity and the co-occurrence probability between the reference tag, such as the topic tag "pocket cake", and each candidate tag, such as the topic tags "family dish", "breakfast cake", "gourmet", "Chinese hamburger", "dinner", etc., are input into the pre-trained tag recommendation model, the server 104 evaluates the objective function of the pre-trained tag recommendation model, which outputs function values such as 80%, 90%, 95%, and 60% in turn. The server 104 then ranks all the function values and takes the preset number of top-ranked candidate tags as the target tags, such as the topic tags "Chinese hamburger", "breakfast cake", "gourmet", and "family dish". Then, the server 104 sends the obtained target tags to the terminal 102 for display on the terminal 102, so that the user can select directly from the target tags, thereby improving tagging efficiency.
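The ranking step above can be sketched as follows. The candidate names and score values are illustrative (following the example), and `top_k_candidates` is a hypothetical helper, not a name used by the patent:

```python
def top_k_candidates(candidates, scores, k=4):
    """Rank candidate tags by their objective-function values (descending)
    and return the top-k as target tags."""
    ranked = sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True)
    return [tag for tag, _ in ranked[:k]]

# Illustrative scores: one function value per candidate tag.
candidates = ["family dish", "breakfast cake", "gourmet", "Chinese hamburger", "dinner"]
scores = [0.80, 0.90, 0.95, 0.98, 0.60]
print(top_k_candidates(candidates, scores, k=4))
# → ['Chinese hamburger', 'gourmet', 'breakfast cake', 'family dish']
```

Ties and the choice of the preset number k are left to the implementation; only the descending sort and truncation are essential to the step.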

In one embodiment, referring to fig. 5, before obtaining the reference tag and the at least two candidate tags, the method further includes:

step 502, at least two sample words are obtained from the corpus of news, and at least two sample labels of the multimedia data are obtained.

A corpus is a basic resource of corpus linguistics and a primary resource of empirical methods of language research; it is applied in lexicography, language teaching, traditional language study, and statistics- or example-based research in natural language processing. A corpus stores language material that has actually appeared in real language use; it is a basic resource that carries language knowledge with an electronic computer as its carrier, and the raw material must be processed (analyzed and annotated) before it becomes a useful resource.

In the present application, the corpus is a news corpus. The server 104 may obtain at least two sample words from the news corpus and use them as part of the training data of the tag recommendation model; meanwhile, the server 104 also obtains a large number of existing tags of all multimedia data as at least two sample tags and uses them as another part of the training data of the tag recommendation model.

In one embodiment, before obtaining at least two sample words from the corpus of news, the method further comprises: acquiring a large amount of news data; obtaining corresponding news texts according to the news data; performing word segmentation processing on the news texts to obtain at least two word segments; and constructing a news corpus according to the at least two word segments.

For example, the server 104 may obtain a large amount of news data from an internet platform, including text data, voice data, or video data covering many fields such as history, geography, science and technology, culture, education, entertainment, society, and law; then perform format conversion on all the news data, such as conversion to text, to obtain a large number of news texts; and then perform word segmentation on the news texts, converting the original character stream into individual entries, to finally obtain at least two word segments.
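As an illustration of the corpus-construction step, the sketch below tokenizes news texts into word segments. `segment` and `build_corpus` are hypothetical helper names, and the simple regex tokenizer merely stands in for a real word-segmentation tool:

```python
import re

def segment(text):
    """Minimal illustrative tokenizer: lowercase a news text and split it
    into word segments. (A production system, especially for Chinese text,
    would use a dedicated word-segmentation tool instead of this regex.)"""
    return re.findall(r"[a-z0-9]+", text.lower())

def build_corpus(news_texts):
    """Build a news corpus as one list of word segments per news text."""
    return [segment(t) for t in news_texts]

corpus = build_corpus([
    "Pocket cakes are a popular breakfast food.",
    "The Chinese hamburger is a street-food classic.",
])
print(corpus[0])
# → ['pocket', 'cakes', 'are', 'a', 'popular', 'breakfast', 'food']
```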

Step 504, the semantic similarity among the sample words is analyzed and taken as a sample semantic similarity, and the semantic similarity among the sample labels is analyzed and taken as a sample semantic similarity.

After the server 104 obtains the at least two sample words and the at least two sample labels, it can obtain the semantic similarity between the sample words in the manner described above and take it as a sample semantic similarity, and obtain the semantic similarity between the sample labels in the same manner and take it as a sample semantic similarity. That is, the sample semantic similarity covers two aspects: the semantic similarity between sample words, and the semantic similarity between sample labels. Because both the news corpus and the tag corpus are used to train the label recommendation model, the model can generalize to arbitrary words without limitation, which improves its recognition accuracy.

Step 506, the co-occurrence probability among the sample labels is analyzed and taken as the sample co-occurrence probability.

After obtaining at least two sample labels, the server 104 may further obtain a co-occurrence probability between each sample label in the foregoing manner, and use the co-occurrence probability as the sample co-occurrence probability.

Step 508, an initial label recommendation model is trained according to the sample semantic similarity and the sample co-occurrence probability, until the loss value between the target value of the objective function output by the initial label recommendation model and the calibration value meets a set condition, at which point the training of the label recommendation model is complete.

After obtaining the sample semantic similarity and the sample co-occurrence probability, the server 104 inputs them into the initial label recommendation model to train it, until the loss value between the target value of the objective function output by the initial label recommendation model and the calibration value meets the set condition, at which point the training of the label recommendation model is complete.

In one embodiment, the initial tag recommendation model includes a first tag recommendation model and a second tag recommendation model, the objective function includes a first function corresponding to the first tag recommendation model and a second function corresponding to the second tag recommendation model, and the first tag recommendation model and the second tag recommendation model share a word embedding layer.

That is to say, the initial tag recommendation model may include two tag recommendation models: a first tag recommendation model and a second tag recommendation model. The first tag recommendation model serves as the main model and may be a word vector model; the second tag recommendation model serves as an auxiliary model and shares a word embedding layer with the main model for joint optimization. In other words, the function value output by the auxiliary model is used as a joint supervision signal for the main model during joint training, so that the trained tag recommendation model can learn the co-occurrence probability distribution of different tags among the existing tags of the multimedia data. The first tag recommendation model corresponds to a first function; when the first tag recommendation model is a word vector model, the first function is the objective function of the word vector model. The second tag recommendation model corresponds to a second function; the second function may be a loss function over differences between co-occurrence probabilities, that is, differences in pointwise mutual information, and may specifically be a hinge loss function or a mean square error loss function. The objective function includes the first function and the second function; for example, the objective function may be the sum of the first function and the second function, the sum of their absolute values, or their product.

In one embodiment, training an initial label recommendation model according to the sample semantic similarity and the sample co-occurrence probability until a loss value between a target value and a calibration value of an objective function output by the initial label recommendation model meets a set condition, and the training of the label recommendation model is completed, including: training a first label recommendation model according to the sample semantic similarity, and acquiring a first function value of a first function output by the first label recommendation model; training a second label recommendation model according to the sample co-occurrence probability, and acquiring a second function value of a second function output by the second label recommendation model; and taking the sum of the first function value and the second function value as a target value until the loss value between the target value and the calibration value meets the set condition, and finishing the training of the label recommendation model.

After obtaining the sample semantic similarity and the sample co-occurrence probability, the server 104 trains the first tag recommendation model with the sample semantic similarity and obtains the function value of the corresponding first function, recorded as the first function value; trains the second tag recommendation model with the sample co-occurrence probability and obtains the function value of the corresponding second function, recorded as the second function value; and sums the first function value and the second function value, taking the result as the target value, until the loss value between the target value and the calibration value meets the set condition, at which point the training of the tag recommendation model is complete. The first function seeks the maximum of the semantic similarity, and the second function seeks the maximum of the co-occurrence probability, so the objective function seeks to maximize both simultaneously; when the objective function is the sum of the first function and the second function, it maximizes the sum of the semantic similarity and the co-occurrence probability.

As a specific example, referring to fig. 6, the initial tag recommendation model may include a first tag recommendation model (left side in the figure) as the main model, which may be a word vector model, specifically a skip-gram model or a GloVe model, and a second tag recommendation model (right side in the figure), which serves as an auxiliary model and shares a word embedding layer with the main model for joint optimization.

When the first tag recommendation model is a skip-gram model, its objective function, i.e., the first function, can be expressed by the following formula (3):

$$J_1 = \frac{1}{T}\sum_{t=1}^{T}\sum_{-c \le j \le c,\, j \ne 0} \log p(w_{t+j} \mid w_t) \qquad (3)$$

where $J_1$ is the objective function corresponding to the skip-gram model, specifically a maximum semantic-similarity likelihood function, that is, it seeks the maximum of the semantic similarity; $T$ is the total number of sample words input to the skip-gram model; $p(w_{t+j} \mid w_t)$ is the probability of the sample word $w_{t+j}$ occurring given the center sample word $w_t$; and $c$ is the window size of the skip-gram model.
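Formula (3), the skip-gram objective, can be sketched numerically as follows, assuming the standard form (average over center words of summed log conditional probabilities within window c). `skipgram_objective` is a hypothetical helper, and `cond_prob` stands in for the model's conditional probability p(w_{t+j} | w_t):

```python
import math

def skipgram_objective(sentences, cond_prob, c=2):
    """Average log-likelihood of formula (3): for every center word w_t,
    sum log p(w_{t+j} | w_t) over context offsets j within the window c."""
    total, n_words = 0.0, 0
    for words in sentences:
        for t, center in enumerate(words):
            n_words += 1
            for j in range(-c, c + 1):
                if j == 0 or not 0 <= t + j < len(words):
                    continue  # skip the center word itself and out-of-range offsets
                total += math.log(cond_prob(words[t + j], center))
    return total / n_words

# With a stub conditional probability, a 3-word sentence and window c=1
# contribute 4 (context, center) pairs averaged over 3 center words.
avg = skipgram_objective([["pocket", "cake", "breakfast"]], lambda ctx, ctr: 0.5, c=1)
```

In a real word vector model, `cond_prob` would come from a softmax over the shared word embedding layer; training maximizes this objective rather than merely evaluating it.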

The second function, i.e., the objective function corresponding to the second tag recommendation model, can be expressed by the following formula (4):

where $J_2$ is the objective function corresponding to the second tag recommendation model, specifically a maximum co-occurrence-probability likelihood function, that is, it seeks the maximum of the co-occurrence probability. Specifically, a hinge loss function or a mean square error loss function, among others, may be used to obtain the difference between different co-occurrence probabilities, and the maximum of the co-occurrence probability is determined according to that difference; for example, when $PMI(w_i, w_j) > PMI(w_i, w_k)$, the maximum of the co-occurrence probability is $PMI(w_i, w_j)$. Here $PMI(w_i, w_j)$ denotes the pointwise mutual information between sample tag $w_i$ and sample tag $w_j$, i.e., their co-occurrence probability, and $PMI(w_i, w_k)$ denotes the pointwise mutual information between sample tag $w_i$ and sample tag $w_k$, i.e., their co-occurrence probability.
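Since the passage names a hinge loss over differences in pointwise mutual information without fixing its exact form, the sketch below shows one plausible instance. `pmi_hinge_loss`, the `margin`, and the score arguments are illustrative assumptions, not the patent's definition:

```python
def pmi_hinge_loss(score_ij, score_ik, pmi_ij, pmi_ik, margin=1.0):
    """One possible hinge loss over a PMI ordering: if PMI(w_i, w_j) exceeds
    PMI(w_i, w_k), penalize the model unless its score for the pair (i, j)
    beats its score for (i, k) by at least `margin` (and symmetrically
    when the PMI ordering is reversed)."""
    if pmi_ij > pmi_ik:
        return max(0.0, margin - (score_ij - score_ik))
    return max(0.0, margin - (score_ik - score_ij))

# A model that already ranks the more co-occurring pair higher incurs no loss.
print(pmi_hinge_loss(2.0, 0.5, pmi_ij=3.0, pmi_ik=1.0))
# → 0.0
```

A mean square error variant would instead penalize the squared gap between the model's score difference and the PMI difference; either choice supplies the auxiliary supervision signal described above.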

The objective function corresponding to the initial tag recommendation model can be expressed by the following formula (5):

$$J = J_1 + J_2 \qquad (5)$$

where $J$ is the objective function of the initial tag recommendation model, which specifically maximizes the sum of the semantic similarity and the co-occurrence probability.

It should be noted that, in this example, the first function corresponding to the first tag recommendation model includes the analysis of semantic similarity, and the second function corresponding to the second tag recommendation model includes the analysis of co-occurrence probability. Therefore, when the initial tag recommendation model is trained, the sample words and sample tags may be input directly to the initial tag recommendation model as training parameters to obtain the pre-trained tag recommendation model. When the server 104 then selects a target tag from the at least two candidate tags based on the pre-trained tag recommendation model, the reference tag and the at least two candidate tags can be input directly into the model and the target tag obtained from what the model has learned, eliminating the need to separately determine the semantic similarity and the co-occurrence probability between the reference tag and the candidate tags.

With the target value learned by the tag recommendation model, words such as "Chinese hamburger" and "gourmet" not only have higher similarity in terms of the cosine distance between their word vectors, but words that are more likely to appear together in the same multimedia data also end up with closer word vectors. For example, "Beijing" and "Shanghai", as described in the background art, are unlikely to co-occur in the video feed scene, so the word-vector distance between these two words learned by the above tag recommendation model will be significantly larger, i.e., their similarity will be lower.

It should be understood that although the various steps in the flowcharts of figs. 2 and 4-5 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps need not be performed strictly in the order shown and may be performed in other orders. Moreover, at least some of the steps in figs. 2 and 4-5 may include at least two sub-steps or at least two stages, which are not necessarily performed at the same time but may be performed at different times, and which need not be performed sequentially but may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.

In summary, in the above tag recommendation method for multimedia data, a reference tag and at least two candidate tags are obtained, semantic similarity between the reference tag and the candidate tags is determined, a co-occurrence probability between the reference tag and the candidate tags is determined, and a target tag is selected from the at least two candidate tags according to the semantic similarity and the co-occurrence probability. Because the semantic similarity represents the similarity of the labels in the semantics, and the co-occurrence probability represents the probability of the labels appearing in the same multimedia data, the target labels obtained based on the semantic similarity and the co-occurrence probability are more consistent with the distribution hypothesis of the multimedia data, and the obtained target labels are more accurate.

In one embodiment, as shown in fig. 7, a training method for a tag recommendation model is provided, which is described by way of example as applied to the server in fig. 1, and includes the following steps:

step S702, at least two sample words are obtained, and at least two sample tags of the multimedia data are obtained.

The server 104 may obtain at least two sample words from the corpus of news and train the label recommendation model by using the sample words as part of training data of the label recommendation model, and meanwhile, the server 104 also obtains a large number of labels of all existing multimedia data as at least two sample labels and trains the label recommendation model by using the sample labels as part of training data of the label recommendation model.

Step S704, analyzing the semantic similarity between the sample words as the sample semantic similarity, and analyzing the semantic similarity between the sample labels as the sample semantic similarity.

In the present application, two different sample words are each represented as an N-dimensional word vector, the cosine distance between the word vectors of the different sample words is then calculated, and the semantic similarity between the two sample words is determined according to the cosine distance. Of course, the semantic similarity between two different sample words may also be obtained in other manners, including with existing techniques, which are not limited herein. The semantic similarity between two different sample labels can be obtained in the same way as between two different sample words.
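The cosine computation described above can be sketched as follows; `cosine_similarity` is a hypothetical helper name, and the two-dimensional vectors are illustrative stand-ins for N-dimensional word vectors:

```python
import math

def cosine_similarity(u, v):
    """Semantic similarity between two word vectors:
    cos(theta) = (u . v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Vectors pointing the same way are maximally similar; orthogonal ones are not.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))
# → 1.0
```

The cosine distance is simply 1 minus this similarity, so a smaller distance corresponds to a higher semantic similarity.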

After the server 104 obtains the at least two sample words and the at least two sample labels, it can obtain the semantic similarity between the sample words in the manner described above and take it as a sample semantic similarity, and obtain the semantic similarity between the sample labels in the same manner and take it as a sample semantic similarity. That is, the sample semantic similarity covers two aspects: the semantic similarity between sample words, and the semantic similarity between sample labels. Because both the news corpus and the tag corpus are used to train the label recommendation model, the model can generalize to arbitrary words without limitation, which improves its recognition accuracy.

And step S706, analyzing the co-occurrence probability among the sample labels as the sample co-occurrence probability.

In one embodiment, analyzing the co-occurrence probability between the sample tags and taking it as the sample co-occurrence probability includes: taking the pointwise mutual information between the sample tags as the sample co-occurrence probability, where the pointwise mutual information describes the correlation between the sample tags.

After obtaining the at least two sample tags, the server 104 may first obtain the frequency with which each sample tag occurs in all multimedia data already existing in the database, and the frequency with which every two different sample tags occur in the same multimedia data. It may then calculate the co-occurrence probability between every two of the at least two sample tags according to these frequencies and the number of sample tags, using the above formula (2).
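Formula (2) itself is not reproduced in this excerpt, so the sketch below uses the standard pointwise-mutual-information definition PMI(x, y) = log(p(x, y) / (p(x) p(y))) as one plausible reading; `pmi` and the counts are illustrative assumptions:

```python
import math

def pmi(n_xy, n_x, n_y, n_total):
    """Pointwise mutual information between two tags, estimating probabilities
    from occurrence counts over n_total multimedia items:
    PMI(x, y) = log( p(x, y) / (p(x) * p(y)) ).
    Positive PMI means the tags co-occur more often than chance."""
    p_xy = n_xy / n_total
    p_x = n_x / n_total
    p_y = n_y / n_total
    return math.log(p_xy / (p_x * p_y))

# Two tags that share 30 of 1000 items, occurring 50 and 60 times overall.
print(round(pmi(n_xy=30, n_x=50, n_y=60, n_total=1000), 3))
# → 2.303
```

Pairs of tags that never co-occur would need smoothing (p_xy of zero makes the logarithm undefined), which is one reason a loss over PMI differences, rather than raw PMI values, can be convenient in training.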

Step S708, training a label recommendation model according to the sample semantic similarity and the sample co-occurrence probability.

In one embodiment, training a label recommendation model according to sample semantic similarity and sample co-occurrence probability comprises: and training an initial label recommendation model according to the sample semantic similarity and the sample co-occurrence probability until a loss value between a target value of the target function output by the initial label recommendation model and a calibration value meets a set condition, and finishing training of the label recommendation model.

After obtaining the sample semantic similarity and the sample co-occurrence probability, the server 104 inputs them into the initial label recommendation model to train it, until the loss value between the target value of the objective function output by the initial label recommendation model and the calibration value meets the set condition, at which point the training of the label recommendation model is complete.

In one embodiment, the initial tag recommendation model includes a first tag recommendation model and a second tag recommendation model, and the objective function includes a first function corresponding to the first tag recommendation model and a second function corresponding to the second tag recommendation model, the first tag recommendation model and the second tag recommendation model sharing a word embedding layer.

That is to say, the initial tag recommendation model may include two tag recommendation models: a first tag recommendation model and a second tag recommendation model. The first tag recommendation model serves as the main model and may be a word vector model; the second tag recommendation model serves as an auxiliary model and shares a word embedding layer with the main model for joint optimization. In other words, the function value output by the auxiliary model is used as a joint supervision signal for the main model during joint training, so that the trained tag recommendation model can learn the co-occurrence probability distribution of different tags among the existing tags of the multimedia data. The first tag recommendation model corresponds to a first function; when the first tag recommendation model is a word vector model, the first function is the objective function of the word vector model. The second tag recommendation model corresponds to a second function; the second function may be a loss function over differences between co-occurrence probabilities, that is, differences in pointwise mutual information, and may specifically be a hinge loss function or a mean square error loss function. The objective function includes the first function and the second function; for example, the objective function may be the sum of the first function and the second function, the sum of their absolute values, or their product.

In one embodiment, training an initial label recommendation model according to the sample semantic similarity and the sample co-occurrence probability until a loss value between a target value and a calibration value of an objective function output by the initial label recommendation model meets a set condition, and the training of the label recommendation model is completed, including: training a first label recommendation model according to the sample semantic similarity, and acquiring a first function value of a first function output by the first label recommendation model; training a second label recommendation model according to the sample co-occurrence probability, and acquiring a second function value of a second function output by the second label recommendation model; and taking the sum of the first function value and the second function value as a target value until the loss value between the target value and the calibration value meets the set condition, and finishing the training of the label recommendation model.

After obtaining the sample semantic similarity and the sample co-occurrence probability, the server 104 trains the first tag recommendation model with the sample semantic similarity and obtains the function value of the corresponding first function, recorded as the first function value; trains the second tag recommendation model with the sample co-occurrence probability and obtains the function value of the corresponding second function, recorded as the second function value; and sums the first function value and the second function value, taking the result as the target value, until the loss value between the target value and the calibration value meets the set condition, at which point the training of the tag recommendation model is complete. The first function seeks the maximum of the semantic similarity, and the second function seeks the maximum of the co-occurrence probability, so the objective function seeks to maximize both simultaneously; when the objective function is the sum of the first function and the second function, it maximizes the sum of the semantic similarity and the co-occurrence probability.

As a specific example, referring to fig. 6, the initial tag recommendation model may include a first tag recommendation model (left side in the figure) as the main model, which may be a word vector model, specifically a skip-gram model or a GloVe model, and a second tag recommendation model (right side in the figure), which serves as an auxiliary model and shares a word embedding layer with the main model for joint optimization. When the first tag recommendation model is a skip-gram model, its objective function, i.e., the first function, can be expressed by the above formula (3). The second function, i.e., the objective function corresponding to the second tag recommendation model, can be expressed by the above formula (4). The objective function corresponding to the initial tag recommendation model can be expressed by the above formula (5). It should be noted that, in this example, the first function corresponding to the first tag recommendation model includes the analysis of semantic similarity, and the second function corresponding to the second tag recommendation model includes the analysis of co-occurrence probability, so when the initial tag recommendation model is trained, the sample words and sample tags may be input directly to the initial tag recommendation model as training parameters to obtain the tag recommendation model.

It should be noted that, for details that are not disclosed in the training method for the tag recommendation model, reference may be made to the contents of the aforementioned tag recommendation method for multimedia data regarding model training.

In one embodiment, as shown in fig. 8, a tag recommendation apparatus 100 for multimedia data is provided, including: a first obtaining module 101, a first determining module 102, a second determining module 103, and a recommending module 104.

The first obtaining module 101 is configured to obtain a reference tag and at least two candidate tags; the first determining module 102 is configured to determine semantic similarity between the reference tag and the candidate tag; the second determining module 103 is configured to determine a co-occurrence probability between the reference tag and the candidate tag, where the co-occurrence probability is used to describe a probability that the reference tag and the candidate tag belong to tags of the same multimedia data; the recommendation module 104 is configured to select a target tag from the at least two candidate tags according to the semantic similarity and the co-occurrence probability.

In one embodiment, the recommendation module 104 is specifically configured to input the semantic similarity and the co-occurrence probability into a pre-trained tag recommendation model, and obtain a function value of an objective function output by the pre-trained tag recommendation model, where the pre-trained tag recommendation model has learned a correspondence between the semantic similarity and the co-occurrence probability and a function value of the objective function corresponding to the semantic similarity and the co-occurrence probability; and selecting a target label from at least two candidate labels according to the function value.

In one embodiment, the recommending module 104 is specifically configured to sort the function values according to a magnitude order relationship; and taking the candidate tags with the preset number ranked at the top as target tags.

In one embodiment, referring to fig. 9, the tag recommendation apparatus for multimedia data further includes: a second acquisition module 105, a first analysis module 106, a second analysis module 107, and a training module 108. The second acquisition module 105 is configured to acquire at least two sample words from the news corpus and acquire at least two sample labels of the multimedia data; the first analysis module 106 is configured to analyze the semantic similarity between the sample words and between the sample labels and take them as sample semantic similarities; the second analysis module 107 is configured to analyze the co-occurrence probability among the sample labels and take it as the sample co-occurrence probability; and the training module 108 is configured to train the initial label recommendation model according to the sample semantic similarity and the sample co-occurrence probability, until the loss value between the target value of the objective function output by the initial label recommendation model and the calibration value meets the set condition, at which point the training of the label recommendation model is complete.

In one embodiment, the initial tag recommendation model includes a first tag recommendation model and a second tag recommendation model, and the objective function includes a first function corresponding to the first tag recommendation model and a second function corresponding to the second tag recommendation model, the first tag recommendation model and the second tag recommendation model sharing a word embedding layer.

In one embodiment, the training module 108 is specifically configured to train the first tag recommendation model on the sample semantic similarity and obtain the first function value of the first function output by the first tag recommendation model; to train the second tag recommendation model on the sample co-occurrence probability and obtain the second function value of the second function output by the second tag recommendation model; and to take the sum of the first function value and the second function value as the target value, until the loss value between the target value and the calibration value meets the set condition, at which point training of the tag recommendation model is complete.
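A toy sketch of this joint training loop, assuming scalar stand-ins for the two model heads and a squared-error loss against the calibration value (the real models, learning rate, and stopping tolerance are not specified in the source):

```python
def train_joint(samples, lr=0.1, tol=1e-3, max_epochs=1000):
    """samples: (semantic_similarity, co_occurrence, calibration_value) triples.
    w1, w2 are toy scalar parameters standing in for the two model heads."""
    w1, w2 = 0.0, 0.0
    for _ in range(max_epochs):
        loss = 0.0
        for sim, cooc, calibration in samples:
            f1 = w1 * sim          # first function value (semantic head)
            f2 = w2 * cooc         # second function value (co-occurrence head)
            target = f1 + f2       # their sum is taken as the target value
            err = target - calibration
            loss += err * err
            w1 -= lr * 2 * err * sim
            w2 -= lr * 2 * err * cooc
        if loss / len(samples) < tol:  # loss meets the set condition
            break
    return w1, w2

# Hypothetical training triples where calibration = 0.5*sim + 0.5*cooc:
samples = [(1.0, 0.0, 0.5), (0.0, 1.0, 0.5), (1.0, 1.0, 1.0)]
w1, w2 = train_joint(samples)
```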

In one embodiment, referring to fig. 10, the tag recommendation apparatus for multimedia data further includes: a third acquisition module 109, a conversion module 110, a word segmentation module 111, and a construction module 112. The third acquisition module 109 is configured to acquire a large volume of news data; the conversion module 110 is configured to obtain the corresponding news text from the news data; the word segmentation module 111 is configured to perform word segmentation on the news text to obtain at least two segmented words; and the construction module 112 is configured to construct the news corpus from the at least two segmented words.
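The corpus-construction pipeline (text extraction, segmentation, collection) can be sketched as below. The record fields and the whitespace-based tokenizer are illustrative stand-ins; a real deployment on Chinese news text would use a proper segmenter such as jieba:

```python
import re

def build_news_corpus(news_items):
    """Extract text from raw news records, segment it into words, and
    collect the results into a corpus (one token list per article)."""
    corpus = []
    for item in news_items:
        text = (item.get("title", "") + " " + item.get("body", "")).strip()
        # Stand-in for a real word segmenter (e.g. jieba for Chinese text):
        tokens = re.findall(r"\w+", text.lower())
        if len(tokens) >= 2:  # keep articles yielding at least two words
            corpus.append(tokens)
    return corpus

# Example usage with a made-up record:
news = [{"title": "Team wins final", "body": "The football team won."}]
print(build_news_corpus(news))
```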

In one embodiment, the second determining module 103 is specifically configured to obtain the pointwise mutual information (PMI) between the reference tag and the candidate tag and to use it as the co-occurrence probability.
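Pointwise mutual information between two tags can be estimated from how often they are attached to the same multimedia item: PMI(a, b) = log(p(a, b) / (p(a) p(b))). A minimal sketch, with hypothetical tag sets standing in for real item annotations:

```python
import math
from collections import Counter
from itertools import combinations

def pmi_table(tag_sets):
    """Estimate PMI for every tag pair observed in the same item's tag set."""
    n = len(tag_sets)
    single = Counter(t for tags in tag_sets for t in set(tags))
    pair = Counter(frozenset(p) for tags in tag_sets
                   for p in combinations(sorted(set(tags)), 2))
    table = {}
    for p, count in pair.items():
        a, b = tuple(p)
        p_ab = count / n
        table[p] = math.log(p_ab / ((single[a] / n) * (single[b] / n)))
    return table

# Example: "football" and "sports" co-occur on 2 of 3 items.
items = [{"football", "sports"}, {"football", "sports"}, {"music"}]
table = pmi_table(items)
```

A positive PMI means the pair co-occurs more often than independence would predict, which is why it serves as a co-occurrence score.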

In one embodiment, the first tag recommendation model is a word vector model.

In one embodiment, the second function is a hinge loss function or a mean square error loss function.
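For reference, the two candidate forms of the second function can be written out as below; the ±1-label convention for the hinge loss is a common assumption, not stated in the source:

```python
def hinge_loss(target, pred):
    """Hinge loss for a target label in {-1, +1}: max(0, 1 - target * pred)."""
    return max(0.0, 1.0 - target * pred)

def mse_loss(target, pred):
    """Mean square error loss for a single prediction."""
    return (target - pred) ** 2

# Hinge loss penalizes predictions inside the margin; MSE penalizes any gap.
print(hinge_loss(1.0, 0.3))  # margin violated by 0.7
print(mse_loss(1.0, 0.3))    # squared gap of 0.7
```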

For specific limitations of the tag recommendation apparatus for multimedia data, reference may be made to the limitations of the tag recommendation method for multimedia data above, which are not repeated here. The modules of the tag recommendation apparatus for multimedia data may be implemented in whole or in part by software, hardware, or a combination thereof. Each module may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.

In some embodiments of the present application, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the methods provided in the various alternative implementations of the embodiments of the present application, such as the tag recommendation method for multimedia data shown in figs. 2 and 4-5, or the training method for the tag recommendation model shown in fig. 7.

Those skilled in the art will understand that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any such combination should be considered within the scope of this specification as long as it contains no contradiction.

The above embodiments express only several implementations of the present application, and while their description is relatively specific and detailed, they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within its scope of protection. Therefore, the protection scope of this patent shall be subject to the appended claims.
